FAIRNESS IN SEMI-SUPERVISED LEARNING: UNLABELED DATA HELP TO REDUCE DISCRIMINATION

Abstract : Machine learning is widely deployed in society, unleashing its power in a wide range of applications owing to the advent of big data. One emerging problem faced by machine learning is discrimination learned from data, which is reflected in the eventual decisions made by the algorithms. Recent studies have shown that increasing the size of the training (labeled) data can improve fairness criteria while maintaining model performance. Given the popularity of graph-based approaches in semi-supervised learning, we study this problem with both the conventional label propagation method and graph neural networks, into which various fairness criteria can be flexibly integrated.
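
To make the graph-based SSL setting concrete, here is a minimal Python sketch of classic label propagation over a similarity graph. The RBF affinity, symmetric normalization, and iterative update follow the standard Zhou et al. formulation; they are illustrative assumptions, not necessarily this paper's exact construction.

import numpy as np

def label_propagation(X, y, labeled_mask, sigma=1.0, alpha=0.99, n_iter=50):
    """Propagate binary labels over an RBF similarity graph.

    X            : (n, d) features for labeled + unlabeled points
    y            : (n,) labels in {0, 1}; entries for unlabeled points are ignored
    labeled_mask : (n,) boolean, True where the label is known
    """
    n = X.shape[0]
    # Dense RBF affinity matrix with a zeroed diagonal.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetrically normalized operator S = D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(1) + 1e-12)
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # One-hot seed matrix; rows for unlabeled nodes start at zero.
    F = np.zeros((n, 2))
    F[labeled_mask, y[labeled_mask].astype(int)] = 1.0
    Y0 = F.copy()
    # Iterate F <- alpha * S @ F + (1 - alpha) * Y0 toward the fixed point.
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * Y0
    return F.argmax(1)  # hard pseudo-labels for all n points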
 EXISTING SYSTEM :
• Our developed algorithms are proved to be non-trivial extensions of existing supervised models with fairness constraints.
• Existing fair learning methods mainly focus on supervised and unsupervised learning and cannot be directly applied to SSL. As far as we know, only one prior work has explored fairness in SSL: Chzhen et al. studied the Bayes classifier under the fairness metric of equal opportunity, where labeled data is used to learn the output conditional probability and unlabeled data is used to calibrate the threshold in the post-processing phase.
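
A hedged sketch of such a post-processing step follows: a conditional-probability model is fit on the labeled data, and the unlabeled data is used to calibrate group-specific decision thresholds toward equal opportunity. Using the model's own scores as soft labels on the unlabeled points is our simplifying assumption, not necessarily Chzhen et al.'s exact estimator.

import numpy as np
from sklearn.linear_model import LogisticRegression

def soft_tpr(p, t):
    # E[eta * 1{eta > t}] / E[eta]: a label-free surrogate for the true-positive rate.
    return (p * (p > t)).sum() / (p.sum() + 1e-12)

def calibrate_equal_opportunity(X_lab, y_lab, X_unlab, s_unlab, base_t=0.5):
    """Return a score model and per-group thresholds with matched soft TPRs.

    s_unlab : (m,) binary sensitive attribute for the unlabeled points.
    """
    model = LogisticRegression().fit(X_lab, y_lab)   # learn P(Y=1 | X) on labeled data
    eta = model.predict_proba(X_unlab)[:, 1]         # scores on unlabeled data
    p0, p1 = eta[s_unlab == 0], eta[s_unlab == 1]
    target = soft_tpr(p0, base_t)                    # group-0 TPR at the base threshold
    # Scan thresholds for group 1 so its soft TPR matches group 0's.
    grid = np.linspace(0.01, 0.99, 197)
    t1 = grid[np.argmin([abs(soft_tpr(p1, t) - target) for t in grid])]
    return model, {0: base_t, 1: t1}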
 DISADVANTAGE :
• Our first approach, fair semi-supervised margin classifiers (FSMC), is formulated as an optimization problem whose objective function includes a loss for both the classifier and label propagation, plus fairness constraints over labeled and unlabeled data.
• This step can be solved as a convex problem when disparate impact is the fairness metric, and by convex-concave programming when disparate mistreatment is used.
• We propose algorithms to solve the resulting optimization problems when disparate impact and disparate mistreatment are integrated as fairness metrics in the graph-based regularization.
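
As a concrete illustration of the convex case, the sketch below combines a hinge loss on labeled data, a down-weighted hinge loss on propagated pseudo-labels, and the common covariance relaxation of disparate impact (|cov(s, w^T x)| <= c, in the style of Zafar et al.), which keeps the whole problem convex. The weight gamma, the bound c, and the ridge term are illustrative assumptions rather than the paper's exact formulation.

import cvxpy as cp
import numpy as np

def fit_fair_ssl_classifier(X_lab, y_lab, X_unlab, y_pseudo, s_all,
                            gamma=0.5, c=0.01):
    """y_lab and y_pseudo take values in {-1, +1}; s_all covers all rows."""
    d = X_lab.shape[1]
    w = cp.Variable(d)
    # Hinge loss on labeled data plus a down-weighted hinge loss on pseudo-labels.
    loss_lab = cp.sum(cp.pos(1 - cp.multiply(y_lab, X_lab @ w))) / len(y_lab)
    loss_unlab = cp.sum(cp.pos(1 - cp.multiply(y_pseudo, X_unlab @ w))) / len(y_pseudo)
    # Covariance proxy for disparate impact over labeled + unlabeled data.
    X_all = np.vstack([X_lab, X_unlab])
    s_centered = s_all - s_all.mean()
    fairness = cp.abs(s_centered @ (X_all @ w) / len(s_all)) <= c
    problem = cp.Problem(
        cp.Minimize(loss_lab + gamma * loss_unlab + 0.1 * cp.sum_squares(w)),
        [fairness],
    )
    problem.solve()
    return w.value

The disparate-mistreatment variant replaces the covariance constraint with error-rate terms that are no longer convex in w, which is where the convex-concave programming mentioned above comes in.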
 PROPOSED SYSTEM :
• In recent years, many fairness metrics have been proposed to define what fairness means in machine learning.
• We conduct extensive experiments to validate the effectiveness of our proposed methods.
• The rest of this paper is organized as follows: the preliminaries are presented first, followed by the first proposed method, FSMC.
• Many fairness constraints have been proposed to enforce various fairness metrics, such as disparate impact and disparate mistreatment, and these constraints can be used in our framework.
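
For reference, here is a small sketch of the two metrics named above in their common difference-based form; ratio-based definitions of disparate impact (such as the 80% rule) are also used in the literature.

import numpy as np

def disparate_impact(y_pred, s):
    # Gap in positive-prediction rates between the two groups.
    return abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())

def disparate_mistreatment(y_true, y_pred, s):
    # Gaps in false-positive and false-negative rates between groups.
    gaps = {}
    for name, mask in (("fpr_gap", y_true == 0), ("fnr_gap", y_true == 1)):
        err0 = (y_pred[mask & (s == 0)] != y_true[mask & (s == 0)]).mean()
        err1 = (y_pred[mask & (s == 1)] != y_true[mask & (s == 1)]).mean()
        gaps[name] = abs(err0 - err1)
    return gaps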
 ADVANTAGE :
• Machine learning algorithms, as useful decision-making tools, are widely used in society.
• In real-world machine learning tasks, a large amount of training data is necessary, and it is often a combination of labeled and unlabeled data.
• Unlabeled data is abundant in the era of big data and, if it can be used as training data, may enable a better compromise between fairness and accuracy.
