Abstract: Sell-side analysts’ recommendations are primarily targeted at institutional investors mandated to invest across many companies within client-mandated equity benchmarks, such as the FTSE/JSE All-Share index. Given the numerous sell-side recommendations for a single stock, making unbiased investment decisions is often not straightforward for portfolio managers. This study explores the use of historical sell-side recommendations to create an unbiased fusion of analyst forecasts such that bidirectional accuracy is optimised using random forest, extreme gradient boosting, deep neural networks, and logistic regression.
- During the learning process, decision trees are added at each stage; the existing trees in the ensemble are not replaced.
- The contribution of each weak learner to the ensemble is based on a gradient descent optimisation performed on that learner.
- The calculated contribution of each tree is then based on minimising the overall error of the composite learner.
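The staged, additive behaviour described above can be observed directly. The sketch below (an illustration, not the study's actual pipeline; the synthetic dataset and hyperparameters are assumptions) fits a gradient boosting classifier and inspects the ensemble after each added tree, showing that earlier trees are kept rather than replaced.

```python
# Minimal sketch of staged gradient boosting: one tree is added per stage,
# each fitted to the gradient of the loss of the current ensemble.
# Dataset and hyperparameters are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

gb = GradientBoostingClassifier(n_estimators=50, learning_rate=0.1,
                                max_depth=2, random_state=0)
gb.fit(X, y)

# staged_predict exposes the composite learner after each stage; training
# accuracy generally improves as trees accumulate, because existing trees
# remain in the ensemble while new ones correct their residual errors.
staged_acc = [(pred == y).mean() for pred in gb.staged_predict(X)]
print(len(gb.estimators_))            # 50 stages, one tree each
print(staged_acc[0], staged_acc[-1])  # accuracy after first vs last stage
```

The `learning_rate` parameter scales each tree's contribution, which is how the overall error of the composite learner is kept in check as stages are added.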
- Our research focused on a binary classification problem, that is, upward or downward classes.
- Let true positives (TP) refer to the number of positive classes correctly predicted by the classifier.
- Gradient boosting is an optimisation problem whose goal is to minimise the model’s loss function by adding weak learners using gradient descent.
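For the binary up/down setting, TP and the other confusion-matrix counts can be computed as below. The labels and predictions are made-up examples; treating "upward" as the positive class (coded 1) is an assumption for illustration.

```python
# Hypothetical predictions for a binary up/down classifier.
# Convention (assumed): 1 = upward (positive class), 0 = downward.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# TP = positive classes correctly predicted, as defined in the text;
# TN, FP, FN follow the same counting pattern.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

accuracy = (tp + tn) / len(y_true)
print(tp, tn, fp, fn, accuracy)  # -> 3 3 1 1 0.75
```

Optimising bidirectional accuracy means the classifier must do well on both the upward (TP) and downward (TN) counts, not just one side.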
- In recent years, many fairness metrics have been proposed to define what fairness means in machine learning.
- We conduct extensive experiments to validate the effectiveness of our proposed methods.
- The rest of this paper is organized as follows: the preliminaries are given first, followed by the first proposed method, FSMC.
- Many fairness constraints have been proposed to enforce various fairness metrics, such as disparate impact and disparate mistreatment, and these constraints can be used in our framework.
- Machine learning algorithms, as useful decision-making tools, are widely used in society.
- In real-world machine learning tasks, a large amount of training data is necessary, and it is often a combination of labeled and unlabeled data.
- Unlabeled data is abundant in the era of big data and, if it could be used as training data, we may be able to strike a better compromise between fairness and accuracy.
- This step can be solved as a convex problem when disparate impact is the fairness metric, and by convex-concave programming when disparate mistreatment is used.
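Disparate impact, one of the fairness metrics named above, is commonly measured as the ratio of positive-prediction rates between the unprivileged and privileged groups; values near 1.0 indicate parity, and the "80% rule" flags ratios below 0.8. The sketch below computes it on made-up data (the group coding and predictions are illustrative assumptions, not from the study).

```python
# Illustrative disparate impact computation on fabricated predictions.
# Convention (assumed): group 1 = unprivileged, group 0 = privileged.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

# Positive-prediction rate within each group.
rate_priv   = sum(p for p, g in zip(y_pred, group) if g == 0) / group.count(0)
rate_unpriv = sum(p for p, g in zip(y_pred, group) if g == 1) / group.count(1)

# Ratio form of disparate impact: 1.0 means both groups receive positive
# predictions at the same rate; below 0.8 fails the 80% rule of thumb.
disparate_impact = rate_unpriv / rate_priv
print(rate_priv, rate_unpriv, disparate_impact)
```

A fairness constraint of the kind discussed in the text would bound this ratio (or an analogous quantity for disparate mistreatment) during training rather than merely reporting it afterwards.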