Semisupervised Feature Selection via Generalized Uncorrelated Constraint and Manifold Embedding

Abstract: Ridge regression is widely used in both supervised and semisupervised learning. However, when ridge regression is applied directly to semisupervised learning, it yields neither a closed-form solution nor a preserved manifold structure. To address this issue, we propose a novel semisupervised feature selection method under a generalized uncorrelated constraint, namely SFS. The generalized uncorrelated constraint equips the framework with an elegant closed-form solution and is introduced into the ridge regression together with an embedded manifold structure. Compared with a deep network trained by gradient descent, the manifold structure and the closed-form solution better preserve the topological information of the data. Furthermore, the full-rank constraint on the projection matrix prevents excessive row sparsity. The scale factor of the constraint, which can be obtained adaptively, gives the subspace constraint additional flexibility. Experimental results on benchmark data sets validate the superiority of our method over state-of-the-art semisupervised feature selection methods.
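The abstract's core ingredient, ridge regression with an embedded manifold (graph-Laplacian) regularizer, admits the closed-form solution it refers to. The sketch below is illustrative only: the function name, toy data, and regularization weights are ours, and the paper's SFS additionally imposes the generalized uncorrelated constraint, which is omitted here.

```python
import numpy as np

def manifold_ridge(X, Y, L, alpha=1.0, beta=0.1):
    """Closed-form minimizer of the manifold-regularized ridge objective
        min_W ||XW - Y||^2 + alpha * ||W||^2 + beta * tr(W^T X^T L X W),
    where L is a graph Laplacian encoding the manifold structure.
    Illustrative sketch; not the authors' full SFS formulation."""
    d = X.shape[1]
    A = X.T @ X + alpha * np.eye(d) + beta * X.T @ L @ X
    return np.linalg.solve(A, X.T @ Y)

# Toy example: 6 samples, 4 features, 2 classes.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))
Y = np.eye(2)[[0, 0, 0, 1, 1, 1]]                      # one-hot labels
S = np.exp(-np.square(X[:, None] - X[None]).sum(-1))   # dense affinity matrix
Lap = np.diag(S.sum(1)) - S                            # unnormalized graph Laplacian
W = manifold_ridge(X, Y, Lap)
print(W.shape)  # projection matrix, one column per class
```

Because the objective is quadratic in W, a single linear solve replaces iterative gradient descent, which is what makes the closed-form solution attractive here.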
EXISTING SYSTEM:
• Although structure learning and graph sparsity reduce the influence of noise on the graph structure, NDFS works only when a linear relationship exists between the features and the clustering pseudo-labels; moreover, the clustering-label technique employed by NDFS cannot fully capture the local structure information underlying the original data.
• We consider the computational complexity only theoretically. The time consumption may differ in real applications because the influence of iteration is not accounted for in the above analysis.
• We analyze the computational complexity and running time of the proposed method and compare them with those of several competing algorithms.
DISADVANTAGE:
• Semisupervised feature selection addresses the problem of exploiting a small number of labeled data and a large number of unlabeled data for feature selection.
• Most semisupervised feature selection methods score features with a ranking criterion, such as the Laplacian score or Pearson's correlation coefficient.
• Naturally, this problem can be converted into selecting the nearest k points around each data point to compute the transition probabilities.
• Xue et al. presented a self-adaptive algorithm based on an evolutionary computation (EC) method to solve the local-optimum stagnation problem caused by a large number of irrelevant features.
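The Laplacian-score criterion mentioned above ranks each feature by how well it preserves the neighborhood graph; smaller scores are better. A sketch under the standard definition (the function and variable names are ours, and the dense affinity matrix stands in for the k-nearest-neighbor graph the text describes):

```python
import numpy as np

def laplacian_score(X, S):
    """Laplacian score of each feature given an n x n affinity matrix S.
    Features with smaller scores better preserve the local manifold
    structure encoded by the graph. Illustrative sketch."""
    D = np.diag(S.sum(1))          # degree matrix
    L = D - S                      # unnormalized graph Laplacian
    ones = np.ones(X.shape[0])
    scores = []
    for r in range(X.shape[1]):
        f = X[:, r]
        # Center the feature with respect to the degree weighting.
        f_t = f - (f @ D @ ones) / (ones @ D @ ones) * ones
        denom = f_t @ D @ f_t
        scores.append((f_t @ L @ f_t) / denom if denom > 0 else np.inf)
    return np.array(scores)

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 3))
S = np.exp(-np.square(X[:, None] - X[None]).sum(-1))
print(np.argsort(laplacian_score(X, S)))  # feature ranking, best first
```

Such filter criteria score each feature independently, which is exactly the limitation that motivates joint frameworks like the one proposed here.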
PROPOSED SYSTEM:
• We combine structure learning and feature selection into a new feature selection framework. Since the MDS method is employed in the framework to preserve the original space structure, which is reconstructed in a low-dimensional space, the framework preserves both the global and the local structure underlying the original gene data.
• Moreover, an effective algorithm is developed to solve the optimization problem posed by the proposed scheme.
• Comparative experiments with several classical schemes on real tumor data sets demonstrate the effectiveness of the proposed method.
• The alternating direction method of multipliers (ADMM) is used to handle the nonconvex optimization associated with the proposed framework.
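The structure-preserving role that MDS plays in the points above can be sketched with classical (metric) MDS: embed the samples in a low-dimensional space whose pairwise distances reproduce those of the original space. The implementation and names below are illustrative, not the authors' code.

```python
import numpy as np

def classical_mds(X, k=2):
    """Classical MDS: a k-dimensional embedding whose pairwise distances
    approximate those of the rows of X (exactly, if k >= rank of the
    centered data). Illustrative sketch."""
    n = X.shape[0]
    D2 = np.square(X[:, None] - X[None]).sum(-1)   # squared pairwise distances
    J = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    B = -0.5 * J @ D2 @ J                          # recovered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]                  # top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

rng = np.random.default_rng(2)
X = rng.standard_normal((10, 5))
Z = classical_mds(X, k=2)
print(Z.shape)  # 10 samples embedded in 2 dimensions
```

Keeping the double-centered Gram matrix close to the original is what lets the low-dimensional reconstruction retain both global and local geometry.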
ADVANTAGE:
• The performance of our approach is compared with state-of-the-art methods on eight real-world data sets, and the experimental results show that the proposed MMFS is effective for unsupervised feature selection.
• As an effective means of removing irrelevant features from high-dimensional data without degrading performance, feature selection has attracted much attention in recent years.
• In other words, a wrapper method wraps the classifier and the feature selection into a black box and evaluates a selected feature subset by the classifier's accuracy on that subset.
• Comprehensive experiments on eight benchmark data sets show the good performance of the proposed approach compared with state-of-the-art unsupervised feature selection methods.
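The wrapper idea described above can be sketched as a greedy forward search around a simple classifier. The nearest-centroid classifier and every name here are illustrative stand-ins for the wrapped "black box"; real wrapper methods may use any classifier and search strategy.

```python
import numpy as np

def nearest_centroid_acc(X, y, feats):
    """Leave-one-out accuracy of a nearest-centroid classifier on a
    feature subset -- the 'black box' score a wrapper method uses."""
    Xf = X[:, feats]
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        cents = [Xf[mask & (y == c)].mean(0) for c in np.unique(y)]
        pred = np.argmin([np.linalg.norm(Xf[i] - c) for c in cents])
        correct += pred == y[i]
    return correct / len(y)

def forward_wrapper(X, y, k):
    """Greedy forward selection: repeatedly add the feature that most
    improves the wrapped classifier's accuracy. Illustrative sketch."""
    selected = []
    for _ in range(k):
        rest = [f for f in range(X.shape[1]) if f not in selected]
        best = max(rest, key=lambda f: nearest_centroid_acc(X, y, selected + [f]))
        selected.append(best)
    return selected

# Toy data in which only feature 0 carries the class signal.
rng = np.random.default_rng(3)
y = np.array([0] * 10 + [1] * 10)
X = rng.standard_normal((20, 5))
X[:, 0] += 3 * y
print(forward_wrapper(X, y, 2))
```

The repeated retraining inside the search is why wrapper methods are accurate but expensive compared with filter criteria such as the Laplacian score.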
