Embedded Feature Selection Based on Relevance Vector Machines With an Approximated Marginal Likelihood and Its Industrial Application

ABSTRACT: Feature selection is of great importance for predicting process variables in industrial production. An embedded feature selection method, based on relevance vector machines with an approximated marginal likelihood function, is proposed in this study. By setting hierarchical prior distributions over the model weights and over the parameters of the automatic relevance determination (ARD) kernel function, a Gaussian approximation is designed to approximate the intractable exact marginal likelihood using the laws of total expectation and total covariance. Furthermore, the joint posterior distribution over the model weights and the kernel parameters is estimated by combining Gibbs sampling with a Laplace approximation, so that feature selection can be performed by examining the posterior over the kernel parameters. To verify the performance of the proposed method, a series of benchmark datasets and two practical industrial datasets are employed. The experimental results demonstrate that the proposed method not only produces higher prediction accuracy than competing methods but also performs better in feature selection, especially in the industrial cases.
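The ARD kernel mentioned in the abstract assigns each input feature its own relevance parameter, so a feature with near-zero relevance drops out of the model. A minimal sketch of such a kernel (the function name and shapes are illustrative, not the paper's implementation):

```python
import numpy as np

def ard_rbf_kernel(X1, X2, theta):
    # ARD RBF kernel: each input feature d has its own relevance
    # parameter theta[d]; a feature whose theta is (near) zero drops
    # out of the kernel and is effectively deselected.
    diff = X1[:, None, :] - X2[None, :, :]   # pairwise differences, (n1, n2, D)
    sq = np.sum(theta * diff ** 2, axis=-1)  # theta-weighted squared distances
    return np.exp(-0.5 * sq)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
theta = np.array([1.0, 0.5, 0.0])  # third feature marked irrelevant
K = ard_rbf_kernel(X, X, theta)
```

With `theta[2] = 0`, perturbing the third feature leaves the kernel matrix unchanged, which is the mechanism that makes ARD-based feature selection possible.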
EXISTING SYSTEM:
• Many different heuristics could be used to evaluate RVM training; in this paper, the regression root-mean-square error (RMSE) is chosen as the measure for this decision.
• To solve a reinforcement learning (RL) problem, an agent must develop a good estimate of the sum of future reinforcements, called the value of the current state and action.
• A common problem in RL with continuous actions and states is that there is an infinite number of state-action pairs.
• A variety of function approximation methods have been studied in RL. Cellular approximations such as CMAC tile coding and radial basis functions have been applied to various RL problems.
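The RMSE criterion used to evaluate RVM training above is straightforward to compute; a minimal sketch:

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root-mean-square error between targets and predictions.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

error = rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])  # sqrt(4/3)
```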
DISADVANTAGE:
• In our study, we address the problem of the degeneracy of the covariance function in RVM using the k-NN rule.
• To resolve this problem, the priors are decorrelated by adding a white-noise Gaussian process to the model right before normalization.
• The resulting model, called the relevance sample feature machine (RSFM), is able to simultaneously choose the relevant instances and the relevant features for regression or classification problems.
• Extensive experiments on artificial and real-world datasets show its effectiveness and flexibility in representing regression problems, with higher levels of sparsity and better performance than the classical RVM.
PROPOSED SYSTEM:
• The proposed method, called the probabilistic feature selection and classification vector machine (PFCVMLP), is able to simultaneously select relevant features and samples for classification tasks.
• Feature selection, as a dimensionality reduction technique, has been extensively studied in machine learning and data mining, and various feature selection methods have been proposed.
• Unlike traditional sparse Bayesian classifiers such as PCVM and RVM, the proposed algorithm simultaneously selects the relevant features and samples, which leads to a robust classifier for high-dimensional datasets.
• Both the emotional EEG and the gene expression experiments indicate that the proposed classifier and feature selection co-learning algorithm is capable of generating a sparse solution.
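Once posterior samples of the kernel relevance parameters are available (e.g. from Gibbs sampling), feature selection reduces to inspecting that posterior. A hedged sketch of one possible rule (the function, threshold, and sample layout are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def select_features(theta_samples, threshold=1e-2):
    # theta_samples: (n_samples, D) posterior draws of the ARD kernel
    # parameters. Features whose posterior mean relevance exceeds a
    # small threshold are retained; the rest are pruned.
    mean_theta = np.asarray(theta_samples).mean(axis=0)
    return np.flatnonzero(mean_theta > threshold)

samples = np.array([[0.9, 0.001, 0.4],
                    [1.1, 0.002, 0.6]])
kept = select_features(samples)  # features 0 and 2 retained
```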
ADVANTAGE:
• According to the Nemenyi test, the performance of two classifiers is significantly different if their average ranks differ by at least the critical difference (CD); two classifiers in the same group show no significant difference.
• An empirical evaluation of the effects of the choice of prior structure, and the link between Bayesian wavelet shrinkage and RVM regression, are presented.
• According to their results, they could outperform the RVM in terms of goodness of fit, achieved sparsity, and computational performance.
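The critical difference used in the Nemenyi test has a simple closed form; a minimal sketch (the parameter values shown, k = 5 classifiers over N = 20 datasets, are an illustrative assumption):

```python
import math

def nemenyi_cd(k, n_datasets, q_alpha=2.728):
    # Critical difference for the Nemenyi post-hoc test:
    #   CD = q_alpha * sqrt(k (k + 1) / (6 N))
    # where k is the number of classifiers and N the number of datasets.
    # q_alpha = 2.728 is the critical value for k = 5 at alpha = 0.05.
    return q_alpha * math.sqrt(k * (k + 1) / (6.0 * n_datasets))

cd = nemenyi_cd(k=5, n_datasets=20)  # 2.728 * sqrt(30 / 120) = 1.364
```

Two classifiers among the five would then be judged significantly different if their average ranks differ by more than 1.364.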