Feature Selection Based on a Sparse Neural-Network Layer With Normalizing Constraints
ABSTRACT:
Feature selection (FS) is an important step in machine learning, since it has been shown to improve prediction accuracy while mitigating the curse of dimensionality in high-dimensional data. Neural networks have seen tremendous success on many nonlinear learning problems. Here, we propose a new neural-network-based FS approach that introduces two constraints whose satisfaction leads to a sparse FS layer. We performed extensive experiments on synthetic and real-world data to evaluate the performance of the proposed method, focusing on high-dimensional, low-sample-size data, since such data pose the main challenge for FS. The results confirm that the proposed FS method based on a sparse neural-network layer with normalizing constraints (SNeL-FS) selects the important features and yields superior performance compared to conventional FS methods.
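The abstract does not spell out the two constraints, so the following is only a minimal PyTorch sketch of the general idea, not the authors' SNeL-FS formulation: an element-wise gating layer whose weights are driven toward sparsity by an L1 penalty and projected onto the unit L2 sphere after each update. The class name FeatureSelectionLayer, the penalty weight l1_lambda, and the projection step are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class FeatureSelectionLayer(nn.Module):
    """Element-wise gating layer: each input feature is scaled by its own
    trainable weight; near-zero weights mark features to discard."""
    def __init__(self, n_features):
        super().__init__()
        self.w = nn.Parameter(torch.full((n_features,), 1.0 / n_features))

    def forward(self, x):
        return x * self.w

    def renormalize(self):
        # Assumed normalizing constraint: project the gate vector onto the
        # unit L2 sphere, so the L1 penalty below favors sparse gates.
        with torch.no_grad():
            self.w.div_(self.w.norm().clamp(min=1e-12))

# Toy usage: 100 samples, 20 features, only feature 0 carries signal.
torch.manual_seed(0)
X = torch.randn(100, 20)
y = 3.0 * X[:, :1] + 0.1 * torch.randn(100, 1)

fs = FeatureSelectionLayer(20)
net = nn.Sequential(fs, nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
l1_lambda = 1e-3  # assumed sparsity strength, not from the paper

for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X), y) + l1_lambda * fs.w.abs().sum()
    loss.backward()
    opt.step()
    fs.renormalize()

print(fs.w.abs().topk(3))  # the informative feature should rank first
```

With the gate vector held at unit L2 norm, minimizing its L1 norm favors concentrating mass on few coordinates, which is one common way to combine a normalization constraint with sparsity.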
EXISTING SYSTEM:
• Exact or approximate formulas for the leave-one-out error exist for a number of learning machines (see the sketch after this list).
• Although recent attempts have been made to make deep neural networks (DNNs) more interpretable, existing methods are susceptible to noise and lack robustness.
• Existing methods either fit a simple model in the local region around the input or locally perturb the input to see how the prediction changes.
• We use synthetic data to compare the performance of DeepPINK to existing methods in the literature.
• We did not compare with SVR with a nonlinear kernel because we could not find any feature-importance measure for nonlinear kernels in existing software packages.
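As a concrete instance of such a leave-one-out formula (our illustration, not taken from the paper): for ridge regression the leave-one-out residuals follow exactly from a single fit via the diagonal of the hat matrix, which the sketch below verifies against brute-force refitting in scikit-learn. The regularization strength alpha and the synthetic data are arbitrary.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
y = X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(50)
alpha = 1.0  # arbitrary ridge strength

# Brute force: refit the model n times, holding one sample out each time.
scores = cross_val_score(Ridge(alpha=alpha, fit_intercept=False), X, y,
                         cv=LeaveOneOut(), scoring="neg_mean_squared_error")
loo_mse_brute = -scores.mean()

# Exact shortcut for ridge: LOO residual_i = residual_i / (1 - h_ii),
# where h_ii are the diagonal entries ("leverages") of the hat matrix
# H = X (X^T X + alpha I)^(-1) X^T, so no refitting is needed.
model = Ridge(alpha=alpha, fit_intercept=False).fit(X, y)
H = X @ np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T)
residuals = y - model.predict(X)
loo_mse_exact = np.mean((residuals / (1.0 - np.diag(H))) ** 2)

print(loo_mse_brute, loo_mse_exact)  # the two estimates should agree
```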
DISADVANTAGE:
• The top-down influence is especially effective when dealing with high noise or difficult segmentation problems.
• Although the modules are simple, their combined behavior can be complex, and new algorithms can be plugged in to rewire that behavior, e.g., a fast pathway for low noise and an iterative mode for complex problems such as MNIST-2.
• This is a problem of segmentation, not denoising; such segmentation in fact requires semantic understanding of the object.
• Whether top-down connections and iterative processing are useful for object recognition has been hotly contested.
PROPOSED SYSTEM:
• The proposed method calculates the correlation coefficients of second-, fourth-, and sixth-order cumulants with the proposed formula and selects effective features according to the calculated values (a sketch follows this list).
• A deep-learning-based automatic modulation classification (AMC) method is used to measure and compare classification performance.
• Therefore, in this paper, we use only the features that most affect classification performance, as chosen by the proposed algorithm, to reduce computational complexity and to identify the received signal quickly while keeping the basic DNN structure.
• The proposed method uses correlation as its data-analysis tool for selecting effective features.
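The text does not give the exact correlation formula, so the sketch below is a plausible NumPy reading of the pipeline: compute standard univariate second-, fourth-, and sixth-order cumulants per signal, then rank these candidate features by absolute Pearson correlation with the class labels. The helper names cumulant_features and select_by_correlation, and the use of Pearson correlation as the score, are assumptions for illustration.

```python
import numpy as np

def cumulant_features(x):
    """Second-, fourth-, and sixth-order cumulants of a zero-mean
    1-D signal (standard univariate cumulant formulas)."""
    x = x - x.mean()
    m2, m4, m6 = (np.mean(x ** k) for k in (2, 4, 6))
    c2 = m2
    c4 = m4 - 3 * m2 ** 2
    c6 = m6 - 15 * m4 * m2 + 30 * m2 ** 3
    return np.array([c2, c4, c6])

def select_by_correlation(F, y, k=2):
    """Rank columns of the feature matrix F by absolute Pearson
    correlation with the labels y; return the top-k column indices."""
    scores = [abs(np.corrcoef(F[:, j], y)[0, 1]) for j in range(F.shape[1])]
    return np.argsort(scores)[::-1][:k]

# Toy usage: two signal classes with different tail behavior, so the
# higher-order cumulants should discriminate between them.
rng = np.random.default_rng(0)
signals = [rng.standard_normal(1024) if c == 0
           else rng.laplace(size=1024) for c in (0, 1) * 50]
labels = np.array([0, 1] * 50)
F = np.vstack([cumulant_features(s) for s in signals])

print(select_by_correlation(F, labels))  # indices into [c2, c4, c6]
```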
ADVANTAGE:
• Although its performance may degrade, it can perform denoising and segmentation tasks of varying difficulty using the same underlying architecture.
• We have validated the performance of aNN on the MNIST variation dataset and obtained accuracy better than or competitive with the state of the art.
• Unlike on the random-background dataset, in this problem more iterations conclusively lead to better performance.
• In the context of image denoising, feedforward neural networks have been shown to perform well (a toy sketch follows this list).
• In our work, we find that feedforward processing is sufficient for good performance on clean digits.
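To make the denoising claim concrete, here is a minimal self-contained sketch (our illustration, not the aNN architecture from the paper): a small feedforward network trained to map noisy synthetic 8x8 patches back to their clean versions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic "images": smooth 8x8 patches, flattened to 64-dim vectors.
clean = torch.rand(512, 1, 8, 8)
clean = nn.functional.avg_pool2d(clean, 3, stride=1, padding=1).flatten(1)
noisy = clean + 0.3 * torch.randn_like(clean)

# Plain feedforward denoiser: no recurrence or top-down feedback.
denoiser = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 64),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

for _ in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(denoiser(noisy), clean)
    loss.backward()
    opt.step()

with torch.no_grad():
    before = nn.functional.mse_loss(noisy, clean).item()
    after = nn.functional.mse_loss(denoiser(noisy), clean).item()
print(f"MSE before: {before:.4f}, after: {after:.4f}")  # should drop
```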