Composite Kernel of Mutual Learning on Mid-Level Features for Hyperspectral Image Classification
ABSTRACT :
By training different models and averaging their predictions, the performance of a machine-learning algorithm can be improved. Jointly optimizing multiple models is expected to generalize well to unseen data, which requires transferring generalization knowledge between the models. In this article, a multiple-kernel mutual learning method based on transfer learning of combined mid-level features is proposed for hyperspectral image classification. Three layers of homogeneous superpixels are computed on the image formed by PCA and used to compute mid-level features. The three mid-level features are: 1) the sparse reconstruction feature; 2) the combined mean feature; and 3) the uniqueness. The sparse reconstruction feature is obtained by a joint sparse representation model under the constraint of the boundaries and regions of the three-scale superpixels. The combined mean feature is computed from the average spectra within the multilayer superpixels, and the uniqueness is obtained from the superposed manifold-ranking values of the multilayer superpixels. Next, three kernels of the samples in the different feature spaces are computed for mutual learning by minimizing their divergence. A combined kernel is then constructed to optimize the sample distance measurement and used with SVM training to build classifiers. Experiments are performed on real hyperspectral datasets, and the results demonstrate that the proposed method performs significantly better than several state-of-the-art competing algorithms based on multiple kernel learning (MKL) and deep learning.
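A minimal sketch of the final composite-kernel step described above, assuming the three mid-level feature matrices have already been extracted. The RBF kernels, the fixed kernel weights, and the toy data are illustrative stand-ins, not the exact mutual-learning formulation of the paper.

```python
# Sketch only: combine one kernel per mid-level feature space and train an SVM
# on the resulting precomputed kernel. Weights would normally come from the
# mutual-learning stage; here they are fixed assumptions.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def composite_kernel(features, weights, gamma=1.0):
    """Weighted sum of RBF kernels, one per mid-level feature space."""
    return sum(w * rbf_kernel(F, F, gamma=gamma) for F, w in zip(features, weights))

# toy stand-ins for the three mid-level feature matrices (n_samples x dim)
rng = np.random.default_rng(0)
n = 60
F1 = rng.normal(size=(n, 30))   # sparse reconstruction feature
F2 = rng.normal(size=(n, 10))   # combined mean feature
F3 = rng.normal(size=(n, 5))    # uniqueness
y = rng.integers(0, 3, size=n)

K_train = composite_kernel([F1, F2, F3], weights=[0.5, 0.3, 0.2])
clf = SVC(kernel="precomputed").fit(K_train, y)
print(clf.score(K_train, y))    # training accuracy with the combined kernel
```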
EXISTING SYSTEM :
• The proposed method further considers contextual information using a Markov random field (MRF), whereas the existing methods do not exploit this prior.
• However, existing HSI datasets suffer from a significant class-imbalance problem, in which many classes do not have enough samples to characterize their spectral information (illustrated after this list).
• The performance of existing CNN models is therefore biased toward the majority classes, which contribute more samples to the training.
• 3D-HyperGAMO generates new samples for the minority classes from noise, following the distribution of the existing class-specific samples.
• The proposed model outperforms existing models in most cases. Moreover, the performance gain obtained by the proposed model is larger for small training sets.
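For illustration only, a minimal sketch of how the class-imbalance problem noted above can be quantified with inverse-frequency class weights; this is a generic example and does not reproduce the generative oversampling performed by 3D-HyperGAMO.

```python
# Illustration: inspect per-class sample counts and derive inverse-frequency
# weights so that rare classes receive larger weights during training.
import numpy as np

def class_weights(labels):
    """Inverse-frequency weights; minority classes get the largest weights."""
    classes, counts = np.unique(labels, return_counts=True)
    weights = counts.sum() / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

y = np.array([0] * 500 + [1] * 40 + [2] * 8)   # toy label vector with a minority class
print(class_weights(y))                        # class 2 receives the highest weight
```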
DISADVANTAGE :
• An important problem in the context of hyperspectral data is the high number of spectral bands combined with the relatively low number of labeled training samples, which gives rise to the well-known Hughes phenomenon.
• This problem is usually mitigated by introducing a feature selection/extraction step before training the hyperspectral classifier, with the basic objective of reducing the high input dimensionality (a sketch of this step is given after the list).
• The properties of kernel methods make them well suited to hyperspectral image classification, since they can handle large input spaces efficiently, work with a relatively small number of labeled training samples, and deal with noisy samples in a robust way.
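A minimal sketch of the feature-extraction step mentioned above, assuming PCA is used to reduce the spectral dimensionality before an SVM is trained; the band count and number of retained components are illustrative assumptions.

```python
# Sketch: reduce the spectral dimensionality with PCA before classification,
# which helps when labeled samples are scarce (Hughes phenomenon).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 200))     # 300 labeled pixels, 200 spectral bands (toy data)
y = rng.integers(0, 5, size=300)

model = make_pipeline(PCA(n_components=30), SVC(kernel="rbf", gamma="scale"))
model.fit(X, y)
print(model.score(X, y))            # training accuracy after dimensionality reduction
```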
PROPOSED SYSTEM :
• In the proposed method, we adopt a fine-tuning (FT) strategy to greatly reduce the cost of retraining the CNN after each round, and thus reduce the computational complexity.
• Although several works have combined active learning (AL) with deep learning for HSI classification, our proposed method has its own specific characteristics.
• Therefore, the proposed method can be regarded as a further attempt to combine AL with a deep neural network while also exploiting more contextual information in order to reduce the labeling cost.
• To expand the training set for training the deep CNN, a novel pixel-pair method is proposed (sketched after this list).
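A hedged sketch of the pixel-pair idea, assuming the common formulation in which same-class pairs keep the class label and mixed-class pairs receive an extra label; the exact variant used by the proposed method may differ.

```python
# Sketch: concatenate pairs of labeled pixels so the training set grows
# roughly quadratically with the number of labeled samples.
import numpy as np

def make_pixel_pairs(X, y, different_label=-1):
    """Build pixel pairs; same-class pairs keep the label, mixed pairs get -1."""
    pairs, labels = [], []
    for i in range(len(X)):
        for j in range(len(X)):
            if i == j:
                continue
            pairs.append(np.concatenate([X[i], X[j]]))
            labels.append(y[i] if y[i] == y[j] else different_label)
    return np.asarray(pairs), np.asarray(labels)

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 50))          # 20 labeled pixels, 50 bands (toy data)
y = rng.integers(0, 3, size=20)
P, t = make_pixel_pairs(X, y)
print(P.shape, t.shape)                # far more pairs than the original 20 samples
```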
ADVANTAGE :
• We take advantage of two especially interesting properties of kernel methods: (i) their good performance when working with high-dimensional input spaces, and (ii) the property derived from Mercer's conditions by which a scaled summation of (positive definite) kernel matrices is itself a valid kernel, which has provided good results in other domains.
• Among all the available kernel machines, we focus on SVMs, which have demonstrated superior performance in the context of hyperspectral image classification.
• However, performance can be further improved by including both spectral and textural information in the classifier.
• Cross-information and weighted-summation kernels show superior performance with respect to the usual stacked approach (a sketch of the weighted-summation kernel is given after this list).
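A minimal sketch of the weighted-summation (composite) kernel mentioned above, assuming separate RBF kernels on spectral and spatial/textural features; the feature matrices and the weight mu are illustrative assumptions.

```python
# Sketch: mu * K_spectral + (1 - mu) * K_spatial remains a valid kernel by
# Mercer's conditions and can be fed to an SVM as a precomputed kernel.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X_spec = rng.normal(size=(100, 120))   # spectral features (toy data)
X_spat = rng.normal(size=(100, 16))    # spatial/textural features (toy data)
y = rng.integers(0, 4, size=100)

mu = 0.6                               # illustrative weighting between the two kernels
K = mu * rbf_kernel(X_spec) + (1 - mu) * rbf_kernel(X_spat)
clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))                 # training accuracy with the composite kernel
```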