L1 Sparsity-Regularized Attention Multiple-Instance Network for Hyperspectral Target Detection

Abstract : Attention-based deep multiple-instance learning (MIL) has been applied to many machine-learning tasks with imprecise training labels. It is also appealing for hyperspectral target detection, where it requires only the label of an area containing some targets, relaxing the effort of labeling individual pixels in the scene. This article proposes an L1 sparsity-regularized attention multiple-instance neural network (L1-attention MINN) for hyperspectral target detection with imprecise labels that enforces the discrimination of false-positive instances from positively labeled bags. The sparsity constraint applied to the attention estimated for the positive training bags strictly complies with the definition of MIL and maintains better discriminative ability. The proposed algorithm has been evaluated on both simulated and real-field hyperspectral (subpixel) target detection tasks, where it achieves advanced performance over state-of-the-art comparison methods, showing the effectiveness of the proposed method for target detection from imprecisely labeled hyperspectral data.
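To make the core idea concrete, the sketch below shows attention-based MIL pooling with an L1 penalty on the attention weights of positively labeled bags. It is a minimal illustration, not the authors' implementation: the module name AttentionMINN, the sigmoid attention gate (chosen so that the L1 penalty is non-trivial), the hidden size, and the weight lambda_l1 are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionMINN(nn.Module):
    """Minimal attention-based MIL network for bags of pixel spectra (illustrative sketch)."""
    def __init__(self, n_bands, hidden=64):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(n_bands, hidden), nn.ReLU())
        self.attn = nn.Linear(hidden, 1)         # unnormalized attention score per instance
        self.classifier = nn.Linear(hidden, 1)   # bag-level detection score

    def forward(self, bag):                      # bag: (n_instances, n_bands)
        h = self.embed(bag)                      # (n_instances, hidden)
        a = torch.sigmoid(self.attn(h))          # gate in (0, 1); assumed form, not the paper's exact attention
        z = (a * h).sum(dim=0) / (a.sum() + 1e-8)  # attention-weighted bag embedding
        return self.classifier(z), a.squeeze(-1)

def bag_loss(model, bag, label, lambda_l1=1e-2):
    """Bag-level BCE plus an L1 penalty on the attention of positive bags,
    pushing attention toward the few true target instances."""
    logit, attn = model(bag)
    loss = F.binary_cross_entropy_with_logits(logit.squeeze(), label)
    if label.item() > 0.5:                       # sparsity applied only to positively labeled bags
        loss = loss + lambda_l1 * attn.abs().sum()
    return loss
```

In this sketch, each bag would be the set of pixel spectra from one labeled region, with the region's imprecise label used as the bag label; the penalty on positive bags reflects the MIL assumption that only a few instances in a positive bag are actual targets.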
EXISTING SYSTEM :
• This approach is useful if two targets exist in every positive bag but not in the negative bags.
• With this, MTMI-ACE and MTMI-SMF remove the need for domain-specific knowledge about how many target signatures may exist, while remaining adjustable through the value of α, which encourages more or less target-signature uniqueness (see the sketch after this list).
• These algorithms were developed under the assumption that both targets exist in each positive bag, rather than the traditional single-target assumption, which is how this dataset was created.
• The training dataset consisted of spatial polygons designating where on the landscape 'pure' patches of species existed.
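As a hedged illustration of how α might trade off signature uniqueness, the sketch below penalizes the average pairwise cosine similarity between learned target signatures. The function name, the cosine form of the penalty, and the scaling by alpha are assumptions for illustration only, not the exact MTMI-ACE/MTMI-SMF objective.

```python
import numpy as np

def uniqueness_penalty(signatures, alpha):
    """Illustrative penalty discouraging redundant target signatures:
    alpha scales the average pairwise cosine similarity between the
    learned target concepts (larger alpha -> more distinct signatures).
    Assumes at least two signatures."""
    S = np.stack([s / np.linalg.norm(s) for s in signatures])  # unit-norm signatures, (k, n_bands)
    sim = S @ S.T                                              # pairwise cosine similarities
    k = len(signatures)
    off_diag = sim[~np.eye(k, dtype=bool)]                     # drop self-similarities
    return alpha * off_diag.mean()
```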
DISADVANTAGE :
• The detection problem can be posed as a binary hypothesis test with two competing hypotheses, target absent (H0) or target present (H1), and a detector can be designed using the generalized likelihood ratio test (GLRT) approach (see the sketch after this list).
• Multiple-instance learning is a variation on supervised learning for problems with imprecisely labeled training data.
• Values close to zero indicate that the associated wavelength is not informative for the target detection problem.
• To illustrate this and to better visualize the ability of MI-ACE and MI-SMF to identify discriminative features, MI-ACE and MI-SMF were also applied to an MIL detection problem constructed from the AR Face Data Set.
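For reference, the classical GLRT-derived detectors referred to above, the spectral matched filter (SMF) and the adaptive cosine/coherence estimator (ACE), can be computed as below. This is the standard textbook form of the two statistics; the function and variable names are illustrative, and the SMF normalization shown is one common convention.

```python
import numpy as np

def smf_ace_scores(X, s, mu, cov):
    """Classical SMF and ACE detection statistics for pixels X (n_pixels, n_bands),
    target signature s (n_bands,), background mean mu and covariance cov."""
    cov_inv = np.linalg.inv(cov)            # np.linalg.solve could be used for robustness
    Xc = X - mu                             # mean-subtracted pixels
    sc = s - mu                             # mean-subtracted target signature
    st_ci = sc @ cov_inv                    # s^T Sigma^{-1}
    num = Xc @ st_ci                        # s^T Sigma^{-1} x for each pixel
    smf = num / np.sqrt(st_ci @ sc)         # spectral matched filter statistic
    ace = num**2 / ((st_ci @ sc) *          # adaptive cosine/coherence estimator statistic
                    np.einsum('ij,jk,ik->i', Xc, cov_inv, Xc))
    return smf, ace
```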
PROPOSED SYSTEM :
• The proposed methods address the problems above by directly considering the multiple-instance, imprecisely labeled dataset (a minimal illustration of how such bags are formed is sketched after this list).
• This framework alleviates the need for accurate labels, which are inherently challenging to collect. Since the introduction of MIL, numerous MIL algorithms have been proposed.
• Two of the earliest MIL algorithms are the Diverse Density (DD) algorithm and the Expectation-Maximization with Diverse Density (EM-DD) algorithm.
• The MILMD-SMF algorithm, which is most similar to our proposed algorithm, achieved results comparable to the MTMI algorithm.
• Comprehensive experiments show that the proposed MTMI-ACE and MTMI-SMF algorithms are effective in learning discriminative target concepts.
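A minimal sketch of how imprecisely labeled regions become MIL bags is shown below. The helper name make_bags and its inputs are hypothetical; the sketch only illustrates the standard MIL assumption that a positive bag contains at least one target pixel while a negative bag contains none.

```python
import numpy as np

def make_bags(pixels, region_ids, region_labels):
    """Group pixel spectra into MIL bags: each labeled region becomes one bag.
    pixels: (n_pixels, n_bands), region_ids: (n_pixels,),
    region_labels: dict mapping region id -> 0/1 region-level label."""
    bags, bag_labels = [], []
    for rid, lbl in region_labels.items():
        bags.append(pixels[region_ids == rid])  # all pixel spectra in that region
        bag_labels.append(lbl)                  # imprecise, region-level label
    return bags, np.array(bag_labels)
```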
ADVANTAGE :
• In other words, in addition to optimizing SMF and ACE performance, the signatures estimated by MI-SMF and MI-ACE are interpretable and provide insight into the discriminative, salient features of the target.
• However, for the second data set, neither MI-SMF nor MI-ACE recovers the true target signature from the data; instead, each estimates a target concept that maximizes target detection performance.
• If we train MI-ACE or MI-SMF on data collected under one set of environmental conditions and test on data collected under different conditions, performance will depend on whether the relative magnitudes of the target and background materials are similar across those conditions.
