Joint Sparse Locality-Aware Regression for Robust Discriminative Learning
ABSTRACT :
With the dramatic increase in the dimensionality of data representations, extracting latent low-dimensional features becomes of the utmost importance for efficient classification. Aiming at the problems of weakly discriminative marginal representation and the difficulty of revealing the data manifold structure in most existing linear discriminant methods, we propose a more powerful discriminant feature extraction framework, namely, joint sparse locality-aware regression (JSLAR). In our model, we formulate a new strategy induced by the nonsquared L2 norm for enhancing the local intraclass compactness of the data manifold, which can achieve the joint learning of the locality-aware graph structure and the desirable projection matrix. Besides, we formulate a weighted retargeted regression to perform the marginal representation learning adaptively instead of using the general average interclass margin. To alleviate the disturbance of outliers and prevent overfitting, we measure the regression term and the locality-aware term, together with the regularization term, by enforcing row sparsity with the joint L2,1 norms. We then derive an effective iterative algorithm for solving the proposed model. Experimental results over a range of benchmark databases demonstrate that the proposed JSLAR outperforms several state-of-the-art approaches.
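To make the row-sparsity idea in the abstract concrete, the sketch below computes the L2,1 norm of a matrix (the sum of the L2 norms of its rows); penalizing this quantity drives whole rows of a projection matrix toward zero. The function name and example data are illustrative, not taken from the paper:

```python
import numpy as np

def l21_norm(W):
    """L2,1 norm: sum of the L2 norms of the rows of W.

    Penalizing this value encourages entire rows of W to become zero,
    which is the joint (row) sparsity effect described in the abstract.
    """
    return np.sum(np.linalg.norm(W, axis=1))

# A matrix with one all-zero row contributes nothing for that row.
W = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [1.0, 0.0]])
print(l21_norm(W))  # 5 + 0 + 1 = 6.0
```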
EXISTING SYSTEM :
• We present a well-designed tracking algorithm that jointly learns a visual dictionary and a nonlinear classification function, inspired by the nonlinear learning theory mentioned above, under a semi-supervised framework.
• The proposed method can therefore efficiently overcome several limitations that arise in most existing visual tracking approaches.
• The label value is 1 for a sample that completely overlaps the target region, and 0 if no overlap exists between them.
• Some inaccuracy may exist in the center location of the estimated target. To refine the estimated target state, we train a correlation filter using the holistic target template cropped from the initial frame.
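The correlation-filter refinement step above can be sketched as a minimal single-template, MOSSE-style filter solved in closed form in the Fourier domain. All names, the Gaussian response shape, and the parameter values here are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def train_correlation_filter(template, sigma=2.0, lam=1e-3):
    """Train a minimal correlation filter from one grayscale template patch.

    The desired response `g` is a Gaussian peaked at the patch center; the
    filter is the closed-form ridge solution in the frequency domain:
        H = G * conj(F) / (F * conj(F) + lam)
    """
    h, w = template.shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))
    F = np.fft.fft2(template)
    G = np.fft.fft2(g)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def respond(H, patch):
    """Correlate a search patch with the filter; the peak of the real
    response map gives the refined target center."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(patch)))

rng = np.random.default_rng(0)
patch = rng.standard_normal((32, 32))
H = train_correlation_filter(patch)
resp = respond(H, patch)
peak = np.unravel_index(np.argmax(resp), resp.shape)
print(peak)  # peaks at the patch center, (16, 16)
```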
DISADVANTAGES :
• In the original regression, the strict zero-one target matrix cannot be approximated as an ideal low-dimensional embedding; we therefore prefer the flexible formulation, which helps realize the learning of the marginal representation.
• We transform the problem into a smooth optimization problem that is jointly convex in all variables.
• A significant way to address these issues is dimensionality reduction (DR), which transforms the original high-dimensional data into a low-dimensional subspace by some effective means.
• Feature selection (FS) and feature extraction (FE) are the two main techniques for addressing the DR problem in high-dimensional data.
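The FE/FS distinction above can be illustrated with two minimal sketches: PCA as feature extraction (the new features are linear combinations of the originals) and variance ranking as feature selection (a subset of the original features is kept). Both functions and the variance criterion are illustrative assumptions, not the paper's methods:

```python
import numpy as np

def pca_extract(X, k):
    """Feature extraction: project the data onto the top-k principal
    directions, obtained from the SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T  # new features mix all original dimensions

def variance_select(X, k):
    """Feature selection: keep the k original features with the largest
    variance (a deliberately simple selection criterion)."""
    idx = np.argsort(X.var(axis=0))[::-1][:k]
    return X[:, np.sort(idx)]  # a subset of the original columns

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 10))
print(pca_extract(X, 3).shape)      # (100, 3)
print(variance_select(X, 3).shape)  # (100, 3)
```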
PROPOSED SYSTEM :
• In contrast, the proposed tracking approach can construct a dictionary that fully reflects the intrinsic manifold structure of visual data and introduces more discriminative ability in a unified learning framework.
• Based on this inference, a nonlinear learning theory using Local Coordinate Coding (LCC) has been proposed.
• The proposed tracking approach performs well on the benchmark and even better than recently proposed trackers such as KCF and TGPR.
• These methods can easily obtain appearance models with more discriminative power than those of other methods, whereas our method trains the proposed model with samples from only a few frames.
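To make the LCC idea referenced above concrete, here is a toy sketch that codes a query point over its k nearest anchor points with affine (sum-to-one) weights, solved in closed form as in locally linear embedding. This is an illustrative assumption, not the exact LCC formulation used in the proposed system:

```python
import numpy as np

def local_coding(x, anchors, k=3):
    """Represent x using only its k nearest anchors with sum-to-one
    weights, so the code is sparse and local by construction."""
    d = np.linalg.norm(anchors - x, axis=1)
    nn = np.argsort(d)[:k]           # indices of the k nearest anchors
    Z = anchors[nn] - x              # shift neighbors to the query point
    C = Z @ Z.T + 1e-6 * np.eye(k)   # local Gram matrix, regularized
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                     # enforce the affine constraint
    code = np.zeros(len(anchors))
    code[nn] = w
    return code

rng = np.random.default_rng(2)
anchors = rng.standard_normal((20, 5))
x = rng.standard_normal(5)
c = local_coding(x, anchors)
print(np.count_nonzero(c), round(float(c.sum()), 6))  # at most 3 nonzeros, sum 1.0
```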
ADVANTAGES :
• To appropriately reduce the dimensionality of the data and improve computational efficiency while maintaining high classification performance, preserving the local manifold structure is crucial.
• However, the strict zero-one targets are too harsh on the marginal representation to yield superior classification performance.
• We theoretically prove the convergence of the proposed algorithm and experimentally verify its superior classification and generalization performance on multiple databases.
• We analyze the irrationality of the LDA optimization criteria and establish more discriminative criteria, while ensuring higher generalization performance, by replacing the intra-class and inter-class scatter.
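The intra-class and inter-class scatter matrices mentioned above are the two terms that the improved criteria replace. The sketch below computes both in the standard LDA form and checks the classical decomposition St = Sw + Sb; the function name and synthetic data are illustrative:

```python
import numpy as np

def scatter_matrices(X, y):
    """Compute LDA's within-class (Sw) and between-class (Sb) scatter."""
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)        # spread within class c
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * diff @ diff.T        # spread of class c's mean
    return Sw, Sb

rng = np.random.default_rng(3)
X = np.vstack([rng.standard_normal((30, 4)) + 2,
               rng.standard_normal((30, 4))])
y = np.array([0] * 30 + [1] * 30)
Sw, Sb = scatter_matrices(X, y)
# The total scatter decomposes exactly as St = Sw + Sb.
St = (X - X.mean(axis=0)).T @ (X - X.mean(axis=0))
print(np.allclose(St, Sw + Sb))  # True
```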