Weak Estimator-Based Stochastic Searching on the Line in Dynamic Dual Environments

Abstract: Stochastic point location deals with the problem of finding a target point on a real line through a learning mechanism (LM), with a stochastic environment (SE) offering directional information. The SE is categorized as informative or deceptive according to whether p, the probability that it suggests the correct direction to the LM, is above 0.5 or not. Several attempts have been made to enable the LM to work in both types of environments, but none of them considers a dynamically changing environment in which p varies with time. A dynamic dual environment changes so drastically that it frequently switches from an informative environment to a deceptive one, or vice versa. This article presents a novel weak estimator-based adaptive step search solution that enables the LM to track the target in a dynamic dual environment. The experimental results show that the proposed solution is feasible and efficient.
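The abstract describes the scheme only at a high level, so the following is a minimal illustrative sketch, not the authors' algorithm. It couples a weak estimator (here the binomial SLWE, an exponentially weighted estimate of p) with a fixed-resolution search on [0, 1]: the environment's suggestion is followed while the estimate indicates an informative environment and inverted while it indicates a deceptive one. One simplification is flagged in the comments: the estimator is allowed to observe whether each suggestion was correct, which a real LM would have to infer.

```python
import random

def slwe(q, x, lam=0.95):
    # Binomial SLWE update: an exponentially weighted estimate that never
    # stops adapting, which is what lets it track a time-varying p.
    return lam * q + (1.0 - lam) * x

def spl_dynamic_dual(target=0.7, steps=4000, delta=0.005, seed=1):
    random.seed(seed)
    x, p_hat = 0.5, 0.5              # search point and weak estimate of p
    for t in range(steps):
        # Dynamic dual environment: p alternates between informative (0.8)
        # and deceptive (0.2) every 1000 steps.
        p = 0.8 if (t // 1000) % 2 == 0 else 0.2
        correct = 1 if x < target else -1               # true direction
        suggestion = correct if random.random() < p else -correct
        # Simplifying assumption of this sketch: the estimator observes
        # whether the suggestion was correct; the real scheme infers it.
        p_hat = slwe(p_hat, 1.0 if suggestion == correct else 0.0)
        # Follow the suggestion in an informative regime, invert it in a
        # deceptive one, and step with fixed resolution delta.
        direction = suggestion if p_hat >= 0.5 else -suggestion
        x = min(1.0, max(0.0, x + delta * direction))
    return x, p_hat

print(spl_dynamic_dual())   # x should end near the target 0.7
```

Because the weak estimator never stops adapting, its estimate re-crosses 0.5 shortly after the environment switches regimes, which is what makes tracking in a dynamic dual environment possible.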
EXISTING SYSTEM:
• The fluctuation in SGD helps the objective function jump to another possible minimum. However, the fluctuation always exists, which may slow down convergence; a minimal SGD sketch follows this list.
• In some practical optimization problems, the derivative of the objective function may not exist or may not be easy to calculate.
• Derivative-free optimization methods are mainly used when the derivative of the objective function does not exist or is difficult to calculate. There are two main ideas in derivative-free optimization methods.
• There are many powerful integrated toolkits; we summarize and present the common existing optimization toolkits.
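To make the fluctuation point concrete, here is a minimal SGD loop on a toy least-squares problem (the objective, data, and learning rate are illustrative assumptions, not from the paper). Because each update uses a single noisy per-sample gradient, the iterate hovers around the minimizer instead of settling exactly on it.

```python
import random

def sgd(grad_fn, data, w0=0.0, lr=0.1, epochs=50, seed=0):
    # Plain SGD: one sample per update.  Cheap per iteration, but the
    # single-sample gradient is noisy, so the iterate keeps fluctuating
    # around the minimum instead of settling exactly on it.
    random.seed(seed)
    w = w0
    for _ in range(epochs):
        random.shuffle(data)
        for x in data:
            w -= lr * grad_fn(w, x)
    return w

# Toy least-squares objective f(w) = mean((w - x_i)^2) / 2; the minimizer
# is the sample mean 2.5, and the per-sample gradient is (w - x_i).
data = [1.0, 2.0, 3.0, 4.0]
print(sgd(lambda w, x: w - x, data))   # hovers near 2.5, not exactly on it
```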
DISADVANTAGE:
• The fluctuation in SGD never vanishes, so the iterates keep oscillating around a minimum, which may slow down convergence.
• When the derivative of the objective function does not exist or is hard to calculate, gradient-based methods are inapplicable, and one must fall back on derivative-free methods; a minimal derivative-free sketch follows this list.
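A simple derivative-free option is (1+1)-style random search, which needs only function evaluations. The sketch below (the objective, step size, and iteration budget are illustrative assumptions) minimizes a non-smooth function whose derivative does not exist at the minimizer.

```python
import random

def random_search(f, x0, sigma=0.5, iters=2000, seed=0):
    # (1+1)-style random search: propose a Gaussian perturbation of the
    # incumbent and keep it only if it improves f.  Only function values
    # are needed, so it applies when gradients do not exist.
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for _ in range(iters):
        cand = x + rng.gauss(0.0, sigma)
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
    return x, fx

# Non-smooth objective |x - 3| + 1: no derivative at the minimizer x = 3.
print(random_search(lambda x: abs(x - 3.0) + 1.0, x0=0.0))
```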
PROPOSED SYSTEM:
• To mitigate the cost of computation, several parallelization methods were proposed.
• Because batch gradient descent has high computational complexity in each iteration on large-scale data and does not allow online updates, stochastic gradient descent (SGD) was proposed.
• Because the training samples contain a large amount of redundant information, SGD methods have been very popular since they were proposed.
• The stochastic average gradient (SAG) method [36] is a variance reduction method proposed to improve the convergence speed; a SAG sketch follows this list.
• A linearly convergent method was proposed that combines the L-BFGS method with the variance reduction technique.
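The SAG idea can be sketched in a few lines: store the most recently computed gradient for every sample and step along the average of the stored table, so each iteration touches one sample yet uses a low-variance direction. This is an illustrative scalar version under assumed toy data, not the implementation from [36].

```python
import random

def sag(grad_fn, data, w0=0.0, lr=0.05, iters=2000, seed=0):
    # SAG sketch: remember the last gradient computed for each sample and
    # step along the running average of the whole table.  Each iteration
    # still costs one gradient evaluation, but the averaged direction has
    # far lower variance than plain SGD, which speeds up convergence.
    rng = random.Random(seed)
    n, w = len(data), w0
    g = [0.0] * n          # memorized per-sample gradients
    g_sum = 0.0            # running sum of the table
    for _ in range(iters):
        i = rng.randrange(n)
        gi = grad_fn(w, data[i])
        g_sum += gi - g[i]  # swap in sample i's fresh gradient
        g[i] = gi
        w -= lr * g_sum / n
    return w

# Same toy least-squares problem as above: the minimizer is the mean 2.5.
data = [1.0, 2.0, 3.0, 4.0]
print(sag(lambda w, x: w - x, data))   # converges tightly to 2.5
```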
ADVANTAGE:
• The improvement in the performance of the recursive dual filter with local iterations (IEKF) can be assessed from the results given in.
• We assess the impact of system and observation nonlinearities on EKF performance by comparing recursive dual estimation with traditional joint estimation; a minimal dual-filter sketch follows this list.
• The estimation errors for both joint and dual estimation show satisfactory tracking performance.
• Moreover, dynamic features can be handled effectively and efficiently by removing filters from, or adding them to, a bank of filters, one assigned per feature.
• Computational efficiency is significantly enhanced by exploiting the fact that the sparseness of the inverse of the state covariance matrix (the Fisher information matrix) is preserved during filtering.
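Dual estimation can be illustrated with two cooperating scalar Kalman filters, a linear stand-in for the EKF/IEKF mentioned above: one filter tracks the state while the other tracks an unknown model parameter, each treating the other's latest estimate as known. The model, noise levels, and constants here are assumptions chosen for illustration.

```python
import random

def dual_kf(ys, a0=0.5):
    # Dual (not joint) estimation: two scalar Kalman filters run side by
    # side.  One tracks the state x of x[t] = a*x[t-1] + w, y[t] = x[t] + v;
    # the other tracks the unknown parameter a as a slow random walk.
    q, r = 0.01, 0.01              # process / observation noise variances
    x, Px = 0.0, 1.0               # state estimate and its variance
    a, Pa = a0, 1.0                # parameter estimate and its variance
    for y in ys:
        # Parameter filter: observation model y ~ a * x_prev, so H = x_prev.
        H = x
        Pa_pred = Pa + 1e-4        # slow random walk keeps it adaptive
        Ka = Pa_pred * H / (H * H * Pa_pred + r)
        a = a + Ka * (y - H * a)
        Pa = (1.0 - Ka * H) * Pa_pred
        # State filter: uses the freshly updated parameter estimate.
        x_pred = a * x
        P_pred = a * a * Px + q
        K = P_pred / (P_pred + r)
        x = x_pred + K * (y - x_pred)
        Px = (1.0 - K) * P_pred
    return x, a

# Simulate data from a true parameter a = 0.9 and recover it.
random.seed(0)
true_a, xt, ys = 0.9, 1.0, []
for _ in range(2000):
    xt = true_a * xt + random.gauss(0.0, 0.1)
    ys.append(xt + random.gauss(0.0, 0.1))
print(dual_kf(ys))   # the second value should drift toward 0.9
```

By contrast, joint estimation would stack x and a into a single augmented state and run one nonlinear filter over both, which is the traditional alternative the comparison above refers to.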
