Beneficial Perturbation Network for Designing General Adaptive Artificial Intelligence Systems

Abstract : The human brain is the gold standard of adaptive learning. It can not only learn and benefit from experience but also adapt to new situations. In contrast, deep neural networks only learn one sophisticated but fixed mapping from inputs to outputs. This limits their applicability to more dynamic situations, where the input-to-output mapping may change with different contexts. A salient example is continual learning: learning new independent tasks sequentially without forgetting previous tasks. Continual learning of multiple tasks in artificial neural networks using gradient descent leads to catastrophic forgetting, whereby a previously learned mapping for an old task is erased when learning new mappings for new tasks. Herein, we propose a new, biologically plausible type of deep neural network with extra, out-of-network, task-dependent biasing units to accommodate these dynamic situations. This allows, for the first time, a single network to learn potentially unlimited parallel input-to-output mappings and to switch among them on the fly at runtime. Biasing units are programmed by leveraging beneficial perturbations (the opposite of well-known adversarial perturbations) for each task. Beneficial perturbations for a given task bias the network toward that task, essentially switching the network into a different mode to process that task. This largely eliminates catastrophic interference between tasks. Our approach is memory- and parameter-efficient, can accommodate many tasks, and achieves state-of-the-art performance across different tasks and domains.
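As an illustration of the biasing-unit idea, below is a minimal PyTorch-style sketch of a layer with one extra, out-of-network bias vector per task. The names (BiasedLinear, set_task, num_tasks) and the exact placement of the perturbation are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch of task-dependent biasing units (illustrative, not the
# authors' code). Each task owns one extra bias vector stored outside the
# shared weights; selecting a task switches the layer into that task's mode.
import torch
import torch.nn as nn

class BiasedLinear(nn.Module):
    def __init__(self, in_features, out_features, num_tasks):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)   # shared weights
        # One trainable "beneficial perturbation" vector per task.
        self.task_bias = nn.Parameter(torch.zeros(num_tasks, out_features))
        self.active_task = 0

    def set_task(self, task_id):
        # Switch the layer to the given task's mode at runtime.
        self.active_task = task_id

    def forward(self, x):
        # Add the active task's perturbation to the shared activations.
        return self.linear(x) + self.task_bias[self.active_task]
```

When training task t, only task_bias[t] (and, depending on the variant, the shared weights) would be updated; at test time, calling set_task(t) switches the network to task t on the fly without disturbing the biases stored for other tasks.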
 EXISTING SYSTEM :
• They demonstrated the problem of directly applying algorithms such as FGSM to produce adversarial text samples: the output is scrambled text that is easily recognized as noise by human readers (a generic sketch of FGSM follows this list).
• The shared TextFool code base can be used to produce adversarial text samples; its main idea is to modify existing words in the text samples through synonyms and spelling errors.
• We summarize the application of adversarial attacks in four fields and existing defense methods against adversarial attacks.
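For reference, the gradient-sign attack named above can be written in a few lines. This is a generic sketch for continuous inputs under an assumed differentiable model and cross-entropy loss; the text-domain variants discussed above instead rely on discrete word-level edits (synonyms, spelling errors).

```python
# Generic FGSM sketch (continuous inputs); model, x, y and epsilon are
# assumed inputs, not values taken from the works cited above.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return a copy of x perturbed by the fast gradient sign method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```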
 DISADVANTAGE :
• While interacting with new environments that are not fully known to it, the human brain can quickly learn and adapt its behavior to achieve goals as well as possible across a wide range of environments, situations, tasks, and problems.
• In contrast, deep neural networks only learn one sophisticated but fixed mapping between inputs and outputs, limiting their application in more complex and dynamic situations in which the mapping rules are not fixed but change according to different tasks or contexts.
 PROPOSED SYSTEM :
• In order to improve the robustness of neural networks against adversarial attacks, researchers have proposed a large number of adversarial defense methods, which can be divided into three main categories: modifying the data, modifying the model, and using auxiliary tools.
• Based on this idea, they proposed a new algorithm called Dense Adversary Generation (DAG), which produces large categories of adversarial samples suitable for segmentation and detection in state-of-the-art deep networks.
• They combine the high-level class loss with the proposed low-level multi-scale feature loss in a GAN framework to jointly train a better generator, as sketched below.
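A hedged sketch of how such a joint objective might look is given below; the feature_maps() helper, the loss weights, and the exact combination are assumptions for illustration, not the cited authors' formulation.

```python
# Illustrative joint generator loss: high-level class loss plus a low-level
# multi-scale feature loss. classifier, feature_maps and the lambda weights
# are assumed, not taken from the cited work.
import torch
import torch.nn.functional as F

def generator_loss(classifier, feature_maps, x_clean, x_gen, target_class,
                   lambda_cls=1.0, lambda_feat=0.1):
    # High-level class loss: push the classifier's prediction on the
    # generated sample toward the target class.
    cls_loss = F.cross_entropy(classifier(x_gen), target_class)

    # Low-level multi-scale feature loss: keep intermediate features of the
    # generated sample close to those of the clean input at several scales.
    feat_loss = sum(F.mse_loss(fg, fc)
                    for fg, fc in zip(feature_maps(x_gen), feature_maps(x_clean)))

    return lambda_cls * cls_loss + lambda_feat * feat_loss
```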
 ADVANTAGE :
• BPN achieves state-of-the-art performance across different data sets and domains: to demonstrate this, we consider a sequence of eight unrelated object recognition data sets (Experiments).
• After training on the eight complex data sets sequentially, the average test accuracy of BPN is better than the state of the art.
• We consider 100 tasks; each new task has ten classes.
• We used a fully connected neural network with four hidden layers of 128 ReLU units (a core network with small capacity) to compare the performance of different methods; a sketch of this core network follows.
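The small-capacity core network used for the 100-task comparison follows directly from the description above (four hidden layers of 128 ReLU units); the input dimension and the single 10-way output head below are assumptions for illustration.

```python
# Sketch of the small core network: a fully connected net with four hidden
# layers of 128 ReLU units. input_dim=784 and a single 10-way head are
# assumptions; the paper's setup may differ (e.g., per-task heads).
import torch.nn as nn

def make_core_network(input_dim=784, num_classes=10):
    layers, prev, width = [], input_dim, 128
    for _ in range(4):                            # four hidden layers
        layers += [nn.Linear(prev, width), nn.ReLU()]
        prev = width
    layers.append(nn.Linear(prev, num_classes))   # ten classes per task
    return nn.Sequential(*layers)
```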
