An Input Weights Dependent Complex-Valued Learning Algorithm Based on Wirtinger Calculus
ABSTRACT :
A complex-valued neural network is a learning model that can handle problems in the complex domain. The fully complex extreme learning machine (CELM) trains much faster than the complex backpropagation (CBP) scheme, but at the cost of using more hidden nodes to obtain comparable performance. An upper-layer-solution-aware algorithm has been proposed for training single-hidden-layer feedforward neural networks, which performs much better than its counterparts, pseudo-inverse learning (PIL)/extreme learning machine and gradient-descent-based backpropagation neural networks. Consequently, two challenges remain: 1) how to combine the advantages of CBP and CELM into a novel complex learning algorithm, and 2) what the convergence behavior of such an algorithm is. In this article, an input-weights-dependent complex-valued (IWDCV) learning algorithm based on Wirtinger calculus is proposed, which effectively resolves the nonanalyticity of common activation functions during the training of neural networks. In addition, the monotonicity of the error function and the deterministic convergence of the proposed model are rigorously proved, which theoretically guarantees the efficiency and effectiveness of the given model, IWDCV. Finally, a variety of simulations on real- and complex-valued problems demonstrate the competitive performance of the proposed algorithm and support the theoretical observations as well.
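The Wirtinger calculus named in the abstract replaces the missing complex derivative of a nonanalytic, real-valued loss with a pair of real partial derivatives. As a minimal illustrative sketch (not the paper's IWDCV algorithm; the function name and toy loss are hypothetical), the conjugate Wirtinger derivative can be estimated numerically and used for steepest descent on complex weights:

```python
import numpy as np

def wirtinger_grad(loss, w, eps=1e-7):
    # Numerical conjugate Wirtinger derivative dL/d(conj(w)) of a
    # real-valued loss: 0.5 * (dL/dRe(w) + 1j * dL/dIm(w)),
    # estimated component-wise by central differences.
    g = np.zeros_like(w)
    for k in range(w.size):
        e = np.zeros_like(w)
        e[k] = eps
        d_re = (loss(w + e) - loss(w - e)) / (2 * eps)
        d_im = (loss(w + 1j * e) - loss(w - 1j * e)) / (2 * eps)
        g[k] = 0.5 * (d_re + 1j * d_im)
    return g

# Toy quadratic loss L(w) = |w - t|^2 with target t; its conjugate
# Wirtinger derivative is exactly (w - t), so steepest descent
# w <- w - lr * dL/d(conj(w)) converges to t.
t = np.array([1 + 2j, -0.5j])
loss = lambda w: np.sum(np.abs(w - t) ** 2)

w = np.zeros(2, dtype=complex)
for _ in range(200):
    w -= 0.1 * wirtinger_grad(loss, w)
```

For this quadratic the update contracts the error by a factor 0.9 per step, so `w` approaches `t`; the same conjugate-derivative update rule is what Wirtinger-based training schemes apply to nonanalytic activations.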
EXISTING SYSTEM :
• Starting from existing deep-learning architectures, we identify the limitations that prevent these approaches from incorporating complex-valued data.
• We then extend the existing systems with the missing mathematical foundations needed to support complex data.
• While computer vision applications were quick to adopt deep learning, not all domains can leverage the benefits of existing neural network libraries.
• The image quality prediction process requires an understanding of the existing noise level in the fully-sampled reference data.
• However, existing synthesis methods for reconstructing MR fingerprinting data are based on incomplete simulation models.
DISADVANTAGE :
• It is sensitive to phase structure, and we suggest it serves as a regularized model for problems where such structure is important.
• These algorithms avoid hand-crafting solutions to specific problems by opting instead to "learn" and adapt according to a set of examples called the training set.
• Many problems in computer vision are complicated enough to pose significant difficulties for ad hoc algorithms.
• The machine learning approach avoids tailoring specific algorithms to these problems by allowing computer programs to learn to solve such problems themselves.
• These difficulties mainly concern activation functions and the optimization problem.
PROPOSED SYSTEM :
• Our proposed method complements these prior works and can be used in conjunction with them for future improvements.
• The gains shown from our proposed cardioid activation highlight the importance of activation function design on network performance.
• Without compromising parameter map quality, the proposed neural network methods can produce parameter maps two orders of magnitude faster than the baseline dictionary matching methods when considering B0 maps in addition to T1 and T2.
• The network trained with the proposed empirical residual model learns a parameter mapping function that performs well on both the clean training signal and the in vivo test signal.
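The cardioid activation credited above with the performance gains (introduced for complex-valued MR fingerprinting networks) is a phase-sensitive gate; the sketch below follows the published form, in which each input is scaled by a factor between 0 and 1 that depends only on its phase:

```python
import numpy as np

def cardioid(z):
    # Cardioid activation: scale z by 0.5 * (1 + cos(angle(z))).
    # Positive reals pass unchanged, negative reals are zeroed,
    # so it reduces to ReLU on the real line.
    return 0.5 * (1 + np.cos(np.angle(z))) * z
```

Unlike a split activation, the cardioid treats the complex input holistically: magnitude is preserved up to the phase-dependent gain, which is why it suits phase-sensitive data such as MR fingerprinting signals.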
ADVANTAGE :
• This was one of the major breakthroughs that allowed a new level of performance in many computer vision tasks, such as image classification, object detection, and face recognition.
• The ability to transfer smoothly between them might create some intermediate operator that would increase performance.
• The momentum coefficient and learning rate were chosen to maximize performance over the training set.
• In earlier work, synchronization was introduced to neural networks via complex numbers and was used for segmenting images into separate objects.
• Pooling is used to induce invariance to small translations, which is a characteristic of natural images.
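The momentum-and-learning-rate tuning mentioned above carries over unchanged to complex weights once the gradient is read as the conjugate Wirtinger derivative. A hedged toy sketch (the quadratic loss and the constants here are illustrative choices, not values from the paper):

```python
import numpy as np

# Gradient descent with momentum on a toy complex quadratic
# L(w) = |w - t|^2, whose conjugate Wirtinger derivative is (w - t).
t = np.array([1.0 - 1.0j, 2.0 + 0.5j])
w = np.zeros_like(t)
v = np.zeros_like(t)
lr, mu = 0.1, 0.9          # learning rate and momentum coefficient

for _ in range(300):
    grad = w - t           # dL/d(conj(w)) for this quadratic
    v = mu * v - lr * grad # momentum accumulates past gradients
    w = w + v
```

The update equations are identical to the real-valued heavy-ball method; only the interpretation of `grad` changes, which is why tuning the two hyperparameters over the training set works the same way in the complex domain.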