A MULTIPLE GRADIENT DESCENT DESIGN FOR MULTI-TASK LEARNING ON EDGE COMPUTING: A MULTI-OBJECTIVE MACHINE LEARNING APPROACH


ABSTRACT:

In multi-task learning, multiple tasks are solved jointly, sharing inductive bias between them. Multi-task learning is inherently a multi-objective problem because different tasks may conflict, necessitating a trade-off. A common compromise is to optimize a proxy objective that minimizes a weighted linear combination of per-task losses. However, this workaround is only valid when the tasks do not compete, which is rarely the case. In this paper, we explicitly cast multi-task learning as multi-objective optimization, with the overall objective of finding a Pareto optimal solution. To this end, we use algorithms developed in the gradient-based multi-objective optimization literature.
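To make the criticized proxy concrete, the following minimal Python sketch combines per-task losses with fixed weights; the loss values and weights shown are hypothetical placeholders, not values from the paper.

```python
# A minimal sketch of the weighted linear-combination proxy objective;
# the task losses and weights below are hypothetical placeholders.
import torch

def weighted_sum_loss(task_losses, weights):
    # Fixed convex combination of per-task losses. This is only a sound
    # surrogate when tasks do not compete; with conflicting tasks, any
    # single weighting bakes in one particular trade-off.
    return sum(w * l for w, l in zip(weights, task_losses))

# Illustrative two-task example with made-up loss values.
losses = [torch.tensor(0.8), torch.tensor(1.3)]
print(weighted_sum_loss(losses, weights=[0.5, 0.5]))  # tensor(1.0500)
```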

EXISTING SYSTEM:

• A variety of algorithms for multi-objective optimization exist. One such approach is the multiple-gradient descent algorithm (MGDA), which uses gradient-based optimization and provably converges to a point on the Pareto set (Désidéri, 2012). MGDA is well-suited for multi-task learning with deep networks.
• It uses the gradients of each task and solves an optimization problem to decide on an update over the shared parameters; a sketch of this min-norm step is given below. However, two technical problems hinder the applicability of MGDA at large scale.
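For two tasks, the min-norm step at the heart of MGDA has a well-known closed form. The NumPy sketch below is an illustrative reconstruction, not the paper's exact implementation: it solves min over a in [0, 1] of ||a*g1 + (1-a)*g2||^2 and returns the resulting common descent direction for the shared parameters.

```python
# A minimal NumPy sketch of the two-task MGDA update (Désidéri, 2012):
# find the minimum-norm convex combination of the two task gradients.
import numpy as np

def mgda_two_task_direction(g1, g2):
    """Return the min-norm point of the segment between gradients g1 and g2."""
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:                       # gradients already coincide
        return g1
    # Closed-form minimizer of ||a*g1 + (1-a)*g2||^2, clipped to [0, 1].
    alpha = np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0)
    return alpha * g1 + (1.0 - alpha) * g2
```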

DISADVANTAGE:

• The problem of finding Pareto optimal solutions given multiple criteria is called multi-objective optimization.
• MGDA must use the gradients of each task and solve an optimization problem at every step to decide on an update over the shared parameters, which becomes expensive when the gradients are high-dimensional.
• In this paper, we develop a Frank-Wolfe-based optimizer that scales to high-dimensional problems; a sketch of such a solver follows this list.
• We empirically evaluate the presented method on three different problems.
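The sketch below is a hedged illustration of a Frank-Wolfe solver for the general T-task min-norm problem, min over simplex weights alpha of ||sum_t alpha_t * g_t||^2. The fixed iteration count and the standard 2/(k+2) step size are illustrative choices, not the paper's exact line search; note that only the T-by-T Gram matrix of gradient inner products is needed, which is what makes the approach scale to high-dimensional gradients.

```python
# A sketch of a Frank-Wolfe solver for the min-norm problem over the simplex.
import numpy as np

def frank_wolfe_min_norm(G, iters=50):
    """G: (T, d) array with one task gradient per row. Returns simplex weights."""
    T = G.shape[0]
    M = G @ G.T                        # (T, T) Gram matrix; d never appears again
    alpha = np.full(T, 1.0 / T)        # start at the simplex centre
    for k in range(iters):
        grad = 2.0 * M @ alpha         # gradient of the quadratic objective
        t = int(np.argmin(grad))       # linear minimizer is a simplex vertex
        step = 2.0 / (k + 2.0)         # standard Frank-Wolfe step size
        vertex = np.zeros(T)
        vertex[t] = 1.0
        alpha = (1.0 - step) * alpha + step * vertex
    return alpha
```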

PROPOSED SYSTEM:

• In order to understand the role of the approximation proposed in Section 3.3, we compare the final performance and training time of our algorithm with and without the presented approximation in Table 2 (runtime measured on a single Titan Xp GPU). A sketch of the approximation is given below.
• For a small number of tasks (3 for scene understanding), training time is reduced by 40%. For the multi-label classification experiment (40 tasks), the presented approximation accelerates learning.
• Many hypothesis classes and loss functions have been proposed in the MTL literature.
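The following PyTorch sketch illustrates the kind of speed-up the Section 3.3 approximation describes: per-task gradients are taken with respect to the shared representation z rather than all shared encoder parameters, so only the small task-specific heads are back-propagated per task. `encoder`, `heads`, `losses`, and the data arguments are hypothetical stand-ins, not the paper's code.

```python
# A hedged sketch: per-task gradients w.r.t. the shared representation,
# avoiding a full backward pass through the shared encoder for every task.
import torch

def per_task_representation_grads(encoder, heads, losses, x, targets):
    z = encoder(x)                      # shared representation, computed once
    grads = []
    for head, loss_fn, y in zip(heads, losses, targets):
        z_t = z.detach().requires_grad_(True)  # cut the graph at the representation
        loss = loss_fn(head(z_t), y)           # backprop through the head only
        grads.append(torch.autograd.grad(loss, z_t)[0])
    return grads                        # feed these to the min-norm solver
```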

ADVANTAGE:

• Stein’s paradox was an early motivation for multi-task learning (MTL) (Caruana, 1997), a learning paradigm in which data from multiple tasks is used with the hope of obtaining superior performance over learning each task independently.
• Potential advantages of MTL go beyond the direct implications of Stein’s paradox, since even seemingly unrelated real-world tasks have strong dependencies due to the shared processes that give rise to the data.
