Multi-Agent Deep Reinforcement Learning-Empowered Channel Allocation in Vehicular Networks

Abstract: With the rapid development of vehicular networks, vehicle-to-everything (V2X) communications generate a huge number of computational tasks, which strains the scarce network resources. Cloud servers can alleviate the lack of computing capability of vehicular user equipment (VUE), but the limited resources, the dynamic vehicular environment, and the long distances between cloud servers and VUE introduce potential issues such as extra communication delay and energy consumption. Fortunately, mobile edge computing (MEC), a promising computing paradigm, can ameliorate these problems by enhancing the computing capability of VUE through the allocation of computational resources to VUE. In this paper, we propose a joint optimization algorithm based on a deep reinforcement learning algorithm, the double deep Q network (double DQN), to minimize a cost composed of energy consumption, computation latency, and communication latency under a proper policy. The proposed algorithm is well suited to the dynamic, low-latency vehicular scenarios found in the real world. Compared with other reinforcement learning algorithms, our algorithm improves performance in terms of convergence, the defined cost, and speed by around 30%, 15%, and 17%, respectively.
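To make the update rule concrete, the following is a minimal sketch (not the paper's code) of the double DQN target computation: the online network selects the best next action while the target network evaluates it, which mitigates the Q-value overestimation of vanilla DQN. The function name, the discount factor, and the reward shaping are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the double DQN bootstrapped target (illustrative names,
# not from the paper): the online network SELECTS the next action, while the
# target network EVALUATES it, reducing Q-value overestimation.
def double_dqn_target(reward, next_q_online, next_q_target,
                      gamma=0.99, done=False):
    """Compute the target for one transition.

    reward        -- immediate reward (e.g., negative weighted sum of
                     energy consumption and latency, per the defined cost)
    next_q_online -- Q(s', .) from the online network (1-D array)
    next_q_target -- Q(s', .) from the target network (1-D array)
    """
    if done:
        return reward
    best_action = int(np.argmax(next_q_online))          # select with online net
    return reward + gamma * next_q_target[best_action]   # evaluate with target net
```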
EXISTING SYSTEM:
• The remainder of this paper is organized as follows. In Section 2, we review the related work and explain the motivations of this paper. A detailed introduction to the system model and problem formulation is given in Section 3.
• In Section 4, we introduce brief background on deep reinforcement learning and present our proposed algorithm.
• The parameters, results, and analysis of the simulations are presented in Section 5. In the final section, Section 6, we conclude the paper.
DISADVANTAGE:
• To solve the dynamic problem caused by the high speed of the vehicular environment, we propose a joint optimization algorithm based on double DQN, which comprehensively considers the offloading strategy and the allocation of computational and communication resources. By building neural networks to approximate the reward value of the whole system, our algorithm solves a joint optimization problem that traditional methods find hard to solve; a sketch of one such joint action encoding follows.
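As an illustration of how a single discrete action can bundle the offloading decision with the computational- and communication-resource choices, here is a hypothetical Python encoding; the server set, CPU shares, and power levels are assumptions, not values from the paper.

```python
import itertools

# Hypothetical encoding of the joint action space: each discrete action
# bundles an offloading target, a computational-resource share, and a
# transmit-power level, so one Q-network output index covers the whole
# joint decision. All set sizes and values below are illustrative.
MEC_SERVERS  = [0, 1]             # which MEC server; -1 means local execution
CPU_SHARES   = [0.25, 0.5, 1.0]   # fraction of server CPU allocated to the task
POWER_LEVELS = [5, 10, 23]        # transmit power in dBm

ACTIONS = list(itertools.product([-1] + MEC_SERVERS, CPU_SHARES, POWER_LEVELS))

def decode_action(index):
    """Map a Q-network output index back to (server, cpu_share, power_dbm)."""
    return ACTIONS[index]

print(len(ACTIONS), decode_action(0))  # 27 joint actions; e.g. (-1, 0.25, 5)
```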
PROPOSED SYSTEM:
• Under the instruction of the MEC servers, each VUE offloads its task to the assigned MEC server with the corresponding transmission power. As displayed in Figure 4, a single gray flash symbol indicates interference produced by a VUE reusing the identical radio resources as cellular users.
• Double gray flash symbols represent the coexistence of that interference with the interference produced by a user in the adjacent cell who reuses the identical communication resources.
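The interference picture above can be captured by a standard SINR expression. The sketch below, with illustrative symbol names and a Shannon-capacity rate, shows how the co-channel cellular interferer (single flash) and an optional adjacent-cell interferer (double flash) enter the denominator; it is an assumption-laden sketch, not the paper's exact channel model.

```python
import math

# Hedged sketch of the SINR on a reused channel: interference comes from the
# co-channel cellular user and, possibly, an adjacent-cell user on the same
# resource. All symbols and the noise power are illustrative assumptions.
def sinr(p_signal, gain_signal, p_cellular, gain_cellular,
         p_adjacent=0.0, gain_adjacent=0.0, noise_power=1e-13):
    """Desired received power over interference plus noise (linear scale)."""
    interference = p_cellular * gain_cellular + p_adjacent * gain_adjacent
    return (p_signal * gain_signal) / (interference + noise_power)

def rate(bandwidth_hz, sinr_linear):
    """Achievable rate on the channel via Shannon capacity, in bits per second."""
    return bandwidth_hz * math.log2(1.0 + sinr_linear)
```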
ADVANTAGE:
• A considerably large number of researchers and papers have paid attention to this field.
• Two main resources, computation offloading and communication resources, need to be considered for optimization.
• For readability and clarity, we classify the referenced papers into two categories. In Table 2, we offer a summary comparison of the references based on features such as year, focus on computation offloading and communication resources, and the methods used.
