Cooperative Service Placement and Scheduling in Edge Clouds: A Deadline-Driven Approach


ABSTRACT:

Mobile edge computing enables resource-limited edge clouds (ECs) in federation to help each other with resource-hungry yet delay-sensitive service requests. Contrary to common practice, we acknowledge that mobile services are heterogeneous and that the limited storage resources of ECs allow only a subset of services to be placed at any one time. This paper presents a jointly optimized cooperative placement and scheduling framework, named JCPS, that pursues social cost minimization over time while satisfying diverse user demands. Our main contribution is a novel perspective on cost reduction that exploits the spatial-temporal diversities in workload and resource cost among federated ECs. To build a practical edge cloud federation system, we must address two major challenges: user deadline preferences and the strategic behaviors of ECs. We first formulate and solve the problem of spatially strategic optimization without deadline awareness, which is proved to be NP-hard. By leveraging user deadline tolerance, we then develop a Lyapunov-based deadline-driven joint cooperative mechanism for the scenario where the workload and resource information of ECs is known, targeting one-shot global cost minimization. The service priority imposed by deadline urgency drives time-critical placement and scheduling, which, combined with cooperative control, enables workloads to be migrated across different time slots and ECs.
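To make the deadline-driven idea concrete, the sketch below illustrates, under heavily simplified assumptions, how deadline urgency can prioritize requests and how each request is then routed to the cheapest federated EC that already hosts the required service. The names Request, EdgeCloud, and schedule_by_deadline are illustrative only; this is not the actual JCPS mechanism, which additionally uses Lyapunov optimization and cooperative control.

    # A minimal sketch, assuming independent requests, a single resource
    # dimension, and ECs with known per-unit costs. All names are hypothetical.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Request:
        service: str          # required service type
        workload: float       # CPU units needed
        deadline: int         # latest completion time slot

    @dataclass
    class EdgeCloud:
        name: str
        capacity: float                      # CPU units per slot
        unit_cost: float                     # resource cost per CPU unit
        placed_services: set = field(default_factory=set)
        load: float = 0.0

    def schedule_by_deadline(requests: List[Request], ecs: List[EdgeCloud],
                             now: int) -> List[tuple]:
        """Assign each request to the cheapest feasible EC, most urgent first."""
        assignments = []
        # Deadline urgency imposes service priority: earliest deadline first.
        for req in sorted(requests, key=lambda r: r.deadline):
            candidates = [
                ec for ec in ecs
                if req.service in ec.placed_services        # service is placed
                and ec.load + req.workload <= ec.capacity   # capacity remains
                and req.deadline >= now                     # still feasible
            ]
            if not candidates:
                assignments.append((req, None))             # defer or drop
                continue
            best = min(candidates, key=lambda ec: ec.unit_cost * req.workload)
            best.load += req.workload
            assignments.append((req, best.name))
        return assignments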

EXISTING SYSTEM:

• Some researchers have contributed to addressing the problem of data placement for scientific workflows.
• On the one hand, the public cloud is good at providing high reliability and large capacity with its resource-sharing feature.
• However, most traditional solutions for data placement focus on deterministic cloud environments, which leads to excessive data transmission time for scientific workflows.
• The DPSO-FGA can rationally place scientific workflow data while meeting the requirements of data privacy and the capacity limitations of data centers.
• With the widespread application of Big Data technologies, the amount of data generated by modern network environments is growing rapidly.

DISADVANTAGE:

• We first formulate the task-scheduling problem in such a cloud-fog environment as a multi-dimensional 0-1 knapsack problem, which is NP-hard, and then propose an efficient algorithmic solution based on an ant colony optimization (ACO) heuristic (a simplified sketch follows this list).
• The optimization problem formulation aims to minimize the time-average energy consumption for the task executions of all users.
• To solve this problem, this paper proposes a scheduling algorithm based on ACO, which has been widely used to solve complex combinatorial optimization problems.
• The above papers on task scheduling for fog computing do not consider job deadlines, which are becoming increasingly important and directly impact the Quality of Service (QoS).
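As a point of reference for the knapsack view mentioned in the first item, the following minimal Python sketch casts task selection as a multi-dimensional 0-1 knapsack and solves it with a simple greedy value-density heuristic. It is an assumption-laden illustration, not the ACO algorithm cited above; the function name and example numbers are hypothetical.

    # Illustrative only: tasks have a value and per-dimension resource demands
    # (e.g. CPU, memory); we select tasks without exceeding any capacity.
    from typing import List, Tuple

    def greedy_mdkp(values: List[float],
                    demands: List[Tuple[float, ...]],
                    capacities: Tuple[float, ...]) -> List[int]:
        """Pick a subset of tasks maximizing value without exceeding any
        resource dimension, using a value-per-aggregate-demand ranking."""
        remaining = list(capacities)
        order = sorted(range(len(values)),
                       key=lambda i: values[i] / (sum(demands[i]) + 1e-9),
                       reverse=True)
        chosen = []
        for i in order:
            if all(d <= r for d, r in zip(demands[i], remaining)):
                chosen.append(i)
                remaining = [r - d for d, r in zip(demands[i], remaining)]
        return sorted(chosen)

    # Example: 3 tasks, 2 resource dimensions (CPU, memory).
    # greedy_mdkp([10, 7, 4], [(4, 2), (3, 3), (2, 1)], (6, 4)) -> [0, 2]

An exact formulation would solve the same 0-1 selection with an integer program or, as in the cited work, explore the solution space with an ACO heuristic instead of this greedy ranking.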

PROPOSED SYSTEM:

• One proposed approach introduces a computing migration solution for next-generation networks.
• Another work proposes a resource allocation method consisting of a fast heuristic-based incremental allocation method that allocates resources dynamically based on operational cost.
• The scheme proposed by Pham tries to optimize gateway placement and multi-hop routing in the NFV-enabled IoT (NIoT), as well as service placement in the MEC and cloud layers.
• Another approach studies workflow scheduling in MEC by formulating the scheduling as an integer program, aiming to handle different tasks while reducing the makespan (see the sketch after this list).
• A further scheduling approach provides location-aware and proximity-aware multiple-workflow scheduling on MEC servers.
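The sketch below, assuming independent tasks and identical MEC servers, illustrates the makespan-oriented scheduling idea behind the integer-programming formulation mentioned above, using a greedy longest-processing-time rule rather than an exact solver. The function name and example values are hypothetical.

    # A minimal sketch: always place the next (longest) task on the currently
    # least-loaded server; the resulting maximum finish time is the makespan.
    import heapq
    from typing import List

    def list_schedule(task_times: List[float], num_servers: int) -> float:
        """Return the makespan of a greedy longest-processing-time assignment;
        an exact integer program would minimize this quantity optimally."""
        servers = [0.0] * num_servers            # current finish time per server
        heapq.heapify(servers)
        for t in sorted(task_times, reverse=True):   # LPT order
            earliest = heapq.heappop(servers)        # least-loaded server
            heapq.heappush(servers, earliest + t)
        return max(servers)

    # Example: 5 tasks on 2 MEC servers.
    # list_schedule([4.0, 3.5, 3.0, 2.0, 1.0], 2) -> 7.0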

ADVANTAGE:

• Extensive experimental results show that our proposed optimization and solution significantly improve system performance compared with existing heuristics.
• To evaluate the performance of the proposed algorithm, a series of experiments is carried out while varying the main parameters.
• Extensive simulations are conducted to evaluate the performance of the proposed algorithm.
• Moreover, placing computing resources at the edge of the network allows fog nodes to efficiently process latency-sensitive tasks in a timely manner, while large-scale and latency-tolerant tasks can still be efficiently processed by the cloud, which is equipped with more computing power.
