Autonomic Resource Management for Fog Computing


ABSTRACT :

Fog computing extends cloud computing capabilities to the edge of the network and reduces high latency and network congestion. This paradigm enables portions of a transaction to be executed at a fog server and other portions at the cloud. Fog servers are generally not as robust as cloud servers; at peak loads, the data that cannot be processed by fog servers is processed by cloud servers. The data that needs to be processed by the cloud is sent over a WAN. Therefore, only a fraction of the total data needs to travel through the WAN, as compared with a pure cloud computing paradigm. Additionally, the fog/cloud computing paradigm reduces the cloud processing load when compared with the pure cloud computing model. This paper presents a multiclass closed-form analytic queuing network model that is used by an autonomic controller to dynamically change the fraction of processing between edge and cloud servers in order to maximize a utility function of response time and cost. Experiments show that the controller can maintain a high utility in the presence of wide variations in request arrival rates for various workloads.
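The controller described above can be illustrated with a much simpler sketch than the paper's multiclass closed queuing network: below, each tier is approximated as a single-class M/M/1 queue, cloud-bound requests incur a fixed WAN delay, and the controller grid-searches the fog fraction f that maximizes a utility trading off mean response time against cloud cost. All function names, parameters, and the utility form are illustrative assumptions, not the paper's actual model.

```python
def avg_response_time(frac_fog, lam, mu_fog, mu_cloud, wan_delay):
    """Mean response time under a simplified single-class model:
    each tier is an M/M/1 queue; cloud-bound requests also pay a
    fixed WAN delay. Returns None if either queue is unstable."""
    lam_fog = frac_fog * lam
    lam_cloud = (1.0 - frac_fog) * lam
    if lam_fog >= mu_fog or lam_cloud >= mu_cloud:
        return None  # utilization >= 1: unbounded queue
    r_fog = 1.0 / (mu_fog - lam_fog)
    r_cloud = wan_delay + 1.0 / (mu_cloud - lam_cloud)
    return frac_fog * r_fog + (1.0 - frac_fog) * r_cloud

def best_fraction(lam, mu_fog, mu_cloud, wan_delay,
                  cloud_cost=1.0, w_time=1.0, w_cost=0.2, steps=100):
    """Grid-search the fog fraction f maximizing a utility that
    penalizes both response time and the cost of cloud processing."""
    best_f, best_u = None, float("-inf")
    for i in range(steps + 1):
        f = i / steps
        r = avg_response_time(f, lam, mu_fog, mu_cloud, wan_delay)
        if r is None:
            continue  # skip infeasible splits
        u = -w_time * r - w_cost * cloud_cost * (1.0 - f)
        if u > best_u:
            best_f, best_u = f, u
    return best_f, best_u
```

An autonomic controller would re-run `best_fraction` periodically with the currently measured arrival rate, so the split between fog and cloud tracks load changes.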

EXISTING SYSTEM :

• This wide geographical distribution of resources allows MECs to complement existing large-scale cloud platforms, making it possible to perform computation and data processing both at centralized datacenters and at the network edge.
• MECs have emerged as distributed platforms that can complement existing cloud systems to overcome barriers to the success of MEC-native applications (e.g., IoT applications, autonomous vehicles, etc.).
• These studies showed that additional engineering work is needed to adapt existing cloud-native applications to MEC-like environments.
• We also introduced two workload prediction models for MECs that exploit the correlation between workload changes in neighboring EDCs.

DISADVANTAGE :

• The task distribution problem in Fog computing can be solved to a great extent if applications are placed considering the future processing commitments of the Fog nodes.
• This technique either maximizes or minimizes one particular objective function while placing applications in Fog environments.
• Although optimization provides the best mathematical solution to a problem, it takes more time to operate than prioritization.
• However, very few of them consider that the availability of renewable energy is subject to uncertainty and environmental context, or take the required measures to address the problem.

PROPOSED SYSTEM :

• This approach yielded highly accurate predictions, showing that the proposed methods could be used to develop an efficient proactive auto-scaler to provision and de-provision resources in MECs as required to meet end-users’ demands.
• In a related study, the authors proposed a novel mobility-aware online service placement framework to achieve a desirable balance between user latency and migration cost.
• We also developed a network communication profiling tool to identify the aspects of these applications that reduce the benefits they derive from deployment on MECs, and proposed design improvements that would allow such applications to better exploit MECs’ capabilities.
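The proactive auto-scaler mentioned above can be sketched with a deliberately simple workload predictor: exponential smoothing gives a one-step-ahead forecast, and the provisioning step sizes the server pool for that forecast plus headroom. The actual prediction models exploit correlations across neighboring EDCs; this sketch, with hypothetical function names and parameters, only shows the predict-then-provision loop.

```python
import math

def ewma_forecast(history, alpha=0.5):
    """One-step-ahead workload forecast via exponential smoothing
    (a stand-in for the paper's correlation-based predictors)."""
    if not history:
        return 0.0
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def servers_needed(predicted_load, capacity_per_server, headroom=1.2):
    """Provision enough servers for the forecast plus safety headroom;
    always keep at least one server running."""
    return max(1, math.ceil(predicted_load * headroom / capacity_per_server))
```

An auto-scaler would call `ewma_forecast` on a sliding window of recent request rates each control interval, then add or remove servers to match `servers_needed`.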

ADVANTAGE :

• In this technique, the current state of the applications and resources is used to predict future performance trends.
• Moreover, the strategy needs to monitor the performance of the applications in a consistent manner.
• The dispatch order of the inputs in such a workload can be shuffled according to the availability of resources to ensure the desired performance of the application.
• They also monitor the status and performance of the resources and conduct application maintenance operations, including service backup and replication.
• Therefore, efficient management of applications is necessary to fully exploit the capabilities of Fog nodes.
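The reordering of the dispatch queue by resource availability can be sketched as a greedy rule: tasks that fit the currently free fog capacity (smallest demand first) are dispatched to the fog node, and the remainder are deferred to the cloud. The task representation and function name below are illustrative assumptions.

```python
def dispatch(tasks, fog_free_cap):
    """Greedy dispatch-order shuffle: pack tasks onto the fog node
    in ascending order of resource demand while capacity remains;
    route everything that does not fit to the cloud."""
    fog, cloud = [], []
    for task in sorted(tasks, key=lambda t: t["demand"]):
        if task["demand"] <= fog_free_cap:
            fog.append(task)
            fog_free_cap -= task["demand"]
        else:
            cloud.append(task)
    return fog, cloud
```

Sorting by demand lets more (small) tasks benefit from low-latency fog processing; a real scheduler would also weigh deadlines and data locality.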
