AdaptiveFog: A Modelling and Optimization Framework for Fog Computing in Intelligent Transportation Systems

Abstract: Fog computing has been advocated as an enabling technology for computationally intensive services in smart connected vehicles. Most existing works focus on analyzing the queueing and workload-processing latencies associated with fog computing, ignoring the fact that the wireless access latency can sometimes dominate the overall latency. This motivates the work in this paper, where we report on a five-month measurement study of the wireless access latency between connected vehicles and a fog/cloud computing system supported by commercially available LTE networks. We propose AdaptiveFog, a novel framework for autonomous and dynamic switching between different LTE networks that implement a fog/cloud infrastructure. AdaptiveFog's main objective is to maximize the service confidence level, defined as the probability that the latency of a given service type stays below a threshold. To quantify the performance gap between different LTE networks, we introduce a novel statistical distance metric, the weighted Kantorovich-Rubinstein (K-R) distance. Two scenarios, based on finite- and infinite-horizon optimization of short- and long-term confidence, are investigated. For each scenario, a simple threshold policy based on the weighted K-R distance is proposed and proved to maximize the latency confidence for smart vehicles. Extensive analysis and simulations are performed on our latency measurements. Our results show that AdaptiveFog achieves around a 30% and 50% improvement in the confidence levels of fog and cloud latency, respectively.
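The service confidence level described in the abstract can be estimated directly from measured RTT samples as the empirical probability that latency falls below the service's threshold. A minimal sketch (the function name and the sample values are illustrative, not taken from the paper's measurement traces):

```python
import numpy as np

def confidence_level(latencies_ms, threshold_ms):
    """Empirical probability that the service latency is below a threshold.

    latencies_ms: measured RTT samples in milliseconds.
    threshold_ms: latency bound for the given service type.
    """
    latencies = np.asarray(latencies_ms, dtype=float)
    return float(np.mean(latencies < threshold_ms))

# Synthetic RTT samples (ms) for illustration only
rtts = [18, 25, 31, 22, 47, 19, 28, 35, 21, 26]
print(confidence_level(rtts, 30))  # → 0.7 (7 of 10 samples under 30 ms)
```

In the paper's setting, a vehicle would maintain one such empirical estimate per MNO network and per server type (fog or cloud), updated as new probes arrive.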
EXISTING SYSTEM:
• Most existing works focus on analyzing and optimizing the queueing and workload-processing latencies, ignoring the fact that the access latency between vehicles and fog/cloud servers can sometimes dominate the end-to-end service latency.
• Most existing works focus on developing new methods and architectures to improve the utilization of fog resources at reduced cost.
• Most existing works assume that simply deploying fog servers at the eNB (LTE base station) site achieves a negligible RTT between the UE and the fog server.
• Existing works, as well as our measurements, confirm that a vehicle's future location and speed depend mainly on its current location and speed.
DISADVANTAGE:
• We formulate the MNO selection and server adaptation problem as a Markov decision process.
• The measurements are used to evaluate the impact of handover, driving speed, MNO network, fog/cloud server, and location on the service latency.
• To investigate the impact of mobility on service latency in a practical system, we analyze the latency traces at different driving speeds.
• The forecasting window used in the optimal decision-making process also impacts the UE's policy for switching between different MNOs.
• We first evaluate the impact of applying AdaptiveFog on the PDF of the RTT for the driving scenario.
PROPOSED SYSTEM:
• An effective heuristic method is proposed to deploy fog servers based on knowledge of the road traffic within each deployment area.
• A novel networking and server adaptation framework, AdaptiveFog, is proposed that lets a vehicle autonomously and dynamically switch, while on the move, between different LTE (MNO) networks and the fog or cloud servers they host.
ADVANTAGE:
• Based on our measurements, we observe that neither MNO consistently offers better latency performance than the other.
• A weighted Kantorovich-Rubinstein (K-R) metric is introduced to quantify the difference between the latency confidence levels of different MNO networks, taking into account the heterogeneous demands and priorities of different services.
• We use the term cloud server to denote the high-performance server installed at the CDC, which provides on-demand computational services for UEs.
• There have been quite a few studies on the performance of vehicular networks supported by a wireless infrastructure.
