Online Performance Modeling and Prediction for Single-VM Applications in Multi-Tenant Clouds
ABSTRACT:
In this paper, we propose online learning methodologies for performance modeling and prediction of applications that run repetitively on multi-tenant clouds. Based on a few micro-benchmarks that probe the in-situ perceivable performance of the major components of the target VM, we propose both periodic model retraining and progressive modeling approaches to address the changes over time in the intensity of a VM's resource contention and their effects on the target application. Both regression and neural-network models are considered to adapt to recent changes in resource contention. With 17 representative applications from the PARSEC, NAS Parallel and CloudSuite benchmark suites, we have extensively evaluated the proposed online schemes for the prediction accuracy of the resulting models and the associated overheads on both private and public clouds. The evaluation results show that, even on the private cloud with high and rapidly changing resource contention, the average prediction errors of the considered models can be kept below 20%. With the neural-network progressive models, the average prediction errors can be reduced by about 7% with much lower overheads (up to 265X). For public clouds with less resource contention, the prediction errors are below 4% for the considered models with our proposed online schemes.
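As a rough illustration of the periodic-retraining idea above, the sketch below refits a simple linear model over a sliding window of recent runs, so the model tracks recent contention rather than stale history. The class name, window size, and the single probe-slowdown feature are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of periodic model retraining (illustrative names):
# a linear model y = a*x + b refit over a sliding window of recent runs.
from collections import deque

class PeriodicRetrainPredictor:
    def __init__(self, window=8):
        # recent (probe_slowdown, runtime) observations; old ones age out
        self.window = deque(maxlen=window)
        self.a, self.b = 0.0, 0.0

    def observe(self, probe_slowdown, runtime):
        self.window.append((probe_slowdown, runtime))
        self._retrain()

    def _retrain(self):
        # Closed-form least-squares fit over the current window.
        n = len(self.window)
        if n < 2:
            return
        xs = [x for x, _ in self.window]
        ys = [y for _, y in self.window]
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        self.a = sxy / sxx if sxx else 0.0
        self.b = my - self.a * mx

    def predict(self, probe_slowdown):
        return self.a * probe_slowdown + self.b
```

A progressive scheme would instead update the model incrementally on each new observation; the sliding window above is the simpler periodic-retraining variant.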
EXISTING SYSTEM:
• The success of cloud computing builds largely upon the on-demand supply of virtual machines (VMs), which provide the abstraction of a physical machine on shared resources.
• Unfortunately, despite recent advances in virtualization technology, there is still an unpredictable gap between the real and desired performance.
• Most closely related to our work are several existing resource configuration frameworks, such as DejaVu and JustRunIt.
• The key difference of Matrix lies in a comprehensive framework that predicts and maintains the desired performance of a new workload while minimizing the operating cost.
DISADVANTAGES:
• The objective of such a memory access pattern is to ensure that each data access is issued to off-core memory rather than served by the caches.
• In each benchmark suite, the selection of benchmarks reflects a combination of technical difficulties (e.g., compilation problems), the benchmarks' resource requirements (the need for multiple VM instances or more memory) and budget limitations for running the costly experiments on the AWS and GCE clouds.
• Although some recent studies have addressed this problem, there are still some limitations.
• The large prediction error with dedup was caused by an OS thread-scheduling issue rather than by the models mispredicting the impact of resource contention.
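The memory access pattern mentioned in the first point above is typically realized as a pointer chase over a buffer much larger than the last-level cache. The sketch below is a Python stand-in for what would normally be a C-level micro-benchmark (Python cannot control caching precisely); the buffer size and step count are made-up values.

```python
# Illustrative memory probe: a random pointer chase over a buffer sized
# well beyond a typical last-level cache, so most accesses go off-core.
# Each step depends on the previous load, which also defeats prefetching.
import random
import time

def memory_probe(n=1 << 20, steps=200_000, seed=0):
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)               # randomized successor indices
    idx = 0
    t0 = time.perf_counter()
    for _ in range(steps):
        idx = perm[idx]             # dependent load: serializes accesses
    return time.perf_counter() - t0 # elapsed time rises with contention
```

In the probing scheme the paper describes, a longer elapsed time for such a probe indicates heavier contention on the memory subsystem from co-located tenants.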
PROPOSED SYSTEM:
• In this work, we propose Matrix, a novel performance and resource management system that ensures that an application achieves its desired performance on a VM.
• In this paper, we propose the concept of Relative Performance (RP) as an "equivalence" metric that measures the ratio between the desired performance and that achieved when running in a VM.
• We propose a performance and resource management system, Matrix, that aims to deliver predictable VM performance with the best cost efficiency.
• To achieve this goal, Matrix utilizes clustering models with probability estimates to predict the performance of new workloads in a virtualized environment, chooses a suitable VM type, and dynamically adjusts the resource configuration of a VM on the fly.
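The two ideas above can be sketched minimally as follows. The RP convention (runtime ratio, where values above 1.0 mean the VM is slower than the desired baseline) and the softmax-over-distances probability estimate are plausible illustrations, not the paper's definitions; all centroid values and names are hypothetical.

```python
# Hedged sketch: Relative Performance as a runtime ratio, plus soft
# cluster-membership probabilities for a new workload's signature.
import math

def relative_performance(desired_runtime, vm_runtime):
    # RP > 1.0 means the VM runs slower than the desired baseline.
    return vm_runtime / desired_runtime

def cluster_probabilities(signature, centroids):
    # Softmax over negative Euclidean distances: closer centroids get
    # higher probability; probabilities sum to 1.
    dists = [math.dist(signature, c) for c in centroids]
    weights = [math.exp(-d) for d in dists]
    total = sum(weights)
    return [w / total for w in weights]
```

A system like Matrix could then predict a new workload's performance as a probability-weighted combination of the known performance profiles of each cluster.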
ADVANTAGES:
• Most existing studies on performance prediction for virtual machines (VMs) in multi-tenant clouds operate at the system level and generally require access to performance counters in hypervisors.
• In this work, we propose uPredict, a user-level, profiler-based performance prediction framework for single-VM applications in multi-tenant clouds.
• Such hardware resource sharing among multiple tenants causes resource contention, which in turn degrades the performance of applications running on clouds.
• Although these studies can predict an application's average performance on various VMs and/or different cloud services, they cannot predict the in-situ performance of an application while taking runtime resource contention into consideration.
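The user-level approach above avoids hypervisor counters by normalizing in-situ probe timings against contention-free baselines to form a contention signature, which then feeds the prediction models. The sketch below illustrates only that normalization step; the probe names and baseline timings are made-up values, not measurements from the paper.

```python
# Illustrative user-level probing step: convert raw probe timings into
# slowdown factors relative to contention-free baselines (hypothetical
# values, in seconds). No hypervisor counters are needed.
BASELINES = {"cpu": 0.50, "mem": 0.80, "disk": 1.20}

def contention_signature(probe_times):
    # A factor of 1.0 means no perceivable contention on that component;
    # 2.0 means the probe ran twice as slow as its baseline.
    return {k: probe_times[k] / BASELINES[k] for k in BASELINES}
```

Such a signature can serve as the feature vector for the regression or neural-network models described in the abstract.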