Resource Management for Latency-Sensitive IoT Applications with Satisfiability
ABSTRACT :
Satisfying the software requirements of emerging service-based Internet of Things (IoT) applications has become challenging for cloud-centric architectures, as applications demand fast response times and availability of computational resources closer to end-users. As such, meeting application demands must occur at runtime, facing uncertainty and in a decentralized manner, something that must be reflected in system deployment. We propose a decentralized resource management technique and accompanying technical framework for the deployment of service-based IoT applications on resource-constrained devices. Faithful to services engineering, applications we consider are composed of interdependent tasks, which in the IoT setting may be concretized as e.g., containerized microservices or serverless functions. A deployment for an arbitrary application is found at runtime through satisfiability; the mapping produced is compliant with tasks' individual resource requirements and latency constraints by construction. Our approach ensures seamless deployment at runtime, assuming no design-time knowledge of device resources or the current network topology. We evaluate the applicability and realizability of our technique over resource-constrained edge devices and particularly in the absence of cloud resources.
EXISTING SYSTEM :
• Resource management concerns controlling resources' utilization in a heterogeneous IoT network, where edge devices have limited computational resources and high uncertainty exists due to the dynamic nature of the IoT.
• The prevalence of the Internet of Things (IoT) in contemporary settings has induced systems composed of heterogeneous devices, computing infrastructures, and cloud services.
• Using SMT, we ensure that if a satisfiable solution exists, it always satisfies the given latency and privacy constraints.
• We note the high computational demands of our technique, but point out that for relevant problem sizes it is highly feasible, and it guarantees that if a mapping exists, it is found.
DISADVANTAGE :
• Our framework encodes the resource allocation problem within Satisfiability Modulo Theories (SMT), where the placement of tasks on edge nodes generates constraints in first-order logic, while latency SLAs are encoded with integer linear arithmetic.
• Our obtained results demonstrate its efficiency for relevant problems, particularly on resource-constrained single-board edge devices.
• The energy cost amounts to the execution of an SMT solver against the formula – later, we demonstrate that it is quite feasible to do so on single-board computers for relevant problems.
• Resource management techniques have been sought by the community, focusing on different aspects of the problem.
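The constraint structure described above (task placement subject to per-node resource capacities and pairwise latency SLAs) can be sketched in plain Python. This is an illustrative stand-in, not the paper's implementation: the framework hands such constraints to an SMT solver, whereas this sketch checks them by exhaustive search. All data values (node names, capacities, demands, latencies) are hypothetical examples.

```python
from itertools import product

# Hypothetical example data (not from the paper): per-node CPU capacity,
# per-task CPU demand, and pairwise node-to-node latency in milliseconds.
node_capacity = {"n0": 4, "n1": 2}          # CPU units available per edge node
task_demand = {"t0": 2, "t1": 2, "t2": 1}   # CPU units required per task
latency = {("n0", "n0"): 0, ("n0", "n1"): 8,
           ("n1", "n0"): 8, ("n1", "n1"): 0}
# Dependent task pairs with a latency SLA (max allowed ms between their hosts).
sla = {("t0", "t1"): 5, ("t1", "t2"): 10}

def satisfies(mapping):
    """Check the two constraint families the SMT encoding expresses:
    (1) summed task demands fit each node's capacity, and
    (2) every dependent pair's inter-node latency meets its SLA."""
    for n, cap in node_capacity.items():
        if sum(task_demand[t] for t, host in mapping.items() if host == n) > cap:
            return False
    return all(latency[(mapping[a], mapping[b])] <= bound
               for (a, b), bound in sla.items())

def find_deployment():
    """Exhaustive satisfiability search over all task-to-node mappings;
    an SMT solver explores the same space symbolically and far faster."""
    tasks, nodes = list(task_demand), list(node_capacity)
    for hosts in product(nodes, repeat=len(tasks)):
        mapping = dict(zip(tasks, hosts))
        if satisfies(mapping):
            return mapping
    return None  # unsatisfiable: no compliant deployment exists
```

Because any returned mapping has passed both constraint checks, it is compliant with resource requirements and latency bounds by construction, mirroring the guarantee the SMT formulation provides.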
PROPOSED SYSTEM :
• Our proposed technical framework utilizes satisfiability modulo theories (SMT), a generalization of the Boolean satisfiability (SAT) problem.
• In this paper, we propose a novel decentralized resource management technique and accompanying technical framework for the deployment of latency-sensitive IoT applications at the edge of the network.
• In this paper, we propose a novel technique combining resource allocation and resource sharing for task allocation.
• In this paper, we propose a novel decentralized resource management framework for deploying IoT applications at the edge of the network based on a set of objectives.
ADVANTAGE :
• Utilizing distributed computing resources within an IoT system is challenging, as applications often have stringent performance and deployment requirements.
• We evaluate the applicability and performance of our technique, particularly in the absence of cloud resources.
• To evaluate our technique and accompanying technical framework, we consider two evaluation goals: applicability and performance.
• To quantitatively evaluate our task allocation framework, we consider as a performance metric the execution time required to obtain a distribution of tasks to the collaborators.
• If the focus is on other objectives that scale linearly with the fan-out degree of a service (such as bottleneck avoidance), overall performance will improve significantly.
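The execution-time metric above can be illustrated with a small, self-contained timing harness. This is a hypothetical sketch, not the paper's benchmark: it times an exhaustive enumeration of candidate task-to-node mappings (the search space a solver would prune), showing why the number of candidates, and hence allocation time, grows exponentially with the number of tasks.

```python
import time
from itertools import product

def allocation_time(n_tasks, n_nodes):
    """Measure wall-clock time to sweep all task-to-node mappings,
    mirroring the execution-time metric used to evaluate allocation.
    (Illustrative stand-in: a real SMT solver prunes this space
    rather than enumerating it.)"""
    start = time.perf_counter()
    candidates = 0
    for _ in product(range(n_nodes), repeat=n_tasks):
        candidates += 1  # stand-in for constraint checking per mapping
    return time.perf_counter() - start, candidates

# The candidate count is n_nodes ** n_tasks, e.g. 3 ** 6 = 729 mappings.
elapsed, candidates = allocation_time(6, 3)
```

Reporting `elapsed` for growing `n_tasks` reproduces the shape of an execution-time evaluation on constrained hardware.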