Dynamic Resource Provisioning for Container-based Virtualization Application using Hybrid Model Approach

Publisher

ASTU

Abstract

Container-based virtualization is a novel technology that cloud providers use to deliver cloud services to end-users. It offers many advantages for running applications: it is lightweight, quick to deploy, and resource-efficient. Cloud providers use cloud technology to offer almost unlimited computing resources, and virtualization-based containerization makes dynamic resource provisioning manageable. However, container-based cloud applications need advanced auto-scaling techniques that swiftly and automatically provision and de-provision cloud resources in response to dynamic workload fluctuations, without human intervention. To address this difficulty, this thesis presents a hybrid approach with a deep learning-based method for auto-scaling containers in response to dynamic workload changes at run-time. The proposed auto-scaler architecture follows the four steps of the monitor-analyze-plan-execute control loop. To establish the proper scaling actions during the analysis and planning phases, the monitor component continuously gathers several types of data (hypertext transfer protocol (HTTP) request statistics, central processing unit (CPU) usage, and memory consumption). A deep long short-term memory (LSTM) prediction model forecasts the future HTTP request workload and estimates the number of containers required to handle it in advance, preventing the delays caused by starting or terminating running containers. The proposed method improves resource provisioning and reduces costs from both the provider and consumer viewpoints. The experimental findings demonstrate that the hybrid model dynamically provisions resources to an application quickly and maintains higher resource utilization than either horizontal or vertical elasticity alone.
Furthermore, the workload predicted by the LSTM model helps handle the future workload with the fewest replicas and the least CPU utilization. The proposed deep LSTM framework improves CPU utilization values from 0.999997 to 0.999999, as well as elasticity speedup time values from 1.170 to 1.529.
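The planning step described in the abstract, converting a predicted HTTP request rate into a container count before the load actually arrives, can be sketched as follows. This is a minimal illustration only: the function name, per-container capacity, and replica bounds are hypothetical assumptions, not the thesis's actual implementation or thresholds.

```python
import math

def required_replicas(predicted_rps: float,
                      capacity_per_container: float,
                      min_replicas: int = 1,
                      max_replicas: int = 20) -> int:
    """Estimate the number of container replicas needed to serve a
    predicted request rate, clamped to configured scaling bounds.
    All parameters here are illustrative assumptions."""
    needed = math.ceil(predicted_rps / capacity_per_container)
    return max(min_replicas, min(needed, max_replicas))

# Plan ahead of the predicted workload so containers are already
# running when the requests arrive (hypothetical predicted values).
predicted_workload = [120, 480, 950]  # predicted requests/second
plan = [required_replicas(rps, capacity_per_container=100)
        for rps in predicted_workload]
print(plan)  # [2, 5, 10]
```

Scaling on the *predicted* rather than the observed workload is what avoids the container start-up delay the abstract mentions: replicas are provisioned one planning interval before they are needed.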
