Architecture-Based Dynamic Reconfiguration of Composite Components on Model-Based Reinforcement Learning

Publisher

ASTU

Abstract

Software systems are subject to continuous change and are affected by dynamic environments: changes in user requirements, shifting priorities among those requirements, changes in execution resources, and performance degradation. Systems should therefore reconfigure their architectural elements dynamically at runtime, both to mitigate unpredictable circumstances that are unknown at design time and to meet quality goals. Architecture-based self-adaptation is a well-known approach to tackling runtime uncertainty: a monitor-analyze-plan-execute and knowledge (MAPE-K) feedback loop monitors the environment, analyzes system constraints, plans a new configuration, and performs the adaptation on the target system. A decision-making algorithm plays a vital role in choosing the best configuration. Existing studies use utility theory, which considers the utility of each quality, the impact of qualities on adaptation actions, and the prioritization of one quality over another. However, utility theory yields static adaptation behavior that cannot alleviate the uncertainties that exist at runtime. Consequently, this work proposes a new framework, Dynamic Software Reconfiguration Architecture in Composite Components, which adds new features to the existing Rainbow self-adaptive framework, which uses utility theory for decision making. The proposed framework combines the utility-theory principle with a model-based reinforcement learning strategy to answer how systems adapt themselves to new behavior. To show the effectiveness of the proposed framework, an information-delivery self-adaptive system is demonstrated that can modify its runtime behavior according to changes in its operating environment. Finally, the study presents results for the proposed solution by performing reconfiguration in different ways.
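The utility-theory planning step described above can be sketched as a weighted sum of per-quality utilities, with weights encoding quality prioritization. This is a minimal illustrative sketch, not the thesis's implementation; the quality dimensions, weights, and configuration names are assumptions chosen for the example.

```python
# Hypothetical sketch of utility-theory configuration selection in the
# MAPE-K planning phase. Quality names, weights, and configurations are
# illustrative assumptions, not taken from the thesis.

def overall_utility(qualities, weights):
    """Weighted sum of per-quality utilities; weights encode prioritization."""
    return sum(weights[q] * u for q, u in qualities.items())

def choose_configuration(configs, weights):
    """Pick the candidate configuration with the highest overall utility."""
    return max(configs, key=lambda c: overall_utility(c["qualities"], weights))

# Candidate configurations with per-quality utilities in [0, 1].
configs = [
    {"name": "low-fidelity", "qualities": {"latency": 0.9, "content": 0.4}},
    {"name": "full-content", "qualities": {"latency": 0.3, "content": 0.95}},
]
weights = {"latency": 0.7, "content": 0.3}  # latency prioritized over content

best = choose_configuration(configs, weights)
```

Because the weights are fixed at design time, this scheme always maps the same monitored state to the same choice, which is the static behavior the abstract argues reinforcement learning can overcome.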
As the results show, combining model-based reinforcement learning with utility theory yields better decision-making capability for the adaptation process. In addition, the framework was evaluated against Rainbow on the basis of self-adaptive fitness value and exhibited better adaptation fitness.
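A model-based reinforcement learning strategy of the kind the abstract describes can be sketched as follows: the adaptation manager estimates a transition and reward model from observed reconfigurations, then plans over the learned model (here with tabular value iteration). The states, actions, and rewards below are hypothetical illustrations, not the thesis's actual system model.

```python
from collections import defaultdict

# Hypothetical sketch of model-based RL for adaptation planning: learn a
# transition/reward model from observed (state, action, reward, next-state)
# tuples, then plan with value iteration over the learned model.
class ModelBasedPlanner:
    def __init__(self, states, actions, gamma=0.9):
        self.states, self.actions, self.gamma = states, actions, gamma
        self.counts = defaultdict(lambda: defaultdict(int))  # (s, a) -> s' -> n
        self.reward_sum = defaultdict(float)                 # (s, a) -> total reward

    def observe(self, s, a, r, s2):
        """Record one observed transition to refine the learned model."""
        self.counts[(s, a)][s2] += 1
        self.reward_sum[(s, a)] += r

    def _q(self, s, a, V):
        """One-step lookahead value of action a in state s under the model."""
        n = sum(self.counts[(s, a)].values())
        if n == 0:
            return float("-inf")  # never tried; no model estimate yet
        r = self.reward_sum[(s, a)] / n
        return r + self.gamma * sum(c / n * V[s2]
                                    for s2, c in self.counts[(s, a)].items())

    def plan(self, iters=50):
        """Value iteration over the learned transition/reward estimates."""
        V = {s: 0.0 for s in self.states}
        for _ in range(iters):
            for s in self.states:
                qs = [self._q(s, a, V) for a in self.actions
                      if sum(self.counts[(s, a)].values()) > 0]
                if qs:
                    V[s] = max(qs)
        return V

    def best_action(self, s, V):
        return max(self.actions, key=lambda a: self._q(s, a, V))

# Illustrative usage: a server that is "overloaded" can shed load or do nothing.
planner = ModelBasedPlanner(["overloaded", "nominal"], ["shed_load", "ignore"])
planner.observe("overloaded", "shed_load", 1.0, "nominal")
planner.observe("overloaded", "ignore", -1.0, "overloaded")
planner.observe("nominal", "ignore", 0.5, "nominal")
V = planner.plan()
```

Unlike a fixed utility function, the model here improves as more transitions are observed, so the planner's choices can change at runtime as the environment changes, which is the dynamic behavior the proposed framework targets.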
