12 research outputs found
A Deep Recurrent Q Network Towards Self-adapting Distributed Microservices Architecture (in press)
One desired aspect of microservices architecture is the ability to self-adapt its own architecture and behaviour in response to changes in the operational environment. To achieve the desired high levels of self-adaptability, this research implements a distributed microservices architecture model, as informed by the MAPE-K model. The proposed architecture employs multiple adaptation agents supported by a centralised controller that can observe the environment and execute a suitable adaptation action. The adaptation planning is managed by a deep recurrent Q-network (DRQN). It is argued that such integration between DRQN and MDP agents in a MAPE-K model offers a distributed microservice architecture with self-adaptability and high levels of availability and scalability. Integrating DRQN into the adaptation process improves the effectiveness of the adaptation and reduces adaptation risks, including resource over-provisioning and thrashing. The performance of DRQN is evaluated against deep Q-learning and policy gradient algorithms, including: i) a deep Q-network (DQN), ii) a dueling deep Q-network (DDQN), iii) a policy gradient neural network (PGNN), and iv) deep deterministic policy gradient (DDPG). The DRQN implementation in this paper outperforms the above-mentioned algorithms, achieving higher total reward, shorter adaptation times, lower error rates, and faster convergence and training times. We strongly believe that DRQN is more suitable for driving the adaptation in distributed service-oriented architecture and offers better performance than other dynamic decision-making algorithms.
A deep recurrent Q network towards self-adapting distributed microservice architecture
One desired aspect of microservice architecture is the ability to self-adapt its own architecture and behavior in response to changes in the operational environment. To achieve the desired high levels of self-adaptability, this research implements a distributed microservice architecture model running on a swarm cluster, as informed by the Monitor, Analyze, Plan, and Execute over a shared Knowledge (MAPE-K) model. The proposed architecture employs multiple adaptation agents supported by a centralized controller, which can observe the environment and execute a suitable adaptation action. The adaptation planning is managed by a deep recurrent Q-learning network (DRQN). It is argued that such integration between DRQN and Markov decision process (MDP) agents in a MAPE-K model offers distributed microservice architecture with self-adaptability and high levels of availability and scalability. Integrating DRQN into the adaptation process improves the effectiveness of the adaptation and reduces adaptation risks, including resource overprovisioning and thrashing. The performance of DRQN is evaluated against deep Q-learning and policy gradient algorithms, including (1) a deep Q-learning network (DQN), (2) a dueling DQN (DDQN), (3) a policy gradient neural network, and (4) deep deterministic policy gradient. The DRQN implementation in this paper outperforms the aforementioned algorithms, achieving higher total reward, shorter adaptation times, lower error rates, and faster convergence and training times. We strongly believe that DRQN is more suitable for driving the adaptation in distributed service-oriented architecture and offers better performance than other dynamic decision-making algorithms.
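To make the planning loop concrete, here is a minimal Python sketch of a MAPE-K-style adaptation loop. Everything in it is invented for illustration: the ClusterStub interface (observe_metrics, scale_out, scale_in, no_op), the reward shape, and the hyperparameters. The paper's LSTM-based DRQN is simplified to tabular Q-learning over a short window of past observations, with the window standing in for the recurrent state, so the sketch stays dependency-free.

    import random
    from collections import defaultdict, deque

    # Plan step of MAPE-K: Q-learning over a short window of past observations.
    # The window of discretised states stands in for the DRQN's recurrent state;
    # the paper's LSTM-based network is replaced by a table to stay dependency-free.
    ACTIONS = ["scale_out", "scale_in", "no_op"]
    ALPHA, GAMMA, EPSILON, WINDOW = 0.1, 0.9, 0.1, 3

    class ClusterStub:
        """Hypothetical stand-in for a swarm cluster exposing metrics and actions."""
        def __init__(self):
            self.replicas = 2
        def observe_metrics(self):   # Monitor: latency grows with load per replica
            load = random.uniform(50.0, 150.0)
            return {"replicas": self.replicas,
                    "p95_latency_ms": 2.0 * load / self.replicas}
        def scale_out(self): self.replicas = min(self.replicas + 1, 10)
        def scale_in(self):  self.replicas = max(self.replicas - 1, 1)
        def no_op(self):     pass

    def discretise(m):               # Analyse: bucket raw metrics into a coarse state
        return (m["replicas"], int(m["p95_latency_ms"] // 50))

    def reward(m):                   # penalise SLA violations and over-provisioning
        return -m["p95_latency_ms"] / 100.0 - 0.3 * m["replicas"]

    q = defaultdict(float)           # Q[(history, action)] -> estimated value
    cluster = ClusterStub()
    history = deque([discretise(cluster.observe_metrics())] * WINDOW, maxlen=WINDOW)

    for step in range(1000):
        state = tuple(history)
        action = (random.choice(ACTIONS) if random.random() < EPSILON
                  else max(ACTIONS, key=lambda a: q[(state, a)]))
        getattr(cluster, action)()   # Execute: apply the chosen adaptation
        m = cluster.observe_metrics()
        history.append(discretise(m))
        nxt = tuple(history)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward(m) + GAMMA * best_next - q[(state, action)])

In the paper's setting, the table would be replaced by a recurrent Q-network trained on replayed observation sequences, and the actions would be issued against the real swarm cluster.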
A Systematic Review on Osmotic Computing
Osmotic computing, in association with related computing paradigms (cloud, fog, and edge), emerges as a promising solution for handling the bulk of security-critical and latency-sensitive data generated by digital devices. It is a growing research domain that studies the deployment, migration, and optimization of applications in the form of microservices across cloud/edge infrastructure. It presents dynamically tailored microservices in technology-centric environments by exploiting edge and cloud platforms. Osmotic computing promotes digital transformation and furnishes benefits to transportation, smart cities, education, and healthcare. In this article, we present a comprehensive analysis of osmotic computing through a systematic literature review approach. To ensure a high-quality review, we conduct an advanced search on numerous digital libraries to extract related studies. The advanced search strategy identifies 99 studies, from which 29 relevant studies are selected for a thorough review. We present a summary of applications in osmotic computing based on their key features. On the basis of these observations, we outline the research challenges for applications in this research field. Finally, we discuss the security issues resolved and unresolved in osmotic computing.
Dynamical Modeling of Cloud Applications for Runtime Performance Management
Cloud computing has quickly grown to become an essential component in many modern-day software applications. It allows consumers, such as a provider of some web service, to quickly and on demand obtain the necessary computational resources to run their applications. It is desirable for these service providers to keep the running cost of their cloud application low while adhering to various performance constraints. This is made difficult by the dynamics imposed by, e.g., resource contention or a changing arrival rate of users, and by the fact that there exist multiple ways of influencing the performance of a running cloud application. To facilitate decision making in this environment, performance models can be introduced that relate the workload and the different actions to important performance metrics.

In this thesis, such performance models of cloud applications are studied. In particular, we focus on modeling using queueing theory and on the fluid model for approximating the often intractable dynamics of the queue lengths. First, existing results on how the fluid model can be obtained from the mean-field approximation of a closed queueing network are simplified and extended to allow for mixed networks. The queues are allowed to follow the processor sharing or delay disciplines, and can have multiple classes with phase-type service times. An improvement to this fluid model is then presented to increase accuracy when the system size, i.e., the number of servers, initial population, and arrival rate, is small. Furthermore, a closed-form approximation of the response time CDF is presented. The methods are tested in a series of simulation experiments and shown to be accurate. This mean-field fluid model is then used to derive a general fluid model for microservices with interservice delays. The model is shown to be completely extractable at runtime in a distributed fashion. It is further evaluated on a simple microservice application and found to accurately predict important performance metrics in most cases. Furthermore, a method is devised to reduce the cost of a running application by tuning load-balancing parameters between replicas. The method is built on gradient stepping by applying automatic differentiation to the fluid model. This allows for arbitrarily defined cost functions and constraints, most notably including different response time percentiles. The method is tested on a simple application distributed over multiple computing clusters and is shown to reduce costs while adhering to percentile constraints. Finally, modeling of request cloning is studied using the novel concept of synchronized service. This allows certain forms of cloning over servers, each modeled with a single queue, to be equivalently expressed as one single queue. The concept is very general regarding the involved queueing disciplines and distributions, but introduces new, less realistic assumptions. How the equivalent queue model is affected by relaxing these assumptions is studied for the processor sharing discipline, and an extension to enable modeling of speculative execution is made. In a simulation campaign, it is shown that these relaxations have only a minor effect in certain cases.
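As a self-contained illustration of the gradient-stepping idea, the Python sketch below Euler-integrates a small fluid model of three heterogeneous processor-sharing replicas to equilibrium and tunes the traffic split to minimize mean response time. The arrival and service rates, the smoothed service term mu*x/(x+1), and the finite-difference gradient (standing in for the automatic differentiation used in the thesis) are all simplifications invented for the example.

    # Fluid model of three heterogeneous processor-sharing replicas fed a
    # Poisson stream of rate LAMBDA, split according to probabilities p.
    # The service term mu * x / (x + 1) is a smoothed variant of mu * min(x, 1).
    LAMBDA = 8.0                     # total arrival rate (requests/s), invented
    MU = [4.0, 5.0, 6.0]             # per-replica service rates, invented
    K = len(MU)

    def steady_state_queues(p):
        """Euler-integrate dx_i/dt = p_i*LAMBDA - MU_i*x_i/(x_i+1) to equilibrium."""
        x = [0.0] * K
        for _ in range(4000):
            x = [max(0.0, xi + 0.05 * (pi * LAMBDA - mi * xi / (xi + 1.0)))
                 for xi, pi, mi in zip(x, p, MU)]
        return x

    def mean_response_time(p):
        """Little's law: mean T = total fluid population / total throughput."""
        return sum(steady_state_queues(p)) / LAMBDA

    def project_simplex(p):
        """Keep the split a probability vector after each gradient step."""
        p = [max(1e-6, v) for v in p]
        s = sum(p)
        return [v / s for v in p]

    p = [1.0 / K] * K                # start from a uniform split
    for it in range(50):
        # Finite-difference gradient; the thesis applies automatic
        # differentiation to the fluid model instead.
        base, eps, grad = mean_response_time(p), 1e-4, []
        for i in range(K):
            trial = p[:]
            trial[i] += eps
            grad.append((mean_response_time(project_simplex(trial)) - base) / eps)
        p = project_simplex([pi - 0.02 * gi for pi, gi in zip(p, grad)])

    print("tuned split:", [round(v, 3) for v in p],
          "mean response time:", round(mean_response_time(p), 3))

With these numbers the split drifts toward the faster replicas; in the thesis, percentile constraints and arbitrary cost functions would enter the same objective, differentiated directly through the fluid model.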
Designing and Deploying Internet of Things Applications in the Industry: An Empirical Investigation
Internet of Things (IoT) aims to bring connectivity to almost every object found in the physical space. It extends connectivity to everyday things and opens up the possibility to monitor, track, connect, and interact with industrial assets more efficiently. In industry today, connected sensor networks monitor logistics movements and manufacturing machines, and help organizations improve their efficiency and reduce costs. However, designing and implementing an IoT network is still a very challenging task. We are witnessing a high level of fragmentation in the IoT landscape, and developers regularly complain about the difficulty of integrating the diverse technologies of the various objects found in IoT systems and about the lack of clear guidelines and/or practices for developing and deploying safe and efficient IoT applications. Therefore, analyzing and understanding issues related to the development and deployment of the Internet of Things is utterly important to allow the industry to utilize its fullest potential. In this thesis, we examine IoT practitioners' discussions on the popular Q&A websites, Stack Overflow and Stack Exchange, to understand the challenges and issues that they face when developing and deploying different IoT applications. Next, we examine the lack of interoperability among technologies developed for IoT, study the challenges that their integration poses, and provide guidelines for practitioners interested in connecting IoT networks and devices to develop various services and applications. Since security issues are central to the success of this technology, we also investigate different security threats and challenges across the different layers of the architecture of IoT systems and propose countermeasures. Finally, we conduct a series of experiments to understand the advantages and trade-offs of serverful and serverless deployments of IoT applications in order to provide practitioners with evidence-based guidelines and recommendations on such deployments. The results presented in this thesis represent a first important step towards a deep understanding of these very promising technologies. We believe that our recommendations and suggestions will help practitioners and technology builders improve the quality of IoT software and systems. We also hope that our results can help IoT communities and consortia establish standards and guidelines for the development, maintenance, and evolution of IoT software and systems.
Increasing the Dependability of Internet-of-Things Systems in the context of End-User Development Environments
High-Performance Modelling and Simulation for Big Data Applications
This open access book was prepared as a Final Publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to afford a better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.
Cognitive Hyperconnected Digital Transformation
Cognitive Hyperconnected Digital Transformation provides an overview of the current Internet of Things (IoT) landscape, ranging from research, innovation and development priorities to enabling technologies in a global context. It is intended as a standalone book in a series that covers the Internet of Things activities of the IERC-Internet of Things European Research Cluster, including both research and technological innovation, validation and deployment. The book builds on the ideas put forward by the European Research Cluster, the IoT European Platform Initiative (IoT-EPI) and the IoT European Large-Scale Pilots Programme, presenting global views and state-of-the-art results regarding the challenges facing IoT research, innovation, development and deployment in the next years. Hyperconnected environments integrating industrial/business/consumer IoT technologies and applications require new IoT open systems architectures integrated with network architecture (a knowledge-centric network for IoT), IoT system design and open, horizontal and interoperable platforms managing things that are digital, automated and connected and that function in real-time with remote access and control based on Internet-enabled tools. The IoT is bridging the physical world with the virtual world by combining augmented reality (AR), virtual reality (VR), machine learning and artificial intelligence (AI) to support the physical-digital integrations in the Internet of mobile things based on sensors/actuators, communication, analytics technologies, cyber-physical systems, software, cognitive systems and IoT platforms with multiple functionalities. These IoT systems have the potential to understand, learn, predict, adapt and operate autonomously. They can change future behaviour, while the combination of extensive parallel processing power, advanced algorithms and data sets feeds the cognitive algorithms that allow the IoT systems to develop new services and propose new solutions. IoT technologies are moving into the industrial space and enhancing traditional industrial platforms with solutions that break free of device-, operating system- and protocol-dependency. Secure edge computing solutions replace local networks, web services replace software, and devices with networked programmable logic controllers (NPLCs) based on Internet protocols replace devices that use proprietary protocols. Information captured by edge devices on the factory floor is secure and accessible from any location in real time, opening the communication gateway both vertically (connecting machines across the factory and enabling the instant availability of data to stakeholders within operational silos) and horizontally (with one framework for the entire supply chain, across departments, business units, global factory locations and other markets). End-to-end security and privacy solutions in the IoT space require agile, context-aware and scalable components with mechanisms that are both fluid and adaptive. The convergence of IT (information technology) and OT (operational technology) makes security and privacy by default an important new element, with security addressed at the architecture level, across applications and domains, using multi-layered distributed security measures. Blockchain is transforming industry operating models by adding trust to untrusted environments, providing distributed security mechanisms and transparent access to the information in the chain.
Digital technology platforms are evolving, with IoT platforms integrating complex information systems, customer experience, analytics and intelligence to enable new capabilities and business models for digital business.