Migrating to Cloud-Native Architectures Using Microservices: An Experience Report
Migration to the cloud has been a popular topic in industry and academia in
recent years. Despite many benefits that the cloud presents, such as high
availability and scalability, most of the on-premise application architectures
are not ready to fully exploit the benefits of this environment, and adapting
them to this environment is a non-trivial task. Microservices have appeared
recently as novel architectural styles that are native to the cloud. These
cloud-native architectures can facilitate migrating on-premise architectures to
fully benefit from the cloud environments because non-functional attributes,
like scalability, are inherent in this style. Most existing approaches to cloud
migration, however, do not treat cloud-native architectures as first-class
citizens. As a result, the final product may not meet its primary
drivers for migration. In this paper, we intend to report our experience and
lessons learned in an ongoing project on migrating a monolithic on-premise
software architecture to microservices. We concluded that microservices are not
a one-size-fits-all solution, as they introduce new complexities to the system,
and many factors, such as distribution complexity, should be considered before
adopting this style. However, if adopted in a context that needs high
flexibility in terms of scalability and availability, the style can deliver its
promised benefits.
Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World
This report documents the program and the outcomes of GI-Dagstuhl Seminar
16394 "Software Performance Engineering in the DevOps World".
The seminar addressed the problem of performance-aware DevOps. Both DevOps
and performance engineering have been growing trends over the past one to two
years, in no small part due to the rising importance of identifying
performance anomalies in the operations (Ops) of cloud and big data systems and
feeding these back to the development (Dev). However, so far, the research
community has treated software engineering, performance engineering, and cloud
computing mostly as individual research areas. We aimed to identify
opportunities for cross-community collaboration, and to set the path for
long-lasting collaborations towards performance-aware DevOps.
The main goal of the seminar was to bring together young researchers (PhD
students in a later stage of their PhD, as well as PostDocs or Junior
Professors) in the areas of (i) software engineering, (ii) performance
engineering, and (iii) cloud computing and big data to present their current
research projects, to exchange experience and expertise, to discuss research
challenges, and to develop ideas for future collaborations.
DeMon++: A framework for designing and implementing Distributed Monitoring Systems based on Hierarchical Finite State Machines
In today’s interconnected world, the proliferation of diverse and numerous devices
has become increasingly common. This phenomenon is particularly evident in the
field of industrial computing, which has experienced rapid growth. With this rapid
expansion, monitoring an industrial control system (ICS) consisting of a large
number of devices becomes a critical activity. To evaluate our approach, we chose the
CERN ICS as a suitable case study for our research. The CERN ICS is a complex
network of thousands of heterogeneous control devices, including PLCs, front-end
computers, and supervisory control and data acquisition (SCADA) systems. Our approach resulted
in DeMon++, a framework for designing and implementing distributed monitoring
systems. DeMon++ uses the concept of hierarchical finite state machines to model
the system, capturing the hierarchical relationship between devices. In particular,
DeMon++ aims to be a flexible, scalable and maintainable monitoring framework
to abstract, aggregate and summarise the health state of industrial control
systems composed of a heterogeneous set of devices. As part of the CERN OpenLab
programme, this thesis provides a flexible and maintainable approach to monitoring
complex and distributed ICS, with a particular focus on the demanding environment
of CERN.
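The hierarchical aggregation DeMon++ performs can be sketched as follows. This is a minimal illustration, assuming a "worst child state wins" summary rule; the class names and states are hypothetical, not DeMon++'s actual API.

```python
from enum import IntEnum

class Health(IntEnum):
    # Ordered by severity, so max() yields the most severe state.
    OK = 0
    DEGRADED = 1
    FAILED = 2

class Device:
    """Leaf node: a single monitored device with its own health state."""
    def __init__(self, name, state=Health.OK):
        self.name = name
        self.state = state

    def health(self):
        return self.state

class Group:
    """Composite node: summarises its children's health as the worst
    (most severe) child state, propagating it up the hierarchy."""
    def __init__(self, name, children):
        self.name = name
        self.children = children

    def health(self):
        return max(child.health() for child in self.children)

# A toy two-level hierarchy: PLCs under a front-end, the front-end under the ICS.
plc_a = Device("plc-a")
plc_b = Device("plc-b", Health.DEGRADED)
frontend = Group("frontend-1", [plc_a, plc_b])
ics = Group("ics", [frontend])

print(ics.health().name)  # the degraded PLC dominates the system summary
```

The composite pattern mirrors the hierarchical relationship between devices: each level abstracts and summarises the level below, which is what lets a single top-level state stand for thousands of heterogeneous devices.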
Interoperating networked embedded systems to compose the web of things
Improvements in science and technology have enhanced our quality of life with better healthcare services, comfortable living, and transportation, among others. Human beings can now travel faster, communicate across the globe in fractions of a second, understand nature better than ever before, and generate and consume huge amounts of information. The Internet has played a central role in this development by providing a vast network of networks. Leveraging this global infrastructure, the World Wide Web provides a shared information space for an unprecedented amount of knowledge that is mostly contributed and used by human beings. The Web has played such a critical role in the adoption of the Internet that it is common to find people referring to specific web sites as "the Internet". This adoption, coupled with advances in the manufacturing of computing elements that reduced their size and price, has introduced a new wave of technology called the Internet of Things.
A rudimentary description of the Internet of Things (IoT) is an Internet that connects not only traditional computing devices (with higher capacity and a user interface) but also the everyday physical objects, or 'Things', around us. These objects are augmented with small networked embedded computing elements that interact with the host via sensors and actuators. It is estimated that there will be billions of such devices and trillions of dollars of market value distributed across multiple aspects of our lives, such as healthcare, smart homes, smart industries and smart cities. However, many challenges are hindering the wide adoption of IoT. One of these challenges is the heterogeneity of network interfaces, platforms, data formats and standards, which has led to vertical islands of systems that are not interoperable at various levels.
To address this lack of interoperability, this thesis presents the author's contributions in three categories. The first is a lightweight middleware called LISA that addresses variations in protocols and platforms. It is designed to work within the constrained resources of networked embedded devices; the overhead of the middleware is evaluated and compared with that of related frameworks. The second set of contributions focuses on higher-level system integration and its challenges. It includes a domain-specific IoT language (DoS-IL) and a server implementation to support the proposed code-on-demand approach. The scripting language enables re-configuration of the behaviour of systems during integration or functional changes, while the server abstracts the physical object and its embedded device to provide mobility services in addition to hosting the scripts. The last set of contributions is focused on either a generalized architectural-style design or a specific healthcare use case.
In summary, the thesis presents a high-level architectural style that eases the understanding and communication of IoT systems, serves as a means for system-level integration, and provides the desired quality attributes for IoT systems. The other contributions fit into the architectural style, either facilitating its adoption or showcasing specific instances of its use. The performance of the middleware, the scripting language and the server, including their resource utilization and overhead, has been analyzed and presented. In combination, the contributions enable the inter-operation of networked embedded systems that serve as building blocks for the Web of Things: a global system of IoT systems.
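The core Web of Things idea of making an embedded device reachable through ordinary web conventions can be sketched as follows. This is a toy illustration using Python's standard HTTP server; the endpoint path and sensor fields are hypothetical and are not part of LISA or DoS-IL.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory stand-in for a networked embedded sensor.
SENSOR_STATE = {"name": "room-thermometer", "temperature_c": 21.5}

class ThingHandler(BaseHTTPRequestHandler):
    """Exposes the device as a web resource: GET /sensor returns its
    current state as JSON, so any HTTP client can interoperate with it
    without knowing the device's native protocol or platform."""
    def do_GET(self):
        if self.path == "/sensor":
            body = json.dumps(SENSOR_STATE).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

def serve(port=8080):
    """Blocks forever, serving the device's state over HTTP."""
    HTTPServer(("", port), ThingHandler).serve_forever()
```

Because the interface is plain HTTP and JSON, the heterogeneity of the underlying network interface, platform and data format is hidden behind a uniform web-facing contract, which is precisely the interoperability the Web of Things aims for.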
09021 Abstracts Collection -- Software Service Engineering
From 04.01.2009 to 07.01.2009, the Dagstuhl Seminar 09021 "Software Service Engineering" was held in Schloss Dagstuhl, Leibniz Center for Informatics.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available.
A framework for Model-Driven Engineering of resilient software-controlled systems
Emergent paradigms of Industry 4.0 and the Industrial Internet of Things expect cyber-physical systems to reliably provide services, overcoming disruptions in operative conditions and adapting to changes in architectural and functional requirements. In this paper, we describe a hardware/software framework supporting the operation and maintenance of software-controlled systems, enhancing resilience by promoting a Model-Driven Engineering (MDE) process to automatically derive structural configurations and failure models from reliability artifacts. Specifically, a reflective architecture developed around digital twins enables representation and control of system Configuration Items properly derived from SysML Block Definition Diagrams, providing support for variation. In addition, a plurality of distributed analytic agents for qualitative evaluation over executable failure models empowers the system with runtime self-assessment and dynamic adaptation capabilities. We describe the framework architecture, outlining roles and responsibilities in a System of Systems perspective, and provide salient design traits of the digital twins and data analytic agents for failure propagation modeling and analysis. We discuss a prototype implementation following the MDE approach, highlighting self-recovery and self-adaptation properties on a real cyber-physical system for vehicle access control to Limited Traffic Zones.
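Qualitative failure propagation of the kind such analytic agents evaluate can be sketched as follows. This is a minimal fixed-point illustration over a dependency graph; the component names and the "any failed dependency fails the dependent" rule are illustrative assumptions, not the paper's actual failure models.

```python
# A toy qualitative failure-propagation analysis: each component lists the
# components it depends on, and a component is considered failed if it has
# failed directly or if any of its dependencies has (transitively) failed.
DEPENDS_ON = {
    "gate_controller": ["camera", "plate_reader"],
    "plate_reader": ["camera"],
    "camera": [],
}

def propagate(failed):
    """Return the set of components failed directly or by propagation."""
    failed = set(failed)
    changed = True
    while changed:  # iterate until no new failure is derived (fixed point)
        changed = False
        for comp, deps in DEPENDS_ON.items():
            if comp not in failed and any(d in failed for d in deps):
                failed.add(comp)
                changed = True
    return failed
```

Evaluating such a model at runtime against the observed state of each component is one way a monitoring agent can assess whether a local fault will propagate into a system-level service disruption.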
Governance Framework for Cloud Computing
In the current era of a competitive business world and stringent market-share and revenue-sustenance challenges, organizations tend to focus more on their core competencies than on the functional areas that support the business. However, traditionally this has not been possible in the IT management area, because the technologies and their underlying infrastructures are significantly complex, requiring dedicated and sustained in-house efforts to maintain the IT systems that enable core business activities. Senior executives are in many cases forced to conclude that it is too cumbersome, expensive and time consuming to manage internal IT infrastructures, which takes the focus away from their core revenue-making activities. This scenario creates the need for external infrastructure hosting, external service provision and outsourcing capability, a trend that resulted in the evolution of IT outsourcing models. The authors analyse the option of leveraging the cloud computing model to address this common scenario. This paper initially discusses the characteristics of cloud computing, focusing on scalability and delivery as a service. The model is evaluated using two case scenarios: an enterprise client with 30,000 worldwide customers, followed by small-scale subject-matter expertise delivered through small to medium enterprise (SME) organisations. The paper evaluates the findings and develops a governance framework to articulate the value proposition of cloud computing. The model takes into consideration the financial aspects, and the behaviors and IT control structures, of an IT organisation.