
    On autonomic platform-as-a-service: characterisation and conceptual model

    In this position paper, we envision a Platform-as-a-Service conceptual and architectural solution for large-scale and data-intensive applications. Our architectural approach is based on autonomic principles; its ultimate goal is therefore to reduce human intervention, cost, and perceived complexity by enabling the autonomic platform to manage such applications itself in accordance with high-level policies. Such policies allow the platform to (i) interpret the application specifications; (ii) map the specifications onto the target computing infrastructure, so that the applications are executed and their Quality of Service (QoS), as specified in their SLA, is enforced; and, most importantly, (iii) automatically adapt such previously established mappings when observed behaviours deviate from the expected ones. Such adaptations may involve modifications to the arrangement of the computational infrastructure, i.e. re-designing the communication network topology that dictates how computational resources interact, or even live-migration to a different computational infrastructure. The ultimate goal of these challenges is to (de)provision computational machines, storage, and networking links, together with their required topologies, in order to supply the application with the virtualised infrastructure that best meets its SLAs. Generic architectural blueprints and principles have been provided for designing and implementing an autonomic computing system. We revisit them in order to provide a customised and specific view for PaaS platforms, and integrate emerging paradigms such as DevOps for automated deployments, Monitoring as a Service for accurate and large-scale monitoring, and well-known formalisms such as Petri Nets for building performance models.
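
    A minimal sketch of what such an SLA-driven autonomic loop could look like, assuming a simple monitor/analyse/plan/execute structure; the names (SlaPolicy, monitor, execute, the adaptation labels) are illustrative placeholders, not the paper's API:

```python
# Sketch of a policy-driven adaptation loop: compare monitored QoS against SLA
# thresholds and trigger an adaptation (e.g. topology re-mapping, live migration).
# All identifiers and thresholds are invented for illustration.
from dataclasses import dataclass


@dataclass
class SlaPolicy:
    """High-level policy: a QoS metric, its SLA threshold, and an adaptation action."""
    metric: str
    threshold: float          # maximum tolerated value (e.g. p95 latency in ms)
    adaptation: str           # e.g. "remap_topology", "live_migrate", "scale_out"


def monitor(app_id: str) -> dict:
    # Placeholder for Monitoring-as-a-Service readings of the running application.
    return {"p95_latency_ms": 180.0, "error_rate": 0.002}


def execute(app_id: str, adaptation: str) -> None:
    # Placeholder for (de)provisioning machines, storage, links or re-mapping topology.
    print(f"[{app_id}] applying adaptation: {adaptation}")


def control_loop(app_id: str, policies: list[SlaPolicy]) -> None:
    """Analyse measurements against SLA policies, then plan and execute adaptations."""
    measurements = monitor(app_id)
    for policy in policies:
        observed = measurements.get(policy.metric)
        if observed is not None and observed > policy.threshold:
            execute(app_id, policy.adaptation)


if __name__ == "__main__":
    control_loop("data-intensive-app", [
        SlaPolicy("p95_latency_ms", 150.0, "remap_topology"),
        SlaPolicy("error_rate", 0.01, "live_migrate"),
    ])
```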

    An Autonomous Engine for Services Configuration and Deployment.

    The runtime management of the infrastructure providing service-based systems is a complex task, to the point where manual operation struggles to be cost-effective. As the functionality is provided by a set of dynamically composed, distributed services, achieving a management objective requires applying multiple operations over the distributed elements of the managed infrastructure. Moreover, the manager must cope with the highly heterogeneous characteristics and management interfaces of the runtime resources. With this in mind, this paper proposes to support the configuration and deployment of services with an automated closed control loop. The automation is enabled by the definition of a generic information model, which captures all the information relevant to the management of the services with the same abstractions, describing the runtime elements, service dependencies, and business objectives. On top of that, a technique based on satisfiability is described which automatically diagnoses the state of the managed environment and obtains the changes required to correct it (e.g., installation, service binding, update, or configuration). The results from a set of case studies drawn from the banking domain are provided to validate the feasibility of this proposal.
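
    A toy illustration of the satisfiability-based diagnosis idea, under the assumption that the managed environment and its dependencies can be flattened into boolean constraints; the component names and the brute-force search are stand-ins (the paper's banking case studies and actual encoding are not reproduced here):

```python
# Sketch: encode service dependencies and the business objective as boolean
# constraints, find a satisfying assignment, and derive the corrective actions
# (what to install/bind) as the difference to the current state.
# A real system would use a SAT/SMT solver instead of brute force.
from itertools import product

components = ["payment_service", "ledger_db", "message_bus"]

# Dependencies as implications: if X is deployed, Y must be deployed too.
requires = [("payment_service", "ledger_db"), ("payment_service", "message_bus")]

objective = {"payment_service"}          # business objective: this service must run
current_state = {"ledger_db"}            # what is already installed/running


def satisfies(assignment: dict) -> bool:
    if not all(assignment[c] for c in objective):
        return False
    return all(not assignment[x] or assignment[y] for x, y in requires)


def diagnose():
    # Exhaustive search over truth assignments (feasible only for tiny models).
    for values in product([False, True], repeat=len(components)):
        assignment = dict(zip(components, values))
        if satisfies(assignment):
            deployed = {c for c, v in assignment.items() if v}
            return sorted(deployed - current_state)   # required changes
    return None


print("changes to apply:", diagnose())   # e.g. ['message_bus', 'payment_service']
```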

    Design and Implementation of a Scalable Crowdsensing Platform for Geospatial Data

    In recent years, smart devices and small low-powered sensors have become ubiquitous, and nowadays everything is interconnected, which is a promising foundation for crowdsensing of data related to various environmental and societal phenomena. Such data is often especially meaningful when related to time and location, which the GPS capabilities already built into modern smart devices make possible. However, in order to gain knowledge from high-volume crowd-sensed data, it has to be collected and stored in a central platform, where it can be processed and transformed for various use cases. Conventional approaches built around classical relational databases and monolithic backends that load and process the geospatial data on a per-request basis are not suitable for supporting the data requests of a large crowd willing to visualize phenomena. The possibly millions of data points introduce challenges for calculation, data transfer, and visualization on smartphones with limited graphics performance. We have created an architectural design which combines a cloud-native approach with Big Data concepts used in the Internet of Things. The architectural design can be used as a generic foundation to implement a scalable backend for a platform that covers aspects important for crowdsensing, such as social and incentive features, as well as a sophisticated stream-processing concept to process incoming measurement data and store pre-aggregated results. The calculation is based on a global grid system to index geospatial data for efficient aggregation and to build a hierarchical geospatial relationship of averaged values that can be used directly to serve visualization requests rapidly and efficiently. We introduce the Noisemap project as an exemplary use case of such a platform and elaborate on certain requirements and challenges also related to frontend implementations. The goal of the project is to collect crowd-sensed noise measurements via smartphones and provide users with information and a visualization of noise levels in their environment, which requires storing and processing in a central platform. A prototypic implementation for the measurement context of the Noisemap project shows that the architectural design is indeed feasible to realize.
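
    A compact sketch of the grid-based pre-aggregation described above, assuming a plain latitude/longitude grid as a stand-in for the global grid system; the class and function names (GridAggregator, cell_id) and the level choices are hypothetical, and the example only illustrates how streamed measurements could be folded into hierarchical running averages:

```python
# Sketch: index each measurement into a hierarchy of grid cells (coarser cells at
# lower levels) and keep running averages per cell, so visualization requests can
# be answered from pre-aggregated values instead of raw points.
from collections import defaultdict


def cell_id(lat: float, lon: float, level: int) -> tuple:
    """Quantize a coordinate to a grid cell; a higher level means finer resolution."""
    step = 1.0 / (2 ** level)            # cell edge length in degrees
    return (level, int(lat // step), int(lon // step))


class GridAggregator:
    def __init__(self, levels=(4, 8, 12)):
        self.levels = levels
        self.sums = defaultdict(float)
        self.counts = defaultdict(int)

    def ingest(self, lat: float, lon: float, value: float) -> None:
        """Stream-processing step: update the running sum/count at every level."""
        for level in self.levels:
            key = cell_id(lat, lon, level)
            self.sums[key] += value
            self.counts[key] += 1

    def average(self, lat: float, lon: float, level: int):
        key = cell_id(lat, lon, level)
        return self.sums[key] / self.counts[key] if self.counts[key] else None


agg = GridAggregator()
agg.ingest(48.137, 11.575, 62.0)         # e.g. a noise measurement in dB(A)
agg.ingest(48.138, 11.576, 58.0)
print(agg.average(48.137, 11.575, 4))    # coarse-cell average for visualization: 60.0
```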

    Towards Digital Twin-enabled DevOps for CPS providing Architecture-Based Service Adaptation & Verification at Runtime

    Industrial Product-Service Systems (IPSS) denote a service-oriented (SO) way of providing access to CPS capabilities. The design of such systems bears high risk due to uncertainty in requirements related to service function and behavior, operation environments, and evolving customer needs. Such risks and uncertainties are well known in the IT sector, where DevOps principles ensure continuous system improvement through reliable and frequent delivery processes. A modular and SO system architecture complements these processes to facilitate IT system adaptation and evolution. This work proposes a method to use and extend the Digital Twins (DTs) of IPSS assets to enable the continuous optimization of CPS service delivery and its adaptation to changing needs and environments. This reduces uncertainty during design and operations by assuring IPSS integrity and availability, especially for design and service adaptations at CPS runtime. The method builds on transferring IT DevOps principles to DT-enabled CPS IPSS. The chosen design approach integrates, reuses, and aligns the DT processing and communication resources with DevOps requirements derived from the literature. We use these requirements to propose a DT-enabled self-adaptive CPS model, which guides the realization of DT-enabled DevOps in CPS IPSS. We further propose detailed design models for operation-critical DTs that integrate CPS closed-loop control and architecture-based CPS adaptation. This integrated approach enables the implementation of A/B testing as a use case and central concept for CPS IPSS service adaptation and reconfiguration. The self-adaptive CPS model and DT design concept have been validated in an evaluation environment for operation-critical CPS IPSS. The demonstrator achieved sub-millisecond cycle times during service A/B testing at runtime without causing CPS operation interference or downtime. Comment: Final published version appearing in the 17th Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2022).
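
    A rough sketch of the runtime A/B testing idea, under the assumption that a twin-side dispatcher can route individual control cycles between two service variants while recording cycle times; the variant logic, split ratio, and function names are invented for illustration and are not the paper's design models:

```python
# Sketch: route each closed-loop cycle to service variant A (current) or B
# (candidate adaptation) and measure the cycle time, so a new variant can be
# evaluated at runtime without interrupting CPS operation.
import random
import time


def variant_a(sensor_value: float) -> float:
    return sensor_value * 0.9            # current control behaviour


def variant_b(sensor_value: float) -> float:
    return sensor_value * 0.85           # candidate adaptation under test


def ab_cycle(sensor_value: float, b_share: float = 0.2):
    """One control cycle: pick a variant, compute the actuation, time the cycle."""
    start = time.perf_counter()
    if random.random() < b_share:
        name, output = "B", variant_b(sensor_value)
    else:
        name, output = "A", variant_a(sensor_value)
    cycle_ms = (time.perf_counter() - start) * 1000.0
    return name, output, cycle_ms


if __name__ == "__main__":
    for reading in (1.0, 1.2, 0.8):
        variant, actuation, ms = ab_cycle(reading)
        print(f"variant={variant} actuation={actuation:.3f} cycle={ms:.4f} ms")
```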

    Modeling an Industrial Revolution: How to Manage Large-Scale, Complex IoT Ecosystems?

    Advancements in the modern digital industry have given birth to a number of closely interrelated concepts: in the age of the Internet of Things (IoT), Systems of Systems (SoS), Cyber-Physical Systems (CPS), Digital Twins, and the fourth industrial revolution, everything revolves around designing well-understood, sound, and secure complex systems while providing maximum flexibility, autonomy, and dynamics. The aim of the paper is to present a concise overview of a comprehensive conceptual framework for the integrated modeling and management of industrial IoT architectures, supported by actual evidence from the Arrowhead Tools project; we adopt a three-dimensional projection of our complex engineering space, ranging from modeling the engineering process to SoS design and deployment. In particular, we start from modeling principles of the engineering process itself. Then, we present a design-time SoS representation along with a toolchain concept aiding SoS design and deployment. This brings us to reasoning about which workflows are conceivable for specifying comprehensive toolchains along with their data exchange interfaces. We also discuss the potential of aligning our vision with RAMI 4.0, as well as the utilization perspectives for real-life engineering use cases.
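
    A small, purely illustrative sketch of what a design-time SoS representation and toolchain description could look like as data structures; the schema (SystemModel, ServiceInterface, ToolchainStep) is a simplification invented here and is not the Arrowhead Tools metamodel:

```python
# Sketch of a design-time SoS view: systems expose and consume service interfaces,
# and toolchain steps exchange models through declared input/output formats.
from dataclasses import dataclass, field


@dataclass
class ServiceInterface:
    name: str
    protocol: str                        # e.g. "CoAP", "MQTT", "HTTPS"


@dataclass
class SystemModel:
    name: str
    provided: list = field(default_factory=list)   # ServiceInterface instances
    consumed: list = field(default_factory=list)


@dataclass
class ToolchainStep:
    tool: str
    consumes: str                        # data exchange format, e.g. "requirements"
    produces: str


# A sensor system provides a measurement service to a gateway, and a two-step
# toolchain carries the model from design into deployment.
sensor = SystemModel("temperature-sensor", provided=[ServiceInterface("measure", "CoAP")])
gateway = SystemModel("edge-gateway", consumed=[ServiceInterface("measure", "CoAP")])
toolchain = [
    ToolchainStep("sos-designer", consumes="requirements", produces="SoS model"),
    ToolchainStep("deployment-engine", consumes="SoS model", produces="running SoS"),
]
print(sensor, gateway, toolchain, sep="\n")
```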