51 research outputs found

    Dagstuhl News January - December 2011

    "Dagstuhl News" is a publication edited especially for the members of the Foundation "Informatikzentrum Schloss Dagstuhl" to thank them for their support. The News gives a summary of the scientific work being done in Dagstuhl. Each Dagstuhl Seminar is presented in a short abstract describing the contents and scientific highlights of the seminar as well as the perspectives and challenges of the research topic.

    Enabling Model-Driven Live Analytics For Cyber-Physical Systems: The Case of Smart Grids

    Advances in software, embedded computing, sensors, and networking technologies will lead to a new generation of smart cyber-physical systems that will far exceed the capabilities of today's embedded systems. They will be entrusted with increasingly complex tasks like controlling electric grids or autonomously driving cars. These systems have the potential to lay the foundations for tomorrow's critical infrastructures, to form the basis of emerging and future smart services, and to improve the quality of our everyday lives in many areas. In order to solve their tasks, they have to continuously monitor and collect data from physical processes, analyse this data, and make decisions based on it. Making smart decisions requires a deep understanding of the environment, the internal state, and the impacts of actions. Such deep understanding relies on efficient data models to organise the sensed data and on advanced analytics. Since cyber-physical systems control physical processes, decisions need to be taken very quickly. This makes it necessary to analyse data live, as opposed to conventional batch analytics. However, the complex nature of these systems, combined with the massive amount of data they generate, poses fundamental challenges. While data in the context of cyber-physical systems shares some characteristics with big data, it carries a particular complexity. This complexity results from the complicated physical phenomena described by the data, which makes it difficult to extract a model able to explain the data and its various multi-layered relationships. Existing solutions fail to provide sustainable mechanisms to analyse such data live. This dissertation presents a novel approach, named model-driven live analytics. The main contribution of this thesis is a multi-dimensional graph data model that brings raw data, domain knowledge, and machine learning together in a single model, which can drive live analytic processes. This model is continuously updated with the sensed data and can be leveraged by live analytic processes to support the decision-making of cyber-physical systems. The presented approach has been developed in collaboration with an industrial partner and, in the form of a prototype, applied to the domain of smart grids. The addressed challenges are derived from this collaboration as a response to shortcomings in the current state of the art. More specifically, this dissertation provides solutions for the following challenges: First, data handled by cyber-physical systems is usually dynamic (data in motion as opposed to traditional data at rest) and changes frequently and at different paces. Analysing such data is challenging, since data models usually represent only a snapshot of a system at one specific point in time. A common approach is discretisation, which regularly samples and stores such snapshots at specific timestamps to keep track of the history; continuously changing data is then represented as a finite sequence of snapshots. Such a representation is very inefficient to analyse, since it requires mining the snapshots, extracting a relevant dataset, and finally analysing it. For this problem, this thesis presents a temporal graph data model and storage system that treat time as a first-class property. A time-relative navigation concept makes it possible to analyse frequently changing data very efficiently. Secondly, making sustainable decisions requires anticipating the impact that potential actions would have.
    In complex cyber-physical systems, situations can arise where hundreds or thousands of such hypothetical actions must be explored before a sound decision can be made. Every action leads to an independent alternative, from which a set of further actions can be applied, and so forth. Finding the sequence of actions that leads to the desired alternative requires efficiently creating, representing, and analysing many different alternatives. Given that every alternative has its own history, this creates a very high combinatorial complexity of alternatives and histories, which is hard to analyse. To tackle this problem, this dissertation introduces a multi-dimensional graph data model (as an extension of the temporal graph data model) that makes it possible to efficiently represent, store, and analyse many different alternatives live. Thirdly, complex cyber-physical systems are often distributed, but to fulfil their tasks they typically need to share context information between computational entities. This requires analytic algorithms to reason over distributed data, a complex task since it relies on the aggregation and processing of various distributed and constantly changing data. To address this challenge, this dissertation proposes an approach to transparently distribute the presented multi-dimensional graph data model in a peer-to-peer manner and defines a stream processing concept to efficiently handle frequent changes. Fourthly, to meet future needs, cyber-physical systems need to become increasingly intelligent. To make smart decisions, these systems have to continuously refine the behavioural models known at design time with what can only be learned from live data. Machine learning algorithms can help to capture this unknown behaviour by extracting commonalities over massive datasets. Nevertheless, a coarse-grained common behaviour model can be very inaccurate for cyber-physical systems, which are composed of completely different entities with very different behaviour. For these systems, fine-grained learning can be significantly more accurate. However, modelling, structuring, and synchronising many fine-grained learning units is challenging. To tackle this, this thesis presents an approach to define reusable, chainable, and independently computable fine-grained learning units, which can be modelled together with, and on the same level as, the domain data. This makes it possible to weave machine learning directly into the presented multi-dimensional graph data model. In summary, this thesis provides an efficient multi-dimensional graph data model to enable live analytics of the complex, frequently changing, and distributed data of cyber-physical systems. This model can significantly improve data analytics for such systems and empower cyber-physical systems to make smart decisions live. The presented solutions combine and extend methods from model-driven engineering, models@run.time, data analytics, database systems, and machine learning.
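    To make the temporal and multi-dimensional concepts concrete, the following is a minimal Python sketch, not the thesis's actual API: every attribute of a node is stored as a time-indexed history, reads resolve against a timestamp (time-relative navigation), and hypothetical alternatives ("worlds") branch copy-on-write from a parent world. Names such as TemporalNode and branch are illustrative assumptions.

        import bisect
        from collections import defaultdict

        class TemporalNode:
            """A node whose attributes are versioned per (world, time)."""

            def __init__(self):
                # world -> attribute -> list of (timestamp, value), sorted by time
                self._versions = defaultdict(lambda: defaultdict(list))
                self._parents = {}  # world -> world it branched from

            def set(self, attr, value, time, world="main"):
                history = self._versions[world][attr]
                times = [t for t, _ in history]
                history.insert(bisect.bisect_right(times, time), (time, value))

            def get(self, attr, time, world="main"):
                # Resolve to the latest value at or before `time`; unknown
                # worlds fall back to the world they branched from.
                while world is not None:
                    history = self._versions[world][attr]
                    times = [t for t, _ in history]
                    i = bisect.bisect_right(times, time) - 1
                    if i >= 0:
                        return history[i][1]
                    world = self._parents.get(world)
                raise KeyError(f"{attr} has no value at or before t={time}")

            def branch(self, new_world, from_world="main"):
                # A hypothetical alternative shares its parent's history
                # until it is written to (copy-on-write).
                self._parents[new_world] = from_world

        meter = TemporalNode()
        meter.set("load_kw", 3.2, time=10)
        meter.branch("what-if")
        meter.set("load_kw", 5.0, time=20, world="what-if")
        assert meter.get("load_kw", time=15, world="what-if") == 3.2  # inherited
        assert meter.get("load_kw", time=25, world="what-if") == 5.0

    Analyses over an alternative then navigate the same graph with a different (world, time) pair instead of mining a sequence of stored snapshots.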

    The Next Evolution of MDE: A Seamless Integration of Machine Learning into Domain Modeling

    Machine learning algorithms are designed to resolve unknown behaviors by extracting commonalities over massive datasets. Unfortunately, learning such global behaviors can be inaccurate and slow for systems composed of heterogeneous elements that behave very differently, as is the case, for instance, for cyber-physical systems and Internet of Things applications. Instead, to make smart decisions, such systems have to continuously refine the behavior on a per-element basis and compose these small learning units together. However, combining and composing learned behaviors from different elements is challenging and requires domain knowledge. Therefore, there is a need to structure and combine the learned behaviors and the domain knowledge in a flexible way. In this paper we propose to weave machine learning into domain modeling. More specifically, we suggest decomposing machine learning into reusable, chainable, and independently computable small learning units, which we refer to as micro learning units. These micro learning units are modeled together with, and at the same level as, the domain data. We show, based on a smart grid case study, that our approach can be significantly more accurate than learning a global behavior, while remaining fast enough to be used for live learning.
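    As an illustration of the idea, here is a small, hypothetical Python sketch of a micro learning unit: one tiny online learner per domain element, composable into larger units. The incremental-mean learner and all names are stand-ins, not the paper's implementation.

        class MicroLearningUnit:
            """A per-element learner that can be chained with other units."""

            def __init__(self, element_id, inputs=()):
                self.element_id = element_id
                self.inputs = list(inputs)  # upstream units whose output we consume
                self._n, self._mean = 0, 0.0

            def observe(self, value):
                # Incremental mean as a stand-in for a real online learner.
                self._n += 1
                self._mean += (value - self._mean) / self._n

            def predict(self):
                # Compose this element's own estimate with its upstream context.
                return self._mean + sum(u.predict() for u in self.inputs)

        # Fine-grained: one unit per meter, chained into a substation-level unit.
        meter_a = MicroLearningUnit("meter-a")
        meter_b = MicroLearningUnit("meter-b")
        substation = MicroLearningUnit("substation", inputs=[meter_a, meter_b])
        meter_a.observe(2.0); meter_a.observe(4.0)  # per-element mean: 3.0
        meter_b.observe(1.0)                        # per-element mean: 1.0
        print(substation.predict())                 # 4.0

    Because each unit is independently computable, units can be updated individually as their element's live data arrives, instead of refitting one global model.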

    Multi-Quality Auto-Tuning by Contract Negotiation

    A characteristic challenge of software development is the management of omnipresent change. Classically, this constant change is driven by customers changing their requirements. The wish to optimally leverage available resources opens another source of change: the software system's environment. Software is tailored to specific platforms (e.g., hardware architectures), resulting in many variants of the same software optimized for different environments. If the environment changes, a different variant is to be used, i.e., the system has to reconfigure to the variant optimized for the new situation. The automation of such adjustments is the subject of the research community on self-adaptive systems. The basic principle is a control loop, as known from control theory: the system (and its environment) is continuously monitored, the collected data is analyzed, and decisions for or against a reconfiguration are computed and realized. Central problems in this field, which are addressed in this thesis, are the management of interdependencies between non-functional properties of the system, the handling of multiple criteria in decision making, and scalability. In this thesis, a novel approach to self-adaptive software, Multi-Quality Auto-Tuning (MQuAT), is presented, which provides design and operation principles for software systems that automatically provide the best possible utility to the user while producing the least possible cost. For this purpose, a component model has been developed that enables the software developer to design and implement self-optimizing software systems in a model-driven way. This component model allows for the specification of the structure as well as the behavior of the system, and the notion of quality contracts is utilized to cover the non-functional behavior and, especially, the dependencies between non-functional properties of the system. At runtime, the component model covers the state of the running system. This runtime model is used in combination with the contracts to generate optimization problems in different formalisms: Integer Linear Programming (ILP), Pseudo-Boolean Optimization (PBO), Ant Colony Optimization (ACO), and Multi-Objective Integer Linear Programming (MOILP). Standard solvers are applied to derive solutions to these problems, which represent reconfiguration decisions if the identified configuration differs from the current one. Each approach is empirically evaluated in terms of its scalability, showing the feasibility of all approaches except ACO, the superiority of ILP over PBO, and the limits of each: 100 component types for ILP, 30 for PBO, 10 for ACO, and 30 for 2-objective MOILP. In the presence of more than two objective functions, the MOILP approach is shown to be infeasible.
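    The contract-based reconfiguration decision can be sketched as follows. This is a simplified, dependency-free stand-in: where the thesis generates ILP or PBO problems for standard solvers, a brute-force search over variants plays the solver's role here, and the contract format and all names are assumptions.

        from dataclasses import dataclass

        @dataclass
        class Variant:
            name: str
            requires: dict   # quality contract: resource -> minimum amount
            utility: float   # provided quality (higher is better)
            cost: float      # e.g. energy or resource consumption

        def reconfigure(variants, environment, current):
            # A variant is usable only if the environment satisfies its contract.
            usable = [v for v in variants
                      if all(environment.get(r, 0) >= need
                             for r, need in v.requires.items())]
            if not usable:
                return None
            # Best utility per cost; a real solver would optimise this objective.
            best = max(usable, key=lambda v: v.utility / v.cost)
            # Reconfigure only if the optimum differs from the running variant.
            return best if best.name != current else None

        variants = [Variant("gpu-optimised", {"gpu": 1}, utility=10.0, cost=4.0),
                    Variant("cpu-fallback", {"cpu": 2}, utility=6.0, cost=3.0)]
        print(reconfigure(variants, {"cpu": 4}, current="gpu-optimised"))
        # -> the cpu-fallback variant: the GPU disappeared, so the system switches.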

    Descriptive business process models at run-time

    Today's competitive markets require organisations to react proactively to changes in their environment if financial and legal consequences are to be avoided. Since business processes are elementary parts of modern organisations, they too are required to adapt to these changes efficiently, in quick and flexible ways. This requirement demands a more dynamic handling of business processes, i.e. treating business processes as run-time artefacts rather than design-time artefacts. One general approach to this problem is provided by the models@run.time community, which promotes methodologies for self-adaptive systems in which models reflect the system's current state at any point in time and allow immediate reasoning and adaptation mechanisms. However, in contrast to common self-adaptive systems, the domain of business processes features two additional challenges: (i) a bigger than usual abstraction gap between the business process models and the actual run-time information of the enterprise system, and (ii) the possibility of run-time deviations from the planned models. Developing an understanding of such processes is crucial in order to optimise business processes and dynamically adapt them to changing demands. This thesis explores the potential of adopting and enhancing principles and mechanisms from the models@run.time domain for the business process domain for the purpose of run-time reasoning, i.e. it investigates the potential role of Descriptive Business Process Models at Run-time (DBPMRTs) in business process management. The DBPMRT is a model describing the enterprise system at run-time, thus enabling higher-level reasoning on the as-is state. Along with the specification of the DBPMRT, algorithms and an overall framework are proposed to establish and maintain a causal link from the enterprise system to the DBPMRT at run-time. Furthermore, it is shown that proactive higher-level reasoning on a DBPMRT in the form of performance prediction allows for more accurate results. By taking these steps, the thesis addresses general challenges of business process management, e.g. dealing with frequently changing processes and shortening the business process life cycle. At the same time, it contributes to research in models@run.time by providing a complex real-world use case as well as a reference approach for dealing with volatile models@run.time at a higher level of abstraction.
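    The causal link from the enterprise system to the run-time model can be sketched roughly as below. The event names, the event-to-activity mapping, and the planned model are hypothetical; the point is that low-level events are lifted across the abstraction gap to process activities, and deviations from the planned model are recorded rather than rejected.

        # Planned control flow and a mapping from system events to activities.
        PLANNED_NEXT = {"receive_order": "check_stock", "check_stock": "ship"}
        EVENT_TO_ACTIVITY = {"msg.order.created": "receive_order",
                             "db.stock.query": "check_stock",
                             "api.carrier.book": "ship"}

        class DescriptiveProcessModel:
            """A run-time model reflecting the as-is state of the process."""

            def __init__(self):
                self.trace, self.deviations = [], []

            def on_event(self, event):
                activity = EVENT_TO_ACTIVITY.get(event)
                if activity is None:
                    return  # below the model's level of abstraction
                if self.trace and PLANNED_NEXT.get(self.trace[-1]) != activity:
                    # A run-time deviation from the planned model.
                    self.deviations.append((self.trace[-1], activity))
                self.trace.append(activity)

        model = DescriptiveProcessModel()
        for event in ["msg.order.created", "api.carrier.book"]:  # stock check skipped
            model.on_event(event)
        print(model.deviations)  # [('receive_order', 'ship')]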

    A Meta-Circular Basis for Model-Based Language Engineering

    Meta-modelling is a technique that facilitates the construction of new languages to be used in system development. Although meta-modelling is supported by a number of tools and technologies, notably the Meta Object Facility from the OMG, there is no widely accepted precise basis for meta-modelling that can be used to develop and study language-based approaches to system development. Recent advances in meta-modelling have proposed several approaches to mixing types and instances and to allowing constraints to hold over multiple levels. This article proposes a collection of key characteristic features that are used to define a foundational, self-contained, unifying meta-language, which is evaluated through several examples.
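    The meta-circular idea can be made concrete with a deliberately tiny sketch; this is an assumption-laden illustration, not the article's formal meta-language. Every element has a type, and the chain is closed by making the root meta-class an instance of itself, so the meta-language describes itself without an external foundation.

        class Element:
            """A model element; the root element is its own meta (meta-circularity)."""

            def __init__(self, name, meta=None):
                self.name = name
                self.meta = meta if meta is not None else self  # self-typed root

        Class = Element("Class")                # Class is an instance of Class
        Person = Element("Person", meta=Class)  # a model-level type
        alice = Element("alice", meta=Person)   # an instance-level element

        def meta_chain(element):
            chain = [element.name]
            while element.meta is not element:  # stop at the self-typed root
                element = element.meta
                chain.append(element.name)
            return chain

        print(meta_chain(alice))  # ['alice', 'Person', 'Class']

    In such a structure, types and instances coexist in one model, and constraints can in principle be attached at any level of the chain.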

    Architectural stability of self-adaptive software systems

    This thesis studies the notion of stability in software engineering with the aim of understanding its dimensions, facets, and aspects, as well as characterising it. The thesis further investigates behavioural stability at the architectural level, as a property concerned with the architecture's capability to maintain the achievement of the expected quality of service and to accommodate runtime changes, in order to delay the drift and phase-out of the architecture that results from continuously failing to provide the required qualities. The research aims to provide systematic and methodological support for analysing, modelling, designing, and evaluating architectural stability. The novelty of this research lies in considering stability during runtime operation, focusing on the stable provision of quality of service without violations. As the runtime dimension is associated with adaptations, the research investigates stability in the context of self-adaptive software architectures, where runtime stability is challenged by the quality of adaptation, which in turn affects the quality of service. The research evaluation focuses on effectiveness, scale, and accuracy in handling runtime dynamics, using self-adaptive cloud architectures.
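    One simple way to quantify the behavioural notion used here, offered as an illustrative assumption rather than the thesis's own metric, is the fraction of observations in which the delivered quality of service stayed within its contracted bound:

        def stability(qos_samples, bound):
            """Share of samples meeting the bound; 1.0 means no violations."""
            ok = sum(1 for q in qos_samples if q <= bound)
            return ok / len(qos_samples)

        latencies_ms = [180, 190, 350, 200, 210, 195]  # one breach of the bound
        print(f"{stability(latencies_ms, bound=250):.2f}")  # 0.83

    A behaviourally stable architecture keeps such a ratio high across runtime changes and adaptations.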

    Achieving Autonomic Computing through the Use of Variability Models at Run-time

    Increasingly, software needs to dynamically adapt its behavior at run-time in response to changing conditions in the supporting computing infrastructure and in the surrounding physical environment. Adaptability is emerging as a necessary underlying capability, particularly for highly dynamic systems such as context-aware or ubiquitous systems. By automating tasks such as installation, adaptation, or healing, Autonomic Computing envisions computing environments that evolve without the need for human intervention. Even though there is a fair amount of work on architectures and their theoretical design, Autonomic Computing has been criticised as a "hype topic" because very little of it has been fully implemented. Furthermore, given that an autonomic system must change states at runtime, and that some of those states may emerge and are much less deterministic, providing new guidelines, techniques, and tools to help autonomic system development is a great challenge. This thesis shows that building on the central ideas of Model Driven Development (models as first-order citizens) and Software Product Lines (variability management) can play a significant role as we move towards implementing the key self-management properties associated with autonomic computing. The presented approach encompasses systems that are capable of modifying their own behavior with respect to changes in their operating environment, by using variability models as if they were the policies that drive the system's autonomic reconfiguration at runtime. Driven by a set of reconfiguration commands, the components that make up the architecture dynamically cooperate to change the configuration of the architecture to a new one. This work also provides the implementation of a Model-Based Reconfiguration Engine (MoRE) to blend the above ideas. Given a context event, MoRE queries the variability models to determine how the system should evolve, and then provides the mechanisms for modifying the system.
    Cetina Englada, C. (2010). Achieving Autonomic Computing through the Use of Variability Models at Run-time [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/7484
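    As a closing illustration of the engine described above, the following rough sketch shows a variability model acting as a run-time policy, in the spirit of MoRE but with hypothetical context conditions and resolution rules: given a context event, the engine resolves which features should be active and emits activation and deactivation commands as the difference from the current configuration.

        # Resolutions: context condition -> features that should be active.
        RESOLUTIONS = {"user_present": {"lighting", "climate"},
                       "user_absent": {"security"}}

        def reconfigure(event, active_features):
            target = RESOLUTIONS[event]
            commands = ([("deactivate", f) for f in sorted(active_features - target)] +
                        [("activate", f) for f in sorted(target - active_features)])
            return target, commands

        active = {"lighting", "climate"}
        active, commands = reconfigure("user_absent", active)
        print(commands)
        # [('deactivate', 'climate'), ('deactivate', 'lighting'),
        #  ('activate', 'security')]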