33 research outputs found

    Enabling Model-Driven Live Analytics For Cyber-Physical Systems: The Case of Smart Grids

    Advances in software, embedded computing, sensors, and networking technologies will lead to a new generation of smart cyber-physical systems that will far exceed the capabilities of today’s embedded systems. They will be entrusted with increasingly complex tasks like controlling electric grids or autonomously driving cars. These systems have the potential to lay the foundations for tomorrow’s critical infrastructures, to form the basis of emerging and future smart services, and to improve the quality of our everyday lives in many areas. In order to solve their tasks, they have to continuously monitor and collect data from physical processes, analyse this data, and make decisions based on it. Making smart decisions requires a deep understanding of the environment, internal state, and the impacts of actions. Such deep understanding relies on efficient data models to organise the sensed data and on advanced analytics. Considering that cyber-physical systems control physical processes, decisions need to be taken very fast. This makes it necessary to analyse data live, as opposed to conventional batch analytics. However, the complex nature of these systems, combined with the massive amount of data they generate, imposes fundamental challenges. While data in the context of cyber-physical systems shares some characteristics with big data, it holds a particular complexity. This complexity results from the complicated physical phenomena described by this data, which makes it difficult to extract a model able to explain such data and its various multi-layered relationships. Existing solutions fail to provide sustainable mechanisms to analyse such data live.

    This dissertation presents a novel approach, named model-driven live analytics. The main contribution of this thesis is a multi-dimensional graph data model that brings raw data, domain knowledge, and machine learning together in a single model, which can drive live analytic processes. This model is continuously updated with the sensed data and can be leveraged by live analytic processes to support the decision-making of cyber-physical systems. The presented approach has been developed in collaboration with an industrial partner and, in the form of a prototype, applied to the domain of smart grids. The addressed challenges are derived from this collaboration as a response to shortcomings in the current state of the art. More specifically, this dissertation provides solutions for the following challenges:

    First, data handled by cyber-physical systems is usually dynamic (data in motion as opposed to traditional data at rest) and changes frequently and at different paces. Analysing such data is challenging, since data models usually can only represent a snapshot of a system at one specific point in time. A common approach is discretisation, which regularly samples and stores such snapshots at specific timestamps to keep track of the history. Continuously changing data is then represented as a finite sequence of such snapshots. Such data representations are very inefficient to analyse, since analysing them requires mining the snapshots, extracting a relevant dataset, and finally analysing it. For this problem, this thesis presents a temporal graph data model and storage system, which consider time as a first-class property. A time-relative navigation concept makes it possible to analyse frequently changing data very efficiently.

    Secondly, making sustainable decisions requires anticipating what impacts certain actions would have. In complex cyber-physical systems, situations can arise where hundreds or thousands of such hypothetical actions must be explored before a solid decision can be made. Every action leads to an independent alternative from which a set of other actions can be applied, and so forth. Finding the sequence of actions that leads to the desired alternative requires efficiently creating, representing, and analysing many different alternatives. Given that every alternative has its own history, this creates a very high combinatorial complexity of alternatives and histories, which is hard to analyse. To tackle this problem, this dissertation introduces a multi-dimensional graph data model (as an extension of the temporal graph data model) that makes it possible to efficiently represent, store, and analyse many different alternatives live.

    Thirdly, complex cyber-physical systems are often distributed, but to fulfil their tasks these systems typically need to share context information between computational entities. This requires analytic algorithms to reason over distributed data, which is a complex task since it relies on the aggregation and processing of various distributed and constantly changing data. To address this challenge, this dissertation proposes an approach to transparently distribute the presented multi-dimensional graph data model in a peer-to-peer manner and defines a stream processing concept to efficiently handle frequent changes.

    Fourthly, to meet future needs, cyber-physical systems need to become increasingly intelligent. To make smart decisions, these systems have to continuously refine behavioural models that are known at design time with what can only be learned from live data. Machine learning algorithms can help to capture this unknown behaviour by extracting commonalities over massive datasets. Nevertheless, learning a coarse-grained common behaviour model can be very inaccurate for cyber-physical systems, which are composed of completely different entities with very different behaviour. For these systems, fine-grained learning can be significantly more accurate. However, modelling, structuring, and synchronising many fine-grained learning units is challenging. To tackle this, this thesis presents an approach to define reusable, chainable, and independently computable fine-grained learning units, which can be modelled together with, and on the same level as, domain data. This allows machine learning to be woven directly into the presented multi-dimensional graph data model.

    In summary, this thesis provides an efficient multi-dimensional graph data model to enable live analytics of the complex, frequently changing, and distributed data of cyber-physical systems. This model can significantly improve data analytics for such systems and empower cyber-physical systems to make smart decisions live. The presented solutions combine and extend methods from model-driven engineering, models@run.time, data analytics, database systems, and machine learning.
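    To illustrate the idea of treating time as a first-class property of the data model, the following minimal Java sketch (hypothetical names, not the dissertation's actual API) stores attribute values as time-indexed versions and resolves reads relative to a timestamp instead of mining stored snapshots.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.NavigableMap;
    import java.util.TreeMap;

    public class TemporalNode {

        // For each attribute, keep a time-ordered map of value versions.
        private final Map<String, NavigableMap<Long, Object>> attributes = new HashMap<>();

        // Record a new value for an attribute, valid from the given timestamp on.
        public void set(String attribute, long timestamp, Object value) {
            attributes.computeIfAbsent(attribute, a -> new TreeMap<>()).put(timestamp, value);
        }

        // Time-relative read: return the value that was valid at the given timestamp.
        public Object resolve(String attribute, long timestamp) {
            NavigableMap<Long, Object> versions = attributes.get(attribute);
            if (versions == null) {
                return null;
            }
            Map.Entry<Long, Object> entry = versions.floorEntry(timestamp);
            return entry == null ? null : entry.getValue();
        }

        public static void main(String[] args) {
            TemporalNode meter = new TemporalNode();
            meter.set("consumption", 1000L, 4.2);   // value valid from t=1000
            meter.set("consumption", 2000L, 5.1);   // newer version from t=2000
            System.out.println(meter.resolve("consumption", 1500L)); // prints 4.2
            System.out.println(meter.resolve("consumption", 2500L)); // prints 5.1
        }
    }

    A per-attribute ordered version map keeps time-relative reads logarithmic in the number of versions, which is the kind of efficiency the snapshot-sequence representation criticised above cannot offer.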

    Model-driven software engineering for construction engineering: Quo vadis?

    Models are an inherent part of the construction industry, which benefits from the steady advancements in information and communication technology. One of these advancements is Building Information Modeling (BIM), which denotes the move from 2D drawings to semantically rich models of the objects subject to construction. Additionally, the way stakeholders collaborate in construction projects and organize themselves is being revisited; this is commonly denoted as Integrated Project Delivery (IPD). Both BIM and IPD originate from the basic principles of Lean Construction: the vision to minimize waste, increase value, and continuously improve. The application of Model-driven Software Engineering (MDSE) to BIM is a natural choice. Although several approaches utilizing MDSE for BIM have been proposed, so far no structured overview of the current state of the art exists. Such an overview is vitally needed, because the existing literature is fragmented across multiple research areas. Consequently, in this paper, we present a systematic literature review on the application of MDSE to BIM, IPD, and Lean Construction, resulting in a systematically derived taxonomy, which we used to classify 97 papers published between 2008 and 2018. Based on the taxonomy, we provide an analysis of the classified research showing (a) where the discourse on model-driven construction engineering currently stands, (b) the state of the art of model-driven techniques in construction engineering, and (c) open research challenges.

    Dagstuhl News January - December 2011

    "Dagstuhl News" is a publication edited especially for the members of the Foundation "Informatikzentrum Schloss Dagstuhl" to thank them for their support. The News give a summary of the scientific work being done in Dagstuhl. Each Dagstuhl Seminar is presented by a small abstract describing the contents and scientific highlights of the seminar as well as the perspectives or challenges of the research topic

    Final CONNECT Architecture

    Interoperability remains a fundamental challenge when connecting heterogeneous systems that encounter and spontaneously communicate with one another in pervasive computing environments. This challenge is exacerbated by the highly heterogeneous technologies employed by each of the interacting parties, i.e., in terms of hardware, operating system, middleware protocols, and application protocols. The key aim of the CONNECT project is to drop this heterogeneity barrier and achieve universal interoperability. Here we report on the revised CONNECT architecture, highlighting the work carried out to integrate the CONNECT enablers developed by the different partners; in particular, we present the progress of this work towards a finalised concrete architecture. In the third year this architecture has been enhanced to: i) produce concrete CONNECTors, ii) match networked systems based upon their goals and intent, and iii) use learning technologies to find the affordance of a system. We also report on the application of the CONNECT approach to streaming-based systems, and further consider the exploitation of CONNECT in the mobile environment.

    DACA: arquitetura para implementação de mecanismos dinâmicos de controlo de acesso em camadas de negócio

    Doctoral thesis in Computer Science. Access control is a software engineering challenge in database applications. Currently, there is no satisfactory solution to dynamically implement evolving fine-grained access control mechanisms (FGACM) on the business tiers of relational database applications. To tackle this access control gap, we propose an architecture, herein referred to as Dynamic Access Control Architecture (DACA). DACA allows FGACM to be dynamically built and updated at runtime in accordance with the established fine-grained access control policies (FGACP). DACA explores and makes use of Call Level Interface (CLI) features to implement FGACM on business tiers. Among these features, we emphasize their performance and their multiple modes of access to data residing in relational databases. The different access modes of CLI are wrapped by typed objects driven by FGACM, which are built and updated at runtime. Programmers dispense with the traditional access modes of CLI and instead use the ones dynamically implemented and updated. DACA comprises three main components: the Policy Server (a repository of metadata for FGACM), the Dynamic Access Control Component (DACC, the business tier component responsible for implementing FGACM), and the Policy Manager (a broker between the DACC and the Policy Server). Unlike current approaches, DACA does not depend on any particular access control model or access control policy, thus promoting its applicability to a wide range of different situations. In order to validate DACA, a solution based on Java, Java Database Connectivity (JDBC), and SQL Server was devised and implemented. Two evaluations were carried out. The first evaluates DACA's capability to implement and update FGACM dynamically, at runtime; the second assesses DACA's performance against a standard use of JDBC without any FGACM. The collected results show that DACA is an effective approach for implementing evolving FGACM on business tiers based on Call Level Interfaces, in this case JDBC.
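    To make the idea of CLI-based typed access objects concrete, the sketch below shows one possible shape of such a wrapper in plain Java/JDBC. The class and policy names are hypothetical and the policy check is deliberately simplistic; this is not DACA's actual API, only an illustration of enforcing a runtime-updatable fine-grained policy in the business tier before data is read.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.Set;

    public class OrdersAccessObject {

        private final Connection connection;
        // Columns the current subject may read; in an architecture like DACA this
        // information would be provided and refreshed at runtime by a policy component.
        private volatile Set<String> readableColumns;

        public OrdersAccessObject(Connection connection, Set<String> readableColumns) {
            this.connection = connection;
            this.readableColumns = readableColumns;
        }

        // Allows the policy to evolve at runtime without rebuilding the business tier.
        public void updatePolicy(Set<String> readableColumns) {
            this.readableColumns = readableColumns;
        }

        // Reads a single column of an order, but only if the active policy allows it.
        public Object readColumn(int orderId, String column) throws SQLException {
            if (!readableColumns.contains(column)) {
                throw new SecurityException("Access to column '" + column + "' denied by policy");
            }
            // The column name was validated against the policy allow-list above.
            String sql = "SELECT " + column + " FROM orders WHERE id = ?";
            try (PreparedStatement stmt = connection.prepareStatement(sql)) {
                stmt.setInt(1, orderId);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next() ? rs.getObject(1) : null;
                }
            }
        }
    }

    In the architecture described above, such typed objects would be built and updated from the FGACM metadata held by the Policy Server rather than configured by hand.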

    Role-based Data Management

    Database systems form an integral component of today’s software systems; as such, they are the central point for storing and sharing a software system’s data while ensuring global data consistency at the same time. Introducing the primitives of roles and their accompanying metatype distinction in modeling and programming languages results in a novel paradigm of designing, extending, and programming modern software systems. In detail, roles as a modeling concept enable a separation of concerns within an entity. Along with its rigid core, an entity may acquire various roles in different contexts during its lifetime and thus adapt its behavior and structure dynamically at runtime. Unfortunately, database systems, as an important component and the global consistency provider of such systems, do not keep pace with this trend. The absence of a metatype distinction, in terms of an entity’s separation of concerns, in the database system results in various problems for the software system in general, for the application developers, and finally for the database system itself. In the case of relational database systems, these problems are subsumed under the term role-relational impedance mismatch. In particular, the whole software system is designed using different semantics on various layers. In the case of role-based software systems in combination with relational database systems, this gap in semantics between applications and the database system increases dramatically. Consequently, the database system cannot directly represent the richer semantics of roles or the accompanying consistency constraints. These constraints have to be ensured by the applications, and the database system loses its single-point-of-truth characteristic in the software system. As the applications are in charge of guaranteeing global consistency, their development requires more effort in data management. Moreover, the software system’s data management is distributed over several layers, which results in an unstructured software system architecture.

    To overcome the role-relational impedance mismatch and bring the database system back to its rightful position as the single point of truth in a software system, this thesis introduces the novel and tripartite RSQL approach. It combines a novel database model that represents the metatype distinction as a first-class citizen in a database system, a query language adapted to this database model, and finally a proper result representation. Precisely, RSQL’s logical database model introduces Dynamic Data Types to directly represent the separation of concerns within an entity type on the schema level. On the instance level, the database model defines the notion of a Dynamic Tuple, which combines an entity with the notion of roles and thus allows for dynamic structure adaptations at runtime without changing an entity’s overall type. These definitions build the main data structures on which the database system operates. Moreover, formal operators connecting the query language statements with the database model’s data structures complete the database model. The query language, as the external database system interface, features its own data definition, data manipulation, and data query languages. Their statements directly represent the metatype distinction to address Dynamic Data Types and Dynamic Tuples, respectively. As a consequence of the novel data structures, the query processing of Dynamic Tuples is completely redesigned. As the last piece of a complete database integration of the role-based notion and its accompanying metatype distinction, we specify the RSQL Result Net as the result representation. It provides a novel result structure and features functionality to navigate through query results. Finally, we evaluate all three RSQL components in comparison to a relational database system. This assessment clearly demonstrates the benefits of the full database integration of the roles concept.
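    The roles concept that RSQL lifts into the database can be illustrated with a small, purely programmatic Java sketch (hypothetical names, not RSQL's data structures): an entity keeps a rigid core and acquires or abandons roles at runtime, adapting its structure without changing its overall type.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Optional;

    interface Role { }

    // Context-dependent state lives in the role, not in the rigid core.
    class Employee implements Role {
        final String company;
        double salary;
        Employee(String company, double salary) { this.company = company; this.salary = salary; }
    }

    class Person {
        final String name;
        private final List<Role> roles = new ArrayList<>();

        Person(String name) { this.name = name; }

        void play(Role role) { roles.add(role); }

        void abandon(Class<? extends Role> type) { roles.removeIf(type::isInstance); }

        <R extends Role> Optional<R> as(Class<R> type) {
            return roles.stream().filter(type::isInstance).map(type::cast).findFirst();
        }
    }

    public class RolesDemo {
        public static void main(String[] args) {
            Person alice = new Person("Alice");
            alice.play(new Employee("ACME", 4200.0));          // dynamic structure adaptation
            alice.as(Employee.class).ifPresent(e -> e.salary += 100);
            alice.abandon(Employee.class);                     // the rigid core stays unchanged
        }
    }

    RSQL's contribution is to give this metatype distinction (core type versus role type) a direct counterpart in the database schema and query language, instead of leaving it to application code as in this sketch.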

    A Functional, Comprehensive and Extensible Multi-Platform Querying and Transformation Approach

    This thesis is about a new model querying and transformation approach called FunnyQT, which is realized as a set of APIs and embedded domain-specific languages (DSLs) in the JVM-based functional Lisp dialect Clojure. Founded on a powerful model management API, FunnyQT provides querying services such as comprehensions, quantified expressions, regular path expressions, logic-based relational model querying, and pattern matching. On the transformation side, it supports the definition of unidirectional model-to-model transformations, in-place transformations, and bidirectional transformations, as well as a new kind of co-evolution transformation that allows a model to be evolved together with its metamodel simultaneously. Several properties make FunnyQT unique. Foremost, it is just a Clojure library; thus, FunnyQT queries and transformations are Clojure programs. However, most higher-level services are provided as task-oriented embedded DSLs which use Clojure's powerful macro system to support the user with tailor-made language constructs important for the task at hand. Since queries and transformations are just Clojure programs, they may use any Clojure or Java library for their own purposes, e.g., a templating library for defining model-to-text transformations. Conversely, like every Clojure program, FunnyQT queries and transformations compile to normal JVM byte code and can easily be called from other JVM languages. Furthermore, FunnyQT is platform-independent and designed with extensibility in mind. By default, it supports the Eclipse Modeling Framework and JGraLab, and support for other modeling frameworks can be added with minimal effort and without having to modify the respective framework's classes or FunnyQT itself. Lastly, because FunnyQT is embedded in a functional language, it has a functional emphasis itself: every query and every transformation compiles to a function which can be passed around, given to higher-order functions, or parametrized with other functions.

    Trust engineering framework for software services

    This thesis presents a framework that spans different phases of the software service life cycle and allows requirements engineers, designers, and developers to integrate trust and reputation models into such services. In the planning phase, we propose a methodology to evaluate trust in Cloud providers before deciding whether the system, or part of it, is migrated to the Cloud. In the analysis phase, we offer a notation for capturing and representing trust and reputation requirements. Also in this phase, we develop a methodology that makes it possible to detect insider threats in a system through the analysis of trust relationships. For the design phase, we propose a UML profile that allows the specification of trust and reputation models, which facilitates the subsequent implementation phase, for which we develop a framework that developers can use to implement a wide variety of trust and reputation models. Finally, for the runtime verification phase, we present a framework built on top of a platform for self-adaptive systems that implements the models-at-runtime paradigm. With this framework, developers can implement trust and reputation models and use the information provided by these models to specify reconfiguration policies at runtime. This allows the system to adapt itself so that tolerable levels of trust and reputation are maintained in the components of which it consists. All of the above work rests on a conceptual framework that captures and relates the most relevant notions in the domains of trust and reputation.
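    As a rough illustration of the runtime phase described above, the sketch below (hypothetical names, not the thesis's framework) shows how trust values kept in a runtime model could drive a simple reconfiguration policy that reacts when a component's trust falls below a tolerated level.

    import java.util.HashMap;
    import java.util.Map;

    public class TrustGuard {

        private final Map<String, Double> trust = new HashMap<>();
        private final double threshold;

        public TrustGuard(double threshold) { this.threshold = threshold; }

        // Update the trust value of a component from monitored evidence.
        public void report(String component, double newTrust) {
            trust.put(component, newTrust);
            if (newTrust < threshold) {
                reconfigure(component);
            }
        }

        // Reconfiguration policy: here we only log; a real system would rebind the service.
        private void reconfigure(String component) {
            System.out.println("Trust of " + component + " below " + threshold
                    + ", switching to a fallback provider");
        }

        public static void main(String[] args) {
            TrustGuard guard = new TrustGuard(0.6);
            guard.report("payment-service", 0.9);
            guard.report("payment-service", 0.4); // triggers the reconfiguration policy
        }
    }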

    Model Driven Development and Maintenance of Business Logic for Information Systems

    As information systems become more and more important in today's society, business firms, organizations, and individuals rely on these systems to manage their daily business and social activities. The dependency of possibly critical business processes on complex IT systems requires a strategy that supports IT departments in reducing the time needed to implement changed or new domain requirements of functional departments. In this context, software models help to manage a system's complexity and provide a tool for communication and documentation purposes. Moreover, software engineers tend to use automated software model processing such as code generation to improve development and maintenance processes. Particularly in the context of web-based information systems, a number of model-driven approaches have been developed. However, we believe that, compared to the user interface layer and the persistency layer, there could be better support through consistent approaches providing a suitable architecture for the model-driven development of business logic. To ameliorate this situation, we developed an architectural blueprint consisting of meta models, tools, and method support for the model-driven development and maintenance of business logic, from analysis until system maintenance. This blueprint, which we call the Amabulo infrastructure, consists of five layers and provides concepts and tools to set up and apply concrete infrastructures for model-driven development projects. Modeling languages can be applied as needed. In this thesis we focus on the business logic layers of J2EE applications; however, concrete code generation rules can easily be adapted for different target platforms. After providing a high-level overview of our Amabulo infrastructure, we describe its layers in detail:

    The Visual Model Layer is responsible for all visual modeling tasks. For this purpose, we discuss requirements for visual software models for business logic, analyze several visual modeling languages concerning their usefulness, and provide a UML profile for business logic models.

    The Abstract Model Layer provides an abstract view of the business logic model in the form of a domain-specific model, which we call the Amabulo model. An Amabulo model is reduced to pure logical information concerning business logic aspects and focuses on information that is relevant for code generation. For this purpose, an Amabulo model integrates model elements for process modeling, state modeling, and structural modeling. It is used as a common interface between visual modeling languages and code generators. Visual models of the Visual Model Layer are automatically transformed into an Amabulo model.

    The Abstract System Layer provides a formal view of the system in the form of a Coloured Petri Net (CPN). A Coloured Petri Net representation of the modeled business logic is a formal structure and independent of the actual business logic implementation. After an Amabulo model is automatically transformed into a CPN, it can be analyzed and simulated before any line of code is generated.

    The Code Generation Layer is responsible for code generation. To support the design and implementation of project-specific code generators, we discuss several code integration issues and provide object-oriented design approaches to tackle them. Then, we provide a conceptual mapping of Amabulo model elements to architectural elements of a J2EE infrastructure. This mapping explicitly considers robustness features, which support a later manual integration of generated critical code artifacts and external systems.

    The Application Layer is the target layer of an Amabulo infrastructure and comprises the generated code artifacts. These artifacts are instances of a specific target platform specification, and they can be modified for integration purposes with development tools.

    Through the contributions in this thesis, we aim to provide an integrated set of solutions to support an efficient model-driven development and maintenance process for the business logic of information systems. Therefore, we provide a consistent infrastructure blueprint that considers modeling tasks, model analysis tasks, and code generation tasks. As a result, we see potential for reducing the development and maintenance efforts for changed domain requirements while simultaneously guaranteeing robustness and maintainability even after several changes.
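    One common object-oriented technique for the kind of generated/manual code integration described above is the generation gap pattern: generated code goes into a base class that is overwritten on every regeneration, while hand-written logic lives in a subclass that survives it. The sketch below is illustrative only and uses hypothetical names, not Amabulo's generated artifacts.

    // Regenerated on every model change; never edited by hand.
    abstract class OrderProcessBase {
        public final void approveOrder(long orderId) {
            checkPreconditions(orderId);          // generated skeleton delegates to hand-written logic
            persistStateChange(orderId, "APPROVED");
        }

        protected abstract void checkPreconditions(long orderId);

        protected void persistStateChange(long orderId, String state) {
            System.out.println("order " + orderId + " -> " + state);
        }
    }

    // Written and maintained manually; survives regeneration of the base class.
    public class OrderProcess extends OrderProcessBase {
        @Override
        protected void checkPreconditions(long orderId) {
            if (orderId <= 0) {
                throw new IllegalArgumentException("invalid order id");
            }
        }

        public static void main(String[] args) {
            new OrderProcess().approveOrder(42L);
        }
    }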

    Survey of Template-Based Code Generation

    A critical step in model-driven engineering (MDE) is the automatic synthesis of textual artifacts from models. This is a very useful model transformation, used to generate application code, to serialize models into persistent storage, and to generate documentation or reports. Among the various model-to-text transformation paradigms, Template-Based Code Generation (TBCG) is the most popular in MDE. TBCG is a synthesis technique that produces code from high-level specifications, called templates. It is a popular technique in MDE given that both emphasize abstraction and automation. Given the diversity of tools and approaches, it is necessary to classify and compare existing TBCG techniques in order to provide appropriate support to developers. The goal of this thesis is to better understand the characteristics of TBCG techniques, identify research trends, and assess the importance of the role of MDE in this code synthesis approach. We also evaluate the expressiveness, performance, and scalability of the associated tools based on a range of models that implement critical patterns. To this end, we conduct a systematic mapping study of the literature that paints an interesting overview of TBCG, and a comparative study of TBCG tools to better guide developers in their choices. This study shows that model-based tools offer more expressiveness whereas code-based tools perform much faster. Xtend2 offers the best compromise between expressiveness and performance.
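    The core mechanism surveyed here can be illustrated with a deliberately tiny, tool-agnostic Java sketch: a template with placeholders is expanded against model data to synthesize a textual artifact. Real TBCG tools such as those compared in the study offer far richer template languages; the names below are made up for illustration.

    import java.util.Map;

    public class TinyTemplateEngine {

        // Replace each ${key} placeholder in the template with its model value.
        public static String expand(String template, Map<String, String> model) {
            String output = template;
            for (Map.Entry<String, String> binding : model.entrySet()) {
                output = output.replace("${" + binding.getKey() + "}", binding.getValue());
            }
            return output;
        }

        public static void main(String[] args) {
            String template =
                "public class ${name} {\n" +
                "    private ${type} ${field};\n" +
                "}\n";
            Map<String, String> model = Map.of("name", "Customer", "type", "String", "field", "email");
            System.out.println(expand(template, model));   // prints generated Java source as text
        }
    }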