34 research outputs found

    Role-Modeling in Round-Trip Engineering for Megamodels

    Software is becoming an ever larger part of our daily life and makes it easier, for example in the areas of communication and infrastructure. Model-driven software development forms the basis for developing software through the use and combination of different models, which serve as central artifacts in the software development process. In this respect, model-driven software development comprises the process from requirements analysis through design to software implementation. This set of models, together with their relationships to each other, forms a so-called megamodel. Because the models overlap, inconsistencies arise between them and must be removed. Round-trip engineering is a mechanism for synchronizing models and is the foundation for ensuring consistency between them. Most of the current approaches in this area, however, work with outdated batch-oriented transformation mechanisms, which no longer meet the requirements of more complex, long-living, and ever-changing software. In addition, the creation of megamodels is time-consuming and complex, and they represent unmanageable constructs for a single user. The aim of this thesis is to create a megamodel by means of easy-to-learn mechanisms and to achieve its consistency by removing redundancy on the one hand and by incrementally managing consistency relationships on the other. In addition, views must be created on parts of the megamodel so that they can be extracted across internal model boundaries. To achieve these goals, the role concept introduced by Kühn in 2014 is used in the context of model-driven software development; it was developed in the Research Training Group 'Role-based Software Infrastructures for continuous-context-sensitive Systems.' One contribution of this work is a role-based single underlying model approach, which enables the generation of views on heterogeneous models. In addition, an approach for the synchronization of different models has been developed, which allows the role-based single underlying model approach to be extended with new models. The combination of these two approaches yields a runtime-adaptive megamodel approach that can be used in model-driven software development. The resulting approaches are evaluated based on an example from the literature, which covers all areas of the work. In addition, the model synchronization approach is evaluated in connection with the Transformation Tool Contest case from 2019.
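    To make the role-based single underlying model idea more concrete, the following minimal Python sketch shows one possible reading of it: elements of a single shared model play roles, and views are generated by collecting the elements that play a given role, so the same element can appear in several views without being duplicated. All class, role, and attribute names are illustrative assumptions and are not taken from the thesis.

    # Illustrative sketch (hypothetical names, not the thesis' implementation):
    # a single underlying model (SUM) whose elements can play roles, and views
    # that are generated by collecting the roles of a given kind.

    class Element:
        def __init__(self, name, **attributes):
            self.name = name
            self.attributes = attributes
            self.roles = {}            # role name -> role-specific data

        def play(self, role, **data):
            """Let this SUM element play an additional role."""
            self.roles[role] = data

    class SingleUnderlyingModel:
        def __init__(self):
            self.elements = []

        def add(self, element):
            self.elements.append(element)
            return element

        def view(self, role):
            """Generate a view: all elements that currently play the given role."""
            return [(e.name, e.roles[role]) for e in self.elements if role in e.roles]

    sum_model = SingleUnderlyingModel()
    customer = sum_model.add(Element("Customer", kind="class"))
    customer.play("UMLClass", stereotype="entity")
    customer.play("DatabaseTable", table_name="CUSTOMER")

    # The same underlying element appears in a UML view and in a database view,
    # so there is no redundant copy that could become inconsistent.
    print(sum_model.view("UMLClass"))
    print(sum_model.view("DatabaseTable"))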

    Designing Round-Trip Systems by Change Propagation and Model Partitioning

    Software development processes incorporate a variety of different artifacts (e.g., source code, models, and documentation). For multiple reasons, the data contained in these artifacts exhibits some degree of redundancy. Ensuring global consistency across artifacts during all stages in the development of software systems is required, because inconsistent artifacts can lead to failures. Consistency can be ensured either by reducing the amount of redundancy or by synchronizing the information that is shared across multiple artifacts. The discipline of software engineering that addresses these problems is called Round-Trip Engineering (RTE). In this thesis we present a conceptual framework for the design of RTE systems. This framework delivers precise definitions for essential terms in the context of RTE and a process that can be used to address new RTE applications. The main idea of the framework is to partition models into parts that require synchronization - skeletons - and parts that do not - clothings. Once such a partitioning is obtained, the relations between the elements of the skeletons determine whether a deterministic RTE system can be built. If not, manual decisions by developers may be required. Based on this conceptual framework, two concrete approaches to RTE are presented. The first one - Backpropagation-based RTE - employs change translation, traceability, and synchronization fitness functions to allow for the synchronization of artifacts that are connected by non-injective transformations. The second approach - Role-based Tool Integration - provides means to avoid redundancy. To do so, a novel tool design method that relies on role modeling is presented. Tool integration is then performed by creating role bindings between role models. In addition to the two concrete approaches to RTE, which form the main contributions of the thesis, we investigate the creation of bridges between technical spaces. We consider these bridges an essential prerequisite for performing logical synchronization between artifacts. The feasibility of semantic web technologies is also a subject of the thesis, because the specification of synchronization rules was identified as a blocking factor during our problem analysis. The thesis is complemented by an evaluation of all presented RTE approaches in different scenarios. Based on this evaluation, the strengths and weaknesses of the approaches are identified, and the practical feasibility of our approaches is confirmed w.r.t. the presented RTE applications.
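    The skeleton/clothing partitioning can be illustrated with a short Python sketch. It is a simplified reading of the idea, not the algorithm from the thesis: a sharing predicate splits a model into a skeleton that takes part in synchronization and a clothing that does not, and a naive propagation step reports the elements for which no deterministic decision is possible, so a developer would have to decide. All names and the dictionary-based model representation are assumptions made for the example.

    # Illustrative sketch: partition a model into skeleton (shared, must be
    # synchronized) and clothing (artifact-local, never synchronized).

    def partition(model, is_shared):
        """Split a model (element name -> value) by a sharing predicate."""
        skeleton = {k: v for k, v in model.items() if is_shared(k)}
        clothing = {k: v for k, v in model.items() if not is_shared(k)}
        return skeleton, clothing

    def synchronize(skeleton_a, skeleton_b):
        """Naive one-way propagation from A to B; returns the keys where the
        values disagree and a manual developer decision would be required."""
        conflicts = []
        for key, value in skeleton_a.items():
            if key in skeleton_b and skeleton_b[key] != value:
                conflicts.append(key)       # non-deterministic case
            else:
                skeleton_b[key] = value
        return conflicts

    code_model = {"class:Order": "Order", "method:total": "total()", "comment:1": "TODO"}
    uml_model  = {"class:Order": "Order", "method:total": "computeTotal()"}

    code_skeleton, _code_clothing = partition(code_model, lambda k: not k.startswith("comment"))
    print(synchronize(code_skeleton, uml_model))   # -> ['method:total'] needs a decision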

    TOWARDS CHANGE VALIDATION IN DYNAMIC SYSTEM UPDATING FRAMEWORKS

    Dynamic Software Updating (DSU) provides mechanisms to update a program without stopping its execution. An indiscriminate update that does not consider the current state of the computation potentially undermines the stability of the running application. Automatically determining a safe moment, i.e., the time at which the updating process may be started, is still an open problem that is usually neglected by existing DSU systems. The program developer knows best the program semantics, the logical relations between two successive versions, and the constraints that must be respected in order to proceed with the update. Therefore, a set of meta-data has been introduced that can be used to express the constraints of the update. These constraints should be considered at dynamic update time. Thus, a runtime validator has been designed and implemented to verify these constraints before starting the update process. The validator is independent of existing DSU systems and can be plugged into DSUs as a pre-update component. An architecture for validation has been proposed that includes the DSU, the running program, the validator, and their communications. Along with the ability to describe the restrictions using meta-data, a method has been presented to extract some constraints automatically. The gradual transition from the old version to the new version requires that the running application frequently switch between executing old and new code for a transient period. Although this swinging execution phenomenon is inevitable, its starting point can be chosen. Considering this issue, an automatic method has been proposed to determine which part of the code is unsafe to participate in the swinging execution. The method has been implemented as a static analyzer that can annotate the unsafe part of the code as constraints. This approach is demonstrated on the evolution of various versions of three different long-running software systems and compared to other approaches. Although the approach has been evaluated by evolving various programs, the impact of different kinds of changes on the dynamic update is not entirely clear. In addition, studying the effect of these changes can identify code smells in the program with regard to dynamic updating. For the first time, code smells have been introduced that may cause a run-time or syntax error in the dynamic update process. A set of candidate error-prone patterns has been developed based on programming language features and the possible changes for each item. This set of 75 patterns is inspected with three distinct DSUs to identify problematic cases as code smells. Additionally, the set of error-prone patterns can be used as a reference by other DSUs to measure their own flexibility.
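    As an illustration of the pre-update validation idea, the sketch below shows a hypothetical validator that checks developer-supplied constraints against a snapshot of the running program's state and signals a safe moment only when no constraint is violated. The constraint names, the state representation, and the API are invented for the example and are not tied to any existing DSU system.

    # Illustrative sketch of a pre-update validator (hypothetical API).

    from dataclasses import dataclass
    from typing import Any, Callable, Dict, List

    @dataclass
    class Constraint:
        name: str
        check: Callable[[Dict[str, Any]], bool]   # True means "safe to update"

    class UpdateValidator:
        def __init__(self, constraints: List[Constraint]):
            self.constraints = constraints

        def validate(self, runtime_state: Dict[str, Any]) -> List[str]:
            """Return the names of all violated constraints (empty list = safe)."""
            return [c.name for c in self.constraints if not c.check(runtime_state)]

    constraints = [
        Constraint("no-open-transactions",
                   lambda s: s.get("open_transactions", 0) == 0),
        Constraint("unsafe-method-not-active",
                   lambda s: "Order.process" not in s.get("call_stack", [])),
    ]

    validator = UpdateValidator(constraints)
    state = {"open_transactions": 0, "call_stack": ["Server.loop", "Order.process"]}

    violations = validator.validate(state)
    if violations:
        print("Update postponed, violated constraints:", violations)
    else:
        print("Safe moment reached; the DSU system may start the update.")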

    Extension of the IMS Learning Design specification through the adaptation and integration of units of learning

    IMS Learning Design (IMS-LD) represents a current trend in online and blended learning, characterized by the following: a) it is a specification that aims to standardize learning processes and to reuse them in different contexts; b) it has a more elaborate pedagogical expressiveness than earlier or ongoing developments; c) it maintains a cordial and promising relationship with Learning Management Systems (LMSs) as well as authoring and execution tools; d) a wide variety of research groups and European projects are working on it, which promises sustainability, at least in academia. Even so, IMS Learning Design is an early product (it is still in its first version, from 2003) and can be improved in several respects, such as pedagogical expressiveness and interoperability. Specifically, in this thesis we focus on adaptive or personalized learning and on the integration of Units of Learning, two of the pillars that define the specification and that, at the same time, strengthen it considerably. The former (adaptive learning) makes it possible to address individual, personalized study itineraries, in the learning flow as well as in content or interface; the latter (integration) breaks the isolation of information packages or courses (Units of Learning, UoLs), establishes a dialogue with other systems (LMSs), models, and standards, and allows these UoLs to be reused in different contexts. In this thesis we study the specification from the ground up, analyzing its information model and how Units of Learning are built. From Level A to Level C we analyze and critique the structure of the specification, based on a theoretical study and on practical research resulting from the modeling of real, executable Units of Learning, which provides very useful baseline information and is mostly included in the annexes so as not to interfere with the reading flow of the main body. Building on this study, we analyze the integration of Units of Learning with other systems and specifications, ranging from minimal integration via a direct link to the sharing of variables and states that allows real-time communication between both parties. We also present the conclusions of several adaptation-based case studies, which are annexed at the end of the thesis and become an essential instrument for reaching a real, applicable solution. As the second pillar of the thesis, complementary to the integration of Units of Learning, we study adaptive learning: its types, the state of the art, and the modeling approaches and restrictions within IMS-LD. Finally, as a complement to the theoretical research, we use several practical cases to study how, and to what extent, IMS-LD models the personalization of learning. This first block of analysis (general, integration, and adaptive learning) allows us to make a structural critique of IMS-LD in two major areas: Modeling and Architecture. Modeling points to issues that require the improvement, modification, extension, or incorporation of modeling elements within IMS-LD, such as processes, components, and programming resources. Architecture covers other issues centered on the communication that IMS-LD carries out with the outside world, which point directly to structural layers of the specification, beyond modeling.
Although it lies outside the core of this thesis, we have also reviewed aspects related to authoring tools, since this is an aspect that conditions the scope of modeling and the uptake of the specification among its different target audiences. Regarding tools, however, we make no improvement proposals. The developed solution focuses on the various Modeling and Architecture issues found in the analysis. It consists of a set of proposed structures, both new and modified existing ones, that strengthen the expressive capacity of the specification and its ability to interact with an external working environment. This three-year research was carried out between 2004 and 2007, mainly with colleagues from The Open University of The Netherlands, The University of Bolton, Universitat Pompeu Fabra, and the Research & Innovation department of ATOS Origin, and was partially developed within European projects such as UNFOLD, EU4ALL, and ProLearn. The main conclusion drawn from this research is that IMS-LD needs a restructuring and modification of certain elements, as well as the incorporation of new ones, in order to improve its pedagogical expressiveness and its capacity to integrate with other learning systems and eLearning standards, if two of the main objectives underlying the definition of this specification are to be reached: the personalization of the learning process and real interoperability. Even so, adoption of the specification would clearly improve if higher-level tools existed (preferably with a visual approach) that allowed simple modeling by the real end users of this kind of specification, such as teachers, content creators, and the pedagogues who design the learning experience. This point, however, is external to the specification and concerns the interpretation of it made by the research groups and companies that develop authoring solutions.
_____________________________________________
IMS Learning Design (IMS-LD) is a current asset in eLearning and blended learning, for several reasons: a) it is a specification that aims at the standardization and modeling of learning processes, and not just content, while also focusing on the re-use of information packages in several contexts; b) it shows a deeper pedagogical expressiveness than other specifications, whether already delivered or still in progress; c) it is integrated at different levels into well-known Learning Management Systems (LMSs); d) a huge number of European research projects and groups are working with it, which points to sustainability (in academia, at least). Nevertheless, IMS-LD is roughly an initial outcome (be aware that we are still working with the same release, dated 2003). Therefore, it can and must be improved in several aspects, e.g., pedagogical expressiveness and interoperability. In this thesis, we concentrate on Adaptive Learning (or Personalised Learning) and on the Integration of Units of Learning (UoLs). Both are core aspects upon which the specification is built, and both can improve it significantly. Adaptation enables personalised learning itineraries, adapted to every role and every user involved in the process, and focuses on several aspects, e.g., flow, content, and interface.
Integration fosters the re-use of IMS-LD information packages in different contexts and connects UoLs both ways with other specifications, models, and LMSs. In order to achieve these goals we carry out a three-phase analysis. First, an analysis of IMS-LD in several steps: foundations, information model, and construction of UoLs. From Level A to Level C, we analyse and review the specification structure. We lean on a theoretical framework, along with a practical approach based on the actual modeling of real UoLs, which gives important feedback. Out of this analysis we get a report on the general structure of IMS-LD. Second, an analysis and review of the integration of UoLs with several LMSs, models, and specifications. We analyse three different types of integration: a) minimal integration, with a simple link between parts; b) embedded integration, with a marriage of both parts in a single information package; and c) full integration, sharing variables and states between parts. In this step, we also show different case studies and report our partial conclusions. And third, an analysis and review of how IMS-LD models adaptive learning: we define, classify, and explain several types of adaptation and approach them with the specification. A key part of this step is the actual modeling of UoLs showing adaptive learning processes. We highlight pros and cons and stress drawbacks and weak points that could be improved in IMS-LD to support adaptation, but also general learning processes. Out of this three-step analysis (namely general, integration, adaptation) we focus our review of the IMS-LD structure and information model on two blocks: Modeling and Architecture. Modeling is focused on the processes, components, and programming resources of IMS-LD. Architecture is focused on the communication that IMS-LD establishes with the outside, both ways, and deals with upper layers of the specification, beyond modeling issues. Modeling and Architecture issues need to be addressed in order to improve the pedagogical expressiveness and the integration of IMS-LD. Furthermore, we provide an orchestrated solution that meets these goals. We develop a structured and organized group of modifications and extensions of IMS-LD that match the different reported issues. We suggest the modification, extension, and addition of different elements, aiming to strengthen the specification with respect to adaptation and integration, along with issues of general interest. The main conclusion of this research is that IMS-LD needs a restructuring and modification of some elements. It also needs to incorporate new ones. Both actions (modification and extension) are the key to improving the pedagogical expressiveness and the integration with other specifications and eLearning systems. Both actions aim at two clear objectives in the definition of IMS-LD: the personalisation of learning processes, and real interoperability. It is fair to highlight the welcome help of high-level visual authoring tools. They can support a smoother modeling process that focuses on pedagogical rather than technical issues, so that a broad target group made up of teachers, learning designers, content creators, and pedagogues could use the specification in a simpler way. However, this issue lies outside the specification, and thus outside the core of this thesis too.
This three-year research (2004-2007) was carried out along with colleagues from The Open University of The Netherlands, The University of Bolton, Universitat Pompeu Fabra, and the Research & Innovation department of ATOS Origin. In addition, a few European projects, such as UNFOLD, EU4ALL, and ProLearn, have partially supported it.
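    To illustrate how Level-B-style properties and conditions can drive the adaptation of a learning flow, the following minimal Python sketch selects the activities of a personalised itinerary from a set of simple property-value rules. The learner properties, the rule format, and the activity names are invented for the example and are not part of IMS-LD itself.

    # Illustrative sketch: condition-based adaptation of a learning flow.

    learner_properties = {"prior-knowledge": "low", "preferred-media": "video"}

    # Each rule: (property, expected value, activity shown when it matches).
    adaptation_rules = [
        ("prior-knowledge", "low",   "introductory-module"),
        ("prior-knowledge", "high",  "advanced-module"),
        ("preferred-media", "video", "video-lecture"),
        ("preferred-media", "text",  "reading-material"),
    ]

    def personalised_flow(properties, rules):
        """Select the activities whose condition matches the learner's properties."""
        return [activity for prop, expected, activity in rules
                if properties.get(prop) == expected]

    print(personalised_flow(learner_properties, adaptation_rules))
    # -> ['introductory-module', 'video-lecture']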

    TOWARDS INSTITUTIONAL INFRASTRUCTURES FOR E-SCIENCE: The Scope of the Challenge

    The three-fold purpose of this Report to the Joint Information Systems Committee (JISC) of the Research Councils (UK) is to: • articulate the nature and significance of the non-technological issues that will bear on the practical effectiveness of the hardware and software infrastructures that are being created to enable collaborations in e-Science; • characterise succinctly the fundamental sources of the organisational and institutional challenges that need to be addressed in regard to defining the terms, rights, and responsibilities of the collaborating parties, and to illustrate these by reference to the limited experience gained to date in regard to intellectual property, liability, privacy and security, and competition policy issues affecting scientific research organisations; and • propose approaches for arriving at institutional mechanisms whose establishment would generate workable, specific arrangements facilitating collaboration in e-Science, and that might also serve to meet similar needs in other spheres such as e-Learning, e-Government, e-Commerce, and e-Healthcare. In carrying out these tasks, the report examines developments in enhanced computer-mediated telecommunication networks and digital information technologies, and recent advances in technologies of collaboration. It considers the economic and legal aspects of scientific collaboration, with attention to interactions between formal contracting and 'private ordering' arrangements that rest upon research community norms. It offers definitions of e-Science, virtual laboratories, and collaboratories, and develops a taxonomy of collaborative e-Science activities, which is then used to classify British e-Science pilot projects and contrast them with US collaboratory projects funded during the 1990s. The approach to facilitating inter-organizational participation in collaborative projects rests upon the development of a modular structure of contractual clauses that permit flexibility and experience-based learning.

    Towards Semantically Enabled Complex Event Processing


    Proceedings of the Doctoral Consortium of WI 2011 (Tagungsband zum Doctoral Consortium der WI 2011)
