
    ARTIST: Model-Based Stairway to the Cloud

    Over the past decade, cloud services have emerged as one of the most promising technologies in IT. Since cloud computing allows the quality of software to be improved while, at the same time, reducing the cost of operating software and hardware, more and more software is delivered as a service in the cloud. However, moving existing software applications to the cloud and making them behave as software as a service is still a major challenge. In fact, in addition to technical aspects, business aspects also need to be considered. The ARTIST EU project (FP7) proposes a comprehensive model-based modernization approach, covering both business and technical aspects, to cloudify existing software. In particular, ARTIST employs MDE techniques to automate the reverse engineering and forward engineering phases in a way that lets modernized software truly benefit from the targeted cloud environments. In this paper we describe the overall ARTIST approach and present several lessons learned.
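
    To make the forward-engineering idea concrete, the following is a minimal Java sketch of a single model-to-model step in the spirit of the approach described above; it is not ARTIST code, and every type name and mapping rule in it is invented for illustration.

        import java.util.List;

        // Illustrative only: a toy "forward engineering" step that maps platform-specific
        // legacy components onto cloud-target equivalents. None of these types or mappings
        // come from the ARTIST tooling; they are invented for this sketch.
        record LegacyComponent(String name, String kind) {}
        record CloudComponent(String name, String targetService) {}

        public class CloudifySketch {
            // Map one platform-specific component to its cloud-target counterpart.
            static CloudComponent transform(LegacyComponent source) {
                String target = switch (source.kind()) {
                    case "LocalFileStorage" -> "BlobStorage";
                    case "RelationalDb"     -> "ManagedSqlService";
                    default                 -> "ContainerizedService";
                };
                return new CloudComponent(source.name(), target);
            }

            public static void main(String[] args) {
                // A tiny "reverse engineered" source model, then the forward step.
                List<LegacyComponent> recovered = List.of(
                        new LegacyComponent("invoice-store", "LocalFileStorage"),
                        new LegacyComponent("customer-db", "RelationalDb"));
                recovered.stream().map(CloudifySketch::transform)
                        .forEach(c -> System.out.println(c.name() + " -> " + c.targetService()));
            }
        }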

    Automatic generation of UML profile graphical editors for Papyrus

    UML profiles offer an intuitive way for developers to build domain-specific modelling languages by reusing and extending UML concepts. Eclipse Papyrus is a powerful open-source UML modelling tool which supports UML profiling. However, with power comes complexity: implementing non-trivial UML profiles and their supporting editors in Papyrus typically requires developers to handcraft and maintain a number of interconnected models through a loosely guided, labour-intensive and error-prone process. We demonstrate how metamodel annotations and model transformation techniques can help manage the complexity of Papyrus in the creation of UML profiles and their supporting editors. We present Jorvik, an open-source tool that implements the proposed approach. We illustrate its functionality with examples, and we evaluate our approach by comparing it against manual UML profile specification and editor implementation, using a non-trivial enterprise modelling language (Archimate) as a case study. We also perform a user study in which developers are asked to produce identical editors using both Papyrus and Jorvik, demonstrating the substantial productivity and maintainability benefits that Jorvik delivers.
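
    As a rough illustration of the annotated-metamodel idea, the Java sketch below marks an Ecore class with the kind of EAnnotation a generator could read in order to derive a UML stereotype and its editor. It uses the standard EMF Ecore API, but the annotation source and detail keys are assumptions for this example, not Jorvik's actual annotation vocabulary.

        import org.eclipse.emf.ecore.EAnnotation;
        import org.eclipse.emf.ecore.EClass;
        import org.eclipse.emf.ecore.EPackage;
        import org.eclipse.emf.ecore.EcoreFactory;

        // Minimal EMF sketch: annotate an Ecore class so that downstream model-to-model and
        // model-to-text transformations could turn it into a UML stereotype plus editor artefacts.
        // The "profile" annotation source and its detail keys are hypothetical.
        public class AnnotatedMetamodelExample {
            public static void main(String[] args) {
                EcoreFactory f = EcoreFactory.eINSTANCE;

                EPackage pkg = f.createEPackage();
                pkg.setName("enterprise");
                pkg.setNsPrefix("ent");
                pkg.setNsURI("http://example.org/enterprise");

                // A domain concept that should become a stereotype extending a UML metaclass.
                EClass businessActor = f.createEClass();
                businessActor.setName("BusinessActor");
                pkg.getEClassifiers().add(businessActor);

                EAnnotation ann = f.createEAnnotation();
                ann.setSource("profile");                  // hypothetical annotation source
                ann.getDetails().put("extends", "Class");  // UML metaclass to extend
                ann.getDetails().put("icon", "actor.png"); // icon for the generated editor
                businessActor.getEAnnotations().add(ann);

                System.out.println(businessActor.getName() + " -> stereotype extending "
                        + ann.getDetails().get("extends"));
            }
        }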

    Towards automatic generation of UML profile graphical editors for Papyrus

    We present an approach for defining the abstract and concrete syntax of UML profiles and their equivalent Papyrus graphical editors using annotated Ecore metamodels, driven by automated model-to-model and model-to-text transformations. We compare our approach against manual UML profile specification and implementation using Archimate, a non-trivial enterprise modelling language, and we demonstrate the substantial productivity and maintainability benefits it delivers.

    Model Transformation For Validation Of Software Design


    AVENTIS - An architecture for event data analysis

    Time-stamped event data is being generated at an exponential rate from various sources (sensor networks, e-markets, etc.), stored in event logs and made available to researchers. Despite this data deluge and the evolution of a plethora of tools and technologies, the science behind exploratory analysis and knowledge discovery lags behind. There are several reasons for this. In conducting event data analysis, researchers typically detect a pattern or trend in the data by computing time-series measures and applying the computed measures to several mathematical models to glean information from the data. This is a complex and time-consuming process covering a range of activities, from data capture (from a broad array of data sources) to the interpretation and dissemination of experimental results, forming a pipeline of activities. Further, data analysis is conducted by domain users, who are typically not IT experts, while data processing tools and applications are largely developed by application developers. End-users not only lack the critical skills to build a structured analysis pipeline, but are also perplexed by the number of different ways available to derive the necessary information. Consequently, this thesis proposes AVENTIS (Architecture for eVENT Data analysIS), a novel framework to guide the design of analytic solutions that facilitate time-series analysis of event data, tailored to the needs of domain users. The framework comprises three components: a knowledge base, a model-driven analytic methodology, and an accompanying software architecture that provides the necessary technical and operational requirements. Specifically, the research contribution lies in the framework's ability to express analysis requirements at a level of abstraction consistent with the domain users and to readily make available the information sought, without the users having to build the analysis process themselves. Secondly, the framework provides an abstract design space in which domain experts can build conceptual models of their experiments as sequences of structured tasks in a technology-neutral manner and transparently translate these abstract process models into executable implementations. To evaluate the AVENTIS framework, a prototype based on AVENTIS is implemented and tested with case studies taken from the financial research domain.
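
    The notion of expressing an analysis as a sequence of abstract tasks that is then executed over event data can be sketched generically in Java, as below; this is not the AVENTIS API, and the task helper and the simple mean measure are invented purely to illustrate the pipeline idea.

        import java.time.Instant;
        import java.util.Arrays;
        import java.util.List;
        import java.util.function.Function;

        // Generic sketch of a time-series analysis pipeline over time-stamped events:
        // an abstract sequence of tasks is composed, then applied to the event data.
        record Event(Instant timestamp, double value) {}

        public class PipelineSketch {
            // A "task" is just a named transformation from one intermediate result to the next.
            static <A, B> Function<A, B> task(String name, Function<A, B> body) {
                return input -> {
                    System.out.println("running task: " + name);
                    return body.apply(input);
                };
            }

            public static void main(String[] args) {
                List<Event> events = List.of(
                        new Event(Instant.parse("2024-01-01T10:00:00Z"), 101.0),
                        new Event(Instant.parse("2024-01-01T10:01:00Z"), 103.0),
                        new Event(Instant.parse("2024-01-01T10:02:00Z"), 99.5));

                // Conceptual task model: extract the series, then compute a measure over it.
                Function<List<Event>, double[]> extract =
                        task("extract series", evs -> evs.stream().mapToDouble(Event::value).toArray());
                Function<double[], Double> measure =
                        task("mean measure", xs -> Arrays.stream(xs).average().orElse(Double.NaN));

                double result = extract.andThen(measure).apply(events);
                System.out.println("computed measure: " + result);
            }
        }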

    Self-Organizing Software Architectures

    Examining engineering productivity is one source of improvements to the state of software engineering. We present two approaches to improving productivity: bottom-up modeling and self-configuring software components. Productivity, measured as the ability to produce correctly working software features using limited resources, is improved by performing fewer wasteful activities and by concentrating on the activities required to build sustainable software development organizations. Bottom-up modeling is a way to combine improved productivity with agile software engineering. Instead of focusing on tools and up-front planning, the models used emerge as the requirements of the product are unveiled during a project. The idea is to make the modeling formalisms strong enough to be employed in code generation and as runtime models. This brings the benefits of model-driven engineering to agile projects, where such benefits have been rare. Self-configuring components are a further development of bottom-up modeling. The notion of a source model is extended to incorporate the software entities themselves. Using computational reflection and introspection, components that depend on other parts of the software can be updated automatically to reflect changes in their dependencies. This improves maintainability, thus making software changes faster. The thesis contains a number of case studies explaining ways of applying the presented techniques. In addition to constructing the case studies, an empirical validation with test subjects is presented to show the usefulness of the techniques.

    Software development productivity is a concern for many software development organizations. Especially in the maintenance phase, the poor modifiability of software causes unnecessary costs and strained customer relationships when changes must be made to software that is hard to modify. This thesis presents two methods for improving the modifiability of software: bottom-up use of modeling languages and self-organizing software components. In model-driven software engineering, suitable modeling languages and tools are developed for the software so that the software under development can be generated automatically from the models. Developing a new modeling language and building the tooling to support it is, however, time-consuming and difficult, and there is a risk that the language is already obsolete by the time it is finished. So-called agile software methods try to avoid the pitfalls of traditional, plan-driven development methods. Excessive agility can, however, backfire as poor productivity when all of the developers' time is spent on agility exercises instead of actual productive work. Bottom-up model-driven engineering concentrates on developing only good-enough models, making it possible to combine the benefits of model-driven engineering and agile process models. In addition to external, separately developed modeling languages, the thesis introduces the idea of using the program code itself as a tool of model-driven engineering; this gives rise to a self-organizing software architecture. In this way development productivity improves, since the number of internal dependencies in the code decreases and changes become easier to make. The thesis presents case studies of code-based model-driven engineering frameworks as well as an empirical validation of the usefulness of self-organization from a productivity point of view.
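
    A minimal Java sketch of the self-configuring idea follows, using plain reflection and introspection: a small container inspects a component's fields and wires its dependencies automatically, so dependent components need no hand-written wiring code. The @DependsOn annotation and the Container class are invented for this illustration rather than taken from the thesis.

        import java.lang.annotation.ElementType;
        import java.lang.annotation.Retention;
        import java.lang.annotation.RetentionPolicy;
        import java.lang.annotation.Target;
        import java.lang.reflect.Field;
        import java.util.HashMap;
        import java.util.Map;

        // Marks a field as a dependency to be filled in via introspection (hypothetical annotation).
        @Retention(RetentionPolicy.RUNTIME)
        @Target(ElementType.FIELD)
        @interface DependsOn {}

        class Logger {
            void log(String msg) { System.out.println("[log] " + msg); }
        }

        class OrderService {
            @DependsOn Logger logger;                 // discovered and injected reflectively
            void placeOrder() { logger.log("order placed"); }
        }

        class Container {
            private final Map<Class<?>, Object> instances = new HashMap<>();

            // Create (or reuse) an instance and wire every @DependsOn field recursively.
            <T> T get(Class<T> type) throws Exception {
                Object existing = instances.get(type);
                if (existing != null) return type.cast(existing);
                T instance = type.getDeclaredConstructor().newInstance();
                instances.put(type, instance);
                for (Field field : type.getDeclaredFields()) {        // introspect the component
                    if (field.isAnnotationPresent(DependsOn.class)) {
                        field.setAccessible(true);
                        field.set(instance, get(field.getType()));    // wire the dependency
                    }
                }
                return instance;
            }
        }

        public class SelfConfiguringExample {
            public static void main(String[] args) throws Exception {
                new Container().get(OrderService.class).placeOrder();
            }
        }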

    Connected Information Management

    Society is currently inundated with more information than ever, making efficient management a necessity. Alas, most current information management suffers from several levels of disconnectedness: applications partition data into segregated islands; small notes do not fit into traditional application categories; navigating the data is different for each kind of data; and data is either available on a certain computer or only online, but rarely both. Connected information management (CoIM) is an approach to information management that avoids these forms of disconnectedness. The core idea of CoIM is to keep all information in a central repository, with generic means for organization such as tagging. The heterogeneity of data is taken into account by offering specialized editors. The central repository eliminates the islands of application-specific data and is formally grounded by a CoIM model. The foundation for structured data is an RDF repository. The RDF editing meta-model (REMM) enables form-based editing of this data, similar to database applications such as MS Access. Further kinds of data are supported by extending RDF, as follows. Wiki text is stored as RDF and can both contain structured text and be combined with structured data. Files are also supported by the CoIM model and are kept externally. Notes can be quickly captured and annotated with metadata. Generic means for organization and navigation apply to all kinds of data. Ubiquitous availability of data is ensured via two CoIM implementations, the web application HYENA/Web and the desktop application HYENA/Eclipse. All data can be synchronized between these applications. The applications were used to validate the CoIM ideas.
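
    A tiny Java example of the central-repository idea: storing a quickly captured note with generic tags as RDF. Apache Jena is used here purely as a stand-in triple store, and the example.org vocabulary is made up; neither stands for the CoIM model or the REMM editing meta-model themselves.

        import org.apache.jena.rdf.model.Model;
        import org.apache.jena.rdf.model.ModelFactory;
        import org.apache.jena.rdf.model.Property;
        import org.apache.jena.rdf.model.Resource;

        // Illustrative only: a note kept in an RDF repository and organized with tags.
        public class TaggedNoteExample {
            public static void main(String[] args) {
                String ns = "http://example.org/coim#";
                Model model = ModelFactory.createDefaultModel();
                model.setNsPrefix("coim", ns);

                Property text = model.createProperty(ns, "text");
                Property tag  = model.createProperty(ns, "tag");

                // A quickly captured note, annotated with generic tags for later navigation.
                Resource note = model.createResource(ns + "note/42")
                        .addProperty(text, "Call Alice about the project kickoff")
                        .addProperty(tag, "todo")
                        .addProperty(tag, "project");

                System.out.println("Stored note " + note.getURI());
                model.write(System.out, "TURTLE");   // serialize the repository content
            }
        }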