316 research outputs found

    Towards using intelligent techniques to assist software specialists in their tasks

    Automation and intelligence are major concerns in the field of software engineering. With the rapid evolution of Artificial Intelligence, researchers and industry have turned to Machine Learning and Deep Learning models to optimize tasks, automate pipelines, and build intelligent systems. The capabilities of Artificial Intelligence make it possible to imitate, and in some cases even outperform, human intelligence, as well as to automate manual tasks while raising accuracy, quality, and efficiency. Accomplishing software-related tasks requires specific domain knowledge, expertise, and skills. Thanks to the powerful capabilities of Artificial Intelligence, we can infer that expertise from historical data representing past experience, using machine learning and deep learning techniques. This would alleviate the burden on software specialists, freeing them from routine chores and allowing them to devote more time to valuable activities that require creativity, intuition, and human intelligence. In particular, Model-Driven Engineering is a subfield of computer science that aims to raise the abstraction level of languages, automate the production of applications, and focus more on domain specificities. This shifts the effort spent on implementation and low-level programming to a higher level centred on design, architecture, and decision making, thereby increasing the quality, efficiency, and productivity of application development. The design of metamodels is a central task in Model-Driven Engineering, so it is important to maintain a high level of metamodel quality, since metamodels are a primary and fundamental artifact. However, bad design choices, together with repeated design modifications driven by evolving requirements, can degrade metamodel quality, and the accumulation of bad design choices and quality degradation can lead to negative outcomes in the long term. Refactoring metamodels is therefore an important task: it aims to improve and maintain good quality characteristics of metamodels such as maintainability, reusability, and extensibility. Moreover, metamodel refactoring is delicate and complex, especially when dealing with large designs, so automating the task, or assisting architects with it, is highly beneficial: metamodel architects can then concentrate on more valuable tasks that require human intuition. In this thesis, we propose a cartography of the tasks that could be automated or improved using Artificial Intelligence techniques. We then select the metamodeling task and tackle the problem of metamodel refactoring. We suggest two different approaches: the first uses a genetic algorithm to optimize a set of quality attributes and recommend candidate metamodel refactoring solutions; the second, based on mathematical logic, defines a specification of the input metamodel, encodes the quality attributes and the absence of design smells as a set of constraints, and satisfies these constraints using Alloy.
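    To make the first approach concrete, the following is a minimal sketch of how a genetic algorithm can search for candidate refactoring sequences. The operation names and the fitness surrogate are hypothetical placeholders for the quality attributes (maintainability, reusability, extensibility) that would actually be measured on a metamodel; it illustrates the technique, not the thesis implementation.

        import random

        # Hypothetical refactoring operations a candidate solution may apply to a metamodel.
        OPERATIONS = ["extract_superclass", "pull_up_attribute", "merge_classes", "flatten_hierarchy"]

        def random_individual(length=5):
            """An individual is a candidate sequence of refactoring operations."""
            return [random.choice(OPERATIONS) for _ in range(length)]

        def fitness(individual):
            """Toy surrogate for the aggregated quality attributes.

            A real implementation would apply the sequence to the metamodel and
            recompute quality metrics; here we merely reward operation diversity
            and penalize long sequences."""
            return len(set(individual)) - 0.1 * len(individual)

        def evolve(pop_size=20, generations=30):
            population = [random_individual() for _ in range(pop_size)]
            for _ in range(generations):
                population.sort(key=fitness, reverse=True)
                survivors = population[: pop_size // 2]    # elitist selection
                children = []
                while len(children) < pop_size - len(survivors):
                    a, b = random.sample(survivors, 2)
                    cut = random.randrange(1, len(a))      # one-point crossover
                    child = a[:cut] + b[cut:]
                    if random.random() < 0.2:              # mutation
                        child[random.randrange(len(child))] = random.choice(OPERATIONS)
                    children.append(child)
                population = survivors + children
            return max(population, key=fitness)

        if __name__ == "__main__":
            print(evolve())

    In practice the fitness would combine several quality metrics into a multi-objective score, which is what makes a search-based technique attractive for this kind of recommendation.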

    Model-Driven Methodology for Rapid Deployment of Smart Spaces based on Resource-Oriented Architectures

    Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as the Internet of Things (IoT) and the Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices, with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers applying this Internet-oriented approach need a solid understanding of specific platforms and web technologies. To ease this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). The methodology aims to enable the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at the hardware level. The feasibility of ROOD is demonstrated by building an adaptive health monitoring service for a Smart Gym.
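    As a hedged illustration of the resource-oriented style that ROOD targets, the sketch below exposes hypothetical sensors and actuators of a smart gym as web-addressable resources. The URIs, device names, and readings are invented, and the actual ROOD tooling generates such artifacts from models rather than by hand.

        import json
        import random

        class Resource:
            """A sensing or actuating capability exposed at a web-style URI."""
            def __init__(self, uri, read=None, write=None):
                self.uri, self._read, self._write = uri, read, write

            def get(self):
                return {"uri": self.uri, "value": self._read()} if self._read else {"uri": self.uri}

            def put(self, value):
                if self._write:
                    self._write(value)

        class SmartSpace:
            """Registry that a generated gateway could expose over HTTP/REST."""
            def __init__(self):
                self.resources = {}

            def register(self, resource):
                self.resources[resource.uri] = resource

            def get(self, uri):
                return json.dumps(self.resources[uri].get())

        space = SmartSpace()
        space.register(Resource("/gym/treadmill-1/heart-rate", read=lambda: random.randint(60, 180)))
        space.register(Resource("/gym/room-a/fan", write=lambda v: print("fan set to", v)))

        print(space.get("/gym/treadmill-1/heart-rate"))    # read a sensor resource
        space.resources["/gym/room-a/fan"].put("on")        # actuate an actuator resource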

    MINERVA: Model drIveN and sErvice oRiented framework for the continuous improVement of business process & relAted tools

    Organizations are facing several challenges nowadays, one of the most important being their ability to react quickly to changes either to their business process (BP) models or to the software implementing them. These changes can come from different sources: external requirements from partners or the market, new internal requirements on the way things are carried out by the defined BPs, or improvement opportunities detected for the defined BPs, based on the monitoring and evaluation of BP execution carried out by the organization and/or its partners and customers. The increasing complexity of both BP models and the software implementing them requires that the needed changes or improvements be carefully weighed against the impact their introduction will have; they also ought to be carried out in a systematic way to ensure a successful development. Two elements are key to providing this: the separation of BP definition from implementation, to minimize the impact of changes in one on the other, and a process for introducing the changes or improvements into the existing BPs and/or the software implementing them. Business Process Management (BPM) provides the means for guiding and supporting the modeling, implementation, deployment, execution and evaluation of BPs in an organization, based on the BP lifecycle. The realization of BPs by means of services provides the basis for separating their definition from the technologies implementing them and helps provide a better response to changes in either of the two layers (definition and implementation of business processes) with minimum impact on the other. Modeling both BPs and services is a key aspect of supporting this vision, helping to provide traceability between elements from one area to the other and thus easing the analysis of the impact of changes, among other things. Models have proven to play an important role in the software development process; one of their key uses in the context of BP realization by means of services is designing services at a more abstract level than with specific technologies, also promoting reuse by separating service logic from its implementation. MINERVA (Model drIveN & sErvice oRiented framework for the continuous business process improVement & relAted tools) is the framework defined in this thesis work. It takes into account all the aspects mentioned, applying the Service Oriented Computing (SOC) and Model Driven Development (MDD) paradigms to BPs with a focus on their continuous improvement, and extending an existing BP lifecycle with explicit execution-measurement and improvement activities and elements. It is made up of three dimensions: i) conceptual, which defines the concepts that are managed throughout the framework; ii) methodological, which defines a methodology for service-oriented development from BPs, with automatic generation of SoaML service models from BPMN2 models, along with a continuous improvement process based on measuring the execution of BP occurrences in the organization to guide the improvement effort; and iii) tool support for the whole proposal, based on several existing tools we have integrated along with new ones we have developed. The proposals in MINERVA have been validated by means of an experiment and two case studies carried out in the context of real projects in two organizations. The main conclusion from these applications is that MINERVA can be a useful and key guide for the continuous improvement of BPs realized by services and for the development of service-oriented systems from BPs, with automatic generation of service models from BP models.
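    As a rough, illustrative sketch of the kind of mapping the methodological dimension automates, the snippet below groups automated BPMN2 tasks by participant into service-interface candidates. The input structure and naming rules are simplified assumptions and do not reproduce MINERVA's actual BPMN2-to-SoaML transformation.

        from collections import defaultdict

        bpmn_tasks = [
            # (pool/participant, task name, task type) - invented example process
            ("Sales", "Register order", "serviceTask"),
            ("Sales", "Confirm order", "serviceTask"),
            ("Warehouse", "Check stock", "serviceTask"),
            ("Sales", "Notify customer", "manualTask"),   # manual tasks are skipped
        ]

        def service_candidates(tasks):
            """Group automated tasks by participant into one service interface each."""
            interfaces = defaultdict(list)
            for participant, name, task_type in tasks:
                if task_type == "serviceTask":
                    operation = name.lower().replace(" ", "_")
                    interfaces[f"{participant}Service"].append(operation)
            return dict(interfaces)

        print(service_candidates(bpmn_tasks))
        # {'SalesService': ['register_order', 'confirm_order'], 'WarehouseService': ['check_stock']}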

    Formal transformation methods for automated fault tree generation from UML diagrams

    With the growing complexity of safety-critical systems, engaging Systems Engineering with System Safety Engineering as early as possible in the system life cycle becomes ever more important for ensuring system safety during system development. Assessing the safety and reliability of the system architectural design early in the life cycle can bring value by identifying safety issues sooner and maintaining safety traceability throughout the design phase. However, this is not a trivial task and can require upfront investment. Automated transformation from system architecture models to system safety and reliability models offers a potential solution, but existing methods lack a formal basis, which can lead to unreliable results. Without a formal basis, Fault Tree Analysis of a system, for example, may not cover all safety-critical aspects of the design even when performed concurrently with system design. [Continues.]
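    As an illustration of the general idea only (not the formal transformation method proposed in the thesis), the sketch below derives a simple fault tree from "requires" dependencies that could be read from a UML component diagram, treating each component's failure as an OR gate over its own failure and the failures of its suppliers. Component names and probabilities are invented.

        # "requires" edges that could be read from a UML component diagram
        requires = {
            "BrakeController": ["SpeedSensor", "HydraulicUnit"],
            "SpeedSensor": [],
            "HydraulicUnit": ["PowerSupply"],
            "PowerSupply": [],
        }
        # per-component basic-event probabilities (illustrative values)
        failure_prob = {
            "BrakeController": 1e-6,
            "SpeedSensor": 1e-4,
            "HydraulicUnit": 1e-5,
            "PowerSupply": 1e-4,
        }

        def top_event_probability(component):
            """OR-gate over the component's own failure and its suppliers' failures
            (rare-event approximation: probabilities are simply summed)."""
            p = failure_prob[component]
            for dep in requires[component]:
                p += top_event_probability(dep)
            return p

        print(f"P(BrakeController fails) ~ {top_event_probability('BrakeController'):.2e}")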

    A heuristic-based approach to code-smell detection

    Encapsulation and data hiding are central tenets of the object-oriented paradigm. Deciding what data and behaviour to form into a class, and where to draw the line between its public and private details, can make the difference between a class that is an understandable, flexible and reusable abstraction and one which is not. This decision is difficult and may easily result in poor encapsulation, which can have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can be tedious and error prone. Two common encapsulation problems that arise from this decomposition process are data classes and god classes. Typically, the two occur together: data classes lack functionality that has been sucked into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, which automatically detects data and god classes. The technique has been evaluated in a controlled study on two large open source systems, comparing the tool's results to similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
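    The following sketch shows threshold-based detection in the spirit of such heuristics. The metrics echo Marinescu-style detection strategies (WMC, ATFD, TCC), but the thresholds and the per-class numbers below are illustrative assumptions, not the tool's actual rules.

        from dataclasses import dataclass

        @dataclass
        class ClassMetrics:
            name: str
            wmc: int      # weighted methods per class (overall complexity)
            atfd: int     # accesses to foreign data
            tcc: float    # tight class cohesion, in [0, 1]
            noa: int      # number of attributes
            nom: int      # number of non-accessor methods

        def is_god_class(m, wmc_high=47, atfd_few=5, tcc_low=1 / 3):
            # complex, uses lots of foreign data, and poorly cohesive
            return m.wmc >= wmc_high and m.atfd > atfd_few and m.tcc < tcc_low

        def is_data_class(m):
            # many attributes but almost no behaviour of its own
            return m.noa >= 5 and m.nom <= 2

        classes = [
            ClassMetrics("OrderManager", wmc=80, atfd=12, tcc=0.10, noa=4, nom=35),
            ClassMetrics("OrderRecord", wmc=3, atfd=0, tcc=0.00, noa=9, nom=1),
        ]
        for c in classes:
            smells = [s for s, hit in (("god class", is_god_class(c)), ("data class", is_data_class(c))) if hit]
            print(c.name, "->", smells or "no smell detected")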

    Configuration management for models : generic methods for model comparison and model co-evolution

    It is an undeniable fact that software plays an important role in our lives. We use software to play our music, to check our e-mail, or even to help us drive our car. Thus, the quality of software directly influences the quality of our lives. However, the traditional Software Engineering (SE) paradigm is unable to cope with the increasing demands on the quantity and quality of produced software, and a new paradigm, Model Driven Software Engineering (MDSE), is quickly gaining ground. MDSE promises to solve some of the problems of traditional SE by raising the level of abstraction: it proposes the use of models and model transformations, instead of the textual program files used in traditional SE, as the means of producing software. The models are usually graph-based and are built using graphical notations, i.e., they are represented diagrammatically. The advantages of graphical models over text files are numerous; for example, it is usually easier to deduce the relations between different model elements in their diagrammatic form, which reduces the possibility of defects during the production of the software. Furthermore, formal model transformations can be used to produce different kinds of artifacts from models in all stages of software production, for example artifacts that can serve as input for model checkers or simulation tools. This enables the checking or simulation of software products in the early phases of development, which further reduces the probability of defects in the final product. However, methods and techniques to support MDSE are still not mature enough. In particular, methods and techniques for model configuration management (MCM) are still in development, and no generic MCM system exists. In this thesis, I describe my research on developing methods and techniques to support generic model configuration management, focusing on model evolution and model co-evolution. The described methods and techniques are generic and suitable for a state-based approach to model configuration management. To support model evolution, I developed methods for the representation, calculation, and visualization of state-based model differences. Unlike previously published research, where these three aspects of model differences are dealt with separately, in my research all three aspects are integrated: the result of the model-difference calculation algorithm is expressed in the representation format defined by my research, and the same format is the basis of my approach to visualizing differences. It is important to note that the developed representation format for model differences is metamodel independent, and thus generic, i.e., it can be used to represent differences between any graph-based models. Model co-evolution is the problem of adapting models when their metamodels evolve. My solution to this problem has three steps. In the first step, a special metamodel MMfMM is introduced. Unlike in traditional approaches, where metamodels are represented as instances of a metametamodel, in my approach metamodels are represented by models which are instances of the MMfMM. In the second step, since metamodels are represented by models, the previously defined methods and techniques for model evolution are reused to represent and calculate the metamodel differences. In the final step, I define an algorithm that uses the calculated metamodel differences to adapt models conforming to the evolved metamodel. To validate my approaches to model evolution and model co-evolution, I have developed a tool for comparing models and visualizing the resulting differences, and a tool for model co-evolution. Moreover, I have developed a method for comparing model-comparison tools and, using this method, conducted a series of experiments in which I compared my tool to an industrial tool called EMFCompare. Furthermore, to validate my tool and approach to model co-evolution, I have specified and conducted several additional experiments. The results of all these experiments are presented in the thesis.
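    A minimal, metamodel-independent illustration of state-based model differencing is sketched below: each model version is a snapshot mapping element identifiers to their properties, and the difference is reported as added, removed, and changed elements. The element contents are invented, and the thesis's representation format and matching heuristics are not reproduced.

        def diff(old, new):
            """Compute a state-based difference between two model snapshots."""
            added   = {i: new[i] for i in new.keys() - old.keys()}
            removed = {i: old[i] for i in old.keys() - new.keys()}
            changed = {i: (old[i], new[i]) for i in old.keys() & new.keys() if old[i] != new[i]}
            return {"added": added, "removed": removed, "changed": changed}

        # snapshots keyed by stable element identifiers (invented example)
        old_model = {
            "c1": {"kind": "Class", "name": "Customer", "abstract": False},
            "c2": {"kind": "Class", "name": "Order", "abstract": False},
        }
        new_model = {
            "c1": {"kind": "Class", "name": "Customer", "abstract": True},   # changed
            "c3": {"kind": "Class", "name": "Invoice", "abstract": False},   # added
        }
        print(diff(old_model, new_model))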

    Interoperability of Enterprise Software and Applications


    Multivariate analysis as a tool for understanding and reducing the complexity of mathematical models in systems biology

    In the area of systems biology, technologies develop very fast, which allows us to collect massive amounts of diverse data. The main interest of scientists is to gain insight into the obtained data sets and discover their inherent properties. Since the data are often rather complex and intimidating equations may be required for modelling, data analysis can be quite challenging for the majority of bio-scientists who do not master advanced mathematics. In this thesis it is proposed to use multivariate statistical methods as a tool for understanding the properties of the complex models used to describe biological systems. The multivariate methods employed in this thesis search for latent variables that form a basis of all processes in a system. This often reduces the dimensionality of the system and makes it easier to get the whole picture of what is going on. Thus, in this work, methods of multivariate analysis were used with a descriptive purpose in Papers I and IV to discover the effects of input variables on a response. Often it is necessary to know a functional form that could have generated the collected data in order to study the behaviour of the system when one or another parameter is tuned. For this purpose, we propose the Direct Look-Up (DLU) approach, which is claimed here to be a worthy alternative to existing fitting methods due to its high computational speed and its ability to avoid many problems such as subjectivity, the choice of initial values, local optima, and so on (Papers II and III). Another aspect covered in this thesis is the interpretation of function parameters in everyday human language with the use of multivariate analysis. This would enable mathematicians and bio-scientists to understand each other when describing the same object. It was accomplished here by using the concept of a metamodel together with sensory analysis in Paper IV. In Paper I, a similar approach was used, even though the main focus of that paper was slightly different: its original aim was to show the advantages of the multi-way GEMANOVA analysis over traditional ANOVA analysis for certain types of data, but, in addition, the relationship between human profiling of data samples and function parameters was discovered. In situations where funds for conducting experiments are limited and it is impossible to study all possible parameter combinations, it is necessary to have a smart way of choosing a few, but the most representative, conditions for a particular system. In Paper V, the Multi-level Binary Replacement (MBR) design was developed for this purpose; it can also be used to search for a relevant parameter range. This new design method was applied in Papers II and IV for the selection of samples for further analyses.
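    To illustrate the Direct Look-Up idea numerically, the sketch below precomputes model curves over a grid of parameter values and then "fits" noisy data by picking the nearest precomputed curve, avoiding iterative optimisation, initial-value choices, and local optima. The toy saturation model and the grid are invented for illustration and are not the models studied in the papers.

        import numpy as np

        t = np.linspace(0, 5, 50)

        def model(rate, scale):
            return scale * (1 - np.exp(-rate * t))   # toy saturation model

        # build the look-up table once (the expensive, offline step)
        grid = [(r, s) for r in np.linspace(0.1, 3.0, 30) for s in np.linspace(0.5, 5.0, 30)]
        table = np.array([model(r, s) for r, s in grid])

        def dlu_fit(observed):
            """Return the grid point whose precomputed curve is closest to the data."""
            errors = np.sum((table - observed) ** 2, axis=1)
            return grid[int(np.argmin(errors))]

        rng = np.random.default_rng(0)
        data = model(1.2, 2.0) + rng.normal(0, 0.05, t.size)   # pretend measurements
        print("estimated (rate, scale):", dlu_fit(data))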