
    Improving Quality Assurance in Multidisciplinary Engineering Environments with Semantic Technologies

    In multidisciplinary engineering (MDE) projects, for example automation systems or manufacturing systems, stakeholders from various disciplines, such as electrical, mechanical, and software engineering, have to collaborate. In industry practice, engineers apply individual and highly specialized tools with strong limitations regarding defect detection in early engineering phases. Experts typically execute reviews with limited tool support, which makes engineering projects defective and risky. Semantic Web Technologies (SWTs) can help to bridge the gap between heterogeneous data sources as a foundation for efficient and effective defect detection. The main questions focus on (a) how to bridge gaps between loosely coupled tools and incompatible data models and (b) how SWTs can support efficient and effective defect detection in the context of engineering process improvement. This chapter describes success-critical requirements for defect detection in MDE and shows how SWTs can provide the foundation for early and efficient defect detection with an adapted review approach. The proposed defect detection framework (DDF) suggests different levels of SWT contribution as a roadmap for engineering process improvement. Two selected industry-related real-life cases show different levels of SWT involvement. Although SWTs have been successfully applied in real-life use cases, SWT applications can be risky if applied without a good understanding of success factors and limitations.
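
    As a hedged illustration of how SWTs could link heterogeneous engineering data for defect detection (not the chapter's actual DDF implementation), the following Python sketch uses the rdflib library: signals exported from an electrical plan and a software configuration are represented as RDF triples, and a SPARQL query flags signals that have no software mapping. All names and namespaces are illustrative assumptions.

    # Hypothetical sketch: represent two discipline-specific data sources as RDF
    # and use a SPARQL query to flag signals that the electrical plan defines
    # but the software configuration never references (a cross-discipline defect).
    from rdflib import Graph, Namespace, RDF

    ENG = Namespace("http://example.org/engineering#")
    g = Graph()

    # Facts extracted from the electrical engineering tool (assumed export).
    g.add((ENG.Signal_Temp01, RDF.type, ENG.Signal))
    g.add((ENG.Signal_Temp01, ENG.definedIn, ENG.ElectricalPlan))
    g.add((ENG.Signal_Press02, RDF.type, ENG.Signal))
    g.add((ENG.Signal_Press02, ENG.definedIn, ENG.ElectricalPlan))

    # Facts extracted from the software engineering tool.
    g.add((ENG.Signal_Temp01, ENG.mappedTo, ENG.PLC_Variable_T1))

    # Defect check: electrically defined signals with no software mapping.
    query = """
    PREFIX eng: <http://example.org/engineering#>
    SELECT ?signal WHERE {
        ?signal a eng:Signal ;
                eng:definedIn eng:ElectricalPlan .
        FILTER NOT EXISTS { ?signal eng:mappedTo ?var }
    }
    """
    for row in g.query(query):
        print(f"Potential defect: {row.signal} has no software mapping")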

    Model-Based Systems Engineering Approach to Distributed and Hybrid Simulation Systems

    INCOSE defines Model-Based Systems Engineering (MBSE) as the formalized application of modeling to support system requirements, design, analysis, verification, and validation activities, beginning in the conceptual design phase and continuing throughout development and later life-cycle phases. One important development is the use of MBSE to build distributed and hybrid (discrete-continuous) simulation systems. MBSE helps describe the systems to be modeled and supports the design decisions and partitioning needed to tame complexity. The ability to apply conceptual modeling and interoperability techniques during system specification and design is a great advantage in distributed and hybrid simulation development efforts. Our research aims at the definition of a methodological framework that uses MBSE languages, methods, and tools for the development of these simulation systems. A model-based composition approach is defined in the initial steps to identify distributed-system interoperability requirements and hybrid simulation characteristics. Guidelines are developed to adopt simulation interoperability standards and conceptual modeling techniques using MBSE methods and tools. Domain-specific system complexity and behavior can be captured with model-based approaches during system architecture and functional design requirements definition. MBSE allows simulation engineers to formally model different aspects of a problem, ranging from architectures to corresponding behavioral analysis, functional decompositions, and user requirements (Jobe, 2008).
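
    To make the distinction between discrete and continuous dynamics concrete, the sketch below shows a minimal hybrid simulation loop in Python: a continuous state is integrated with fixed Euler steps between discrete events that change the dynamics. This is a generic illustration under assumed names (tank level, valve events), not the methodological framework described in the abstract.

    # Minimal hybrid-simulation sketch: advance a continuous state (tank level,
    # Euler integration) between discrete events (valve open/close) drawn from
    # a time-ordered event list.
    import heapq

    def hybrid_simulate(t_end=10.0, dt=0.01):
        level = 0.0          # continuous state: tank level
        inflow = 0.0         # piecewise-constant input set by discrete events
        events = [(1.0, "open_valve"), (6.0, "close_valve")]
        heapq.heapify(events)

        t = 0.0
        while t < t_end:
            # Next discrete event time (or end of simulation).
            t_next = events[0][0] if events else t_end
            # Integrate the continuous dynamics up to the next event.
            while t < min(t_next, t_end):
                level += inflow * dt      # dLevel/dt = inflow
                t += dt
            # Handle the discrete event, which changes the continuous dynamics.
            if events and t >= events[0][0]:
                _, action = heapq.heappop(events)
                inflow = 0.5 if action == "open_valve" else 0.0
        return level

    print(f"Final tank level: {hybrid_simulate():.2f}")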

    Certifications of Critical Systems – The CECRIS Experience

    In recent years, a considerable amount of effort has been devoted, both in industry and academia, to the development, validation, and verification of critical systems, i.e. those systems whose malfunctions or failures reach a critical level both in terms of risk to human life and of economic impact. Certifications of Critical Systems – The CECRIS Experience documents the main insights on Cost Effective Verification and Validation processes that were gained during work in the European research project CECRIS (Certification of Critical Systems). The objective of the research was to tackle the challenges of certification by focusing on those aspects that turn out to be most difficult and most important for the current and future critical systems industry: the effective use of methodologies, processes, and tools. The CECRIS project took a step forward in the growing field of development, verification, validation, and certification of critical systems. Starting from both the scientific and industrial state-of-the-art methodologies for system development and the impact of their usage on the verification, validation, and certification of critical systems, the project aimed at developing strategies and techniques, supported by automatic or semi-automatic tools and methods, and at setting guidelines to support engineers during the planning of the verification and validation phases.

    Towards using intelligent techniques to assist software specialists in their tasks

    Automation and intelligence are major preoccupations in the field of software engineering. With the rapid evolution of Artificial Intelligence, researchers and industry have turned to machine learning and deep learning models to optimize tasks, automate pipelines, and build intelligent systems. The capabilities of Artificial Intelligence make it possible to imitate, and in some cases outperform, human intelligence, and to automate manual tasks while raising accuracy, quality, and efficiency. Accomplishing software-related tasks requires specific domain knowledge and skills. Thanks to machine learning and deep learning techniques applied to historical data representing past experience, this expertise can be inferred automatically, which alleviates the burden on software specialists and lets them devote more time to valuable activities. In particular, Model-Driven Engineering is a subfield of computer science that aims to raise the abstraction level of languages, automate the production of applications, and focus more on domain specificities. This shifts the effort from implementation and low-level programming to a higher point of view focused on design, architecture, and decision making, thereby increasing the quality, efficiency, and productivity of application development. The design of metamodels is a substantial task in Model-Driven Engineering, and it is important to maintain high-quality metamodels because they constitute a primary and fundamental artifact. However, bad design choices, as well as repetitive design modifications due to continuously evolving requirements, can deteriorate metamodel quality, and the accumulation of such degradation can have negative long-term consequences. Refactoring metamodels is therefore an important task: it aims to improve and maintain quality characteristics of metamodels such as maintainability, reusability, and extensibility. Moreover, metamodel refactoring is complex, especially for large designs, so automating the task or assisting architects with it is advantageous, since it lets them focus on work that requires creativity, intuition, and human intelligence. In this thesis, we propose a cartography of tasks that could be automated or improved using Artificial Intelligence techniques. We then select the metamodeling task and tackle the problem of metamodel refactoring. We suggest two different approaches: a first approach that uses a genetic algorithm to optimize a set of quality attributes and recommend candidate metamodel refactoring solutions, and a second approach, based on mathematical logic, that defines the specification of an input metamodel, encodes the quality attributes and the absence of design smells as a set of constraints, and satisfies these constraints using Alloy.
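
    As a rough illustration of the first approach (a genetic algorithm optimizing quality attributes), the Python sketch below evolves a sequence of metamodel refactorings against a placeholder fitness function. The refactoring names and the scoring are assumptions for illustration, not the thesis' actual encoding.

    # Illustrative sketch only: a tiny genetic algorithm that searches for a
    # sequence of metamodel refactorings maximizing a made-up quality score.
    import random

    REFACTORINGS = ["extract_superclass", "pull_up_attribute",
                    "merge_equivalent_classes", "remove_unused_class"]

    def quality(solution):
        # Placeholder fitness: rewards diverse refactorings and penalizes long
        # sequences; a real fitness would measure maintainability, reusability,
        # and the number of design smells removed from the metamodel.
        return len(set(solution)) - 0.1 * len(solution)

    def crossover(a, b):
        cut = random.randint(1, min(len(a), len(b)) - 1)
        return a[:cut] + b[cut:]

    def mutate(solution, rate=0.2):
        return [random.choice(REFACTORINGS) if random.random() < rate else step
                for step in solution]

    def evolve(pop_size=20, length=6, generations=50):
        population = [[random.choice(REFACTORINGS) for _ in range(length)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=quality, reverse=True)
            parents = population[: pop_size // 2]
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=quality)

    print("Recommended refactoring sequence:", evolve())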

    A Data-Driven Approach for Modeling Agents

    Agents are commonly built from a set of simple rules driven by theories, hypotheses, and assumptions. Such a modeling premise makes limited use of real-world data and is challenged when modeling real-world systems because it lacks empirical grounding. At the same time, the last decade has witnessed the production and availability of large-scale data from various sensors that carry behavioral signals. These data sources have the potential to change the way we create agent-based models: from being driven by simple rules to being driven by data. Despite this opportunity, the literature offers no modeling approach for generating granular agent behaviors from data. This dissertation proposes a novel data-driven approach for modeling agents to bridge that gap. The approach is composed of four steps: data preparation, attribute model creation, behavior model creation, and integration. The connections between and within these steps are established using data flow diagrams. The practicality of the approach is demonstrated with a human mobility model that uses millions of location footprints collected from social media. In this model, the generation of movement behavior is tested with five machine learning and statistical modeling techniques covering a large number of model/data configurations. Results show that Random Forest-based learning is the most effective for the mobility use case. Furthermore, agent attribute values are obtained or generated with machine learning and translational assignment techniques. The proposed approach is evaluated in two ways. First, the use case model is compared to a model developed with a state-of-the-art data-driven approach: its prediction performance is comparable, while the plausibility of its behaviors and model structure is found to be closer to the real world, indicating that the proposed approach produces realistic results. Second, a standard mobility dataset is used to drive the mobility model in place of social media data. Despite its small size, the data and the resulting model resembled the results gathered from the primary use case, indicating that different datasets can be used with the proposed approach.
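
    As a hedged sketch of the kind of behavior-model training the abstract describes (not the dissertation's code or data), the following Python example fits a scikit-learn Random Forest that predicts an agent's next zone from simple features of its location footprints; the features, zones, and synthetic data are assumptions.

    # Hedged sketch: train a Random Forest to predict an agent's next location
    # from simple features of prior location footprints (synthetic data).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n = 5000

    # Each row: [hour_of_day, day_of_week, current_zone]; label: next_zone.
    X = np.column_stack([
        rng.integers(0, 24, n),    # hour of day
        rng.integers(0, 7, n),     # day of week
        rng.integers(0, 10, n),    # current zone id
    ])
    # Synthetic rule: agents tend to return to zone 0 ("home") in the evening.
    y = np.where(X[:, 0] >= 18, 0, (X[:, 2] + 1) % 10)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    print(f"Next-zone prediction accuracy: {model.score(X_test, y_test):.2f}")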
