
    Contract-Based Design of Dataflow Programs

    Quality and correctness are becoming increasingly important aspects of software development, as our reliance on software systems in everyday life continues to grow. Highly complex software systems are today found in critical appliances such as medical equipment, cars, and telecommunication infrastructure. Failures in these kinds of systems may have disastrous consequences. At the same time, modern computer platforms are increasingly concurrent, as the computational capacity of modern CPUs is improved mainly by increasing the number of processor cores. Computer platforms are also becoming increasingly parallel, distributed and heterogeneous, often involving special processing units, such as graphics processing units (GPU) or digital signal processors (DSP), for performing specific tasks more efficiently than is possible on general-purpose CPUs. These modern platforms allow implementing increasingly complex functionality in software. Cost-efficient development of software that exploits the power of such platforms while ensuring correctness is, however, a challenging task. Dataflow programming has become popular for developing safety-critical software in many domains in the embedded community. For instance, in the automotive domain, the dataflow language Simulink has become widely used in model-based design of control software. However, for more complex functionality, this model of computation may not be expressive enough. In the signal processing domain, more expressive, dynamic models of computation have attracted much attention. These models of computation have, however, not gained as significant uptake in safety-critical domains, largely because it is challenging to provide guarantees regarding, e.g., timing or determinism under these more expressive models of computation. Contract-based design has become a widespread approach to specifying and verifying correctness properties of software components. 
A contract consists of assumptions (preconditions) regarding the input data and guarantees (postconditions) regarding the output data. By verifying a component with respect to its contract, it is ensured that the output fulfils the guarantees, assuming that the input fulfils the assumptions. While contract-based verification of traditional object-oriented programs has been researched extensively, verification of asynchronous dataflow programs has not been researched to the same extent. In this thesis, a contract-based design framework tailored specifically to dataflow programs is proposed. The proposed framework supports both an extensive subset of the discrete-time synchronous language Simulink and a more general, asynchronous and dynamic dataflow language. The proposed contract-based verification techniques are automatic, guided only by user-provided invariants, and based on encoding dataflow programs in existing, mature verification tools for sequential programs, such as the Boogie guarded command language and its associated verifier. It is shown how dataflow programs, with components implemented in an expressive programming language with support for matrix computations, can be efficiently encoded in such a verifier. Furthermore, it is also shown that contract-based design can be used to improve the runtime performance of dataflow programs by allowing more scheduling decisions to be made at compile time. All the proposed techniques have been implemented in prototype tools and evaluated on a large number of different programs. The evaluation showed that the methods work in practice and scale to real-world programs.
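As a minimal illustration of the contract notion described above (a hypothetical sketch in Python; the thesis itself encodes dataflow programs into a verifier such as Boogie rather than checking contracts at runtime), a dataflow actor's firing function can be wrapped with its assumption on the consumed input tokens and its guarantee on the produced output tokens:

```python
def check_contract(pre, post, fire):
    """Wrap a dataflow actor's firing function with runtime contract checks."""
    def checked(inputs):
        # Assumption (precondition) on the consumed input tokens.
        assert pre(inputs), "assumption (precondition) violated"
        outputs = fire(inputs)
        # Guarantee (postcondition) on the produced output tokens.
        assert post(inputs, outputs), "guarantee (postcondition) violated"
        return outputs
    return checked

# Hypothetical actor: consumes two tokens and produces their mean.
mean_actor = check_contract(
    pre=lambda ins: len(ins) == 2,
    post=lambda ins, outs: min(ins) <= outs[0] <= max(ins),
    fire=lambda ins: [sum(ins) / 2],
)

print(mean_actor([2, 4]))  # [3.0]
```

Static contract-based verification, as proposed in the thesis, differs in that it proves the guarantee from the assumption for all possible inputs, instead of checking individual firings at runtime.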

    Development of a Framework for CPS Open Standards and Platforms

    This technical report describes a Framework we have developed through our research and investigations in this project, with the goal of facilitating the creation of Open Standards and Platforms for CPS, a task that addresses a critical mission for NIST. The rapid development of information technology (in terms of processing power, embedded hardware and software systems, comprehensive IT management systems, networking and Internet growth, and system design environments) is producing an increasing number of applications and opening new doors. In addition, over the last decade we have entered a new era in which system complexity has increased dramatically. Complexity increases both with the number of components included in each system and with the dependencies between those components. Systems are increasingly software dependent, and this is a major challenge for the engineers who develop them. The challenge is even greater when a safety-critical system is considered, such as an airplane or a passenger car. Software-intensive systems and devices have become everyday consumables. There is a need for the development of software that is provably error-free. Thanks to their multifaceted support for networking and the inclusion of data and services from global networks, systems are evolving to form integrated, overarching solutions that are increasingly penetrating all areas of life and work. When software-dependent systems interact with the physical environment, we obtain the class of cyber-physical systems (CPS) [1, 2]. The challenge in CPS is to incorporate the inputs (and their characteristics and constraints) from the physical components into the logic of the cyber components (hardware and software). CPS are engineered systems constructed as networked interactions of physical and computational (cyber) components. 
In CPS, computations and communication are deeply embedded in and interact with physical processes, adding new capabilities to physical systems. Competitive pressure and societal needs drive industry to design and deploy airplanes and cars that are more energy efficient and safe, medical devices and systems that are more dependable, and defense systems that are more autonomous and secure. Whole industrial sectors are being transformed by new product lines that are CPS-based. Modern CPS are not simply the connection of two different kinds of components engineered by means of distinct design technology, but rather a new system category that is both physical and computational [1, 2]. Current industrial experience tells us that, in fact, we have reached the limits of our knowledge of how to combine computers and physical systems. The shortcomings range from technical limitations in the foundations of cyber-physical systems to the way we organize our industries and educate the engineers and scientists who support cyber-physical system design. If we continue to build systems using our very limited methods and tools while lacking the science and technology foundations, we will create significant risks, produce failures, and suffer loss of markets. Nowadays, with increasing frequency, we observe systems that cooperate to achieve a common goal even though they were not built for that purpose. These are called systems of systems. For example, the Global Positioning System (GPS) is a system by itself. However, it needs to cooperate with other systems when the air traffic control system of systems is under consideration. The analysis and development of such systems should be done carefully because of the emergent behavior that systems exhibit when they are coupled with other systems. However, apart from the increasing complexity and the other technical challenges, there is a need to decrease time-to-market for new systems as well as the associated costs. 
This specific trend and the associated requirements, which are an outcome of global competitiveness, are expected to continue and become even more stringent. If a successful contribution is to be made in shaping this change, the revolutionary potential of CPS must be recognized and incorporated into internal development processes at an early stage. For that, interoperability and integratability of CPS are critical. In this Task we have developed a Framework to facilitate interoperability and integratability of CPS via Open Standards and Platforms. The purpose of this technical report is to introduce this Framework and its critical components, to provide various instantiations of it, and to describe initial successful applications of it in various important classes of CPS. An additional goal of publishing this technical report is to solicit feedback on the proposed Framework, and to catalyze discussions and interactions in the broader CPS technical community towards improving and strengthening it. CPS integrate data and services from different systems which were developed independently and with disparate objectives, thereby enabling new functionalities and benefits. Currently there is a lack of well-defined interfaces that, on the one hand, define standards for the form and content of the data being exchanged and, on the other hand, take account of non-functional aspects of this data, such as differing levels of data quality or reliability. A similar situation exists with respect to tools and synthesis environments, although some work has been initiated on the latter. The technological prerequisite for the design of the aforementioned functions and value-added services of CPS is the interoperability and integratability of these systems, as well as their capability to be adapted flexibly and application-specifically and to be extended at the different levels of abstraction. 
Depending on the objective and scope of the application, it may be necessary to integrate component functions (Embedded Systems (ES), Systems of Systems (SoS), CPS), to establish communication and interfaces, and to ensure the required level of quality of interaction and of the overall system behavior. This requires cross-domain concepts for architecture, communication and compatibility at all levels. The effects of these factors on existing or yet undeveloped systems and architectures represent a major challenge. Investigation into these factors is the objective of current national and international studies and research projects. CPS create core technological challenges for traditional system architectures, especially because of their high degree of connectivity. This is because CPS are not constructed for one specific purpose or function, but rather are open for many different services and processes, and must therefore be adaptable. In view of their evolutionary nature, they are only controllable to a limited extent. This creates new demands for greater interoperability and communication within CPS that cannot be met by current closed systems. In particular, the differences in the characteristics of embedded systems in relation to IT systems, and of services and data in networks, lead to open questions concerning the form of architectures, the definition of system and communication interfaces, and the requirements for underlying CPS platforms with basic services and parallel architectures at different levels of abstraction. The technological developments underlying CPS evolution require the development of standards in the individual application domains, as well as basic infrastructure investments that cannot be borne by individual companies alone. This is particularly significant for SMEs. The development and operation of uniform platforms to migrate individual services and products will therefore be as much of a challenge as joint specification standards. 
The creation of such quasi-standards, less in the traditional mold of classic industry norms and standards and more in the sense of de facto standards that become established on the basis of technological and market dominance, will become an essential part of technological and market leadership. To summarize and emphasize, the complexity of the subject in terms of the required technologies and capabilities of CPS, as well as the capabilities and competences required to develop, control and design/create innovative, usable CPS applications, demands fundamentally integrated action, interdisciplinarity (research and development, economy and society) and vertical and horizontal efforts in: the creation of open, cross-domain platforms with fundamental services (communication, networking, interoperability) and architectures (including domain-specific architectures); the complementary expansion and integration of application fields and environments with vertical experimentation platforms and correspondingly integrated interdisciplinary efforts; and the systematic enhancement with respect to methods and technologies across all involved disciplines to create innovative CPS. The aim of our research and investigations under this Task of the project was precisely to clarify these objectives and systematically develop detailed recommendations for action. Our research and investigations have identified the following essential and fundamental challenges for the modeling, design, synthesis and manufacturing of CPS: (i) The creation and demonstration of a framework for developing cross-domain integrated modeling hubs for CPS. (ii) The creation and demonstration of a framework for linking the integrated CPS modeling hub of (i) with powerful and diverse tradeoff analysis methods and tools for design exploration for CPS. 
(iii) The creation of a framework for linking the integrated CPS synthesis environment of (i) and (ii) with databases of modular component and process (manufacturing) models, backwards compatible with earlier legacy systems. (iv) The creation of a framework for translating textual requirements into mathematical representations as constraints, rules and metrics involving both logical and numerical variables, and the automatic (at least to 75%) allocation of the resulting specifications to components of the CPS and of processes, in a way that allows traceability. These challenges have been listed here in order of increasing difficulty, both conceptually and in terms of arriving at implementable solutions. The order also reflects the extent to which the current state of affairs has made progress towards developing at least some initial instantiations of the desired frameworks. In this context, it is useful to compare with the advanced state of development of similar frameworks and their instantiations for the synthesis and manufacturing of complex microelectronic VLSI chips, including distributed ones, which have been available as integrated tools from several vendors for at least a decade. Regarding challenge (i), we have performed extensive work and research in this project towards developing model-based systems engineering (MBSE) procedures for the design, integration, testing and operational management of cyber-physical systems, that is, physical systems with cyber potentially embedded in every physical component. Thus, in the Framework described in this report for standards for integrated modeling hubs for CPS, MBSE methods and tools are prominent. Regarding the search for a framework for standards for CPS, this selection has the additional advantage that MBSE is also emerging as an accepted framework for systems engineering in all industry sectors with substantial interest in CPS [3, 7]. 
Regarding challenge (ii), we have performed extensive work and research in this project towards developing the foundations for such an integration, and we have developed and demonstrated the first ever integration of a powerful tradeoff analysis tool (and methodology) with our SysML-integrated system modeling environments for CPS synthesis [3, 7]. Primary applications in which we have instantiated this framework are: microgrids and power grids, wireless sensor networks (WSN) and applications to the Smart Grid, energy-efficient buildings, microrobotics and collaborative robotics, and the overarching (for all these applications) security and trust issues, including our pioneering and innovative work on compositional security systems. A key concept here is the integration of multi-criteria, multi-constraint optimization with constraint-based reasoning. Regarding challenge (iii), we have only developed the conceptual Framework, as any required instantiations will require substantial commercial-grade software development beyond the scope of this project. It is clear, however, that object-relational databases and database mediators (for both data and semantics) will have to be employed. Regarding challenge (iv), we have developed a Framework for checking and validating specifications after they have been translated into their mathematical representations as constraints and metrics with logical and numerical variables. Various multi-criteria optimization, constraint-based reasoning, model checking and automatic theorem proving tools will have to be combined. The automatic annotation of the system blocks with requirements and parameter specifications remains an open challenge. Research supported in part by Cooperative Agreement NIST 70NANB11H148 to the University of Maryland, College Park.
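Challenge (iv) can be sketched with a small, purely illustrative Python fragment (the requirement text, component names and numerical limits are all hypothetical): a textual requirement is translated into a constraint over logical and numerical variables and then checked against candidate configurations.

```python
# Textual requirement (hypothetical): "The controller shall respond within
# 50 ms and draw at most 2 W, and redundancy is required whenever the
# safety-critical flag is set."
def requirement(cfg):
    timing_ok = cfg["response_ms"] <= 50                          # numerical constraint
    power_ok = cfg["power_w"] <= 2.0                              # numerical constraint
    safety_ok = (not cfg["safety_critical"]) or cfg["redundant"]  # logical constraint
    return timing_ok and power_ok and safety_ok

# Hypothetical candidate component configurations.
candidates = [
    {"name": "ecu_a", "response_ms": 40, "power_w": 1.8,
     "safety_critical": True, "redundant": True},
    {"name": "ecu_b", "response_ms": 30, "power_w": 2.5,
     "safety_critical": False, "redundant": False},
]

feasible = [c["name"] for c in candidates if requirement(c)]
print(feasible)  # ['ecu_a']
```

In the Framework itself, such checks would be carried out at scale by constraint-based reasoning, model checking and theorem-proving tools rather than by hand-written predicates.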

    Proceedings of the Sixth NASA Langley Formal Methods (LFM) Workshop

    Today's verification techniques are hard-pressed to scale with the ever-increasing complexity of safety-critical systems. Within the field of aeronautics alone, we find the need for verification of algorithms for separation assurance, air traffic control, auto-pilot, Unmanned Aerial Vehicles (UAVs), adaptive avionics, automated decision authority, and much more. Recent advances in formal methods have made verifying more of these problems realistic. Thus we need to continually re-assess what we can solve now and identify the next barriers to overcome. Only through an exchange of ideas between theoreticians and practitioners, from academia to industry, can we extend formal methods to the verification of ever more challenging problem domains. This volume contains the extended abstracts of the talks presented at LFM 2008: The Sixth NASA Langley Formal Methods Workshop, held April 30 - May 2, 2008 in Newport News, Virginia, USA. The topics of interest listed in the call for abstracts were: advances in formal verification techniques; formal models of distributed computing; planning and scheduling; automated air traffic management; fault tolerance; hybrid systems/hybrid automata; embedded systems; safety critical applications; safety cases; accident/safety analysis.

    Proceedings of the Second NASA Formal Methods Symposium

    This publication contains the proceedings of the Second NASA Formal Methods Symposium sponsored by the National Aeronautics and Space Administration and held in Washington D.C. April 13-15, 2010. Topics covered include: Decision Engines for Software Analysis using Satisfiability Modulo Theories Solvers; Verification and Validation of Flight-Critical Systems; Formal Methods at Intel -- An Overview; Automatic Review of Abstract State Machines by Meta Property Verification; Hardware-independent Proofs of Numerical Programs; Slice-based Formal Specification Measures -- Mapping Coupling and Cohesion Measures to Formal Z; How Formal Methods Impels Discovery: A Short History of an Air Traffic Management Project; A Machine-Checked Proof of A State-Space Construction Algorithm; Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications; Modeling Regular Replacement for String Constraint Solving; Using Integer Clocks to Verify the Timing-Sync Sensor Network Protocol; Can Regulatory Bodies Expect Efficient Help from Formal Methods?; Synthesis of Greedy Algorithms Using Dominance Relations; A New Method for Incremental Testing of Finite State Machines; Verification of Faulty Message Passing Systems with Continuous State Space in PVS; Phase Two Feasibility Study for Software Safety Requirements Analysis Using Model Checking; A Prototype Embedding of Bluespec System Verilog in the PVS Theorem Prover; SimCheck: An Expressive Type System for Simulink; Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness; Software Model Checking of ARINC-653 Flight Code with MCP; Evaluation of a Guideline by Formal Modelling of Cruise Control System in Event-B; Formal Verification of Large Software Systems; Symbolic Computation of Strongly Connected Components Using Saturation; Towards the Formal Verification of a Distributed Real-Time Automotive System; Slicing AADL Specifications for Model Checking; Model Checking with Edge-valued Decision Diagrams; and Data-flow based Model Analysis.

    Proceedings of the First NASA Formal Methods Symposium

    Topics covered include: Model Checking - My 27-Year Quest to Overcome the State Explosion Problem; Applying Formal Methods to NASA Projects: Transition from Research to Practice; TLA+: Whence, Wherefore, and Whither; Formal Methods Applications in Air Transportation; Theorem Proving in Intel Hardware Design; Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering; Model Checking for Autonomic Systems Specified with ASSL; A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process; Software Model Checking Without Source Code; Generalized Abstract Symbolic Summaries; A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing; Component-Oriented Behavior Extraction for Autonomic System Design; Automated Verification of Design Patterns with LePUS3; A Module Language for Typing by Contracts; From Goal-Oriented Requirements to Event-B Specifications; Introduction of Virtualization Technology to Multi-Process Model Checking; Comparing Techniques for Certified Static Analysis; Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder; jFuzz: A Concolic Whitebox Fuzzer for Java; Machine-Checkable Timed CSP; Stochastic Formal Correctness of Numerical Algorithms; Deductive Verification of Cryptographic Software; Coloured Petri Net Refinement Specification and Correctness Proof with Coq; Modeling Guidelines for Code Generation in the Railway Signaling Context; Tactical Synthesis Of Efficient Global Search Algorithms; Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems; and Formal Methods for Automated Diagnosis of Autosub 6000.

    Modelling and Analysis for Cyber-Physical Systems: An SMT-based approach


    Ernst Denert Award for Software Engineering 2020

    This open access book provides an overview of the dissertations of the eleven nominees for the Ernst Denert Award for Software Engineering in 2020. The prize, kindly sponsored by the Gerlind & Ernst Denert Stiftung, is awarded for excellent work within the discipline of Software Engineering, which includes methods, tools and procedures for the better and more efficient development of high-quality software. An essential requirement for the nominated work is its applicability and usability in industrial practice. The book contains eleven papers that describe the works by Jonathan BrachthĂ€user (EPFL Lausanne) entitled What You See Is What You Get: Practical Effect Handlers in Capability-Passing Style, Mojdeh Golagha’s (Fortiss, Munich) thesis How to Effectively Reduce Failure Analysis Time?, Nikolay Harutyunyan’s (FAU Erlangen-NĂŒrnberg) work on Open Source Software Governance, Dominic Henze’s (TU Munich) research about Dynamically Scalable Fog Architectures, Anne Hess’s (Fraunhofer IESE, Kaiserslautern) work on Crossing Disciplinary Borders to Improve Requirements Communication, Istvan Koren’s (RWTH Aachen U) thesis DevOpsUse: A Community-Oriented Methodology for Societal Software Engineering, Yannic Noller’s (NU Singapore) work on Hybrid Differential Software Testing, Dominic Steinhöfel’s (TU Darmstadt) thesis entitled Ever Change a Running System: Structured Software Reengineering Using Automatically Proven-Correct Transformation Rules, Peter WĂ€gemann’s (FAU Erlangen-NĂŒrnberg) work Static Worst-Case Analyses and Their Validation Techniques for Safety-Critical Systems, Michael von Wenckstern’s (RWTH Aachen U) research on Improving the Model-Based Systems Engineering Process, and Franz Zieris’s (FU Berlin) thesis on Understanding How Pair Programming Actually Works in Industry: Mechanisms, Patterns, and Dynamics – which won the award. 
The chapters describe the key findings of the respective works, show their relevance and applicability to practice and industrial software engineering projects, and provide additional information and findings that were only discovered afterwards, e.g. when applying the results in industry. In this way, the book is interesting not only to other researchers, but also to industrial software professionals who would like to learn about the application of state-of-the-art methods in their daily work.

    On extending process monitoring and diagnosis to the electrical and mechanical utilities: an advanced signal analysis approach

    This thesis is concerned with extending process monitoring and diagnosis to electrical and mechanical utilities. The motivation is that the reliability, safety and energy efficiency of industrial processes increasingly depend on the condition of the electrical supply and the electrical and mechanical equipment in the process. To enable the integration of electrical and mechanical measurements in the analysis of process disturbances, this thesis develops four new signal analysis methods for transient disturbances and for measurements with different sampling rates. Transient disturbances are considered because the electrical utility is mostly affected by events of a transient nature. Different sampling rates are considered because process measurements are commonly sampled at intervals on the order of seconds, while electrical and mechanical measurements are commonly sampled at millisecond intervals. Three of the methods detect transient disturbances. Each method progressively improves or extends the applicability of the previous method. Specifically, the first detection method performs univariate analysis, the second method extends the analysis to a multivariate data set, and the third method extends the multivariate analysis to measurements with different sampling rates. The fourth method removes the transient disturbances from the time series of oscillatory measurements. The motivation is that the analysis of oscillatory disturbances can be affected by transient disturbances. The methods were developed and tested on experimental and industrial data sets obtained during industrial placements with ABB Corporate Research Center, Kraków, Poland and ABB Oil, Gas and Petrochemicals, Oslo, Norway. The concluding chapters of the thesis discuss the merits and limitations of each method, and present three directions for future research. 
The ideas should contribute further to the extension of process monitoring and diagnosis to the electrical and mechanical utilities. They are illustrated on the case studies and shown to be promising directions for future research.
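As a rough illustration of the first, univariate detection step (a hypothetical sketch, not the thesis's actual algorithm), a transient disturbance can be flagged where a sample deviates sharply from its trailing moving average:

```python
import statistics

def detect_transients(signal, window=5, threshold=3.0):
    """Return indices where the deviation from the trailing mean exceeds
    `threshold` times the trailing standard deviation."""
    hits = []
    for i in range(window, len(signal)):
        past = signal[i - window:i]
        mu = statistics.mean(past)
        sigma = statistics.pstdev(past) or 1e-9  # guard against zero variance
        if abs(signal[i] - mu) / sigma > threshold:
            hits.append(i)
    return hits

# Steady measurement with one spike (a transient) at index 10.
sig = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 8.0, 1.0]
print(detect_transients(sig))  # [10]
```

The thesis's methods go well beyond this sketch, in particular by handling multivariate data and measurements with different sampling rates.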

    Knowledge Representation in Engineering 4.0

    This dissertation was developed in the context of the BMBF- and EU/ECSEL-funded projects GENIAL! and Arrowhead Tools. In these projects, the chair examines methods for specification and cooperation in the automotive value chain from OEM to Tier 1 to Tier 2. The goal of the projects is to improve communication and collaborative planning, especially in early development stages. Besides SysML, the use of agreed vocabularies and ontologies for modeling requirements, the overall context, variants, and many other items is targeted. This thesis proposes a web database, where data from collaborative requirements elicitation is combined with an ontology-based approach that uses reasoning capabilities. For this purpose, state-of-the-art ontologies have been investigated and integrated that cover domains like hardware/software, roadmapping, IoT, context, innovation and others. New ontologies have been designed, such as a HW/SW allocation ontology and a domain-specific "eFuse ontology", as well as some prototypes. The result is a modular ontology suite and the GENIAL! Basic Ontology, which allows us to model automotive and microelectronic functions, components, properties and dependencies among these elements, based on the ISO 26262 standard. Furthermore, context knowledge that influences design decisions, such as future trends in legislation, society, environment, etc., is included. These knowledge bases are integrated in a novel tool that allows for collaborative innovation planning and requirements communication along the automotive value chain. To start off the work of the project, an architecture and a prototype tool were developed. Designing ontologies and knowing how to use them proved to be a non-trivial task, requiring a lot of context and background knowledge. Some of this background knowledge has been selected for presentation and was utilized either in designing models or for later immersion. 
Examples are basic foundations such as design guidelines for ontologies, ontology categories, and a continuum of expressiveness of languages, as well as advanced content such as multi-level theory, foundational ontologies, and reasoning. Finally, we demonstrate the overall framework and show the ontology with reasoning, the database, APPEL/SysMD (AGILA ProPErty and Dependency Description Language / System MarkDown), and the constraints of the hardware/software knowledge base. There, by example, we explore and solve roadmap constraints that are coupled with a car model through a constraint solver. This dissertation was developed in the context of the BMBF- and EU/ECSEL-funded projects GENIAL! and Arrowhead Tools. In these projects, the chair examines methods for specification and cooperation in the automotive value chain, from OEM to Tier 1 and Tier 2. The goal of this work is to improve communication and joint planning, especially in the early development phases. Besides SysML, the use of agreed vocabularies and ontologies for modeling requirements, the overall context, variants, and many other elements is targeted. Ontologies are one way to help avoid misunderstandings and planning errors. This approach proposes a web database in which ontologies support the sharing of knowledge and the logical inference of implicit knowledge and rules. This thesis describes ontologies for the domain of Engineering 4.0, or more specifically, for the domain required by the German project GENIAL!. This concerns domains such as hardware and software, roadmapping, context, innovation, IoT, and others. New ontologies have been designed, for example the hardware/software allocation ontology and a domain-specific "eFuse ontology". The result was a modular ontology library with the GENIAL! Basic Ontology, which allows automotive and microelectronic components, functions, and properties, together with their dependencies, to be modeled based on the ISO 26262 standard. Furthermore, context knowledge that influences design decisions is included. These knowledge bases are integrated in a novel tool that makes it possible to exchange roadmap knowledge and requirements along the automotive value chain. Designing ontologies and knowing how to use them was no trivial task and required much background and context knowledge. Selected foundations for this are guidelines for designing ontologies, ontology categories, and the spectrum of languages and forms of knowledge representation. Furthermore, advanced methods are explained, e.g. how to draw inferences with ontologies. Finally, the overall framework is demonstrated, and the ontology with reasoning, the database, APPEL/SysMD (AGILA ProPErty and Dependency Description Language / System MarkDown), and the constraints of the hardware/software knowledge base are shown. There, by example, roadmap constraints are coupled with the car model and solved and explored through the constraint solver.
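The final demonstration, coupling roadmap constraints to a car model and handing them to a constraint solver, can be sketched in miniature as an exhaustive search over component options. The roadmap entries (availability year, power budget) and the two constraints are invented numbers for illustration; they stand in for the APPEL/SysMD constraints of the actual knowledge base.

```python
from itertools import product

# Hypothetical roadmap: candidate technology generations per component,
# each with the year it becomes available and its power draw in watts.
roadmap = {
    "mcu":    [("mcu_2024", 2024, 5.0), ("mcu_2026", 2026, 3.5)],
    "sensor": [("sensor_2023", 2023, 1.0), ("sensor_2027", 2027, 0.4)],
}

def solve(roadmap, start_of_production, power_budget):
    """Return every component combination that is available by the car
    model's start of production and fits within its power budget."""
    solutions = []
    for combo in product(*roadmap.values()):
        available = all(year <= start_of_production for _, year, _ in combo)
        in_budget = sum(power for _, _, power in combo) <= power_budget
        if available and in_budget:
            solutions.append(tuple(name for name, _, _ in combo))
    return solutions

# A car model with start of production 2026 and a 5 W budget:
print(solve(roadmap, start_of_production=2026, power_budget=5.0))
# → [('mcu_2026', 'sensor_2023')]
```

Real roadmap exploration would use a proper constraint solver over the ontology-backed knowledge base; the brute-force version only shows the shape of the coupling between car-model parameters and roadmap options.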
