
    Formal methods for industrial critical systems, preface to the special section

    This special issue contains improved versions of selected papers from the workshops on Formal Methods for Industrial Critical Systems (FMICS) held in Eindhoven, The Netherlands, in November 2009 and in Antwerp, Belgium, in September 2010. These were, respectively, the 14th and 15th of a series of international workshops organized by an open working group supported by ERCIM (European Research Consortium for Informatics and Mathematics) that promotes research in all aspects of formal methods (see details at http://www.inrialpes.fr/vasy/fmics/). The FMICS workshops that produced this special issue considered papers describing original, previously unpublished research, not simultaneously submitted for publication elsewhere, on the following themes:
    - Design, specification, code generation and testing based on formal methods.
    - Methods, techniques and tools to support automated analysis, certification, debugging, learning, optimization and transformation of complex, distributed, real-time and embedded systems.
    - Verification and validation methods that address shortcomings of existing methods with respect to their industrial applicability (e.g., scalability and usability issues).
    - Tools for the development of formal design descriptions.
    - Case studies and experience reports on industrial applications of formal methods, focusing on lessons learned or new research directions.
    - Impact and costs of the adoption of formal methods.
    - Application of formal methods in standardization and industrial forums.
    The selected papers are the result of several evaluation steps. In response to the call for papers, FMICS 2009 received 24 papers and FMICS 2010 received 33 papers, of which 10 and 14 were accepted, respectively, and published by Springer-Verlag in the Lecture Notes in Computer Science series (volumes 5825 [1] and 6371 [2]). Each paper was reviewed by at least three anonymous referees who provided full written evaluations. After the workshops, the authors of 10 papers were invited to submit extended journal versions to this special issue. These papers passed two review phases, and 7 were finally accepted for inclusion in the journal. This work has been partially supported by the EU (FEDER) and the Spanish MEC TIN2010-21062-C02-02 project, the MICINN INNCORPORA-PTQ program, and by Generalitat Valenciana, ref. PROMETEO2011/052.
    Alpuente Frasnedo, M.; Joubert, C.; Kowalewski, S.; Roveri, M. (2013). Formal methods for industrial critical systems, preface to the special section. Science of Computer Programming, 78(7):775-777. doi:10.1016/j.scico.2012.05.005

    A formal approach to AADL model-based software engineering

    Formal methods have become a recommended practice in safety-critical software engineering. To be formally verified, a system must be specified in a specific formalism such as Petri nets, automata or process algebras, which requires formal expertise and may become complex, especially for large systems. In this paper, we report our experience in the formal verification of safety-critical real-time systems. We propose a formal mapping of a real-time task model to the LNT language, and we describe how it is used to integrate a formal verification phase into an AADL model-based development process. We focus on real-time systems with event-driven tasks, asynchronous communication and preemptive fixed-priority scheduling. We provide a complete tool chain for the automatic model transformation and formal verification of AADL models. Experiments illustrate our results on the Flight control system and Line follower robot case studies.
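
    The abstract above relies on a translation to the LNT language for verification; the details of that mapping are not reproduced here. As a loosely related, self-contained illustration of one timing property such a tool chain can address, the Python sketch below implements the classical response-time analysis for preemptive fixed-priority scheduling (the Joseph-Pandya recurrence), assuming sporadic tasks with known worst-case execution times and minimum inter-arrival times. The task set and all numbers are hypothetical.

```python
from dataclasses import dataclass
from math import ceil

@dataclass
class Task:
    name: str
    wcet: float              # worst-case execution time C
    min_interarrival: float  # minimum inter-arrival time T (sporadic task)
    priority: int            # larger number = higher priority
    deadline: float          # relative deadline D

def response_time(task, tasks):
    """Joseph-Pandya recurrence: R = C + sum over higher-priority tasks of ceil(R/T)*C."""
    higher = [t for t in tasks if t.priority > task.priority]
    r = task.wcet
    while True:
        interference = sum(ceil(r / t.min_interarrival) * t.wcet for t in higher)
        r_next = task.wcet + interference
        if r_next == r:
            return r                 # fixed point reached
        if r_next > task.deadline:
            return r_next            # already past the deadline: unschedulable
        r = r_next

# Hypothetical task set for illustration only.
tasks = [
    Task("sensor_poll", wcet=1.0, min_interarrival=5.0,  priority=3, deadline=5.0),
    Task("control_law", wcet=2.0, min_interarrival=10.0, priority=2, deadline=10.0),
    Task("logger",      wcet=3.0, min_interarrival=30.0, priority=1, deadline=30.0),
]
for t in tasks:
    r = response_time(t, tasks)
    print(f"{t.name}: worst-case response time {r}, deadline {'met' if r <= t.deadline else 'missed'}")
```

    Such a closed-form schedulability check complements, rather than replaces, the behavioral verification of the generated formal model described in the abstract.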

    The engineering of generic requirements for failure management

    We consider the failure detection and management function for engine control systems as an application domain where product line engineering is indicated. The need to develop a generic requirement set, intended for subsequent system instantiation, is complicated by the high levels of verification demanded by this safety-critical domain, which is subject to avionics industry standards. We present our case study experience in this area as a candidate methodology for the engineering, validation and verification of generic requirements using domain engineering and formal methods techniques and tools. For a defined class of systems, the case study produces a generic requirement set in UML and an example instantiation in tabular form. Domain analysis and engineering produce a model which is integrated with the formal specification/verification method B through our UML-B profile. The formal verification of both the generic requirement set and a simple system instance is demonstrated using our U2B and ProB tools. This work is a demonstrator for a tool-supported method which will be an output of the EU project RODIN. The method, based on the dominant UML standard, will exploit formal verification technology largely as a "black box" for this novel combination of product line, failure management and safety-critical engineering.
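
    The methodology above expresses generic requirements in UML-B and verifies them with U2B and ProB; those notations are not shown here. Purely as an illustrative sketch of the underlying product-line idea, namely a generic requirement instantiated per system, the Python fragment below renders a parameterized failure-management requirement for two hypothetical monitored signals. The requirement text, identifiers and parameters are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class GenericRequirement:
    """A generic (product-line level) requirement with placeholders."""
    identifier: str
    template: str

@dataclass
class Instantiation:
    """Binds the placeholders of a generic requirement for one system instance."""
    requirement: GenericRequirement
    bindings: dict

    def render(self) -> str:
        return self.requirement.template.format(**self.bindings)

# Hypothetical generic requirement for failure detection (not from the paper).
GEN_FM_01 = GenericRequirement(
    identifier="GEN-FM-01",
    template=("If {signal} is outside [{low}, {high}] for more than "
              "{persistence} consecutive samples, the failure flag "
              "{flag} shall be raised."),
)

# Example instantiation for one engine-control product-line member.
instances = [
    Instantiation(GEN_FM_01, {"signal": "fuel_pressure", "low": 20, "high": 80,
                              "persistence": 3, "flag": "FUEL_PRESSURE_FAIL"}),
    Instantiation(GEN_FM_01, {"signal": "shaft_speed", "low": 0, "high": 110,
                              "persistence": 5, "flag": "OVERSPEED_FAIL"}),
]
for inst in instances:
    print(f"{inst.requirement.identifier}: {inst.render()}")
```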

    A formal methods approach to interpretability, safety and composability for reinforcement learning

    Robotic systems that are capable of learning from experience have recently become more commonplace. These systems have demonstrated success in learning difficult control tasks. However, as tasks become more complex and the number of options to reason about grows, there is an increasing need to specify the desired behavior in a structured and interpretable fashion, guarantee system safety, conveniently integrate task-specific knowledge with more general knowledge about the world, and generate new skills from learned ones without additional exploration. This thesis addresses these problems specifically in the case of reinforcement learning (RL) by using techniques from formal methods.
    Experience and prior knowledge shape the way humans make decisions when asked to perform complex tasks. Conversely, robots have had difficulty incorporating a rich set of prior knowledge when solving complex planning and control problems. In RL, the reward offers an avenue for incorporating prior knowledge, but incorporating such knowledge is not always straightforward with standard reward engineering techniques. This thesis presents a formal specification language that can combine a base of general knowledge with task specifications to generate richer task descriptions. For example, to make a hotdog at the task level, one needs to grab a sausage, grill it, place the cooked sausage in a bun, apply ketchup, and serve. Prior knowledge about the context of the task, e.g., that sausages can be damaged if squeezed too hard, should also be taken into account.
    Interpretability of RL rewards, that is, easily understanding what the reward function represents and knowing how to improve it, is a key component in understanding the behavior of an RL agent. This property is often missing in reward engineering techniques, which makes it difficult to understand the implications of the reward function when tasks become complex. Interpretability of the reward allows for better value alignment between human intent and system objectives, leading to a lower likelihood of reward hacking by the system. The formal specification language presented in this work has the added benefit of being easily interpretable, owing to its similarity with natural language.
    Safe RL, that is, guaranteeing that undesirable behaviors (e.g., collisions with obstacles) do not occur, is a critical concern when learning and deployment of robotic systems happen in the real world. Lack of safety not only presents legal challenges to the wide adoption of these systems, but also raises risks to hardware and users. Using techniques from formal methods and control theory, we provide two main components to ensure safety of the RL agent's behaviors. First, the formal specification language allows for the explicit definition of undesirable behaviors (e.g., always avoid collisions). Second, control barrier functions (CBF) are used to enforce these safety constraints.
    Composability of learned skills, the ability to compose new skills from a library of learned ones, can significantly enhance a robot's capabilities by making efficient use of past experience. Modern RL systems focus mainly on mastery (maximizing the given reward) and less on generalization (transfer from one task domain to another). In this thesis, we also exploit the logical and graphical representations of the task specification and develop techniques for skill composition.
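
    The thesis couples a formal specification language with control barrier functions (CBFs) to keep a learning agent safe. As a minimal, self-contained sketch of the CBF safety-filter idea only, not the thesis implementation, the Python code below clips a hypothetical learned policy's action so that a discrete-time barrier condition keeps a one-dimensional point from crossing an obstacle boundary. The dynamics, barrier function and constants are assumptions made for the example.

```python
# Toy 1-D kinematics assumed for illustration: x_{t+1} = x_t + u_t * DT.
DT = 0.1
X_OBSTACLE = 1.0   # positions x >= X_OBSTACLE are treated as unsafe
ALPHA = 0.5        # gain in the discrete-time CBF condition, 0 < ALPHA <= 1

def barrier(x: float) -> float:
    """h(x) >= 0 exactly on the safe set (strictly left of the obstacle)."""
    return X_OBSTACLE - x

def safe_action(x: float, u_nominal: float) -> float:
    """Filter the nominal RL action so that h(x + u*DT) >= (1 - ALPHA) * h(x).
    For these dynamics the constraint is linear in u: u <= ALPHA * h(x) / DT."""
    u_max = ALPHA * barrier(x) / DT
    return min(u_nominal, u_max)

def rl_policy(x: float) -> float:
    """Hypothetical learned policy that always pushes toward the obstacle."""
    return 2.0

x = 0.0
for _ in range(50):
    x += safe_action(x, rl_policy(x)) * DT
    assert barrier(x) >= 0.0  # the filtered trajectory never crosses the boundary
print(f"final position {x:.4f} stays left of the obstacle at {X_OBSTACLE}")
```

    Because the constraint is linear in the action for these simple dynamics, the filter reduces to a clamp; in general the safe action is obtained by solving a small quadratic program at every step.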

    Pitfalls in Analyzing Systems in Organizations

    Despite the availability of elaborate methods for defining data and business processes, huge amounts of time and effort are wasted on system projects that produce disappointing results. An important contributing factor is the difficulty business and IT professionals experience when they try to describe, evaluate, and/or analyze systems in organizations, even at a cursory level. Between 1997 and 2003, the author's information system courses for evening MBAs and EMBAs required students to write two group papers that present a business-oriented analysis of a real-world system in an organization and propose preliminary recommendations for improvements. If these working students are representative of the types of business professionals who are involved in systems in organizations, it is plausible that the major types of pitfalls demonstrated by their papers are representative of common pitfalls that contribute to disappointing results with systems. An examination of 202 group papers submitted by evening MBA and EMBA students between 1997 and 2003 revealed pitfalls in 9 categories related to system and information definition, performance measurement, treatment of personal and organizational issues, susceptibility to techno-hype and jargon, inadequate critical thinking, and difficulty applying abstractions and formal methods. This paper illustrates these pitfalls using examples from student papers. Assuming that typical business professionals encounter the same types of pitfalls, both MBA programs and analysis and design methods should provide concepts and techniques that help in identifying and minimizing the related problems.

    Modeling Guidelines for Code Generation in the Railway Signaling Context

    Modeling guidelines constitute one of the fundamental cornerstones of Model-Based Development (MBD), and they are essential when dealing with code generation in the safety-critical domain. This article presents the experience of a railway signaling systems manufacturer on this issue.
    The introduction of MBD and code generation in the industrial safety-critical sector created a crucial paradigm shift in the development process of dependable systems. While traditional software development focuses on the code, with MBD practices the focus shifts to model abstractions. This change has fundamental implications for safety-critical systems, which still need to guarantee a high degree of confidence at the code level. Using the Simulink/Stateflow platform for modeling, a de facto standard in control software development, does not by itself ensure the production of high-quality dependable code. Companies have addressed this issue by defining modeling rules that impose restrictions on the usage of design tool components, in order to enable the production of qualified code.
    The MAAB Control Algorithm Modeling Guidelines (MathWorks Automotive Advisory Board) [3] are a well-established set of publicly available rules for modeling with Simulink/Stateflow. This set of recommendations was developed by a group of OEMs and suppliers from the automotive sector with the objective of enforcing and easing the usage of the MathWorks tools within the automotive industry. The guidelines were published in 2001 and afterwards revised in 2007 to integrate additional rules developed by the Japanese division of MAAB [5]. The scope of the current edition of the guidelines ranges from model maintainability and readability to code generation issues. The rules are conceived as a reference baseline and therefore need to be tailored to the characteristics of each industrial context. These recommendations have been customized for the automotive control systems domain in order to support code generation [7]. The MAAB guidelines have also proven profitable in the aerospace/avionics sector [1] and have been adopted by the MathWorks Aerospace Leadership Council (MALC).
    General Electric Transportation Systems (GETS) is a well-known railway signaling systems manufacturer and a leader in Automatic Train Protection (ATP) systems technology. As part of an effort to adopt formal methods within its own development process, GETS decided to introduce system modeling by means of the MathWorks tools [2], and in 2008 it chose to move to code generation. This article reports the experience of GETS in developing its own modeling standard by customizing the MAAB rules for the railway signaling domain, and it shows the result of this experience with a successful product development story.
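
    Customized modeling rules of the kind described above are typically enforced automatically on the Simulink/Stateflow models themselves; those checks are not reproduced here. As a language-neutral illustration of what an automatable, project-specific modeling rule can look like, the Python sketch below checks two invented rules, a block-naming convention and a state-count limit for charts, over a toy in-memory model description. The rule identifiers, thresholds and model contents are hypothetical and are not actual MAAB rules.

```python
import re

# Toy in-memory description of a model: block names and, for charts, a state count.
model = {
    "blocks": ["SpeedSupervisor", "brake request", "OdometryFilter", "ATP_Mode_Logic"],
    "charts": {"ATP_Mode_Logic": {"states": 34}},
}

# Hypothetical project-specific rules (not actual MAAB rule IDs).
BLOCK_NAME_PATTERN = re.compile(r"^[A-Za-z][A-Za-z0-9_]*$")  # no spaces or punctuation
MAX_STATES_PER_CHART = 25                                     # keep charts reviewable

def check_model(model):
    """Return a list of guideline violations found in the toy model description."""
    findings = []
    for name in model["blocks"]:
        if not BLOCK_NAME_PATTERN.match(name):
            findings.append(f"RULE-NAME-01 violated: block name '{name}' contains illegal characters")
    for chart, info in model["charts"].items():
        if info["states"] > MAX_STATES_PER_CHART:
            findings.append(f"RULE-CPLX-02 violated: chart '{chart}' has {info['states']} states "
                            f"(limit {MAX_STATES_PER_CHART})")
    return findings

for finding in check_model(model):
    print(finding)
```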