27 research outputs found

    Automating HAZOP studies using D-higraphs

    In this paper, we present the use of D-higraphs to perform HAZOP studies. D-higraphs is a formalism that includes, in a single model, both the functional and the structural (ontological) components of any given system. A tool to perform a semi-automatic guided HAZOP study on a process plant is presented. The diagnostic system uses an expert system to predict the behavior modeled with D-higraphs. This work is applied to the study of an industrial case, and its results are compared with similar approaches proposed in previous studies. The analysis shows that the proposed methodology fits its purpose, enabling causal reasoning that explains the causes and consequences derived from deviations; it also fills some of the gaps and addresses drawbacks of previously reported HAZOP assistant tools.
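The guided cause/consequence reasoning described here can be illustrated with a minimal rule-based sketch: each deviation of a process variable is mapped to candidate causes and consequences. The rules and names below are invented for illustration and are not taken from the D-higraphs tool itself.

```python
# Hypothetical deviation knowledge base: (variable, guideword) -> causes/consequences.
RULES = {
    ("flow", "no"): {
        "causes": ["pump failure", "blocked line", "closed valve"],
        "consequences": ["loss of cooling", "downstream starvation"],
    },
    ("pressure", "more"): {
        "causes": ["blocked outlet", "external fire"],
        "consequences": ["relief valve lift", "vessel rupture"],
    },
}

def explain(variable, guideword):
    """Return (causes, consequences) for a deviation, or None if no rule applies."""
    rule = RULES.get((variable, guideword))
    if rule is None:
        return None
    return rule["causes"], rule["consequences"]

causes, consequences = explain("flow", "no")
```

A real assistant would chain such rules through the plant structure so that a consequence in one unit becomes a candidate cause in the next; the sketch shows only a single lookup step.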

    Steps Towards a Method for the Formal Modeling of Dynamic Objects

    Fragments of a method to formally specify object-oriented models of a universe of discourse are presented. The task of finding such models is divided into three subtasks: object classification, event specification, and the specification of the life cycle of an object. Each of these subtasks is further subdivided, and for each subtask heuristics are given that can aid the analyst in deciding how to represent a particular aspect of the real world. The main sources of inspiration are Jackson System Development, the algebraic specification of data and object types, and the algebraic specification of processes.

    HAZOP: Our Primary Guide in the Land of Process Risks: How can we improve it and do more with its results?

    All risk management starts with determining what can happen, so reliable predictive analysis is key. We therefore perform process hazard analysis, which should result in scenario identification and definition. Besides material and substance properties, the inputs are process conditions and possible deviations and mishaps. Over the years HAZOP has been the most important tool for identifying potential process risks by systematically considering deviations in observables, determining possible causes and consequences, and, where necessary, suggesting improvements. The drawbacks of HAZOP are known: it is effort-intensive, while the results are used only once. The exercise must be repeated at several stages of process build-up, and once the process is operational it must be re-conducted periodically. There have been many past attempts to semi-automate the HAZOP procedure to ease the effort of conducting it, but lately promising new developments have been realized that also enable the use of the results for facilitating operational fault diagnosis. This paper reviews the directions in which improved automation of HAZOP is progressing, and how the results, besides serving risk analysis and the design of preventive and protective measures, can also be used during operations for early warning of upcoming abnormal process situations.
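The systematic core of HAZOP, crossing standard guidewords with process variables to enumerate candidate deviations, is mechanical and easy to automate; judging which deviations are credible still requires expertise. A minimal sketch (the variable list is invented for illustration):

```python
from itertools import product

# The classical HAZOP guidewords and a small set of process variables.
GUIDEWORDS = ["no", "more", "less", "reverse", "as well as", "part of", "other than"]
VARIABLES = ["flow", "pressure", "temperature", "level"]

# Every (guideword, variable) pair is a candidate deviation to review.
deviations = [f"{gw} {var}" for var, gw in product(VARIABLES, GUIDEWORDS)]
# 4 variables x 7 guidewords = 28 candidate deviations
```

In practice many pairs are physically meaningless for a given node, which is exactly where the expert review, or the automation efforts the paper surveys, come in.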

    Evaluation of the usability of constraint diagrams as a visual modelling language: theoretical and empirical investigations

    This research evaluates the constraint diagrams (CD) notation, a formal representation for program specification that shows some promise for use by people who are not expert in software design. Multiple methods were adopted in order to provide triangulated evidence of the potential benefits of constraint diagrams compared with other notational systems. Three main approaches were adopted for this research. The first approach was a semantic and task analysis of the CD notation. This was conducted by applying the Cognitive Dimensions framework, which was used to examine the relative strengths and weaknesses of constraint diagrams and conventional notations in terms of the perceptual facilitation or impediments of these different representations. From this systematic analysis, we found that CD reduced the cognitive cost of exploratory design, modification, incrementation, searching, and transcription activities with regard to the cognitive dimensions: consistency, visibility, abstraction, closeness of mapping, secondary notation, premature commitment, role-expressiveness, progressive evaluation, diffuseness, provisionality, hidden dependency, viscosity, hard mental operations, and error-proneness. The second approach was an empirical evaluation of the comprehension of CD compared to natural language (NL) with computer science students. This experiment took the form of a web-based competition in which 33 participants were given instructions and training on either CD or the equivalent NL specification expressions; after each example, they responded to three multiple-choice questions requiring the interpretation of expressions in their particular notation. Although the CD group spent more time on the training and had less confidence, they obtained interpretation scores comparable to the NL group and took less time to answer the questions, despite having no prior experience of the CD notation.
    The third approach was an experiment on the construction of CD. Twenty participants were given instructions and training on either CD or the equivalent NL specification expressions; after each example, they responded to three questions requiring the construction of expressions in their particular notation. We built an editor to allow the construction of the two notations, which automatically logged the participants' interactions. In general, for constructing program specifications, the CD group gave more accurate answers, spent less time in training, and returned to the training examples less often than the NL group. Overall, it was found that CD is understandable, usable, intuitive, and expressive, with unambiguous semantic notation.

    Semantic networks

    A semantic network is a graph of the structure of meaning. This article introduces semantic network systems and their importance in Artificial Intelligence, followed by (I) the early background; (II) a summary of the basic ideas and issues, including link types, frame systems, case relations, link valence, abstraction, inheritance hierarchies, and logic extensions; and (III) a survey of ‘world-structuring’ systems, including ontologies, causal link models, continuous models, relevance, formal dictionaries, semantic primitives, and intersecting inference hierarchies. Speed and practical implementation are briefly discussed. The conclusion argues for a synthesis of relational graph theory, graph-grammar theory, and order theory based on semantic primitives and multiple intersecting inference hierarchies.
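Inheritance hierarchies, one of the basic ideas surveyed here, can be sketched in a few lines: a property lookup walks the IS-A links until a value is found, so a subtype can override an inherited default. This is an illustrative toy, not any particular semantic network system:

```python
class Node:
    """A node in a toy semantic network with a single IS-A (inheritance) link."""
    def __init__(self, name, isa=None, **props):
        self.name, self.isa, self.props = name, isa, props

    def get(self, prop):
        # Walk up the IS-A hierarchy until the property is found.
        node = self
        while node is not None:
            if prop in node.props:
                return node.props[prop]
            node = node.isa
        return None

animal = Node("animal", breathes=True)
bird = Node("bird", isa=animal, flies=True)
penguin = Node("penguin", isa=bird, flies=False)  # local exception overrides default

penguin.get("flies")     # False: the local value shadows the inherited one
penguin.get("breathes")  # True: inherited from "animal" two links up
```

The classic "penguins don't fly" exception shows why lookup order matters: the most specific node wins, which is the usual convention in frame systems.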

    Ontology-Enabled Traceability Models for Engineering Systems Design and Management

    This thesis describes new models and a system for satisfying requirements, and an architectural framework for linking discipline-specific dependencies through interaction relationships at the ontology (or meta-model) level. In a departure from state-of-the-art traceability mechanisms, we ask the question: what design concept (or family of design concepts) should be applied to satisfy this requirement? Solutions to this question establish links between requirements and design concepts; the implementation of these concepts leads to the design itself. These ideas, along with support for design-rule checking, are prototyped through a series of progressively more complex applications, culminating in a case study for rail transit systems management.

    Process hazard analysis, hazard identification and scenario definition: are the conventional tools sufficient, or should and can we do much better?

    Hazard identification is the first and most crucial step in any risk assessment. Since the late 1960s it has been done in a systematic manner using hazard and operability studies (HAZOP) and failure mode and effect analysis (FMEA). In the area of process safety these methods have been successful, in that they have gained global recognition. Numerous significant challenges nevertheless remain when using these methodologies. These relate to the quality of human imagination in eliciting failure events and subsequent causal pathways, the breadth and depth of outcomes, application across operational modes, the repetitive nature of the methods, and the substantial effort expended in performing this important step within risk management practice. The present article summarizes the attempts, and the actual successes, made over the last 30 years to deal with many of these challenges. It analyzes what should be done in the case of a full systems approach and describes promising developments in that direction. It shows two examples of how applying experience and historical data with Bayesian networks, HAZOP, and FMEA can help in addressing issues in operational risk management.
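The role of historical data in sharpening HAZOP/FMEA likelihood estimates can be sketched as a single discrete Bayesian update: a prior over candidate root causes is reweighted by the likelihood of an observed symptom under each cause. All numbers below are invented for illustration:

```python
# Prior probability that the root cause of a "no flow" deviation is each candidate
# (e.g. elicited during a HAZOP session). Illustrative figures only.
prior = {"pump failure": 0.5, "blocked line": 0.3, "closed valve": 0.2}

# Likelihood of observing the symptom "low discharge pressure" under each cause,
# as might be estimated from historical incident data. Also illustrative.
likelihood = {"pump failure": 0.9, "blocked line": 0.4, "closed valve": 0.1}

# Bayes' rule by enumeration: posterior(c) = prior(c) * likelihood(c) / evidence.
evidence = sum(prior[c] * likelihood[c] for c in prior)
posterior = {c: prior[c] * likelihood[c] / evidence for c in prior}
```

A full Bayesian network generalizes this to many interdependent variables, but the single update already shows the mechanism: the observed symptom shifts probability mass toward the causes that best explain it.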

    Assessing Operational Situations.


    Methodology and System for Ontology-Enabled Traceability: Pilot Application to Design and Management of the Washington D.C. Metro System

    This report describes a new methodology and system for satisfying requirements, and an architectural framework for linking discipline-specific dependencies through interaction relationships at the meta-model (or ontology) level. In state-of-the-art traceability mechanisms, requirements are connected directly to design objects. Here, in contrast, we ask the question: what design concept (or family of design concepts) should be applied to satisfy this requirement? Solutions to this question establish links between requirements and design concepts; it is then the implementation of these concepts that leads to the design itself. These ideas are prototyped through a Washington DC Metro System requirements-to-design model mockup. The proposed methodology offers several benefits not possible with state-of-the-art procedures. First, procedures for design-rule checking may be embedded into design concept nodes, creating a pathway for system validation and verification processes that can be executed early in the systems lifecycle, where errors are cheapest and easiest to fix. Second, the proposed model provides a much better big-picture view of the relevant design concepts and how they fit together than is possible with linking of domains at the model level. Finally, the proposed procedures are automatically reusable across families of projects where the ontologies are applicable.