11 research outputs found

    Human Requirements Validation for Complex Systems Design

    One of the most critical phases in complex systems design is the requirements engineering process. During this phase, system designers need to accurately elicit, model and validate the desired system based on user requirements. Smart driver assistive technologies (SDAT) belong to a class of complex systems used to alleviate accident risk by improving situation awareness, reducing driver workload or enhancing driver attentiveness. Such systems aim to draw drivers’ attention to critical information cues that improve decision making. Discovering the requirements for such systems necessitates a holistic approach that addresses not only functional and non-functional aspects but also human requirements such as drivers’ situation awareness and workload. This work describes a simulation-based user requirements discovery method. It uses a modular virtual reality simulator to model driving conditions and discover user needs, which subsequently inform the design of prototype SDATs that exploit augmented reality. Herein, we illustrate the development of the simulator, the elicitation of user needs through an experiment, and the prototype SDAT designs built with the Unity game engine.

    Methodological developments for probabilistic risk analyses of socio-technical systems

    Nowadays, the risk analysis of critical systems cannot focus on a technical point of view alone; several major accidents have changed this initial way of thinking. As a result, numerous methods exist to study risks by considering the main system resources: the technical process, the operators constraining this process, and the organisation conditioning human actions. However, few works propose to use these different methods jointly to study risks in a global approach. This paper presents a methodology, under development between CRAN, EDF and INERIS, that integrates these different methods to estimate risks probabilistically. The integration is based on unifying and structuring knowledge concepts, and the quantitative aspect is achieved through the use of Bayesian networks. An application of this methodology to an industrial case demonstrates its feasibility and illustrates the model's capabilities: accounting for the whole set of causes when treating a system weakness, and ranking these contributors by their criticality to the system. The tool can thus help decision makers prioritise their actions.
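
    The abstract does not give the network itself, so the following is only a minimal sketch of the quantitative idea in plain Python: three binary risk contributors (technical, human, organisational; the states and numbers are invented for illustration, not taken from the paper) feed a noisy-OR failure node, and enumerating the joint distribution yields both the overall failure probability and a criticality ranking of the contributors, mirroring the prioritisation the methodology is meant to support.

        from itertools import product

        # Prior probability that each resource is in a degraded state
        # (illustrative numbers, not from the paper).
        priors = {"technical": 0.05, "human": 0.10, "organisational": 0.02}

        # Noisy-OR link strengths: probability that a degraded resource
        # alone causes a system failure.
        strength = {"technical": 0.6, "human": 0.4, "organisational": 0.3}

        def p_failure_given(states):
            """Noisy-OR combination of the currently degraded contributors."""
            p_no_fail = 1.0
            for name, degraded in states.items():
                if degraded:
                    p_no_fail *= 1.0 - strength[name]
            return 1.0 - p_no_fail

        def marginal_failure(priors):
            """Enumerate the joint distribution of the binary parents."""
            total = 0.0
            names = list(priors)
            for bits in product([0, 1], repeat=len(names)):
                states = dict(zip(names, bits))
                weight = 1.0
                for name, degraded in states.items():
                    weight *= priors[name] if degraded else 1.0 - priors[name]
                total += weight * p_failure_given(states)
            return total

        base = marginal_failure(priors)
        print(f"P(system failure) = {base:.4f}")

        # Criticality ranking: how much does treating one contributor
        # (setting its prior to zero) reduce the failure probability?
        for name in priors:
            fixed = dict(priors, **{name: 0.0})
            reduction = base - marginal_failure(fixed)
            print(f"{name:14s} risk reduction if treated: {reduction:.4f}")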

    A Core Model for Parts Suppliers Selecting Method in Manufacturing Supply Chain

    Service-oriented manufacturing is a new development in manufacturing systems, and the manufacturing supply chain service is an important part of service-oriented manufacturing systems; hence, the optimal selection of parts suppliers becomes a key problem in the supply chain system. Complex network theory has made rapid progress in recent years, but classical models such as the BA and WS models cannot capture widespread features of manufacturing supply chains, such as repeated edge attachment, a fixed number of vertices with edges that grow under preferential connectivity, and flexible edge probabilities. This paper proposes a core model to resolve the problem: it maps the parts supply relationship as a repeatable core and puts forward a vertex probability distribution function integrating the edge rate and vertex degree. Simulations covering the growth of the core, the degree distribution characteristics, and the impact of parameters are carried out, and a case study is also presented. The paper thus offers a novel model for analysing the manufacturing supply chain system from the perspective of complex networks.
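
    The paper's exact probability distribution function is not reproduced in the abstract; the plain-Python sketch below only illustrates the kind of growth process described: a fixed vertex set, edges added with degree-proportional (preferential) probability, parallel edges permitted, and the resulting degree distribution inspected. All sizes and counts are invented.

        import random
        from collections import Counter

        random.seed(42)

        N_VERTICES = 50    # fixed number of vertices (e.g. suppliers/parts)
        N_EDGES = 400      # edges grow over time; parallel edges allowed

        # Start every vertex with degree 1 so preferential selection is defined.
        degree = [1] * N_VERTICES
        edges = []

        def pick_preferential():
            """Choose a vertex with probability proportional to its degree."""
            total = sum(degree)
            r = random.uniform(0, total)
            acc = 0.0
            for v, d in enumerate(degree):
                acc += d
                if r <= acc:
                    return v
            return N_VERTICES - 1

        for _ in range(N_EDGES):
            u = pick_preferential()
            v = pick_preferential()
            if u == v:
                continue           # skip self-loops; repeated (u, v) pairs are kept
            edges.append((u, v))
            degree[u] += 1
            degree[v] += 1

        # Degree distribution: count of vertices per degree value.
        dist = Counter(degree)
        for d in sorted(dist):
            print(f"degree {d:3d}: {dist[d]} vertices")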

    Human reliability analysis: exploring the intellectual structure of a research field

    Humans play a crucial role in modern socio-technical systems. Rooted in reliability engineering, the discipline of Human Reliability Analysis (HRA) has been broadly applied in a variety of domains in order to understand, manage and prevent the potential for human error. This paper investigates the existing literature pertaining to HRA and aims to bring clarity to the research field by synthesizing the literature through systematic bibliometric analyses. The multi-method approach followed in this research combines factor analysis, multi-dimensional scaling, and bibliometric mapping to identify the main HRA research areas. It reviews over 1200 contributions indexed in the Scopus database, with the ultimate goal of identifying current research streams and outlining the potential for future research.
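
    As a toy illustration of one technique in this multi-method combination, and assuming numpy and scikit-learn are available, the sketch below embeds a small, invented keyword co-occurrence matrix in two dimensions with multi-dimensional scaling; the real study applies such analyses at the scale of 1200+ contributions.

        import numpy as np
        from sklearn.manifold import MDS

        # Invented co-occurrence counts between five HRA-related keywords
        # (purely illustrative, not data from the study).
        keywords = ["HRA", "PSF", "THERP", "Bayesian", "simulation"]
        cooc = np.array([
            [50, 20, 15, 10,  8],
            [20, 40,  9,  7,  5],
            [15,  9, 30,  3,  2],
            [10,  7,  3, 25, 12],
            [ 8,  5,  2, 12, 20],
        ], dtype=float)

        # Convert co-occurrence to a dissimilarity: normalise to [0, 1]
        # and invert, so frequently co-occurring keywords end up close.
        dissim = 1.0 - cooc / cooc.max()
        np.fill_diagonal(dissim, 0.0)

        mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
        coords = mds.fit_transform(dissim)

        for kw, (x, y) in zip(keywords, coords):
            print(f"{kw:12s} -> ({x:+.2f}, {y:+.2f})")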

    An Agent Based Model to Assess Crew Temporal Variability During U.S. Navy Shipboard Operations

    Understanding the factors that affect human performance variability, as well as their temporal impacts, is essential for fully integrating and designing complex, adaptive environments. This understanding is particularly necessary for high-stakes, time-critical routines such as those performed during nuclear reactor, air traffic control, and military operations. Over the last three decades, significant efforts have emerged to demonstrate and apply a host of techniques, including Discrete Event Simulation, Bayesian Belief Networks, Neural Networks, and a multitude of existing software applications, to provide relevant assessments of human task performance and temporal variability. The objective of this research was to design and develop a novel Agent Based Modeling and Simulation (ABMS) methodology to generate a timeline of work and assess the impacts of crew temporal variability during U.S. Navy Small Boat Defense operations in littoral waters. The developed ABMS methodology included human performance models for six crew members (agents) as well as a threat craft, and incorporated varying levels of crew capability and task support. AnyLogic ABMS software was used to simultaneously provide detailed measures of individual sailor performance and of system-level emergent behavior. The methodology and models were built to ensure extensibility across a broad range of U.S. Navy shipboard operations. Application of the methodology effectively demonstrated a way to visualize and quantify the impacts and uncertainties of human temporal variability on both workload and crew effectiveness during U.S. Navy shipboard operations.
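
    AnyLogic models cannot be reproduced in a short excerpt, so the following plain-Python sketch captures only the core idea under invented assumptions: agents with different capability levels draw task durations from distributions, and repeated scenario runs expose the temporal variability of the crew-level timeline. The roles, counts and parameters are illustrative, not the study's.

        import random
        import statistics

        random.seed(7)

        # Illustrative crew: capability scales mean task duration
        # (lower is faster); not parameters from the study.
        CREW = {"gunner": 0.9, "spotter": 1.0, "comms": 1.1,
                "helm": 1.0, "officer": 0.8, "lookout": 1.2}
        TASKS_PER_AGENT = 10
        BASE_TASK_MEAN = 30.0   # seconds
        BASE_TASK_SD = 6.0

        def run_scenario():
            """One simulated defence scenario; returns the crew timeline."""
            finish_times = []
            for agent, capability in CREW.items():
                t = 0.0
                for _ in range(TASKS_PER_AGENT):
                    t += max(1.0, random.gauss(BASE_TASK_MEAN * capability,
                                               BASE_TASK_SD))
                finish_times.append(t)
            return max(finish_times)   # crew is done when the slowest agent is

        runs = [run_scenario() for _ in range(500)]
        print(f"mean timeline: {statistics.mean(runs):7.1f} s")
        print(f"stdev (temporal variability): {statistics.stdev(runs):6.1f} s")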

    A framework to support automation in manufacturing through the study of process variability

    In manufacturing, automation has replaced many dangerous, mundane, arduous and routine manual operations, for example, transportation of heavy parts, stamping of large parts, repetitive welding and bolt fastening. However, skilled operators still carry out critical manual processes in various industries such as aerospace, automotive and heavy machinery. As automation technology progresses towards more flexible and intelligent systems, the potential for these processes to be automated increases. The decision to undertake automation is nevertheless a complex one, involving many factors such as return on investment, health and safety, life-cycle impact, competitive advantage, and the availability of resources and technology. A key challenge for manufacturing automation is the ability to adapt to process variability: in manufacturing processes, human operators apply their skills to adapt to variability in order to meet product and process specifications or requirements. This thesis focuses on understanding the variability involved in these manual processes and how it may influence the automation solution. Two manual industrial processes, polishing and de-burring of high-value components, were observed to evaluate the extent of the variability and how operators applied their skills to overcome it. Based on the findings from the literature and the process studies, a framework was developed to categorise variability in manual manufacturing processes and to suggest a level of automation for the tasks in those processes, based on scores and weights given to the parameters by the user. The novelty of this research lies in the creation of a framework that categorises and evaluates process variability and suggests an appropriate level of automation. The framework uses five process attributes (inputs, outputs, strategy, time and requirements) and twelve parameters (quantity, range or interval of variability, interdependency, diversification, number of alternatives, number of actions, patterned actions, concurrency, time restriction, sensorial domain, cognitive requisites and physical requisites) to evaluate the variability inherent in the process. The suggested level of automation is obtained through a system of scores and weights for each parameter; the weights were calculated using the Analytic Hierarchy Process (AHP) with the help of three experts in manufacturing processes. The framework was validated through its application to two processes: a lab-based peg-in-a-hole manual process and an industrial welding process. In addition, it was applied to three further processes (two industrial and one simulated in the laboratory) by two subjects each, to verify the consistency of the results. The results suggest that the framework is robust when applied by different subjects, producing highly similar outputs, and effective at characterising the variability present in the processes where it was applied. The framework was developed and tested in the manufacturing of high-value components, with high potential to be applied to processes in other industries, for instance automotive, heavy machinery, pharmaceutical or electronic components, although this would need further investigation. Future work would therefore include applying the framework to processes in other industries, enhancing its robustness and widening its scope of applicability.
    Additionally, a database would be created to assess the correlation between process variability and the level of automation.
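
    The thesis's actual parameters and AHP-derived weights are not listed in the abstract, so the sketch below shows only the scoring mechanics with invented values: user scores per parameter are combined with normalised weights, and the weighted total is mapped to a suggested level of automation. The parameter subset, weights, scores and thresholds are all illustrative.

        # Illustrative subset of the twelve parameters with made-up weights
        # (the real weights were derived via AHP from three experts).
        weights = {
            "range_of_variability": 0.25,
            "interdependency":      0.15,
            "number_of_actions":    0.10,
            "time_restriction":     0.20,
            "sensorial_domain":     0.15,
            "cognitive_requisite":  0.15,
        }
        assert abs(sum(weights.values()) - 1.0) < 1e-9

        # User scores on a 1 (low variability) .. 5 (high variability) scale.
        scores = {
            "range_of_variability": 4,
            "interdependency":      3,
            "number_of_actions":    2,
            "time_restriction":     5,
            "sensorial_domain":     4,
            "cognitive_requisite":  3,
        }

        weighted_total = sum(weights[p] * scores[p] for p in weights)

        # Map the weighted score to a suggested level of automation:
        # high variability -> keep the human in the loop.
        if weighted_total >= 4.0:
            suggestion = "manual with tool support"
        elif weighted_total >= 2.5:
            suggestion = "shared human-machine control"
        else:
            suggestion = "full automation candidate"

        print(f"weighted variability score: {weighted_total:.2f}")
        print(f"suggested level of automation: {suggestion}")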

    An approach to complex projects from a prevention perspective

    Master's thesis in Occupational Safety and Hygiene Engineering. Faculdade de Engenharia, Universidade do Porto. 201

    A Process Model for the Integrated Reasoning about Quantitative IT Infrastructure Attributes

    IT infrastructures can be quantitatively described by attributes such as performance or energy efficiency. Ever-changing user demands and economic pressures require varying short-term and long-term decisions to align an IT infrastructure, and particularly its attributes, with this dynamic environment. Potentially conflicting attribute goals and the central role of IT infrastructures call for decision making based upon reasoning, the process of forming inferences from facts or premises. Existing reasoning approaches are unsuitable for this intent because they focus on specific IT infrastructure parts or a fixed (small) attribute set: they neither cover the (complex) interplay of all IT infrastructure components simultaneously, nor do they address inter- and intra-attribute correlations sufficiently. This thesis presents a process model for integrated reasoning about quantitative IT infrastructure attributes. The process model's main idea is to formalize the compilation of an individual reasoning function, a mathematical mapping of parametric influencing factors and modifications onto an attribute vector. Compilation is based upon model integration, to benefit from the multitude of existing specialized, elaborated, and well-established attribute models. The resulting reasoning function consumes an individual tuple of IT infrastructure components, attributes, and external influencing factors, giving it broad applicability. The process model formalizes a reasoning intent in three phases. First, reasoning goals and parameters are collected in a reasoning suite and formalized in a reasoning function skeleton. Second, the skeleton is iteratively refined, guided by the reasoning suite. Third, the resulting reasoning function is employed for what-if analyses, optimization, or descriptive statistics to conduct the concrete reasoning. The process model provides five template classes that collectively formalize all phases, in order to foster reproducibility and reduce error-proneness. Validation of the process model is threefold. A controlled experiment reasons about a Raspberry Pi cluster's performance and energy efficiency to illustrate feasibility. In addition, a requirements analysis on a world-class supercomputer and on the Europe-wide execution of hydro-meteorology simulations, together with an examination of related work, discloses the process model's level of innovation. Potential future work employs the prepared automation capabilities, integrates human factors, and uses reasoning results for the automatic generation of modification recommendations.
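
    As a minimal sketch of what a compiled reasoning function might look like (the attribute models and their functional forms here are invented, not the thesis's), two toy models for performance and energy efficiency are composed over a component tuple, and a what-if analysis evaluates the function over a grid of candidate modifications.

        from itertools import product

        # Illustrative attribute models (not the thesis's actual models):
        # each maps component parameters and external factors to one attribute.
        def performance(nodes, clock_ghz, load):
            return nodes * clock_ghz * (1.0 - 0.3 * load)   # arbitrary form

        def energy_efficiency(nodes, clock_ghz, load):
            power = nodes * (20.0 + 15.0 * clock_ghz ** 2) * (0.5 + 0.5 * load)
            return performance(nodes, clock_ghz, load) / power

        def reasoning_function(nodes, clock_ghz, load):
            """Compiled mapping: influencing factors -> attribute vector."""
            return {
                "performance": performance(nodes, clock_ghz, load),
                "energy_efficiency": energy_efficiency(nodes, clock_ghz, load),
            }

        # What-if analysis: evaluate the reasoning function over candidate
        # modifications and report the most energy-efficient configuration.
        best = None
        for nodes, clock in product([4, 8, 16], [1.2, 1.5, 2.0]):
            attrs = reasoning_function(nodes, clock, load=0.7)
            if best is None or attrs["energy_efficiency"] > best[2]:
                best = (nodes, clock, attrs["energy_efficiency"])

        print(f"best: {best[0]} nodes @ {best[1]} GHz, "
              f"efficiency {best[2]:.4f} perf/W")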

    Workload prediction for improved design and reliability of complex systems

    This paper describes a method and a tool for analysing and predicting workload for the design and reliability of complex socio-technical systems. It concentrates on the need to assess workload early in the design phase to prevent system failures, continuing our previous work on workload assessment. The method is supported by a tool that enables scenario-based validation of prospective socio-technical system designs, such as command and control rooms of military vessels. The approach combines probabilistic measures of human performance with subjective estimates of workload. The causal relationships of performance shaping factors (PSFs) are modelled in a Bayesian belief network (BBN) and used to assess each agent's operational performance and reliability. Workload for each agent is calculated from the demand placed upon agents in terms of behavioural responses to tasks, communications, and interactions between humans and technology. The approach uses scenarios to stress-test prospective system designs, where each scenario is modelled as a sequence of events. Reliability is expressed in terms of human error and is dynamically assessed throughout test scenario executions using BBN technology. The innovation beyond traditional reliability analysis lies in the use of dynamic and static estimates of reliability inputs for better-informed assessment. The method enables performance bottlenecks to be identified and addressed by the designer early in the design phase. A case study demonstrates the use of the method and tool for the design of the command and control room of a military vessel.
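
    A minimal sketch, with invented PSFs and numbers, of the kind of calculation described: per-event task demand accumulates into workload, PSF multipliers inflate a base human error probability, and reliability is tracked dynamically across the scenario's event sequence. The scenario, factors and values are illustrative, not the paper's model.

        # Illustrative scenario: each event places demand on an agent and
        # activates performance shaping factors (PSFs). Numbers are invented.
        BASE_HEP = 0.001            # nominal human error probability
        PSF_MULTIPLIERS = {"time_pressure": 4.0, "fatigue": 2.0, "noise": 1.5}

        scenario = [
            {"task": "detect contact",  "demand": 0.3, "psfs": ["noise"]},
            {"task": "classify threat", "demand": 0.5, "psfs": ["time_pressure"]},
            {"task": "coordinate fire", "demand": 0.6,
             "psfs": ["time_pressure", "fatigue"]},
        ]

        workload = 0.0
        p_no_error = 1.0
        for event in scenario:
            workload += event["demand"]
            hep = BASE_HEP
            for psf in event["psfs"]:
                hep *= PSF_MULTIPLIERS[psf]   # PSFs inflate the error probability
            hep = min(hep, 1.0)
            p_no_error *= 1.0 - hep           # dynamic reliability update
            print(f"{event['task']:18s} workload={workload:.1f} "
                  f"HEP={hep:.4f} reliability so far={p_no_error:.4f}")

        if workload > 1.0:
            print("workload bottleneck: demand exceeds agent capacity")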