    An Exploratory Study of Field Failures

    Field failures, that is, failures caused by faults that escape the testing phase and manifest in the field, are unavoidable. Improving verification and validation activities before deployment can identify and promptly remove many, but not all, faults, and users may still experience a number of annoying problems while using their software systems. This paper investigates the nature of field failures to understand to what extent further improving in-house verification and validation activities can reduce the number of failures in the field, and frames the need for new approaches that operate in the field. We report the results of an analysis of the bug reports of five applications belonging to three different ecosystems, propose a taxonomy of field failures, and discuss the reasons why failures in the identified classes cannot be detected at design time but must be addressed at runtime. We observe that many faults (70%) are intrinsically hard to detect at design time.
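
    As an illustration of how such a taxonomy might be applied, the minimal Python sketch below buckets analyzed bug reports by failure class and estimates the share that resists in-house detection. The class names and the detectable-in-house flag are assumptions for illustration, not the paper's actual taxonomy.

        # Hypothetical sketch: bucket bug reports by failure class and
        # estimate the fraction only addressable at runtime. Class names
        # and flags are invented, not the paper's taxonomy.
        from collections import Counter

        # (failure_class, detectable_in_house) for each analyzed bug report
        reports = [
            ("deterministic", True),
            ("environment-dependent", False),
            ("resource-dependent", False),
            ("timing-dependent", False),
        ]

        by_class = Counter(cls for cls, _ in reports)
        field_only = sum(1 for _, in_house in reports if not in_house)
        print(by_class)
        print(f"hard to detect at design time: {field_only / len(reports):.0%}")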

    Classification and reduction of pilot error

    Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate the addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationships among a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.
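
    Rasmussen's scheme distinguishes skill-, rule-, and knowledge-based behavior; a minimal sketch of such a classification shell, extended with hypothetical aviation-specific fields, might look as follows. The field names are illustrative assumptions, not the scheme adopted in the study.

        # Illustrative Rasmussen-style classification shell with
        # hypothetical aviation-specific fields.
        from dataclasses import dataclass
        from enum import Enum

        class ErrorLevel(Enum):  # Rasmussen's skill/rule/knowledge levels
            SKILL_BASED = "slips and lapses in routine actions"
            RULE_BASED = "misapplied or inappropriate procedures"
            KNOWLEDGE_BASED = "faulty reasoning in novel situations"

        @dataclass
        class PilotErrorEvent:
            level: ErrorLevel              # information-processing mechanism
            underlying_factors: list[str]  # e.g., flight deck design, procedures
            phase_of_flight: str           # aviation-specific context

        event = PilotErrorEvent(
            level=ErrorLevel.RULE_BASED,
            underlying_factors=["procedure design", "time pressure"],
            phase_of_flight="approach",
        )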

    Correct and Control Complex IoT Systems: Evaluation of a Classification for System Anomalies

    In practice, there are deficiencies in precise inter-team communication about system anomalies when performing troubleshooting and postmortem analysis across the different teams operating complex IoT systems. We evaluate the quality in use of an adaptation of IEEE Std. 1044-2009 with the objective of differentiating the handling of fault detection and fault reaction from the handling of defects and the options for defect correction. We extended the scope of IEEE Std. 1044-2009 from anomalies related to software only to anomalies related to complex IoT systems. To evaluate the quality in use of our classification, a study was conducted at Robert Bosch GmbH. We applied our adaptation to a postmortem analysis of an IoT solution and evaluated the quality in use by conducting interviews with three stakeholders. Our adaptation was applied effectively, and inter-team communication as well as iterative and inductive learning for product improvement were enhanced. Further training and practice are required. Comment: Submitted to QRS 2020 (IEEE Conference on Software Quality, Reliability and Security).
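
    A hedged sketch of the kind of record such a classification might use, separating the operational handling of a failure (detection and reaction) from the handling of the underlying defect and its correction options. The field names are assumptions, not the adapted IEEE Std. 1044-2009 schema.

        # Hypothetical anomaly record separating the operations view
        # (failure handling) from the development view (defect handling).
        from dataclasses import dataclass, field

        @dataclass
        class FailureHandling:               # operations view: detect and react
            detected_by: str                 # e.g., "device heartbeat monitor"
            reaction: str                    # e.g., "failover to backup gateway"

        @dataclass
        class DefectHandling:                # development view: correct the cause
            affected_component: str          # software, device, or network element
            correction_options: list[str] = field(default_factory=list)

        @dataclass
        class SystemAnomaly:
            description: str
            failure: FailureHandling
            defect: DefectHandling | None = None  # filled in once diagnosed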

    Honeywell Enhancing Airplane State Awareness (EASA) Project: Final Report on Refinement and Evaluation of Candidate Solutions for Airplane System State Awareness

    The loss of pilot airplane state awareness (ASA) has been implicated as a factor in several aviation accidents identified by the Commercial Aviation Safety Team (CAST). These accidents were investigated to identify precursors to the loss of ASA and develop technologies to address the loss of ASA. Based on a gap analysis, two technologies were prototyped and assessed with a formative pilot-in-the-loop evaluation in NASA Langleys full-motion Research Flight Deck. The technologies address: 1) data source anomaly detection in real-time, and 2) intelligent monitoring aids to provide nominal and predictive awareness of situations to be monitored and a mission timeline to visualize events of interest. The evaluation results indicated favorable impressions of both technologies for mitigating the loss of ASA in terms of operational utility, workload, acceptability, complexity, and usability. The team concludes that there is a feasible retrofit solution for improving ASA that would minimize certification risk, integration costs, and training impact
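
    One plausible form of real-time data-source anomaly detection is a redundancy cross-check across sensor channels. The sketch below is illustrative only; the threshold and channel names are assumptions and do not reflect the Honeywell prototype's actual logic.

        # Illustrative redundancy cross-check for a data source: flag an
        # anomaly when redundant channels diverge beyond a tolerance.
        def source_anomaly(readings: dict[str, float], tolerance: float) -> bool:
            """Return True when redundant channels disagree too much."""
            values = list(readings.values())
            return max(values) - min(values) > tolerance

        # Hypothetical airspeed channels, in knots
        airspeed = {"adc_left": 241.0, "adc_right": 240.5, "standby": 228.0}
        if source_anomaly(airspeed, tolerance=5.0):
            print("airspeed source disagreement; cue pilot to cross-check")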

    Architecture and Information Requirements to Assess and Predict Flight Safety Risks During Highly Autonomous Urban Flight Operations

    As aviation adopts new and increasingly complex operational paradigms, vehicle types, and technologies to broaden airspace capability and efficiency, maintaining a safe system will require recognition and timely mitigation of new safety issues as they emerge and before significant consequences occur. A shift toward a more predictive risk mitigation capability becomes critical to meet this challenge. In-time safety assurance comprises monitoring, assessment, and mitigation functions that proactively reduce risk in complex operational environments where the interplay of hazards may not be known (and therefore not accounted for) during design. These functions can also help to understand and predict emergent effects caused by the increased use of automation or autonomous functions that may exhibit unexpected, non-deterministic behaviors. The envisioned monitoring and assessment functions can look for precursors, anomalies, and trends (PATs) by applying model-based and data-driven methods. Outputs would then drive downstream mitigation(s) if needed to reduce risk. These mitigations may be accomplished using traditional design revision processes or via operational (and sometimes automated) mechanisms. The latter refers to the in-time aspect of the system concept. This report presents architecture and information requirements and considerations toward enabling such a capability within the domain of low-altitude, highly autonomous urban flight operations. This domain may span, for example, public-use surveillance missions flown by small unmanned aircraft (e.g., infrastructure inspection, facility management, emergency response, law enforcement, and/or security) to transportation missions flown by larger aircraft that may carry passengers or deliver products. Caveat: any stated requirements in this report should be considered initial requirements that are intended to drive research and development (R&D). These initial requirements are likely to evolve based on R&D findings, refinement of operational concepts, industry advances, and new industry or regulatory policies or standards related to safety assurance.
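
    The monitor/assess/mitigate flow described above can be pictured as a small pipeline. The sketch below uses hypothetical function names and a placeholder risk heuristic, since the report specifies requirements rather than an implementation.

        # Hypothetical in-time safety assurance pipeline: monitor for PATs,
        # assess risk, and trigger an operational mitigation if warranted.
        def monitor(telemetry):
            """Scan incoming data for precursors, anomalies, and trends."""
            return [e for e in telemetry if e.get("anomaly_score", 0.0) > 0.8]

        def assess(pats):
            """Estimate risk from detected PATs (placeholder heuristic)."""
            return min(1.0, 0.2 * len(pats))

        def mitigate(risk, threshold=0.5):
            """Select an in-time operational mitigation when risk is elevated."""
            return "reroute or land" if risk >= threshold else "continue monitoring"

        telemetry = [{"anomaly_score": 0.9}, {"anomaly_score": 0.1}]
        print(mitigate(assess(monitor(telemetry))))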

    SSME lifetime prediction and verification, integrating environments, structures, materials: The challenge

    The planned missions for the Space Shuttle dictated a unique and technology-extending rocket engine. The high specific impulse requirements, in conjunction with a 55-mission lifetime plus volume and weight constraints, produced unique structural design, manufacturing, and verification requirements. Operations from Earth to orbit produce severe dynamic environments, which couple with the extreme pressure and thermal environments associated with the high performance, creating large low-cycle loads and high alternating stresses above the endurance limit, which result in high sensitivity to alternating stress. Combining all of these effects resulted in the requirement for exotic materials, which are more susceptible to manufacturing problems, and the use of an all-welded structure. The challenge of integrating environments, dynamics, structures, and materials into a verified SSME structure is discussed. The verification program and developmental flight results are included. The first six Shuttle flights had engine performance as predicted, with no failures. The engine system has met the basic design challenges.
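
    For intuition only (not from the report), a standard Palmgren-Miner cumulative-damage check illustrates how per-mission load cycles aggregate over a 55-mission life; the cycle counts and stress levels below are invented.

        # Back-of-envelope Palmgren-Miner check: each load block contributes
        # n_i / N_i of the fatigue life, and the design goal is total damage
        # below 1.0 over the mission life. Numbers are illustrative.
        load_blocks = [
            # (cycles per mission, cycles to failure at that stress level)
            (1, 5_000),        # one major start/shutdown thermal cycle
            (10_000, 5e7),     # high-frequency alternating stress cycles
        ]
        missions = 55
        damage = missions * sum(n / N for n, N in load_blocks)
        print(f"Miner damage over {missions} missions: {damage:.3f} (want < 1)")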

    Realtime market microstructure analysis: online Transaction Cost Analysis

    Motivated by the practical challenge of monitoring the performance of a large number of algorithmic trading orders, this paper provides a methodology that leads to automatic discovery of the causes that lie behind poor trading performance. It also gives theoretical foundations to a generic framework for real-time trading analysis. The academic literature provides different ways to formalize these algorithms and show how optimal they can be from a mean-variance, stochastic control, impulse control, or statistical learning viewpoint. This paper is agnostic about the way the algorithm has been built and provides a theoretical formalism to identify in real time the market conditions that influenced its efficiency or inefficiency. For a given set of characteristics describing the market context, selected by a practitioner, we first show how a set of additional derived explanatory factors, called anomaly detectors, can be created for each market order. We then present an online methodology to quantify how this extended set of factors, at any given time, predicts which of the orders are underperforming, while calculating the predictive power of this explanatory factor set. Armed with this information, which we call influence analysis, we intend to empower the order-monitoring user to take appropriate action on any affected orders by re-calibrating the trading algorithms working the order with new parameters, pausing their execution, or taking over more direct trading control. We also intend for this method to be used in the post-trade analysis of algorithms to automatically adjust their trading actions. Comment: 33 pages, 12 figures.
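
    To give a flavor of the online methodology, the sketch below scores orders with a logistic-style model over market-context features and derived anomaly detectors, updated order by order. The feature names and the update rule are assumptions, not the paper's formalism.

        # Hypothetical online scorer: logistic-style model over market-context
        # features plus anomaly detectors, updated one order at a time.
        import math

        weights: dict[str, float] = {}
        LEARNING_RATE = 0.05

        def predict(features: dict[str, float]) -> float:
            """Probability that an order is underperforming, given its context."""
            z = sum(weights.get(k, 0.0) * v for k, v in features.items())
            return 1.0 / (1.0 + math.exp(-z))

        def update(features: dict[str, float], underperformed: bool) -> None:
            """One online gradient step once the order's outcome is known."""
            err = (1.0 if underperformed else 0.0) - predict(features)
            for k, v in features.items():
                weights[k] = weights.get(k, 0.0) + LEARNING_RATE * err * v

        order = {"spread_bps": 1.2, "volatility": 0.8, "volume_anomaly": 1.0}
        update(order, underperformed=True)
        print(f"p(underperform) = {predict(order):.2f}")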