130,911 research outputs found

    Time Waits for No One: Using Time as a Lens in Information Systems Research

    Despite considerable research interest, IT projects still fail at a higher rate than other projects. Primary causes of these failures are relational, motivational, and scheduling issues on the team. Using the concept of time as a lens, the four essays in this dissertation examine how the ways that individuals and teams structure time can help explain these failures. The essays formulate the concept of temporal dissonance at the individual and team level, and explore how temporal dissonance causes negative consequences for IT workers and IT teams. Essay one synthesizes temporal dissonance from the concepts of temporal congruity and cognitive dissonance. It proposes a model in which an interaction between salience and temporal congruity creates an affective reaction of discomfort, called temporal dissonance. Temporal dissonance provides a partial explanation for the mixed results of time management interventions. Essay two extends the model and tests it empirically. The essay proposes that IT workers differ from managers in several temporal characteristics, resulting in IT workers feeling more temporal dissonance than managers. This difference produces greater stress and cynicism among IT workers and reduces their willingness to meet deadlines. Essay three extends the theory of temporal dissonance to the team level, using group development processes, shared mental models, and cognitive dissonance as a framework. Conflicting temporal structures salient to the team create tension, called team temporal dissonance. Teams reduce temporal dissonance by engaging in affect and process conflict, which reduces the performance of the team. Essay four empirically confirms team temporal dissonance in IT project teams. The study finds that the consequences of team temporal dissonance can vary. When internally generated, temporal dissonance causes the team to engage in process conflict, reducing its performance. Conversely, externally generated temporal dissonance causes a team to engage in affect conflict as a dissonance reduction measure; the reduction in dissonance improves team performance. The four essays together triangulate on the concept of temporal dissonance, establishing its existence from differing starting points. Together, they provide strong evidence of the existence and importance of temporal dissonance.

    Identifying common problems in the acquisition and deployment of large-scale software projects in the US and UK healthcare systems

    Public and private organizations are investing increasing amounts in the development of healthcare information technology. These applications are perceived to offer numerous benefits. Software systems can improve the exchange of information between healthcare facilities. They support standardised procedures that can help to increase consistency between different service providers. Electronic patient records ensure minimum standards across the trajectory of care when patients move between different specializations. Healthcare information systems also offer economic benefits through efficiency savings, for example by providing the data that helps to identify potential bottlenecks in the provision and administration of care. However, a number of high-profile failures reveal the problems that arise when staff must cope with the loss of these applications. In particular, teams have to retrieve paper-based records that often lack the detail held on electronic systems. Individuals who have only used electronic information systems face particular problems in learning how to apply paper-based fallbacks. The following pages compare two different failures of healthcare information systems in the UK and North America. The intention is to ensure that future initiatives to extend the integration of electronic patient records will build on the ‘lessons learned’ from previous systems.

    Why Catastrophic Organizational Failures Happen

    Excerpt from the introduction: The purpose of this chapter is to examine the major streams of research about catastrophic failures, describing what we have learned about why these failures occur as well as how they can be prevented. The chapter begins by describing the most prominent sociological school of thought with regard to catastrophic failures, namely normal accident theory, which examines the structure of the organizational systems that are most susceptible to catastrophic failures. Then, we turn to several behavioral perspectives on catastrophic failures, assessing a stream of research that has attempted to understand the cognitive, group, and organizational processes that develop and unfold over time, leading ultimately to a catastrophic failure. For an understanding of how to prevent such failures, we then assess the literature on high reliability organizations (HROs), whose scholars have examined why some complex organizations operating in extremely hazardous conditions manage to remain nearly error free. The chapter closes by assessing how scholars are trying to extend the HRO literature to develop more extensive prescriptions for managers trying to avoid catastrophic failures.

    Choosing effective methods for design diversity - How to progress from intuition to science

    Design diversity is a popular defence against design faults in safety-critical systems. Design diversity is at times pursued by simply isolating the development teams of the different versions, but it is presumably better to "force" diversity, through appropriate prescriptions to the teams. There are many ways of forcing diversity, yet managers who have to choose a cost-effective combination of them have little guidance except their own intuition. We argue the need for more scientifically based recommendations and outline the problems with producing them. We focus on what we think is the standard basis for most recommendations: the belief that, in order to produce failure diversity among versions, project decisions should aim at causing "diversity" among the faults in the versions. We attempt to clarify what these beliefs mean, in which cases they may be justified, and how they can be checked or disproved experimentally.
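
    As a toy illustration of that belief (not an example from the paper; the failure rate and coupling parameter below are invented), the following Python sketch compares the probability that both versions of a 1-out-of-2 diverse pair fail on the same demand, when the versions share "difficult" demands, against the naive independence prediction:

        import random

        random.seed(0)
        P_FAIL = 0.01   # marginal probability that one version fails on a demand
        RHO = 0.5       # coupling: chance a version fails because the demand is "hard"
        TRIALS = 1_000_000

        def pair_outcome():
            """Draw (version_a_fails, version_b_fails) with coupled failure regions."""
            hard = random.random() < P_FAIL        # a shared "difficult demand" event
            a = hard if random.random() < RHO else (random.random() < P_FAIL)
            b = hard if random.random() < RHO else (random.random() < P_FAIL)
            return a, b

        both = sum(1 for _ in range(TRIALS) if all(pair_outcome()))
        print(f"observed P(both versions fail): {both / TRIALS:.2e}")
        print(f"prediction under independence:  {P_FAIL ** 2:.2e}")

    On this model the joint failure probability is dominated by the shared "hard demand" term, far above the independence prediction; "forcing" diversity can then be read as an attempt to drive the coupling parameter toward zero.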

    Modeling and Diagnostic Software for Liquefying-Fuel Rockets

    A report presents a study of five modeling and diagnostic computer programs considered for use in an integrated vehicle health management (IVHM) system during testing of liquefying-fuel hybrid rocket engines in the Hybrid Combustion Facility (HCF) at NASA Ames Research Center. Three of the programs -- TEAMS, L2, and RODON -- are model-based reasoning (or diagnostic) programs. The other two programs -- ICS and IMS -- do not attempt to isolate the causes of failures but can be used for detecting faults. In the study, qualitative models (in TEAMS and L2) and quantitative models (in RODON) of varying scope and completeness were created. Each of the models captured the structure and behavior of the HCF as a physical system. It was noted that, in the case of the qualitative models, the temporal aspects of the behavior of the HCF and the abstraction of sensor data are handled outside of the models, and it is necessary to develop additional code for this purpose. A need for additional code was also noted in the case of the quantitative model, though the amount of development effort needed was found to be less than that for the qualitative models.
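
    For readers unfamiliar with the model-based reasoning that TEAMS, L2, and RODON perform, the following minimal Python sketch shows the underlying idea of consistency-based diagnosis; the three-component feed-system model and the sensor reading are invented for illustration and are not taken from the HCF models described above.

        from itertools import combinations

        PARTS = ["valve", "injector", "chamber"]

        def predict(broken):
            """Qualitative forward simulation: a broken part blocks flow downstream."""
            flow = "no-flow" if "valve" in broken else "flow"   # valve commanded open
            flow = "no-flow" if "injector" in broken else flow
            if "chamber" in broken:
                return "no-pressure"
            return "pressure" if flow == "flow" else "no-pressure"

        def diagnose(observed):
            """Smallest sets of broken parts whose prediction matches the sensor."""
            for size in range(len(PARTS) + 1):
                hits = [set(c) for c in combinations(PARTS, size)
                        if predict(set(c)) == observed]
                if hits:
                    return hits
            return []

        # Chamber pressure reads low although the valve was commanded open:
        print(diagnose("no-pressure"))   # -> [{'valve'}, {'injector'}, {'chamber'}]

    Even this toy makes the report's detection-versus-isolation distinction concrete: a single pressure reading detects the anomaly but cannot discriminate among the three candidate diagnoses without additional sensors.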

    The Impact of Quality Action Teams in the Workplace: Testimony of Edith Kelly Before the Commission on the Future of Worker-Management Relations


    Teams and cardiac surgery

    Motivation: Our study is designed to identify human factors that are a threat to the safety of children with heart disease.
    Research approach: After an initial observation period, we will apply a major safety intervention. We will then re-measure the occurrence and types of human factors in the operating room, and the incidence of adverse events, near misses, and hospital death, to evaluate whether there was a significant post-intervention reduction.
    Findings/design: We focus on challenges encountered during the training of the observers.
    Research limitations: Because of the complexity of the OR, observations are necessarily subjective.
    Originality/value: This work is original because of the systematic evaluation of a safety intervention and the training protocol for the observers.
    Take-away message: Systematic and periodic assessment of observers is required when teamwork is observed in complex, dynamic settings.

    Download the PDF of the Entire Issue: PEHC vol. 1, issue 2


    A Prognostic Launch Vehicle Probability of Failure Assessment Methodology for Conceptual Systems Predicated on Human Causal Factors

    Create an improved method to calculate the reliability of a conceptual launch vehicle system prior to fabrication by using historical data on the actual root causes of failures. While failures have unique "proximate causes", there is typically a finite set of common "root causes". Heretofore, launch vehicle reliability evaluation has typically relied on hardware-centric statistical analyses, while most root causes of failures have been shown to be human-centric. A method based on human-centric root causes can be used to quantify reliability assessments and to focus proposed actions to mitigate problems. Existing methods have been optimistic in their projections of launch vehicle reliability compared to actuals. Hypothesis: the reliability of a conceptual launch vehicle can be more accurately evaluated based on a rational, probabilistic approach using past failure assessment teams' findings predicated on human-centric causes. "Human Reliability Analysis Methods Selection Guidance for NASA" (Chandler, F. T., et al., NASA HQ/OSMA study group, July 2006) drew on outside HRA experts from academia, other federal labs, and the private sector; fifty system reliability methods were considered, fourteen were selected for further study, and four were finally selected as best suited for human spaceflight. Combining Probabilistic Risk Analysis (PRA) with Human Reliability Analysis (HRA) enabled incorporating the effects and probabilities of human errors. While the four down-selected methods were deemed appropriate for failure assessment, it did not appear that they could be concisely applied to perform a major system-wide assessment of the probability of failure of a conceptual design without becoming unwieldy. "Engineering a Safer World" (Leveson, N. G., MIT, 2011), a detailed, comprehensive study external to NASA, introduced the Systems-Theoretic Accident Model and Processes (STAMP), an all-encompassing accident model based on systems theory that analyzed accidents after they occurred and created approaches to prevent their recurrence in developing systems; it is not focused on failure prevention per se, but rather on reducing hazards by influencing human behavior through the use of constraints, hierarchical control structures, and process models to improve system safety. System-Theoretic Process Analysis (STPA) addresses the predictive part of the problem (a "hazard analysis") and includes all causal factors identified in STAMP: "...design errors, software flaws, component interaction accidents, cognitively complex human decision-making errors, and social, organizational and management factors contributing to accidents". STPA can guide the design process rather than requiring the design to exist beforehand, but it did not appear capable of concise application for a system-wide assessment of the probability of failure of a conceptual design without becoming unwieldy.
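
    As a hedged sketch of the kind of calculation the hypothesis implies (this is not the paper's actual methodology; the root-cause categories, counts, and mitigation credits below are invented), historical human-centric root-cause rates might be aggregated into a per-launch probability of failure like so:

        HISTORICAL_LAUNCHES = 400

        # Historical failures attributed to each human-centric root cause (invented).
        ROOT_CAUSE_FAILURES = {
            "design-process escape":    6,
            "assembly/workmanship":     4,
            "procedure not followed":   3,
            "management/communication": 2,
        }

        # Fraction of each cause the conceptual program expects to mitigate (invented).
        MITIGATION_CREDIT = {
            "design-process escape":    0.5,
            "assembly/workmanship":     0.3,
            "procedure not followed":   0.4,
            "management/communication": 0.2,
        }

        p_success = 1.0
        for cause, count in ROOT_CAUSE_FAILURES.items():
            p_cause = count / HISTORICAL_LAUNCHES        # per-launch rate of this cause
            p_cause *= 1.0 - MITIGATION_CREDIT[cause]    # credit proposed mitigations
            p_success *= 1.0 - p_cause                   # causes treated as independent

        print(f"estimated per-launch probability of failure: {1.0 - p_success:.3f}")

    The independence assumption and the mitigation credits are exactly the sort of modeling choices a real assessment would have to defend against the historical failure-team findings.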

    The natural history of bugs: using formal methods to analyse software related failures in space missions

    Space missions force engineers to make complex trade-offs between many different constraints including cost, mass, power, functionality and reliability. These constraints create a continual need to innovate. Many advances rely upon software, for instance to control and monitor the next generation ‘electron cyclotron resonance’ ion-drives for deep space missions. Programmers face numerous challenges. It is extremely difficult to conduct valid ground-based tests for the code used in space missions. Abstract models and simulations of satellites can be misleading. These issues are compounded by the use of ‘band-aid’ software to fix design mistakes and compromises in other aspects of space systems engineering. Programmers must often re-code missions in flight. This introduces considerable risks. It should, therefore, not be a surprise that so many space missions fail to achieve their objectives. The costs of failure are considerable. Small launch vehicles, such as the U.S. Pegasus system, cost around $18 million. Payloads range from $4 million up to $1 billion for security-related satellites. These costs do not include consequent business losses. In 2005, Intelsat wrote off $73 million from the failure of a single uninsured satellite. It is clearly important that we learn as much as possible from those failures that do occur. The following pages examine the roles that formal methods might play in the analysis of software failures in space missions.
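
    As a small illustration of one family of formal methods the paper considers, here is a Python sketch of explicit-state model checking over a tiny, invented thruster-valve state machine; real flight software would be analysed with mature tools, but the exhaustive-exploration idea is the same:

        from collections import deque

        INIT = ("idle", False)                   # (controller mode, valve_open)

        def successors(state):
            """All possible next states of the (invented) mode/valve machine."""
            mode, valve = state
            if mode == "idle":
                return [("arming", valve)]
            if mode == "arming":                 # bug: the valve can open while arming
                return [("arming", True), ("firing", True)]
            return [("idle", False)]             # firing -> safe shutdown

        def invariant(state):
            """Safety property: the valve may be open only while firing."""
            mode, valve = state
            return not valve or mode == "firing"

        seen, queue, violations = {INIT}, deque([INIT]), []
        while queue:                             # breadth-first reachability search
            state = queue.popleft()
            if not invariant(state):
                violations.append(state)
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)

        print(f"explored {len(seen)} states; violations: {violations}")

    Unlike testing, the search visits every reachable state, so the unsafe ("arming", True) state is found even though no ground-test scenario was written for it.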