The Effect of Interruptions on Primary Task Performance in Safety-Critical Environments
Safety-critical systems in medicine use alarms to signal potentially life-threatening situations to professionals and patients. In the medical field in particular, multiple alarms from equipment are activated daily, often simultaneously. A number of these alarms require caregivers to break off complex primary tasks to attend to the interruption task signaled by the alarm. The motivation for this research is the knowledge that, in general, interrupting tasks can have a negative impact on the performance and outcomes of the primary task.
The focus of this research is on the effect of an interrupting task on the cognitive behavior of nurses performing a primary task: administering medication to a simulated patient. Fifty-eight student nurses were monitored with eye-tracking technology as they performed direct patient care and a medication administration task. There are four hypotheses. First, it is hypothesized that an interruption generated by an alarm during medication administration significantly increases errors because it causes caregivers to forget components of the original task. These errors result when the primary task is suspended in memory by the intervening task; during this suspension, memory for the original task can decay. Second, it is hypothesized that interrupting tasks result in time delays on the primary task (the time during which the caregiver is performing the interrupting task is not included in the time to perform the original task). Third, it is hypothesized that metacognition training will mitigate the negative effects of the interrupting task on the primary task. The metacognition training is based on knowledge of how memory processes are affected by interruptions and how modifying these processes can potentially reduce errors. Fourth, it is hypothesized that the intervention strategy will improve memory for the material required to resume and complete the primary task. This improvement will be measured by increases in the number of eye fixations on the primary task before attending to the secondary task. Furthermore, this measurement is expected to correlate with a reduction in errors.
Integrating knowledge of multitasking and interruptions across different perspectives and research methods
Multitasking and interruptions have been studied using a variety of methods in multiple fields (e.g., HCI, cognitive science, computer science, and the social sciences). This diversity brings many complementary insights. However, it also challenges researchers to understand how seemingly disparate ideas can best be integrated to further theory and to inform the design of interactive systems. There is therefore a need for a platform to discuss how different approaches to understanding multitasking and interruptions can be combined to provide insights that are more than the sum of their parts. In this article we argue for the necessity of an integrative approach. As part of this argument we provide an overview of articles in this special issue on multitasking and interruptions. These articles showcase the variety of methods currently used to study multitasking and interruptions. It is clear that there are many challenges to studying multitasking and interruptions from different perspectives and using different techniques. We advance a six-point research agenda for the future of multi-method research on this important and timely topic.
What makes an interruption disruptive? Understanding the effects of interruption relevance and timing on performance
Interruptions disrupt activity, hindering performance and provoking errors. They present an obvious challenge in safety-critical environments where momentary slips can have fatal consequences. Interruptions are also a problem in more workaday settings, like offices, where they can reduce productivity and increase stress levels. To be able to systematically manage the negative effects of interruptions, we first need to understand the factors that influence their disruptiveness. This thesis explores how the disruptiveness of interruptions is influenced by their relevance and timing. Seven experimental studies investigate these properties in the context of a routine data-entry task. The first three experiments explore how relevance and timing interact. They demonstrate that the relevance of interruptions depends on the contents of working memory at the moment of interruption. Next, a pair of experiments distinguish the oft-conflated concepts of interruption relevance and relatedness. They show that interruptions with similar content to the task at hand can negatively affect performance if they do not contribute toward the rehearsal of goals in working memory. By causing active interference, seemingly useful interruptions that are related to the task at hand have the potential to be more disruptive than entirely unrelated, irrelevant interruptions. The final two experiments in this thesis test the reliability of the effects observed in the first five experiments through alternative experimental paradigms. They show that relevance and timing effects are consistent even when participants are given control over interruptions and that these effects are robust even in an online setting where experimental control is compromised. The work presented in this thesis enhances our understanding of the factors influencing the disruptiveness of interruptions. 
Its primary contribution is to show that when we talk about interruptions, ‘relevance’, ‘irrelevance’ and ‘relatedness’ must be considered in the context of the contents of working memory at the moment of interruption. This finding has implications for experimental investigations of interrupted performance, efforts to understand the effects of interruptions in the workplace, and the development of systems that help users manage interruptions.
Exploring the nature of cognitive resilience strategies
When improving the safety or performance of a system, there is a tendency to focus on negative aspects of human performance or interaction: errors, threats, past incidents, or identified issues and flaws. This does not, however, tell the whole story. Users frequently deploy a variety of resilient interventions, devising and implementing strategies to improve performance and mitigate threats such as error, particularly during complex or challenging circumstances. In so doing, users can and do make an active, positive contribution to the wider resilience of a system. To date, how individual actors within a system leverage such resilience strategies to improve the functioning of that system has received only limited direct investigation.
An initial study was undertaken as a probing investigation to test the notion of user-configured cues as a means to facilitate individual resilience. The insights from this study challenged an existing foundational categorisation scheme, which we then sought to expand and refine in collaboration with its original authors, to better represent and articulate 10 different types of resilience strategy. To broaden our real-world pool of strategy accounts, a diary study was then conducted, the resulting data being used both to inform and to validate a new iteration of the scheme. Stemming from challenges to the applicability of the scheme to complex resilience cases, we introduced the notion of a new type of compound strategy and developed a framework to support its analysis by deconstructing such strategies into their motivational and functional components. A final controlled laboratory study was undertaken to apply our insights. The resulting refined categorisation scheme and conceptual framework enrich our understanding of the phenomenon of user or individual resilience and could potentially be leveraged to inform and support the design of future technical and sociotechnical systems.
The Role of Patient Room-Type, Interruptions, and Intrapersonal Resources in Nurse Performance and Well-Being
Interruptions create a complex challenge in health care. Because some interruptions are necessary in health care, they cannot be completely eliminated; thus, their effects must be appropriately mitigated. To better understand predictors and consequences of interruptions, as well as factors that may mitigate their negative effects, I employed Job Demands-Resources (JD-R) theory, supplemented by additional constructs from organizational behavior and psychology, to develop a model of predictors and mitigators of interruptions. Twenty registered nurses providing care on a progressive acute care unit with single- and double-occupancy patient rooms volunteered to participate in this study. The study incorporated nurse-level questionnaires, event-level surveys, observation, and medical record review to test a mediated moderation multilevel model. Double-occupancy rooms were a significant predictor of interruptions. Interruptions mediated the effect of room-type on perceived stress, but not on the other dependent variables (task completion rate, medication administration errors, positive affect, and negative affect). While the full mediated moderation models were not supported, the individual nurse characteristic of conscientiousness was found to significantly moderate the effect of room-type on perceived stress. Other nurse characteristics tested, but not found to have a significant effect, were stress mindset and psychological resilience. This study fills significant gaps in interruption research by using theory to develop a single conceptual model that identifies predictors of interruptions and nurse characteristics that may mitigate their effects. Future applications of this research should expand this approach to support nurse selection and training for working in interruptive patient care environments.
The system of aseptic preparation of intravenous drugs in clinical care settings
Abstract
A review of the literature on blood stream infections caused by contaminated intravenous infusates prepared in clinical care settings found that this common nursing procedure at times poses a significant and life-threatening risk to patients. The guidance and regulations surrounding the preparation of intravenous drugs in clinical care settings suggest that this procedure is extremely complex and poses many different potential hazards to patients. This thesis set out to determine how the infection risks are being addressed in practice by asking the questions: ‘What is the system of intravenous drug preparation in clinical care settings in NHS Scotland?’ and, ‘How does it work in practice?’
Several data sources were utilised: six locations, in specialities where the literature identified significant outbreaks had occurred, were examined for potential contamination risk. Observations (78) of infusate preparations were undertaken and, where available, written procedures were compared with observed practices. Finally, analyses were made of 71 questionnaires, completed by the nurses who prepare intravenous drugs, regarding their opinions of the procedures’ safety and when they perform redundancy checks.
The conclusion of this study is that the system of preparing intravenous drugs in clinical care settings by nurses is, as a consequence of potential infusate contamination, error-prone and unreliable. The reasons for this conclusion are now detailed.
- Due to a lack of mandatory environmental standards, and the provision of poor environments, there is a risk of infusate contamination from environmental sources and, consequently, a risk to patients of infusate-related blood stream infections (IR-BSI).
- Some in-use equipment poses contamination risks to patients’ infusates. Equipment that could reduce the contamination risk is not always available, and in some instances such safety-enhancing equipment has been removed.
- There are no complete written procedures that mirror what is done in practice. At present, from a human-factors perspective, it is not easy for the nurse to do the right thing, or to be sure exactly what the right thing to do is.
- The procedure, in practice, has the required elements of an aseptic procedure, but its execution is more often than not performed non-aseptically.
- The procedure of intravenous drug preparation as observed is mainly an interrupted aseptic procedure, and as such the recommencement of the aseptic procedure requires repeated hand hygiene.
- The nurses’ opinions of safety varied, as did their assessments of the infection risk to their patients, but it is clear that intravenous drug preparation is not a much-loved nursing procedure and some nurses find it very stressful.
- There is no asepsis quality control built into the system. Aseptic steps are the least likely to be performed as a redundancy check, compared to the mandatory checks of ‘right patient, right drug and right dose’.
- The information available to the nurses, from the drug companies, from the makers of equipment, and from national agencies does not identify the infection risks with sufficient clarity, or detail how to negate them.
Suggestions for improvement to the six procedures and environments become clear once the procedure steps are colour-coded as either aseptic or non-aseptic; validity testing of these improvements is, however, still needed.
The system’s vulnerabilities observed in this research appear to stem from a chain of external influences, including an underestimation in evidence-based guidelines and mandatory guidance of both the size of the problem and the actions needed to prevent it. This leads to poor recognition of the risk of IR-BSI in clinical practice. The problem of infusate contamination causing IR-BSIs is further compounded by the fact that it is not caused by a single organism and does not always present as a disease in real time, that is, over the lifetime of the infusion. As a consequence, this presents surveillance difficulties in terms of definitions, data collection and analysis.
Finally, although the diagnosis of a blood stream infection for an individual patient remains relatively easy, it is not easy to recognise a contaminated infusate as the origin of the problem. All these challenges make both the recognition of the problem and agreement on prevention strategies, extremely challenging.
In summary, the main conclusion of this thesis is that the preparation of infusates in clinical care settings, which occurs approximately 3,000,000 times a year in NHS Scotland, is, from an aseptic perspective, error-prone and unreliable. Recommendations to optimise patient safety include changing the procedure locally and, with the utmost urgency, producing minimum environmental standards. The results of this study are relevant to all hospitals in Scotland and throughout the United Kingdom where the current regulations apply and similar procedures are performed.
Colorectal Cancer
The projections for future growth in the number of new patients with colorectal cancer in most parts of the world remain unfavorable. When we consider the substantial morbidity and mortality that accompany the disease, the acute need for improvements and better solutions in patient care becomes evident. This volume, organized in five sections, represents a synopsis of the significant efforts of scientists, clinicians and investigators towards improving different aspects of patient care, including nutrition, diagnostic approaches, treatment strategies (with the addition of some novel therapeutic approaches), and prevention. For scientists involved in investigations that explore fundamental cellular events in colorectal cancer, this volume provides a framework for translational integration of cell-biological and clinical information. Clinicians as well as other healthcare professionals involved in patient management for colorectal cancer will also find this volume useful.