
    Why don't hospital staff activate the rapid response system (RRS)? How frequently is it needed and can the process be improved?

    Background: The rapid response system (RRS) is a process by which health professionals can access help when a patient under their care becomes severely ill. Recent studies and meta-analyses show a reduction in cardiac arrests by one-third in hospitals that have introduced a rapid response team, although the effect on overall hospital mortality is less clear. It has been suggested that the difficulty in establishing the benefit of the RRS is due to implementation difficulties and a reluctance of clinical staff to call for additional help. This assertion is supported by the observation that patients continue to have poor outcomes in our institution despite an established RRS being available. In many of these cases, the patient is unstable for many hours or days without help being sought. These poor outcomes are often discovered in an ad hoc fashion, and the real number of patients who might benefit from the RRS is currently unknown. This study has been designed to answer three key questions to improve the RRS: estimate the scope of the problem in terms of the number of patients requiring activation of the RRS; determine cognitive and socio-cultural barriers to calling the rapid response team; and design and implement solutions to improve the effectiveness of the RRS.
    Methods: The extent of the problem will be addressed by establishing the incidence of patients who meet abnormal physiological criteria, as determined from a point prevalence investigation conducted across four hospitals. Follow-up review will determine whether these patients subsequently require intensive care unit or critical care intervention. The study is grounded in both cognitive and socio-cultural theoretical frameworks: the cognitive model of situation awareness will be used to determine psychological barriers to RRS activation, and socio-cultural models of interprofessional practice will be triangulated to inform further investigation. A multi-modal approach will be taken, using reviews of clinical notes, structured interviews, and focus groups. Interventions will be designed using a human factors analysis approach. Ongoing surveillance of adverse outcomes and surveys of the safety climate in the clinical areas piloting the interventions will occur before and after implementation.

    Tear fluid biomarkers in ocular and systemic disease: potential use for predictive, preventive and personalised medicine

    In the field of predictive, preventive and personalised medicine, researchers are keen to identify novel and reliable ways to predict and diagnose disease, as well as to monitor patient response to therapeutic agents. In the last decade alone, profiling technologies have undergone huge improvements in detection sensitivity, allowing quantification of minute samples, for example body fluids that were previously difficult to assay. As a consequence, there has been a large increase in tear fluid investigation, predominantly in the field of ocular surface disease. Because tears are a more accessible and less complex body fluid than serum or plasma, and sampling is much less invasive, research is starting to focus on how disease processes affect the proteomic, lipidomic and metabolomic composition of the tear film. By determining compositional changes to tear profiles, crucial pathways in disease progression may be identified, allowing more predictive and personalised therapy for the individual. This article provides an overview of the putative tear fluid biomarkers identified to date, ranging from ocular surface disease and retinopathies to cancer and multiple sclerosis. Putative tear fluid biomarkers of ocular disorders, as well as the more recent field of systemic disease biomarkers, will be discussed.

    How a Diverse Research Ecosystem Has Generated New Rehabilitation Technologies: Review of NIDILRR’s Rehabilitation Engineering Research Centers

    Over 50 million United States citizens (1 in 6 people in the US) have a developmental, acquired, or degenerative disability. The average US citizen can expect to live 20% of his or her life with a disability. Rehabilitation technologies play a major role in improving the quality of life for people with a disability, yet widespread and highly challenging needs remain. Within the US, a major effort aimed at the creation and evaluation of rehabilitation technology has been the Rehabilitation Engineering Research Centers (RERCs) sponsored by the National Institute on Disability, Independent Living, and Rehabilitation Research. As envisioned at their conception by a panel of the National Academy of Sciences in 1970, these centers were intended to take a “total approach to rehabilitation”, combining medicine, engineering, and related science to improve the quality of life of individuals with a disability. Here, we review the scope, achievements, and ongoing projects of an unbiased sample of 19 currently active or recently terminated RERCs. Specifically, for each center, we briefly explain the needs it targets, summarize key historical advances, identify emerging innovations, and consider future directions. Our assessment from this review is that the RERC program indeed involves a multidisciplinary approach, with 36 professional fields involved, although 70% of research and development staff are in engineering fields, 23% in clinical fields, and only 7% in basic science fields; significantly, 11% of the professional staff have a disability related to their research. We observe that the RERC program has substantially diversified the scope of its work since the 1970s, addressing more types of disabilities using more technologies, and, in particular, often now focusing on information technologies.
RERC work also now often views users as integrated into an interdependent society through technologies that people both with and without disabilities co-use (such as the internet, wireless communication, and architecture). In addition, RERC research has evolved to view users as capable of improving outcomes through learning, exercise, and plasticity (rather than as static), which can be optimally timed. We provide examples of rehabilitation technology innovation produced by the RERCs that illustrate this increasingly diverse scope and evolving perspective. We conclude by discussing growth opportunities and possible future directions of the RERC program.

    Complexity Theory for a New Managerial Paradigm: A Research Framework

    In this work, we supply a theoretical framework for how organizations can embed complexity management and sustainable development into their policies and actions. The proposed framework may lead to a new management paradigm, attempting to link the main concepts of complexity theory, change management, knowledge management, sustainable development, and cybernetics. We highlight how processes of organizational change have occurred as organizations adapt to changes in various global and international business environments, and how this transformation has led to the shift toward the present innovation economy. We also point out how organizational change needs to address sustainability, so that the change is consistent with present needs without compromising the future.

    Guidelines for the use and interpretation of assays for monitoring autophagy (4th edition)

    In 2008, we published the first set of guidelines for standardizing research in autophagy. Since then, this topic has received increasing attention, and many scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Thus, it is important to regularly formulate updated guidelines for monitoring autophagy in different organisms. Despite numerous reviews, there continues to be confusion regarding acceptable methods to evaluate autophagy, especially in multicellular eukaryotes. Here, we present a set of guidelines for investigators to select and interpret methods to examine autophagy and related processes, and for reviewers to provide realistic and reasonable critiques of reports that are focused on these processes. These guidelines are not meant to be a dogmatic set of rules, because the appropriateness of any assay largely depends on the question being asked and the system being used. Moreover, no individual assay is perfect for every situation, calling for the use of multiple techniques to properly monitor autophagy in each experimental setting. Finally, several core components of the autophagy machinery have been implicated in distinct autophagic processes (canonical and noncanonical autophagy), implying that genetic approaches to block autophagy should rely on targeting two or more autophagy-related genes that ideally participate in distinct steps of the pathway. Along similar lines, because multiple proteins involved in autophagy also regulate other cellular pathways, including apoptosis, not all of them can be used as specific markers for bona fide autophagic responses. Here, we critically discuss current methods of assessing autophagy and the information they can, or cannot, provide. Our ultimate goal is to encourage intellectual and technical innovation in the field.

    Pan-cancer analysis of whole genomes

    Cancer is driven by genetic change, and the advent of massively parallel sequencing has enabled systematic documentation of this variation at the whole-genome scale(1-3). Here we report the integrative analysis of 2,658 whole-cancer genomes and their matching normal tissues across 38 tumour types from the Pan-Cancer Analysis of Whole Genomes (PCAWG) Consortium of the International Cancer Genome Consortium (ICGC) and The Cancer Genome Atlas (TCGA). We describe the generation of the PCAWG resource, facilitated by international data sharing using compute clouds. On average, cancer genomes contained 4-5 driver mutations when combining coding and non-coding genomic elements; however, in around 5% of cases no drivers were identified, suggesting that cancer driver discovery is not yet complete. Chromothripsis, in which many clustered structural variants arise in a single catastrophic event, is frequently an early event in tumour evolution; in acral melanoma, for example, these events precede most somatic point mutations and affect several cancer-associated genes simultaneously. Cancers with abnormal telomere maintenance often originate from tissues with low replicative activity and show several mechanisms of preventing telomere attrition to critical levels. Common and rare germline variants affect patterns of somatic mutation, including point mutations, structural variants and somatic retrotransposition. A collection of papers from the PCAWG Consortium describes non-coding mutations that drive cancer beyond those in the TERT promoter(4); identifies new signatures of mutational processes that cause base substitutions, small insertions and deletions, and structural variation(5,6); analyses timings and patterns of tumour evolution(7); describes the diverse transcriptional consequences of somatic mutation on splicing, expression levels, fusion genes and promoter activity(8,9); and evaluates a range of more specialized features of cancer genomes(8,10-18).