
    Personal Safety Culture: A New Measure for General Aviation Pilots

    Safety culture has been a subject of research for over three decades and is now widely accepted as a critical component of organizational safety programs both domestically and internationally. Through the development of a healthy safety culture, aviation organizations can improve safety processes, reduce mishaps, and mitigate risk more effectively. This is done through the holistic team efforts of an organization’s members and the organization’s leadership. But what about aviators who are not part of an organization? Is it possible to identify a personal safety culture defined outside of the traditional organization? And is it possible to create an instrument allowing pilots to conduct a self-assessment of their personal safety culture? The current research seeks to address these questions by developing such an instrument to measure personal safety culture in General Aviation pilots. The first version of the instrument was developed using resources from prior research studies and a literature review of over 160 publications. It was initially sent to experts in the civilian aviation, academic, and military sectors, who conducted face validity assessments. Once revised, the instrument was tested using a sample drawn from a large southeastern university in the United States. All pilots were required to hold at least a private pilot certificate. A factor analysis conducted on the results of the preliminary study indicates factors that account for a significant amount of the variance in the model. These results are presented with recommendations for application of the self-assessment and thoughts on future research.
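    As an illustration of the analysis step mentioned above, the sketch below runs an exploratory factor analysis on simulated Likert-scale survey responses and reports the share of item variance each factor accounts for; the sample size, item count, and three-factor structure are assumptions for the example, not the study's actual instrument or data.

```python
# Illustrative only: exploratory factor analysis on simulated survey data.
# Sample size, item count, and factor count are assumptions, not values
# taken from the personal safety culture study.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_pilots, n_items = 120, 12
responses = rng.integers(1, 6, size=(n_pilots, n_items)).astype(float)  # 1-5 Likert items

X = StandardScaler().fit_transform(responses)          # standardise each item
fa = FactorAnalysis(n_components=3, rotation="varimax")
fa.fit(X)

loadings = fa.components_.T                            # shape: (items, factors)
explained = (loadings ** 2).sum(axis=0) / n_items      # proportion of item variance per factor
for k, share in enumerate(explained, start=1):
    print(f"Factor {k}: {share:.1%} of item variance")
```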

    Field evaluation of a new antibody-based diagnostic for Schistosoma haematobium and S. mansoni at the point-of-care in northeast Zimbabwe

    BACKGROUND: Rapid diagnostic tests (RDTs) for use at the point-of-care (POC) are likely to become increasingly useful as large-scale control programmes for schistosomiasis get underway. Given the low sensitivity of the reference standard egg count methods in detecting light infections, more sensitive tests will be required to monitor efforts aimed at eliminating schistosomiasis, as advocated by World Health Assembly Resolution 65.21 passed in 2012. METHODS: A recently developed RDT incorporating Schistosoma mansoni cercarial transformation fluid (SmCTF) for detection of anti-schistosome antibodies in human blood was evaluated here in children (mean age: 7.65 years; age range: 1-12 years) carrying light S. mansoni and S. haematobium infections in a schistosome-endemic area of Zimbabwe, by comparison to standard parasitological techniques (i.e. the Kato-Katz faecal smear and urine filtration). Enzyme-linked immunosorbent assays (ELISAs) incorporating S. haematobium antigen preparations were also employed for additional comparison. RESULTS: The sensitivity of the SmCTF-RDT compared to standard parasitological methods was 100% while the specificity was 39.5%. Sera from RDT “false-positive” children showed significantly higher antibody titres in IgM-cercarial antigen preparation (CAP) and IgM-soluble egg antigen (SEA) ELISA assays than sera from children identified by parasitology as “true-negatives”. CONCLUSIONS: Although further evaluations are necessary using more accurate reference standard tests, these results indicate that the RDT could be a useful tool for rapid prevalence-mapping of both S. mansoni and S. haematobium in schistosome-endemic areas. It is affordable, user-friendly and allows for diagnosis of both schistosome species at the POC.
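    For readers unfamiliar with the two headline figures, the short sketch below shows how sensitivity and specificity fall out of a 2x2 table of RDT results against the parasitological reference standard; the cell counts are hypothetical values chosen only to reproduce the reported 100% and 39.5%, not the study's actual counts.

```python
# Hedged illustration: sensitivity and specificity from a 2x2 table of
# RDT results vs the parasitological reference standard. The counts are
# hypothetical, chosen only to reproduce the reported 100% and 39.5%.
def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # infected children the RDT detects
    specificity = tn / (tn + fp)   # uninfected children the RDT rules out
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=40, fn=0, tn=60, fp=92)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")  # 100.0%, 39.5%
```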

    A certified plasmid reference material for the standardisation of BCR-ABL1 mRNA quantification by real-time quantitative PCR

    Serial quantification of BCR–ABL1 mRNA is an important therapeutic indicator in chronic myeloid leukaemia, but there is substantial variation in the results reported by different laboratories.

    Multiorgan MRI findings after hospitalisation with COVID-19 in the UK (C-MORE): a prospective, multicentre, observational cohort study

    Introduction: The multiorgan impact of moderate to severe coronavirus infections in the post-acute phase is still poorly understood. We aimed to evaluate the excess burden of multiorgan abnormalities after hospitalisation with COVID-19, evaluate their determinants, and explore associations with patient-related outcome measures. Methods: In a prospective, UK-wide, multicentre MRI follow-up study (C-MORE), adults (aged ≥18 years) discharged from hospital following COVID-19 who were included in Tier 2 of the Post-hospitalisation COVID-19 study (PHOSP-COVID) and contemporary controls with no evidence of previous COVID-19 (SARS-CoV-2 nucleocapsid antibody negative) underwent multiorgan MRI (lungs, heart, brain, liver, and kidneys) with quantitative and qualitative assessment of images and clinical adjudication when relevant. Individuals with end-stage renal failure or contraindications to MRI were excluded. Participants also underwent detailed recording of symptoms, and physiological and biochemical tests. The primary outcome was the excess burden of multiorgan abnormalities (two or more organs) relative to controls, with further adjustments for potential confounders. The C-MORE study is ongoing and is registered with ClinicalTrials.gov, NCT04510025. Findings: Of 2710 participants in Tier 2 of PHOSP-COVID, 531 were recruited across 13 UK-wide C-MORE sites. After exclusions, 259 C-MORE patients (mean age 57 years [SD 12]; 158 [61%] male and 101 [39%] female) who were discharged from hospital with PCR-confirmed or clinically diagnosed COVID-19 between March 1, 2020, and Nov 1, 2021, and 52 non-COVID-19 controls from the community (mean age 49 years [SD 14]; 30 [58%] male and 22 [42%] female) were included in the analysis. Patients were assessed at a median of 5·0 months (IQR 4·2–6·3) after hospital discharge. Compared with non-COVID-19 controls, patients were older, living with more obesity, and had more comorbidities. Multiorgan abnormalities on MRI were more frequent in patients than in controls (157 [61%] of 259 vs 14 [27%] of 52; p<0·0001) and independently associated with COVID-19 status (odds ratio [OR] 2·9 [95% CI 1·5–5·8]; p_adjusted=0·0023) after adjusting for relevant confounders. Compared with controls, patients were more likely to have MRI evidence of lung abnormalities (p=0·0001; parenchymal abnormalities), brain abnormalities (p<0·0001; more white matter hyperintensities and regional brain volume reduction), and kidney abnormalities (p=0·014; lower medullary T1 and loss of corticomedullary differentiation), whereas cardiac and liver MRI abnormalities were similar between patients and controls. Patients with multiorgan abnormalities were older (difference in mean age 7 years [95% CI 4–10]; mean age of 59·8 years [SD 11·7] with multiorgan abnormalities vs mean age of 52·8 years [11·9] without multiorgan abnormalities; p<0·0001), more likely to have three or more comorbidities (OR 2·47 [1·32–4·82]; p_adjusted=0·0059), and more likely to have a more severe acute infection (acute CRP >5 mg/L, OR 3·55 [1·23–11·88]; p_adjusted=0·025) than those without multiorgan abnormalities. Presence of lung MRI abnormalities was associated with a two-fold higher risk of chest tightness, and multiorgan MRI abnormalities were associated with severe and very severe persistent physical and mental health impairment (PHOSP-COVID symptom clusters) after hospitalisation. Interpretation: After hospitalisation for COVID-19, people are at risk of multiorgan abnormalities in the medium term. Our findings emphasise the need for proactive multidisciplinary care pathways, with the potential for imaging to guide surveillance frequency and therapeutic stratification.
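    As a worked example of the headline comparison, the sketch below computes the crude (unadjusted) odds ratio directly from the counts quoted in the abstract; it differs from the reported OR of 2·9, which is adjusted for confounders.

```python
# Crude odds ratio from the counts reported in the abstract:
# 157/259 patients vs 14/52 controls with multiorgan MRI abnormalities.
# The published OR of 2.9 is adjusted for confounders, so it is lower.
def odds_ratio(events_a, total_a, events_b, total_b):
    odds_a = events_a / (total_a - events_a)
    odds_b = events_b / (total_b - events_b)
    return odds_a / odds_b

print(f"crude OR = {odds_ratio(157, 259, 14, 52):.2f}")  # ~4.18 before adjustment
```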

    Testing a global standard for quantifying species recovery and assessing conservation impact.

    Recognizing the imperative to evaluate species recovery and conservation impact, in 2012 the International Union for Conservation of Nature (IUCN) called for development of a "Green List of Species" (now the IUCN Green Status of Species). A draft Green Status framework for assessing species' progress toward recovery, published in 2018, proposed 2 separate but interlinked components: a standardized method (i.e., measurement against benchmarks of species' viability, functionality, and preimpact distribution) to determine current species recovery status (herein species recovery score) and application of that method to estimate past and potential future impacts of conservation based on 4 metrics (conservation legacy, conservation dependence, conservation gain, and recovery potential). We tested the framework with 181 species representing diverse taxa, life histories, biomes, and IUCN Red List categories (extinction risk). Based on the observed distribution of species' recovery scores, we propose the following species recovery categories: fully recovered, slightly depleted, moderately depleted, largely depleted, critically depleted, extinct in the wild, and indeterminate. Fifty-nine percent of tested species were considered largely or critically depleted. Although there was a negative relationship between extinction risk and species recovery score, variation was considerable. Some species in lower risk categories were assessed as farther from recovery than those at higher risk. This emphasizes that species recovery is conceptually different from extinction risk and reinforces the utility of the IUCN Green Status of Species to more fully understand species conservation status. Although extinction risk did not predict conservation legacy, conservation dependence, or conservation gain, it was positively correlated with recovery potential. Only 1.7% of tested species were categorized as zero across all 4 of these conservation impact metrics, indicating that conservation has played, or will play, a role in improving or maintaining species status for the vast majority of these species. Based on our results, we devised an updated assessment framework that introduces the option of using a dynamic baseline to assess future impacts of conservation over the short term, to avoid the misleading results that were generated in a small number of cases, and that redefines the short term as 10 years to better align with conservation planning. These changes are reflected in the IUCN Green Status of Species Standard.
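    To make the benchmark-scoring idea concrete, the toy sketch below computes a recovery score as a percentage of the maximum attainable score across spatial units; the state labels, weights, and units are illustrative assumptions, not the values defined in the IUCN Green Status of Species Standard.

```python
# Toy illustration of scoring recovery against per-unit benchmarks.
# State names, weights, and units are assumptions for the example; the
# IUCN Green Status Standard defines the actual states and weighting.
STATE_WEIGHT = {"absent": 0, "present": 1, "viable": 2, "functional": 3}

def recovery_score(unit_states):
    """Score as a percentage of the maximum attainable across all spatial units."""
    max_total = len(unit_states) * max(STATE_WEIGHT.values())
    return 100 * sum(STATE_WEIGHT[s] for s in unit_states) / max_total

units = ["functional", "viable", "present", "absent", "viable"]  # hypothetical species
print(f"recovery score = {recovery_score(units):.1f}%")          # 53.3% in this example
```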

    Should payments for biodiversity conservation be based on action or results?

    There is growing interest in the potential of payments for ecosystem services (PES) to encourage land managers to protect and enhance the environment. However, questions remain about how PES agreements should be designed. There is a division between schemes that structure payments by action or by results, with most biodiversity PES schemes, including European agri-environment schemes, paying by action, for example by incentivising land managers to carry out actions believed to increase biodiversity. Payment by results is a common incentive structure in the private sector (e.g. labourers doing piece work or no-win no-fee lawyers) but rarer in PES. Using a theoretical modelling approach, we investigate the conditions under which each way of structuring payments may be more cost-effective in a biodiversity PES. Payment by action is favoured where there is a clear action that can be specified at an appropriate level and to which biodiversity is sensitive. We found that payment by results is favoured in degraded landscapes, as incentives are created for managers to use their private knowledge and join the scheme only if they can produce the biodiversity services targeted by the scheme. Payment by results is also favoured where biodiversity is less sensitive to conservation action and when it is difficult for a central agency to determine an appropriate level of conservation action. This is because payment by results allows individual managers to optimise their level of action. The relative cost of monitoring action (compliance with an agreement to manage in a certain way) versus results (the presence of biodiversity) has a substantial effect on which payment structure is more efficient only when the central agency can accurately set an appropriate level of action. We illustrate these principles with examples based on agri-environment schemes. Synthesis and applications: Payment by results deserves more attention from those designing biodiversity PES (be they agri-environment schemes in agricultural landscapes or direct payment schemes in more intact ecosystems). This paper provides a formal framework to help policy makers identify the conditions under which payment by results or payment by action is most likely to yield cost-effective biodiversity conservation.
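    The self-selection mechanism behind payment by results can be sketched with a toy model, given below; the cost and probability distributions, payment levels, and decision rule are assumptions made for illustration and are not the authors' formal model.

```python
# Toy self-selection model (illustrative assumptions, not the paper's model):
# managers differ in their private cost of acting and in how likely their
# action is to deliver the biodiversity result; compare spend per expected
# result when paying for action vs paying only for results.
import random
random.seed(1)

managers = [{"cost": random.uniform(5, 20),           # private cost of acting
             "p_result": random.uniform(0.2, 0.9)}    # chance action yields the result
            for _ in range(1000)]
PAY_ACTION, PAY_RESULT = 15.0, 25.0                   # assumed payment levels

def cost_per_result(pay_by_result):
    spend = results = 0.0
    for m in managers:
        expected_pay = PAY_RESULT * m["p_result"] if pay_by_result else PAY_ACTION
        if expected_pay >= m["cost"]:                  # manager joins only if worthwhile
            spend += expected_pay
            results += m["p_result"]
    return spend / results if results else float("inf")

print("cost per expected result, pay by action :", round(cost_per_result(False), 2))
print("cost per expected result, pay by result :", round(cost_per_result(True), 2))
```

    Under payment by results, only managers whose private knowledge suggests the result is likely enough to cover their costs enrol, which is the intuition the abstract describes for degraded landscapes.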

    New horizons for managing the environment: a review of coupled social-ecological systems modeling

    Conventional approaches to natural resource management are increasingly challenged by environmental problems that are embedded in highly complex systems with profound uncertainties. These so‐called social‐ecological systems (SESs) are characterized by strong links between the social and the ecological system and multiple interactions across spatial and temporal scales. New approaches are needed to manage those tightly coupled systems; however, basic understanding of their nonlinear behavior is still missing. Modeling is a traditional tool in natural resource management to study complex, dynamic systems. There is a long tradition of SES modeling, but the approach is now being more widely recognized in other fields, such as ecological and economic modeling, where issues such as nonlinear ecological dynamics and complex human decision making are receiving more attention. SES modeling is maturing as a discipline in its own right, incorporating ideas from other interdisciplinary fields such as resilience or complex systems research. In this paper, we provide an overview of the emergence and state of the art of this cross‐cutting field. Our analysis reveals the substantial potential of SES models to address issues that are of utmost importance for managing complex human‐environment relationships, such as: (i) the implications of ecological and social structure for resource management, (ii) uncertainty in natural and social systems and ways to address it, (iii) the role of coevolutionary processes in the dynamics of SESs, and (iv) the implications of microscale human decision making for sustainable resource management and conservation. The complexity of SESs and the lack of a common analytical framework, however, also pose significant challenges for this emerging field. There are clear research needs with respect to: (i) approaches that go beyond rather simple specifications of human decision making, (ii) development of coping strategies to deal with (irreducible) uncertainties, (iii) more explicit modeling of feedbacks between the social and ecological systems, and (iv) a conceptual and methodological framework for analyzing and modeling SESs. We provide ideas for tackling some of these challenges and indicate potential key focal areas for SES modeling in the future.
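    As a minimal illustration of the social-ecological feedbacks the review highlights, the sketch below couples a logistic resource to harvesting effort that adapts to the resource's state; the functional forms and parameter values are assumptions for the example, not a model drawn from the reviewed literature.

```python
# Minimal coupled social-ecological sketch (illustrative assumptions only):
# a logistic resource harvested by effort that responds to recent stock levels.
r, K = 0.4, 100.0        # resource growth rate and carrying capacity
q = 0.02                 # catchability (harvest per unit effort per unit stock)
stock, effort = 60.0, 10.0

for year in range(50):
    harvest = q * effort * stock
    stock += r * stock * (1 - stock / K) - harvest     # ecological dynamics
    effort += 0.5 * (stock / K - 0.5) * effort         # social response: effort grows
    effort = max(effort, 0.0)                          # when stock looks plentiful

print(f"after 50 years: stock = {stock:.1f}, effort = {effort:.1f}")
```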

    Homelessness in autistic women: Defining the research agenda

    Get PDF
    Background: Current evidence suggests that autistic individuals are at high risk of becoming and remaining trapped in a cycle of homelessness. Key risk factors for homelessness disproportionately affect autistic people; however, we have limited understanding of how best to support autistic individuals accessing services. This gap in the evidence base is particularly acute for autistic women. Objective: As a first step to address this gap, we aimed to (1) map gaps in knowledge and practice; (2) identify priority areas for research; and (3) develop recommendations for how to implement novel research and practice in this area. Methods: We conducted a collaborative workshop with an interdisciplinary group of 26 stakeholders to address our aims. Stakeholders included autistic women with experience of homelessness, researchers, health professionals, NGO representatives, and service providers. Results and recommendations: Two research priority areas were identified: mapping the prevalence and demographics of autistic women experiencing homelessness, and delineating risk and protective factors for homelessness. Priority areas for improving provision of support included staff training to improve communication, awareness of autism, and trust-building with service providers, as well as recommendations for practical provision of support by services. Conclusions: Future research is critical to increase our knowledge of the pathways leading to homelessness for autistic women and of the barriers to engaging with homelessness and social services. We need to use this knowledge to develop new ways of delivering targeted and inclusive support for autistic women, which could prevent or shorten periods of homelessness.