
    Investigating local policy drivers for alcohol harm prevention: a comparative case study of two local authorities in England

    Background: The considerable challenges associated with implementing national-level alcohol policies have encouraged a renewed focus on the prospects for local-level policies in the UK and elsewhere. We adopted a case study approach to identify the major characteristics and drivers of differences in the patterns of local alcohol policies and services in two contrasting local authority (LA) areas in England. Methods: Data were collected via thirteen semi-structured interviews with key informants (including public health, licensing and trading standards) and documentary analysis, including harm reduction strategies and statements of licensing policy. A two-stage thematic analysis was used to categorise all relevant statements into seven over-arching themes, by which document sources were then also analysed. Results: Three of the seven over-arching themes (drink environment, treatment services, and barriers and facilitators) provided the most explanatory detail informing the contrasting policy responses of the two LAs: LA1 pursued a risk-informed strategy via a specialist police team working proactively with problem premises and screening systematically to identify riskier drinking. LA2 adopted a more upstream regulatory approach built around restrictions on availability, with less emphasis on co-ordinated screening and treatment measures. Conclusion: New powers over alcohol policy for LAs in England can produce markedly different policies for reducing alcohol-related harm. These differences are rooted in economic, opportunistic, organisational and personnel factors particular to the LAs themselves, and may lead to closely tailored solutions in some policy areas but poorer co-ordination and attention in others.

    The Symmetry of Partner Modelling

    Collaborative learning has often been associated with the construction of a shared understanding of the situation at hand. The psycholinguistic mechanisms at work while establishing common ground are the object of scientific controversy. We postulate that collaborative tasks require some level of mutual modelling, i.e. that each partner needs some model of what the other partners know/want/intend at a given time. We use the term “some model” to stress that this model is not necessarily detailed or complete, but that we acquire some representation of the persons we interact with. The question we address is: does the quality of the partner model depend upon the modeler’s ability to represent his or her partner? Upon the modelee’s ability to make his or her state clear to the modeler? Or rather, upon the quality of their interactions? We address this question by comparing the respective accuracies of the models built by different team members. We report on 5 experiments on collaborative problem solving or collaborative learning that vary in terms of tasks (how important it is to build an accurate model) and settings (how difficult it is to build an accurate model). In 4 studies, the accuracy of the model that A built about B was correlated with the accuracy of the model that B built about A, which seems to imply that the quality of interactions matters more than individual abilities when building mutual models. However, these findings do not rule out the possibility that individual abilities also contribute to the quality of the modelling process.
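    The symmetry result lends itself to a simple check. Below is an illustrative sketch (not the authors' analysis code; the per-dyad accuracy values are hypothetical) of how the correlation between the accuracy of A's model of B and B's model of A could be computed:

```python
# Illustrative sketch: symmetry of partner-model accuracy across dyads.
# Data are hypothetical, invented for this example.
from statistics import correlation  # Pearson correlation, Python 3.10+

# Hypothetical per-dyad accuracies (proportion of partner items judged correctly)
acc_a_models_b = [0.62, 0.71, 0.55, 0.80, 0.67, 0.74, 0.59, 0.69]
acc_b_models_a = [0.58, 0.75, 0.52, 0.77, 0.70, 0.69, 0.61, 0.72]

# A positive correlation indicates symmetry: when A models B well,
# B tends to model A well, pointing to interaction quality as the driver.
r = correlation(acc_a_models_b, acc_b_models_a)
print(f"Pearson r between mutual model accuracies: {r:.2f}")
```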

    Mutational analysis of Rift Valley fever phlebovirus nucleocapsid protein indicates novel conserved, functional amino acids

    Rift Valley fever phlebovirus (RVFV; Phenuiviridae, Phlebovirus) is an important mosquito-borne pathogen of both humans and ruminants. The RVFV genome comprises three single-stranded RNA segments of negative or ambisense polarity. The small (S) segment encodes both the nucleocapsid protein (N) and the non-structural protein (NSs). The N protein is responsible for the formation of the viral ribonucleoprotein (RNP) complexes, which are essential in the virus life cycle and for the transcription and replication of the viral genome. There is currently limited knowledge of the roles of the RVFV nucleocapsid protein in viral infection beyond its key functions: N protein multimerisation, encapsidation of the RNA genome, and interaction with the RNA-dependent RNA polymerase, L. By bioinformatic comparison of the N sequences of fourteen phleboviruses, mutational analysis, minigenome assays and packaging assays, we have further characterised the RVFV N protein. Amino acids P11 and F149 in RVFV N play an essential role in the function of RNPs, are associated neither with N protein multimerisation nor with known nucleocapsid protein functions, and may have additional roles in the virus life cycle. The Y30 mutant exhibited increased minigenome activity despite reduced RNA-binding capacity. Additionally, we have determined that the N-terminal arm of the N protein is not involved in N–L interactions. Elucidating the fundamental processes that involve the nucleocapsid protein will add to our understanding of this important viral protein and may inform future studies on the development of novel antiviral strategies.
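    As a rough illustration of the kind of bioinformatic comparison described, the sketch below flags fully conserved positions across aligned nucleocapsid sequences. The toy alignment and sequence fragments are invented for the example; a real analysis would start from a proper multiple sequence alignment of the fourteen phlebovirus N sequences:

```python
# Illustrative sketch (not the authors' pipeline): flag fully conserved
# positions across pre-aligned phlebovirus nucleocapsid (N) sequences.
# The toy alignment below is invented for this example.
toy_alignment = {
    "RVFV_N":  "MDNYQELAIQF",
    "virus_B": "MSNYQELALQF",
    "virus_C": "MANYQELAVQF",
}

length = len(next(iter(toy_alignment.values())))
assert all(len(s) == length for s in toy_alignment.values()), "sequences must be pre-aligned"

conserved = []
for i in range(length):
    residues = {seq[i] for seq in toy_alignment.values()}
    if len(residues) == 1:                         # identical residue in every sequence
        conserved.append((i + 1, residues.pop()))  # 1-based alignment position

print("Fully conserved positions:", conserved)
```

Conserved positions identified this way are natural candidates for the kind of mutational analysis and minigenome assays the abstract describes.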

    Surgical resectability of pancreatic adenocarcinoma: CTA

    Imaging studies play an important role in the diagnosis and management of patients with pancreatic adenocarcinoma. Computed tomography (CT) is the most widely available and best validated modality for imaging these patients. Meticulous technique following a well-designed pancreas protocol is essential for maximizing the diagnostic efficacy of CT. After the diagnosis of pancreatic adenocarcinoma is made, the key to management is staging to determine resectability. In practice, staging often entails predicting the presence or absence of vascular invasion by tumor, for which several radiologic grading systems exist. With advances in surgical techniques, the definition of resectability is evolving, and it is crucial that radiologists understand the implications of findings that are relevant to the determination of resectability.

    Feasibility of MR-Based Body Composition Analysis in Large Scale Population Studies

    Introduction: Quantitative and accurate measurements of fat and muscle in the body are important for the prevention and diagnosis of diseases related to obesity and muscle degeneration. Manually segmenting muscle and fat compartments in MR body images is laborious and time-consuming, hindering implementation in large cohorts. In the present study, the feasibility and success rate of a Dixon-based MR scan followed by an intensity-normalised, non-rigid, multi-atlas based segmentation was investigated in a cohort of 3,000 subjects. Materials and Methods: 3,000 participants in the in-depth phenotyping arm of the UK Biobank imaging study underwent a comprehensive MR examination. All subjects were scanned using a 1.5 T MR scanner with the dual-echo Dixon Vibe protocol, covering neck to knees. Subjects were scanned with six slabs in the supine position, without a localizer. Automated body composition analysis was performed using the AMRA Profiler™ system to segment and quantify visceral adipose tissue (VAT), abdominal subcutaneous adipose tissue (ASAT) and thigh muscles. Technical quality assurance was performed and a standard set of acceptance/rejection criteria was established. Descriptive statistics were calculated for all volume measurements and quality assurance metrics. Results: Of the 3,000 subjects, 2,995 (99.83%) were analysable for body fat, 2,828 (94.27%) were analysable when body fat and one thigh were included, and 2,775 (92.50%) were fully analysable for body fat and both thigh muscles. Datasets that could not be analysed mainly had missing slabs in the acquisition, or the patient was positioned such that large parts of the volume were outside the field of view. Discussion and Conclusions: This study showed that the rapid UK Biobank MR protocol was well tolerated by most subjects and sufficiently robust to achieve a very high success rate for body composition analysis. This research has been conducted using the UK Biobank Resource.
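    The reported success rates follow directly from the counts given in the abstract; a quick check:

```python
# Quick check of the analysability rates reported in the abstract
# (counts taken directly from the text; percentages are simple ratios).
total = 3000
analysable = {
    "body fat only": 2995,
    "body fat + one thigh": 2828,
    "body fat + both thighs": 2775,
}
for label, n in analysable.items():
    print(f"{label}: {n}/{total} = {100 * n / total:.2f}%")
# body fat only: 2995/3000 = 99.83%
# body fat + one thigh: 2828/3000 = 94.27%
# body fat + both thighs: 2775/3000 = 92.50%
```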

    An Exploratory Study of Primary Care Physician Decision Making Regarding Total Joint Arthroplasty

    BACKGROUND: For patients to experience the benefits of total joint arthroplasty (TJA), primary care physicians (PCPs) must know when to refer a patient for TJA and/or how to optimize nonsurgical treatment options for osteoarthritis (OA). OBJECTIVE: To evaluate the ability of PCPs to make these clinical treatment decisions. DESIGN AND PARTICIPANTS: A survey of PCPs in Indiana, using ten clinical vignettes. MEASUREMENTS: A test score (range 0 to 10) was computed based on the number of correct answers consistent with published explicit appropriateness criteria for TJA. We also collected demographic characteristics and physicians’ perceived success rate of TJA in terms of pain relief and functional improvement. RESULTS: 149 PCPs participated (response rate = 61%). The mean test score was 6.5 ± 1.5. Only 17% correctly identified the published success rate of TJA (i.e., ≥90%). In multivariate analysis, the only physician-related variables associated with test score were ethnicity, board-certification status, and perceived success rate of TJA. Physicians who were white (P = .001), board-certified (P = .04), and perceived a higher success rate of TJA (P = .004) had higher test scores. CONCLUSIONS: PCP knowledge of guideline-concordant care for OA could be improved, specifically in deciding when to consider TJA versus optimizing nonsurgical options. Moreover, the perception of the success rate of TJA may influence a clinician’s decision making.

    A comparison of methods to adjust for continuous covariates in the analysis of randomised trials

    BACKGROUND: Although covariate adjustment in the analysis of randomised trials can be beneficial, adjustment for continuous covariates is complicated by the fact that the association between covariate and outcome must be specified. Misspecification of this association can lead to reduced power and potentially incorrect conclusions regarding treatment efficacy. METHODS: We compared several methods of adjustment to determine which is best when the association between covariate and outcome is unknown. We assessed (a) dichotomisation or categorisation; (b) assuming a linear association with outcome; (c) using fractional polynomials with one (FP1) or two (FP2) polynomial terms; and (d) using restricted cubic splines with 3 or 5 knots. We evaluated each method using simulation and through a re-analysis of trial datasets. RESULTS: Methods that kept covariates continuous typically had higher power than methods based on categorisation. Dichotomisation, categorisation, and assuming a linear association all led to large reductions in power when the true association was non-linear. FP2 models and restricted cubic splines with 3 or 5 knots performed best overall. CONCLUSIONS: For the analysis of randomised trials we recommend (1) adjusting for continuous covariates even if their association with outcome is unknown; (2) keeping covariates continuous; and (3) using fractional polynomials with two polynomial terms or restricted cubic splines with 3 to 5 knots when a linear association is in doubt.
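    As a concrete illustration of the recommended approach, the sketch below adjusts a simulated trial analysis for a continuous covariate using a natural cubic spline (via patsy's cr() in a statsmodels formula) and contrasts it with dichotomisation. The data, variable names, and the 4-df spline are assumptions for the example, not from the paper:

```python
# Illustrative sketch, not the authors' code: adjusting a randomised-trial
# analysis for a continuous baseline covariate using a spline basis instead
# of dichotomisation. Data and names are invented for this example.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(20, 80, n)        # continuous baseline covariate (e.g. age)
treat = rng.integers(0, 2, n)     # 1:1 randomised treatment indicator
# Outcome with a non-linear covariate effect, where misspecification hurts
y = 0.5 * treat + 0.002 * (x - 50) ** 2 + rng.normal(0, 1, n)

df = pd.DataFrame({"y": y, "treat": treat, "x": x})

# Natural cubic spline for x via patsy's cr(); compare with dichotomisation
spline_fit = smf.ols("y ~ treat + cr(x, df=4)", data=df).fit()
dichot_fit = smf.ols("y ~ treat + I(x > 50)", data=df).fit()

print("spline-adjusted treatment effect:", spline_fit.params["treat"].round(3))
print("dichotomised treatment effect:  ", dichot_fit.params["treat"].round(3))
```

Keeping the covariate continuous and flexible, as the paper recommends, avoids the power loss that dichotomisation incurs when the true covariate-outcome association is non-linear.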

    Promoting Patient Safety and Preventing Medical Error in Emergency Departments

    An estimated 108,000 people die each year from potentially preventable iatrogenic injury. One in 50 hospitalized patients experiences a preventable adverse event. Up to 3% of these injuries and events take place in emergency departments. With long and detailed training, morbidity and mortality conferences, and an emphasis on practitioner responsibility, medicine has traditionally faced the challenges of medical error and patient safety through an approach focused almost exclusively on individual practitioners. Yet no matter how well trained and careful health care providers are, individuals will make mistakes because they are human. In general medicine, the study of adverse drug events has led the way to new methods of error detection and error prevention. A combination of chart reviews, incident logs, observation, and peer solicitation has provided a quantitative tool to demonstrate the effectiveness of interventions such as computer order entry and pharmacist order review. In emergency medicine (EM), error detection has focused on subjects of high liability: missed myocardial infarctions, missed appendicitis, and misread radiographs. Some system-level efforts in error prevention have focused on teamwork, on strengthening communication between pharmacists and emergency physicians, on automating drug dosing and distribution, and on rationalizing shifts. This article reviews the definitions, detection, and presentation of error in medicine and EM. Based on a review of the current literature, recommendations are offered to enhance the likelihood of error reduction in EM practice.
    Peer reviewed. Full text: http://deepblue.lib.umich.edu/bitstream/2027.42/74930/1/j.1553-2712.2000.tb00466.x.pd