
    Cervical dystonia incidence and diagnostic delay in a multiethnic population.

    Background: Current cervical dystonia (CD) incidence estimates are based on small numbers in relatively ethnically homogeneous populations. The frequency and consequences of delayed CD diagnosis are poorly characterized. Objectives: To determine CD incidence and characterize CD diagnostic delay within a large, multiethnic integrated health maintenance organization. Methods: We identified incident CD cases using electronic medical records and multistage screening of more than 3 million Kaiser Permanente Northern California members from January 1, 2003, to December 31, 2007. A final diagnosis was made by movement disorders specialist consensus. Diagnostic delay was measured by questionnaire and health utilization data. Incidence rates were estimated assuming a Poisson distribution of cases and directly standardized to the 2000 U.S. census. Multivariate logistic regression models were used to assess diagnoses and behaviors preceding CD compared with matched controls, adjusting for age, sex, and membership duration. Results: CD incidence was 1.18/100,000 person-years (95% confidence interval [CI], 0.35-2.0; women, 1.81; men, 0.52) based on 200 cases over 15.4 million person-years. Incidence increased with age. Half of the CD patients interviewed reported diagnostic delay. Diagnoses more common in CD patients before the index date included essential tremor (odds ratio [OR] 68.1; 95% CI, 28.2-164.5), cervical disc disease (OR 3.83; 95% CI, 2.8-5.2), neck sprain/strain (OR 2.77; 95% CI, 1.99-3.62), anxiety (OR 2.24; 95% CI, 1.63-3.11), and depression (OR 1.94; 95% CI, 1.4-2.68). Conclusions: CD incidence is greater in women and increases with age. Diagnostic delay is common and associated with adverse effects. © 2019 International Parkinson and Movement Disorder Society
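The incidence calculation described in the abstract can be illustrated with a small sketch. This computes the crude (non-standardized) rate with a normal-approximation Poisson confidence interval; the published 1.18/100,000 figure is directly age-standardized to the 2000 U.S. census, so the crude rate comes out slightly higher.

```python
import math

def poisson_rate_per_100k(cases, person_years, z=1.96):
    """Crude incidence rate per 100,000 person-years, with a
    normal-approximation confidence interval for a Poisson count."""
    rate = cases / person_years * 1e5
    half_width = z * math.sqrt(cases) / person_years * 1e5
    return rate, rate - half_width, rate + half_width

# 200 cases over 15.4 million person-years (figures from the abstract)
rate, lo, hi = poisson_rate_per_100k(200, 15.4e6)
print(f"crude rate: {rate:.2f} per 100,000 PY (95% CI {lo:.2f}-{hi:.2f})")
```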

    Evidence synthesis as the basis for decision analysis: a method of selecting the best agricultural practices for multiple ecosystem services

    Agricultural management practices have impacts not only on crops and livestock, but also on soil, water, wildlife, and ecosystem services. Agricultural research provides evidence about these impacts, but it is unclear how this evidence should be used to make decisions. Two methods are widely used in decision making: evidence synthesis and decision analysis. However, a system of evidence-based decision making that integrates these two methods has not yet been established. Moreover, the standard methods of evidence synthesis have a narrow focus (e.g., the effects of one management practice), but the standard methods of decision analysis have a wide focus (e.g., the comparative effectiveness of multiple management practices). Thus, there is a mismatch between the outputs from evidence synthesis and the inputs that are needed for decision analysis. We show how evidence for a wide range of agricultural practices can be reviewed and summarized simultaneously (“subject-wide evidence synthesis”), and how this evidence can be assessed by experts and used for decision making (“multiple-criteria decision analysis”). We show how these methods could be used by The Nature Conservancy (TNC) in California to select the best management practices for multiple ecosystem services in Mediterranean-type farmland and rangeland, based on a subject-wide evidence synthesis that was published by Conservation Evidence (www.conservationevidence.com). This method of “evidence-based decision analysis” could be used at different scales, from the local scale (farmers deciding which practices to adopt) to the national or international scale (policy makers deciding which practices to support through agricultural subsidies or other payments for ecosystem services). We discuss the strengths and weaknesses of this method, and we suggest some general principles for improving evidence synthesis as the basis for multi-criteria decision analysis.
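The multiple-criteria decision analysis the abstract refers to can be sketched in its simplest weighted-sum form. The practice names, criterion scores, and weights below are invented for illustration; they are not values from the Conservation Evidence synthesis.

```python
# Minimal weighted-sum MCDA sketch: score each practice on several
# criteria, weight the criteria, and rank practices by total score.
scores = {  # hypothetical expert-assessed effectiveness on a 0-1 scale
    "cover cropping":         {"crop yield": 0.5, "soil health": 0.9, "water quality": 0.7},
    "reduced tillage":        {"crop yield": 0.6, "soil health": 0.7, "water quality": 0.5},
    "riparian buffer strips": {"crop yield": 0.2, "soil health": 0.4, "water quality": 0.9},
}
weights = {"crop yield": 0.5, "soil health": 0.3, "water quality": 0.2}  # sums to 1

def weighted_score(criteria):
    """Aggregate one practice's criterion scores into a single value."""
    return sum(weights[c] * v for c, v in criteria.items())

ranking = sorted(scores, key=lambda p: weighted_score(scores[p]), reverse=True)
for practice in ranking:
    print(f"{practice}: {weighted_score(scores[practice]):.2f}")
```

Changing the weights (e.g., a policy maker prioritizing water quality over yield) reorders the ranking, which is the point of separating evidence (the scores) from values (the weights).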

    Advancing specificity in delirium: The delirium subtyping initiative

    BACKGROUND: Delirium, a common syndrome with heterogeneous etiologies and clinical presentations, is associated with poor long-term outcomes. Recording and analyzing all delirium equally could be hindering the field's understanding of pathophysiology and identification of targeted treatments. Current delirium subtyping methods reflect clinically evident features but likely do not account for underlying biology. METHODS: The Delirium Subtyping Initiative (DSI) held three sessions with an international panel of 25 experts. RESULTS: Meeting participants suggest further characterization of delirium features to complement the existing Diagnostic and Statistical Manual of Mental Disorders Fifth Edition Text Revision diagnostic criteria. These should span the range of delirium-spectrum syndromes and be measured consistently across studies. Clinical features should be recorded in conjunction with biospecimen collection, where feasible, in a standardized way, to determine temporal associations of biology coincident with clinical fluctuations. DISCUSSION: The DSI made recommendations spanning the breadth of delirium research, including clinical features, study planning, data collection, and data analysis for characterization of candidate delirium subtypes. HIGHLIGHTS: Delirium features must be clearly defined, standardized, and operationalized. Large datasets incorporating both clinical and biomarker variables should be analyzed together. Delirium screening should incorporate communication and reasoning.

    Sustainable and Low Greenhouse Gas Emitting Rice Production in Latin America and the Caribbean: A Review on the Transition from Ideality to Reality.

    The burgeoning demand for rice in Latin America and the Caribbean (LAC) exceeds supply, resulting in a rice deficit. To overcome this challenge, rice production should be increased, albeit sustainably. However, since rice production is associated with increases in the atmospheric concentration of two greenhouse gases (GHGs), namely methane (CH4) and nitrous oxide (N2O), the challenge lies in ensuring that production increases are not accompanied by an increase in GHG emissions and thus do not cause an increase in GHG emission intensities. Based on current understanding of the drivers of CH4 and N2O production, we provide here insights on the potential climate change mitigation benefits of management and technological options (i.e., seeding, tillage, irrigation, residue management) pursued in the LAC region. Studies conducted in the LAC region show that intermittent irrigation or alternate wetting and drying of rice fields reduces CH4 emissions by 25–70% without increasing N2O emissions. Results on yield changes associated with intermittent irrigation remain inconclusive. Compared to conventional tillage, no-tillage and anticipated tillage (i.e., fall tillage) cause 21% and 25% reductions in CH4 emissions, respectively. From the existing literature, it is clear that the mitigation potential of most management strategies pursued in the LAC region still needs to be quantified while acknowledging country-specific conditions. While breeding high-yielding and low-emitting rice varieties may represent the most promising and possibly sustainable approach for achieving GHG emission reductions without demanding major changes in on-farm management practices, this is rather idealistic. We contend that a more realistic approach for realizing low GHG emitting rice production systems is to focus on increasing rice yields, for obvious food security reasons, which, while not reducing absolute emissions, should translate to a reduction in GHG emission intensities. Moreover, there is a need to explore creative ways of incentivizing the adoption of promising combinations of management and technological options.
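The distinction the abstract draws between absolute emissions and emission intensity can be made concrete with a short sketch. The emission and yield figures below are illustrative assumptions, not values from the review.

```python
def emission_intensity(emissions_kg_co2e_per_ha, yield_t_per_ha):
    """GHG emission intensity: emissions per tonne of rice produced."""
    return emissions_kg_co2e_per_ha / yield_t_per_ha

# Hypothetical paddy: absolute emissions held constant while yield rises.
baseline = emission_intensity(1200, 4.0)   # kg CO2e per tonne at 4 t/ha
improved = emission_intensity(1200, 6.0)   # kg CO2e per tonne at 6 t/ha
print(f"intensity falls {1 - improved / baseline:.0%} "
      f"with no change in absolute emissions")
```

This is the arithmetic behind the authors' argument: raising yields lowers the denominator, so emission intensity drops even when per-hectare emissions are unchanged.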
