
    Workshop to identify critical windows of exposure for children's health: neurobehavioral work group summary.

    This paper summarizes the deliberations of a work group charged with addressing specific questions relevant to risk estimation in developmental neurotoxicology. We focused on eight questions. a) Does it make sense to think about discrete windows of vulnerability in the development of the nervous system? If it does, which time periods are of greatest importance? b) Are there cascades of developmental disorders in the nervous system? For example, are there critical points that determine the course of development that can lead to differences in vulnerabilities at later times? c) Can information on critical windows suggest the most susceptible subgroups of children (i.e., age groups, socioeconomic status, geographic areas, race, etc.)? d) What are the gaps in existing data for the nervous system or end points of exposure to it? e) What are the best ways to examine exposure-response relationships and estimate exposures in vulnerable life stages? f) What other exposures that affect development at certain ages may interact with exposures of concern? g) How well do laboratory animal data predict human response? h) How can all of this information be used to improve risk assessment and public health (risk management)? In addressing these questions, we provide a brief overview of brain development from conception through adolescence and emphasize vulnerability to toxic insult throughout this period. Methodological issues focus on major variables that influence exposure or its detection through disruptions of behavior, neuroanatomy, or neurochemical end points. Supportive evidence from studies of major neurotoxicants is provided.

    Designing an automated clinical decision support system to match clinical practice guidelines for opioid therapy for chronic pain

    Background: Opioid prescribing for chronic pain is common and controversial, but recommended clinical practices are followed inconsistently in many clinical settings. Strategies for increasing adherence to clinical practice guideline recommendations are needed to increase effectiveness and reduce negative consequences of opioid prescribing in chronic pain patients. Methods: Here we describe the process and outcomes of a project to operationalize the 2003 VA/DoD Clinical Practice Guideline for Opioid Therapy for Chronic Non-Cancer Pain into a computerized decision support system (DSS) to encourage good opioid prescribing practices during primary care visits. We based the DSS on the existing ATHENA-DSS. We used an iterative process of design, testing, and revision of the DSS by a diverse team including guideline authors, medical informatics experts, clinical content experts, and end-users to convert the written clinical practice guideline into a computable algorithm that generates patient-specific recommendations for care based upon existing information in the electronic medical record (EMR), and into a set of clinical tools. Results: The iterative revision process identified numerous and varied problems with the initially designed system despite diverse expert participation in the design process. The process of operationalizing the guideline identified areas in which the guideline was vague, left decisions to clinical judgment, or required clarification of detail to ensure safe clinical implementation. The revisions led to workable solutions to problems, defined the limits of the DSS and its utility in clinical practice, improved integration into clinical workflow, and improved the clarity and accuracy of system recommendations and tools. Conclusions: Use of this iterative process led to the development of a multifunctional DSS that met the approval of the clinical practice guideline authors, content experts, and clinicians involved in testing. The process and experiences described provide a model for the development of other DSSs that translate written guidelines into actionable, real-time clinical recommendations.
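    To make concrete what turning a written guideline into a "computable algorithm" can involve, the sketch below encodes a few hypothetical prescribing checks as a function over EMR fields. The field names, thresholds, and recommendation texts are invented for illustration; they are not the actual content of the 2003 VA/DoD guideline or of the ATHENA-DSS.

```python
# Minimal sketch of guideline rules turned into computable checks.
# All field names, thresholds, and messages are hypothetical examples,
# not the actual VA/DoD guideline content or ATHENA-DSS logic.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List, Optional


@dataclass
class PatientRecord:
    morphine_equivalent_mg_per_day: float   # current opioid dose (hypothetical field)
    last_urine_drug_screen: Optional[date]  # date of the last screen, if any
    has_signed_opioid_agreement: bool


def recommendations(p: PatientRecord, today: date) -> List[str]:
    """Return patient-specific recommendation strings from EMR data."""
    recs: List[str] = []
    # Example rule 1: flag high doses for reassessment (threshold is illustrative).
    if p.morphine_equivalent_mg_per_day >= 120:
        recs.append("High opioid dose: consider reassessment or specialty consult.")
    # Example rule 2: periodic urine drug screening (interval is illustrative).
    if p.last_urine_drug_screen is None or (today - p.last_urine_drug_screen) > timedelta(days=365):
        recs.append("No urine drug screen in the past year: consider ordering one.")
    # Example rule 3: documentation requirement.
    if not p.has_signed_opioid_agreement:
        recs.append("No opioid treatment agreement on file: consider completing one.")
    return recs
```

    Even a toy version like this forces exact thresholds, look-back intervals, and documentation requirements to be pinned down, which is the kind of clarification the authors report having to make during the iterative revision process.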

    Validating the Time and Change test to screen for dementia in elderly Koreans

    BACKGROUND: We assessed the applicability of the Time and Change (T&C) test as an accurate and convenient means to screen for dementia in primary care and community settings. METHODS: The study group comprised 59 patients and 405 community participants, all of whom were aged 65 years and over. The time component of the T&C test evaluated the ability of a subject to comprehend clock hands indicating a time of 11:10, while the change component evaluated the ability of a subject to make 1,000 Won from a group of coins of smaller denominations (one 500, seven 100, and seven 50 Won coins). RESULTS: The T&C test had a sensitivity and specificity of 73.0% and 90.9%, respectively, and positive and negative predictive values of 93.1% and 66.7%, respectively. The test-retest and interobserver agreement rates were both 95% (κ = 0.91; test-retest interval, 24 hours). The association between the T&C test and the K-MMSE was modest but significant (r = 0.422, p < 0.001). The T&C test scores were not influenced by educational status. CONCLUSIONS: We conclude that the T&C test is useful as supplemental testing of important domains (e.g., calculation, conceptualization, visuospatial ability) alongside traditional measures such as the MMSE. However, because the T&C test is simple, rapid, and easy to use, it can be applied conveniently to elderly subjects by non-specialist personnel who receive training.
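    For readers who want to see how the reported statistics relate to one another, the sketch below computes sensitivity, specificity, and predictive values from a 2x2 confusion matrix. The counts are hypothetical and chosen only to show the mechanics of the formulas; in particular, the predictive values depend on the prevalence of dementia in the sample, so they cannot be read off from sensitivity and specificity alone.

```python
# Illustrative calculation of screening-test statistics from a 2x2 table.
# The counts below are hypothetical, not the study's data.
def screening_stats(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)  # P(test positive | disease)
    specificity = tn / (tn + fp)  # P(test negative | no disease)
    ppv = tp / (tp + fp)          # P(disease | test positive), prevalence-dependent
    npv = tn / (tn + fn)          # P(no disease | test negative), prevalence-dependent
    return sensitivity, specificity, ppv, npv


# Hypothetical counts chosen only to demonstrate the formulas.
sens, spec, ppv, npv = screening_stats(tp=73, fp=9, fn=27, tn=91)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} PPV={ppv:.1%} NPV={npv:.1%}")
```

    With these balanced hypothetical counts the predictive values come out differently from the study's 93.1% and 66.7%, which is expected: PPV and NPV shift with the proportion of demented subjects in the group being screened.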

    Measurement of the running of the QED coupling in small-angle Bhabha scattering at LEP

    Using the OPAL detector at LEP, the running of the effective QED coupling α(t) is measured for space-like momentum transfer from the angular distribution of small-angle Bhabha scattering. In an almost ideal QED framework, with very favourable experimental conditions, we obtain: Δα(-6.07 GeV^2) - Δα(-1.81 GeV^2) = (440 ± 58 ± 43 ± 30) × 10^-5, where the first error is statistical, the second is the experimental systematic and the third is the theoretical uncertainty. This agrees with current evaluations of α(t). The null hypothesis that α remains constant within the above interval of -t is excluded with a significance above 5σ. Similarly, our results are inconsistent at the level of 3σ with the hypothesis that only leptonic loops contribute to the running. This is currently the most significant direct measurement in which the running of α(t) is probed differentially within the measured t range.
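    As a reminder of what is being measured: the effective coupling can be written α(t) = α(0) / (1 - Δα(t)), so a measured difference Δα(t2) - Δα(t1) translates into a relative change of the coupling between the two momentum-transfer scales. The sketch below runs that arithmetic on the central value quoted above; the value assumed for Δα at the lower scale is a placeholder, since the abstract reports only the difference.

```python
# Illustrative conversion of the measured Delta-alpha difference into a
# relative change of the effective QED coupling between two t scales.
alpha0 = 1 / 137.035999        # fine-structure constant at t = 0

delta_alpha_low = 0.0          # PLACEHOLDER: Delta alpha at t1 = -1.81 GeV^2 is not
                               # given in the abstract; set to 0 only for illustration.
measured_difference = 440e-5   # Delta alpha(t2) - Delta alpha(t1), central value quoted above
delta_alpha_high = delta_alpha_low + measured_difference

alpha_low = alpha0 / (1 - delta_alpha_low)
alpha_high = alpha0 / (1 - delta_alpha_high)
print(f"relative increase of alpha between the two scales: {alpha_high / alpha_low - 1:.2%}")
# ~0.44%: the coupling grows with |t|, i.e. it "runs", which is what the
# 5-sigma exclusion of a constant alpha refers to.
```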

    A Measurement of Rb using a Double Tagging Method

    The fraction of Z → bb̄ events in hadronic Z decays has been measured by the OPAL experiment using the data collected at LEP between 1992 and 1995. The Z → bb̄ decays were tagged using displaced secondary vertices, and high momentum electrons and muons. Systematic uncertainties were reduced by measuring the b-tagging efficiency using a double tagging technique. Efficiency correlations between opposite hemispheres of an event are small, and are well understood through comparisons between real and simulated data samples. A value of Rb = 0.2178 ± 0.0011 ± 0.0013 was obtained, where the first error is statistical and the second systematic. The uncertainty on Rc, the fraction of Z → cc̄ events in hadronic Z decays, is not included in the errors. The dependence on Rc is ΔRb/Rb = -0.056 × ΔRc/Rc, where ΔRc is the deviation of Rc from the value 0.172 predicted by the Standard Model. The result for Rb agrees with the value of 0.2155 ± 0.0003 predicted by the Standard Model.
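    The idea behind the double tagging technique can be reduced to a two-equation system: if a hemisphere containing a b quark is tagged with efficiency eps_b, and backgrounds and hemisphere correlations are neglected, the single-tag and double-tag fractions are roughly f1 = eps_b * Rb and f2 = eps_b^2 * Rb, which can be solved for both eps_b and Rb using data alone. The sketch below illustrates this simplified algebra with invented input fractions; the published analysis additionally accounts for charm and light-quark backgrounds and for the small hemisphere correlations mentioned above.

```python
# Simplified illustration of the double-tag method: solve for the b-tag
# efficiency and Rb from the observed single- and double-tag fractions.
# Backgrounds and hemisphere correlations are ignored here; the published
# analysis corrects for both. Input fractions are made-up numbers.

f_single = 0.050   # fraction of hemispheres tagged (hypothetical)
f_double = 0.0115  # fraction of events with both hemispheres tagged (hypothetical)

# Model: f_single = eps_b * Rb,  f_double = eps_b**2 * Rb
eps_b = f_double / f_single      # tagging efficiency measured from data
Rb = f_single**2 / f_double      # Rb, independent of the simulated efficiency
print(f"eps_b = {eps_b:.3f}, Rb = {Rb:.4f}")
```

    The key point is that the b-tagging efficiency is measured from the data themselves rather than taken from simulation, which is how the systematic uncertainties were reduced.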

    Measurement of the B+ and B0 lifetimes and search for CP(T) violation using reconstructed secondary vertices

    The lifetimes of the B+ and B0 mesons, and their ratio, have been measured in the OPAL experiment using 2.4 million hadronic Z0 decays recorded at LEP. Z0 → bb̄ decays were tagged using displaced secondary vertices and high momentum electrons and muons. The lifetimes were then measured using well-reconstructed charged and neutral secondary vertices selected in this tagged data sample. The results are τ(B+) = 1.643 ± 0.037 ± 0.025 ps, τ(B0) = 1.523 ± 0.057 ± 0.053 ps, and τ(B+)/τ(B0) = 1.079 ± 0.064 ± 0.041, where in each case the first error is statistical and the second systematic. A larger data sample of 3.1 million hadronic Z0 decays has been used to search for CP and CPT violating effects by comparison of inclusive b and b̄ hadron decays. No evidence for such effects is seen. The CP violation parameter Re(ε_B) is measured to be Re(ε_B) = 0.001 ± 0.014 ± 0.003, and the fractional difference between b and b̄ hadron lifetimes is measured to be (Δτ/τ)_b = [τ(b hadron) - τ(b̄ hadron)]/τ(average) = -0.001 ± 0.012 ± 0.008.

    Usability evaluation of a clinical decision support tool for osteoporosis disease management

    Background: Osteoporosis affects over 200 million people worldwide at a high cost to healthcare systems. Although guidelines are available, patients are not receiving appropriate diagnostic testing or treatment. Findings from a systematic review of osteoporosis interventions and a series of focus groups were used to develop a functional multifaceted tool that can support clinical decision-making in osteoporosis disease management at the point of care. The objective of our study was to assess how well the prototype met functional goals and usability needs. Methods: We conducted a usability study for each component of the tool--the Best Practice Recommendation Prompt (BestPROMPT), the Risk Assessment Questionnaire (RAQ), and the Customised Osteoporosis Education (COPE) sheet--using the framework described by Kushniruk and Patel. All studies consisted of one-on-one sessions with a moderator using a standardised worksheet. Sessions were audio- and video-taped and transcribed verbatim. Data analysis consisted of a combination of qualitative and quantitative analyses. Results: In study 1, physicians liked that the BestPROMPT can provide customised recommendations based on risk factors identified from the RAQ. Barriers included lack of time to use the tool, the need to alter clinic workflow to enable point-of-care use, and that the tool may disrupt the real reason for the visit. In study 2, patients completed the RAQ in a mean of 6 minutes, 35 seconds. Of the 42 critical incidents, 60% were navigational and most occurred when the first nine participants were using the stylus pen; no critical incidents were observed with the last six participants, who used the touch screen. Patients thought that the RAQ questions were easy to read and understand, but they found it difficult to initiate the questionnaire. Suggestions for improvement included improving aspects of the interface and navigation. The results of study 3 showed that most patients were able to understand and describe sections of the COPE sheet, and all considered discussing the information with their physicians. Suggestions for improvement included simplifying the language and improving the layout. Conclusions: Findings from the three studies informed changes to the tool and confirmed the importance of usability testing on all end users to reduce errors, and as an important step in the development process of knowledge translation interventions.

    Search for the standard model Higgs boson at LEP


    Search for new phenomena in final states with an energetic jet and large missing transverse momentum in pp collisions at √s = 8 TeV with the ATLAS detector

    Results of a search for new phenomena in final states with an energetic jet and large missing transverse momentum are reported. The search uses 20.3 fb^-1 of √s = 8 TeV data collected in 2012 with the ATLAS detector at the LHC. Events are required to have at least one jet with pT > 120 GeV and no leptons. Nine signal regions are considered with increasing missing transverse momentum requirements between E_T^miss > 150 GeV and E_T^miss > 700 GeV. Good agreement is observed between the number of events in data and Standard Model expectations. The results are translated into exclusion limits on models with either large extra spatial dimensions, pair production of weakly interacting dark matter candidates, or production of very light gravitinos in a gauge-mediated supersymmetric model. In addition, limits on the production of an invisibly decaying Higgs-like boson leading to similar topologies in the final state are presented.
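    To make the selection logic concrete, the sketch below classifies events into inclusive signal regions of increasing missing transverse momentum. The jet-pT and lepton-veto requirements and the 150 GeV and 700 GeV endpoints are taken from the abstract; the intermediate thresholds are illustrative placeholders rather than the published values.

```python
# Sketch of the monojet-style selection: one hard jet, no leptons, and
# nine inclusive E_T^miss signal regions. Only the 150 GeV and 700 GeV
# endpoints are stated in the abstract; the intermediate thresholds here
# are illustrative placeholders, not necessarily the published ones.

MET_THRESHOLDS_GEV = [150, 200, 250, 300, 350, 400, 500, 600, 700]  # SR1..SR9


def passes_preselection(leading_jet_pt_gev: float, n_leptons: int) -> bool:
    """At least one jet with pT > 120 GeV and no leptons."""
    return leading_jet_pt_gev > 120 and n_leptons == 0


def signal_regions(leading_jet_pt_gev: float, n_leptons: int, met_gev: float) -> list:
    """Return the labels of all (inclusive) signal regions the event enters."""
    if not passes_preselection(leading_jet_pt_gev, n_leptons):
        return []
    return [f"SR{i + 1}" for i, thr in enumerate(MET_THRESHOLDS_GEV) if met_gev > thr]


# Example: an event with a 300 GeV leading jet, no leptons, and 420 GeV of E_T^miss
print(signal_regions(300.0, 0, 420.0))  # ['SR1', 'SR2', 'SR3', 'SR4', 'SR5', 'SR6']
```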