
    Syncope diagnosis at referral to a tertiary syncope unit: an in-depth analysis of the FAST II

    OBJECTIVE: A substantial number of patients with transient loss of consciousness (T-LOC) are referred to a tertiary syncope unit without a diagnosis. This study investigates the final diagnoses reached in patients who, on referral, were undiagnosed or inaccurately diagnosed in secondary care. METHODS: This study is an in-depth analysis of the recently published Fainting Assessment Study II (FAST II), a prospective cohort study in a tertiary syncope unit. The diagnosis at the tertiary syncope unit was established after history taking (phase 1) and autonomic function tests (phase 2), and confirmed after critical follow-up of 1.5-2 years with an adjudicated diagnosis (phase 3) by a multidisciplinary committee. Diagnoses suggested by the referring physician were considered the phase 0 diagnosis. We determined the accuracy of the phase 0 diagnosis by comparing it with the phase 3 diagnosis. RESULTS: 51% (134/264) of patients had no diagnosis on referral (phase 0); the remaining 49% (130/264) carried a diagnosis, but 80% of these (104/130) considered their condition unexplained. Among the patients undiagnosed at referral, three major causes of T-LOC were found: reflex syncope (69%), initial orthostatic hypotension (20%) and psychogenic pseudosyncope (13%) (sum > 100% due to cases with multiple causes). Referral diagnoses were either inaccurate or incomplete in 65% of the patients and were mainly revised at tertiary care assessment to reflex syncope, initial orthostatic hypotension or psychogenic pseudosyncope. A diagnosis of cardiac syncope at referral proved wrong in 17/18 patients. CONCLUSIONS: Syncope patients, diagnosed or undiagnosed in primary and secondary care and referred to a syncope unit, mostly suffer from reflex syncope, initial orthostatic hypotension or psychogenic pseudosyncope. These causes of T-LOC do not necessarily require ancillary tests but can be diagnosed by careful history taking. Besides access to a network of specialized syncope units, simple interventions such as guideline-based structured evaluation, proper risk stratification and critical follow-up may reduce diagnostic delay and improve diagnostic accuracy for syncope.

    Developing more generalizable prediction models from pooled studies and large clustered data sets.

    Prediction models often yield inaccurate predictions for new individuals. Large data sets from pooled studies or electronic healthcare records may alleviate this through increased sample size and variability in sample characteristics. However, existing strategies for prediction model development generally do not account for heterogeneity in predictor-outcome associations between different settings and populations. This limits the generalizability of the developed models (even those from large, combined, clustered data sets) and necessitates local revisions. We aim to develop methodology for producing prediction models that require less tailoring to different settings and populations. We adopt internal-external cross-validation to assess and reduce heterogeneity in a model's predictive performance during development. We propose a predictor selection algorithm that optimizes the (weighted) average performance while minimizing its variability across the hold-out clusters (or studies). Predictors are added iteratively until the estimated generalizability is optimized. We illustrate this by developing a model for predicting the risk of atrial fibrillation and by updating an existing one for diagnosing deep vein thrombosis, using individual participant data from 20 cohorts (N = 10 873) and 11 diagnostic studies (N = 10 014), respectively. Meta-analysis of calibration and discrimination performance in each hold-out cluster shows that trade-offs between the average performance and its heterogeneity occurred. Our methodology enables the assessment of heterogeneity in prediction model performance during model development on multiple or clustered data sets, thereby informing researchers' predictor selection so as to improve generalizability to different settings and populations and reduce the need for model tailoring. Our methodology has been implemented in the R package metamisc.
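    The selection procedure described above can be sketched in a few lines. This is a minimal illustration of the internal-external cross-validation idea on synthetic data, not the authors' implementation (their method is in the R package metamisc): fit on all clusters but one, score the hold-out, and greedily add the predictor that best improves the mean performance penalised by its spread across clusters. All data and the penalty weight are made up for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic clustered data: 4 "studies", 3 candidate predictors,
    # only the first two truly relate to the outcome (all hypothetical).
    n_clusters, n_per, n_pred = 4, 100, 3
    clusters = np.repeat(np.arange(n_clusters), n_per)
    X = rng.normal(size=(n_clusters * n_per, n_pred))
    y = 1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(size=len(X))

    def iecv_score(cols, lam=1.0):
        """Internal-external CV: fit on all clusters but one, score the
        hold-out cluster; return mean performance minus a spread penalty."""
        perfs = []
        for c in range(n_clusters):
            tr, te = clusters != c, clusters == c
            A = np.column_stack([np.ones(tr.sum()), X[tr][:, cols]])
            beta, *_ = np.linalg.lstsq(A, y[tr], rcond=None)
            B = np.column_stack([np.ones(te.sum()), X[te][:, cols]])
            pred = B @ beta
            ss_res = np.sum((y[te] - pred) ** 2)
            ss_tot = np.sum((y[te] - y[te].mean()) ** 2)
            perfs.append(1 - ss_res / ss_tot)       # hold-out R^2
        perfs = np.array(perfs)
        return perfs.mean() - lam * perfs.std()     # trade-off objective

    # Greedy forward selection until the objective stops improving.
    selected, best = [], -np.inf
    while True:
        cands = [c for c in range(n_pred) if c not in selected]
        if not cands:
            break
        scores = {c: iecv_score(selected + [c]) for c in cands}
        c_best = max(scores, key=scores.get)
        if scores[c_best] <= best:
            break
        selected.append(c_best)
        best = scores[c_best]

    print(selected)  # the informative predictors are retained
    ```

    The paper works with calibration and discrimination metrics for clinical outcomes; hold-out R^2 stands in here only to keep the sketch self-contained.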

    Cognitive-psychoeducational group intervention for inpatients with depressive disorders – results of a prospective pilot study

    Background: Psychoeducational interventions that provide disorder-related information in a goal-oriented and structured manner have been integrated into psychiatric and psychotherapeutic approaches. The present cognitive-psychoeducational group programme for inpatients with affective disorders is based on a multidimensional functional illness concept covering aspects of vulnerability, stressors and coping strategies. It provides information about the disorder and its treatment options, the building up of rewarding activities, cognitive restructuring and relapse prevention. Materials and Methods: The programme was developed and modified at the Department of Psychiatry of the University of Munich (LMU). A feasibility study was set up in a single-group follow-up design and analyses of variance (ANOVAs) were performed. A total of 231 patients participated in 46 groups. Results: 125 patients evaluated the effectiveness of the programme and its treatment strategies. The group programme was widely accepted among patients who were being treated pharmacologically and psychotherapeutically: more than three quarters of the patients rated its contents as informative, helpful and applicable to everyday living. Conclusions: Inpatients with affective disorders may benefit from a structured group programme if it takes their cognitive and motivational deficits into account. The group leaders' didactic and psychotherapeutic strategies, as well as the patients' exchange of ideas with each other, play a central role. In the course of further investigations, the programme was differentiated for patients with major depression or bipolar disorders.

    Discovering patterns in drug-protein interactions based on their fingerprints

    Background: Discovering interesting patterns in drug-protein interaction data at the molecular level can reveal hidden relationships among drugs and proteins and can therefore be of paramount importance for applications such as drug design. To discover such patterns, we propose a computational approach to analyze the molecular data of drugs and proteins that are known to interact with each other. Specifically, we propose a data mining technique called Drug-Protein Interaction Analysis (D-PIA) to determine whether there are any commonalities in the fingerprints of the substructures of interacting drug and protein molecules and, if so, whether any patterns can be generalized from them. Method: Given a database of drug-protein interactions, D-PIA performs its tasks in several steps. First, for each drug in the database, the fingerprints of its molecular substructures are obtained. Second, for each protein in the database, the fingerprints of its protein domains are obtained. Third, based on known interactions between drugs and proteins, an interdependency measure between the fingerprint of each drug substructure and each protein domain is computed. Fourth, based on the interdependency measure, drug substructures and protein domains that are significantly interdependent are identified. Fifth, the existence of an interaction between a previously unknown drug-protein pair is predicted based on its constituent substructures that are significantly interdependent. Results: To evaluate the effectiveness of D-PIA, we have tested it with real drug-protein interaction data, including enzymes, ion channels, and G-protein-coupled receptors. Experimental results show that there are indeed discoverable patterns in the interdependency relationship between drug substructures and protein domains of interacting drugs and proteins. Based on these relationships, a test set of drug-protein data was used to see whether D-PIA can correctly predict the existence of interaction between drug-protein pairs. The results show that the prediction accuracy can be high: the AUC of the ROC plot reached as high as 75%, which shows the effectiveness of the classifier. Conclusions: D-PIA has the advantage of performing its tasks effectively based on the fingerprints of drug and protein molecules, without requiring any 3D information about their structures, and it is therefore very fast to compute. D-PIA has been tested with real drug-protein interaction data, and experimental results show that it can be very useful for predicting previously unknown drug-protein as well as protein-ligand interactions. It can also be used to tackle problems such as ligand specificity, which relates directly and indirectly to drug design and discovery.
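    The five steps can be illustrated on toy data. The abstract does not specify D-PIA's exact interdependency measure, so this sketch substitutes a simple co-occurrence lift (observed versus independence-expected counts); all fingerprints and interaction pairs below are hypothetical.

    ```python
    import numpy as np

    # Toy binary fingerprints (hypothetical): rows = molecules,
    # columns = substructure / domain indicators.
    drug_fp = np.array([[1, 0, 1],
                        [1, 1, 0],
                        [0, 1, 1]])
    prot_fp = np.array([[1, 0],
                        [1, 1],
                        [0, 1]])
    # Known interacting (drug index, protein index) pairs.
    pairs = [(0, 0), (1, 0), (2, 1)]

    n_sub, n_dom = drug_fp.shape[1], prot_fp.shape[1]

    # Steps 1-3: count co-occurrence of each (substructure, domain)
    # combination over the known interactions.
    co = np.zeros((n_sub, n_dom))
    sub_tot = np.zeros(n_sub)
    dom_tot = np.zeros(n_dom)
    for d, p in pairs:
        co += np.outer(drug_fp[d], prot_fp[p])
        sub_tot += drug_fp[d]
        dom_tot += prot_fp[p]

    # Interdependency stand-in: observed co-occurrence divided by the
    # count expected if substructures and domains were independent.
    n = len(pairs)
    expected = np.outer(sub_tot, dom_tot) / n
    lift = np.where(expected > 0, co / np.where(expected > 0, expected, 1), 0.0)

    # Step 5: score an unseen pair by the mean lift over the
    # (substructure, domain) combinations it actually contains.
    def pair_score(d, p):
        mask = np.outer(drug_fp[d], prot_fp[p]).astype(bool)
        return lift[mask].mean() if mask.any() else 0.0

    print(round(pair_score(0, 1), 3))  # 0.875
    ```

    In practice a statistical test would flag which (substructure, domain) cells are *significantly* interdependent before scoring, as the paper's fourth step describes.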

    Twelve tips for integrating massive open online course content into classroom teaching

    Massive open online courses (MOOCs) are a novel and emerging mode of online learning. They offer the advantages of online learning and provide content including short video lectures, digital readings, interactive assignments, discussion fora, and quizzes. Besides stand-alone use, universities are also trying to integrate MOOC content into the regular curriculum, creating blended learning programs. In this twelve tips article, we aim to provide guidelines, based on the literature and our own experiences, for integrating MOOC content from your own or other institutions into regular classroom teaching. We provide advice on how to select the right content, how to assess its quality and usefulness, and how to actually create a blend within your existing course.

    Evaluation design for a complex intervention program targeting loneliness in non-institutionalized elderly Dutch people

    Background: The aim of this paper is to provide the rationale for an evaluation design for a complex intervention program targeting loneliness among non-institutionalized elderly people in a Dutch community. Complex public health interventions characteristically combine intervening at the individual level and at the environmental level. It is assumed that the components of a complex intervention interact with and reinforce each other. Furthermore, implementation is highly context-specific and its impact is influenced by external factors. Although the entire community is exposed to the intervention, each individual is exposed to different components with different intensity. Methods/Design: A logic model of change is used to develop the evaluation design. The model describes what outcomes may logically be expected at different points in time at the individual level. To address the complexity of a real-life setting, the evaluation design of the loneliness intervention comprises two types of evaluation studies. The first uses a quasi-experimental pre-test post-test design to evaluate the effectiveness of the overall intervention. A control community comparable to the intervention community was selected, with baseline measurements in 2008 and follow-up measurements scheduled for 2010. This study focuses on changes in the prevalence of loneliness and in the determinants of loneliness within individuals in the general elderly population. Complementarily, the second study is designed to evaluate the individual intervention components and focuses on delivery, reach, acceptance, and short-term outcomes. Project records and surveys among participants are used to collect these data. Discussion: Combining these two evaluation strategies makes it possible to assess the effectiveness of the overall complex intervention as well as the contribution of the individual intervention components.
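    The first study's pre-test post-test comparison against a control community amounts to a difference-in-differences contrast. A minimal sketch with made-up prevalence figures (all numbers hypothetical, and valid only under the usual parallel-trends assumption):

    ```python
    # Hypothetical loneliness prevalence in the intervention and control
    # communities at baseline (2008) and follow-up (2010).
    prev = {
        ("intervention", 2008): 0.42, ("intervention", 2010): 0.36,
        ("control",      2008): 0.40, ("control",      2010): 0.39,
    }

    # Difference-in-differences: the change in the intervention community
    # minus the change in the control community, so that secular trends
    # shared by both communities cancel out.
    did = (prev[("intervention", 2010)] - prev[("intervention", 2008)]) \
        - (prev[("control", 2010)] - prev[("control", 2008)])
    print(round(did, 2))  # -0.05: a 5-point net drop in prevalence
    ```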

    Circulating markers of arterial thrombosis and late-stage age-related macular degeneration: a case-control study.

    PURPOSE: The aim of this study was to examine the relation of late-stage age-related macular degeneration (AMD) with markers of systemic atherothrombosis. METHODS: A hospital-based case-control study of AMD was undertaken in London, UK. Cases of AMD (n=81) and controls (n=77) were group-matched for age and sex. Standard protocols were used for colour fundus photography and to classify AMD; physical examination included height, weight, history of or treatment for vascular-related diseases, and smoking status. Blood samples were taken for measurement of fibrinogen, factor VIIc (FVIIc), factor VIIIc, prothrombin fragment F1.2 (F1.2), tissue plasminogen activator, and von Willebrand factor. Odds ratios from logistic regression analyses of each atherothrombotic marker with AMD were adjusted for age, sex, and established cardiovascular disease risk factors, including smoking, blood pressure, body mass index, and total cholesterol. RESULTS: After adjustment, FVIIc and possibly F1.2 were inversely associated with the risk of AMD; per 1 standard deviation increase in these markers the odds ratios were, respectively, 0.62 (95% confidence interval 0.40, 0.95) and 0.71 (0.46, 1.09). None of the other atherothrombotic risk factors appeared to be related to AMD status. There was weak evidence that aspirin use is associated with a lower risk of AMD. CONCLUSIONS: This study does not provide strong evidence of associations between AMD and systemic markers of arterial thrombosis, but the potential effects of FVIIc and F1.2 are worthy of further investigation.
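    Odds ratios "per 1 standard deviation increase" are obtained by rescaling the logistic regression coefficient by the marker's SD before exponentiating; the confidence bounds rescale the same way. A minimal sketch, with illustrative values chosen only to land near the FVIIc estimate above (not the study's actual fitted coefficients):

    ```python
    import math

    # Hypothetical fitted values: coefficient for a marker (per 1 unit),
    # its standard error, and the marker's standard deviation.
    beta, se, sd = -0.016, 0.0075, 30.0   # all numbers illustrative

    # Scale by the SD, then exponentiate to get the odds ratio per 1 SD.
    or_per_sd = math.exp(beta * sd)
    lo = math.exp((beta - 1.96 * se) * sd)
    hi = math.exp((beta + 1.96 * se) * sd)
    print(round(or_per_sd, 2), round(lo, 2), round(hi, 2))  # 0.62 0.4 0.96
    ```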