1,417 research outputs found

    Digital Pharmacovigilance: the medwatcher system for monitoring adverse events through automated processing of internet social media and crowdsourcing

    Thesis (Ph.D.)--Boston University
    Half of Americans take a prescription drug, medical devices are in broad use, and population coverage for many vaccines is over 90%. Nearly all medical products carry a risk of adverse events (AEs), sometimes severe. However, pre-approval trials use small populations and exclude participants by specific criteria, making them insufficient to determine the risks of a product as used in the population. Existing post-marketing reporting systems are critical, but suffer from underreporting. Meanwhile, recent years have seen an explosion in the adoption of Internet services and smartphones. MedWatcher is a new system that harnesses emerging technologies for pharmacovigilance in the general population. MedWatcher consists of two components: a text-processing module, MedWatcher Social, and a crowdsourcing module, MedWatcher Personal. With the natural language processing component, we acquire public data from the Internet, apply classification algorithms, and extract AE signals. With the crowdsourcing application, we provide software allowing consumers to submit AE reports directly. Our MedWatcher Social algorithm for identifying symptoms performs with 77% precision and 88% recall on a sample of Twitter posts. Our machine learning algorithm for identifying AE-related posts performs with 68% precision and 89% recall on a labeled Twitter corpus. For zolpidem tartrate, certolizumab pegol, and dimethyl fumarate, we compared AE profiles from Twitter with reports from the FDA spontaneous reporting system. We find some concordance (Spearman's rho = 0.85, 0.77, and 0.82, respectively, for symptoms at the MedDRA System Organ Class level). Where the sources differ, milder effects are overrepresented in Twitter. We also compared post-marketing profiles with trial results and found little concordance. MedWatcher Personal saw substantial user adoption, receiving 550 AE reports in a one-year period, including over 400 for one device, Essure. We categorized 400 Essure reports by symptom, compared them to 129 reports from the FDA spontaneous reporting system, and found high concordance (rho = 0.65) at MedDRA Preferred Term granularity. We also compared Essure Twitter posts with MedWatcher and FDA reports, and found rho = 0.25 and 0.31, respectively. MedWatcher represents a novel pharmacoepidemiology surveillance informatics system; our analysis is the first to compare AEs across social media, direct reporting, FDA spontaneous reports, and pre-approval trials.
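    The concordance figures quoted above are Spearman rank correlations between symptom frequency profiles from two sources. As a minimal illustration of the statistic only (not the authors' code; the symptom counts below are invented), Spearman's rho is the Pearson correlation computed on the ranks, with ties sharing an average rank:

```python
def ranks(values):
    """Average ranks (1-based); tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1  # extend the run of tied values
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical symptom counts per MedDRA System Organ Class from two sources:
twitter_counts = [120, 45, 30, 10, 5]
faers_counts = [80, 60, 25, 3, 12]
rho = spearman_rho(twitter_counts, faers_counts)
```

    Because only ranks matter, the statistic is insensitive to the large differences in reporting volume between Twitter and the FDA system.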

    Are community-based nurse-led self-management support interventions effective in chronic patients? Results of a systematic review and meta-analysis

    The expansion of primary care and community-based service delivery systems is intended to meet emerging needs, reduce the costs of hospital-based ambulatory care, and prevent avoidable hospital use through the provision of more appropriate care. Great emphasis has been placed on the role of self-management in the complex process of caring for patients with long-term conditions. Several studies have found that, among health professionals, nurses are particularly well placed to promote health and deliver preventive programs within the primary care context. The aim of this systematic review and meta-analysis is to assess the efficacy of nurse-led self-management support versus usual care, evaluating patient outcomes in chronic care community programs. A systematic review was carried out in MEDLINE, CINAHL, Scopus, and Web of Science, including RCTs of nurse-led self-management support interventions performed to improve observer-reported outcomes (OROs) and patient-reported outcomes (PROs), with any method of communication exchange or education, in a community setting, on patients >18 years of age with a diagnosis of chronic disease or multi-morbidity. Of the 7,279 papers initially retrieved, 29 met the inclusion criteria. Meta-analyses on systolic (SBP) and diastolic (DBP) blood pressure reduction (10 studies, 3,881 patients) and HbA1c reduction (7 studies, 2,669 patients) were carried out. The pooled MDs were: SBP -3.04 (95% CI -5.01 to -1.06), DBP -1.42 (95% CI -1.42 to -0.49), and HbA1c -0.15 (95% CI -0.32 to 0.01), in favor of the experimental groups. Subgroup meta-analyses showed, among others, a statistically significant effect if the interventions were delivered to patients with diabetes (SBP) or CVD (DBP), if the nurses were specifically trained, if the studies had a sample size higher than 200 patients, and if the allocation concealment was not clearly defined. Effects on other OROs and PROs, as well as quality of life, remain inconclusive.
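    The pooled mean differences above come from standard meta-analytic pooling. A minimal sketch of fixed-effect inverse-variance pooling (the abstract does not state which model the authors used, and the per-study values below are invented for illustration):

```python
def pooled_md(mds, ses, z=1.96):
    """Fixed-effect inverse-variance pooling of per-study mean differences.

    Each study is weighted by 1/SE^2; the pooled standard error is
    sqrt(1 / sum of weights), and the 95% CI uses the normal z value.
    """
    weights = [1.0 / se ** 2 for se in ses]
    total = sum(weights)
    md = sum(w * m for w, m in zip(weights, mds)) / total
    se = (1.0 / total) ** 0.5
    return md, (md - z * se, md + z * se)

# Hypothetical per-study SBP mean differences (mmHg) and standard errors:
md, (lo, hi) = pooled_md([-2.0, -4.0, -3.5], [1.0, 1.0, 2.0])
```

    Note that with this weighting, a small precise study can dominate a larger but noisier one; a random-effects model would additionally widen the interval to reflect between-study heterogeneity.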

    EsPRESSo: Efficient Privacy-Preserving Evaluation of Sample Set Similarity

    Electronic information is increasingly shared among entities without complete mutual trust. To address the related security and privacy issues, a few cryptographic techniques have emerged that support privacy-preserving information sharing and retrieval. One interesting open problem in this context involves two parties that need to assess the similarity of their datasets, but are reluctant to disclose their actual content. This paper presents an efficient and provably-secure construction supporting the privacy-preserving evaluation of sample set similarity, where similarity is measured as the Jaccard index. We present two protocols: the first securely computes the (Jaccard) similarity of two sets, and the second approximates it, using MinHash techniques, with lower complexities. We show that our novel protocols are attractive in many compelling applications, including document/multimedia similarity, biometric authentication, and genetic tests. In the process, we demonstrate that our constructions are appreciably more efficient than prior work.
    Comment: A preliminary version of this paper was published in the Proceedings of the 7th ESORICS International Workshop on Digital Privacy Management (DPM 2012). This is the full version, appearing in the Journal of Computer Security.
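    The paper's contribution is computing these similarities under cryptographic privacy guarantees; the sketch below shows only the underlying (non-private) MinHash approximation of the Jaccard index that the second protocol builds on. The salted-hash construction and the sample sets are illustrative assumptions, not the paper's protocol:

```python
import hashlib

def _h(seed, item):
    """Deterministic salted hash of an item (md5 digest truncated to 64 bits)."""
    digest = hashlib.md5(f"{seed}:{item}".encode()).hexdigest()
    return int(digest[:16], 16)

def minhash_signature(items, num_hashes=128):
    """One minimum per salted hash function; the signature summarizes the set."""
    return [min(_h(seed, x) for x in items) for seed in range(num_hashes)]

def estimated_jaccard(sig_a, sig_b):
    """P(min-hashes agree) equals the Jaccard index, so the match rate estimates it."""
    matches = sum(a == b for a, b in zip(sig_a, sig_b))
    return matches / len(sig_a)

def exact_jaccard(a, b):
    return len(a & b) / len(a | b)

A = {"alice", "bob", "carol", "dave"}
B = {"carol", "dave", "erin", "frank"}
sig_a = minhash_signature(A)
sig_b = minhash_signature(B)
```

    The estimator's standard deviation shrinks as 1/sqrt(k) in the number k of hash functions, which is what lets the approximate protocol trade accuracy for lower complexity.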

    Automating Systematic Reviews


    Building information modeling – A game changer for interoperability and a chance for digital preservation of architectural data?

    Digital data associated with the architectural design-and-construction process is an essential resource alongside, and even beyond, the lifecycle of the construction object it describes. Despite this, digital architectural data remains largely neglected in digital preservation research, and vice versa, digital preservation is so far neglected in the design-and-construction process. In the last 5 years, Building Information Modeling (BIM) has seen growing adoption in the architecture and construction domains, marking a large step towards much-needed interoperability. The open standard IFC (Industry Foundation Classes) is one way in which data is exchanged in BIM processes. This paper presents a first digital-preservation perspective on BIM processes, highlighting the history and adoption of the methods as well as the open file format standard IFC as one way to store and preserve BIM data.

    Hospital-onset COVID-19 infection surveillance systems: a systematic review

    Hospital-onset COVID-19 infections (HOCIs) are associated with excess morbidity and mortality in patients and healthcare workers. The aim of this review was to explore and describe the current literature on HOCI surveillance. Medline, EMBASE, the Cochrane Database of Systematic Reviews, the Cochrane Register of Controlled Trials, and MedRxiv were searched up to 30 November 2020 using broad search criteria. Articles describing HOCI surveillance systems were included. Data describing HOCI definitions, HOCI incidence, types of HOCI identification surveillance systems, and level of system implementation were extracted. A total of 292 citations were identified. Nine studies on HOCI surveillance were included. Six studies reported on the proportion of HOCI among hospitalized COVID-19 patients, which ranged from 0 to 15.2%. Six studies provided HOCI case definitions. Standardized national definitions provided by the UK and US governments were identified. Four studies included healthcare workers in the surveillance. One study articulated a multimodal strategy of infection prevention and control practices including HOCI surveillance. All identified HOCI surveillance systems were implemented at the institutional level, with eight studies covering all hospital inpatients and one study focusing on patients in the emergency department. Multiple types of surveillance were identified. Four studies reported automated surveillance, of which one included real-time analysis and one included genomic data. Overall, study quality was limited by the observational nature of the studies and their short follow-up periods. In conclusion, HOCI case definitions and surveillance methods were developed pragmatically. Whilst standardized case definitions and surveillance systems are ideal for integration with existing routine surveillance activities and adoption in different settings, we acknowledge the difficulties in establishing such standards in the short term.

    Generation and Applications of Knowledge Graphs in Systems and Networks Biology

    The acceleration in the generation of data in the biomedical domain has necessitated the use of computational approaches to assist in its interpretation. However, these approaches rely on the availability of high-quality, structured, formalized biomedical knowledge. This thesis has two goals: to improve methods for curation and semantic data integration in order to generate high-granularity biological knowledge graphs, and to develop novel methods for using prior biological knowledge to propose new biological hypotheses. The first two publications describe an ecosystem for handling biological knowledge graphs encoded in the Biological Expression Language throughout the stages of curation, visualization, and analysis. The next two publications describe the reproducible acquisition and integration of high-granularity knowledge with low contextual specificity from structured biological data sources on a massive scale, and support the semi-automated curation of new content at high speed and precision. After building the ecosystem and acquiring content, the last three publications in this thesis demonstrate three different applications of biological knowledge graphs in modeling and simulation. The first demonstrates the use of agent-based modeling to simulate neurodegenerative disease biomarker trajectories using biological knowledge graphs as priors. The second applies network representation learning to prioritize nodes in biological knowledge graphs based on corresponding experimental measurements, in order to identify novel targets. Finally, the third uses biological knowledge graphs and develops algorithms to deconvolute the mechanism of action of drugs, which could also serve to identify drug repositioning candidates. Ultimately, this thesis lays the groundwork for production-level applications of drug repositioning algorithms and other knowledge-driven approaches to analyzing biomedical experiments.

    When Silver Is As Good As Gold: Using Weak Supervision to Train Machine Learning Models on Social Media Data

    Over the last decade, advances in machine learning have led to an exponential growth in artificial intelligence, i.e., machine learning models capable of learning from vast amounts of data to perform tasks such as text classification, regression, machine translation, speech recognition, and many others. While massive volumes of data are available, due to the manual curation involved in generating training datasets, only a fraction of the data is used to train machine learning models. The process of labeling data with ground-truth values is extremely tedious and expensive, and is the major bottleneck of supervised learning. To curtail this, the theory of noisy learning can be employed: data labeled through heuristics, knowledge bases, and weak classifiers can be used for training instead of data obtained through manual annotation. The assumption is that a large volume of training data that contains noise and is acquired through an automated process can compensate for the lack of manual labels. In this study, we utilize heuristic-based approaches to create noisy silver standard datasets. We extensively tested the theory of noisy learning on four different applications by training several machine learning models on silver standard datasets with several sample sizes and class imbalances, and evaluated performance against a gold standard dataset. Our evaluations on the four applications indicate the success of silver standard datasets in approximating a gold standard dataset. We conclude the study with evidence that noisy social media data can be utilized for weak supervision.
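    A minimal sketch of the silver-standard idea described above, assuming simple keyword heuristics as the weak labelers. All rule names, keywords, and example posts here are invented for illustration, not drawn from the study:

```python
def lf_mentions_negation(text):
    """Weak labeler: an explicit negation phrase suggests the negative class."""
    return 0 if "no side effects" in text.lower() else None  # None = abstain

def lf_mentions_side_effect(text):
    """Weak labeler: a symptom keyword suggests the positive class."""
    keywords = ("nausea", "headache", "dizzy")
    return 1 if any(w in text.lower() for w in keywords) else None

# Heuristics are applied in priority order; the first non-abstaining vote wins.
LABELING_FUNCTIONS = [lf_mentions_negation, lf_mentions_side_effect]

def silver_label(text):
    """Produce a noisy 'silver' label without any manual annotation."""
    for lf in LABELING_FUNCTIONS:
        vote = lf(text)
        if vote is not None:
            return vote
    return None  # all heuristics abstain -> exclude from the training set

posts = [
    "this drug gave me a terrible headache",
    "no side effects so far, feeling great",
    "just picked up my prescription",
]
labels = [silver_label(p) for p in posts]
```

    In the noisy-learning setting, the labels produced this way are imperfect, but because they can be generated over arbitrarily large corpora, the resulting model trained on them can approach one trained on a much smaller manually labeled gold standard.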