Digital Pharmacovigilance: the MedWatcher system for monitoring adverse events through automated processing of Internet social media and crowdsourcing
Thesis (Ph.D.)--Boston University. Half of Americans take a prescription drug, medical devices are in broad use, and population coverage for many vaccines is over 90%. Nearly all medical products carry risk of adverse events (AEs), sometimes severe. However, pre-approval trials use small populations and exclude participants by specific criteria, making them insufficient to determine the risks of a product as used in the population. Existing post-marketing reporting systems are critical, but suffer from underreporting. Meanwhile, recent years have seen an explosion in adoption of Internet services and smartphones. MedWatcher is a new system that harnesses emerging technologies for pharmacovigilance in the general population. MedWatcher consists of two components: a text-processing module,
MedWatcher Social, and a crowdsourcing module, MedWatcher Personal. With the natural language processing component, we acquire public data from the Internet, apply classification algorithms, and extract AE signals. With the crowdsourcing application, we provide software allowing consumers to submit AE reports directly.
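The text-processing pattern described here (collect posts, vectorize, classify as AE-related or not) can be pictured with a small sketch. The snippet below is not the MedWatcher implementation; it is a minimal, hypothetical example of the same pipeline using scikit-learn, with toy posts and labels standing in for a real labeled corpus.

```python
# Minimal sketch of an AE-post classifier in the spirit described above.
# NOT the MedWatcher code; the corpus and labels are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import precision_score, recall_score

posts = ["this med gave me a pounding headache", "loving my new phone"]
labels = [1, 0]  # 1 = adverse-event related, 0 = not

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(posts, labels)

pred = clf.predict(posts)
print(precision_score(labels, pred), recall_score(labels, pred))
```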
Our MedWatcher Social algorithm for identifying symptoms performs with 77% precision and 88% recall on a sample of Twitter posts. Our machine learning algorithm for identifying AE-related posts performs with 68% precision and 89% recall on a labeled Twitter corpus. For zolpidem tartrate, certolizumab pegol, and dimethyl fumarate, we compared AE profiles from Twitter with reports from the FDA spontaneous reporting system. We find some concordance (Spearman's rho = 0.85, 0.77, and 0.82, respectively, for symptoms at the MedDRA System Organ Class level). Where the sources differ, milder effects are overrepresented in Twitter. We also compared post-marketing profiles with trial results and found little concordance.
MedWatcher Personal saw substantial user adoption, receiving 550 AE reports in a one-year period, including over 400 for one device, Essure. We categorized 400 Essure reports by symptom, compared them to 129 reports from the FDA spontaneous reporting system, and found high concordance (rho = 0.65) at MedDRA Preferred Term granularity. We also compared Essure Twitter posts with MedWatcher and FDA reports, and found rho = 0.25 and 0.31, respectively.
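The concordance figures above are rank correlations between symptom frequency profiles from two reporting sources. A minimal sketch of that comparison follows; the counts and symptom terms are made up purely for illustration.

```python
# Sketch: rank-correlate symptom counts from two reporting sources.
# Counts and term names below are invented for illustration only.
from scipy.stats import spearmanr

terms = ["headache", "nausea", "insomnia", "rash", "fatigue"]
twitter_counts = [120, 80, 300, 15, 95]  # hypothetical Twitter mentions per term
fda_counts     = [40, 60, 210, 30, 50]   # hypothetical FDA report counts per term

rho, p = spearmanr(twitter_counts, fda_counts)
print(f"Spearman's rho = {rho:.2f} (p = {p:.3f})")
```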
MedWatcher represents a novel pharmacoepidemiology surveillance informatics system; our analysis is the first to compare AEs across social media, direct reporting, FDA spontaneous reports, and pre-approval trials.
Comparison of single and multiphase tracer test results from the Frio CO2 Pilot Study, Dayton, Texas
Bureau of Economic Geology
Model Transport: Towards Scalable Transfer Learning on Manifolds
We consider the intersection of two research fields: transfer learning and statistics on manifolds. In particular, we consider, for manifold-valued data, transfer learning of tangent-space models such as Gaussian distributions, PCA, regression, or classifiers. Though one would hope to simply use ordinary R^n transfer-learning ideas, the manifold structure prevents it. We overcome this by basing our method on inner-product-preserving parallel transport, a well-known tool widely used in other statistics-on-manifolds problems in computer vision. At first, this straightforward idea seems to suffer from an obvious shortcoming: transporting large datasets is prohibitively expensive, hindering scalability. Fortunately, with our approach, we never transport data. Rather, we show how the statistical models themselves can be transported, and prove that for the tangent-space models above, the transport "commutes" with learning. Consequently, our compact framework, applicable to a large class of manifolds, is not restricted by the size of either the training or test sets. We demonstrate the approach by transferring PCA and logistic-regression models of real-world data involving 3D shapes and image descriptors.
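The key tool is easy to state concretely on a simple manifold. The sketch below is a toy example of my own, not the paper's code: it parallel-transports tangent vectors along a geodesic on the unit sphere S^2 and checks that inner products are preserved. Transporting a tangent-space model such as PCA then amounts to transporting its basis vectors this way, rather than the data.

```python
# Toy example: inner-product-preserving parallel transport on the unit sphere.
# Not the paper's code; it illustrates moving a model's basis, not the data.
import numpy as np

def transport(p, q, v):
    """Parallel-transport tangent vector v at p to q along the geodesic."""
    cos_t = np.clip(p @ q, -1.0, 1.0)
    theta = np.arccos(cos_t)
    if theta < 1e-12:
        return v.copy()
    e = (q - cos_t * p) / np.sin(theta)  # unit tangent at p pointing toward q
    a = e @ v                            # in-plane component of v
    # The component along e rotates with the geodesic; the rest is unchanged.
    return v + a * ((cos_t - 1.0) * e - np.sin(theta) * p)

p = np.array([0.0, 0.0, 1.0])
q = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.0, 0.0])  # tangent vector at p (orthogonal to p)
w = np.array([0.0, 1.0, 0.0])  # another tangent vector at p

tu, tw = transport(p, q, u), transport(p, q, w)
print(np.allclose(u @ w, tu @ tw), np.allclose(tu @ q, 0.0))  # True True
```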
Multiple Sclerosis Lesion Detection Using Constrained GMM and Curve Evolution
This paper focuses on the detection and segmentation of Multiple Sclerosis (MS) lesions in magnetic resonance (MRI) brain images. To capture the complex tissue spatial layout, a probabilistic model termed Constrained Gaussian Mixture Model (CGMM) is proposed based on a mixture of multiple spatially oriented Gaussians per tissue. The intensity of a tissue is considered a global parameter and is constrained, by a parameter-tying scheme, to be the same value for the entire set of Gaussians that are related to the same tissue. MS lesions are identified as outlier Gaussian components and are grouped to form a new class in addition to the healthy tissue classes. A probability-based curve evolution technique is used to refine the delineation of lesion boundaries. The proposed CGMM-CE algorithm is used to segment 3D MRI brain images with an arbitrary number of channels. The CGMM-CE algorithm is automated and does not require an atlas for initialization or parameter learning. Experimental results on both standard brain MRI simulation data and real data indicate that the proposed method outperforms previously suggested approaches, especially for highly noisy data.
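The parameter-tying idea can be shown in one dimension. The EM sketch below is a deliberately simplified illustration of tying, not the CGMM-CE implementation: two mixture components per "tissue" share a single tied intensity mean, so the M-step pools their responsibilities when updating that mean, while weights and variances stay component-specific.

```python
# Simplified 1-D illustration of parameter tying in a mixture model.
# NOT the CGMM-CE algorithm: here two components per "tissue" share one
# tied intensity mean; weights and variances remain per-component.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(2.0, 0.3, 500), rng.normal(5.0, 0.8, 500)])

tissue = np.array([0, 0, 1, 1])        # component -> tissue assignment
mu_t = np.array([1.0, 6.0])            # one tied intensity mean per tissue
sig = np.array([0.5, 1.0, 0.5, 1.0])   # per-component standard deviations
w = np.full(4, 0.25)                   # mixture weights

for _ in range(50):
    # E-step: responsibility of each component for each sample.
    mu = mu_t[tissue]                  # tied means expanded to components
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: pool responsibilities across a tissue's components for its tied mean.
    for t in range(2):
        rt = r[:, tissue == t]
        mu_t[t] = (rt * x[:, None]).sum() / rt.sum()
    w = r.mean(axis=0)
    sig = np.sqrt((r * (x[:, None] - mu_t[tissue]) ** 2).sum(axis=0) / r.sum(axis=0)).clip(1e-3)

print(mu_t)  # recovers roughly [2.0, 5.0]
```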
A Deep Moving-camera Background Model
In video analysis, background models have many applications such as background/foreground separation, change detection, anomaly detection, tracking, and more. However, while learning such a model in a video captured by a static camera is a fairly solved task, in the case of a Moving-camera Background Model (MCBM) success has been far more modest, owing to algorithmic and scalability challenges that arise from the camera motion. Thus, existing MCBMs are limited in their scope and their supported camera-motion types. These hurdles have also impeded the employment, in this unsupervised task, of end-to-end solutions based on deep learning (DL). Moreover, existing MCBMs usually model the background either on the domain of a typically large panoramic image or in an online fashion. Unfortunately, the former creates several problems, including poor scalability, while the latter prevents the recognition and leveraging of cases where the camera revisits previously seen parts of the scene. This paper proposes a new method, called DeepMCBM, that eliminates all the aforementioned issues and achieves state-of-the-art results. Concretely, first we identify the difficulties associated with joint alignment of video frames in general and in a DL setting in particular. Next, we propose a new strategy for joint alignment that lets us use a spatial transformer net with neither a regularization nor any form of specialized (and non-differentiable) initialization. Coupled with an autoencoder conditioned on unwarped robust central moments (obtained from the joint alignment), this yields an end-to-end, regularization-free MCBM that supports a broad range of camera motions and scales gracefully. We demonstrate DeepMCBM's utility on a variety of videos, including ones beyond the scope of other methods. Our code is available at https://github.com/BGU-CS-VIL/DeepMCBM .
Comment: 26 pages, 5 figures. To be published in ECCV 2022.
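Two of the ingredients named above, warping frames into a joint coordinate frame with a spatial-transformer grid and summarizing the aligned stack by central moments, can be sketched in PyTorch. This is a schematic reconstruction of the general pattern, not the authors' code (which lives at the URL above); the toy frames, warp parameters, and plain (non-robust) moments are my own simplifications.

```python
# Schematic sketch: affine warps via a spatial-transformer grid, then
# per-pixel central moments of the aligned stack. Not the DeepMCBM code.
import torch
import torch.nn.functional as F

def warp(frames, thetas):
    """Apply per-frame 2x3 affine warps (thetas: N x 2 x 3) to frames (N x C x H x W)."""
    grid = F.affine_grid(thetas, frames.shape, align_corners=False)
    return F.grid_sample(frames, grid, align_corners=False)

def central_moments(aligned, k_max=3):
    """Per-pixel mean plus central moments of orders 2..k_max over the stack."""
    mean = aligned.mean(dim=0, keepdim=True)
    resid = aligned - mean
    return torch.cat([mean] + [(resid ** k).mean(dim=0, keepdim=True)
                               for k in range(2, k_max + 1)], dim=1)

frames = torch.rand(8, 1, 64, 64)               # toy video: 8 grayscale frames
thetas = torch.eye(2, 3).repeat(8, 1, 1)        # identity warps as a stand-in
thetas[:, 0, 2] = torch.linspace(-0.1, 0.1, 8)  # simulate a slight camera pan
moments = central_moments(warp(frames, thetas))
print(moments.shape)  # torch.Size([1, 3, 64, 64]): mean + 2nd + 3rd moments
```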
Automated Vocabulary Discovery for Geo-Parsing Online Epidemic Intelligence
Background: Automated surveillance of the Internet provides a timely and sensitive method for alerting on global emerging infectious disease threats. HealthMap is part of a new generation of online systems designed to monitor and visualize, on a real-time basis, disease outbreak alerts as reported by online news media and public health sources. HealthMap is of specific interest for national and international public health organizations and international travelers. A particular task that makes such surveillance useful is the automated discovery of the geographic references contained in the retrieved outbreak alerts. This task is sometimes referred to as "geo-parsing". A typical approach to geo-parsing would demand an expensive training corpus of alerts manually tagged by a human.
Results: Given that human readers perform this kind of task by using both their lexical and contextual knowledge, we developed an approach which relies on a relatively small expert-built gazetteer, thus limiting the need for human input, but focuses on learning the context in which geographic references appear. We show in a set of experiments that this approach exhibits a substantial capacity to discover geographic locations outside of its initial lexicon.
Conclusion: The results of this analysis provide a framework for future automated global surveillance efforts that reduce manual input and improve timeliness of reporting.
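The core idea, learning the contexts in which gazetteer entries appear and using those contexts to spot unlisted place names, can be sketched in a few lines. The snippet below is a deliberately simplified illustration, not the HealthMap system: the texts and gazetteer are toy examples, and the "context" is reduced to the single preceding word.

```python
# Simplified illustration of gazetteer-seeded context learning for geo-parsing.
# NOT the HealthMap implementation; texts and gazetteer are toy examples.
from collections import Counter

gazetteer = {"Uganda", "Manila"}
texts = [
    "An outbreak of Ebola was reported in Uganda on Monday",
    "Cholera cases confirmed in Manila this week",
    "New avian flu cases reported in Jakarta on Friday",
]

# Learn which words immediately precede known gazetteer locations.
context = Counter()
for t in texts:
    toks = t.split()
    for i, tok in enumerate(toks):
        if tok in gazetteer and i > 0:
            context[toks[i - 1]] += 1

# Propose capitalized tokens that follow a learned context word.
candidates = {tok for t in texts for toks in [t.split()]
              for i, tok in enumerate(toks)
              if i > 0 and toks[i - 1] in context
              and tok.istitle() and tok not in gazetteer}
print(candidates)  # {'Jakarta'}
```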