76 research outputs found
By promoting Donald Trump’s tweets, the media are helping to erode trust in democratic institutions
Since he announced he was running for the presidency in 2015, Donald Trump has used social media, and Twitter in particular, to connect with voters and to promote himself like no previous US politician. Devin J. Christensen analyzed media coverage of over 2,500 of Donald Trump’s tweets and found that those with the most inflammatory language were more likely to be highlighted. He warns that social media’s lack of editorial standards means its current use as a primary outlet for political communication threatens the future of American democracy.
The SEDS Conceptual Framework for Health Policy Justice
This dissertation suggests a conceptual framework for thinking about the justness of state health policy interventions. Instituting health regulations always involves limiting individual freedom for the sake of improving health outcomes. Both freedom and healthfulness are widely acknowledged to be foundational moral goods, but the state must often choose to protect one of these goods at the cost of the other. For this reason, the regulatory decisions of the state rightly attract significant attention from political theorists, philosophers, public policymakers, and everyday citizens as eminent issues of justice. However, despite near universal urgency to get health policy right, little consensus exists regarding the moral permissibility of discrete policy interventions. Even authors who agree about the justifiability of particular policies frequently offer incompatible explanations for their views. This dissertation seeks to intervene in these debates by offering a conceptual framework for comparing the plausibility of competing arguments about policy justice. This framework includes four criteria for assessing moral arguments’ plausibility: 1) Soundness, which considers whether arguments’ conclusions validly follow from plausible premises; 2) Endorsement, which considers the goods that arguments propose to secure in exchange for restricting individual freedom; 3) Desert, which considers how well arguments prescribe regulatory protection and restraint according to suffering and responsibility, respectively; and 4) Speech, which considers how arguments as political speech themselves support or undermine broader efforts to achieve justice in state policy. After defining the elements of the SEDS framework, the bulk of the dissertation applies it to arguments in favor of two familiar health policy cases: laws requiring the use of motorcycle helmets, and laws requiring vaccination against tetanus. 
These case studies show that more sustained theoretical reflection on our justifying arguments is warranted: political theorists, empirical social scientists, and policymakers routinely invoke justifications for state policy that fail to reflect basic and noncontroversial assumptions about justice and rhetoric. By identifying the strengths of arguments for common health policy interventions, the SEDS framework improves our thinking about how to balance the goods of health and freedom and gestures toward actionable ways we might achieve more just health policies.
Recursively partitioned mixture model clustering of DNA methylation data using biologically informed correlation structures
DNA methylation is a well-recognized epigenetic mechanism that has been the subject of a growing body of literature, typically focused on the identification and study of DNA methylation profiles and their association with human diseases and exposures. In recent years, a number of unsupervised clustering algorithms, both parametric and non-parametric, have been proposed for clustering large-scale DNA methylation data. However, most of these approaches do not incorporate known biological relationships among measured features, and in some cases rely on unrealistic assumptions about the nature of DNA methylation. Here, we propose a modified version of the recursively partitioned mixture model (RPMM) that integrates information on the proximity of CpG loci within the genome to inform the correlation structures on which subsequent clustering analysis is based. Using simulations and four methylation data sets, we demonstrate that integrating biologically informative correlation structures within RPMM resulted in improved goodness-of-fit, clustering consistency, and the ability to detect biologically meaningful clusters compared to methods that ignore such correlation. Integrating biologically informed correlation structures to enhance modeling techniques is motivated by the rapid increase in resolution of DNA methylation microarrays and the increasing understanding of the biology of this epigenetic mechanism.
Keywords: Model-based clustering, Genomic data, Finite mixture models, Epigenetics
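The core idea can be sketched in a simplified form: derive a correlation structure from CpG genomic proximity (here a hypothetical exponential distance-decay kernel), then use it to decorrelate features before model-based clustering. A plain Gaussian mixture stands in for the full RPMM, and all positions and parameters below are invented for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical genomic positions (bp) for 50 CpG loci on one chromosome
pos = np.sort(rng.integers(0, 200_000, size=50))

# Distance-decay correlation: nearby CpGs assumed correlated
# (5 kb length scale is an arbitrary illustrative choice)
rho = np.exp(-np.abs(pos[:, None] - pos[None, :]) / 5_000.0)

# Simulate two methylation subtypes whose loci share this correlation
L = np.linalg.cholesky(rho + 1e-9 * np.eye(50))
X = np.vstack([
    0.2 + 0.05 * (L @ rng.standard_normal((50, 40))).T,  # subtype A, 40 samples
    0.6 + 0.05 * (L @ rng.standard_normal((50, 40))).T,  # subtype B, 40 samples
]).clip(0.01, 0.99)

# Decorrelate features using the assumed structure before clustering,
# so the mixture model's diagonal-covariance assumption is less unrealistic
Z = np.linalg.solve(L, X.T).T

labels = GaussianMixture(n_components=2, covariance_type="diag",
                         random_state=0).fit_predict(Z)
```

This is only the "informed correlation structure" ingredient; the published method embeds it in a recursive partitioning scheme rather than a flat two-component mixture.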
Bayesian Multimodel Inference for Geostatistical Regression Models
The problem of simultaneous covariate selection and parameter inference for spatial regression models is considered. Previous research has shown that failure to take spatial correlation into account can influence the outcome of standard model selection methods. A Markov chain Monte Carlo (MCMC) method is investigated for the calculation of parameter estimates and posterior model probabilities for spatial regression models. The method can accommodate normal and non-normal response data and a large number of covariates. Thus the method is very flexible and can be used to fit spatial linear models, spatial linear mixed models, and spatial generalized linear mixed models (GLMMs). The Bayesian MCMC method also allows a priori unequal weighting of covariates, which is not possible with many model selection methods such as Akaike's information criterion (AIC). The proposed method is demonstrated on two data sets. The first is a whiptail lizard data set that has previously been analyzed by other researchers investigating model selection methods. Our results confirmed the previous analysis, suggesting that sandy soil and ant abundance were strongly associated with lizard abundance. The second data set concerned pollution-tolerant fish abundance in relation to several environmental factors. Results indicate that abundance is positively related to Strahler stream order and a habitat quality index, and negatively related to percent watershed disturbance.
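The paper's spatial MCMC machinery is beyond a short sketch, but the role of unequal a priori covariate weights in posterior model probabilities can be illustrated by enumerating candidate models and using a BIC approximation to the marginal likelihood. Everything below (data, coefficients, prior inclusion probabilities) is invented, and ordinary least squares stands in for the spatial model.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Simulated non-spatial data: y depends on x0 and x2 only (hypothetical)
n, p = 200, 3
X = rng.standard_normal((n, p))
y = 1.5 * X[:, 0] + 2.0 * X[:, 2] + rng.standard_normal(n)

# A priori inclusion probability per covariate -- the unequal weighting
# that AIC-style selection cannot express
prior_incl = np.array([0.5, 0.5, 0.9])

def log_marginal_bic(Xs, y):
    """BIC approximation to the log marginal likelihood of a linear model."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), Xs]) if Xs.shape[1] else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = np.sum((y - Xd @ beta) ** 2)
    k = Xd.shape[1]
    return -0.5 * (n * np.log(rss / n) + k * np.log(n))

models, scores = [], []
for gamma in itertools.product([0, 1], repeat=p):  # all 2^p covariate subsets
    g = np.array(gamma, dtype=bool)
    log_prior = np.sum(np.where(g, np.log(prior_incl), np.log1p(-prior_incl)))
    models.append(gamma)
    scores.append(log_marginal_bic(X[:, g], y) + log_prior)

# Normalize to posterior model probabilities
scores = np.array(scores)
post = np.exp(scores - scores.max())
post /= post.sum()
best = models[int(np.argmax(post))]
```

With many covariates, exhaustive enumeration becomes infeasible, which is one reason the paper samples over model space with MCMC instead.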
DNA methylation arrays as surrogate measures of cell mixture distribution
Background: There has been a long-standing need in biomedical research for a method that quantifies the normally mixed composition of leukocytes beyond what is possible by simple histological or flow cytometric assessments. The latter is restricted by the labile nature of protein epitopes, requirements for cell processing, and timely cell analysis. In a diverse array of diseases and following numerous immune-toxic exposures, leukocyte composition will critically inform the underlying immuno-biology to most chronic medical conditions. Emerging research demonstrates that DNA methylation is responsible for cellular differentiation, and when measured in whole peripheral blood, serves to distinguish cancer cases from controls.
Results: Here we present a method, similar to regression calibration, for inferring changes in the distribution of white blood cells between different subpopulations (e. g. cases and controls) using DNA methylation signatures, in combination with a previously obtained external validation set consisting of signatures from purified leukocyte samples. We validate the fundamental idea in a cell mixture reconstruction experiment, then demonstrate our method on DNA methylation data sets from several studies, including data from a Head and Neck Squamous Cell Carcinoma (HNSCC) study and an ovarian cancer study. Our method produces results consistent with prior biological findings, thereby validating the approach.
Conclusions: Our method, in combination with an appropriate external validation set, promises new opportunities for large-scale immunological studies of both disease states and noxious exposures.
Keywords: Down syndrome, Absolute counts, Variable analysis, Lung cancer, Measurement error, Peripheral blood, Stem cells, Gene expression, T lymphocyte subsets, Ovarian cancer
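The deconvolution idea can be illustrated with a simplified stand-in: given reference methylation signatures from purified cell types, estimate the mixture proportions in a whole-blood sample by non-negative least squares. The published approach uses regression calibration with constrained quadratic programming; the signatures and mixture below are simulated.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)

# Hypothetical reference signatures: mean methylation (beta values) at
# 100 discriminating CpGs for three purified leukocyte types
S = rng.uniform(0.05, 0.95, size=(100, 3))

# A whole-blood sample simulated as a known mixture of the three types,
# with small measurement noise
true_w = np.array([0.6, 0.3, 0.1])
m = S @ true_w + rng.normal(0.0, 0.01, size=100)

# Non-negative least squares recovers the cell proportions; renormalize
# so they sum to one (a simplification of the simplex constraint the
# published method imposes directly)
w_hat, _ = nnls(S, m)
w_hat /= w_hat.sum()
```

In practice the reference CpGs are chosen to maximally discriminate cell types, which keeps the system well conditioned even with modest noise.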
Review of processing and analysis methods for DNA methylation array data
The promise of epigenome-wide association studies (EWAS) and cancer-specific somatic changes for improving our understanding of cancer, coupled with the decreasing cost and increasing coverage of DNA methylation microarrays, has brought about a surge in the use of these technologies. Here we aim both to review issues encountered in the processing and analysis of array-based DNA methylation data and to summarize the advantages of recent approaches proposed for handling those issues, focusing on approaches publicly available in open-source environments such as R and Bioconductor. We hope the processing tools and analysis flowchart described here will help researchers use these powerful DNA methylation array-based platforms effectively, thereby advancing our understanding of human health and disease.
Keywords: Processing, Microarray, Analysis, DNA methylation, Bioconductor and R package
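One preprocessing step such reviews routinely cover is the conversion between beta values (proportion methylated, bounded in [0, 1]) and M-values (their logit on a log2 scale), since the M-value scale is closer to homoscedastic and better suited to linear modeling. A minimal sketch:

```python
import numpy as np

def beta_to_m(beta, eps=1e-6):
    """Convert beta values to M-values: M = log2(beta / (1 - beta)).

    Clipping guards against taking the log of 0 at fully
    unmethylated or fully methylated loci.
    """
    beta = np.clip(beta, eps, 1 - eps)
    return np.log2(beta / (1 - beta))

def m_to_beta(m):
    """Inverse transform: beta = 2^M / (2^M + 1)."""
    return 2.0 ** m / (2.0 ** m + 1.0)
```

A beta value of 0.5 maps to M = 0, and the transform is symmetric about that point, so hyper- and hypomethylation are treated on an equal footing.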
Multi-messenger observations of a binary neutron star merger
On 2017 August 17 a binary neutron star coalescence candidate (later designated GW170817) with merger time 12:41:04 UTC was observed through gravitational waves by the Advanced LIGO and Advanced Virgo detectors. The Fermi Gamma-ray Burst Monitor independently detected a gamma-ray burst (GRB 170817A) with a time delay of ~1.7 s with respect to the merger time. From the gravitational-wave signal, the source was initially localized to a sky region of 31 deg² at a luminosity distance of 40 (+8/-8) Mpc and with component masses consistent with neutron stars. The component masses were later measured to be in the range 0.86 to 2.26 M☉. An extensive observing campaign was launched across the electromagnetic spectrum, leading to the discovery of a bright optical transient (SSS17a, now with the IAU identification AT 2017gfo) in NGC 4993 (at ~40 Mpc) less than 11 hours after the merger by the One-Meter, Two Hemisphere (1M2H) team using the 1 m Swope Telescope. The optical transient was independently detected by multiple teams within an hour. Subsequent observations targeted the object and its environment. Early ultraviolet observations revealed a blue transient that faded within 48 hours. Optical and infrared observations showed a redward evolution over ~10 days. Following early non-detections, X-ray and radio emission were discovered at the transient's position ~9 and ~16 days, respectively, after the merger. Both the X-ray and radio emission likely arise from a physical process that is distinct from the one that generates the UV/optical/near-infrared emission. No ultra-high-energy gamma rays and no neutrino candidates consistent with the source were found in follow-up searches. These observations support the hypothesis that GW170817 was produced by the merger of two neutron stars in NGC 4993, followed by a short gamma-ray burst (GRB 170817A) and a kilonova/macronova powered by the radioactive decay of r-process nuclei synthesized in the ejecta.
Global prevalence and genotype distribution of hepatitis C virus infection in 2015 : A modelling study
Background: The 69th World Health Assembly approved the Global Health Sector Strategy to eliminate hepatitis C virus (HCV) infection by 2030, which can become a reality with the recent launch of direct-acting antiviral therapies. Reliable disease burden estimates are required for national strategies. This analysis estimates the global prevalence of viraemic HCV at the end of 2015, an update of, and expansion on, the 2014 analysis, which reported 80 million (95% CI 64–103) viraemic infections in 2013.
Methods: We developed country-level disease burden models following a systematic review of HCV prevalence (number of studies, n=6754) and genotype (n=11 342) studies published after 2013. A Delphi process was used to gain country expert consensus and validate inputs. Published estimates alone were used for countries where expert panel meetings could not be scheduled. Global prevalence was estimated using regional averages for countries without data.
Findings: Models were built for 100 countries, 59 of which were approved by country experts; the remaining 41 were estimated using published data alone. Other countries had insufficient data to create a model. The global prevalence of viraemic HCV is estimated to be 1·0% (95% uncertainty interval 0·8–1·1) in 2015, corresponding to 71·1 million (62·5–79·4) viraemic infections. Genotypes 1 and 3 were the most common causes of infection (44% and 25%, respectively).
Interpretation: The global estimate of viraemic infections is lower than previous estimates, largely owing to more recent (lower) prevalence estimates in Africa. Additionally, increased mortality from liver-related causes and an ageing population may have contributed to a reduction in infections.
Funding: John C Martin Foundation.
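The stated gap-filling strategy, imputing regional averages for countries without modelled prevalence and then aggregating to a global burden, can be sketched with invented numbers (country prevalences, populations, and region labels below are all hypothetical):

```python
import numpy as np

# Illustrative inputs: per-country viraemic prevalence where a model
# exists (NaN otherwise), population, and region label
prevalence = np.array([0.012, np.nan, 0.008, 0.030, np.nan])
population = np.array([50e6, 20e6, 80e6, 30e6, 10e6])
region     = np.array(["A", "A", "B", "B", "B"])

# Fill gaps with the regional average of the modelled countries,
# mirroring the paper's stated strategy for countries without data
filled = prevalence.copy()
for r in np.unique(region):
    mask = region == r
    avg = np.nanmean(prevalence[mask])
    filled[mask & np.isnan(prevalence)] = avg

# Aggregate to a population-weighted global burden
global_infections = np.sum(filled * population)
global_prevalence = global_infections / population.sum()
```

Population weighting matters here: a regional average of prevalences alone would over-represent small countries when rolled up to the global figure.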