Cosmological parameters from SDSS and WMAP
We measure cosmological parameters using the three-dimensional power spectrum
P(k) from over 200,000 galaxies in the Sloan Digital Sky Survey (SDSS) in
combination with WMAP and other data. Our results are consistent with a
``vanilla'' flat adiabatic Lambda-CDM model without tilt (n=1), running tilt,
tensor modes or massive neutrinos. Adding SDSS information more than halves the
WMAP-only error bars on some parameters, tightening 1 sigma constraints on the
Hubble parameter from h~0.74+0.18-0.07 to h~0.70+0.04-0.03, on the matter
density from Omega_m~0.25+/-0.10 to Omega_m~0.30+/-0.04 (1 sigma) and on
neutrino masses from <11 eV to <0.6 eV (95%). SDSS helps even more when
dropping prior assumptions about curvature, neutrinos, tensor modes and the
equation of state. Our results are in substantial agreement with the joint
analysis of WMAP and the 2dF Galaxy Redshift Survey, which is an impressive
consistency check with independent redshift survey data and analysis
techniques. In this paper, we place particular emphasis on clarifying the
physical origin of the constraints, i.e., what we do and do not know when using
different data sets and prior assumptions. For instance, dropping the
assumption that space is perfectly flat, the WMAP-only constraint on the
measured age of the Universe tightens from t0~16.3+2.3-1.8 Gyr to
t0~14.1+1.0-0.9 Gyr by adding SDSS and SN Ia data. Including tensors, running
tilt, neutrino mass and equation of state in the list of free parameters, many
constraints are still quite weak, but future cosmological measurements from
SDSS and other sources should allow these to be substantially tightened.
Comment: Minor revisions to match accepted PRD version. SDSS data and ppt figures available at http://www.hep.upenn.edu/~max/sdsspars.htm
A communal catalogue reveals Earth’s multiscale microbial diversity
Our growing awareness of the microbial world’s importance and diversity contrasts starkly with our limited understanding of its fundamental structure. Despite recent advances in DNA sequencing, a lack of standardized protocols and common analytical frameworks impedes comparisons among studies, hindering the development of global inferences about microbial life on Earth. Here we present a meta-analysis of microbial community samples collected by hundreds of researchers for the Earth Microbiome Project. Coordinated protocols and new analytical methods, particularly the use of exact sequences instead of clustered operational taxonomic units, enable bacterial and archaeal ribosomal RNA gene sequences to be followed across multiple studies and allow us to explore patterns of diversity at an unprecedented scale. The result is both a reference database giving global context to DNA sequence data and a framework for incorporating data from future studies, fostering increasingly complete characterization of Earth’s microbial diversity.
Averting biodiversity collapse in tropical forest protected areas
The rapid disruption of tropical forests probably imperils global biodiversity more than any other contemporary phenomenon¹⁻³. With deforestation advancing quickly, protected areas are increasingly becoming final refuges for threatened species and natural ecosystem processes. However, many protected areas in the tropics are themselves vulnerable to human encroachment and other environmental stresses⁴⁻⁹. As pressures mount, it is vital to know whether existing reserves can sustain their biodiversity. A critical constraint in addressing this question has been that data describing a broad array of biodiversity groups have been unavailable for a sufficiently large and representative sample of reserves. Here we present a uniquely comprehensive data set on changes over the past 20 to 30 years in 31 functional groups of species and 21 potential drivers of environmental change, for 60 protected areas stratified across the world’s major tropical regions. Our analysis reveals great variation in reserve ‘health’: about half of all reserves have been effective or performed passably, but the rest are experiencing an erosion of biodiversity that is often alarmingly widespread taxonomically and functionally. Habitat disruption, hunting and forest-product exploitation were the strongest predictors of declining reserve health. Crucially, environmental changes immediately outside reserves seemed nearly as important as those inside in determining their ecological fate, with changes inside reserves strongly mirroring those occurring around them. These findings suggest that tropical protected areas are often intimately linked ecologically to their surrounding habitats, and that a failure to stem broad-scale loss and degradation of such habitats could sharply increase the likelihood of serious biodiversity declines.
Clinical Validation of a Machine-Learned, Point-of-Care System to IDENTIFY Functionally Significant Coronary Artery Disease
Many clinical studies have shown wide performance variation in tests to identify coronary artery disease (CAD). Coronary computed tomography angiography (CCTA) has been identified as an effective rule-out test but is not widely available in the USA, particularly in rural areas. Patients in rural areas are underserved in the healthcare system as compared to urban areas, rendering them a priority population to target with highly accessible diagnostics. We previously developed a machine-learned algorithm to identify the presence of CAD (defined by functional significance) in patients with symptoms, without the use of radiation or stress. The algorithm requires 215 s of temporally synchronized photoplethysmographic and orthogonal voltage gradient signals acquired at rest. The purpose of the present work is to validate the performance of the algorithm in a frozen state (i.e., no retraining) in a large, blinded dataset from the IDENTIFY trial. IDENTIFY is a multicenter, selectively blinded, non-randomized, prospective, repository study to acquire signals with paired metadata from subjects with symptoms indicative of CAD within seven days prior to either left heart catheterization or CCTA. The algorithm's sensitivity and specificity were validated using a set of unseen patient signals (N = 1816). Pre-specified endpoints were chosen to demonstrate a rule-out performance comparable to CCTA. The ROC-AUC in the validation set was 0.80 (95% CI: 0.78-0.82). This performance was maintained in both male and female subgroups. At the pre-specified cut point, the sensitivity was 0.85 (95% CI: 0.82-0.88), and the specificity was 0.58 (95% CI: 0.54-0.62), passing the pre-specified endpoints. Assuming a 4% disease prevalence, the NPV was 0.99. Algorithm performance is comparable to tertiary center testing using CCTA. Selection of a suitable cut point results in the same sensitivity and specificity performance in females as in males.
Therefore, a medical device embedding this algorithm may address an unmet need for a non-invasive, front-line point-of-care test for CAD (without any radiation or stress), thus offering significant benefits to the patient, physician, and healthcare system.
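The quoted NPV of 0.99 follows directly from Bayes' rule applied to the reported operating point (sensitivity 0.85, specificity 0.58) and the assumed 4% disease prevalence. A minimal sketch of that arithmetic (the helper name is illustrative, not from the study):

```python
def npv(sensitivity, specificity, prevalence):
    """Negative predictive value via Bayes' rule:
    P(no disease | negative test)."""
    true_neg = specificity * (1 - prevalence)       # healthy subjects testing negative
    false_neg = (1 - sensitivity) * prevalence      # diseased subjects testing negative
    return true_neg / (true_neg + false_neg)

# Reported cut point: sensitivity 0.85, specificity 0.58; prevalence 4%.
print(round(npv(0.85, 0.58, 0.04), 2))  # → 0.99
```

Note that NPV depends strongly on the assumed prevalence: the same cut point at a higher pre-test probability would yield a lower NPV.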
Development and validation of a machine learned algorithm to IDENTIFY functionally significant coronary artery disease
Introduction: Multiple trials have demonstrated broad performance ranges for tests attempting to detect coronary artery disease. The most common test, SPECT, requires capital-intensive equipment, the use of radionuclides, induction of stress, and time off work and/or travel. Presented here are the development and clinical validation of an office-based machine learned algorithm to identify functionally significant coronary artery disease without radiation, expensive equipment or induced patient stress.
Materials and methods: The IDENTIFY trial (NCT03864081) is a prospective, multicenter, non-randomized, selectively blinded, repository study to collect acquired signals paired with subject meta-data, including outcomes, from subjects with symptoms of coronary artery disease. Time synchronized orthogonal voltage gradient and photoplethysmographic signals were collected for 230 seconds from recumbent subjects at rest within seven days of either left heart catheterization or coronary computed tomography angiography. Following machine learning on a proportion of these data (N = 2,522), a final algorithm was selected, along with a pre-specified cut point on the receiver operating characteristic curve for clinical validation. An unseen set of subject signals (N = 965) was used to validate the algorithm.
Results: At the pre-specified cut point, the sensitivity for detecting functionally significant coronary artery disease was 0.73 (95% CI: 0.68-0.78), and the specificity was 0.68 (0.62-0.74). There exists a point on the receiver operating characteristic curve at which the negative predictive value is the same as coronary computed tomographic angiography, 0.99, assuming a disease incidence of 0.04, yielding sensitivity of 0.89 and specificity of 0.42. Selecting a point at which the positive predictive value is maximized, 0.12, yields sensitivity of 0.39 and specificity of 0.88.
Conclusion: The performance of the machine learned algorithm presented here is comparable to common tertiary center testing for coronary artery disease. Employing multiple cut points on the receiver operating characteristic curve can yield the negative predictive value of coronary computed tomographic angiography and a positive predictive value approaching that of myocardial perfusion imaging. As such, a system employing this algorithm may address the need for a non-invasive, no-radiation, no-stress, front-line test, and hence offer significant advantages to the patient, their physician, and the healthcare system.
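The two alternate cut points quoted in the results (NPV 0.99 at sensitivity 0.89/specificity 0.42; PPV 0.12 at sensitivity 0.39/specificity 0.88) can be checked with Bayes' rule under the stated 0.04 disease incidence. A small sketch verifying both, with illustrative function names:

```python
def npv(sens, spec, prev):
    # P(no disease | negative test)
    tn = spec * (1 - prev)
    fn = (1 - sens) * prev
    return tn / (tn + fn)

def ppv(sens, spec, prev):
    # P(disease | positive test)
    tp = sens * prev
    fp = (1 - spec) * (1 - prev)
    return tp / (tp + fp)

# Rule-out cut point: sensitivity 0.89, specificity 0.42
print(round(npv(0.89, 0.42, 0.04), 2))  # → 0.99
# Rule-in cut point: sensitivity 0.39, specificity 0.88
print(round(ppv(0.39, 0.88, 0.04), 2))  # → 0.12
```

This illustrates the trade-off the conclusion describes: sliding along the same ROC curve trades sensitivity for specificity, letting one operating point serve as a rule-out test and another as a rule-in test.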
Multicenter validation of a machine learning phase space electro-mechanical pulse wave analysis to predict elevated left ventricular end diastolic pressure at the point-of-care
BACKGROUND: Phase space is a mechanical systems approach and large-scale data representation of an object in 3-dimensional space. Whether such techniques can be applied to predict left ventricular pressures non-invasively and at the point-of-care is unknown.
OBJECTIVE: This study prospectively validated a phase space machine-learned approach based on a novel electro-mechanical pulse wave method of data collection through orthogonal voltage gradient (OVG) and photoplethysmography (PPG) for the prediction of elevated left ventricular end diastolic pressure (LVEDP).
METHODS: Consecutive outpatients across 15 US-based healthcare centers with symptoms suggestive of coronary artery disease were enrolled at the time of elective cardiac catheterization and underwent OVG and PPG data acquisition immediately prior to angiography, with signals paired with LVEDP (IDENTIFY; NCT #03864081). The primary objective was to validate an ML algorithm for the prediction of elevated LVEDP, defined as ≥ 25 mmHg (study cohort) versus a normal LVEDP of ≤ 12 mmHg (control cohort), using AUC as the measure of diagnostic accuracy. Secondary objectives included performance of the ML predictor in a propensity-matched cohort (age and gender) and performance for elevated LVEDP across a spectrum of comparative LVEDP increments. Features were extracted from the OVG and PPG datasets and analyzed using machine-learning approaches.
RESULTS: The study cohort consisted of 684 subjects stratified into three LVEDP categories: ≤ 12 mmHg (N = 258), 13-24 mmHg (N = 347), and ≥ 25 mmHg (N = 79). Testing of the ML predictor demonstrated an AUC of 0.81 (95% CI: 0.76-0.86) for the prediction of an elevated LVEDP, with a sensitivity of 82% and a specificity of 68%. In a propensity-matched cohort (N = 79) the ML predictor demonstrated a similar result, with an AUC of 0.79 (95% CI: 0.72-0.8). Using a constant definition of elevated LVEDP and varying the lower threshold across LVEDP, the ML predictor demonstrated an AUC ranging from 0.79 to 0.82.
CONCLUSION: The phase space ML analysis provides a robust prediction of elevated LVEDP at the point-of-care. These data suggest a potential role for an OVG- and PPG-derived electro-mechanical pulse wave strategy to determine whether LVEDP is elevated in patients with symptoms suggestive of cardiac disease.