Quality control in public participation assessments of water quality: the OPAL Water Survey
BACKGROUND: Public participation in scientific data collection is a rapidly expanding field. In water quality surveys, the involvement of the public, usually as trained volunteers, generally includes the identification of aquatic invertebrates to a broad taxonomic level. However, quality assurance is often not addressed and remains a key concern for the acceptance of publicly generated water quality data. The Open Air Laboratories (OPAL) Water Survey, launched in May 2010, aimed to encourage interest and participation in water science by developing a 'low-barrier-to-entry' water quality survey. During 2010, over 3,000 participant-selected lakes and ponds were surveyed, making this the largest public participation lake and pond survey undertaken to date in the UK. However, the OPAL approach of using untrained volunteers and largely anonymous data submission exacerbates quality control concerns. A number of approaches were used to address data quality issues, including: sensitivity analysis to determine differences due to operator, sampling effort and duration; direct comparisons of identification between participants and experienced scientists; the use of a self-assessment identification quiz; the use of multiple participant surveys to assess data variability at single sites over short periods of time; and comparison of survey techniques with other measurement variables and with other metrics generally considered more accurate. These quality control approaches were then used to screen the OPAL Water Survey data and generate a more robust dataset. RESULTS: The OPAL Water Survey results provide a regional and national assessment of water quality as well as a first national picture of water clarity (as suspended solids concentrations). Fewer than 10% of the lakes and ponds surveyed were of 'poor' quality, while 26.8% were in the highest water quality band.
CONCLUSIONS: It is likely that there will always be a question mark over untrained-volunteer-generated data simply because quality assurance is uncertain, regardless of any post hoc data analyses. Quality control at all stages, from survey design and identification tests to data submission and interpretation, can increase confidence such that useful data can be generated by public participants.
The Leeway of Shipping Containers at Different Immersion Levels
The leeway of 20-foot containers in typical distress conditions is
established through field experiments in a Norwegian fjord and in open-ocean
conditions off the coast of France with wind speed ranging from calm to 14 m/s.
The experimental setup is described in detail and certain recommendations given
for experiments on objects of this size. The results are compared with the
leeway of a scaled-down container before the full set of measured leeway
characteristics are compared with a semi-analytical model of immersed
containers. Our results are broadly consistent with the semi-analytical model,
but the model is found to be sensitive to choice of drag coefficient and makes
no estimate of the cross-wind leeway of containers. We extend the results from
the semi-analytical immersion model by extrapolating the observed leeway
divergence and estimates of the experimental uncertainty to various realistic
immersion levels. The sensitivity of these leeway estimates at different
immersion levels is tested using a stochastic trajectory model. Search areas
are found to be sensitive to the exact immersion level and the choice of drag
coefficient, and somewhat less sensitive to the inclusion of leeway divergence.
We further compare the search areas thus found with a range of trajectories
estimated using the semi-analytical model with only perturbations to the
immersion level. We find that the search areas calculated without estimates of
crosswind leeway and its uncertainty will grossly underestimate the rate of
expansion of the search areas. We recommend that stochastic trajectory models
of container drift should account for these uncertainties by generating search
areas for different immersion levels and with the uncertainties in crosswind
and downwind leeway reported from our field experiments.
Comment: 25 pages, 11 figures and 5 tables; Ocean Dynamics, Special Issue on Advances in Search and Rescue at Sea (2012).
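The role of crosswind uncertainty in search-area growth can be illustrated with a toy Monte Carlo drift sketch. All coefficients below are illustrative assumptions, not the fitted values from these experiments: each simulated container draws perturbed downwind and crosswind leeway speeds around a linear wind relationship, and the spread of final positions stands in for search-area growth.

```python
import random
import statistics

def simulate_leeway(n_particles=2000, wind_speed=10.0, hours=24,
                    downwind_slope=0.03, crosswind_slope=0.01,
                    sigma=0.005, seed=42):
    """Toy Monte Carlo of container drift under a linear leeway model.

    Leeway speed is modelled as slope * wind_speed (m/s), with a
    per-particle Gaussian perturbation (sigma * wind_speed) standing in
    for the experimental uncertainty. Crosswind leeway takes a random
    sign to mimic the left/right divergence of drifting objects.
    All coefficients are illustrative assumptions, not fitted values.
    """
    rng = random.Random(seed)
    t = hours * 3600.0  # drift duration in seconds
    xs, ys = [], []
    for _ in range(n_particles):
        dw = (downwind_slope + rng.gauss(0.0, sigma)) * wind_speed
        cw = rng.choice([-1.0, 1.0]) * (crosswind_slope + rng.gauss(0.0, sigma)) * wind_speed
        xs.append(dw * t / 1000.0)  # km downwind
        ys.append(cw * t / 1000.0)  # km crosswind
    return statistics.mean(xs), statistics.pstdev(xs), statistics.pstdev(ys)

mean_dw, spread_dw, spread_cw = simulate_leeway()
print(f"mean downwind drift ~{mean_dw:.1f} km, "
      f"spread ~{spread_dw:.1f} km downwind, ~{spread_cw:.1f} km crosswind")
```

Setting the crosswind terms to zero collapses the crosswind spread entirely, which is the underestimation of search-area expansion the abstract warns about.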
Sustaining Water Resources: Environmental and Economic Impact
Water is essential to human health and economic development due to its utilization in sanitation, agriculture, and energy. Supplying water to an expanding world population requires simultaneous consideration of multiple societal sectors competing for limited resources. Water conservation, supply augmentation, distribution, and treatment of contaminants must work in concert to ensure water sustainability. Water is linked to other sectors, and the quantity and quality of water resources are changing. The efficient use of water in agriculture, the largest user of water worldwide, via drip irrigation is described, as is the use of energy-intensive reverse osmosis to supplement freshwater supplies. Efforts to manage watersheds and model their responses to severe weather events are discussed, along with efforts to improve the predictability of their function. The regional competition for water resources impacts both energy and water supply reliability, which requires that nations balance both for sustainable economic development. The use of water and energy in the US is described, which provides a lens through which both to rethink the interrelationship of water and energy and to evaluate technological developments. Advances in nanotechnology are highlighted as one emerging technology. These results underscore the multifaceted nature of water sustainability, its interrelationship with energy and economic development, and the need to develop, manage and regulate water systems in a concerted manner.
Molecular heterogeneity in major urinary proteins of Mus musculus subspecies: potential candidates involved in speciation
When hybridisation carries a cost, natural selection is predicted to favour the evolution of traits that allow assortative mating (reinforcement). Incipient speciation between the two European house mouse subspecies, Mus musculus domesticus and M. m. musculus, which share a hybrid zone, provides an opportunity to understand the evolution of assortative mating at a molecular level. Mouse urine odours allow subspecific mate discrimination, with assortative preferences evident in the hybrid zone but not in allopatry. Here we assess the potential of major urinary proteins (MUPs) as candidates for signal divergence by comparing MUP expression in urine samples from the Danish hybrid zone border (contact) and from allopatric populations. Mass spectrometric characterisation identified novel MUPs in both subspecies, involving mostly new combinations of amino acid changes previously observed in M. m. domesticus. The subspecies expressed distinct MUP signatures, with most MUPs expressed by only one subspecies. Expression of at least eight MUPs showed significant subspecies divergence in both allopatry and the contact zone. Another seven MUPs showed divergence in expression between the subspecies only in the contact zone, consistent with divergence by reinforcement. These proteins are candidates for the semiochemical barrier to hybridisation, providing an opportunity to characterise the nature and evolution of a putative species recognition signal.
Light echoes reveal an unexpectedly cool Eta Carinae during its 19th-century Great Eruption
Eta Carinae (Eta Car) is one of the most massive binary stars in the Milky
Way. It became the second-brightest star in the sky during its mid-19th century
"Great Eruption," but then faded from view (with only naked-eye estimates of
brightness). Its eruption is unique among known astronomical transients in that
it exceeded the Eddington luminosity limit for 10 years. Because it is only 2.3
kpc away, spatially resolved studies of the nebula have constrained the ejected
mass and velocity, indicating that in its 19th century eruption, Eta Car
ejected more than 10 M_solar in an event that had 10% of the energy of a
typical core-collapse supernova without destroying the star. Here we report the
discovery of light echoes of Eta Carinae which appear to be from the 1838-1858
Great Eruption. Spectra of these light echoes show only absorption lines, which
are blueshifted by -210 km/s, in good agreement with predicted expansion
speeds. The light-echo spectra correlate best with those of G2-G5 supergiants,
which have effective temperatures of ~5000 K. In contrast to the class
of extragalactic outbursts assumed to be analogs of Eta Car's Great Eruption,
the effective temperature of its outburst is significantly cooler than allowed
by standard opaque wind models. This indicates that other physical mechanisms
like an energetic blast wave may have triggered and influenced the eruption.
Comment: Accepted for publication by Nature; 4 pages, 4 figures; SI: 6 pages, 3 figures, 5 tables.
High platelet reactivity in patients with acute coronary syndromes undergoing percutaneous coronary intervention: Randomised controlled trial comparing prasugrel and clopidogrel
Background: Prasugrel is more effective than clopidogrel in reducing platelet aggregation in acute coronary syndromes. Data on prasugrel reloading in clopidogrel-treated patients with high residual platelet reactivity (HRPR), i.e. poor responders, are limited. Objectives: To determine the effects of prasugrel loading on platelet function in patients on clopidogrel with high platelet reactivity undergoing percutaneous coronary intervention (PCI) for acute coronary syndrome (ACS). Patients: Patients with ACS on clopidogrel who were scheduled for PCI and found to have platelet reactivity ≥40 AUC with the Multiplate Analyzer, i.e. "poor responders", were randomised to prasugrel (60 mg loading and 10 mg maintenance dose) or clopidogrel (600 mg reloading and 150 mg maintenance dose). The primary outcome measure was the proportion of patients with platelet reactivity <40 AUC 4 hours after loading with study medication; the proportion at one hour was a secondary outcome. 44 patients were enrolled and the study was terminated early because clopidogrel use decreased sharply after the introduction of newer P2Y12 inhibitors. Results: At 4 hours after study medication, 100% of patients treated with prasugrel compared to 91% of those treated with clopidogrel had platelet reactivity <40 AUC (p = 0.49), while at 1 hour the proportions were 95% and 64% respectively (p = 0.02). Mean platelet reactivity in the prasugrel and clopidogrel groups was 12 versus 22 AUC at 4 hours (p = 0.005) and 19 versus 34 AUC at 1 hour (p = 0.01). Conclusions: Routine platelet function testing identifies patients with high residual platelet reactivity ("poor responders") on clopidogrel. A strategy of prasugrel rather than clopidogrel reloading results in earlier and more sustained suppression of platelet reactivity. Future trials need to identify whether this translates into clinical benefit.
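As a sanity check, the reported 4-hour comparison can be reproduced with a two-sided Fisher exact test using only the standard library. The arm sizes (22 per group, splitting the 44 enrolled equally) and responder counts (22/22 versus 20/22, matching the reported 100% and 91%) are back-calculated assumptions here, not figures taken from the trial report:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test on a 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins that are as likely or less likely than the observed one.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def prob(x):  # probability of x successes in the first row
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = prob(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# Assumed equal arms of 22: prasugrel 22/22 responders (<40 AUC),
# clopidogrel 20/22 (~91%) at 4 hours.
p = fisher_exact_two_sided(22, 0, 20, 2)
print(f"p = {p:.2f}")  # close to the reported p = 0.49
```

With these assumed counts, the non-significant 4-hour result falls out directly: the arms differ by only two responders out of 22.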
Automation of route identification and optimisation based on data-mining and chemical intuition.
Data-mining of Reaxys and network analysis of the combined literature and in-house reaction set were used to generate multiple possible reaction routes to convert a bio-waste feedstock, limonene, into a pharmaceutical API, paracetamol. The network analysis of data provides a rich knowledge-base for generation of the initial reaction screening and development programme. Based on the literature and the in-house data, an overall flowsheet for the conversion of limonene to paracetamol was proposed. Each individual reaction-separation step in the sequence was simulated as a combination of continuous flow and batch steps. The linear model generation methodology allowed us to identify the reaction steps requiring further chemical optimisation. The generated model can be used for global optimisation and generation of environmental and other performance indicators, such as cost indicators. However, the identified further challenge is to automate model generation to evolve optimal multi-step chemical routes and optimal process configurations.
This work was funded in part by the EPSRC project "Terpene-based Manufacturing for Sustainable Chemical Feedstocks" (EP/K014889). The PhD scholarship of WC is funded by the EPSRC Doctoral Training Centre in Sustainable Chemical Technologies (EP/G03768X/1). We gratefully acknowledge collaboration with RELX Intellectual Properties SA and their technical support, which enabled us to mine REAXYS. PMJ is grateful to Peterhouse and the Cambridge Trust for PhD scholarships.
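The route-generation step can be pictured as path search over a reaction network. The sketch below runs a breadth-first search over a toy graph; the intermediates are hypothetical placeholders, not the routes actually mined from Reaxys:

```python
from collections import deque

# Toy reaction network: edges point from a starting material to a product.
# All intermediate names are hypothetical placeholders.
reaction_network = {
    "limonene": ["intermediate_A", "intermediate_B"],
    "intermediate_A": ["intermediate_C"],
    "intermediate_B": ["intermediate_C", "dead_end"],
    "intermediate_C": ["paracetamol"],
    "dead_end": [],
    "paracetamol": [],
}

def shortest_route(network, start, target):
    """Breadth-first search returning one shortest reaction route."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        route = queue.popleft()
        node = route[-1]
        if node == target:
            return route
        for product in network.get(node, []):
            if product not in seen:
                seen.add(product)
                queue.append(route + [product])
    return None  # no route exists in the network

route = shortest_route(reaction_network, "limonene", "paracetamol")
print(" -> ".join(route))  # -> limonene -> intermediate_A -> intermediate_C -> paracetamol
```

In practice each edge would carry literature metadata (yield, conditions, citations), and the search would rank candidate routes rather than return a single shortest path.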
A Blast Wave from the 1843 Eruption of Eta Carinae
Very massive stars shed much of their mass in violent precursor eruptions as
luminous blue variables (LBVs) before reaching their most likely end as
supernovae, but the cause of LBV eruptions is unknown. The 19th century
eruption of Eta Carinae, the prototype of these events, ejected about 12 solar
masses at speeds of 650 km/s, with a kinetic energy of almost 10^50 ergs. Some
faster material with speeds up to 1000-2000 km/s had previously been reported
but its full distribution was unknown. Here I report observations of much
faster material with speeds up to 3500-6000 km/s, reaching farther from the
star than the fastest material in earlier reports. This fast material roughly
doubles the kinetic energy of the 19th century event, and suggests that it
released a blast wave now propagating ahead of the massive ejecta. Thus, Eta
Car's outer shell now mimics a low-energy supernova remnant. The eruption has
usually been discussed in terms of an extreme wind driven by the star's
luminosity, but fast material reported here suggests that it was powered by a
deep-seated explosion rivalling a supernova, perhaps triggered by the
pulsational pair instability. This may alter interpretations of similar events
seen in other galaxies.
Comment: 10 pages, 3 color figures, supplementary information. Accepted by Nature.
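The quoted kinetic energy can be sanity-checked from the figures given above (E_k = 1/2 m v², with m ≈ 12 solar masses and v ≈ 650 km/s, in CGS units):

```python
# Quick order-of-magnitude check of the quoted eruption energy (CGS units).
M_SUN = 1.989e33                  # solar mass in grams
ejecta_mass = 12 * M_SUN          # ~12 solar masses, as quoted above
speed = 650 * 1e5                 # 650 km/s converted to cm/s
kinetic_energy = 0.5 * ejecta_mass * speed**2  # erg
print(f"E_k ~ {kinetic_energy:.1e} erg")  # ~5e49 erg, i.e. "almost 10^50 ergs"
```

This agrees with the abstract's "almost 10^50 ergs", and doubling it (as the newly reported fast material implies) brings the total to ~10^50 erg, about a tenth of a typical supernova.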
Regulation of neutrophil senescence by microRNAs
Neutrophils are rapidly recruited to sites of tissue injury or infection, where they protect against invading pathogens. Neutrophil functions are limited by a process of neutrophil senescence, which renders the cells unable to respond to chemoattractants, carry out a respiratory burst, or degranulate. In parallel, aged neutrophils also undergo spontaneous apoptosis, which can be delayed by factors such as GM-CSF. This is then followed by their removal by phagocytic cells such as macrophages, thereby preventing unwanted inflammation and tissue damage. Neutrophils translate mRNA to make new proteins that are important in maintaining functional longevity. We therefore hypothesised that neutrophil functions and lifespan might be regulated by microRNAs expressed within human neutrophils. Total RNA from highly purified neutrophils was prepared and subjected to microarray analysis using the Agilent human miRNA microarray V3. We found that human neutrophils expressed a selected repertoire of 148 microRNAs and that 6 of these were significantly upregulated after a period of 4 hours in culture, at a time when the contribution of apoptosis is negligible. A list of predicted targets for these 6 microRNAs was generated from http://mirecords.biolead.org and compared to mRNA species downregulated over time, revealing 83 genes targeted by at least 2 of the 6 regulated microRNAs. Pathway analysis of genes containing binding sites for these microRNAs identified the following pathways: chemokine and cytokine signalling, the Ras pathway, and regulation of the actin cytoskeleton. Our data suggest that microRNAs may play a role in the regulation of neutrophil senescence and further suggest that manipulation of microRNAs might represent an area of future therapeutic interest for the treatment of inflammatory disease.
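The "targeted by at least 2 of the 6 microRNAs" filter is essentially a count over the intersection of predicted target sets and the downregulated mRNA list. A minimal sketch with hypothetical gene and microRNA names (not the study's actual lists):

```python
from collections import Counter

# Hypothetical predicted target sets (one per upregulated microRNA) and a
# hypothetical set of mRNAs downregulated over time in culture.
predicted_targets = {
    "miR-1": {"GENE1", "GENE2", "GENE3"},
    "miR-2": {"GENE2", "GENE4"},
    "miR-3": {"GENE2", "GENE3", "GENE5"},
}
downregulated = {"GENE2", "GENE3", "GENE6"}

# For each downregulated gene, count how many microRNAs are predicted to target it.
hits = Counter(
    gene
    for targets in predicted_targets.values()
    for gene in targets & downregulated
)

# Keep genes targeted by at least two of the microRNAs.
candidates = sorted(g for g, n in hits.items() if n >= 2)
print(candidates)  # -> ['GENE2', 'GENE3']
```

The study applied the same logic to 6 microRNA target lists from miRecords, yielding the 83 candidate genes passed on to pathway analysis.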
Estimating the number needed to treat from continuous outcomes in randomised controlled trials: methodological challenges and worked example using data from the UK Back Pain Exercise and Manipulation (BEAM) trial
Background
Reporting numbers needed to treat (NNT) improves the interpretability of trial results. Continuous outcomes are seldom converted to numbers of individual responders to treatment (i.e., those who reach a particular threshold of change), and deteriorations prevented are only rarely considered. We consider how numbers needed to treat can be derived from continuous outcomes, illustrated with a worked example showing the methods and challenges.
Methods
We used data from the UK BEAM trial (n = 1,334) of physical treatments for back pain, originally reported as showing, at best, small to moderate benefits. Participants were randomised to receive 'best care' in general practice, the comparator treatment, or one of three manual and/or exercise treatments: 'best care' plus manipulation, exercise, or manipulation followed by exercise. We used established consensus thresholds for improvement in Roland-Morris disability questionnaire scores at three and twelve months to derive NNTs for improvements and for benefits (improvements gained + deteriorations prevented).
Results
At three months, NNT estimates ranged from 5.1 (95% CI 3.4 to 10.7) to 9.0 (5.0 to 45.5) for exercise, 5.0 (3.4 to 9.8) to 5.4 (3.8 to 9.9) for manipulation, and 3.3 (2.5 to 4.9) to 4.8 (3.5 to 7.8) for manipulation followed by exercise. Corresponding between-group mean differences in the Roland-Morris disability questionnaire were 1.6 (0.8 to 2.3), 1.4 (0.6 to 2.1), and 1.9 (1.2 to 2.6) points.
Conclusion
In contrast to the small mean differences originally reported, NNTs were small and could be attractive to clinicians, patients, and purchasers. NNTs can aid the interpretation of results of trials using continuous outcomes. Where possible, they should be reported alongside mean differences. Challenges remain in calculating NNTs for some continuous outcomes.
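The basic conversion behind these figures can be sketched as follows: dichotomise each arm at the consensus improvement threshold, then take the reciprocal of the difference in responder proportions. The proportions below are illustrative, not the BEAM trial's values:

```python
def nnt_from_responders(p_treatment, p_control):
    """Number needed to treat from responder proportions.

    NNT = 1 / ARR, where ARR (absolute risk reduction) is the difference
    in the proportion of participants crossing the improvement threshold.
    """
    arr = p_treatment - p_control
    if arr <= 0:
        raise ValueError("no benefit: NNT is undefined (or NNH applies)")
    return 1.0 / arr

# Illustrative proportions (not BEAM values): 50% of the treatment arm
# and 30% of the comparator arm improve past the threshold.
print(round(nnt_from_responders(0.50, 0.30), 1))  # -> 5.0
```

Confidence intervals for the NNT follow from inverting the confidence limits of the ARR, which is where the wide intervals reported above (e.g. 5.0 to 45.5) come from: an ARR near zero makes the reciprocal blow up.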