
    Radial Velocities as an Exoplanet Discovery Method

    The precise radial velocity technique is a cornerstone of exoplanetary astronomy. Astronomers measure Doppler shifts in a star's spectral features, which track the line-of-sight gravitational accelerations imparted on the star by the planets orbiting it. The method has its roots in binary star astronomy, and exoplanet detection represents the low-companion-mass limit of that application. This limit requires control of several effects of much greater magnitude than the signal sought: the motion of the telescope must be subtracted, the instrument must be calibrated, and spurious Doppler shifts ("jitter") must be mitigated or corrected. The two primary forms of instrumental calibration are the stable spectrograph and absorption cell methods, the former being the path taken for the next generation of spectrographs. Spurious, apparent Doppler shifts due to non-center-of-mass motion (jitter) can be the result of stellar magnetic activity or of photospheric motions and granulation. Several avoidance, mitigation, and correction strategies exist, including careful analysis of line shapes and of the wavelength dependence of the radial velocities. Comment: Invited review chapter. 13pp. v2 includes corrections to Eqs 3-6, updated references, and minor edits
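The precision demanded by this technique can be illustrated with the non-relativistic Doppler relation Δλ = λ₀ v/c. A minimal Python sketch with illustrative values not taken from the text (a Sun-like star with a Jupiter-like companion inducing a reflex velocity of roughly 12.5 m/s):

```python
# Non-relativistic Doppler shift of a spectral line; values are illustrative.
C = 299_792_458.0  # speed of light, m/s

def doppler_shift(rest_wavelength_nm, radial_velocity_ms):
    """Observed wavelength shift dlambda = lambda0 * v / c, valid for v << c."""
    return rest_wavelength_nm * radial_velocity_ms / C

# A Jupiter-like companion moves a Sun-like star at ~12.5 m/s; near 550 nm
# the corresponding line shift is only ~2e-5 nm.
shift = doppler_shift(550.0, 12.5)
print(f"{shift:.2e} nm")
```

A shift of order 10⁻⁵ nm is far smaller than a typical line width, which is why instrumental calibration and jitter control dominate the error budget.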

    Ageing, Muscle Power and Physical Function: A Systematic Review and Implications for Pragmatic Training Interventions.

    BACKGROUND: The physiological impairments most strongly associated with functional performance in older people are logically the most efficient therapeutic targets for exercise training interventions aimed at improving function and maintaining independence in later life. OBJECTIVES: The objectives of this review were to (1) systematically review the relationship between muscle power and functional performance in older people; (2) systematically review the effect of power training (PT) interventions on functional performance in older people; and (3) identify components of successful PT interventions relevant to pragmatic trials by scoping the literature. METHODS: Our approach involved three stages. First, we systematically reviewed evidence on the relationship between muscle power, muscle strength and functional performance and, second, we systematically reviewed PT intervention studies that included both muscle power and at least one index of functional performance as outcome measures. Finally, taking a strong pragmatic perspective, we conducted a scoping review of the PT evidence to identify the components of training interventions needed to provide a minimally effective training dose that improves physical function. RESULTS: Evidence from 44 studies revealed a positive association between muscle power and indices of physical function, and that muscle power is marginally superior to muscle strength as a predictor of functional performance. Nine studies revealed maximal angular velocity of movement, an important component of muscle power, to be positively associated with functional performance and a better predictor of it than muscle strength. We identified 31 PT studies, characterised by small sample sizes and incomplete reporting of interventions, with fewer than one in five studies judged to have a low risk of bias. 
Thirteen studies compared traditional resistance training with PT, with ten studies reporting the superiority of PT for either muscle power or functional performance. Further studies demonstrated the efficacy of various methods of resistance and functional task PT on muscle power and functional performance, including low-load PT and low-volume interventions. CONCLUSIONS: Maximal intended movement velocity, low training load, simple training methods, low-volume training and low-frequency training were identified as components offering potential for the development of a pragmatic intervention. Additionally, the research area is dominated by short-term interventions producing short-term gains, with little consideration of the long-term maintenance of functional performance. We believe the area would benefit from larger and higher-quality studies and from consideration of optimal long-term strategies to develop and maintain muscle power and physical function over years rather than weeks.

    Impact of Emerging Antiviral Drug Resistance on Influenza Containment and Spread: Influence of Subclinical Infection and Strategic Use of a Stockpile Containing One or Two Drugs

    BACKGROUND: Wide-scale use of antiviral agents in the event of an influenza pandemic is likely to promote the emergence of drug resistance, with potentially deleterious effects for outbreak control. We explored factors promoting resistance within a dynamic infection model, and considered ways in which one or two drugs might be distributed to delay the spread of resistant strains or mitigate their impact. METHODS AND FINDINGS: We have previously developed a deterministic model of influenza transmission that simulates treatment and targeted contact prophylaxis using a limited stockpile of antiviral agents. This model was extended to incorporate subclinical infections and the emergence of resistant virus strains under the selective pressure imposed by various uses of one or two antiviral agents. For a fixed clinical attack rate, the basic reproduction number R(0) rises with the proportion of subclinical infections, thus reducing the number of infections amenable to treatment or prophylaxis. In consequence, outbreak control is more difficult, but emergence of drug resistance is relatively uncommon. Where an epidemic may be constrained by use of a single antiviral agent, strategies that combine treatment and prophylaxis are most effective at controlling transmission, at the cost of facilitating the spread of resistant viruses. If two drugs are available, using one drug for treatment and the other for prophylaxis is more effective at preventing propagation of mutant strains than either random allocation or drug cycling strategies. Our model is relatively straightforward and of necessity makes a number of simplifying assumptions, but our results are consistent with the wider body of work in this area and extend the analysis of resistance emergence and optimal drug use within the constraints of a finite drug stockpile. 
CONCLUSIONS: Combined treatment and prophylaxis represents the optimal use of antiviral agents to control transmission, at the cost of drug resistance. Where two drugs are available, allocating different drugs to cases and contacts is likely to be most effective at constraining resistance emergence in a pandemic scenario.
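The dynamics described above can be caricatured with a two-strain SIR model in which treatment suppresses onward transmission of the drug-sensitive strain while occasionally seeding resistance. This is a deliberately minimal sketch with made-up parameters, not the paper's model:

```python
# Two-strain SIR sketch (forward Euler): treatment reduces transmission of the
# sensitive strain; a small fraction of treated infections emerge resistant.
# All parameters are illustrative, not taken from the paper.

def simulate(days=200, dt=0.1, beta=0.3, gamma=0.1,
             treated_frac=0.6, treat_effect=0.5, mut_prob=0.002):
    """Return final fractions (Is, Ir) of sensitive and resistant infections."""
    S, Is, Ir = 0.999, 0.001, 0.0  # susceptible, infected-sensitive, infected-resistant
    for _ in range(int(days / dt)):
        beta_s = beta * (1.0 - treated_frac * treat_effect)  # treatment dampens spread
        inc_s = beta_s * S * Is      # new sensitive infections per unit time
        inc_r = beta * S * Ir        # resistant infections spread untreated
        emerge = mut_prob * inc_s    # de novo resistance among treated cases
        S  += -(inc_s + inc_r) * dt
        Is += (inc_s - emerge - gamma * Is) * dt
        Ir += (inc_r + emerge - gamma * Ir) * dt
    return Is, Ir

Is, Ir = simulate()
print(Ir > 0.0)  # resistance propagates under treatment pressure
```

Even this toy version reproduces the qualitative point: treatment that constrains the sensitive strain opens an ecological niche for the resistant one.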

    Cell-to-Cell Stochastic Variation in Gene Expression Is a Complex Genetic Trait

    The genetic control of common traits is rarely deterministic, with many genes contributing only to the chance of developing a given phenotype. This incomplete penetrance is poorly understood and is usually attributed to interactions between genes or interactions between genes and environmental conditions. Because many traits such as cancer can emerge from rare events happening in one or very few cells, we speculate that there is an alternative and complementary possibility: some genotypes could facilitate these events by increasing stochastic cell-to-cell variation (or ‘noise’). As a first step towards investigating this possibility, we studied how natural genetic variation influences the level of noise in the expression of a single gene, using the yeast S. cerevisiae as a model system. Reproducible differences in noise were observed between divergent genetic backgrounds. We found that noise was highly heritable and under complex genetic control. Scanning the genome, we mapped three quantitative trait loci (QTL) of noise, one locus being explained by an increase in noise when transcriptional elongation was impaired. Our results suggest that the level of stochasticity in particular molecular regulations may differ between multicellular individuals depending on their genotypic background. The complex genetic architecture of noise buffering couples genetic to non-genetic robustness and provides a molecular basis for the probabilistic nature of complex traits.
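Expression noise in studies of this kind is commonly quantified as the squared coefficient of variation, CV² = σ²/μ², across single cells. A small sketch with illustrative numbers, not data from the study:

```python
# Standard noise metric for single-cell expression data: CV^2 = variance / mean^2.
# The two samples below are made up to show the contrast the metric captures.
import statistics

def expression_noise(levels):
    """Squared coefficient of variation of per-cell expression levels."""
    mu = statistics.fmean(levels)
    var = statistics.pvariance(levels)
    return var / (mu * mu)

low_noise = [100, 102, 98, 101, 99]    # tight cell-to-cell distribution
high_noise = [40, 160, 90, 180, 30]    # same mean, much wider spread
print(expression_noise(low_noise) < expression_noise(high_noise))  # True
```

Because CV² normalizes by the squared mean, it lets noise be compared between genetic backgrounds whose mean expression levels differ.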

    Aspiration–sclerotherapy Results in Effective Control of Liver Volume in Patients with Liver Cysts

    Purpose To study the extent to which aspiration–sclerotherapy reduces liver volume and whether this therapy results in relief of symptoms. Results Four patients, group I, with isolated large liver cysts, and 11 patients, group II, with polycystic livers, underwent aspiration–sclerotherapy. Average volume of aspirated cyst fluid was 1,044 ml (range 225–2,000 ml) in group I and 1,326 ml (range 40–4,200 ml) in group II. Mean liver volume before the procedure was 2,157 ml (range 1,706–2,841 ml) in group I and 4,086 ml (range 1,553–7,085 ml) in group II. This decreased after the procedure to 1,757 ml (range 1,479–2,187 ml) in group I. In group II there was a statistically significant decrease to 3,347 ml (range 1,249–6,930 ml, P = 0.008). Volume reduction was 17.1% (range −34.7% to −4.1%) and 19.2% (range −53.9% to +2.4%) in groups I and II, respectively. Clinical severity of all symptoms decreased, except for involuntary weight loss and pain in group II. Conclusion Aspiration–sclerotherapy is an effective means of achieving liver volume reduction and relief of symptoms.
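The volume-reduction arithmetic is straightforward; note that the 17.1% reported for group I is the mean of per-patient reductions, which need not equal the reduction computed from the group-mean volumes:

```python
# Percent volume reduction from before/after values, applied here to the
# group-I mean volumes quoted in the abstract.
def percent_reduction(before_ml, after_ml):
    """Reduction as a percentage of the pre-procedure volume."""
    return 100.0 * (before_ml - after_ml) / before_ml

# Group-I means: 2,157 ml -> 1,757 ml gives ~18.5%, close to but distinct
# from the 17.1% mean of per-patient reductions.
print(round(percent_reduction(2157, 1757), 1))
```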

    Cost Analysis of Various Low Pathogenic Avian Influenza Surveillance Systems in the Dutch Egg Layer Sector

    Background: As low pathogenic avian influenza viruses can mutate into highly pathogenic viruses, the Dutch poultry sector implemented a surveillance system for low pathogenic avian influenza (LPAI) based on blood samples. It has been suggested that egg yolk samples could be used instead of blood samples to survey egg layer farms. To support future decision making about AI surveillance, economic criteria are important. Therefore a cost analysis was performed on systems that use either blood or eggs as sampled material. Methodology/Principal Findings: The effectiveness of surveillance using egg or blood samples was evaluated using scenario tree models. An economic model was then developed that calculates the total costs for eight surveillance systems of equal effectiveness. The model considers costs for sampling, sample preparation, sample transport, testing, communication of test results, and the confirmation test on false positive results. The surveillance systems varied in sampled material (eggs or blood), sampling location (farm or packing station) and location of sample preparation (laboratory or packing station). A hypothetical system in which eggs are sampled at the packing station and samples are prepared in a laboratory had the lowest total costs (€273,393 a year). Compared to this, a hypothetical system in which eggs are sampled at the farm and samples prepared at a laboratory, and the currently implemented system in which blood is sampled at the farm and samples prepared at a laboratory, have 6% and 39% higher costs, respectively.
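The comparison reduces to simple arithmetic against the cheapest system. The 6% and 39% relative figures come from the abstract; everything else below is illustrative:

```python
# Total-cost comparison relative to the cheapest surveillance system
# (eggs sampled at the packing station, prepared in a laboratory).
CHEAPEST = 273_393  # lowest annual cost reported in the abstract (per year)

def total_cost(pct_above_cheapest):
    """Annual cost of a system quoted as a percentage above the cheapest."""
    return CHEAPEST * (1 + pct_above_cheapest / 100.0)

eggs_at_farm = total_cost(6)    # eggs sampled at the farm, prepared in a lab
blood_at_farm = total_cost(39)  # current system: blood sampled at the farm
print(round(eggs_at_farm), round(blood_at_farm))
```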

    f(R) theories

    Over the past decade, f(R) theories have been extensively studied as one of the simplest modifications to General Relativity. In this article we review various applications of f(R) theories to cosmology and gravity - such as inflation, dark energy, local gravity constraints, cosmological perturbations, and spherically symmetric solutions in weak and strong gravitational backgrounds. We present a number of ways to distinguish those theories from General Relativity observationally and experimentally. We also discuss the extension to other modified gravity theories such as Brans-Dicke theory and Gauss-Bonnet gravity, and address models that can satisfy both cosmological and local gravity constraints. Comment: 156 pages, 14 figures, Invited review article in Living Reviews in Relativity, Published version, Comments are welcome
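The starting point of these theories, standard in the literature, is the generalized Einstein-Hilbert action:

```latex
% f(R) gravity replaces the Ricci scalar R in the Einstein-Hilbert action by a
% general function f(R); the choice f(R) = R recovers General Relativity.
S = \frac{1}{2\kappa^2}\int \mathrm{d}^4x \, \sqrt{-g}\, f(R) + S_m\left(g_{\mu\nu}, \Psi_m\right)
```

Here κ² = 8πG and S_m is the matter action; different choices of f change the dynamics probed by the cosmological and local-gravity tests the review discusses.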

    A chemical survey of exoplanets with ARIEL

    Thousands of exoplanets have now been discovered with a huge range of masses, sizes and orbits: from rocky Earth-like planets to large gas giants grazing the surface of their host star. However, the essential nature of these exoplanets remains largely mysterious: there is no known, discernible pattern linking the presence, size, or orbital parameters of a planet to the nature of its parent star. We have little idea whether the chemistry of a planet is linked to its formation environment, or whether the type of host star drives the physics and chemistry of the planet’s birth and evolution. ARIEL was conceived to observe a large number (~1000) of transiting planets for statistical understanding, including gas giants, Neptunes, super-Earths and Earth-size planets around a range of host star types, using transit spectroscopy in the 1.25–7.8 μm spectral range and multiple narrow-band photometry in the optical. ARIEL will focus on warm and hot planets to take advantage of their well-mixed atmospheres, which should show minimal condensation and sequestration of high-Z materials compared to their colder Solar System siblings and are therefore expected to be more representative of the planetary bulk composition. Observations of these warm/hot exoplanets, and in particular of their elemental composition (especially C, O, N, S and Si), will allow us to understand the early stages of planetary and atmospheric formation during the nebular phase and the following few million years. ARIEL will thus provide a representative picture of the chemical nature of the exoplanets and relate this directly to the type and chemical environment of the host star. ARIEL is designed as a dedicated survey mission for combined-light spectroscopy, capable of observing a large and well-defined planet sample within its 4-year mission lifetime. 
Transit, eclipse and phase-curve spectroscopy methods, whereby the signals from the star and planet are separated using knowledge of the planetary ephemerides, allow us to measure atmospheric signals from the planet at levels of 10–100 parts per million (ppm) relative to the star and, given the bright nature of the targets, also allow more sophisticated techniques, such as eclipse mapping, to give a deeper insight into the nature of the atmosphere. These types of observations require a stable payload and satellite platform with broad, instantaneous wavelength coverage to detect many molecular species, probe the thermal structure, identify clouds and monitor the stellar activity. The proposed wavelength range covers all the expected major atmospheric gases, e.g. H2O, CO2, CH4, NH3, HCN and H2S, through to the more exotic metallic compounds, such as TiO and VO, and condensed species. Simulations of ARIEL performance in conducting exoplanet surveys have been performed – using conservative estimates of mission performance and a full model of all significant noise sources in the measurement – using a list of potential ARIEL targets that incorporates the latest available exoplanet statistics. The conclusion at the end of the Phase A study is that ARIEL – in line with the stated mission objectives – will be able to observe about 1000 exoplanets, depending on the details of the adopted survey strategy, thus confirming the feasibility of the main science objectives. Peer reviewed. Final published version.
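The 10–100 ppm scale quoted above follows from the approximate per-scale-height transit signal 2·Rp·H/Rs². A back-of-envelope Python sketch with assumed warm-Neptune-like values (none taken from the ARIEL target list):

```python
# Order-of-magnitude transit-spectroscopy signal: the atmospheric modulation
# per scale height is roughly 2 * Rp * H / Rs^2. All inputs are assumptions.
K_B = 1.380649e-23        # Boltzmann constant, J/K
AMU = 1.66053906660e-27   # atomic mass unit, kg

def scale_height_m(T_kelvin, mu_amu, g_ms2):
    """Atmospheric scale height H = k_B * T / (mu * g)."""
    return K_B * T_kelvin / (mu_amu * AMU * g_ms2)

def atmo_signal_ppm(Rp_m, Rs_m, H_m):
    """Per-scale-height transit depth modulation, in parts per million."""
    return 2.0 * Rp_m * H_m / Rs_m**2 * 1e6

Rp = 2.5e7   # planet radius, m (~4 Earth radii; assumed)
Rs = 6.96e8  # Sun-like stellar radius, m
H = scale_height_m(T_kelvin=900.0, mu_amu=2.3, g_ms2=10.0)  # H2-rich, warm
print(round(atmo_signal_ppm(Rp, Rs, H)))  # ~34 ppm for these assumed values
```

Hotter, puffier atmospheres raise H and hence the signal, which is part of why the mission favors warm and hot planets.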

    Performance of CMS muon reconstruction in pp collision events at sqrt(s) = 7 TeV

    The performance of muon reconstruction, identification, and triggering in CMS has been studied using 40 inverse picobarns of data collected in pp collisions at sqrt(s) = 7 TeV at the LHC in 2010. A few benchmark sets of selection criteria covering a wide range of physics analysis needs have been examined. For all considered selections, the efficiency to reconstruct and identify a muon with a transverse momentum pT larger than a few GeV is above 95% over the whole region of pseudorapidity covered by the CMS muon system, abs(eta) < 2.4, while the probability to misidentify a hadron as a muon is well below 1%. The efficiency to trigger on single muons with pT above a few GeV is higher than 90% over the full eta range, and typically substantially better. The overall momentum scale is measured to a precision of 0.2% with muons from Z decays. The transverse momentum resolution varies from 1% to 6% depending on pseudorapidity for muons with pT below 100 GeV and, using cosmic rays, it is shown to be better than 10% in the central region up to pT = 1 TeV. Observed distributions of all quantities are well reproduced by the Monte Carlo simulation. Comment: Replaced with published version. Added journal reference and DOI

    Measurement of the Forward-Backward Asymmetry in the B -> K(*) mu+ mu- Decay and First Observation of the Bs -> phi mu+ mu- Decay

    We reconstruct the rare decays $B^+ \to K^+\mu^+\mu^-$, $B^0 \to K^{*}(892)^0\mu^+\mu^-$, and $B^0_s \to \phi(1020)\mu^+\mu^-$ in a data sample corresponding to $4.4~\mathrm{fb}^{-1}$ collected in $p\bar{p}$ collisions at $\sqrt{s} = 1.96~\mathrm{TeV}$ by the CDF II detector at the Fermilab Tevatron Collider. Using $121 \pm 16$ $B^+ \to K^+\mu^+\mu^-$ and $101 \pm 12$ $B^0 \to K^{*0}\mu^+\mu^-$ decays we report the branching ratios. In addition, we report the measurement of the differential branching ratio and the muon forward-backward asymmetry in the $B^+$ and $B^0$ decay modes, and the $K^{*0}$ longitudinal polarization in the $B^0$ decay mode, with respect to the squared dimuon mass. These are consistent with the theoretical predictions from the standard model and with the most recent determinations from other experiments, and of comparable accuracy. We also report the first observation of the $B^0_s \to \phi\mu^+\mu^-$ decay and measure its branching ratio $\mathcal{B}(B^0_s \to \phi\mu^+\mu^-) = [1.44 \pm 0.33 \pm 0.46] \times 10^{-6}$ using $27 \pm 6$ signal events. This is currently the rarest $B^0_s$ decay observed. Comment: 7 pages, 2 figures, 3 tables. Submitted to Phys. Rev. Lett.
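At the counting level, the forward-backward asymmetry measured here is the normalized difference between forward- and backward-going muons in each bin of squared dimuon mass. A schematic estimator with made-up counts, ignoring the background subtraction and acceptance corrections a real analysis must apply:

```python
# Schematic forward-backward asymmetry estimator:
# A_FB = (N_forward - N_backward) / (N_forward + N_backward),
# computed per bin of squared dimuon mass. Counts below are illustrative.
def forward_backward_asymmetry(n_forward, n_backward):
    """Raw counting asymmetry; returns 0.0 for an empty bin."""
    total = n_forward + n_backward
    return (n_forward - n_backward) / total if total else 0.0

print(forward_backward_asymmetry(70, 51))  # positive: forward excess
```

The sign and magnitude of A_FB as a function of dimuon mass squared is what is compared against the standard-model prediction.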