
    A dusty pinwheel nebula around the massive star WR 104

    Wolf-Rayet (WR) stars are luminous massive blue stars thought to be immediate precursors to the supernovae that terminate their brief lives. The existence of dust shells around such stars has been enigmatic since their discovery some 30 years ago; the intense radiation field from the star should be inimical to dust survival. Although dust-creation models, including those involving interacting stellar winds from a companion star, have been put forward, high-resolution observations are required to understand this phenomenon. Here we present resolved images of the dust outflow around the Wolf-Rayet star WR 104, obtained with novel imaging techniques, revealing detail on scales corresponding to about 40 AU at the star. Our maps show that the dust forms a spatially confined stream following precisely a linear (or Archimedean) spiral trajectory. Images taken at two separate epochs show a clear rotation with a period of 220 +/- 30 days. Taken together, these findings prove that a binary star is responsible for the creation of the circumstellar dust, while the spiral plume makes WR 104 the prototype of a new class of circumstellar nebulae unique to interacting wind systems. Comment: 7 pages, 2 figures, appearing in Nature (1999 April 08).
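    The spiral geometry described above follows directly from dust condensing near a binary whose orbital phase advances while the wind carries material radially outward at roughly constant speed. As a sketch of that relation (the wind speed is not quoted in the abstract and appears here only symbolically):

```latex
% Archimedean spiral traced by dust launched from a binary of orbital period P
% and carried radially outward at a constant wind speed v_w (v_w is not given
% in the abstract; it is used here only symbolically).
\[
  r(\phi) \;=\; \frac{v_w\,P}{2\pi}\,\phi ,
  \qquad P = 220 \pm 30~\text{days},
\]
% so successive turns of the spiral are separated by the constant step v_w P,
% and two epochs of imaging reveal the pattern rotating with the orbital period.
```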

    Direct high-precision measurement of the magnetic moment of the proton

    The spin magnetic moment of the proton, $\mu_p$, is a fundamental property of this particle. So far $\mu_p$ has only been measured indirectly, by analysing the spectrum of an atomic hydrogen maser in a magnetic field. Here, we report the direct high-precision measurement of the magnetic moment of a single proton using the double Penning-trap technique. We drive proton-spin quantum jumps by a magnetic radio-frequency field in a Penning trap with a homogeneous magnetic field. The induced spin transitions are detected in a second trap with a strong superimposed magnetic inhomogeneity. This enables the measurement of the spin-flip probability as a function of the drive frequency. In each measurement the proton's cyclotron frequency is used to determine the magnetic field of the trap. From the normalized resonance curve, we extract the particle's magnetic moment in units of the nuclear magneton, $\mu_p = 2.792847350(9)\,\mu_N$. This measurement outperforms previous Penning-trap measurements in terms of precision by a factor of about 760. It improves the precision of the forty-year-old indirect measurement, in which significant theoretical bound-state corrections were required to obtain $\mu_p$, by a factor of 3. By application of this method to the antiproton magnetic moment $\mu_{\bar{p}}$, the fractional precision of the recently reported value can be improved by a factor of at least 1000. Combined with the present result, this will provide a stringent test of matter/antimatter symmetry with baryons. Comment: published in Nature.
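    The extraction of $\mu_p$ from the measured resonance rests on a frequency ratio in which the magnetic field cancels; this is standard Penning-trap metrology rather than a detail specific to the abstract:

```latex
% Larmor (spin-precession) and cyclotron frequencies of a proton in the same
% field B share the factor qB / (2 pi m_p):
%   nu_L = (g_p / 2) * (q B / 2 pi m_p),    nu_c = q B / 2 pi m_p,
% so the field drops out of the ratio and
\[
  \frac{\mu_p}{\mu_N} \;=\; \frac{g_p}{2} \;=\; \frac{\nu_L}{\nu_c},
\]
% which is why the cyclotron frequency measured in each cycle calibrates the
% magnetic field seen by the spin-flip drive.
```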

    Incorporating prior knowledge improves detection of differences in bacterial growth rate

    BACKGROUND: Robust statistical detection of differences in bacterial growth rate can be challenging, particularly when dealing with small differences or noisy data. The Bayesian approach provides a consistent framework for inferring model parameters and comparing hypotheses. The method captures the full uncertainty of parameter values, whilst making effective use of prior knowledge about a given system to improve estimation. RESULTS: We demonstrated the application of Bayesian analysis to bacterial growth curve comparison. Following extensive testing of the method, the analysis was applied to the large dataset of bacterial responses freely available at the web resource ComBase. Detection was found to be improved by using prior knowledge from clusters of previously analysed experimental results obtained under similar environmental conditions. A comparison was also made with a more traditional statistical testing method, the F-test, and the Bayesian analysis was found to perform more conclusively and to be capable of attributing significance to more subtle differences in growth rate. CONCLUSIONS: We have demonstrated that, by making use of existing experimental knowledge, it is possible to significantly improve detection of differences in bacterial growth rate.
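    A minimal sketch of the kind of Bayesian growth-rate comparison described above, assuming exponential-phase growth (so log density is linear in time), a known measurement noise level, and a Gaussian prior on the growth rate standing in for the prior knowledge drawn from previously analysed experiments. The function and variable names are illustrative, not taken from the paper.

```python
# Sketch: Bayesian comparison of two exponential-phase growth rates with a
# Gaussian prior playing the role of prior knowledge. All names illustrative.
import numpy as np
from scipy.stats import norm

def posterior_rate(t, log_density, sigma, prior_mean, prior_sd):
    """Gaussian posterior for the growth rate (slope of log density vs time)."""
    t = np.asarray(t, float)
    y = np.asarray(log_density, float)
    tc, yc = t - t.mean(), y - y.mean()
    slope_hat = np.sum(tc * yc) / np.sum(tc**2)      # least-squares rate estimate
    slope_var = sigma**2 / np.sum(tc**2)             # its sampling variance
    post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / slope_var)
    post_mean = post_var * (prior_mean / prior_sd**2 + slope_hat / slope_var)
    return post_mean, np.sqrt(post_var)

# Two synthetic growth curves with slightly different rates
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 20)
curve_a = 0.30 * t + rng.normal(0, 0.05, t.size)
curve_b = 0.35 * t + rng.normal(0, 0.05, t.size)

ma, sa = posterior_rate(t, curve_a, 0.05, prior_mean=0.3, prior_sd=0.1)
mb, sb = posterior_rate(t, curve_b, 0.05, prior_mean=0.3, prior_sd=0.1)

# Posterior probability that curve B grows faster than curve A
diff_mean, diff_sd = mb - ma, np.hypot(sa, sb)
print(f"P(rate_B > rate_A) = {norm.sf(0.0, loc=diff_mean, scale=diff_sd):.3f}")
```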

    A comparative analysis of predictive models of morbidity in intensive care unit after cardiac surgery – Part II: an illustrative example

    Background: Popular predictive models for estimating morbidity probability after heart surgery are compared critically in a unitary framework. The study is divided into two parts. In the first part, modelling techniques and the intrinsic strengths and weaknesses of the different approaches were discussed from a theoretical point of view. In this second part, the performances of the same models are evaluated in an illustrative example. Methods: Eight models were developed: Bayes linear and quadratic models, k-nearest neighbour model, logistic regression model, Higgins and direct scoring systems, and two feed-forward artificial neural networks with one and two layers. Cardiovascular, respiratory, neurological, renal, infectious and hemorrhagic complications were defined as morbidity. Training and testing sets of 545 cases each were used. The optimal set of predictors was chosen from a collection of 78 preoperative, intraoperative and postoperative variables by a stepwise procedure. Discrimination and calibration were evaluated by the area under the receiver operating characteristic curve and the Hosmer-Lemeshow goodness-of-fit test, respectively. Results: Scoring systems and the logistic regression model required the largest set of predictors, while the Bayesian and k-nearest neighbour models were much more parsimonious. On the testing data, all models showed acceptable discrimination capacities; however, the Bayes quadratic model, using only three predictors, provided the best performance. All models showed satisfactory generalization ability: again the Bayes quadratic model exhibited the best generalization, while artificial neural networks and scoring systems gave the worst results. Finally, poor calibration was obtained when using scoring systems, the k-nearest neighbour model and artificial neural networks, while the Bayes (after recalibration) and logistic regression models gave adequate results. Conclusion: Although all the predictive models showed acceptable discrimination performance in the example considered, the Bayes and logistic regression models seemed better than the others, because they also had good generalization and calibration. The Bayes quadratic model seemed to be a convincing alternative to the much more usual Bayes linear and logistic regression models. It showed its capacity to identify a minimum core of predictors generally recognized as essential to pragmatically evaluate the risk of developing morbidity after heart surgery.
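    A sketch of the two evaluation measures used above: discrimination via the area under the ROC curve and calibration via the Hosmer-Lemeshow statistic. The synthetic predictions and outcomes stand in for the 545-case testing set; nothing here reproduces the paper's data or models.

```python
# Sketch: AUC (rank-sum identity) and Hosmer-Lemeshow calibration test on
# synthetic morbidity risks. All data and names are illustrative.
import numpy as np
from scipy.stats import rankdata, chi2

def auc(y_true, p_pred):
    """Area under the ROC curve via the Mann-Whitney rank-sum identity."""
    ranks = rankdata(p_pred)
    n_pos = y_true.sum()
    n_neg = y_true.size - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def hosmer_lemeshow(y_true, p_pred, groups=10):
    """Hosmer-Lemeshow chi-square over risk deciles (df = groups - 2 is the
    textbook convention for development data)."""
    order = np.argsort(p_pred)
    chi_sq = 0.0
    for idx in np.array_split(order, groups):
        obs = y_true[idx].sum()            # observed events in the decile
        exp = p_pred[idx].sum()            # expected events in the decile
        n = idx.size
        chi_sq += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
    return chi_sq, chi2.sf(chi_sq, df=groups - 2)

rng = np.random.default_rng(0)
p_pred = rng.uniform(0.01, 0.9, 545)                      # hypothetical risks
y_true = (rng.uniform(size=545) < p_pred).astype(int)     # well-calibrated outcomes
print("AUC:", round(auc(y_true, p_pred), 3))
print("Hosmer-Lemeshow chi2, p:", hosmer_lemeshow(y_true, p_pred))
```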

    A dusty torus around the luminous young star LkHa 101

    A star forms when a cloud of dust and gas collapses. It is generally believed that this collapse first produces a flattened rotating disk, through which matter is fed onto the embryonic star at the center of the disk. When the temperature and density at the center of the star pass a critical threshold, thermonuclear fusion begins. The remaining disk, which can still contain up to 0.3 times the mass of the star, is then sculpted and eventually dissipated by the radiation and wind from the newborn star. Unfortunately, this picture of the structure and evolution of the disk remains speculative because of the lack of morphological data of sufficient resolution and uncertainties regarding the underlying physical processes. Here we present resolved images of a young star, LkHa 101, in which the structure of the inner accretion disk is resolved. We find that the disk is almost face-on, with a central gap (or cavity) and a hot inner edge. The cavity is bigger than previous theoretical predictions, and we infer that the position of the inner edge is probably determined by sublimation of dust grains by direct stellar radiation, rather than by disk reprocessing or viscous heating, as usually assumed. Comment: 7 pages, 1 figure. Appears in Nature, 22 Feb 2001 (Vol 409).
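    The inference that the inner edge sits where direct stellar radiation sublimates the dust can be made quantitative with the usual radiative-equilibrium estimate. The expression below is the standard textbook form for grey, optically thin dust, not necessarily the exact prescription used in the paper:

```latex
% Radius at which grey dust in radiative equilibrium with the direct stellar
% field reaches its sublimation temperature T_sub (standard estimate only):
% grain heating  (pi a^2) L_* / (4 pi R^2)  balances cooling  4 pi a^2 sigma T^4,
\[
  R_\mathrm{sub} \;\simeq\; \sqrt{\frac{L_*}{16\pi\,\sigma\,T_\mathrm{sub}^{4}}},
\]
% so a more luminous star, or more fragile dust (lower T_sub), pushes the hot
% inner edge of the disk farther from the star.
```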

    Physics, Astrophysics and Cosmology with Gravitational Waves

    Gravitational wave detectors are already operating at interesting sensitivity levels, and they have an upgrade path that should result in secure detections by 2014. We review the physics of gravitational waves, how they interact with detectors (bars and interferometers), and how these detectors operate. We study the most likely sources of gravitational waves and review the data analysis methods that are used to extract their signals from detector noise. Then we consider the consequences of gravitational wave detections and observations for physics, astrophysics, and cosmology. Comment: 137 pages, 16 figures. Published version: <http://www.livingreviews.org/lrr-2009-2>
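    The data-analysis methods mentioned above centre on matched filtering, the standard technique for extracting a modelled waveform from noisy detector data. The sketch below assumes white Gaussian noise and a known template; the waveform and all names are illustrative, not taken from the review.

```python
# Sketch of matched filtering against white Gaussian noise: slide a known
# template over the data and normalise the correlation to an SNR.
import numpy as np

def matched_filter_snr(data, template, noise_sigma):
    """SNR time series from sliding a known template over the data."""
    corr = np.correlate(data, template, mode="valid")
    norm = noise_sigma * np.sqrt(np.sum(template**2))
    return corr / norm

rng = np.random.default_rng(2)
fs = 1024.0                                     # sample rate in Hz (illustrative)
t = np.arange(0, 0.25, 1.0 / fs)
template = np.sin(2 * np.pi * 150 * t) * np.exp(-((t - 0.125) / 0.03) ** 2)

noise_sigma = 1.0
data = rng.normal(0.0, noise_sigma, 4096)
inject_at = 2000
data[inject_at:inject_at + template.size] += 2.0 * template   # buried signal

snr = matched_filter_snr(data, template, noise_sigma)
print("peak SNR", snr.max().round(1), "at sample", snr.argmax())  # near sample 2000
```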

    Resolving fluorescent species by their brightness and diffusion using correlated photon-counting histograms

    Fluorescence fluctuation spectroscopy (FFS) refers to techniques that analyze fluctuations in the fluorescence emitted by fluorophores diffusing in a small volume and can be used to distinguish between populations of molecules that exhibit differences in brightness or diffusion. For example, fluorescence correlation spectroscopy (FCS) resolves species through their diffusion by analyzing correlations in the fluorescence over time; photon counting histograms (PCH) and related methods based on moment analysis resolve species through their brightness by analyzing fluctuations in the photon counts. Here we introduce correlated photon counting histograms (cPCH), which uses both types of information to simultaneously resolve fluorescent species by their brightness and diffusion. We define the cPCH distribution by the probability of detecting both a particular number of photons at the current time and another number at a later time. FCS and moment analysis are special cases of the moments of the cPCH distribution, and PCH is obtained by summing over the photon counts in either channel. cPCH is inherently a dual-channel technique, and the expressions we develop apply to the dual-colour case. Using simulations, we demonstrate that two species differing in both their diffusion and brightness can be better resolved with cPCH than with either FCS or PCH. Further, we show that cPCH can be extended both to longer dwell times, to improve the signal-to-noise ratio, and to the analysis of images. By better exploiting the information available in fluorescence fluctuation spectroscopy, cPCH will be an enabling methodology for quantitative biology.
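    A sketch of the core object defined above: the joint histogram of photon counts in a bin at time t and a bin at time t + tau. The fluctuating intensity below is a toy two-state (bright/dim) model rather than the diffusion-through-a-Gaussian-volume model the method is built for; all names are illustrative.

```python
# Sketch: build cPCH(k1, k2; tau) = P(n[t] = k1, n[t + tau] = k2) from a binned
# photon-count trace. Toy intensity model; all names illustrative.
import numpy as np

rng = np.random.default_rng(3)
n_bins = 200_000
switch = rng.uniform(size=n_bins) < 0.01          # occasional state switches
state = np.cumsum(switch) % 2                     # slowly alternating bright/dim state
mean_counts = np.where(state == 1, 4.0, 0.5)      # bright vs dim emission rate per bin
counts = rng.poisson(mean_counts)                 # binned photon counts

def cpch(counts, lag, max_k=10):
    """Joint histogram P(n[t] = k1, n[t + lag] = k2), normalised to 1."""
    a, b = counts[:-lag], counts[lag:]
    hist = np.zeros((max_k + 1, max_k + 1))
    np.add.at(hist, (np.clip(a, 0, max_k), np.clip(b, 0, max_k)), 1)
    return hist / hist.sum()

joint = cpch(counts, lag=1)
# Summing over one axis recovers the ordinary PCH; moments of the joint
# distribution carry the time-correlation information used by FCS.
pch = joint.sum(axis=1)
print("P(k) for k = 0..4:", np.round(pch[:5], 4))
```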

    On computational approaches for size-and-shape distributions from sedimentation velocity analytical ultracentrifugation

    Sedimentation velocity analytical ultracentrifugation has become a very popular technique for studying size distributions and interactions of macromolecules. Recently, a method termed two-dimensional spectrum analysis (2DSA) for the determination of size-and-shape distributions was described by Demeler and colleagues (Eur Biophys J 2009). It is based on novel ideas conceived for fitting the integral equations of the size-and-shape distribution to experimental data, illustrated with an example but provided without a proof of principle of the algorithm. In the present work, we examine the 2DSA algorithm by comparison with the mathematical reference frame and with simple, well-known numerical concepts for solving Fredholm integral equations, and we test the key assumptions underlying the 2DSA method in an example application. The 2DSA algorithm appears computationally wasteful to an excessive degree, and key elements appear to be in conflict with established mathematical results. This raises doubts about the correctness of results from 2DSA analysis.
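    The "well-known numerical concepts" invoked as the reference frame amount to discretising the first-kind Fredholm equation on a grid and solving for a non-negative, regularised distribution. The sketch below uses a toy Gaussian kernel as a stand-in for the Lamm-equation solutions actually used in sedimentation analysis; grid sizes and names are illustrative.

```python
# Sketch: discretise  a(x) = \int K(x, s) c(s) ds  on a grid of s values and
# solve for a non-negative, Tikhonov-regularised distribution c(s).
# Toy Gaussian kernel; not the Lamm-equation kernel of real SV analysis.
import numpy as np
from scipy.optimize import nnls

x = np.linspace(0.0, 1.0, 120)             # "data" coordinate (e.g. radius/time)
s_grid = np.linspace(1.0, 10.0, 60)        # grid of sedimentation coefficients

def kernel(x, s):
    """Toy response of a species with parameter s."""
    return np.exp(-((x[:, None] - s[None, :] / 10.0) ** 2) / 0.005)

K = kernel(x, s_grid)
c_true = np.exp(-((s_grid - 4.0) ** 2) / 0.5) + 0.6 * np.exp(-((s_grid - 7.0) ** 2) / 0.3)
data = K @ c_true + np.random.default_rng(4).normal(0.0, 0.01, x.size)

# Tikhonov regularisation by stacking lambda * I under K and zeros under the
# data, then a non-negative least-squares solve (the standard textbook recipe).
lam = 0.05
K_aug = np.vstack([K, lam * np.eye(s_grid.size)])
d_aug = np.concatenate([data, np.zeros(s_grid.size)])
c_est, _ = nnls(K_aug, d_aug)
print("largest recovered component near s =", round(s_grid[np.argmax(c_est)], 2))
```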

    A comparative analysis of predictive models of morbidity in intensive care unit after cardiac surgery – Part I: model planning

    Background: Different methods have recently been proposed for predicting morbidity in intensive care units (ICU). The aim of the present study was to critically review a number of approaches for developing models capable of estimating the probability of morbidity in the ICU after heart surgery. The study is divided into two parts. In this first part, popular models used to estimate the probability of class membership are grouped into distinct categories according to their underlying mathematical principles. Modelling techniques and the intrinsic strengths and weaknesses of each model are analysed and discussed from a theoretical point of view, in consideration of clinical applications. Methods: Models based on Bayes rule, the k-nearest neighbour algorithm, logistic regression, scoring systems and artificial neural networks are investigated. Key issues for model design are described. The mathematical treatment of some aspects of model structure is also included for readers interested in developing models, though a full understanding of the mathematical relationships is not necessary if the reader is only interested in the practical meaning of model assumptions, weaknesses and strengths from a user's point of view. Results: Scoring systems are very attractive due to their simplicity of use, although this may undermine their predictive capacity. Logistic regression models are trustworthy tools, although they suffer from the principal limitations of most regression procedures. Bayesian models seem to be a good compromise between complexity and predictive performance, but model recalibration is generally necessary. The k-nearest neighbour algorithm may be a valid non-parametric technique, though computational cost and the need for large data storage are major weaknesses of this approach. Artificial neural networks have intrinsic advantages with respect to common statistical models, though the training process may be problematic. Conclusion: Knowledge of model assumptions and of the theoretical strengths and weaknesses of the different approaches is fundamental for designing models for estimating the probability of morbidity after heart surgery. However, a rational choice also requires evaluation and comparison of the actual performance of locally developed competing models in the clinical scenario, to obtain satisfactory agreement between local needs and model response. In the second part of this study, the above predictive models will therefore be tested on real data acquired in a specialized ICU.
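    A minimal sketch of one of the approaches surveyed above: a logistic regression model mapping patient variables to a morbidity probability. The predictors and data are synthetic, and the variable names are invented for illustration; they are not the predictors selected in the paper.

```python
# Sketch: logistic regression estimating a post-operative morbidity probability
# from synthetic data. Predictor names are hypothetical, not the paper's.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 545                                        # same size as one of the paper's sets
age = rng.normal(65, 10, n)
bypass_minutes = rng.normal(95, 25, n)
creatinine = rng.normal(1.1, 0.3, n)
X = np.column_stack([age, bypass_minutes, creatinine])

# Synthetic "true" risk used only to label the training outcomes
logit = -14 + 0.08 * age + 0.05 * bypass_minutes + 1.5 * creatinine
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
new_patient = np.array([[72, 130, 1.4]])       # hypothetical high-risk case
print("estimated morbidity probability:",
      model.predict_proba(new_patient)[0, 1].round(3))
```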