
    Stroboscopic quantum optomechanics

    We consider an optomechanical cavity that is driven stroboscopically by a train of short pulses. By suitably choosing the inter-pulse spacing we show that ground-state cooling and mechanical squeezing can be achieved, even in the presence of mechanical dissipation and for moderate radiation-pressure interaction. We provide a full quantum-mechanical treatment of stroboscopic backaction-evading measurements, for which we give a simple analytic insight, and discuss preparation and verification of squeezed mechanical states. We further consider stroboscopic driving of a pair of non-interacting mechanical resonators coupled to a common cavity field, and show that they can be simultaneously cooled and entangled. Stroboscopic quantum optomechanics extends measurement-based quantum control of mechanical systems beyond the good-cavity limit.
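
    The backaction evasion behind such schemes can be made concrete with the textbook stroboscopic argument, sketched below in minimal form. The half-period pulse spacing is the standard backaction-evading choice for a free mechanical oscillator; that it coincides exactly with the pulse-train protocol of this paper is an assumption here.

```latex
% Minimal sketch of backaction evasion for a free oscillator
% (mass m, frequency \omega_m; pulse spacing \tau = \pi/\omega_m assumed):
\begin{align}
  \hat{x}(t)   &= \hat{x}(0)\cos(\omega_m t)
                + \frac{\hat{p}(0)}{m\omega_m}\sin(\omega_m t), \\
  \hat{x}(t_n) &= (-1)^n\,\hat{x}(0)
                \quad\text{at the stroboscopic times } t_n = n\pi/\omega_m, \\
  [\hat{x}(t_n),\hat{x}(t_m)] &= 0 \quad \forall\, n,m,
\end{align}
% so the momentum kick imparted by each pulsed measurement never rotates
% into the measured observable between pulses: a backaction-evading
% (QND) measurement sequence.
```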

    Zearalenone production and growth in drinking water inoculated with Fusarium graminearum

    Production of the mycotoxin zearalenone (ZEN) was examined in drinking water inoculated with Fusarium graminearum. The strain employed was isolated from a US water distribution system. ZEN was purified with an immunoaffinity column and quantified by high-performance liquid chromatography (HPLC) with fluorescence detection. The extracellular yield of ZEN was 15.0 ng l⁻¹. Visual growth was observed. Ergosterol was also indicative of growth, with an average of 6.2 μg l⁻¹ obtained. Other compounds were detected but remain unidentified. No equivalent information is currently available for this system. More work is required on metabolite expression in water, as mycotoxins have consequences for human and animal health. Although the levels detected in this study were low, water should be recognized as a potential source of mycotoxin exposure, given the high standards of purity demanded of drinking water.

    The Photometric LSST Astronomical Time-series Classification Challenge PLAsTiCC: Selection of a Performance Metric for Classification Probabilities Balancing Diverse Science Goals

    Classification of transient and variable light curves is an essential step in using astronomical observations to develop an understanding of the underlying physical processes from which they arise. However, upcoming deep photometric surveys, including the Large Synoptic Survey Telescope (LSST), will produce a deluge of low signal-to-noise data for which traditional type estimation procedures are inappropriate. Probabilistic classification is more appropriate for such data but is incompatible with the traditional metrics used on deterministic classifications. Furthermore, large survey collaborations like LSST intend to use the resulting classification probabilities for diverse science objectives, indicating a need for a metric that balances a variety of goals. We describe the process used to develop an optimal performance metric for an open classification challenge that seeks to identify probabilistic classifiers that can serve many scientific interests. The Photometric LSST Astronomical Time-series Classification Challenge (PLAsTiCC) aims to identify promising techniques for obtaining classification probabilities of transient and variable objects by engaging a broader community beyond astronomy. Using mock classification probability submissions emulating realistically complex archetypes of those anticipated for PLAsTiCC, we compare the sensitivity of two metrics of classification probabilities under various weighting schemes, finding that both yield results that are qualitatively consistent with intuitive notions of classification performance. We thus choose as a metric for PLAsTiCC a weighted modification of the cross-entropy, because it can be meaningfully interpreted in terms of information content. Finally, we propose extensions of our methodology to ever more complex challenge goals and suggest some guiding principles for approaching the choice of a metric of probabilistic data products.
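
    For illustration, a minimal sketch of a weighted cross-entropy (log-loss) metric of this kind follows. The per-class weights and the normalization shown are assumptions for demonstration, not the official PLAsTiCC values, and `weighted_log_loss` is a hypothetical helper rather than challenge code.

```python
import numpy as np

def weighted_log_loss(y_true, y_prob, weights, eps=1e-15):
    """Weighted multi-class cross-entropy: a sketch of a PLAsTiCC-style
    metric (illustrative weights and normalization, not the official ones).

    y_true  : (N,) integer class labels
    y_prob  : (N, M) predicted class probabilities, rows summing to 1
    weights : (M,) per-class weights
    """
    y_prob = np.clip(np.asarray(y_prob, dtype=float), eps, 1.0)
    weights = np.asarray(weights, dtype=float)
    y_true = np.asarray(y_true)
    loss = 0.0
    for j in range(y_prob.shape[1]):
        in_class = y_true == j
        if in_class.any():
            # per-class average log-probability assigned to the true class
            loss -= weights[j] * np.log(y_prob[in_class, j]).mean()
    return loss / weights.sum()  # normalize by the total weight

# Toy example: two classes, the rarer one up-weighted.
y_true = np.array([0, 0, 1])
y_prob = np.array([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7]])
print(weighted_log_loss(y_true, y_prob, weights=[1.0, 2.0]))
```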

    Evaluation of probabilistic photometric redshift estimation approaches for the Rubin Observatory Legacy Survey of Space and Time (LSST)

    Many scientific investigations of photometric galaxy surveys require redshift estimates, whose uncertainty properties are best encapsulated by photometric redshift (photo-z) posterior probability density functions (PDFs). A plethora of photo-z PDF estimation methodologies abound, producing discrepant results with no consensus on a preferred approach. We present the results of a comprehensive experiment comparing 12 photo-z algorithms applied to mock data produced for The Rubin Observatory Legacy Survey of Space and Time Dark Energy Science Collaboration. By supplying perfect prior information, in the form of the complete template library and a representative training set as inputs to each code, we demonstrate the impact of the assumptions underlying each technique on the output photo-z PDFs. In the absence of a notion of true, unbiased photo-z PDFs, we evaluate and interpret multiple metrics of the ensemble properties of the derived photo-z PDFs as well as traditional reductions to photo-z point estimates. We report systematic biases and overall over- or under-breadth of the photo-z PDFs of many popular codes, which may indicate avenues for improvement in the algorithms or implementations. Furthermore, we draw attention to the limitations of established metrics for assessing photo-z PDF accuracy; though we identify the conditional density estimate loss as a promising metric of photo-z PDF performance in the case where true redshifts are available but true photo-z PDFs are not, we emphasize the need for science-specific performance metrics.
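
    As an illustration of the conditional density estimate (CDE) loss mentioned above, here is a minimal grid-based sketch. The tabulated-grid representation of the PDFs and the nearest-grid-point evaluation are simplifying assumptions made for brevity.

```python
import numpy as np

def cde_loss(pdf_grid, z_grid, z_true):
    """Conditional density estimation (CDE) loss,
    L = E[ integral f(z)^2 dz ] - 2 E[ f(z_true) ],
    estimated from photo-z PDFs tabulated on a shared redshift grid.

    pdf_grid : (N, K) PDFs evaluated on the grid, one row per galaxy
    z_grid   : (K,) the common redshift grid
    z_true   : (N,) true (e.g. spectroscopic) redshifts
    """
    # first term: mean over galaxies of the integral of f^2 (trapezoid rule)
    term1 = np.trapz(pdf_grid**2, z_grid, axis=1).mean()
    # second term: mean PDF value at the true redshift (nearest grid point)
    idx = np.clip(np.searchsorted(z_grid, z_true), 0, len(z_grid) - 1)
    term2 = pdf_grid[np.arange(len(z_true)), idx].mean()
    return term1 - 2.0 * term2  # lower is better
```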

    Results of the Photometric LSST Astronomical Time-series Classification Challenge (PLAsTiCC)

    Next-generation surveys like the Legacy Survey of Space and Time (LSST) on the Vera C. Rubin Observatory (Rubin) will generate orders of magnitude more discoveries of transients and variable stars than previous surveys. To prepare for this data deluge, we developed the Photometric LSST Astronomical Time-series Classification Challenge (PLAsTiCC), a competition that aimed to catalyze the development of robust classifiers under LSST-like conditions of a nonrepresentative training set for a large photometric test set of imbalanced classes. Over 1000 teams participated in PLAsTiCC, which was hosted on the Kaggle data science competition platform between 2018 September 28 and 2018 December 17, ultimately identifying three winners in 2019 February. Participants produced classifiers employing a diverse set of machine-learning techniques, including hybrid combinations and ensemble averages of a range of approaches, among them boosted decision trees, neural networks, and multilayer perceptrons. The strong performance of the top three classifiers on Type Ia supernovae and kilonovae represents a major improvement over the current state of the art within astronomy. This paper summarizes the most promising methods and evaluates their results in detail, highlighting future directions both for classifier development and simulation needs for a next-generation PLAsTiCC data set.
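
    As a toy illustration of the ensemble averaging mentioned above, the sketch below combines class-probability predictions from several base classifiers; the winning entries' actual hybrid and stacking schemes were more elaborate than this.

```python
import numpy as np

def ensemble_average(prob_list, weights=None):
    """Weighted average of class-probability predictions from several
    classifiers (a toy sketch; real entries also used hybrid stacking).

    prob_list : list of (N, M) probability arrays, one per base classifier
    weights   : optional per-classifier weights (default: uniform)
    """
    stacked = np.stack([np.asarray(p, dtype=float) for p in prob_list])
    avg = np.average(stacked, axis=0, weights=weights)
    return avg / avg.sum(axis=1, keepdims=True)  # renormalize each row
```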

    A Unified Analysis of Four Cosmic Shear Surveys

    In the past few years, several independent collaborations have presented cosmological constraints from tomographic cosmic shear analyses. These analyses differ in many aspects: the datasets, the shear and photometric redshift estimation algorithms, the theory model assumptions, and the inference pipelines. To assess the robustness of the existing cosmic shear results, we present in this paper a unified analysis of four of the recent cosmic shear surveys: the Deep Lens Survey (DLS), the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS), the Science Verification data from the Dark Energy Survey (DES-SV), and the 450 deg$^{2}$ release of the Kilo-Degree Survey (KiDS-450). By using a unified pipeline, we show how the cosmological constraints are sensitive to the various details of the pipeline. We identify several analysis choices that can shift the cosmological constraints by a significant fraction of the uncertainties. For our fiducial analysis choice (a Gaussian covariance, conservative scale cuts, no baryonic feedback contamination, and identical cosmological parameter priors and intrinsic alignment treatments), we find the constraints (mean, 16% and 84% confidence intervals) on the parameter $S_{8}\equiv \sigma_{8}(\Omega_{\rm m}/0.3)^{0.5}$ to be $S_{8}=0.94_{-0.045}^{+0.046}$ (DLS), $0.66_{-0.071}^{+0.070}$ (CFHTLenS), $0.84_{-0.061}^{+0.062}$ (DES-SV) and $0.76_{-0.049}^{+0.048}$ (KiDS-450). From the goodness-of-fit and the Bayesian evidence ratio, we determine that amongst the four surveys the two more recent, DES-SV and KiDS-450, have acceptable goodness-of-fit and are consistent with each other. Their combined constraint is $S_{8}=0.79^{+0.042}_{-0.041}$, which is in good agreement with the first year of DES cosmic shear results and with recent CMB constraints from the Planck satellite.
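
    For orientation, here is a worked instance of the $S_8$ definition above; the parameter values are illustrative, not taken from any of the four surveys.

```latex
S_8 \equiv \sigma_8\left(\frac{\Omega_{\rm m}}{0.3}\right)^{0.5};
\qquad
\sigma_8 = 0.81,\ \Omega_{\rm m} = 0.32
\;\Rightarrow\;
S_8 = 0.81\times\left(\tfrac{0.32}{0.3}\right)^{0.5}
    \approx 0.81\times 1.033 \approx 0.84.
```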

    Performance Assessment in Fingerprinting and Multi Component Quantitative NMR Analyses

    An interlaboratory comparison (ILC) was organized with the aim of establishing quality control indicators suitable for multicomponent quantitative analysis by nuclear magnetic resonance (NMR) spectroscopy. A total of 36 NMR data sets (corresponding to 1260 NMR spectra) were produced by 30 participants using 34 NMR spectrometers. The calibration line method was chosen for the quantification of a five-component model mixture. Results show that quantitative NMR is a robust quantification tool: 26 out of 36 data sets yielded statistically equivalent calibration lines for all considered NMR signals. The performance of each laboratory was assessed by means of a new performance index (named Qp-score), which is related to the difference between the experimental and the consensus values of the slope of the calibration lines. Laboratories whose Qp-score falls within the acceptability range are qualified to produce NMR spectra that can be considered statistically equivalent in terms of the relative intensities of the signals. In addition, the specific response of nuclei to the experimental excitation/relaxation conditions was addressed by means of a parameter named NR, which is related to the difference between the theoretical and the consensus slopes of the calibration lines and is specific to each signal produced by a well-defined set of acquisition parameters.
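
    A minimal sketch of the calibration line method and of a Qp-score-like index follows. The ILC's exact fitting, consensus, and scoring procedures are not reproduced here; in particular, the standardized-deviation form of `qp_score` is an assumed illustration, not the paper's definition.

```python
import numpy as np

def calibration_slope(conc, intensity):
    """Least-squares slope of an NMR calibration line (signal intensity
    versus analyte concentration); a sketch of the calibration line method."""
    conc = np.asarray(conc, dtype=float)
    A = np.vstack([conc, np.ones_like(conc)]).T  # model: I = slope*c + b
    (slope, _intercept), *_ = np.linalg.lstsq(
        A, np.asarray(intensity, dtype=float), rcond=None
    )
    return slope

def qp_score(lab_slope, consensus_slope, consensus_std):
    """Illustrative Qp-score-like index: the lab slope's standardized
    deviation from the consensus slope (assumed form)."""
    return (lab_slope - consensus_slope) / consensus_std
```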