Detailed quantification of glacier elevation and mass changes in South Georgia
Most glaciers in South America and on the Antarctic Peninsula are retreating and thinning. They are considered strong contributors to global sea level rise. However, there is a lack of glacier mass balance studies in other areas of the Southern Hemisphere, such as the surrounding Antarctic Islands. Here, we present a detailed quantification of the 21st century glacier elevation and mass changes for the entire South Georgia Island using bi-static synthetic aperture radar interferometry between 2000 and 2013. The results suggest a significant mass loss since the beginning of the present century. We calculate an average glacier mass balance of −1.04 ± 0.09 m w.e. a⁻¹ and a mass loss rate of 2.28 ± 0.19 Gt a⁻¹ (2000–2013), contributing 0.006 ± 0.001 mm a⁻¹ to sea-level rise. Additionally, we calculate a subaqueous mass loss of 0.77 ± 0.04 Gt a⁻¹ (2003–2016), with an area change at the marine- and lake-terminating glacier fronts of −6.58 ± 0.33 km² a⁻¹, corresponding to ~4% of the total glacier area. Overall, we observe negative mass balance rates in South Georgia, with the highest thinning and retreat rates at the large outlet glaciers located on the north-east coast. Although the spaceborne remote sensing dataset analysed in this research is a key contribution to a better understanding of the glacier changes in South Georgia, more detailed field measurements, glacier dynamics studies, or further long-term analysis with high-resolution regional climate models are required to precisely identify the forcing factors.
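The reported rates are internally consistent, which can be verified with a quick back-of-the-envelope check. The sketch below is illustrative only: the glacierized area is inferred from the quoted rates rather than taken from the paper, and the ocean surface area is a standard approximate value.

```python
# Consistency check of the abstract's mass-loss and sea-level numbers.
# Assumed, not quoted from the paper: OCEAN_AREA_KM2 and the implied area.

OCEAN_AREA_KM2 = 361.8e6          # approximate global ocean surface area

mass_balance_mwe_per_yr = -1.04   # m w.e. a^-1 (reported)
mass_loss_gt_per_yr = 2.28        # Gt a^-1 (reported)

# 1 m w.e. over 1 km^2 of ice is 1e-3 Gt of water, so the two reported
# rates together imply the glacierized area:
implied_area_km2 = mass_loss_gt_per_yr / (abs(mass_balance_mwe_per_yr) * 1e-3)

# Sea-level equivalent: spread the annual mass loss (in kg) over the
# ocean surface (in m^2) at water density 1000 kg m^-3, convert to mm.
sle_mm_per_yr = (mass_loss_gt_per_yr * 1e12
                 / (OCEAN_AREA_KM2 * 1e6 * 1000.0)) * 1000.0

print(round(implied_area_km2))    # ~2190 km^2 of glacierized area
print(round(sle_mm_per_yr, 3))    # 0.006 mm a^-1, matching the abstract
```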
Simultaneous Quantification and Identification of Individual Chemicals in Metabolite Mixtures by Two-Dimensional Extrapolated Time-Zero ¹H−¹³C HSQC (HSQC₀)
The Photometric LSST Astronomical Time-series Classification Challenge PLAsTiCC: Selection of a Performance Metric for Classification Probabilities Balancing Diverse Science Goals
Classification of transient and variable light curves is an essential step in using astronomical observations to develop an understanding of the underlying physical processes from which they arise. However, upcoming deep photometric surveys, including the Large Synoptic Survey Telescope (LSST), will produce a deluge of low signal-to-noise data for which traditional type estimation procedures are inappropriate. Probabilistic classification is more appropriate for such data but is incompatible with the traditional metrics used on deterministic classifications. Furthermore, large survey collaborations like LSST intend to use the resulting classification probabilities for diverse science objectives, indicating a need for a metric that balances a variety of goals. We describe the process used to develop an optimal performance metric for an open classification challenge that seeks to identify probabilistic classifiers that can serve many scientific interests. The Photometric LSST Astronomical Time-series Classification Challenge (PLAsTiCC) aims to identify promising techniques for obtaining classification probabilities of transient and variable objects by engaging a broader community beyond astronomy. Using mock classification probability submissions emulating realistically complex archetypes of those anticipated for PLAsTiCC, we compare the sensitivity of two metrics of classification probabilities under various weighting schemes, finding that both yield results that are qualitatively consistent with intuitive notions of classification performance. We thus choose as a metric for PLAsTiCC a weighted modification of the cross-entropy because it can be meaningfully interpreted in terms of information content. Finally, we propose extensions of our methodology to ever more complex challenge goals and suggest some guiding principles for approaching the choice of a metric of probabilistic data products.
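The chosen metric, a per-class-weighted cross-entropy, can be sketched in a few lines. This is an illustrative implementation under assumptions: the class weights below are placeholders, not the actual PLAsTiCC weights, and the normalization convention may differ from the challenge's.

```python
import numpy as np

def weighted_log_loss(probs, truth, weights, eps=1e-15):
    """Per-class-weighted cross-entropy, in the spirit of the PLAsTiCC
    metric. probs: (n_objects, n_classes) classification probabilities;
    truth: true class index per object; weights: per-class weights
    (illustrative values, not the challenge's). Lower is better."""
    probs = np.clip(probs, eps, 1 - eps)
    n_obj, n_class = probs.shape
    loss = 0.0
    for m in range(n_class):
        in_class = truth == m
        n_m = in_class.sum()
        if n_m == 0:
            continue
        # Average negative log-probability assigned to the true class,
        # weighted so rare classes are not drowned out by common ones.
        loss += weights[m] * (-np.log(probs[in_class, m])).sum() / n_m
    return loss / np.sum(weights)

# Toy example: 3 objects, 2 classes, a maximally uninformative classifier.
truth = np.array([0, 1, 1])
uniform = np.full((3, 2), 0.5)
w = np.array([1.0, 2.0])
print(weighted_log_loss(uniform, truth, w))  # ln 2 ~ 0.693
```

Normalizing each class's contribution by its object count is what makes the metric robust to the imbalanced classes the abstract describes: a classifier cannot score well by simply getting the most common class right.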
Evaluation of probabilistic photometric redshift estimation approaches for the Rubin Observatory Legacy Survey of Space and Time (LSST)
Many scientific investigations of photometric galaxy surveys require redshift estimates, whose uncertainty properties are best encapsulated by photometric redshift (photo-z) posterior probability density functions (PDFs). A plethora of photo-z PDF estimation methodologies abound, producing discrepant results with no consensus on a preferred approach. We present the results of a comprehensive experiment comparing 12 photo-z algorithms applied to mock data produced for The Rubin Observatory Legacy Survey of Space and Time Dark Energy Science Collaboration. By supplying perfect prior information, in the form of the complete template library and a representative training set as inputs to each code, we demonstrate the impact of the assumptions underlying each technique on the output photo-z PDFs. In the absence of a notion of true, unbiased photo-z PDFs, we evaluate and interpret multiple metrics of the ensemble properties of the derived photo-z PDFs as well as traditional reductions to photo-z point estimates. We report systematic biases and overall over/underbreadth of the photo-z PDFs of many popular codes, which may indicate avenues for improvement in the algorithms or implementations. Furthermore, we draw attention to the limitations of established metrics for assessing photo-z PDF accuracy; though we identify the conditional density estimate loss as a promising metric of photo-z PDF performance in the case where true redshifts are available but true photo-z PDFs are not, we emphasize the need for science-specific performance metrics.
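The conditional density estimate (CDE) loss highlighted above is attractive precisely because it can be estimated from the tabulated PDFs and the true redshifts alone, with no access to true PDFs. A minimal grid-based sketch, assuming each object's PDF is evaluated on a shared uniform redshift grid:

```python
import numpy as np

def cde_loss(pdf_grid, z_grid, z_true):
    """Conditional density estimate (CDE) loss, up to an additive
    constant:  L = E[ integral p_hat(z|x)^2 dz ] - 2 E[ p_hat(z_true|x) ].
    pdf_grid: (n_objects, n_grid) PDFs on a shared uniform z grid;
    z_true: true redshift per object. Lower (more negative) is better."""
    dz = z_grid[1] - z_grid[0]
    # First term: expected integral of the squared PDF (Riemann sum).
    term1 = np.mean(np.sum(pdf_grid**2, axis=1) * dz)
    # Second term: each PDF evaluated at its object's true redshift.
    p_at_true = np.array([np.interp(zt, z_grid, pdf)
                          for zt, pdf in zip(z_true, pdf_grid)])
    return term1 - 2.0 * np.mean(p_at_true)

# Toy check: a PDF sharply peaked at the true redshift scores better
# (lower) than a broad one centered on the same value.
z = np.linspace(0.0, 3.0, 3001)
gauss = lambda mu, s: np.exp(-0.5 * ((z - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))
z_true = np.array([1.0])
narrow = gauss(1.0, 0.1)[None, :]
broad = gauss(1.0, 0.5)[None, :]
print(cde_loss(narrow, z, z_true) < cde_loss(broad, z, z_true))  # True
```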
Deconvolution of Two-Dimensional NMR Spectra by Fast Maximum Likelihood Reconstruction: Application to Quantitative Metabolomics
Results of the Photometric LSST Astronomical Time-series Classification Challenge (PLAsTiCC)
Next-generation surveys like the Legacy Survey of Space and Time (LSST) on the Vera C. Rubin Observatory (Rubin) will generate orders of magnitude more discoveries of transients and variable stars than previous surveys. To prepare for this data deluge, we developed the Photometric LSST Astronomical Time-series Classification Challenge (PLAsTiCC), a competition that aimed to catalyze the development of robust classifiers under LSST-like conditions of a nonrepresentative training set for a large photometric test set of imbalanced classes. Over 1000 teams participated in PLAsTiCC, which was hosted on the Kaggle data science competition platform between 2018 September 28 and 2018 December 17, ultimately identifying three winners in 2019 February. Participants produced classifiers employing a diverse set of machine-learning techniques including hybrid combinations and ensemble averages of a range of approaches, among them boosted decision trees, neural networks, and multilayer perceptrons. The strong performance of the top three classifiers on Type Ia supernovae and kilonovae represents a major improvement over the current state of the art within astronomy. This paper summarizes the most promising methods and evaluates their results in detail, highlighting future directions both for classifier development and simulation needs for a next-generation PLAsTiCC data set.
The Wor1-like Protein Fgp1 Regulates Pathogenicity, Toxin Synthesis and Reproduction in the Phytopathogenic Fungus Fusarium graminearum
WOR1 is a gene for a conserved fungal regulatory protein controlling the dimorphic switch and pathogenicity determinants in Candida albicans, and its ortholog in the plant pathogen Fusarium oxysporum, called SGE1, is required for pathogenicity and expression of key plant effector proteins. F. graminearum, an important pathogen of cereals, is not known to employ switching, and no effector proteins from F. graminearum have been found to date that are required for infection. In this study, the potential role of the WOR1-like gene in pathogenesis was tested in this toxigenic fungus. Deletion of the WOR1 ortholog (called FGP1) in F. graminearum results in greatly reduced pathogenicity and loss of trichothecene toxin accumulation in infected wheat plants and in vitro. The loss of toxin accumulation alone may be sufficient to explain the loss of pathogenicity to wheat. Under toxin-inducing conditions, expression of genes for trichothecene biosynthesis and many other genes is not detected or detected at lower levels in Δfgp1 strains. FGP1 is also involved in the developmental processes of conidium formation and sexual reproduction and modulates a morphological change that accompanies mycotoxin production in vitro. The Wor1-like proteins in Fusarium species have highly conserved N-terminal regions and remarkably divergent C-termini. Interchanging the N- and C-terminal portions of proteins from F. oxysporum and F. graminearum resulted in partial to complete loss of function. Wor1-like proteins are conserved but have evolved to regulate pathogenicity in a range of fungi, likely by adaptations to the C-terminal portion of the protein.
From Data to Software to Science with the Rubin Observatory LSST
The Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) dataset
will dramatically alter our understanding of the Universe, from the origins of
the Solar System to the nature of dark matter and dark energy. Much of this
research will depend on the existence of robust, tested, and scalable
algorithms, software, and services. Identifying and developing such tools ahead
of time has the potential to significantly accelerate the delivery of early
science from LSST. Developing these collaboratively, and making them broadly
available, can enable more inclusive and equitable collaboration on LSST
science.
To facilitate such opportunities, a community workshop entitled "From Data to
Software to Science with the Rubin Observatory LSST" was organized by the LSST
Interdisciplinary Network for Collaboration and Computing (LINCC) and partners,
and held at the Flatiron Institute in New York, March 28–30, 2022. The
workshop included over 50 in-person attendees invited from over 300
applications. It identified seven key software areas of need: (i) scalable
cross-matching and distributed joining of catalogs, (ii) robust photometric
redshift determination, (iii) software for determination of selection
functions, (iv) frameworks for scalable time-series analyses, (v) services for
image access and reprocessing at scale, (vi) object image access (cutouts) and
analysis at scale, and (vii) scalable job execution systems.
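To make need (i) concrete, here is a deliberately naive positional cross-match by angular separation. The function name and interface are illustrative assumptions, not an existing API; the scalable implementations the workshop calls for would replace this brute-force O(N·M) separation matrix with spatial indexing and distributed joins.

```python
import numpy as np

def crossmatch(ra1, dec1, ra2, dec2, radius_arcsec):
    """For each source in catalog 1 (degrees), find the nearest source
    in catalog 2 and flag whether it lies within radius_arcsec.
    Brute force: fine for toy inputs, infeasible at LSST catalog scale."""
    def unit(ra, dec):
        ra, dec = np.radians(ra), np.radians(dec)
        return np.stack([np.cos(dec) * np.cos(ra),
                         np.cos(dec) * np.sin(ra),
                         np.sin(dec)], axis=-1)
    u1, u2 = unit(ra1, dec1), unit(ra2, dec2)
    # Angular separation from dot products of unit vectors, in arcsec.
    sep = np.degrees(np.arccos(np.clip(u1 @ u2.T, -1.0, 1.0))) * 3600.0
    idx = np.argmin(sep, axis=1)
    matched = sep[np.arange(len(idx)), idx] <= radius_arcsec
    return idx, matched

# Toy example: one source, matched against two candidates ~0.36 arcsec
# and ~50 degrees away, with a 1 arcsec match radius.
idx, ok = crossmatch(np.array([10.0]), np.array([-5.0]),
                     np.array([10.0001, 50.0]), np.array([-5.0, 20.0]),
                     radius_arcsec=1.0)
print(idx[0], ok[0])
```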
This white paper summarizes the discussions of this workshop. It considers
the motivating science use cases, identified cross-cutting algorithms,
software, and services, their high-level technical specifications, and the
principles of inclusive collaborations needed to develop them. We provide it as
a useful roadmap of needs, as well as to spur action and collaboration between
groups and individuals looking to develop reusable software for early LSST
science.