66 research outputs found

    Qserv: a distributed shared-nothing database for the LSST catalog

    The LSST project will provide public access to a database catalog that, in its final year, is estimated to include 26 billion stars and galaxies, with dozens of trillions of detections, totaling multiple petabytes. Because we are not aware of an existing open-source database implementation that has been demonstrated to efficiently satisfy astronomers' spatial self-joining and cross-matching queries at this scale, we have implemented Qserv, a distributed shared-nothing SQL database query system. To speed development, Qserv relies on two successful open-source software packages: the MySQL RDBMS and the Xrootd distributed file system. We describe Qserv's design, architecture, and ability to scale to LSST's data requirements. We illustrate its potential with test results on a 150-node cluster using 55 billion rows and 30 terabytes of simulated data. These results demonstrate the soundness of Qserv's approach and the scale it achieves on today's hardware.
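    To make the shared-nothing idea above concrete, the sketch below shows how a spatial cross-match could be decomposed into independent per-chunk queries, each runnable against a worker's local MySQL instance and merged by a front end. The chunking scheme, table and column names, and match radius are illustrative assumptions, not Qserv's actual schema or interface.

```python
# Illustrative sketch of shared-nothing spatial query decomposition.
# Chunking scheme, table/column names, and radius are assumptions for
# illustration only; they are not Qserv's actual schema or API.

CHUNK_DEG = 10.0  # assumed declination-band chunk height in degrees


def chunk_id(dec_deg: float) -> int:
    """Map a declination to a coarse declination-band chunk."""
    return int((dec_deg + 90.0) // CHUNK_DEG)


def per_chunk_crossmatch_sql(chunk: int, radius_arcsec: float = 1.0) -> str:
    """Build the self-contained query one worker would run on its local
    MySQL instance; results from all chunks are merged by the coordinator.
    (Box match only; RA wrap-around is ignored in this sketch.)"""
    r_deg = radius_arcsec / 3600.0
    return (
        f"SELECT o.objectId, s.sourceId "
        f"FROM Object_{chunk} AS o JOIN Source_{chunk} AS s "
        f"ON ABS(o.decl - s.decl) < {r_deg} "
        f"AND ABS(o.ra - s.ra) < {r_deg} / COS(RADIANS(o.decl));"
    )


if __name__ == "__main__":
    # The coordinator fans the same query template out to every chunk/worker.
    for chunk in range(int(180 / CHUNK_DEG)):
        print(per_chunk_crossmatch_sql(chunk))
```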

    Parametrization and Classification of 20 Billion LSST Objects: Lessons from SDSS

    The Large Synoptic Survey Telescope (LSST) will be a large, wide-field ground-based system designed to obtain, starting in 2015, multiple images of the sky that is visible from Cerro Pachon in Northern Chile. About 90% of the observing time will be devoted to a deep-wide-fast survey mode which will observe a 20,000 deg^2 region about 1000 times during the anticipated 10 years of operations (distributed over six bands, ugrizy). Each 30-second long visit will deliver 5σ depth for point sources of r ~ 24.5 on average. The co-added map will be about 3 magnitudes deeper, and will include 10 billion galaxies and a similar number of stars. We discuss various measurements that will be automatically performed for these 20 billion sources, and how they can be used for classification and determination of source physical and other properties. We provide a few classification examples based on SDSS data, such as color classification of stars, color-spatial proximity search for wide-angle binary stars, orbital-color classification of asteroid families, and the recognition of main Galaxy components based on the distribution of stars in the position-metallicity-kinematics space. Guided by these examples, we anticipate that two grand classification challenges for LSST will be 1) rapid and robust classification of sources detected in difference images, and 2) simultaneous treatment of diverse astrometric and photometric time series measurements for an unprecedentedly large number of objects. Comment: Presented at the "Classification and Discovery in Large Astronomical Surveys" meeting, Ringberg Castle, 14-17 October, 200
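    As a minimal sketch of what a color-cut classifier in the spirit of the SDSS-based examples above might look like, the snippet below applies simple two-color cuts to point sources. The specific cut values, class labels, and input format are illustrative assumptions, not the selection criteria used in the paper.

```python
# Minimal color-cut classifier sketch. The cut (u - g > 0.6 for star-like
# colors) and the class labels are illustrative assumptions only.

def classify_by_color(u: float, g: float, r: float) -> str:
    """Crude two-color classification of a point source."""
    ug, gr = u - g, g - r
    if ug > 0.6:                      # assumed locus of Galactic stars
        return "star"
    if gr < 0.6:                      # blue, UV-excess object
        return "quasar-candidate"
    return "unknown"


sources = [
    {"u": 18.2, "g": 17.1, "r": 16.8},   # red, star-like colors
    {"u": 19.0, "g": 18.9, "r": 18.8},   # blue, UV-excess colors
]
for s in sources:
    print(classify_by_color(s["u"], s["g"], s["r"]))
```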

    Infrared Nonlinear Optics

    Contains reports on six research projects. Defense Advanced Research Projects Agency Universities Research Initiative (Contract N00014-46-K-0760); Strategic Defense Initiative/Innovative Science & Technology, managed by the U.S. Naval Research Laboratory (Contract N00014-87-K-2031); National Science Foundation (Grant EET-87-18417)

    Infrared Nonlinear Optics

    Contains reports on four research projects. DARPA/Universities Research Initiative (Contract N00014-46-K-0760); SDI/IST, managed by the Naval Research Laboratory (Contract N00014-87-K-2031)

    LSST: from Science Drivers to Reference Design and Anticipated Data Products

    (Abridged) We describe here the most ambitious survey currently planned in the optical, the Large Synoptic Survey Telescope (LSST). A vast array of science will be enabled by a single wide-deep-fast sky survey, and LSST will have unique survey capability in the faint time domain. The LSST design is driven by four main science themes: probing dark energy and dark matter, taking an inventory of the Solar System, exploring the transient optical sky, and mapping the Milky Way. LSST will be a wide-field ground-based system sited at Cerro Pachón in northern Chile. The telescope will have an 8.4 m (6.5 m effective) primary mirror, a 9.6 deg^2 field of view, and a 3.2 Gigapixel camera. The standard observing sequence will consist of pairs of 15-second exposures in a given field, with two such visits in each pointing in a given night. With these repeats, the LSST system is capable of imaging about 10,000 square degrees of sky in a single filter in three nights. The typical 5σ point-source depth in a single visit in r will be ~24.5 (AB). The project is in the construction phase and will begin regular survey operations by 2022. The survey area will be contained within 30,000 deg^2 with δ < +34.5°, and will be imaged multiple times in six bands, ugrizy, covering the wavelength range 320-1050 nm. About 90% of the observing time will be devoted to a deep-wide-fast survey mode which will uniformly observe an 18,000 deg^2 region about 800 times (summed over all six bands) during the anticipated 10 years of operations, and yield a coadded map to r ~ 27.5. The remaining 10% of the observing time will be allocated to projects such as a Very Deep and Fast time domain survey. The goal is to make LSST data products, including a relational database of about 32 trillion observations of 40 billion objects, available to the public and scientists around the world. Comment: 57 pages, 32 color figures, version with high-resolution figures available from https://www.lsst.org/overvie
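    The relation between the single-visit depth (r ~ 24.5) and the quoted coadded depth (r ~ 27.5) follows from the standard assumption that coadding N equal-depth visits deepens the map by 1.25 log10(N) magnitudes. The sketch below works through that arithmetic; the even split of the ~800 visits across the six bands is an assumption for illustration.

```python
import math

# Back-of-the-envelope check of the coadded depth quoted above, assuming the
# standard scaling that coadding N equal-depth visits gains 1.25*log10(N) mag.
# The even split of visits across the six bands is an illustrative assumption.

single_visit_depth_r = 24.5   # 5-sigma point-source depth per visit (AB mag)
total_visits = 800            # summed over all six bands
visits_per_band = total_visits / 6

gain = 1.25 * math.log10(visits_per_band)
coadd_depth = single_visit_depth_r + gain
print(f"~{visits_per_band:.0f} visits/band -> coadded depth r ~ {coadd_depth:.1f}")
# Prints roughly r ~ 27.2; the r band receives more than an even share of
# visits, which brings the projection closer to the quoted r ~ 27.5.
```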

    LSST Science Book, Version 2.0

    A survey that can cover the sky in optical bands over wide fields to faint magnitudes with a fast cadence will enable many of the exciting science opportunities of the next decade. The Large Synoptic Survey Telescope (LSST) will have an effective aperture of 6.7 meters and an imaging camera with a field of view of 9.6 deg^2, and will be devoted to a ten-year imaging survey over 20,000 deg^2 south of +15 deg. Each pointing will be imaged 2000 times with fifteen-second exposures in six broad bands from 0.35 to 1.1 microns, to a total point-source depth of r ~ 27.5. The LSST Science Book describes the basic parameters of the LSST hardware, software, and observing plans. The book discusses educational and outreach opportunities, then goes on to describe a broad range of science that LSST will revolutionize: mapping the inner and outer Solar System, stellar populations in the Milky Way and nearby galaxies, the structure of the Milky Way disk and halo and other objects in the Local Volume, transient and variable objects both at low and high redshift, and the properties of normal and active galaxies at low and high redshift. It then turns to far-field cosmological topics, exploring properties of supernovae to z ~ 1, strong and weak lensing, the large-scale distribution of galaxies and baryon oscillations, and how these different probes may be combined to constrain cosmological models and the physics of dark energy. Comment: 596 pages. Also available at full resolution at http://www.lsst.org/lsst/sciboo
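    A quick sanity check on the survey numbers quoted above: 2000 fifteen-second exposures correspond to roughly 8 hours of total integration per pointing, and the effective aperture and field of view imply a large étendue (the usual figure of merit for survey power). The calculation below is illustrative only and uses just the figures given in the abstract.

```python
import math

# Illustrative arithmetic on the survey parameters quoted above.

exposures_per_pointing = 2000
exposure_s = 15.0
open_shutter_hours = exposures_per_pointing * exposure_s / 3600.0
print(f"total integration per pointing ~ {open_shutter_hours:.1f} h")  # ~8.3 h

# Etendue = collecting area x field of view, using the 6.7 m effective
# aperture and 9.6 deg^2 field of view from the abstract.
area_m2 = math.pi * (6.7 / 2) ** 2
etendue = area_m2 * 9.6
print(f"etendue ~ {etendue:.0f} m^2 deg^2")  # ~340 m^2 deg^2
```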

    Public health and valorization of genome-based technologies: a new model

    Background: The success rate of timely translation of genome-based technologies into commercially feasible products and services with applicability in health care systems is significantly low. We identified that both industry and scientists neglect health policy aspects when commercializing their technology, more specifically Public Health Assessment Tools (PHAT) and the early involvement of the decision makers on whom market authorization and reimbursement depend. While Technology Transfer (TT) aims to facilitate translation of ideas into products, Health Technology Assessment, one component of PHAT, facilitates translation of products and processes into healthcare services and eventually produces recommendations for decision makers. We aim to propose a new model of valorization to optimize integration of genome-based technologies into the healthcare system.

    Methods: The method used to develop our model is an adapted version of the Fish Trap Model and the Basic Design Cycle.

    Results: We found that, although different, TT and PHAT have similarities. The potential for mutual benefit justifies our proposal to initiate them in relative parallel. We observed that the Public Health Genomics Wheel should be included in this parallel activity to ensure all societal and policy aspects are dealt with preemptively by both stakeholders. On further analysis, we found that this whole process depends on the Value of Information. As a result, we present our LAL (Learning Adapting Leveling) model, which proposes that, based on market demand, TT and PHAT should advocate for relevant technologies through consultation and bilateral communication. This can be achieved by public-private partnerships (PPPs). These widely defined PPPs create the innovation network, a developing, consultative and collaborative networking platform between TT and PHAT. This network is iterative and requires learning, assimilating and using the knowledge developed, which is called absorption capacity. We hypothesize that the higher the absorption capacity, the higher the possibility of success. Our model does not address the phasing out of technology, although we believe the same model can be used to phase out a technology.

    Conclusions: This model proposes to facilitate integration into healthcare and decrease its timeframe. It also helps industry and researchers reach a strategic decision at an early stage about the technology being developed, thus saving resources and minimizing failures.