107 research outputs found

    ULDA user's guide

    The Uniform Low Dispersion Archive (ULDA) is a software system that allows a user, in a single session, to obtain copies on a personal computer of those International Ultraviolet Explorer (IUE) low dispersion spectra that are of interest. Overviews and usage instructions are given for two programs: one to search for and select spectra, and another to convert the selected spectra into a form suitable for the user's image processing system.
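
    Although ULDA itself predates today's tooling, the workflow it describes (fetch a low dispersion spectrum, then convert it for local analysis) maps onto modern Python astronomy libraries. The following is a minimal sketch, assuming an extracted IUE spectrum stored as a FITS table with WAVELENGTH and FLUX columns; the file name and column names are hypothetical.

        # Sketch only: read a hypothetical extracted IUE low dispersion spectrum
        # stored as a FITS table and inspect it locally (requires astropy).
        from astropy.table import Table

        spec = Table.read("swp12345_extracted.fits")      # hypothetical file name
        print(spec.colnames)                              # assumed: WAVELENGTH, FLUX
        print(spec["WAVELENGTH"][:5], spec["FLUX"][:5])   # first few samples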

    Mining Knowledge in Astrophysical Massive Data Sets

    Modern scientific data mainly consist of huge datasets gathered by a very large number of techniques and stored in very diversified and often incompatible data repositories. More generally, in the e-science environment, it is considered a critical and urgent requirement to integrate services across distributed, heterogeneous, dynamic "virtual organizations" formed by different resources within a single enterprise. In the last decade, Astronomy has become an immensely data-rich field due to the evolution of detectors (from plates to digital sensors to mosaics), telescopes and space instruments. The Virtual Observatory approach consists in the federation, under common standards, of all astronomical archives available worldwide, as well as of data analysis, data mining and data exploration applications. The main driver behind this effort is that, once the infrastructure is completed, it will allow a new type of multi-wavelength, multi-epoch science that can currently only barely be imagined. Data Mining, or Knowledge Discovery in Databases, while being the main methodology for extracting the scientific information contained in such Massive Data Sets (MDS), poses crucial challenges, since it must orchestrate transparent access to different computing environments, scalability of algorithms, reusability of resources, and so on. In the present paper we summarize the present status of MDS in the Virtual Observatory and what is currently being done and planned to bring in advanced Data Mining methodologies, in the case of the DAME (DAta Mining & Exploration) project. Comment: pages 845-849, 1st International Conference on Frontiers in Diagnostics Technologies.
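
    As a concrete, if toy, illustration of the kind of unsupervised data mining task such frameworks are built to run at scale, the sketch below clusters synthetic photometric colours with scikit-learn; it is not DAME itself, and the data are random stand-ins.

        # Toy sketch, not DAME: cluster synthetic "colours" with k-means
        # (requires numpy and scikit-learn).
        import numpy as np
        from sklearn.cluster import KMeans

        colours = np.random.rand(1000, 2)              # stand-in photometric colours
        labels = KMeans(n_clusters=3, n_init=10).fit_predict(colours)
        print(np.bincount(labels))                     # objects assigned per cluster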

    Interoperable geographically distributed astronomical infrastructures: technical solutions

    The increase in astronomical data produced by a new generation of observational tools poses the need to distribute data and to bring computation close to the data. To answer this need, we set up a federated data and computing infrastructure involving an international, EGI-federated cloud facility and a set of services implementing IVOA standards and recommendations for authentication, data sharing and resource access. In this paper we describe the technical problems faced and, specifically, the design, technological and architectural solutions adopted. We outline our overall technological solution for bringing data close to computation resources. Beyond the adopted solutions, we propose some points for an open discussion on authentication and authorization mechanisms. Comment: 4 pages, 1 figure, submitted to the Astronomical Society of the Pacific (ASP).
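
    As an illustration of the kind of IVOA-standard resource access such an infrastructure builds on, the sketch below runs a synchronous ADQL query against a TAP service with pyvo; the endpoint URL is a placeholder, not one of the paper's actual services, and authenticated access would require additional credentials handling.

        # Sketch only: query a (placeholder) IVOA TAP service with pyvo.
        import pyvo

        tap = pyvo.dal.TAPService("https://example.org/tap")     # hypothetical endpoint
        result = tap.search("SELECT TOP 5 * FROM ivoa.obscore")  # standard ObsCore table
        print(result.to_table())                                 # results as an astropy table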

    Interconnecting the Virtual Observatory with computational grid infrastructures

    The term 'grid', in the Virtual Observatory (VO) context, has mainly been used to indicate a set of interoperable services allowing transparent access to a set of geographically distributed and heterogeneous archives and catalogues, data exchange and analysis, and so on. The design of the VO has, however, been geared mainly towards allowing users to access registered services.
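
    A minimal sketch of what "accessing registered services" looks like in practice, using pyvo's registry interface; the search keyword is arbitrary, and the exact call signature may vary between pyvo versions.

        # Sketch only: look up registered VO TAP services by keyword with pyvo.
        import pyvo

        services = pyvo.registry.search(keywords=["galaxy"], servicetype="tap")
        for record in list(services)[:5]:
            print(record.res_title)                    # human-readable service title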

    Astronomical Data Analysis Software and Systems XXVI

    This volume contains papers presented at the twenty-sixth annual conference on Astronomical Data Analysis Software and Systems (ADASS XXVI). The ADASS conference is the premier conference for the exchange of information about astronomical software and is held each year, hosted by a different astronomical institution. The conference provides a forum for astronomers, software engineers and data specialists from all around the world to discuss software, algorithms, technologies and recommendations applied to all aspects of astronomy, from telescope operations to data reduction, from data management to computing. The key themes for ADASS XXVI included: long-term data management in astronomical data archives, management of scientific and data analysis projects, connections between large databases and data reduction and analysis, HPC and distributed computing, the usage of Python in astronomy, and others. The conference also touched upon data modelling in astronomy and other topics, and included demo booths, "birds of a feather" sessions, focus demos, and tutorials. This proceedings volume presents over a hundred and eighty reports from the oral, poster and other contributions to the conference.

    VisIVO: an interoperable visualisation tool for Virtual Observatory data

    We present VisIVO, a software tool for the visualisation and analysis of astrophysical data, both data retrieved from the Virtual Observatory framework and cosmological simulations. VisIVO is compliant with VO standards and supports the most important astronomical data formats, such as FITS, HDF5 and VOTable. Data can be retrieved by connecting directly to an available VO service (e.g., the VizieR web service) and loaded into local memory, where they can be further selected, visualised and manipulated.
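
    VisIVO's own interfaces are not shown here, but the kind of direct VO retrieval the abstract mentions can be sketched with astroquery; the target object is an arbitrary example.

        # Sketch only (not VisIVO): fetch catalogue data around M31 from the
        # VizieR service as astropy tables (requires astroquery).
        from astroquery.vizier import Vizier

        Vizier.ROW_LIMIT = 50                  # cap returned rows for a quick look
        tables = Vizier.query_object("M31")    # TableList of matching catalogues
        print(tables)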

    Planck LFI DPC Implementation Status Report

    Version 1.0 was reviewed by ESA at the Planck SGS Implementation Review (Jan 2007); Version 2.0 was reviewed by ESA at the Planck SGS Readiness Review (2008). This report, the Planck/LFI DPC Implementation Status Report, describes the status of the implementation of the LFI Data Processing Centre (DPC). It is a self-standing document summarizing the status of the DPC pipeline implementation for the Planck SGS Readiness Review. The report covers all development activities, organised by Work Packages, and focuses on the most important topics at this stage of development. It should be noted that, over the last three years, SGS1 activity was reported in the usual bimonthly reports, as recommended in the SGS Design Review (Nov 2004), while SGS2 activity was reported in the form of presentations at the quarterly Science Team Meetings.

    DAS: a data management system for instrument tests and operations

    The Data Access System (DAS) is a metadata and data management software system providing a reusable solution for the storage of data acquired both from telescopes and from auxiliary data sources, during instrument development phases and operations. It is part of the Customizable Instrument WorkStation system (CIWS-FW), a framework for the storage, processing and quick-look analysis of data acquired from scientific instruments. The DAS provides a data access layer mainly targeted at software applications: quick-look displays, pre-processing pipelines and scientific workflows. It is logically organized into three main components: an intuitive and compact Data Definition Language (DAS DDL) in XML format, aimed at user-defined data types; an Application Programming Interface (DAS API), which automatically adds classes and methods supporting the DDL data types and provides an object-oriented query language; and a data management component, which maps the metadata of the DDL data types onto a relational Database Management System (DBMS) and stores the data in a shared (network) file system. With the DAS DDL, developers define the data model for a particular project, specifying for each data type the metadata attributes, the data format and layout (if applicable), and named references to related or aggregated data types. Together with the DDL user-defined data types, the DAS API acts as the only interface to store, query and retrieve the metadata and data in the DAS system, providing both an abstract interface and a data-model-specific one in C, C++ and Python. The mapping of metadata onto the back-end database is automatic and supports several relational DBMSs, including MySQL, Oracle and PostgreSQL. Comment: Accepted for publication in the ADASS Conference Series.
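
    The split between metadata in a relational DBMS and bulk data on a shared file system is a common pattern; the sketch below illustrates it generically with sqlite3 from the Python standard library. It is not the DAS API, and the table, column and path names are invented for illustration.

        # Generic sketch of the metadata-in-DBMS / data-on-filesystem pattern
        # described above; not the DAS API (uses only the Python stdlib).
        import sqlite3

        db = sqlite3.connect("metadata.db")
        db.execute("CREATE TABLE IF NOT EXISTS frames "
                   "(id INTEGER PRIMARY KEY, instrument TEXT, path TEXT)")
        # metadata goes into the DBMS; the bulk data stays on a shared file system
        db.execute("INSERT INTO frames (instrument, path) VALUES (?, ?)",
                   ("SPEC01", "/shared/fs/frames/frame_0001.dat"))
        db.commit()
        for row in db.execute("SELECT id, path FROM frames WHERE instrument = ?",
                              ("SPEC01",)):
            print(row)                                 # (id, path) of matching frames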

    Management of the science ground segment for the Euclid mission

    Euclid is an ESA mission aimed at understanding the nature of dark energy and dark matter by simultaneously using two probes (weak lensing and baryon acoustic oscillations). The mission will observe galaxies and clusters of galaxies out to z~2, in a wide extra-galactic survey covering 15,000 deg², plus a deep survey covering an area of 40 deg². The payload is composed of two instruments, an imager in the visible domain (VIS) and an imager-spectrometer (NISP) covering the near-infrared. The launch is planned for Q4 2020. The elements of the Euclid Science Ground Segment (SGS) are the Science Operations Centre (SOC), operated by ESA, and nine Science Data Centres (SDCs) in charge of data processing, provided by the Euclid Consortium (EC), which is formed by over 110 institutes spread over 15 countries. The SOC and the EC started a tight collaboration several years ago in order to design and develop a single, cost-efficient and truly integrated SGS. The distributed nature of the system, the size of the data set, and the required accuracy of the results are the main challenges expected in the design and implementation of the SGS. In particular, the huge volume of data (not only Euclid data but also ground-based data) to be processed in the SDCs will require distributed storage to avoid data migration across SDCs. This paper describes the management challenges that the Euclid SGS is facing while dealing with such complexity. The main aspect is the organisation of a geographically distributed software development team: algorithms and code are developed in a large number of institutes, while data are actually processed at fewer centres (the national SDCs), where the operational computational infrastructures are maintained. The software produced for data handling, processing and analysis is built within a common development environment defined by the SGS System Team, common to the SOC and the EC SGS, which has already been active for several years. The code is built incrementally through different levels of maturity, going from prototypes (developed mainly by scientists) to production code (engineered and tested at the SDCs). A number of incremental challenges (infrastructure, data processing and integrated) have been included in the Euclid SGS test plan to verify the correctness and accuracy of the developed systems.