4,956 research outputs found

    A 3D Automated Classification Scheme for the TAUVEX data pipeline

    In order to develop a pipeline for automated classification of stars to be observed by the TAUVEX ultraviolet space telescope, we employ an artificial neural network (ANN) technique for classifying stars, using synthetic spectra in the UV region from 1250 Å to 3220 Å as the training set and International Ultraviolet Explorer (IUE) low-resolution spectra as the test set. Both data sets have been pre-processed to mimic observations of the TAUVEX ultraviolet imager. We have successfully classified 229 stars from the IUE low-resolution catalogue to within 3-4 spectral sub-classes using two different sets of simulated training spectra: the TAUVEX spectra of 286 spectral types and the UVBLUE spectra of 277 spectral types. Further, for those IUE spectra with known reddening, we have also obtained the colour excess E(B-V) (i.e. the interstellar reddening, in magnitudes) to an accuracy of better than 0.1 magnitudes. It has been shown that, even with data limited to a few photometric bands, ANNs not only classify the stars but also provide satisfactory estimates of the interstellar extinction. The ANN-based classification scheme has been successfully tested on the simulated TAUVEX data pipeline. It is expected that the same technique can be employed for data validation in the ultraviolet from the virtual observatories. Finally, the interstellar extinction estimated by applying the ANNs to the TAUVEX database would provide an extensive extinction map for our Galaxy, which could in turn be modeled for the dust distribution in the Galaxy. Comment: 8 pages, 12 figures. Accepted for publication in MNRAS; high-resolution figures available from the authors on request.
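
    As a rough illustration of the approach described above (not the authors' actual network), the sketch below trains a small feed-forward ANN to map a handful of UV band fluxes to spectral sub-classes. The band count, template counts, and random placeholder data are assumptions made purely for the example.

    ```python
    # Hypothetical sketch: classifying stars into spectral sub-classes from a few
    # UV band fluxes with a small feed-forward ANN (scikit-learn). Shapes and the
    # placeholder data are assumptions, not the paper's actual training set.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Placeholder training set: simulated fluxes in a few UV bands, one row per
    # synthetic spectral template, with an integer-encoded sub-class label each.
    n_templates, n_bands = 286, 4
    X_train = rng.random((n_templates, n_bands))
    y_train = rng.integers(0, 50, size=n_templates)

    scaler = StandardScaler().fit(X_train)
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
    clf.fit(scaler.transform(X_train), y_train)

    # "Observed" spectra (stand-ins for IUE low-resolution spectra reduced to the
    # same bands) are then classified with the trained network.
    X_test = rng.random((229, n_bands))
    predicted_subclass = clf.predict(scaler.transform(X_test))
    ```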

    The MACHO Data Pipeline

    The MACHO experiment is searching for dark matter in the halo of the Galaxy by monitoring more than 20 million stars in the LMC and the Galactic bulge for gravitational microlensing events. The hardware consists of a 50-inch telescope, a two-color 32-megapixel CCD camera, and a network of computers. On clear nights the system generates up to 8 GB of raw data and 1 GB of reduced data. The computer system is responsible for all real-time control tasks, for data reduction, and for storing all data associated with each observation in a database. The subject of this paper is the software system that handles these functions. It is an integrated system, controlled by Petri nets, that consists of multiple processes communicating via mailboxes and a bulletin board. The system is highly automated, readily extensible, and incorporates flexible error-recovery capabilities. It is implemented in C++ in a Unix environment. Comment: 36 pages, 6 PostScript figures, uuencoded compressed tar. Submitted to PAS
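
    To make the "Petri nets plus mailboxes" architecture concrete, here is a minimal sketch (in Python rather than the system's C++, and with invented names) of processes exchanging messages through mailbox queues, where a transition fires only once every input place holds a token.

    ```python
    # Minimal sketch, not the MACHO code: mailboxes as queues, with a
    # Petri-net-style transition that fires when all input places hold a token.
    # All names here are illustrative assumptions.
    import queue
    import threading

    raw_frames = queue.Queue()     # input place: observations awaiting reduction
    calib_ready = queue.Queue()    # input place: calibration data available
    reduced = queue.Queue()        # output place: reduced data for the database

    def reduce_transition():
        """Fire when both input places hold a token, producing one output token."""
        while True:
            frame = raw_frames.get()       # blocks until a raw-frame token arrives
            calib = calib_ready.get()      # blocks until a calibration token arrives
            reduced.put(f"reduced({frame}, {calib})")

    worker = threading.Thread(target=reduce_transition, daemon=True)
    worker.start()

    raw_frames.put("frame_001")
    calib_ready.put("flatfield_A")
    print(reduced.get())           # -> reduced(frame_001, flatfield_A)
    ```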

    SSTRED: A data-processing and metadata-generating pipeline for CHROMIS and CRISP

    We present a data pipeline for the newly installed SST/CHROMIS imaging spectrometer, as well as for the older SST/CRISP spectropolarimeter. The aim is to provide observers with a user-friendly data pipeline that delivers science-ready data with the metadata needed for archival. We generalized the CRISPRED data pipeline for multiple instruments and added metadata according to recommendations worked out as part of the SOLARNET project. We made improvements to several steps in the pipeline, including the MOMFBD image restoration. Part of that work is a new fork of the MOMFBD program, called REDUX, with several new features that are needed in the new pipeline. The CRISPEX data viewer has been updated to accommodate data cubes stored in this format. The pipeline code, REDUX and CRISPEX are all freely available through git repositories or web download. We derive expressions for combining statistics of individual frames into statistics for a set of frames. We define a new extension to the World Coordinate System that allows us to specify cavity errors as distortions to the spectral coordinate. Comment: Draft
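
    The abstract mentions expressions for combining per-frame statistics into statistics for a whole frame set. The paper derives its own expressions; purely as an illustration of the idea, and assuming the statistics in question are counts, means and variances, the standard pairwise combination looks like this:

    ```python
    # Illustrative only: one standard way to combine per-frame (count, mean,
    # variance) into statistics for the whole set, applied pairwise. Not
    # necessarily the exact expressions derived in the paper.
    def combine(n1, mean1, var1, n2, mean2, var2):
        """Combine (count, mean, sample variance) of two frame sets into one."""
        n = n1 + n2
        delta = mean2 - mean1
        mean = mean1 + delta * n2 / n
        # Pool the sums of squared deviations, then renormalise.
        m2 = var1 * (n1 - 1) + var2 * (n2 - 1) + delta**2 * n1 * n2 / n
        return n, mean, m2 / (n - 1)

    # Example: fold per-frame statistics of three frames into set statistics.
    frames = [(1024, 100.0, 4.0), (1024, 101.5, 3.5), (1024, 99.2, 4.2)]
    n, mean, var = frames[0]
    for stats in frames[1:]:
        n, mean, var = combine(n, mean, var, *stats)
    print(n, round(mean, 3), round(var, 3))
    ```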

    IoT Traffic Simulation: Data Pipeline

    This poster discusses the data pipeline for a project involving car and road sensor data, which is processed to create machine learning models for traffic and weather behavior. The car data includes the coordinates of the car, and the road data includes max/min temperatures, precipitation and humidity. The simulated data is ingested by Kafka and consumed by Spark for stream processing. The data is persisted in Cassandra, a NoSQL datastore. These systems are deployed on NCSA hardware and Microsoft Azure. Each system runs on a cluster of virtual machines. The pipeline is integrated with Python APIs.
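
    A hedged sketch of the Kafka → Spark → Cassandra flow described above is given below. The topic name, record schema, keyspace and table names are assumptions, and writing to Cassandra requires the spark-cassandra-connector package on the Spark classpath; this is not the project's actual code.

    ```python
    # Hypothetical sketch: simulated sensor records arrive on a Kafka topic, are
    # consumed by Spark Structured Streaming, and are persisted to Cassandra.
    # Topic, schema, keyspace and table names are assumptions.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import from_json, col
    from pyspark.sql.types import StructType, StructField, DoubleType, StringType

    spark = SparkSession.builder.appName("iot-traffic-pipeline").getOrCreate()

    car_schema = StructType([
        StructField("car_id", StringType()),
        StructField("lat", DoubleType()),
        StructField("lon", DoubleType()),
    ])

    # Consume simulated car-sensor messages from Kafka and parse the JSON payload.
    cars = (spark.readStream
            .format("kafka")
            .option("kafka.bootstrap.servers", "kafka:9092")
            .option("subscribe", "car-sensors")
            .load()
            .select(from_json(col("value").cast("string"), car_schema).alias("r"))
            .select("r.*"))

    # Persist each micro-batch to a Cassandra table (needs spark-cassandra-connector).
    query = (cars.writeStream
             .format("org.apache.spark.sql.cassandra")
             .option("checkpointLocation", "/tmp/iot-checkpoint")
             .option("keyspace", "traffic")
             .option("table", "car_positions")
             .start())
    query.awaitTermination()
    ```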

    Enabling a High Throughput Real Time Data Pipeline for a Large Radio Telescope Array with GPUs

    The Murchison Widefield Array (MWA) is a next-generation radio telescope currently under construction in the remote Western Australia Outback. Raw data will be generated continuously at 5 GiB/s, grouped into 8 s cadences. This high throughput motivates the development of on-site, real-time processing and reduction in preference to archiving, transport and off-line processing. Each batch of 8 s data must be completely reduced before the next batch arrives. Maintaining real-time operation will require a sustained performance of around 2.5 TFLOP/s (including convolutions, FFTs, interpolations and matrix multiplications). We describe a scalable heterogeneous computing pipeline implementation, exploiting both the high computing density and the FLOP-per-watt ratio of modern GPUs. The architecture is highly parallel within and across nodes, with all major processing elements performed by GPUs. Necessary scatter-gather operations along the pipeline are loosely synchronized between the nodes hosting the GPUs. The MWA will be a frontier scientific instrument and a pathfinder for planned peta- and exascale facilities. Comment: Version accepted by Comp. Phys. Commun.
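
    As a rough illustration of the real-time constraint (not the MWA pipeline itself), the sketch below runs a stand-in GPU reduction step on one cadence with CuPy and checks it against the 8 s budget. Array shapes and the processing step are assumptions, and running it requires CuPy and a CUDA-capable GPU.

    ```python
    # Illustrative sketch only: process one 8-second cadence of stand-in
    # visibility data on the GPU (matrix multiply + 2-D FFT) and time it against
    # the real-time budget. Shapes and operations are assumptions for the demo.
    import time
    import cupy as cp

    CADENCE_S = 8.0                      # each batch must finish before the next arrives

    def process_cadence(vis):
        """Stand-in reduction step: correlate, transform and reduce on the GPU."""
        corr = vis @ vis.conj().T        # matrix multiplication (correlation-like step)
        img = cp.fft.fft2(corr)          # 2-D FFT (imaging-like step)
        return cp.abs(img).sum()         # reduce to a scalar just for the demo

    vis = (cp.random.standard_normal((4096, 4096))
           + 1j * cp.random.standard_normal((4096, 4096)))

    start = time.perf_counter()
    result = float(process_cadence(vis))   # forces the GPU work to complete
    cp.cuda.Stream.null.synchronize()
    elapsed = time.perf_counter() - start
    print(f"cadence processed in {elapsed:.2f} s (budget {CADENCE_S} s)")
    ```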