
    The Infrared Imaging Spectrograph (IRIS) for TMT: Data Reduction System

    IRIS (InfraRed Imaging Spectrograph) is the diffraction-limited first-light instrument for the Thirty Meter Telescope (TMT), consisting of a near-infrared (0.84 to 2.4 μm) imager and integral field spectrograph (IFS). The IFS makes use of a lenslet array and a slicer for spatial sampling and will be able to operate in hundreds of different modes, combining four plate scales from 4 milliarcseconds (mas) to 50 mas with a large range of filters and gratings. The imager will have a field of view of 34×34 arcsec² with a plate scale of 4 mas and many selectable filters. We present the preliminary design of the data reduction system (DRS) for IRIS, which needs to address all of these observing modes. Reduction of IRIS data will pose unique challenges, since the DRS will provide real-time reduction and analysis of the imaging and spectroscopic data during observational sequences, as well as advanced post-processing algorithms. The DRS will support three basic modes of operation: reducing data from the imager, the lenslet IFS, and the slicer IFS. The DRS will be written in Python, making use of available open-source astronomical packages. In addition to real-time data reduction, the DRS will provide real-time visualization tools, giving astronomers an up-to-date evaluation of target acquisition and data quality. The quicklook suite will include visualization tools for 1D, 2D, and 3D raw and reduced images. We discuss the overall requirements of the DRS and visualization tools, as well as the calibration data necessary to achieve optimal data quality and to exploit science cases across all cosmic distance scales.
    Comment: 13 pages, 2 figures, 6 tables, Proceeding 9913-165 of the SPIE Astronomical Telescopes + Instrumentation 2016
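
    To make the three-mode design concrete, here is a minimal Python sketch of a mode-dispatched reduction step. All names are hypothetical, since the abstract does not specify the DRS API, and the imager path shows only a basic dark-subtraction and flat-fielding step:

        import numpy as np

        def reduce_imager_frame(raw, dark, flat):
            """Basic imager reduction: dark subtraction and flat-fielding."""
            return (raw - dark) / np.where(flat > 0, flat, 1.0)

        # Dispatch table for the three basic DRS modes named in the abstract;
        # the lenslet-IFS and slicer-IFS reducers would register here as well.
        REDUCERS = {"imager": reduce_imager_frame}

        def reduce_frame(mode, **calibration_inputs):
            return REDUCERS[mode](**calibration_inputs)

        frame = reduce_frame("imager",
                             raw=np.random.poisson(100, (64, 64)).astype(float),
                             dark=np.full((64, 64), 5.0),
                             flat=np.ones((64, 64)))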

    XML for Domain Viewpoints

    Within research institutions like CERN (European Organization for Nuclear Research) there are often disparate databases (different in format, type, and structure) that users need to access in a domain-specific manner. Users may want to access a simple unit of information without having to understand the details of the underlying schema, or they may want to access the same information from several different sources. It is neither desirable nor feasible to require users to have knowledge of these schemas. Instead, it would be advantageous if a user could query these sources using his or her own domain models and abstractions of the data. This paper describes the basis of an XML (eXtensible Markup Language) framework that provides this functionality and is currently being developed at CERN. The goal of the first prototype was to explore the possibilities of XML for data integration and model management. It shows how XML can be used to integrate data sources. The framework is applicable not only to CERN data sources but to other environments too.
    Comment: 9 pages, 6 figures, conference report from SCI'2001 Multiconference on Systemics & Informatics, Florida
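
    As a toy illustration of the idea (not CERN's actual framework, whose details the abstract does not give), the Python sketch below exposes two XML sources with different schemas through a single domain-level viewpoint; all element and field names are invented:

        import xml.etree.ElementTree as ET

        source_a = ET.fromstring(
            "<staff><person><name>Ada</name><div>IT</div></person></staff>")
        source_b = ET.fromstring(
            "<employees><emp full_name='Grace' department='EP'/></employees>")

        # Per-source mappings from the shared domain model onto each schema.
        MAPPINGS = {
            "name": [
                (source_a, lambda s: [p.findtext("name") for p in s.iter("person")]),
                (source_b, lambda s: [e.get("full_name") for e in s.iter("emp")]),
            ],
        }

        def domain_query(field):
            """Answer a domain-level query without exposing the schemas."""
            results = []
            for source, extract in MAPPINGS[field]:
                results.extend(extract(source))
            return results

        print(domain_query("name"))  # ['Ada', 'Grace']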

    MobDSL: a domain specific language for multiple mobile platform deployment

    There is increasing interest in establishing a presence in the mobile application market, with platforms including Apple iPhone, Google Android, and Microsoft Windows Mobile. Because of the differences in platform languages, frameworks, and device hardware, developing an application for more than one platform can be a difficult task. In this paper we address this problem by creating a mobile Domain Specific Language (DSL). Domain analysis was carried out using two case studies, inferring the basic requirements of the language. The paper further introduces a language calculus definition, discusses how it fits the domain analysis, and notes any issues found in our approach.
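
    The following toy sketch conveys the write-once, generate-per-platform idea in Python; MobDSL's actual syntax and calculus are not given in the abstract, so the widget model and both generators are invented for illustration:

        from dataclasses import dataclass

        @dataclass
        class Button:
            label: str
            action: str  # name of the handler to invoke

        def gen_android(widget: Button) -> str:
            # Emit a (simplified) Android XML layout fragment.
            return (f'<Button android:text="{widget.label}" '
                    f'android:onClick="{widget.action}"/>')

        def gen_ios(widget: Button) -> str:
            # Emit a (simplified) Objective-C construction snippet.
            return (f'[button setTitle:@"{widget.label}" '
                    f'forState:UIControlStateNormal];')

        ok = Button(label="OK", action="onOk")
        print(gen_android(ok))
        print(gen_ios(ok))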

    Evaluation of optimization techniques for aggregation

    Aggregations are almost always performed at the top of the operator tree, after all selections and joins in a SQL query. They can, however, be pushed below joins, which can make the subsequent joins much cheaper when done properly. Although some enumeration algorithms that consider eager aggregation have been proposed, no sufficient evaluations are available to guide the adoption of this technique in practice, and no evaluations have been done on real data sets and real queries with estimated cardinalities. This means it is not known how eager aggregation performs in the real world. In this thesis, a new estimation method for group-by and join, combining a traditional estimation method with index-based join sampling, is proposed and evaluated. Two enumeration algorithms that consider eager aggregation are implemented and compared in the context of estimated cardinalities. We find that the new estimation method works well with little overhead and that, under certain conditions, eager aggregation can dramatically accelerate queries.
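
    A small Python illustration of eager aggregation itself (not the thesis's implementation): pre-aggregating on the join key shrinks the input the join must process while producing the same result as aggregating after the join. The tables and column names below are invented:

        from collections import defaultdict

        orders = [("c1", 10.0), ("c1", 5.0), ("c2", 7.0)]  # (customer_id, amount)
        customers = {"c1": "DE", "c2": "FR"}               # customer_id -> country

        # Lazy: join first, then aggregate by country.
        lazy = defaultdict(float)
        for cid, amount in orders:
            lazy[customers[cid]] += amount

        # Eager: aggregate on the join key first, then join the smaller result.
        pre = defaultdict(float)
        for cid, amount in orders:
            pre[cid] += amount
        eager = defaultdict(float)
        for cid, total in pre.items():
            eager[customers[cid]] += total

        assert lazy == eager  # {'DE': 15.0, 'FR': 7.0}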

    Database Migration: A Literature Review and Case Study

    This literature review provides an overview of various areas of research in database migration. The specific areas addressed are legacy migration, migration between different database models, reverse engineering, schema design and translation, and security. Additional literature providing a general overview of the topic is also considered, along with some case-study literature, with an emphasis on library science studies. The review is then applied to a case-study migration project at the University of North Carolina at Chapel Hill in order to determine where the literature was helpful and where it was not, as well as where more research may be needed. We conclude that the theoretical literature is quite comprehensive, but that literature with more practical application could certainly be strengthened.

    Outline of a Decision Support System for Area-Wide Water Quality Planning

    This working paper outlines requirements for an implementation of a computerized decision support system that addresses the technical aspects of area-wide water quality planning. The framework for this work is the environmental law adopted in the United States in 1972. This law, known as the Federal Water Pollution Control Act Amendments of 1972, specifies various requirements to which both municipal and industrial dischargers must eventually conform. By 1977, municipal waste treatment plants must have secondary treatment facilities in place, and industry must utilize what is referred to as "best practical technology" for waste treatment. Under certain circumstances, as described in Section 303 of the law, further treatment may be required to meet water quality standards. Section 208 of the Act calls for area-wide implementation of technical and management planning, with the objectives of meeting the 1983 water quality goals and establishing a plan for municipal and industrial facilities construction over a twenty-year period. Emphasis is placed on locally controlled planning, on dealing with non-point sources as well as point sources, and on consideration of both structural and nonstructural control methods. The scope of the present examination is limited to those aspects of technical planning which are amenable to implementation within the framework of a computerized decision support system.

    The Case for Learned Index Structures

    Indexes are models: a B-Tree-Index can be seen as a model that maps a key to the position of a record within a sorted array, a Hash-Index as a model that maps a key to the position of a record within an unsorted array, and a BitMap-Index as a model that indicates whether or not a data record exists. In this exploratory research paper, we start from this premise and posit that all existing index structures can be replaced with other types of models, including deep-learning models, which we term learned indexes. The key idea is that a model can learn the sort order or structure of lookup keys and use this signal to effectively predict the position or existence of records. We theoretically analyze under which conditions learned indexes outperform traditional index structures and describe the main challenges in designing learned index structures. Our initial results show that, by using neural nets, we are able to outperform cache-optimized B-Trees by up to 70% in speed while saving an order of magnitude in memory over several real-world data sets. More importantly, though, we believe that the idea of replacing core components of a data management system with learned models has far-reaching implications for future system designs, and that this work provides just a glimpse of what might be possible.
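
    A minimal Python sketch of the core mechanism (a deliberately simple linear stand-in for the paper's neural-net models): fit a key-to-position model over a sorted array, record its worst-case error, and correct each prediction with a bounded local search:

        import bisect

        keys = sorted(range(0, 10_000, 7))  # sorted lookup keys

        # "Train" a linear model: position ≈ slope * key + intercept.
        slope = (len(keys) - 1) / (keys[-1] - keys[0])
        intercept = -slope * keys[0]

        # Worst-case prediction error over the keys, kept as a search bound.
        max_err = max(abs(round(slope * k + intercept) - i)
                      for i, k in enumerate(keys))

        def lookup(key):
            """Predict the position, then binary-search the error window."""
            guess = round(slope * key + intercept)
            lo = max(0, guess - max_err)
            hi = min(len(keys), guess + max_err + 1)
            i = bisect.bisect_left(keys, key, lo, hi)
            return i if i < len(keys) and keys[i] == key else None

        assert lookup(700) == keys.index(700)
        assert lookup(701) is None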

    DESIGNING A GENERALIZED MULTIPLE CRITERIA DECISION SUPPORT SYSTEM

    Decision support systems are of many kinds, depending on the models and techniques employed in them. Multiple criteria decision making (MCDM) techniques constitute an important class of DSS with unique software requirements. This paper stresses the importance of interactive MCDM methods, since these facilitate learning through all stages of the decision making process. We first describe some features of Multiple Criteria Decision Support Systems (MCDSSs) that distinguish them from classical DSSs. We then outline a software architecture for an MCDSS which has three basic components: a Dialog Manager, an MCDM Model Manager, and a Data Manager. We describe the interactions that occur among these three software components in an integrated MCDSS and outline a design for the Data Manager based on a concept of levels of data abstraction.
    Information Systems Working Papers Series
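
    The sketch below gives one plausible Python rendering of the three-component split; the interfaces are hypothetical (the abstract only names the components), and the weighted-sum model stands in for whichever MCDM techniques the Model Manager would actually host:

        class DataManager:
            """Serves alternatives at one level of data abstraction."""
            def __init__(self, alternatives):
                self._alternatives = alternatives  # name -> criterion scores

            def fetch(self):
                return self._alternatives

        class MCDMModelManager:
            """Applies an MCDM technique; here, a simple weighted sum."""
            def rank(self, alternatives, weights):
                score = lambda c: sum(w * c[k] for k, w in weights.items())
                return sorted(alternatives,
                              key=lambda name: -score(alternatives[name]))

        class DialogManager:
            """Interactive loop: the user revises weights, sees new rankings."""
            def __init__(self, data, model):
                self.data, self.model = data, model

            def iterate(self, weights):
                return self.model.rank(self.data.fetch(), weights)

        dm = DialogManager(
            DataManager({"site A": {"cost": 3, "quality": 9},
                         "site B": {"cost": 8, "quality": 5}}),
            MCDMModelManager())
        print(dm.iterate({"cost": -0.4, "quality": 0.6}))  # ['site A', 'site B']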