
    Redundant data management system

    The redundant data management system solves the problem of operating redundant equipment in a real-time environment, where failures must be detected, isolated, and switched out in a simple manner. The system consists of a quadruply-redundant computer, input/output control units, and data buses, and it inherently provides the failure detection, isolation, and switching functions.
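
    The failure detection, isolation, and switching idea can be pictured as majority voting across redundant channels. A minimal sketch; the channel names, values, and voting rule are invented for illustration, not the system's actual design:

    ```python
    # Hypothetical quad-redundant voting: a healthy channel that
    # disagrees with the majority is isolated (switched out).
    from collections import Counter

    def vote(channels, failed):
        """Majority-vote over healthy channels; isolate dissenters."""
        healthy = {ch: v for ch, v in channels.items() if ch not in failed}
        winner, count = Counter(healthy.values()).most_common(1)[0]
        if count <= len(healthy) // 2:
            raise RuntimeError("no majority among healthy channels")
        # Isolation: disagreeing channels are excluded from future votes.
        failed |= {ch for ch, v in healthy.items() if v != winner}
        return winner, failed

    failed = set()
    value, failed = vote({"A": 7, "B": 7, "C": 9, "D": 7}, failed)
    print(value, sorted(failed))  # 7 ['C']
    ```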

    Automated Data Management Information System (ADMIS)

    ADMIS stores and controls data and documents associated with the manned space flight effort. The system contains all data oriented toward a specific document and is the primary source of the reports it generates. Each group of records is composed of one document record, one distribution record for each recipient of the document, and one summary record.
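
    The record grouping described above lends itself to a simple structured representation. A hedged sketch; the field names are invented for illustration, not the actual ADMIS layout:

    ```python
    # Illustrative record group: one document record, one distribution
    # record per recipient, and one summary record (field names assumed).
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class DocumentRecord:
        number: str
        title: str

    @dataclass
    class DistributionRecord:
        recipient: str  # one such record exists per recipient

    @dataclass
    class SummaryRecord:
        abstract: str

    @dataclass
    class RecordGroup:
        document: DocumentRecord                # exactly one
        distribution: List[DistributionRecord]  # one per recipient
        summary: SummaryRecord                  # exactly one

    group = RecordGroup(
        DocumentRecord("D-100", "Flight readiness report"),
        [DistributionRecord("Program Office"), DistributionRecord("QA")],
        SummaryRecord("Summary of readiness status."),
    )
    print(len(group.distribution))  # 2
    ```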

    The Dark Energy Survey Data Management System

    The Dark Energy Survey collaboration will study cosmic acceleration with a 5000 deg² grizY survey in the southern sky over 525 nights from 2011 to 2016. The DES data management (DESDM) system will be used to process and archive these data and the resulting science-ready data products. The DESDM system consists of an integrated archive, a processing framework, an ensemble of astronomy codes, and a data access framework. We are developing the DESDM system for operation in the high performance computing (HPC) environments at NCSA and Fermilab. Operating the DESDM system in an HPC environment offers both speed and flexibility. We will employ it for our regular nightly processing needs, and for more compute-intensive tasks such as large-scale image coaddition campaigns, extraction of weak lensing shear from the full survey dataset, and massive seasonal reprocessing of the DES data. Data products will be available to the Collaboration and later to the public through a virtual-observatory-compatible web portal. Our approach leverages investments in publicly available HPC systems, greatly reducing hardware and maintenance costs to the project, which must deploy and maintain only the storage, database platforms, and orchestration and web portal nodes that are specific to DESDM. In Fall 2007, we tested the current DESDM system on both simulated and real survey data. We used TeraGrid to process 10 simulated DES nights (3 TB of raw data), ingesting and calibrating approximately 250 million objects into the DES Archive database. We also used DESDM to process and calibrate over 50 nights of survey data acquired with the Mosaic2 camera. Comparison to truth tables in the case of the simulated data, and internal crosschecks in the case of the real data, indicate that astrometric and photometric data quality is excellent.
    Comment: To be published in the proceedings of the SPIE conference on Astronomical Instrumentation (held in Marseille in June 2008). This preprint is made available with the permission of SPIE. Further information, together with a preprint containing full-quality images, is available at http://desweb.cosmology.uiuc.edu/wik
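
    At a very high level, the nightly processing described above amounts to a framework dispatching an ensemble of codes over each night's exposures. The sketch below is a toy illustration under that reading; the stage names and interfaces are invented, not the actual DESDM pipeline:

    ```python
    # Toy nightly-processing loop: each stage would in practice be a
    # batch job submitted to an HPC system (stage names are invented).
    PIPELINE = [
        "crosstalk_correction",
        "flat_fielding",
        "astrometric_calibration",
        "photometric_calibration",
        "catalog_ingest",
    ]

    def process_night(night_id, exposures):
        """Run every pipeline stage over a night's exposures, then archive."""
        for stage in PIPELINE:
            print(f"night {night_id}: {stage} on {len(exposures)} exposures")
        print(f"night {night_id}: ingesting calibrated objects into archive DB")

    process_night("2007-10-15", exposures=list(range(300)))
    ```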

    PANGAEA information system for glaciological data management

    Specific parameters determined on cores from continental ice sheets or glaciers can be used to reconstruct former climate. To use this scientific resource effectively, an information system is needed which guarantees consistent long-term storage of data and provides easy access for the scientific community. An information system to archive any data of paleoclimatic relevance, together with the related metadata, raw data, and evaluated paleoclimatic data, is presented. The system, based on a relational database, provides standardized import and export routines, easy access with uniform retrieval functions, and tools for the visualization of the data. The network is designed as a client/server system providing access through the Internet, with proprietary client software offering full functionality, or read-only access to published data via the World Wide Web.
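
    A relational layout for core measurements plus their metadata might look like the following minimal sketch, assuming a simple two-table schema with invented names (not PANGAEA's actual data model):

    ```python
    # Two-table sketch: core metadata plus per-depth measurements.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE core (
        core_id   INTEGER PRIMARY KEY,
        site      TEXT,   -- ice sheet or glacier location
        latitude  REAL,
        longitude REAL
    );
    CREATE TABLE measurement (
        core_id   INTEGER REFERENCES core(core_id),
        depth_m   REAL,   -- depth within the core, metres
        parameter TEXT,   -- e.g. an isotope ratio
        value     REAL
    );
    """)
    con.execute("INSERT INTO core VALUES (1, 'Summit, Greenland', 72.6, -37.6)")
    con.execute("INSERT INTO measurement VALUES (1, 10.5, 'delta18O', -35.2)")
    # A uniform retrieval: all values of one parameter for one core.
    rows = con.execute("""SELECT depth_m, value FROM measurement
                          WHERE core_id = 1 AND parameter = 'delta18O'""").fetchall()
    print(rows)  # [(10.5, -35.2)]
    ```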

    Adaptive Data Stream Management System Using Learning Automata

    In many modern applications, data arrive as infinite, rapid, unpredictable, and time-variant data elements known as data streams. Systems able to process data streams with such properties are called Data Stream Management Systems (DSMS). Because of the unpredictable and time-variant properties of data streams, as well as of the system itself, adaptivity is a major requirement for every DSMS. Accordingly, determining the parameters that affect the most important performance metric of a DSMS (i.e., response time) and analysing them informs the design of an adaptive DSMS. In this paper, the parameters affecting the response time of a DSMS are studied and analysed, and a solution is proposed for DSMS adaptivity. The proposed adaptive DSMS architecture includes a learning unit that periodically evaluates the system to adjust each tuneable effective parameter to its optimal value. Learning automata are used as the learning mechanism of the learning unit to adjust the values of the tuneable effective parameters. Thus, when the system faces changes, the learning unit restores performance by tuning each tuneable effective parameter toward its optimum value. Evaluation results illustrate that, after a period of operation, the parameters reach their optimum values and the DSMS's adaptivity improves considerably.
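
    The tuning loop can be sketched with a linear reward-inaction (L_RI) automaton choosing among candidate values of one tuneable parameter; the environment model, the reward rule, and the candidate values below are invented for illustration:

    ```python
    # L_RI automaton tuning one assumed parameter (a buffer size): move
    # probability toward an action whenever it improves response time.
    import random

    actions = [64, 128, 256, 512]            # candidate values (assumed)
    p = [1.0 / len(actions)] * len(actions)  # action probabilities
    rate = 0.1                               # learning rate

    def response_time(buffer_size):
        """Stand-in environment: pretend 256 is the optimum."""
        return abs(buffer_size - 256) / 256 + random.random() * 0.1

    best = float("inf")
    for _ in range(2000):
        i = random.choices(range(len(actions)), weights=p)[0]
        rt = response_time(actions[i])
        if rt < best:                        # reward: response time improved
            best = rt
            # Move probability mass toward the rewarded action.
            p = [pj + rate * (1 - pj) if j == i else pj * (1 - rate)
                 for j, pj in enumerate(p)]
        # On penalty: probabilities left unchanged (reward-inaction).

    print("converged to:", actions[p.index(max(p))])  # typically 256
    ```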

    Online monitoring system and data management for KamLAND

    On January 22, 2002, KamLAND started data-taking. The KamLAND detector is a complex system consisting of liquid scintillator, buffer oil, a spherical balloon, and other components. To maintain detector safety, we constructed a monitoring system which collects detector status information such as balloon weight and liquid scintillator oil level. In addition, we constructed a continuous Rn monitoring system for ⁷Be solar neutrino detection. The KamLAND monitoring system consists of various networks (LON, 1-Wire, and TCP/IP), and these are indispensable for continuous experimental data acquisition.
    Comment: Submitted to Nucl. Instrum. Meth.
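
    The status-collection side can be pictured as a periodic polling loop over named sensors. A hedged sketch; the sensor names, thresholds, and readout stub are assumptions, not KamLAND's actual interfaces:

    ```python
    # Poll each status sensor, compare against assumed safe ranges, and
    # report; real readouts would go over LON, 1-Wire, or TCP/IP.
    import time

    SENSORS = {
        "balloon_weight_kg":    (1000.0, 1200.0),  # (min, max) assumed
        "scintillator_level_m": (9.5, 10.5),
    }

    def read_sensor(name):
        """Stub standing in for the actual network readout."""
        return 1100.0 if name == "balloon_weight_kg" else 10.0

    def poll_once():
        for name, (lo, hi) in SENSORS.items():
            value = read_sensor(name)
            status = "OK" if lo <= value <= hi else "ALARM"
            print(f"{name}: {value} [{status}]")

    poll_once()
    # In practice this would run continuously, e.g.:
    # while True: poll_once(); time.sleep(60)
    ```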

    Seismic data clustering management system

    Over recent years, seismic images have played an increasingly vital role in the study of earthquakes. The large volume of seismic data that has accumulated has created the need for sophisticated systems to manage this kind of data. Seismic interpretation can play a much more active role in the evaluation of large volumes of data by providing, at an early stage, vital information relating to the framework of potential producing levels [1]. This work presents a novel method to manage and analyse seismic data. The data are initially turned into clustering maps using clustering techniques [2][3][4][5][6], in order to be analysed on the platform. These clustering maps can then be analysed with the user-friendly interface of Seismic 1, which is based on the .NET Framework architecture [7]. This permits porting the application to any Windows-based computer, as well as to many Linux-based environments using the Mono project [8], so the application can run using No-Touch Deployment [7]. The platform supports two ways of processing seismic data. Firstly, a fast multifunctional version of the classical region-growing segmentation algorithm [9][10] is applied to various areas of interest, permitting their precise definition and labelling. Moreover, this algorithm automatically allocates new earthquakes to a particular cluster based upon the centres of gravity of the existing clusters, or creates a new cluster if all centres of gravity lie beyond a user-predefined upper threshold. Secondly, a visual technique is used to record the behaviour of a cluster of earthquakes in a designated area. In this way, the system functions as a dynamic temporal simulator which depicts sequences of earthquakes on a map [11].
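
    The allocation rule for new earthquakes can be sketched as nearest-centroid assignment with a distance threshold; this reads the centre-of-gravity criterion as a distance test, and the metric and threshold value are assumptions:

    ```python
    # Assign each new event to the nearest cluster centroid, or open a
    # new cluster when every centroid is farther than the threshold.
    import math

    def allocate(event, clusters, threshold):
        """event: (lat, lon); clusters: list of lists of events."""
        def centroid(c):
            return (sum(e[0] for e in c) / len(c),
                    sum(e[1] for e in c) / len(c))
        if clusters:
            i, d = min(((i, math.dist(event, centroid(c)))
                        for i, c in enumerate(clusters)), key=lambda t: t[1])
            if d <= threshold:
                clusters[i].append(event)
                return i
        clusters.append([event])  # all centroids too far: new cluster
        return len(clusters) - 1

    clusters = []
    for quake in [(38.0, 23.7), (38.1, 23.8), (40.6, 22.9)]:
        allocate(quake, clusters, threshold=0.5)
    print(len(clusters))  # 2
    ```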

    IMPLEMENTING A MODERN TEMPORAL DATA MANAGEMENT SYSTEM

    Temporal data management is a concept that has been around for many years. A temporal data management system (TDMS) manages data that is tracked over time. In this paper, the authors present an Oracle-based implementation of a TDMS that provides access to temporal data. The design and implementation are presented at a high level, highlighting significant features such as reference intervals and temporal relationships. The most notable TDMS benefits are a semi-portable solution and an implementation that makes maximal use of native database features. The paper concludes with an evaluation of the TDMS implementation, including a feature comparison and benchmarking.
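
    Reference intervals and temporal relationships can be illustrated with generic interval logic. A minimal sketch in Python, not the paper's Oracle implementation:

    ```python
    # Half-open validity intervals with two common temporal relationships.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Interval:
        start: date
        end: date  # half-open: [start, end)

        def overlaps(self, other):
            return self.start < other.end and other.start < self.end

        def during(self, other):
            return other.start <= self.start and self.end <= other.end

    employment = Interval(date(2019, 1, 1), date(2023, 1, 1))
    project    = Interval(date(2020, 6, 1), date(2021, 6, 1))
    print(project.during(employment))   # True
    print(project.overlaps(employment)) # True
    ```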

    GSE data management system programmer's/user's manual

    The GSE data management system is a computerized program which provides a central storage source for key data associated with the mechanical ground support equipment (MGSE). Eight major sort modes can be requested by the user. Attributes printed automatically with each sort include the GSE end item number, description, class code, functional code, fluid media, use location, design responsibility, weight, cost, quantity, dimensions, and applicable documents. Multiple subsorts are available for the class code, functional code, fluid media, use location, design responsibility, and applicable document categories. These sorts and how to use them are described. The program and GSE data bank may be easily updated and expanded.
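
    A major sort with subsorts over such records reduces to multi-key ordering. A toy illustration; the attribute values below are invented:

    ```python
    # Major sort on class code, subsorted by fluid media, then end item.
    records = [
        {"end_item": "G-102", "class_code": "B", "fluid_media": "GN2", "weight": 140},
        {"end_item": "G-017", "class_code": "A", "fluid_media": "LOX", "weight": 950},
        {"end_item": "G-044", "class_code": "A", "fluid_media": "GN2", "weight": 220},
    ]

    for rec in sorted(records, key=lambda r: (r["class_code"],
                                              r["fluid_media"],
                                              r["end_item"])):
        print(rec["end_item"], rec["class_code"], rec["fluid_media"], rec["weight"])
    # G-044 A GN2 220
    # G-017 A LOX 950
    # G-102 B GN2 140
    ```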