
    Scalable Database Access Technologies for ATLAS Distributed Computing

    ATLAS event data processing requires access to non-event data (detector conditions, calibrations, etc.) stored in relational databases. The database-resident data are crucial for the event-data reconstruction steps and are often required for user analysis. A main focus of ATLAS database operations is the worldwide distribution of the Conditions DB data, which every ATLAS data processing job needs. Since Conditions DB access is critical for operations with real data, we have developed a system in which a different technology can serve as a redundant backup. This redundant database operations infrastructure fully satisfies the requirements of ATLAS reprocessing, as proven at the scale of one billion database queries during two reprocessing campaigns over 0.5 PB of single-beam and cosmics data on the Grid. To collect experience and provide input for a best choice of technologies, several promising options for efficient database access in user analysis were evaluated successfully. We present ATLAS experience with scalable database access technologies and describe our approach to preventing database access bottlenecks in a Grid computing environment.

    Comment: 6 pages, 7 figures. To be published in the proceedings of DPF-2009, Detroit, MI, July 2009, eConf C09072
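    The redundancy described above, falling back to an alternative access technology when the primary database is unavailable, can be sketched as follows. This is a hypothetical illustration, not the ATLAS implementation; the backend names and return values are invented:

    ```python
    # Hypothetical sketch of redundant database access: try the primary
    # backend first, then fall back to a backup technology. Not the actual
    # ATLAS Conditions DB code; backend names and data are illustrative.

    class BackendError(Exception):
        pass

    def query_primary(sql):
        # Stand-in for e.g. a direct connection to the master database.
        raise BackendError("primary unreachable")

    def query_backup(sql):
        # Stand-in for a redundant technology (replica, cache, proxy).
        return [("calibration_constant", 42)]

    def query_conditions(sql, backends=(query_primary, query_backup)):
        """Return the first successful result among redundant backends."""
        last_err = None
        for backend in backends:
            try:
                return backend(sql)
            except BackendError as err:
                last_err = err  # remember the failure, try the next backend
        raise last_err

    print(query_conditions("SELECT * FROM conditions"))  # served by the backup
    ```
    
    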

    Event processing using database technology

    This tutorial deals with applications that help systems and individuals respond to critical conditions in their environments. The identification of critical conditions requires correlating vast amounts of data within and outside an enterprise. Conditions that signal opportunities or threats are defined by complex patterns of data over time, space and other attributes. Systems and individuals have models (expectations) of the behavior of their environments, and applications notify them when reality, as determined by measurements and estimates, deviates from their expectations. Components of event systems are also sent information to validate their current models and to indicate when specific responses are required. Valuable information is that which supports or contradicts current expectations, or that which requires an action on the part of the receiver. A major problem today is information overload; this problem can be addressed by identifying what information is critical, complementing existing pull technology with sophisticated push technology, and filtering out non-critical data.
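    The push-and-filter idea above, notifying a receiver only when a measurement deviates from its model of the environment, can be illustrated with a minimal sketch. The names, model, and tolerance are assumptions for illustration, not from the tutorial:

    ```python
    # Minimal sketch of push-style filtering: notify subscribers only when
    # a measurement deviates from the expected model beyond a tolerance.
    # Model, threshold, and readings are illustrative.

    def make_notifier(expected, tolerance, notify):
        """Return a sink that pushes only critical (deviating) measurements."""
        def on_measurement(value):
            if abs(value - expected) > tolerance:
                notify(value)  # critical: contradicts current expectations
            # otherwise the reading is filtered out as non-critical data
        return on_measurement

    alerts = []
    sink = make_notifier(expected=20.0, tolerance=2.0, notify=alerts.append)
    for reading in [19.5, 20.4, 25.1, 20.0, 14.2]:
        sink(reading)
    print(alerts)  # only the deviating readings: [25.1, 14.2]
    ```
    
    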

    Analysing Temporal Relations – Beyond Windows, Frames and Predicates

    This article proposes an approach that relies on the standard operators of relational algebra (including grouping and aggregation) for processing complex events without requiring window specifications. In this way the approach can process complex event queries of the kind encountered in applications such as emergency management in metro networks. This article presents Temporal Stream Algebra (TSA), which combines the operators of relational algebra with an analysis of temporal relations at compile time. This analysis determines which relational algebra queries can be evaluated against data streams, i.e. it is able to distinguish valid from invalid stream queries. Furthermore, the analysis derives functions similar to the pass, propagation and keep invariants in Tucker et al.'s "Exploiting Punctuation Semantics in Continuous Data Streams". These functions enable the incremental evaluation of TSA queries, the propagation of punctuations, and garbage collection. The evaluation of TSA queries combines bulk-wise and out-of-order processing, which makes it tolerant to workload bursts as they typically occur in emergency management. The approach has been conceived for efficiently processing complex event queries on top of a relational database system. It has been deployed and tested on MonetDB.
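    Punctuation-driven incremental evaluation of the kind the abstract describes can be illustrated with a toy example (a simplification, not the actual TSA operators): events arrive out of order, and a punctuation guaranteeing that no earlier timestamps will follow lets results be emitted and buffered state garbage-collected.

    ```python
    # Toy illustration of punctuation-driven incremental evaluation
    # (a simplification; not the actual TSA operators). Events may arrive
    # out of order; a punctuation with timestamp ts guarantees no event
    # with a smaller timestamp will follow, so results up to ts can be
    # emitted and the buffer garbage-collected.

    def process(stream):
        buffer = []   # out-of-order events awaiting a punctuation
        results = []  # (timestamp, value) pairs emitted in order
        for kind, ts, *rest in stream:
            if kind == "event":
                buffer.append((ts, rest[0]))
            elif kind == "punct":
                ready = sorted(e for e in buffer if e[0] <= ts)
                results.extend(ready)                      # emit in order
                buffer = [e for e in buffer if e[0] > ts]  # garbage-collect
        return results

    stream = [
        ("event", 3, "c"), ("event", 1, "a"), ("event", 2, "b"),
        ("punct", 2),                # no more events with ts <= 2
        ("event", 5, "e"), ("event", 4, "d"),
        ("punct", 5),
    ]
    print(process(stream))  # [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd'), (5, 'e')]
    ```
    
    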

    Digital Forensics Event Graph Reconstruction

    Ontological data representation and data normalization can provide a structured way to correlate digital artifacts. This can reduce the amount of data that a forensics examiner needs to process in order to understand the sequence of events that happened on the system. However, ontology processing suffers from high disk consumption and computational cost. This paper presents Property Graph Event Reconstruction (PGER), a novel data normalization and event correlation system that leverages a native graph database to improve the speed of queries common in ontological data. PGER reduces the processing time of event correlation grammars and maintains accuracy over a relational database storage format.
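    The advantage of a native graph store for event reconstruction can be sketched with a toy property graph: correlated events are linked by edges, so recovering an event sequence is a traversal rather than repeated relational joins. The node properties and edge labels below are invented for illustration; PGER itself uses a real graph database.

    ```python
    # Toy property-graph traversal of the kind a native graph database
    # makes cheap (illustrative only; not the PGER data model). Nodes are
    # events with properties; labeled edges link correlated events, so
    # reconstructing a sequence is a walk along "NEXT" edges.

    nodes = {
        1: {"type": "file_created", "path": "evidence.doc"},
        2: {"type": "process_start", "name": "word.exe"},
        3: {"type": "network_conn", "dst": "203.0.113.5"},
    }
    edges = {1: [(2, "NEXT")], 2: [(3, "NEXT")]}  # event-correlation edges

    def walk(start):
        """Follow NEXT edges from start, returning the event-type sequence."""
        chain, node = [start], start
        while any(lbl == "NEXT" for _, lbl in edges.get(node, [])):
            node = next(n for n, lbl in edges[node] if lbl == "NEXT")
            chain.append(node)
        return [nodes[n]["type"] for n in chain]

    print(walk(1))  # ['file_created', 'process_start', 'network_conn']
    ```
    
    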

    Configuring the LHCb readout network using a database

    The LHCb readout system is composed of hundreds of electronic boards, an event-building network based on Gigabit Ethernet switches, and an online processing farm. The Experiment Control System (ECS) configures the system from the Online Configuration database. This database contains the device parameters, the hierarchical structure, and the connectivity information of the system. In addition, the switches in the event-building network require routing tables that have to be generated according to the connectivity. We apply the Entity Relationship model to represent the connectivity of the system. SQL code builds the routing tables using the information contained in the Configuration database.
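    Deriving per-switch routing tables from stored connectivity information can be sketched as a shortest-path first-hop computation. The topology below is invented for illustration, and the sketch is in Python for brevity; the LHCb system does this in SQL over the Configuration database.

    ```python
    # Hedged sketch of building routing tables from connectivity data
    # (illustrative BFS over an invented topology; the real system
    # generates routing tables with SQL over the Configuration database).

    from collections import deque

    links = {  # hypothetical connectivity: device -> neighbours
        "sw1": ["sw2", "farm1"],
        "sw2": ["sw1", "farm2"],
        "farm1": ["sw1"],
        "farm2": ["sw2"],
    }

    def routing_table(switch):
        """Map each reachable destination to the first hop out of `switch`."""
        table, seen = {}, {switch}
        queue = deque((nbr, nbr) for nbr in links[switch])
        while queue:
            node, first_hop = queue.popleft()
            if node in seen:
                continue
            seen.add(node)
            table[node] = first_hop
            for nbr in links.get(node, []):
                queue.append((nbr, first_hop))
        return table

    print(routing_table("sw1"))  # {'sw2': 'sw2', 'farm1': 'farm1', 'farm2': 'sw2'}
    ```
    
    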

    DataCell: Exploiting the Power of Relational Databases for Efficient Stream Processing

    Designed for complex event processing, DataCell is a research prototype database system in the area of sensor stream systems. Under development at CWI, it belongs to the MonetDB database system family. CWI researchers built a stream engine directly on top of a database kernel, thus exploiting and merging technologies from the stream world and the rich area of database literature. The results are very promising.