
    DIPBench: An Independent Benchmark for Data-Intensive Integration Processes

    The integration of heterogeneous data sources is one of the main challenges in data engineering. In the absence of an independent and universal benchmark for data-intensive integration processes, we propose a scalable benchmark, called DIPBench (Data-Intensive integration Process Benchmark), for evaluating the performance of integration systems. The benchmark applies to subscription systems, such as replication servers and distributed or federated DBMS, as well as to message-oriented middleware platforms, such as Enterprise Application Integration (EAI) servers and Extraction-Transformation-Loading (ETL) tools. To achieve this universal view of integration processes, the benchmark is designed in a conceptual, process-driven way and comprises 15 integration process types. We specify the source and target data schemas and provide a tool suite for initializing the external systems, executing the benchmark, and monitoring the integration system's performance. The core benchmark execution can be influenced through three scale factors. Finally, we discuss a metric for evaluating the measured integration system's performance, and we illustrate our reference benchmark implementation for federated DBMS.
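    As a rough illustration of how such a benchmark driver might be organized, the Python sketch below runs each of the 15 process types under a set of scale-factor settings and reports a simple aggregate. The factor names, the measurement harness, and the summary metric here are invented placeholders, not DIPBench's actual definitions.

```python
import time

# Hypothetical scale-factor settings; the abstract names three scale
# factors but not their identities, so these labels are assumptions.
SCALE_FACTORS = {"data_volume": 10, "process_mix": 1, "load_frequency": 5}

def run_process(process_type: int, factors: dict) -> float:
    """Execute one integration process type and return its latency in seconds."""
    start = time.perf_counter()
    # ... invoke the system under test (replication server, ETL tool, ...) here
    return time.perf_counter() - start

def run_benchmark(factors: dict) -> dict:
    """Run all 15 process types and collect per-type latencies."""
    return {p: run_process(p, factors) for p in range(1, 16)}

if __name__ == "__main__":
    results = run_benchmark(SCALE_FACTORS)
    # A simple summary statistic; DIPBench's actual metric is defined in the paper.
    print("mean latency (s):", sum(results.values()) / len(results))
```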

    Multi Echelon Inventory Optimization


    Combining experimental and theoretical methods to learn about the reactivity of gas-processing metalloenzymes

    After enzymes were first discovered in the late nineteenth century, and for the first seventy years of enzymology, kinetic experiments were the only source of information about enzyme mechanisms. Over the following fifty years, these studies were taken over by approaches that give information at the molecular level, such as crystallography, spectroscopy and theoretical chemistry (as emphasized by the Nobel Prize in Chemistry awarded last year to M. Karplus, M. Levitt and A. Warshel). In this review, we thoroughly discuss the interplay between the information obtained from theoretical and experimental methods, focusing on enzymes that process small molecules such as H2 or CO2 (hydrogenases, CO dehydrogenase and carbonic anhydrase) and that are therefore relevant in the context of energy and the environment. We argue that combining theoretical chemistry (DFT, MD, QM/MM) with detailed investigations that make use of modern kinetic methods, such as protein film voltammetry, is an innovative way of learning about individual steps and/or complex reactions that are part of the catalytic cycles. We illustrate this with recent results from our labs and others, including studies of gas transport along substrate channels, long-range proton transfer, and mechanisms of catalysis, inhibition or inactivation.

    Broader context: Some reactions that are very important in the context of energy and the environment, such as the conversion between CO and CO2, or H+ and H2, are catalyzed in living organisms by large and complex enzymes that use inorganic active sites to transform substrates, chains of redox centers to transfer electrons, ionizable amino acids to transfer protons, and networks of hydrophobic cavities to guide the diffusion of substrates and products within the protein. This highly sophisticated biological plumbing and wiring makes turnover frequencies of thousands of substrate molecules per second possible. Understanding the molecular details of catalysis is still a challenge. We explain in this review how a great deal of information can be obtained using an interdisciplinary approach that combines state-of-the-art kinetics and computational chemistry. This differs from, and complements, the more traditional strategies that consist in trying to observe the catalytic intermediates using methods that rely on the interaction between light and matter, such as X-ray diffraction and spectroscopic techniques.
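    To make the link between electrochemistry and turnover concrete, the sketch below applies the standard protein film voltammetry relation i_lim = n · F · A · Γ · k_cat, which converts a limiting catalytic current into a turnover frequency for an adsorbed enzyme film. The numerical values are illustrative assumptions only and are not taken from the review.

```python
F = 96485.33  # Faraday constant, C/mol

def turnover_frequency(i_lim: float, n: int, coverage: float, area: float) -> float:
    """k_cat (per second) from the limiting catalytic current of an enzyme film.

    i_lim    : limiting current (A)
    n        : electrons transferred per catalytic cycle (e.g. 2 for H2 oxidation)
    coverage : electroactive enzyme coverage, Gamma (mol / cm^2)
    area     : electrode area (cm^2)
    """
    return i_lim / (n * F * coverage * area)

# Illustrative numbers (assumed, not from the review):
print(turnover_frequency(i_lim=1e-6, n=2, coverage=1e-12, area=0.03))
# ~173 s^-1
```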

    Programming Using Automata and Transducers

    Automata, the simplest model of computation, have proven to be an effective tool for reasoning about programs that operate over strings. Transducers augment automata to produce outputs and have been used to model string and tree transformations such as natural language translations. The success of these models is primarily due to their closure properties and decision procedures, but these good properties come at the price of limited expressiveness: most models only support finite alphabets and can only represent small classes of languages and transformations. We focus on addressing these limitations and bridging the gap between the theory of automata and transducers and complex real-world applications: Can we extend automata and transducer models to operate over structured and infinite alphabets? Can we design languages that hide the complexity of these formalisms? Can we define executable models that can process the input efficiently?

    First, we introduce succinct models of transducers that can operate over large alphabets and design BEX, a language for analysing string coders. We use BEX to prove the correctness of UTF and BASE64 encoders and decoders. Next, we develop a theory of tree transducers over infinite alphabets and design FAST, a language for analysing tree-manipulating programs. We use FAST to detect vulnerabilities in HTML sanitizers, check whether augmented-reality taggers conflict, and optimize and analyze functional programs that operate over lists and trees. Finally, we lay the foundations of stream processing of hierarchical data such as XML files and program traces. We introduce two new efficient and executable models that process the input in a single left-to-right pass: symbolic visibly pushdown automata and streaming tree transducers. Symbolic visibly pushdown automata are closed under Boolean operations and can specify and efficiently monitor complex properties for hierarchical structures over infinite alphabets. Streaming tree transducers can express and efficiently process complex XML transformations while enjoying decidable procedures.
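    As a minimal illustration of the "symbolic" idea, the Python sketch below implements a deterministic symbolic finite automaton whose transitions are guarded by predicates over an infinite alphabet (here, arbitrary integers). It deliberately omits the stack of the visibly pushdown variant and the Boolean-closure algorithms, and every name in it is invented for this example.

```python
from typing import Callable, List, Set, Tuple

# Transitions carry predicates over an unbounded alphabet instead of
# concrete letters, so one edge can stand for infinitely many symbols.
Predicate = Callable[[int], bool]

class SymbolicAutomaton:
    def __init__(self, initial: int, finals: Set[int],
                 transitions: List[Tuple[int, Predicate, int]]):
        self.initial, self.finals, self.transitions = initial, finals, transitions

    def accepts(self, word: List[int]) -> bool:
        state = self.initial
        for symbol in word:
            for (src, guard, dst) in self.transitions:
                if src == state and guard(symbol):
                    state = dst
                    break
            else:
                return False  # no enabled transition: reject
        return state in self.finals

# Accepts any run of non-negative even numbers followed by one odd number.
sfa = SymbolicAutomaton(
    initial=0, finals={1},
    transitions=[(0, lambda c: c >= 0 and c % 2 == 0, 0),
                 (0, lambda c: c % 2 == 1, 1)])
print(sfa.accepts([2, 4, 100, 7]))  # True
print(sfa.accepts([2, 4]))          # False
```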

    Rowing performance monitoring system development

    The aim of this work was to develop sensory devices and a data acquisition system to facilitate investigations into the mechanics of the rowing system, comprising the rower(s), boat and oars. The parameters to be measured were: boat and seat position, velocity and acceleration; oar force; foot force; oar angle; and rower heart rate.

    An oar force sensor was designed that fitted into the cavity of a modified oarlock. This sensor design is cheap, yields sound results, and its presence is barely noticeable to the rower. A review of previously applied methods of oar force measurement, dating back to before 1900, is included.

    Foot force is of interest to many different fields of research, so there is a large body of literature on foot force measurement; a comprehensive review of this literature was used to aid the design of the required sensor. The combination of non-simple dynamic loading (i.e. time-varying, spatially distributed normal and shear forces) with static foot position distinguishes the problem of measuring the force under the feet during rowing from most previously considered cases. A strain-gauge-based force-sensing plate was designed to measure both the normal force distribution and the unidirectional shear force under the feet. Sample results are presented from a study with international-class New Zealand rowers on a rowing ergometer. The sensor performs well under normal force loadings but needs modification to measure shear accurately; possible modifications are suggested.

    While only a single oar angle, known as the sweep angle, was required to be measured, a sensor combination capable of measuring the spatial orientation of the oar relative to the boat was conceived. A new method of relative orientation estimation, via approximation of the Rodrigues vector, which allows relative weighting of sensory data, was derived. Unfortunately, calibration issues prevented the gathering of meaningful data in the time available. A full theoretical development, including a new calibration scheme that should alleviate the encountered problems, is included.

    While the motion of the rower within the boat is an important consideration in the dynamics of the rowing system, few previous researchers have measured it. These previous methods are briefly described before the sensor used in this study, an optical rotary encoder, is detailed. Differentiation of the encoder signal to obtain seat velocity and acceleration relative to the boat was achieved using a purpose-designed simple Kalman filter.

    The kinematic parameters of the boat, i.e. position, velocity and acceleration, were measured using a combination of an accelerometer and a submerged impeller. The information from these two sensors was combined using a variant of the Kalman filter used to differentiate the encoder signal. The combination of the seat and boat kinematics allows study of the motion of the system's centre of mass.

    Power was supplied to, and data collected from, the above sensory devices by a purpose-built data acquisition system dubbed ORAC (On-the-water Rowing Acquisition Computer). ORAC was designed to transmit the collected information in real time to a remote laptop computer via wireless LAN, but the system used proved to have insufficient range, and hence ORAC was used as a standalone computer.
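    As an illustration of the signal-processing step, the sketch below differentiates a sampled position signal with a generic constant-acceleration Kalman filter. This is a stand-in for the purpose-designed filter described in the abstract, and the noise parameters q and r are illustrative assumptions rather than the thesis's tuned values.

```python
import numpy as np

def kalman_differentiate(positions, dt, q=50.0, r=1e-4):
    """Estimate velocity and acceleration from sampled positions using a
    constant-acceleration Kalman filter (generic sketch; q and r are
    assumed process/measurement noise levels, not the thesis's values)."""
    F = np.array([[1, dt, 0.5 * dt**2],
                  [0, 1,  dt],
                  [0, 0,  1]])
    H = np.array([[1.0, 0.0, 0.0]])           # only position is measured
    Q = q * np.diag([dt**4 / 4, dt**2, 1.0])  # crude process-noise shaping
    R = np.array([[r]])
    x = np.zeros(3)  # state: [position, velocity, acceleration]
    P = np.eye(3)
    estimates = []
    for z in positions:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new position measurement
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ y).ravel()
        P = (np.eye(3) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)  # columns: position, velocity, acceleration

# Example: recover velocity/acceleration of s(t) = 0.5 t^2 sampled at 100 Hz.
t = np.arange(0, 1, 0.01)
est = kalman_differentiate(0.5 * t**2, dt=0.01)
print(est[-1])  # approximately [0.5, 1.0, 1.0]
```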