Generalized Lineage-Aware Temporal Windows: Supporting Outer and Anti Joins in Temporal-Probabilistic Databases
The result of a temporal-probabilistic (TP) join with negation includes, at
each time point, the probability with which a tuple of the positive relation
matches none of the tuples in the negative relation, for a given join
condition. TP outer and anti joins thus resemble the characteristics of
relational outer and anti joins also in the case where there exist time points
at which input tuples from the positive relation have a non-zero probability of
being valid and input tuples from the negative relation have a non-zero
probability of being invalid, respectively. For the computation of TP joins with
negation, we introduce generalized lineage-aware temporal windows, a mechanism
that binds an output interval to the lineages of all the matching valid tuples
of each input relation. We group the windows of two TP relations into three
disjoint sets based on the way attributes, lineage expressions and intervals
are produced. We compute all windows in an incremental manner, and we show that
pipelined computations allow for the direct integration of our approach into
PostgreSQL. We thereby alleviate the redundancies prevalent in the interval
computations of existing approaches, as an extensive experimental evaluation
with real-world datasets demonstrates.
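The per-time-point probability described above can be sketched directly: under the usual tuple-independence assumption of probabilistic databases, a positive tuple matches none of the negative tuples if every overlapping negative tuple is absent. The tuple layout and function name below are illustrative, not the paper's API:

```python
from typing import List, Tuple

# A TP tuple: (probability, start, end) with a half-open validity interval [start, end).
TPTuple = Tuple[float, int, int]

def anti_join_prob(pos: TPTuple, neg: List[TPTuple], t: int) -> float:
    """Probability that `pos` is valid at time t and matches none of the
    (assumed independent) negative tuples that are valid at t."""
    p, s, e = pos
    if not (s <= t < e):
        return 0.0
    result = p
    for q, ns, ne in neg:
        if ns <= t < ne:
            result *= (1.0 - q)  # each overlapping negative tuple must be absent
    return result

# Positive tuple valid on [0, 10) with probability 0.9; two negative tuples.
print(anti_join_prob((0.9, 0, 10), [(0.5, 0, 5), (0.2, 3, 8)], 4))  # 0.9 * 0.5 * 0.8 = 0.36
```

The generalized windows of the paper avoid evaluating this quantity point by point; they bind one output interval to the lineages of all matching tuples at once.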
Beam Loss Patterns at the LHC Collimators: Measurements & Simulations
The Beam Loss Monitoring (BLM) system of the Large Hadron Collider (LHC) detects particle losses of circulating beams and initiates an emergency extraction of the beam in case the BLM thresholds are exceeded. This protection is required because energy deposition in the accelerator equipment due to secondary shower particles can reach critical levels, causing damage to beam-line components and quenches of superconducting magnets. Robust and movable beam-line elements, so-called collimators, are the aperture limitations of the LHC. Consequently, they are exposed to the excess of lost beam particles and their showers. Proton loss patterns at LHC collimators have to be determined to interpret the signals of the BLM detectors and to set adequate BLM thresholds for the protection of collimators and other equipment in case of unacceptably increased loss rates. The first part of this work investigates the agreement of BLM detector measurements with simulations for an LHC-like collimation setup. The setup consists of one LHC collimator and three LHC BLM detectors mounted in the Super Proton Synchrotron (SPS). The geometry is modeled in the Monte Carlo particle code Fluka. The impact scenario of the beam during the measurements is determined for the simulations, and the measured BLM detector signals are compared with the simulated signals. This procedure yields an overall accuracy for the prediction of the BLM signals, and thus also for the prediction of BLM thresholds, by simulations. It includes an assessment of BLM-signal deviations due to simplifications and misalignment of the geometry in the simulation, physics parameters of the simulation, and uncertainties in the beam impact scenario. At the same time, this study is an integral check of the BLM electronics and the data acquisition system. The relative agreement of measurements and simulations ranges between 20% and 70%, depending on the detector type.
The second part of this work is devoted to the prediction of BLM detector signals for the actual LHC collimation geometry and a larger set of collimator types. Again, Fluka was employed as the simulation tool. The relation between the BLM signals and the energy deposition in the collimators, the crucial scaling variable for damage to the collimators, is investigated. The study focuses on the variation of the BLM signals and the BLM signal-to-energy-deposition ratio due to misalignment and different beam impact scenarios. It results in ratios of BLM signal to energy deposition in the collimator which allow BLM thresholds at collimators to be predicted for given damage limits of the collimators.
Snapshot Semantics for Temporal Multiset Relations (Extended Version)
Snapshot semantics is widely used for evaluating queries over temporal data:
temporal relations are seen as sequences of snapshot relations, and queries are
evaluated at each snapshot. In this work, we demonstrate that current
approaches for snapshot semantics over interval-timestamped multiset relations
are subject to two bugs regarding snapshot aggregation and bag difference. We
introduce a novel temporal data model based on K-relations that overcomes these
bugs and prove it to correctly encode snapshot semantics. Furthermore, we
present an efficient implementation of our model as a database middleware and
demonstrate experimentally that our approach is competitive with native
implementations and significantly outperforms such implementations on queries
that involve aggregation. Comment: extended version of a PVLDB paper.
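The snapshot view of a temporal relation can be illustrated with a naive evaluation strategy: expand each interval-timestamped tuple into its snapshots and aggregate per snapshot. This is a sketch of the semantics only; the relation layout is illustrative, and the paper's K-relation middleware avoids this expansion:

```python
from collections import defaultdict

def snapshot_sum(rel):
    """Evaluate SUM(value) at every snapshot of an interval-timestamped
    multiset relation given as (value, start, end) with [start, end)."""
    sums = defaultdict(int)
    for v, s, e in rel:
        for t in range(s, e):  # expand the tuple into its snapshots
            sums[t] += v
    return dict(sums)

rel = [(10, 0, 3), (5, 1, 4), (10, 1, 2)]
print(snapshot_sum(rel))  # {0: 10, 1: 25, 2: 15, 3: 5}
```

Note that the duplicate value 10 at snapshot 1 contributes twice, which is exactly the multiset behavior where interval-based approaches can go wrong.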
Query Results over Ongoing Databases that Remain Valid as Time Passes By (Extended Version)
The ongoing time point now is used to state that a tuple is valid from its
start point onward. For database systems, ongoing time points have far-reaching
implications since they change continuously as time passes by. State-of-the-art
approaches deal with ongoing time points by instantiating them to the reference
time. The instantiation yields query results that are only valid at the chosen
time and get invalidated as time passes by. We propose a solution that keeps
ongoing time points uninstantiated during query processing. We do so by
evaluating predicates and functions at all possible reference times. This
renders query results independent of a specific reference time and yields
results that remain valid as time passes by. As query results, we propose
ongoing relations that include a reference time attribute. The value of the
reference time attribute is restricted by predicates and functions on ongoing
attributes. We describe and evaluate an efficient implementation of ongoing
data types and operations in PostgreSQL. Comment: extended version of an ICDE paper.
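The idea of evaluating a predicate at all reference times, rather than instantiating now, can be sketched for one simple case: a tuple valid over the ongoing interval [s, now) contains a fixed time t exactly at those reference times rt with rt > t (and s <= t). Instead of a boolean that expires, the result is a restriction on the reference time. The function name and integer-time encoding are illustrative assumptions, not the paper's operators:

```python
def contains_fixed(s: int, t: int):
    """For a tuple valid over the ongoing interval [s, now), return the
    smallest reference time rt at which the interval [s, rt) contains the
    fixed time t, or None if it never does. The result then holds for
    every reference time >= the returned value."""
    if s > t:
        return None      # the interval starts after t at every reference time
    return t + 1         # [s, rt) contains t exactly when rt > t

print(contains_fixed(3, 7))  # holds for all reference times rt >= 8
print(contains_fixed(9, 7))  # None: never holds
```

Because the answer is expressed over the reference time, it stays correct as time passes by; an instantiated answer at rt = 7 would have said "false" and become stale one time unit later.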
Comparative Study of the Effect of ACE-Inhibitors and Other Antihypertensive Agents on Proteinuria in Diabetic Patients
Several studies during the past 15 years have shown that antihypertensive therapy with different types of drugs can reduce microalbuminuria or clinical proteinuria and retard the progression toward end-stage renal failure. However, some authors reported disparate renal protective effects of different antihypertensive drugs in diabetic animals and humans. In an attempt to resolve the controversy surrounding this possibility, we previously reported a meta-analysis of published studies in diabetics with microalbuminuria or overt proteinuria treated with conventional agents, angiotensin-converting enzyme (ACE) inhibitors, or calcium antagonists (Ca2+ antagonists). Here we present an updated meta-analysis of published studies in diabetics with microalbuminuria or clinical proteinuria (UProt), treated for ≥ 4 weeks with ACE inhibitors, Ca2+ antagonists, or conventional therapy (diuretic and/or β-blocker). Despite similar blood pressure (BP) reductions, UProt tended to decrease more on ACE inhibitors (on average -45%) than on conventional therapy (on average -23%) or Ca2+ antagonists other than nifedipine (on average -35%); in contrast, UProt tended to increase slightly on nifedipine (on average +5%). On conventional therapy, the regression slope was steeper (4% UProt change per percent BP change) than on ACE inhibitors. On Ca2+ antagonists other than nifedipine, UProt was unchanged at zero BP change, and the regression line for the relationship between changes in UProt and changes in BP (r = 0.55, P < .05) lay in an intermediate position between ACE inhibitors and conventional treatment. Seventy reports also contained data on glomerular filtration rate (GFR). On ACE inhibitors, GFR was on average unchanged but tended to increase slightly with progressive BP reduction (r = -0.55, P < .0001). On conventional therapy or Ca2+ antagonists, variations in GFR were unrelated to changes in BP.
As ACE inhibitors exert a specific antiproteinuric effect even without a change in systemic BP, they are superior to other agents in treating microalbuminuria or overt proteinuria in initially normotensive or mildly hypertensive diabetic patients. On the other hand, when systemic BP can be lowered by 20%, as is desirable in severely hypertensive patients, ACE inhibitors, conventional therapy, and several Ca2+ antagonists all have a distinct antiproteinuric action. In contrast, as the example of nifedipine illustrates, drug-specific intrarenal effects may antagonize a BP-dependent antiproteinuric action and even counteract the effect of lowering systemic pressure. It is of note that ACE inhibitors may, in addition to their antiproteinuric effect, exert a drug-specific beneficial influence on GFR. Am J Hypertens 1994;7:84S-92.
Lineage-Aware Temporal Windows: Supporting Set Operations in Temporal-Probabilistic Databases
In temporal-probabilistic (TP) databases, the combination of the temporal and
the probabilistic dimension adds significant overhead to the computation of set
operations. Although set queries are guaranteed to yield linearly sized output
relations, existing solutions exhibit quadratic runtime complexity. They suffer
from redundant interval comparisons and additional joins for the formation of
lineage expressions. In this paper, we formally define the semantics of set
operations in TP databases and study their properties. For their efficient
computation, we introduce the lineage-aware temporal window, a mechanism that
directly binds intervals with lineage expressions. We suggest the lineage-aware
window advancer (LAWA) for producing the windows of two TP relations in
linearithmic time, and we implement all TP set operations based on LAWA. By
exploiting the flexibility of lineage-aware temporal windows, we perform direct
filtering of irrelevant intervals and finalization of output lineage
expressions and thus guarantee that no additional computational cost or buffer
space is needed. A series of experiments over both synthetic and real-world
datasets show that (a) our approach has predictable performance, depending only
on the input size and not on the number of time intervals per fact or their
overlap, and that (b) it outperforms state-of-the-art approaches in both
temporal and probabilistic databases.
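As a rough illustration of temporal windows (not the paper's LAWA algorithm), the time line of two interval sets can be cut at every endpoint, yielding maximal windows in which the set of valid tuples from either relation stays constant. Sorting the endpoints is the linearithmic part; this toy version then rescans the inputs per window, which LAWA's incremental advancement avoids:

```python
def temporal_windows(r, s):
    """Cut the time line of two interval sets r and s (lists of half-open
    (start, end) pairs) at every endpoint and report, per window, the
    indexes of the tuples from each relation valid throughout it."""
    points = sorted({p for iv in r + s for p in iv})
    windows = []
    for lo, hi in zip(points, points[1:]):
        active_r = [i for i, (a, b) in enumerate(r) if a <= lo and hi <= b]
        active_s = [i for i, (a, b) in enumerate(s) if a <= lo and hi <= b]
        windows.append(((lo, hi), active_r, active_s))
    return windows

r = [(1, 5), (3, 9)]
s = [(2, 7)]
for w in temporal_windows(r, s):
    print(w)  # e.g. ((3, 5), [0, 1], [0]): both r tuples and the s tuple overlap
```

In the paper's setting, each window would additionally carry the lineage expressions of its active tuples, so that set operations can be finalized per window without extra joins.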