Middleware-based Database Replication: The Gaps between Theory and Practice
The need for high availability and performance in data management systems has
been fueling a long running interest in database replication from both academia
and industry. However, academic groups often attack replication problems in
isolation, overlooking the need for completeness in their solutions, while
commercial teams take a holistic approach that often misses opportunities for
fundamental innovation. Over time, this has created a gap between academic
research and industrial practice.
This paper aims to characterize the gap along three axes: performance,
availability, and administration. We build on our own experience developing and
deploying replication systems in commercial and academic settings, as well as
on a large body of prior related work. We sift through representative examples
from the last decade of open-source, academic, and commercial database
replication systems and combine this material with case studies from real
systems deployed at Fortune 500 customers. We propose two agendas, one for
academic research and one for industrial R&D, which we believe can bridge the
gap within 5-10 years. This way, we hope to both motivate and help researchers
in making the theory and practice of middleware-based database replication more
relevant to each other.
Comment: 14 pages. Appears in Proc. ACM SIGMOD International Conference on Management of Data, Vancouver, Canada, June 2008.
A Survey on IT-Techniques for a Dynamic Emergency Management in Large Infrastructures
This deliverable surveys the IT techniques relevant to the three use cases of the EMILI project. It describes the state of the art in four complementary IT areas: data cleansing, supervisory control and data acquisition, wireless sensor networks, and complex event processing. Although the authors have tried to avoid overly technical language and to explain every concept they refer to, the deliverable may still seem rather technical to readers not yet familiar with the techniques it describes.
DataHub: Collaborative Data Science & Dataset Version Management at Scale
Relational databases have limited support for data collaboration, where teams
collaboratively curate and analyze large datasets. Inspired by software version
control systems like git, we propose (a) a dataset version control system,
giving users the ability to create, branch, merge, difference and search large,
divergent collections of datasets, and (b) a platform, DataHub, that gives
users the ability to perform collaborative data analysis building on this
version control system. We outline the challenges in providing dataset version
control at scale.
Comment: 7 pages.
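As a rough illustration of the dataset-version-control idea (not DataHub's actual API; all names below are hypothetical, and merging is omitted for brevity), dataset versions can be modelled as a git-like DAG of immutable snapshots with named branch heads:

```python
import hashlib

class VersionGraph:
    """Hypothetical sketch: dataset versions as a git-like DAG.
    Each commit stores an immutable snapshot (a dict of records)
    plus its parent commit ids; branches are named head pointers."""

    def __init__(self):
        self.commits = {}   # commit id -> (snapshot, parent ids)
        self.branches = {}  # branch name -> head commit id

    def commit(self, branch, snapshot):
        parents = (self.branches[branch],) if branch in self.branches else ()
        raw = repr((sorted(snapshot.items()), parents)).encode()
        cid = hashlib.sha1(raw).hexdigest()[:10]
        self.commits[cid] = (dict(snapshot), parents)
        self.branches[branch] = cid
        return cid

    def branch(self, new_branch, from_branch):
        """Branching is O(1): copy the head pointer, share all history."""
        self.branches[new_branch] = self.branches[from_branch]

    def diff(self, cid_a, cid_b):
        """Records that differ between two dataset versions."""
        a, b = self.commits[cid_a][0], self.commits[cid_b][0]
        return {k: (a.get(k), b.get(k))
                for k in set(a) | set(b) if a.get(k) != b.get(k)}

vg = VersionGraph()
v1 = vg.commit("main", {"row1": "a", "row2": "b"})
vg.branch("exp", "main")
v2 = vg.commit("exp", {"row1": "a", "row2": "B", "row3": "c"})
print(vg.diff(v1, v2))  # {'row2': ('b', 'B'), 'row3': (None, 'c')}
```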
SkyDOT (Sky Database for Objects in the Time Domain): A Virtual Observatory for Variability Studies at LANL
The mining of Virtual Observatories (VOs) is becoming a powerful new method
for discovery in astronomy. Here we report on the development of SkyDOT (Sky
Database for Objects in the Time domain), a new Virtual Observatory, which is
dedicated to the study of sky variability. The site will confederate a number
of massive variability surveys and enable exploration of the time domain in
astronomy. We discuss the architecture of the database and the functionality of
the user interface. An important aspect of SkyDOT is that it is continuously
updated in near real time so that users can access new observations in a timely
manner. The site will also utilize high level machine learning tools that will
allow sophisticated mining of the archive. Another key feature is the real time
data stream provided by RAPTOR (RAPid Telescopes for Optical Response), a new
sky monitoring experiment under construction at Los Alamos National Laboratory
(LANL).
Comment: To appear in SPIE proceedings vol. 4846; 11 pages, 5 figures.
Efficient Management of Short-Lived Data
Motivated by the increasing prominence of loosely-coupled systems, such as
mobile and sensor networks, which are characterised by intermittent
connectivity and volatile data, we study the tagging of data with so-called
expiration times. More specifically, when data are inserted into a database,
they may be tagged with time values indicating when they expire, i.e., when
they are regarded as stale or invalid and thus are no longer considered part of
the database. In a number of applications, expiration times are known and can
be assigned at insertion time. We present data structures and algorithms for
online management of data tagged with expiration times. The algorithms are
based on fully functional, persistent treaps, which are a combination of binary
search trees with respect to a primary attribute and heaps with respect to a
secondary attribute. The primary attribute implements primary keys, and the
secondary attribute stores expiration times in a minimum heap, thus keeping a
priority queue of tuples to expire. A detailed and comprehensive experimental
study demonstrates the well-behavedness and scalability of the approach as well
as its efficiency with respect to a number of competitors.
Comment: Switched to TimeCenter LaTeX style.
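To make the data structure concrete, here is a minimal sketch of a treap ordered as described: a binary search tree on the primary key whose node priorities are expiration times kept in min-heap order, so the earliest-expiring tuple always sits at the root. Note that the paper uses fully functional, persistent treaps; this simplified, mutating version only illustrates the invariant and the expiration sweep.

```python
class Node:
    """BST-ordered by `key`; min-heap-ordered by `expiry` (the priority)."""
    __slots__ = ("key", "expiry", "left", "right")

    def __init__(self, key, expiry):
        self.key, self.expiry = key, expiry
        self.left = self.right = None

def _rot_right(n):
    l = n.left
    n.left, l.right = l.right, n
    return l

def _rot_left(n):
    r = n.right
    n.right, r.left = r.left, n
    return r

def insert(root, key, expiry):
    """BST insert by key, then rotate the new node up while its expiry
    is smaller than its parent's, restoring the min-heap property.
    (Duplicate keys are silently ignored in this sketch.)"""
    if root is None:
        return Node(key, expiry)
    if key < root.key:
        root.left = insert(root.left, key, expiry)
        if root.left.expiry < root.expiry:
            root = _rot_right(root)
    elif key > root.key:
        root.right = insert(root.right, key, expiry)
        if root.right.expiry < root.expiry:
            root = _rot_left(root)
    return root

def _delete_root(root):
    """Rotate the root down toward the smaller-expiry child until it is
    a leaf, then drop it; BST and heap order are preserved throughout."""
    if root.left is None and root.right is None:
        return None
    if root.right is None or (root.left is not None
                              and root.left.expiry < root.right.expiry):
        root = _rot_right(root)
        root.right = _delete_root(root.right)
    else:
        root = _rot_left(root)
        root.left = _delete_root(root.left)
    return root

def expire(root, now):
    """Pop expired tuples; the root always holds the earliest expiry,
    so the treap doubles as a priority queue of tuples to expire."""
    while root is not None and root.expiry <= now:
        root = _delete_root(root)
    return root
```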
Astro-WISE: Chaining to the Universe
The recent explosion of recorded digital data and its processed derivatives
threatens to overwhelm researchers when analysing their experimental data or
when looking up data items in archives and file systems. While current hardware
developments make it possible to acquire, process, and store hundreds of terabytes of data at
the cost of a modern sports car, the software systems to handle these data are
lagging behind. This general problem is recognized and addressed by various
scientific communities, e.g., DATAGRID/EGEE federates compute and storage power
across the high-energy physics community, while the astronomical community is
building an Internet geared Virtual Observatory, connecting archival data.
These large projects either focus on a specific distribution aspect or aim to
connect many sub-communities and have a relatively long trajectory for setting
standards and a common layer. Here, we report "first light" of a very different
solution to the problem initiated by a smaller astronomical IT community. It
provides the abstract "scientific information layer" which integrates
distributed scientific analysis with distributed processing and federated
archiving and publishing. By designing new abstractions and mixing in old ones,
a Science Information System with fully scalable cornerstones has been
achieved, transforming data systems into knowledge systems. This breakthrough
is facilitated by the full end-to-end linking of all dependent data items,
which allows full backward chaining from the observer/researcher to the
experiment. Key is the notion that information is intrinsic to nature, and
hence to the data acquired by a scientific experiment. The new abstraction is
that software systems guide the user to that intrinsic information by forcing full
backward and forward chaining in the data modelling.
Comment: To be published in ADASS XVI ASP Conference Series, 2006, R. Shaw, F. Hill and D. Bell, eds.
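As a loose illustration of the end-to-end linking and backward chaining described above (hypothetical names, not the Astro-WISE API), each data item can carry references to the items it was derived from, so the full provenance of any product is recoverable by walking the dependency links back to the raw observations:

```python
class DataItem:
    """Sketch: every derived item keeps references to the items it was
    made from, forming a dependency graph over the whole archive."""

    def __init__(self, name, depends_on=()):
        self.name = name
        self.depends_on = tuple(depends_on)

def backward_chain(item):
    """Yield the full provenance of `item`, from result back to raw data."""
    seen, stack = set(), [item]
    while stack:
        node = stack.pop()
        if node.name in seen:
            continue
        seen.add(node.name)
        yield node
        stack.extend(node.depends_on)

# e.g. raw exposure -> calibrated frame -> source catalogue
raw = DataItem("raw_exposure")
flat = DataItem("flat_field")
calibrated = DataItem("calibrated_frame", [raw, flat])
catalogue = DataItem("source_catalogue", [calibrated])
print([d.name for d in backward_chain(catalogue)])
# ['source_catalogue', 'calibrated_frame', 'flat_field', 'raw_exposure']
```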
State-of-the-art on evolution and reactivity
This report starts, in Chapter 1, by outlining aspects of querying and updating resources on
the Web and on the Semantic Web, including the development of query and update languages
to be carried out within the Rewerse project.
From this outline, it becomes clear that several existing research areas and topics are of
interest for this work in Rewerse. In the remainder of this report we present
state-of-the-art surveys in a selection of such areas and topics. More precisely: in Chapter 2 we give
an overview of logics for reasoning about state change and updates; Chapter 3 briefly describes existing update languages for the Web and for updating logic programs;
in Chapter 4 event-condition-action rules, both in the context of active database systems and
in the context of semistructured data, are surveyed; and in Chapter 5 we give an overview of some relevant rule-based agent frameworks.
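For readers unfamiliar with the event-condition-action (ECA) paradigm surveyed in Chapter 4, here is a minimal, purely illustrative sketch (hypothetical names, one possible reading of the model): on each incoming event, every rule whose event type matches and whose condition holds fires its action.

```python
# Minimal ECA engine sketch: rules are (event type, condition, action)
# triples; dispatch() evaluates them against each incoming event.

rules = []

def rule(event_type, condition, action):
    rules.append((event_type, condition, action))

def dispatch(event):
    for event_type, condition, action in rules:
        if event["type"] == event_type and condition(event):
            action(event)

# ON insert of a stock quote, IF the price dropped >10%, DO raise an alert.
rule("insert_quote",
     lambda e: e["new_price"] < 0.9 * e["old_price"],
     lambda e: print(f"ALERT: {e['symbol']} fell to {e['new_price']}"))

dispatch({"type": "insert_quote", "symbol": "ACME",
          "old_price": 100.0, "new_price": 85.0})
```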