Coordination-Free Byzantine Replication with Minimal Communication Costs
State-of-the-art fault-tolerant and federated data management systems rely on fully-replicated designs in which all participants have equivalent roles. Consequently, these systems offer only limited scalability and are ill-suited for high-performance data management. As an alternative, we propose a hierarchical design in which a Byzantine cluster manages data, while an arbitrary number of learners can reliably learn these updates and use the corresponding data.
To realize our design, we propose the delayed-replication algorithm, an efficient solution to the Byzantine learner problem that is central to our design. The delayed-replication algorithm is coordination-free, scalable, and has minimal communication cost for all participants involved. As such, it opens the door to new high-performance fault-tolerant and federated data management systems. To illustrate this, we show that the delayed-replication algorithm is not only useful to support specialized learners, but can also be used to reduce the overall communication cost of permissioned blockchains and to improve their storage scalability.
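The Byzantine learner problem at the heart of this design admits a simple safety intuition: a learner may only accept an update once enough replicas vouch for it that at least one of them must be honest. Below is a minimal sketch of that intuition, assuming at most f Byzantine replicas and with all names hypothetical; the paper's delayed-replication algorithm goes further, delaying and batching transmissions to minimize communication:

```python
from collections import defaultdict

class Learner:
    """Accept an update once f + 1 replicas vouch for it: with at most
    f Byzantine replicas, f + 1 matching copies include at least one
    honest sender. Illustrative sketch only, not the paper's protocol."""

    def __init__(self, f: int):
        self.f = f
        self.votes = defaultdict(set)   # (seq, update) -> set of senders
        self.learned = {}               # seq -> accepted update

    def receive(self, seq: int, update: bytes, sender: str) -> None:
        self.votes[(seq, update)].add(sender)
        if len(self.votes[(seq, update)]) > self.f:   # f + 1 matching copies
            self.learned.setdefault(seq, update)

learner = Learner(f=1)
learner.receive(0, b"put(k, v)", "r1")
learner.receive(0, b"put(k, v)", "r2")   # second matching copy: accept
assert learner.learned[0] == b"put(k, v)"
```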
Statistical structures for internet-scale data management
Efficient query processing in traditional database management systems relies on statistics on base data. For centralized systems, there is a rich body of research on such statistics, from simple aggregates to more elaborate synopses such as sketches and histograms. For Internet-scale distributed systems, on the other hand, statistics management still poses major challenges. With this work we aim to endow peer-to-peer data management over structured overlays with the power of such statistical information, with emphasis on meeting the scalability challenge. To this end, we first contribute efficient, accurate, and decentralized algorithms that compute key aggregates such as Count, CountDistinct, Sum, and Average. We then show how to construct several types of histograms, such as simple Equi-Width, Average-Shifted Equi-Width, and Equi-Depth histograms. We present a full-fledged open-source implementation of these tools for distributed statistical synopses, and report on a comprehensive experimental evaluation of our contributions in terms of efficiency, accuracy, and scalability.
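For a flavor of how such aggregates can be computed without central coordination, the sketch below estimates the network-wide Average via push-sum gossip. This is a standard decentralized technique chosen here as an assumption for illustration, not necessarily the algorithm contributed in the paper, which targets structured overlays:

```python
import random

def push_sum_average(values, rounds=60, seed=42):
    """Gossip-based (push-sum) estimation of the global Average.

    Each node i holds a pair (s_i, w_i), initially (value_i, 1).
    In every round it keeps half of its pair and ships the other half
    to a random peer; each ratio s_i / w_i converges to the true mean.
    Generic sketch; the paper's algorithms run over structured overlays.
    """
    rng = random.Random(seed)
    n = len(values)
    s = [float(v) for v in values]   # running sums
    w = [1.0] * n                    # running weights
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)     # random gossip partner
            s[i] *= 0.5; w[i] *= 0.5     # keep one half locally
            s[j] += s[i]; w[j] += w[i]   # ship the other half to j
    return [si / wi for si, wi in zip(s, w)]

print(push_sum_average([2, 4, 6, 8]))   # every local estimate ≈ 5.0
```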
The Mirror MMDBMS architecture
Handling large collections of digitized multimedia data, usually referred to as multimedia digital libraries, is a major challenge for information technology. The Mirror DBMS is a research database system developed to better understand the kind of data management that is required in the context of multimedia digital libraries (see also URL http://www.cs.utwente.nl/~arjen/mmdb.html). Its main features are an integrated approach to both content management and (traditional) structured data management, and the implementation of an extensible object-oriented logical data model on a binary relational physical data model. The focus of this work is on designing for scalability.
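For a flavor of the mapping from an object-oriented logical model onto a binary relational physical model, the hypothetical sketch below decomposes a logical object type into one (object-id, value) table per attribute, in the style of binary-relational storage kernels; the object type and attribute names are invented for illustration:

```python
# Decomposed, binary-relational storage: each attribute of a logical
# object type is stored as its own (oid, value) table. Hypothetical
# sketch of the mapping idea, not the Mirror DBMS implementation.

logical_objects = [
    {"oid": 1, "title": "sunset.jpg", "width": 1024},
    {"oid": 2, "title": "anthem.mid", "width": None},
]

def decompose(objects, attributes):
    """Map objects onto one binary (oid, value) relation per attribute;
    a missing value simply has no entry, which keeps the model extensible."""
    return {
        attr: [(o["oid"], o[attr]) for o in objects if o.get(attr) is not None]
        for attr in attributes
    }

tables = decompose(logical_objects, ["title", "width"])
# {'title': [(1, 'sunset.jpg'), (2, 'anthem.mid')], 'width': [(1, 1024)]}
```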
The SEALS Yardsticks for Ontology Management
This paper describes the first SEALS evaluation campaign over ontology engineering tools (i.e., the SEALS Yardsticks for Ontology Management). It presents the different evaluation scenarios defined to evaluate the conformance, interoperability, and scalability of these tools, and the test data used in these scenarios.
Multifaceted Faculty Network Design and Management: Practice and Experience Report
We report on our experience with multidimensional aspects of our faculty's
network design and management, including some unique aspects such as
campus-wide VLANs and ghosting, security and monitoring, switching and routing,
and others. We outline a historical perspective on certain research, design, and development decisions, and we discuss in detail the network topology, its scalability and management, the services our network provides, and its evolution.
We overview the security aspects of network management, as well as data management, automation, and the use of these data by other members of the faculty's IT group.
Data Management and Mining in Astrophysical Databases
We analyse the issues involved in the management and mining of astrophysical
data. The traditional approach to data management in the astrophysical field is
not able to keep up with the increasing size of the data gathered by modern
detectors. Automatic tools for information extraction from large datasets, i.e., data mining techniques such as clustering and classification algorithms, will assume an essential role in astrophysical research. This asks
for an approach to data management based on data warehousing, emphasizing the
efficiency and simplicity of data access; efficiency is obtained using
multidimensional access methods and simplicity is achieved by properly handling
metadata. Clustering and classification techniques on large datasets pose additional requirements: computational and memory scalability with respect to the data size, and interpretability and objectivity of the clustering or classification results. In this study we address some possible solutions.
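To illustrate the scalability requirement, the sketch below shows a generic single-pass, constant-memory variant of k-means that touches each observation once and keeps only k centroids in memory. This is an assumed illustration of the requirement, not one of the specific techniques examined in the study:

```python
def online_kmeans(stream, k, lr=0.1):
    """Single-pass k-means: O(k) memory and one sequential scan, so it
    scales to datasets that do not fit in memory. Generic sketch,
    not the paper's specific algorithms."""
    centers = []
    for x in stream:
        if len(centers) < k:
            centers.append(list(x))   # seed with the first k points
            continue
        # nudge the nearest centroid toward the new observation
        c = min(centers, key=lambda cand: sum((a - b) ** 2 for a, b in zip(cand, x)))
        for d in range(len(c)):
            c[d] += lr * (x[d] - c[d])
    return centers

points = [(0.1, 0.2), (5.1, 4.9), (0.0, 0.1), (4.8, 5.2), (0.2, 0.0), (5.0, 5.0)]
print(online_kmeans(iter(points), k=2))   # two centroids near (0,0) and (5,5)
```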
Perspectives in astrophysical databases
Astrophysics has become a domain extremely rich in scientific data. Data
mining tools are needed for information extraction from such large datasets.
This asks for an approach to data management emphasizing the efficiency and
simplicity of data access; efficiency is obtained using multidimensional access
methods and simplicity is achieved by properly handling metadata. Moreover,
clustering and classification techniques on large datasets pose additional
requirements in terms of computation and memory scalability and
interpretability of results. In this study we review some possible solutions.