164,488 research outputs found
Retrieval of the most relevant facts from data streams joined with slowly evolving dataset published on the web of data
Finding the most relevant facts among dynamic and heterogeneous data published on the Web of Data has received growing attention in recent years. RDF Stream Processing (RSP) engines offer a baseline solution to integrate and process streaming data together with data distributed on the Web. Unfortunately, the time to access and fetch the distributed data can be so high as to put the RSP engine at risk of losing reactiveness, especially when the distributed data is slowly evolving. State-of-the-art work addressed this problem by proposing an architectural solution that keeps a local replica of the distributed data and a baseline maintenance policy to refresh it over time. This doctoral thesis investigates advanced policies that let RSP engines continuously answer top-k queries, which require joining data streams with slowly evolving datasets published on the Web of Data, without violating the reactiveness constraints imposed by the users. In particular, it proposes policies that focus on refreshing only the data in the replica that contributes to the correctness of the top-k results.
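The maintenance idea this abstract describes can be illustrated with a minimal sketch: given a limited refresh budget per evaluation, spend it on the stale replica entries that back the current top-k answer. All names here (Replica, fetch_remote, score_boost, top_k) are invented for illustration and are not from the thesis.

```python
import heapq

def fetch_remote(key):
    # Stand-in for a costly HTTP fetch of distributed Web data;
    # deterministic dummy payload for the sketch.
    return {"score_boost": len(key) % 10}

class Replica:
    """Local replica of slowly evolving remote data, with staleness tracking."""
    def __init__(self, keys):
        self.data = {k: fetch_remote(k) for k in keys}
        self.stale = set()

    def mark_stale(self, key):
        self.stale.add(key)

    def refresh(self, keys):
        for k in keys:
            self.data[k] = fetch_remote(k)
            self.stale.discard(k)

def top_k(stream_window, replica, k, budget):
    # Refresh first the stale entries among the current top-k candidates,
    # spending at most `budget` remote fetches so the engine stays reactive.
    candidates = [key for key, _ in heapq.nlargest(
        k, stream_window.items(), key=lambda kv: kv[1])]
    to_refresh = [c for c in candidates if c in replica.stale][:budget]
    replica.refresh(to_refresh)
    # Join stream values with replica data and return the top-k results.
    scored = {key: v + replica.data[key]["score_boost"]
              for key, v in stream_window.items()}
    return heapq.nlargest(k, scored.items(), key=lambda kv: kv[1])
```

The point of the sketch is only the prioritization: entries outside the top-k candidates may stay stale without affecting the correctness of the answer.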
Stream Sampling for Frequency Cap Statistics
Unaggregated data, in streamed or distributed form, is prevalent and comes from diverse application domains, including interactions of users with web services and IP traffic. Data elements have keys (cookies, users, queries), and elements with different keys interleave. Analytics on such data typically uses statistics stated in terms of the frequencies of keys. The two most common statistics are distinct, the number of active keys in a specified segment, and sum, the sum of the frequencies of keys in the segment. Both are special cases of cap statistics, defined as the sum of frequencies capped by a parameter, which are popular in online advertising platforms. Aggregation by key, however, is costly, requiring state proportional to the number of distinct keys, and therefore we are interested in estimating these statistics, or more generally sampling the data, without aggregation. We present a sampling framework for unaggregated data that uses a single pass (for streams) or two passes (for distributed data) and state proportional to the desired sample size. Our design provides the first effective solution for general frequency cap statistics. Our capped samples provide estimates with tight statistical guarantees for cap statistics and nonnegative unbiased estimates of any monotone non-decreasing frequency statistics. An added benefit of our unified design is facilitating multi-objective samples, which provide estimates with statistical guarantees for a specified set of different statistics, using a single, smaller sample.
Comment: 21 pages, 4 figures, preliminary version will appear in KDD 201
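The family of cap statistics defined in the abstract is easy to state exactly when aggregation is allowed; the sketch below is that exact reference computation, which is precisely what the paper's sampling framework avoids (the per-key counter needs state linear in the number of distinct keys).

```python
from collections import Counter

def cap_statistic(elements, T):
    """Sum over keys of min(frequency, T): the cap-T statistic."""
    freq = Counter(elements)  # per-key aggregation: the costly step
    return sum(min(c, T) for c in freq.values())

stream = ["u1", "u2", "u1", "u3", "u1", "u2"]
# distinct is the cap statistic with T = 1; sum is the cap with T = infinity.
assert cap_statistic(stream, 1) == 3               # number of distinct keys
assert cap_statistic(stream, float("inf")) == 6    # total frequency
assert cap_statistic(stream, 2) == 5               # u1's frequency capped at 2
```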
Process and Data Flow Control in KLOE
The core of the KLOE distributed event-building system is a switched network. The online processes are distributed over a large set of processors in this network. All processes have to coherently change their state of activity as a consequence of local or remote commands. A fast and reliable message system based on the SNMP protocol has been developed. A command server has been implemented as a non-privileged daemon able to respond to "set" and "get" queries on private SNMP variables. This process is able to convert remote set operations into local commands and to automatically map an SNMP subtree onto a user-defined set of process variables. Process activity can be continuously monitored by remotely accessing these variables through the command server. Only the command server is involved in these operations, without disturbing the process flow. Subevents coming from subdetectors are sent to different nodes of a computing farm for the last stage of event building. Based on features of the SNMP protocol and of the KLOE message system, the Data Flow Control System (DFC) is able to rapidly redirect network traffic, taking into account the dynamics of the whole DAQ system in order to ensure coherent subevent addressing in an asynchronous "push" architecture, without introducing dead time. The KLOE DFC is currently working in the KLOE DAQ system. Its main characteristics and performance are discussed.
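The command-server pattern described here (remote "set" operations converted into local commands, "get" used for non-intrusive monitoring) can be sketched without any real SNMP machinery; everything below is an illustrative stand-in, not KLOE code.

```python
class CommandServer:
    """Toy model of a command server mapping variables to local commands."""
    def __init__(self):
        # Process variables exposed to remote monitors.
        self.variables = {"run_state": "idle", "events_built": 0}
        # Variables whose remote "set" triggers a local command.
        self.commands = {"run_state": self._change_state}

    def _change_state(self, value):
        # Local side effect performed in response to a remote set.
        self.variables["run_state"] = value

    def handle_set(self, name, value):
        # A remote set is either converted into a local command
        # or stored as a plain variable update.
        if name in self.commands:
            self.commands[name](value)
        else:
            self.variables[name] = value

    def handle_get(self, name):
        # Monitoring reads touch only the command server's state,
        # leaving the monitored process undisturbed.
        return self.variables[name]
```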
Distributed Database Management Techniques for Wireless Sensor Networks
In sensor networks, the large amount of data generated by sensors greatly influences the lifetime of the network. In order to manage this amount of sensed data in an energy-efficient way, new methods of storage and data querying are needed. To this end, the distributed database approach for sensor networks has proved to be one of the most energy-efficient data storage and query techniques. This paper surveys the state of the art of the techniques used to manage data and queries in wireless sensor networks based on the distributed paradigm. A classification of these techniques is also proposed. The goal of this work is not only to present how data and query management techniques have advanced, but also to show their benefits and drawbacks, and to identify open issues, providing guidelines for further contributions in this type of distributed architecture. This work was partially supported by the Instituto de Telecomunicacoes, Next Generation Networks and Applications Group (NetGNA), Portugal; by the Ministerio de Ciencia e Innovacion, through the Plan Nacional de I+D+i 2008-2011 in the Subprograma de Proyectos de Investigacion Fundamental, project TEC2011-27516; by the Polytechnic University of Valencia, through the PAID-05-12 multidisciplinary projects; by the Government of the Russian Federation, Grant 074-U01; and by National Funding from the FCT-Fundacao para a Ciencia e a Tecnologia through the Pest-OE/EEI/LA0008/2013 Project.
Diallo, O.; Rodrigues, JJPC.; Sene, M.; Lloret, J. (2013). Distributed Database Management Techniques for Wireless Sensor Networks. IEEE Transactions on Parallel and Distributed Systems. PP(99):1-17. https://doi.org/10.1109/TPDS.2013.207
Partout: A Distributed Engine for Efficient RDF Processing
The increasing interest in Semantic Web technologies has led not only to a rapid growth of semantic data on the Web but also to an increasing number of backend applications with already more than a trillion triples in some cases. Confronted with such huge amounts of data and the future growth, existing state-of-the-art systems for storing RDF and processing SPARQL queries are no longer sufficient. In this paper, we introduce Partout, a distributed engine for efficient RDF processing in a cluster of machines. We propose an effective approach for fragmenting RDF data sets based on a query log, allocating the fragments to nodes in a cluster, and finding the optimal configuration. Partout can efficiently handle updates and its query optimizer produces efficient query execution plans for ad-hoc SPARQL queries. Our experiments show the superiority of our approach to state-of-the-art approaches for partitioning and distributed SPARQL query processing.
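Query-log-driven fragmentation can be illustrated with a deliberately simplified sketch (the actual Partout algorithm is more sophisticated): group triples into fragments by predicate, then allocate fragments across nodes starting from the most frequently queried ones so hot fragments do not pile up on one machine. Function and variable names here are invented for illustration.

```python
from collections import Counter, defaultdict

def fragment_and_allocate(triples, query_log_predicates, n_nodes):
    # Fragment the RDF data set: one fragment per predicate.
    fragments = defaultdict(list)
    for s, p, o in triples:
        fragments[p].append((s, p, o))
    # Rank fragments by how often the query log touches their predicate.
    heat = Counter(query_log_predicates)
    order = sorted(fragments, key=lambda p: -heat[p])
    # Round-robin allocation so the hottest fragments spread across nodes.
    allocation = defaultdict(list)
    for i, pred in enumerate(order):
        allocation[i % n_nodes].append(pred)
    return fragments, allocation
```

A real placement would also weigh fragment sizes, co-location of fragments joined together in the log, and update rates; the sketch shows only the log-driven ordering.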
Blazes: Coordination Analysis for Distributed Programs
Distributed consistency is perhaps the most discussed topic in distributed systems today. Coordination protocols can ensure consistency, but in practice they impose undesirable performance costs unless used judiciously. Scalable distributed architectures avoid coordination whenever possible, but under-coordinated systems can exhibit behavioral anomalies under fault, which are often extremely difficult to debug. This raises significant challenges for distributed system architects and developers. In this paper we present Blazes, a cross-platform program analysis framework that (a) identifies program locations that require coordination to ensure consistent executions, and (b) automatically synthesizes application-specific coordination code that can significantly outperform general-purpose techniques. We present two case studies, one using annotated programs in the Twitter Storm system, and another using the Bloom declarative language.
Comment: Updated to include additional materials from the original technical report: derivation rules, output stream label
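The kind of analysis this abstract describes can be sketched in a drastically simplified form (the operator labels and the rule below are invented for illustration, not Blazes' actual annotation language): operators are classified as monotone or non-monotone, and any dataflow path through a non-monotone operator is flagged as a location needing coordination.

```python
# Toy operator classification: monotone operators tolerate reordered or
# duplicated asynchronous inputs; non-monotone ones may emit different
# outputs depending on arrival order, so they need coordination.
MONOTONE = {"filter", "project", "union"}
NON_MONOTONE = {"aggregate", "negation"}

def needs_coordination(pipeline):
    """pipeline: ordered list of operator names along one dataflow path."""
    return any(op in NON_MONOTONE for op in pipeline)

assert not needs_coordination(["filter", "project"])   # safe without barriers
assert needs_coordination(["filter", "aggregate"])     # flagged for coordination
```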