Computation-Aware Data Aggregation
Data aggregation is a fundamental primitive in distributed computing wherein a network computes a function of every node's input. However, while compute time is non-negligible in modern systems, standard models of distributed computing do not take it into account; most distributed models of computation explicitly consider only communication time.
In this paper, we introduce a model of distributed computation that considers both computation and communication so as to give a theoretical treatment of data aggregation. We study both the structure of the fastest data aggregation schedule in this model and how to compute it. As our first result, we give a polynomial-time algorithm that computes the optimal schedule when the input network is a complete graph. Moreover, since one may want to aggregate data over a pre-existing network, we also study data aggregation scheduling on arbitrary graphs. We demonstrate that this problem on arbitrary graphs is hard to approximate within a multiplicative 1.5 factor. Finally, we give an O(log n · log(OPT/t_m))-approximation algorithm for this problem on arbitrary graphs, where n is the number of nodes and OPT is the length of the optimal schedule.
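The model described above can be sketched with a toy schedule evaluator. In this sketch (an assumption for illustration, not the paper's formal model), a schedule is an ordered list of (sender, receiver) sends; each send costs a communication time t_m, after which the receiver additionally pays a compute time t_c to combine the two partial aggregates:

```python
def schedule_completion_time(n, schedule, t_m, t_c):
    """Evaluate a data-aggregation schedule under a simple model that
    charges both communication (t_m per message) and computation
    (t_c per combine at the receiver). Illustrative sketch only.

    `schedule` is an ordered list of (sender, receiver) pairs; a send
    starts once both endpoints are idle. Returns the time at which the
    last combine finishes.
    """
    free = [0.0] * n  # time at which each node becomes idle
    for s, r in schedule:
        start = max(free[s], free[r])
        free[s] = start + t_m        # sender is busy for the send
        free[r] = start + t_m + t_c  # receiver also pays compute time
    return max(free)
```

On 4 nodes with t_m = t_c = 1, a binary-tree schedule [(1,0), (3,2), (2,0)] finishes at time 4, while a sequential star [(1,0), (2,0), (3,0)] needs time 6, illustrating why the schedule's structure matters once compute time is modeled.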
Secure Hop-by-Hop Aggregation of End-to-End Concealed Data in Wireless Sensor Networks
In-network data aggregation is an essential technique in mission critical
wireless sensor networks (WSNs) for achieving effective transmission and hence
better power conservation. Common security protocols for aggregated WSNs are
either hop-by-hop or end-to-end, each of which has its own encryption schemes
considering different security primitives. End-to-end encrypted data
aggregation protocols provide maximum data secrecy but inefficient data
aggregation and greater vulnerability to active attacks, while hop-by-hop data
aggregation protocols provide maximum data integrity with efficient data
aggregation but greater vulnerability to passive attacks.
In this paper, we propose a secure aggregation protocol for aggregated WSNs
deployed in hostile environments in which dual attack modes are present. Our
proposed protocol is a blend of flexible data aggregation as in hop-by-hop
protocols and optimal data confidentiality as in end-to-end protocols. Our
protocol introduces an efficient O(1) heuristic for checking data integrity,
along with a cost-effective, heuristic-based divide-and-conquer attestation
process, which is on average O(n) in the worst-case scenario, for
further verification of aggregated results.
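The key idea of end-to-end concealment that still permits in-network aggregation can be illustrated with additive masking modulo M (a standard additively homomorphic construction for WSNs; this is a generic sketch, not the protocol proposed in the paper):

```python
M = 2**32  # modulus chosen larger than any possible aggregate

def conceal(value, key):
    """End-to-end concealment of a sensor reading by additive masking
    mod M. Illustrative sketch of an additively homomorphic scheme."""
    return (value + key) % M

def aggregate(ciphertexts):
    """Intermediate hops can sum ciphertexts without decrypting them,
    which is what makes hop-by-hop aggregation of concealed data work."""
    return sum(ciphertexts) % M

def reveal(aggregate_ct, keys):
    """The sink, which shares a key with every sensor, removes the
    combined mask to recover the plaintext aggregate."""
    return (aggregate_ct - sum(keys)) % M
```

Each intermediate node only ever sees masked values, so passive eavesdropping at a hop reveals nothing about individual readings, while the sink still recovers the exact sum.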
Data Aggregation and Information Loss
Analysts often use a single average or otherwise aggregated price series to represent several geographic or product markets even when disaggregate data are available. We hypothesize that such an approach may not be appropriate under some circumstances, such as when only long-term relationships hold among price series or when homogeneous but relatively perishable products are considered. This question is of particular relevance in agriculture because of seasonality in production and harvest across various production regions, and the effect of changes in demand as substitute crops become available. We analyze this question in the context of fresh strawberry production. We find that in the case of the strawberry market, aggregate series are appropriate for long-term decision analysis, but some information loss occurs when conducting short-term decision analysis.
Keywords: strawberry, price, cointegration, Granger causality, average price, Research Methods/Statistical Methods
Perfectly secure data aggregation via shifted projections
We study a general scenario where confidential information is distributed
among a group of agents who wish to share it in such a way that the data
becomes common knowledge among them but an eavesdropper intercepting their
communications would be unable to obtain any of said data. The information is
modelled as a deck of cards dealt among the agents, so that after the
information is exchanged, all of the communicating agents must know the entire
deal, but the eavesdropper must remain ignorant about who holds each card.
Valentin Goranko and the author previously set up this scenario as the secure
aggregation of distributed information problem and constructed weakly safe
protocols, where, given any card, the eavesdropper does not know with
certainty which agent holds it. Here we present a perfectly safe protocol,
which does not alter the eavesdropper's perceived probability that any given
agent holds any given card. In our protocol, one of the communicating agents holds a
larger portion of the cards than the rest, but we show that, for infinitely many
parameter values, the number of cards may be chosen so that each agent's
hand size falls within the required bounds.
Recover Fine-Grained Spatial Data from Coarse Aggregation
In this paper, we study a new type of spatial sparse recovery problem, that
is to infer the fine-grained spatial distribution of certain density data in a
region only based on the aggregate observations recorded for each of its
subregions. One typical example of this spatial sparse recovery problem is to
infer spatial distribution of cellphone activities based on aggregate mobile
traffic volumes observed at sparsely scattered base stations. We propose a
novel Constrained Spatial Smoothing (CSS) approach, which exploits the local
continuity that exists in many types of spatial data to perform sparse recovery
via finite-element methods, while enforcing the aggregated observation
constraints through an innovative use of the ADMM algorithm. We also improve
the approach to further utilize additional geographical attributes. Extensive
evaluations based on a large dataset of phone call records and a demographical
dataset from the city of Milan show that our approach significantly outperforms
various state-of-the-art approaches, including Spatial Spline Regression (SSR).Comment: Accepted by ICDM 2017, 6 page
