Identifying Correlated Heavy-Hitters in a Two-Dimensional Data Stream
We consider online mining of correlated heavy-hitters from a data stream.
Given a stream of two-dimensional data, a correlated aggregate query first
extracts a substream by applying a predicate along a primary dimension, and
then computes an aggregate along a secondary dimension. Prior work on
identifying heavy-hitters in streams has almost exclusively focused on
single-dimensional streams, and it yields little insight into the properties of
heavy-hitters along other dimensions. In typical applications, however, an
analyst is interested not only in identifying
heavy-hitters, but also in understanding further properties such as: what other
items appear frequently along with a heavy-hitter, or what is the frequency
distribution of items that appear along with the heavy-hitters. We consider
queries of the following form: In a stream S of (x, y) tuples, on the substream
H of all x values that are heavy-hitters, maintain those y values that occur
frequently with the x values in H. We call this problem Correlated
Heavy-Hitters (CHH). We give an approximate formulation of CHH
identification, and present an algorithm for tracking CHHs on a data stream.
The algorithm is easy to implement and uses a workspace that is orders of
magnitude smaller than the stream itself. We present provable guarantees on the
maximum error, as well as detailed experimental results that demonstrate the
space-accuracy trade-off.
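The nested-counter idea behind CHH tracking can be sketched as follows: maintain a small frequent-items summary over x values, and attach to each tracked x its own small summary over the y values seen with it. This is a minimal illustration of that structure, not the authors' exact algorithm; the class name, parameters k1 and k2, and the query interface are assumptions for the sketch.

```python
class CHHSketch:
    """Sketch for Correlated Heavy-Hitters: a Misra-Gries-style summary
    over x, where each tracked x keeps its own small Misra-Gries-style
    summary over y. k1/k2 bound the number of counters (and hence the
    error) at each level."""

    def __init__(self, k1=100, k2=20):
        self.k1, self.k2 = k1, k2
        self.x_counters = {}  # x -> [count, {y: count}]

    def _mg_update(self, counters, key, k):
        # Standard Misra-Gries update on a dict of at most k counters.
        if key in counters:
            counters[key] += 1
        elif len(counters) < k:
            counters[key] = 1
        else:
            for c in list(counters):
                counters[c] -= 1
                if counters[c] == 0:
                    del counters[c]

    def update(self, x, y):
        if x in self.x_counters:
            entry = self.x_counters[x]
            entry[0] += 1
            self._mg_update(entry[1], y, self.k2)
        elif len(self.x_counters) < self.k1:
            self.x_counters[x] = [1, {y: 1}]
        else:
            # Decrement all x counters; evict those that reach zero.
            for c in list(self.x_counters):
                self.x_counters[c][0] -= 1
                if self.x_counters[c][0] == 0:
                    del self.x_counters[c]

    def query(self, min_x_count=0):
        # Estimated heavy x values with their tracked heavy y values.
        return {x: (cnt, dict(ys))
                for x, (cnt, ys) in self.x_counters.items()
                if cnt >= min_x_count}
```

On a skewed stream, the dominant (x, y) pairs survive the decrements while rare pairs are evicted, which is why the workspace can be orders of magnitude smaller than the stream.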
Fast and Accurate Mining of Correlated Heavy Hitters
The problem of mining Correlated Heavy Hitters (CHH) from a two-dimensional
data stream has been introduced recently, and a deterministic algorithm based
on the Misra-Gries algorithm has been proposed by Lahiri et al. to
solve it. In this paper we present a new counter-based algorithm for tracking
CHHs, formally prove its error bounds and correctness and show, through
extensive experimental results, that our algorithm outperforms the
Misra-Gries based algorithm in both accuracy and speed while requiring
asymptotically much less space.
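For reference, the Misra-Gries algorithm that both papers build on can be stated in a few lines. This is the textbook single-dimensional version with its standard guarantee, not either paper's CHH algorithm:

```python
def misra_gries(stream, k):
    """Misra-Gries frequent-items summary with at most k-1 counters.
    Every item with true frequency above len(stream)/k is guaranteed
    to survive, and each kept count underestimates the true frequency
    by at most len(stream)/k."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # Counters are full: decrement all, evicting any that hit zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters
```

The counter-based CHH algorithms above replace the exact per-item counts of a naive solution with these bounded-error summaries, which is the source of their sublinear space usage.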
Quantifying Differential Privacy under Temporal Correlations
Differential Privacy (DP) has received increased attention as a rigorous
privacy framework. Existing studies employ traditional DP mechanisms (e.g., the
Laplace mechanism) as primitives, which assume that the data are independent,
or that adversaries do not have knowledge of the data correlations. However,
continuously generated data in the real world tend to be temporally correlated,
and such correlations can be acquired by adversaries. In this paper, we
investigate the potential privacy loss of a traditional DP mechanism under
temporal correlations in the context of continuous data release. First, we
model the temporal correlations using a Markov model and analyze the privacy
leakage of a DP mechanism when adversaries have knowledge of such temporal
correlations. Our analysis reveals that the privacy leakage of a DP mechanism
may accumulate and increase over time. We call it temporal privacy leakage.
Second, to measure such privacy leakage, we design an efficient algorithm for
calculating it in polynomial time. Although the temporal privacy leakage may
increase over time, we also show that its supremum may exist in some cases.
Third, to bound the privacy loss, we propose mechanisms that convert any
existing DP mechanism into one against temporal privacy leakage. Experiments
with synthetic data confirm that our approach is efficient and effective.
Comment: appears at ICDE 2017
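The "traditional DP mechanism" the abstract refers to is, in the Laplace case, noise addition scaled to sensitivity/epsilon. A minimal self-contained sketch (using inverse-CDF sampling, since the Python standard library has no Laplace sampler; the function names are ours):

```python
import math
import random

def laplace_noise(scale):
    """Sample from Laplace(0, scale) by inverting the CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def laplace_mechanism(value, sensitivity, epsilon):
    """epsilon-DP release of `value` -- under the assumption, questioned
    by this paper, that releases at different time points concern
    independent data."""
    return value + laplace_noise(sensitivity / epsilon)
```

The paper's point is that this per-release guarantee is what erodes when an adversary knows the temporal correlations linking successive true values.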
Quantifying Differential Privacy in Continuous Data Release under Temporal Correlations
Differential Privacy (DP) has received increasing attention as a rigorous
privacy framework. Many existing studies employ traditional DP mechanisms
(e.g., the Laplace mechanism) as primitives to continuously release private
data for protecting privacy at each time point (i.e., event-level privacy),
which assume that the data at different time points are independent, or that
adversaries do not have knowledge of the correlations between the data. However,
continuously generated data tend to be temporally correlated, and such
correlations can be acquired by adversaries. In this paper, we investigate the
potential privacy loss of a traditional DP mechanism under temporal
correlations. First, we analyze the privacy leakage of a DP mechanism under
temporal correlations that can be modeled with a Markov chain. Our analysis
reveals that the event-level privacy loss of a DP mechanism may
increase over time. We call the unexpected privacy loss
temporal privacy leakage (TPL). Although TPL may increase over time,
we find that its supremum may exist in some cases. Second, we design efficient
algorithms for calculating TPL. Third, we propose data releasing mechanisms
that convert any existing DP mechanism into one against TPL. Experiments
confirm that our approach is efficient and effective.
Comment: accepted in TKDE special issue "Best of ICDE 2017". arXiv admin note:
substantial text overlap with arXiv:1610.0754
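The accumulation effect can be illustrated with an extreme case: if the data are perfectly correlated (the same underlying value is re-released at every time step), each epsilon-DP Laplace release contributes up to epsilon to the adversary's worst-case log-likelihood ratio, so the leakage grows linearly in T. This is a numeric sketch of that boundary case only, not the paper's TPL computation, and the function name is ours:

```python
def worst_case_leakage(T, epsilon, sensitivity=1.0):
    """Worst-case log-likelihood ratio after T epsilon-DP Laplace
    releases of the SAME underlying value (perfect temporal
    correlation). For neighboring values 0 and `sensitivity` and an
    observation y <= 0, each release contributes
    (|y - sensitivity| - |y|) / scale = epsilon, so the total is
    T * epsilon."""
    scale = sensitivity / epsilon
    per_release = sensitivity / scale  # equals epsilon
    return T * per_release

# Independent data: event-level leakage stays at epsilon per time point.
# Perfectly correlated data: leakage accumulates without bound.
print(worst_case_leakage(10, 0.1))  # -> 1.0
```

Between these extremes, weaker Markov correlations give slower accumulation, and the paper shows the supremum of the leakage may exist in some cases.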
Effective Use Methods for Continuous Sensor Data Streams in Manufacturing Quality Control
This work outlines an approach for managing sensor data streams of continuous numerical data in product manufacturing settings, emphasizing statistical process control, low computational and memory overhead, and retention of the information needed to reduce the impact of nonconformance to quality specifications. While there is extensive literature, knowledge, and documentation about standard data sources and databases, the high volume and velocity of sensor data streams often make traditional analysis infeasible. To that end, an overview of data stream fundamentals is essential. An analysis of commonly used stream preprocessing and load shedding methods follows, succeeded by a discussion of aggregation procedures. Stream storage and querying systems are the next topics. Further, existing machine learning techniques for data streams are presented, with a focus on regression. Finally, the work describes a novel methodology for managing sensor data streams in which data stream management systems record aggregates over small time intervals, along with any individual measurements from the stream that are nonconforming. The aggregates are continually entered into control charts and used in regression. To conserve memory, old data are periodically reaggregated at coarser levels.
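The proposed methodology (window aggregates, raw retention of nonconforming readings, periodic reaggregation of old windows) can be sketched as a small stream manager. The class names, the choice of aggregate fields, and the merge-two-oldest reaggregation policy are our assumptions for illustration, not the paper's specification:

```python
from dataclasses import dataclass

@dataclass
class WindowAggregate:
    count: int = 0
    total: float = 0.0
    minimum: float = float("inf")
    maximum: float = float("-inf")

    def add(self, x):
        self.count += 1
        self.total += x
        self.minimum = min(self.minimum, x)
        self.maximum = max(self.maximum, x)

    @property
    def mean(self):
        return self.total / self.count

class SensorStreamManager:
    """Keep per-window aggregates plus every out-of-spec reading; merge
    the oldest windows into coarser ones when too many accumulate."""

    def __init__(self, window_size, spec_low, spec_high, max_windows=100):
        self.window_size = window_size
        self.spec_low, self.spec_high = spec_low, spec_high
        self.max_windows = max_windows
        self.current = WindowAggregate()
        self.windows = []        # closed window aggregates, oldest first
        self.nonconforming = []  # raw out-of-spec measurements

    def observe(self, x):
        if not (self.spec_low <= x <= self.spec_high):
            self.nonconforming.append(x)  # keep the raw nonconforming value
        self.current.add(x)
        if self.current.count >= self.window_size:
            self.windows.append(self.current)
            self.current = WindowAggregate()
            if len(self.windows) > self.max_windows:
                self._reaggregate()

    def _reaggregate(self):
        # Merge the two oldest windows into one coarser aggregate.
        a, b = self.windows[0], self.windows[1]
        merged = WindowAggregate(a.count + b.count, a.total + b.total,
                                 min(a.minimum, b.minimum),
                                 max(a.maximum, b.maximum))
        self.windows[:2] = [merged]
```

The window means would feed the control charts and regressions described above; only the count/sum/min/max tuples and the nonconforming readings survive, keeping memory bounded regardless of stream length.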