Sampling from a system-theoretic viewpoint: Part II - Noncausal solutions
This paper puts to use concepts and tools introduced in Part I to address a wide spectrum of noncausal sampling and reconstruction problems. In particular, we follow the system-theoretic paradigm by using systems as signal generators to account for available information and system norms (L2 and L∞) as performance measures. The proposed optimization-based approach recovers many known solutions, derived hitherto by different methods, as special cases under different assumptions about acquisition or reconstruction devices (e.g., polynomial and exponential cardinal splines for fixed samplers, and the Sampling Theorem and its modifications in the case when both sampler and interpolator are design parameters). We also derive new results, such as versions of the Sampling Theorem for downsampling and reconstruction from noisy measurements, the continuous-time invariance of a wide class of optimal sampling-and-reconstruction circuits, and so on.
Sampling from a system-theoretic viewpoint
This paper studies a system-theoretic approach to the problem of reconstructing an analog signal from its samples. The idea, borrowed from earlier treatments in the control literature, is to address the problem as a hybrid model-matching problem in which performance is measured by system norms.

The paper is split into three parts. In Part I we present the paradigm and revise the lifting technique, which is our main technical tool. In Part II optimal samplers and holds are designed for various analog signal reconstruction problems. In some cases one component is fixed while the remaining ones are designed; in other cases all three components are designed simultaneously. No causality requirements are imposed in Part II, which allows the use of frequency-domain arguments, in particular the lifted frequency response introduced in Part I. In Part III the main emphasis is placed on a systematic incorporation of causality constraints into the optimal design of reconstructors. We consider reconstruction problems in which the sampling (acquisition) device is given and the performance is measured by the L2-norm of the reconstruction error. The problem is solved under the constraint that the optimal reconstructor is ℓ-causal for a given ℓ ≥ 0, i.e., that its impulse response is zero in the time interval (−∞, −ℓh), where h is the sampling period. We derive a closed-form state-space solution of the problem, which is based on the spectral factorization of a rational transfer function.
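The classical Sampling Theorem that this framework recovers as a special case can be illustrated with a minimal Python sketch (a textbook sinc-interpolation example, not the paper's optimization-based design; the signal frequency and sampling period below are arbitrary choices):

```python
import math

def sinc(x):
    # normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, h, t):
    # Shannon reconstruction: sum_n x[n] * sinc((t - n*h)/h)
    return sum(x_n * sinc((t - n * h) / h) for n, x_n in enumerate(samples))

h = 0.1                                   # sampling period (sampling rate 10 Hz)
f = 1.0                                   # signal frequency, well below Nyquist (5 Hz)
signal = lambda t: math.sin(2 * math.pi * f * t)
samples = [signal(n * h) for n in range(201)]  # samples on [0, 20]

# evaluate between sample instants, near the middle of the window so that
# truncating the (infinite) interpolation series costs little
t = 10.05
err = abs(reconstruct(samples, h, t) - signal(t))
```

With a finite record the reconstruction is only approximate (the ideal interpolator is noncausal and infinitely long, which is exactly why the paper studies causality constraints), but far from the record edges the error is small.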
Dual Rate Control for Security in Cyber-physical Systems
We consider malicious attacks on actuators and sensors of a feedback system
which can be modeled as additive, possibly unbounded, disturbances at the
digital (cyber) part of the feedback loop. We precisely characterize the role
of the unstable poles and zeros of the system in the ability to detect stealthy
attacks in the context of the sampled data implementation of the controller in
feedback with the continuous (physical) plant. We show that, if there is a
single sensor that is guaranteed to be secure and the plant is observable from
that sensor, then there exists a class of multirate sampled data controllers
that ensure that all attacks remain detectable. These dual rate controllers are
sampling the output faster than the zero order hold rate that operates on the
control input and as such, they can even provide better nominal performance
than single-rate controllers, at the price of faster sampling of the continuous output.
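The intuition behind intersample detection can be shown with a toy sketch (not the paper's construction: here a hypothetical additive attack signal is chosen to vanish at every slow sampling instant, so only the faster output sampling of a dual-rate scheme exposes it):

```python
import math

h = 1.0  # zero-order-hold (control update) period

# hypothetical attack signal: sin(pi*t/h) is exactly zero at t = n*h,
# so a detector that only sees the slow samples y(n*h) never notices it
attack = lambda t: math.sin(math.pi * t / h)

slow = [attack(n * h) for n in range(10)]        # single-rate output samples
fast = [attack(n * h / 2) for n in range(20)]    # dual-rate samples, twice as fast

max_slow = max(abs(v) for v in slow)   # ~0: stealthy at the slow rate
max_fast = max(abs(v) for v in fast)   # ~1: exposed at the fast rate
```

The paper's actual result is about unstable zeros of the sampled-data loop; this snippet only illustrates why sampling the output faster than the hold rate can reveal signals that are invisible at the single rate.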
Data Sketches for Disaggregated Subset Sum and Frequent Item Estimation
We introduce and study a new data sketch for processing massive datasets. It
addresses two common problems: 1) computing a sum given arbitrary filter
conditions and 2) identifying the frequent items or heavy hitters in a data
set. For the former, the sketch provides unbiased estimates with state of the
art accuracy. It handles the challenging scenario when the data is
disaggregated so that computing the per unit metric of interest requires an
expensive aggregation. For example, the metric of interest may be total clicks
per user while the raw data is a click stream with multiple rows per user. Thus
the sketch is suitable for use in a wide range of applications including
computing historical click through rates for ad prediction, reporting user
metrics from event streams, and measuring network traffic for IP flows.
We prove and empirically show the sketch has good properties for both the
disaggregated subset sum estimation and frequent item problems. On i.i.d. data,
it not only picks out the frequent items but gives strongly consistent
estimates for the proportion of each frequent item. The resulting sketch
asymptotically draws a probability proportional to size sample that is optimal
for estimating sums over the data. For non i.i.d. data, we show that it
typically does much better than random sampling for the frequent item problem
and never does worse. For subset sum estimation, we show that even for
pathological sequences, the variance is close to that of an optimal sampling
design. Empirically, despite the disadvantage of operating on disaggregated
data, our method matches or bests priority sampling, a state-of-the-art method
for pre-aggregated data, and performs orders of magnitude better on skewed data
compared to uniform sampling. We propose extensions to the sketch that allow it
to be used in combining multiple data sets, in distributed systems, and for
time-decayed aggregation.
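Priority sampling, the pre-aggregated baseline the abstract compares against, admits a short generic implementation (a standard sketch of that technique, not the paper's new data sketch; item keys and weights below are made up):

```python
import random

def priority_sample(items, k, rng):
    # items: list of (key, weight). Each item gets priority q_i = w_i / u_i
    # with u_i ~ Uniform(0,1); keep the k largest priorities.
    prios = [(w / rng.random(), key, w) for key, w in items]
    prios.sort(reverse=True)
    tau = prios[k][0] if len(prios) > k else 0.0   # (k+1)-th largest priority
    # unbiased weight estimate for each kept item: max(w_i, tau)
    return {key: max(w, tau) for _, key, w in prios[:k]}

items = [(i, 1.0 + (i % 7)) for i in range(1000)]
total = sum(w for _, w in items)

# unbiasedness: averaging the estimated total over independent sketches
# concentrates near the true total
reps, est = 200, 0.0
for r in range(reps):
    sketch = priority_sample(items, 64, random.Random(r))
    est += sum(sketch.values()) / reps
```

Summing `max(w, tau)` over the kept keys that match a filter gives the subset-sum estimate; the paper's contribution is achieving comparable accuracy when the data arrives disaggregated, before such per-key weights exist.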
Multi-party Poisoning through Generalized p-Tampering
In a poisoning attack against a learning algorithm, an adversary tampers with
a fraction of the training data T with the goal of increasing the
classification error of the constructed hypothesis/model over the final test
distribution. In the distributed setting, T might be gathered gradually from m
data providers P_1, ..., P_m who generate and submit their shares of T in an
online way.
In this work, we initiate a formal study of (k, p)-poisoning attacks in
which an adversary controls k of the m parties, and even for each
corrupted party P_i, the adversary submits some poisoned data T'_i on
behalf of P_i that is still "(1 − p)-close" to the correct data T_i (e.g., a
(1 − p) fraction of T'_i is still honestly generated). For k = m, this model
becomes the traditional notion of poisoning, and for p = 1 it coincides with
the standard notion of corruption in multi-party computation.
We prove that if there is an initial constant error for the generated
hypothesis h, there is always a (k, p)-poisoning attacker who can decrease
the confidence of h (to have a small error), or alternatively increase the
error of h, by Ω(p · k/m). Our attacks can be implemented in
polynomial time given samples from the correct data, and they use no wrong
labels if the original distributions are not noisy.
At a technical level, we prove a general lemma about biasing bounded
functions f(x_1, ..., x_n) ∈ [0, 1] through an attack model in which each
block x_i might be controlled by an adversary with marginal probability p
in an online way. When the probabilities are independent, this coincides with
the model of p-tampering attacks, thus we call our model generalized
p-tampering. We prove the power of such attacks by incorporating ideas from
the context of coin-flipping attacks into the p-tampering model and
generalize the results in both of these areas.
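A toy Monte Carlo sketch shows the flavor of tampering-based biasing (this is a deliberately naive non-adaptive attack that biases the mean by p/2, not the paper's attack, which achieves its bounds for general bounded functions):

```python
import random

def f(x):
    # a bounded test function f : {0,1}^n -> [0,1]
    return sum(x) / len(x)

def tampered_sample(n, p, rng):
    # each block is adversarial with marginal probability p; the naive
    # adversary simply sets its blocks to 1 to push f upward
    return [1 if rng.random() < p else rng.randint(0, 1) for _ in range(n)]

rng = random.Random(0)
trials, n, p = 2000, 50, 0.2
honest = sum(f([rng.randint(0, 1) for _ in range(n)])
             for _ in range(trials)) / trials
attacked = sum(f(tampered_sample(n, p, rng)) for _ in range(trials)) / trials
# E[f] moves from 1/2 to (1-p)/2 + p = 1/2 + p/2, i.e. a bias of order p
```

Even this crude adversary achieves an Ω(p) bias for the mean function; the generalized p-tampering lemma handles correlated, online control of blocks and arbitrary bounded f.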
Generic Feasibility of Perfect Reconstruction with Short FIR Filters in Multi-channel Systems
We study the feasibility of short finite impulse response (FIR) synthesis for
perfect reconstruction (PR) in generic FIR filter banks. Among all PR synthesis
banks, we focus on the one with the minimum filter length. For filter banks
with oversampling factors of at least two, we provide prescriptions for the
shortest filter length of the synthesis bank that would guarantee PR almost
surely. The prescribed length is as short as or shorter than the analysis
filters and has an approximately inverse relationship with the oversampling
factor. Our results are in the form of necessary and sufficient statements that
hold generically, and hence fail only for elaborately designed nongeneric
examples. We
provide extensive numerical verification of the theoretical results and
demonstrate that the gap between the derived filter length prescriptions and
the true minimum is small. The results have potential applications in synthesis
FB design problems, where the analysis bank is given, and for analysis of
fundamental limitations in blind signal reconstruction from data collected by
unknown subsampled multi-channel systems.
Comment: Manuscript submitted to IEEE Transactions on Signal Processing
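A minimal concrete instance of short-FIR synthesis in an oversampled bank (a hand-picked Haar example, not the paper's generic construction or its length prescriptions): with two channels and no downsampling (oversampling factor 2), length-1 synthesis filters, shorter than the length-2 analysis filters, already achieve perfect reconstruction.

```python
def conv(a, b):
    # linear convolution of two FIR impulse responses
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def add(a, b):
    n = max(len(a), len(b))
    a = a + [0.0] * (n - len(a))
    b = b + [0.0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

# two-channel undecimated analysis bank: Haar sum/difference pair
h0, h1 = [1.0, 1.0], [1.0, -1.0]

# length-1 synthesis filters: g0*h0 + g1*h1 must equal the unit impulse
g0, g1 = [0.5], [0.5]
total = add(conv(h0, g0), conv(h1, g1))   # perfect reconstruction check
```

Here `total` is the overall impulse response of the analysis-synthesis cascade; PR means it is a (possibly delayed) unit impulse.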
Quantitative Robust Uncertainty Principles and Optimally Sparse Decompositions
We develop a robust uncertainty principle for finite signals in C^N which
states that for almost all subsets T,W of {0,...,N-1} such that |T|+|W| ~ (log
N)^(-1/2) N, there is no signal f supported on T whose discrete Fourier
transform is supported on W. In fact, we can make the above uncertainty
principle quantitative in the sense that if f is supported on T, then only a
small percentage of the energy (less than half, say) of its Fourier transform
is concentrated on W.
As an application of this robust uncertainty principle (QRUP), we consider
the problem of decomposing a signal into a sparse superposition of spikes and
complex sinusoids. We show that if a generic signal f has a decomposition using
spike and frequency locations in T and W respectively, and obeying |T| + |W| <=
C (\log N)^{-1/2} N, then this is the unique sparsest possible decomposition
(all other decompositions have more non-zero terms). In addition, if |T| + |W|
<= C (\log N)^{-1} N, then this sparsest decomposition can be found by solving
a convex optimization problem.
Comment: 25 pages, 9 figures
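The extreme case of the uncertainty principle is easy to verify numerically (a pure-Python check of a standard DFT fact, not the paper's quantitative bound): a single spike (|T| = 1) has a perfectly flat spectrum, so no small frequency set W can capture more than |W|/N of its energy.

```python
import cmath

def dft(x):
    # naive discrete Fourier transform, O(N^2) but dependency-free
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

N = 64
x = [0.0] * N
x[5] = 1.0                     # a single spike: support T = {5}
X = dft(x)

# |X[k]| = 1 for every k, so spectral energy is spread uniformly
energies = sorted(abs(v) ** 2 for v in X)
energy_total = sum(energies)          # Parseval: N * (time-domain energy) = 64
energy_on_8 = sum(energies[-8:])      # best possible W with |W| = 8 captures 8/64
```

Even the most favorable set W of 8 frequencies holds only 12.5% of the spike's spectral energy, far below the "less than half" threshold in the quantitative statement.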