Data-Injection Attacks
In this chapter we review some of the basic attack constructions that exploit
a stochastic description of the state variables. We pose the state estimation
problem in a Bayesian setting and cast the bad data detection procedure as a
Bayesian hypothesis testing problem. This detection framework provides a
benchmark that limits the achievable attack disruption. Indeed, the
trade-off between the impact of the attack, in
terms of disruption to the state estimator, and the probability of attack
detection is analytically characterized within this Bayesian attack setting. We
then generalize the attack construction by considering information-theoretic
measures that place fundamental limits on a broad class of detection,
estimation, and learning techniques. Because the attack constructions proposed
in this chapter rely on the attacker having access to the statistical structure
of the random process describing the state variables, we conclude by studying
the impact of imperfect statistics on the attack performance. Specifically, we
study the attack performance as a function of the size of the training data set
that is available to the attacker to estimate the second-order statistics of
the state variables.
Comment: arXiv admin note: substantial text overlap with arXiv:1808.0418
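The disruption-detection trade-off described in this abstract can be illustrated with a small numerical sketch: for zero-mean Gaussian observations, the Kullback-Leibler divergence between the attacked and nominal observation distributions has a closed form and grows with the attack magnitude. The measurement matrix, covariances, and additive attack model below are illustrative assumptions, not the chapter's exact construction:

```python
import numpy as np

def kl_gaussian(cov_p, cov_q):
    """KL divergence D(N(0, cov_p) || N(0, cov_q)) between zero-mean Gaussians."""
    k = cov_p.shape[0]
    return 0.5 * (np.trace(np.linalg.inv(cov_q) @ cov_p) - k
                  + np.log(np.linalg.det(cov_q) / np.linalg.det(cov_p)))

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))        # hypothetical measurement (Jacobian) matrix
cov_obs = A @ A.T + 0.1 * np.eye(6)    # nominal observation covariance (unit state cov.)

# an additive attack with independent noise inflates the observation covariance
for attack_var in (0.01, 0.1, 1.0):
    d = kl_gaussian(cov_obs + attack_var * np.eye(6), cov_obs)
    print(f"attack variance {attack_var:>4}: KL to nominal = {d:.4f}")
```

A larger attack variance yields a larger KL divergence and hence a higher probability of detection, which is the trade-off the abstract characterizes analytically.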
Covariance Estimation from Compressive Data Partitions using a Projected Gradient-based Algorithm
Covariance matrix estimation techniques entail high acquisition costs that
challenge the storage and transmission capabilities of sampling systems. For this
reason, various acquisition approaches have been developed to simultaneously
sense and compress the relevant information of the signal using random
projections. However, estimating the covariance matrix from the random
projections is an ill-posed problem that requires further information about the
data, such as sparsity, low rank, or stationary behavior. Furthermore, this
approach fails at high compression ratios. Therefore, this paper proposes an
algorithm based on the projected gradient method to recover a low-rank or
Toeplitz approximation of the covariance matrix. The proposed algorithm divides
the data into subsets projected onto different subspaces, assuming that each
subset contains an approximation of the signal statistics, which improves the
conditioning of the inverse problem. The error induced by this assumption is
analytically derived along with the convergence guarantees of the proposed
method. Extensive simulations show that the proposed algorithm can effectively
recover the covariance matrix of hyperspectral images with high compression
ratios (approximately 8-15%) in noisy scenarios. Additionally, simulations and
theoretical results show that filtering the gradient reduces the estimator's
error, recovering up to twice the number of eigenvectors.
Comment: submitted to IEEE Transactions on Image Processing
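A minimal sketch of the projected-gradient idea follows: each data partition is sensed with its own random projection, and the estimate alternates between a gradient step on the data-fit term and a projection onto low-rank PSD matrices. The dimensions, step size, and idealized (noise-free) compressed statistics are assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 12, 2                              # ambient dimension and target rank (toy sizes)
U = rng.standard_normal((n, r))
true_cov = U @ U.T                        # ground-truth low-rank covariance

# each data partition is sensed with its own random projection (hypothetical setup)
num_parts, m = 6, 6
phis = [rng.standard_normal((m, n)) / np.sqrt(m) for _ in range(num_parts)]
# idealized assumption: each subset carries the signal's compressed covariance
targets = [phi @ true_cov @ phi.T for phi in phis]

def project_rank(S, r):
    """Project a symmetric matrix onto PSD matrices of rank at most r."""
    w, V = np.linalg.eigh((S + S.T) / 2)
    w = np.clip(w, 0.0, None)
    idx = np.argsort(w)[::-1][:r]
    return (V[:, idx] * w[idx]) @ V[:, idx].T

# projected gradient descent on the least-squares data-fit objective
step = 1.0 / sum(np.linalg.norm(phi, 2) ** 4 for phi in phis)  # 1/Lipschitz bound
est = np.zeros((n, n))
for _ in range(2000):
    grad = sum(phi.T @ (phi @ est @ phi.T - t) @ phi for phi, t in zip(phis, targets))
    est = project_rank(est - step * grad, r)

rel_err = np.linalg.norm(est - true_cov) / np.linalg.norm(true_cov)
print(f"relative recovery error: {rel_err:.3f}")
```

The projection step is what encodes the low-rank prior; a Toeplitz variant would instead average the diagonals at that step.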
Generalization Analysis of Machine Learning Algorithms via the Worst-Case Data-Generating Probability Measure
In this paper, the worst-case probability measure over the data is introduced
as a tool for characterizing the generalization capabilities of machine
learning algorithms. More specifically, the worst-case probability measure is a
Gibbs probability measure and the unique solution to the maximization of the
expected loss under a relative entropy constraint with respect to a reference
probability measure. Fundamental generalization metrics, such as the
sensitivity of the expected loss, the sensitivity of the empirical risk, and
the generalization gap are shown to have closed-form expressions involving the
worst-case data-generating probability measure. Existing results for the Gibbs
algorithm, such as characterizing the generalization gap as a sum of mutual
information and lautum information, up to a constant factor, are recovered. A
novel parallel is established between the worst-case data-generating
probability measure and the Gibbs algorithm. Specifically, the Gibbs
probability measure is identified as a fundamental commonality of the model
space and the data space for machine learning algorithms.
Comment: To appear in the Proceedings of the AAAI Conference on Artificial Intelligence (7 + 2 pages)
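The exponential tilting behind such a worst-case (Gibbs) measure can be sketched on a finite data space: tilting a reference measure by exp(beta * loss) raises the expected loss at the price of relative entropy with respect to the reference. The five-point space, the losses, and the parameter beta standing in for the relative-entropy constraint are illustrative assumptions:

```python
import numpy as np

# reference measure and per-point losses on a 5-point toy data space (assumptions)
p_ref = np.array([0.3, 0.25, 0.2, 0.15, 0.1])
loss = np.array([0.1, 0.8, 0.3, 1.5, 0.6])

def gibbs_tilt(p_ref, loss, beta):
    """Exponentially tilted (Gibbs) measure: p_ref * exp(beta*loss), renormalized."""
    w = p_ref * np.exp(beta * loss)
    return w / w.sum()

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

# larger beta buys more expected loss at a larger relative-entropy price
for beta in (0.0, 1.0, 5.0):
    p = gibbs_tilt(p_ref, loss, beta)
    print(f"beta={beta}: E[loss]={p @ loss:.3f}, KL(p||p_ref)={kl(p, p_ref):.3f}")
```

At beta = 0 the tilted measure coincides with the reference; as beta grows, probability mass concentrates on the high-loss points, which is the worst-case behavior the paper exploits.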
Learning requirements for stealth attacks
The learning data requirements are analyzed for the construction of stealth
attacks in state estimation. In particular, the training data set is used to
compute a sample covariance matrix, which is a random matrix with a
Wishart distribution. The ergodic attack performance is defined as the average
attack performance obtained by taking the expectation with respect to the
distribution of the training data set. The impact of the training data size on
the ergodic attack performance is characterized via an upper bound on the
performance. Simulations on the IEEE 30-Bus test system show that the
proposed bound is tight in practical settings.
Comment: International Conference on Acoustics, Speech, and Signal Processing 201
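The role of the training set size can be sketched directly: the sample covariance of Gaussian training data is Wishart-distributed, and its average deviation from the true second-order statistics shrinks as the number of samples grows, which is what drives the ergodic attack performance. The dimensions and the true covariance below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
dim = 10
A = rng.standard_normal((dim, dim))
true_cov = A @ A.T + dim * np.eye(dim)   # illustrative true second-order statistics

def sample_cov_error(n_train, trials=30):
    """Average relative spectral-norm error of the (Wishart) sample covariance."""
    errs = []
    for _ in range(trials):
        x = rng.multivariate_normal(np.zeros(dim), true_cov, n_train)
        s = x.T @ x / n_train            # zero-mean sample covariance
        errs.append(np.linalg.norm(s - true_cov, 2) / np.linalg.norm(true_cov, 2))
    return float(np.mean(errs))

# the attacker's estimate of the statistics improves as the training set grows
for n in (20, 80, 320):
    print(f"training samples {n:>3}: avg. relative error {sample_cov_error(n):.3f}")
```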
Power Injection Measurements are more Vulnerable to Data Integrity Attacks than Power Flow Measurements
A novel metric that describes the vulnerability of the measurements in power
systems to data integrity attacks is proposed. The new metric, coined
vulnerability index (VuIx), leverages information theoretic measures to assess
the attack effect on the fundamental limits of the disruption and detection
tradeoff. Computing the VuIx of the measurements in the system yields an
ordering of the measurements by their level of exposure to data integrity
attacks. This new framework is used to assess the measurement vulnerability of
IEEE test systems, and it is observed that power injection measurements are
overwhelmingly more vulnerable to data integrity attacks than power flow
measurements. A detailed numerical evaluation of the VuIx values for IEEE test
systems is provided.
Comment: 6 pages, 9 figures, Submitted to IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grid
An information theoretic vulnerability metric for data integrity attacks on smart grids
A novel metric that describes the vulnerability of the measurements in power
systems to data integrity attacks is proposed. The new metric, coined
vulnerability index (VuIx), leverages information theoretic measures to assess
the attack effect on the fundamental limits of the disruption and detection
tradeoff. Computing the VuIx of the measurements in the system yields an
ordering of their vulnerability based on their level of exposure to data
integrity attacks. This new framework is used to assess the measurement
vulnerability of IEEE 9-bus and 30-bus test systems and it is observed that
power injection measurements are overwhelmingly more vulnerable to data
integrity attacks than power flow measurements. A detailed numerical evaluation
of the VuIx values for IEEE test systems is provided.
Comment: 7 pages, 10 figures, submitted to IET Smart Grid. arXiv admin note:
substantial text overlap with arXiv:2207.0697
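The two abstracts above do not give the VuIx formula, so the following is only a hedged illustration of how an information-theoretic vulnerability ordering might be computed: each sensor is attacked in isolation, and sensors are ranked by the mutual-information disruption they cause per unit of KL detectability. The measurement matrix, attack variance, and the disruption-per-detectability score are all hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(5)
n_state, n_meas = 3, 5
H = rng.standard_normal((n_meas, n_state))   # hypothetical measurement matrix
sigma = 0.2 * np.eye(n_meas)                 # observation-noise covariance
cov_y = H @ H.T + sigma                      # nominal observation covariance

def logdet(M):
    return np.linalg.slogdet(M)[1]

def mutual_info(noise_cov):
    """I(X;Y) for Y = H X + N with X ~ N(0, I) and N ~ N(0, noise_cov)."""
    return 0.5 * (logdet(H @ H.T + noise_cov) - logdet(noise_cov))

def kl_gaussian(cov_p, cov_q):
    k = cov_p.shape[0]
    return 0.5 * (np.trace(np.linalg.inv(cov_q) @ cov_p) - k
                  + logdet(cov_q) - logdet(cov_p))

v = 0.5                                      # fixed single-sensor attack variance
scores = []
for i in range(n_meas):
    e = np.zeros((n_meas, n_meas))
    e[i, i] = v                              # attack adds noise to sensor i only
    disruption = mutual_info(sigma) - mutual_info(sigma + e)
    detectability = kl_gaussian(cov_y + e, cov_y)
    scores.append(disruption / detectability)  # illustrative vulnerability score

order = np.argsort(scores)[::-1]
print("sensors ordered from most to least vulnerable:", order)
```

With a real grid model, a sensor's score would depend on whether its row of the measurement matrix corresponds to a power injection or a power flow, which is the distinction the papers quantify.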
Information Theoretic Data Injection Attacks with Sparsity Constraints
Information theoretic sparse attacks that simultaneously minimize the
information obtained by the operator and the probability of detection are
studied in a Bayesian state estimation setting. The attack construction is
formulated as an optimization problem that aims to minimize the mutual
information between the state variables and the observations while guaranteeing
the stealth of the attack. Stealth is described in terms of the
Kullback-Leibler (KL) divergence between the distributions of the observations
under attack and without attack. To overcome the difficulty posed by the
combinatorial nature of a sparse attack construction, the attack case in which
only one sensor is compromised is analytically solved first. The insight
generated in this case is then used to propose a greedy algorithm that
constructs random sparse attacks. The performance of the proposed attack is
evaluated in the IEEE 30 Bus Test Case.
Comment: Submitted to SGC 202
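A hedged sketch of a greedy sparse construction in this spirit: at each step, add to the attack support the sensor whose extra noise most reduces the mutual information between states and observations, then report the resulting KL stealth cost. The system matrices, per-sensor attack variance, and greedy criterion are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(7)
n_state, n_meas, k = 4, 8, 3                  # toy sizes; k is the sparsity budget
H = rng.standard_normal((n_meas, n_state))    # hypothetical observation matrix
sigma = 0.2 * np.eye(n_meas)                  # observation-noise covariance
v = 1.0                                       # variance added per attacked sensor

def logdet(M):
    return np.linalg.slogdet(M)[1]

def mutual_info(noise_cov):
    """I(X;Y) for Y = H X + N with X ~ N(0, I) and N ~ N(0, noise_cov)."""
    return 0.5 * (logdet(H @ H.T + noise_cov) - logdet(noise_cov))

def kl_gaussian(cov_p, cov_q):
    d = cov_p.shape[0]
    return 0.5 * (np.trace(np.linalg.inv(cov_q) @ cov_p) - d
                  + logdet(cov_q) - logdet(cov_p))

def add_sensor(attack, i):
    a = attack.copy()
    a[i, i] += v                              # compromise sensor i with extra noise
    return a

cov_y = H @ H.T + sigma                       # nominal observation covariance
support = []
attack = np.zeros((n_meas, n_meas))
for _ in range(k):
    # greedily add the sensor whose attack most reduces the operator's MI
    candidates = [i for i in range(n_meas) if i not in support]
    best = min(candidates, key=lambda i: mutual_info(sigma + add_sensor(attack, i)))
    attack = add_sensor(attack, best)
    support.append(best)

mi_attacked = mutual_info(sigma + attack)
stealth = kl_gaussian(cov_y + attack, cov_y)
print(f"support: {sorted(support)}, MI under attack: {mi_attacked:.3f} "
      f"(nominal {mutual_info(sigma):.3f}), stealth KL: {stealth:.3f}")
```

The single-sensor case solved analytically in the paper corresponds to the first iteration of this loop; the greedy extension sidesteps the combinatorial search over supports.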