Grid-free compressive beamforming
The direction-of-arrival (DOA) estimation problem involves the localization
of a few sources from a limited number of observations on an array of sensors,
thus it can be formulated as a sparse signal reconstruction problem and solved
efficiently with compressive sensing (CS) to achieve high-resolution imaging.
On a discrete angular grid, the CS reconstruction degrades due to basis
mismatch when the DOAs do not coincide with the angular directions on the grid.
To overcome this limitation, a continuous formulation of the DOA problem is
employed and an optimization procedure is introduced, which promotes sparsity
on a continuous optimization variable. The DOA estimation problem with
infinitely many unknowns, i.e., source locations and amplitudes, is solved over
a few optimization variables with semidefinite programming. The grid-free CS
reconstruction provides high-resolution imaging even with non-uniform arrays,
single-snapshot data and under noisy conditions as demonstrated on experimental
towed array data. Comment: 14 pages, 8 figures, journal paper
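The discrete, on-grid formulation that the grid-free method improves upon can be sketched with a small simulation. The example below is a hypothetical illustration (array size, grid spacing, and source angles are all invented): a single-snapshot measurement from a half-wavelength uniform linear array is matched against a discrete steering dictionary with orthogonal matching pursuit. It shows the on-grid baseline only; the paper's contribution is the grid-free semidefinite program that avoids basis mismatch when sources fall between grid points.

```python
import numpy as np

# Hypothetical setup: 16-sensor half-wavelength ULA, 1-degree angular grid.
M = 16
n = np.arange(M)
grid = np.deg2rad(np.arange(-90.0, 91.0, 1.0))
A = np.exp(-1j * np.pi * np.outer(n, np.sin(grid)))  # steering dictionary

# Single snapshot from two on-grid sources at -30 and +30 degrees.
doas = np.deg2rad([-30.0, 30.0])
amps = np.array([1.0, 0.8])
x = np.exp(-1j * np.pi * np.outer(n, np.sin(doas))) @ amps

# Orthogonal matching pursuit: greedily pick the best-correlated grid atom,
# then re-fit amplitudes on the current support by least squares.
K, support, r = 2, [], x.copy()
for _ in range(K):
    support.append(int(np.argmax(np.abs(A.conj().T @ r))))
    amps_ls, *_ = np.linalg.lstsq(A[:, support], x, rcond=None)
    r = x - A[:, support] @ amps_ls

est = sorted(np.rad2deg(grid[support]))
print(est)  # on-grid sources are recovered; off-grid ones would smear
```

When the true DOAs do not coincide with grid points, the energy of each source leaks across several adjacent atoms, which is exactly the basis-mismatch degradation the abstract describes.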
False Data Injection Attacks on Phasor Measurements That Bypass Low-rank Decomposition
This paper studies the vulnerability of phasor measurement units (PMUs) to
false data injection (FDI) attacks. Prior work demonstrated that unobservable
FDI attacks that can bypass traditional bad-data detectors based on measurement
residuals can be identified by a detector based on low-rank decomposition (LD).
In this work, a class of more sophisticated FDI attacks that captures the
temporal correlation of PMU data is introduced. Such attacks are designed with
a convex optimization problem and can always bypass the LD detector. The
vulnerability of the system to this attack model is illustrated on both the
IEEE 24-bus RTS and the IEEE 118-bus systems. Comment: 6 pages, 4 figures,
submitted to 2017 IEEE International Conference on Smart Grid Communications
(SmartGridComm)
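For context, the baseline that the paper's attacks are designed to defeat can be illustrated with a toy low-rank residual check. Everything below is a hypothetical sketch (dimensions, rank, threshold, and the naive attack are invented): clean PMU blocks are approximately low-rank because a few system modes drive many channels, so a crude bias injection stands out in the rank-r SVD residual. The attacks in the paper are instead constructed, via convex optimization, to respect this temporal low-rank structure and therefore evade exactly this kind of test.

```python
import numpy as np

rng = np.random.default_rng(0)
T, m, r = 200, 30, 3  # time samples, PMU channels, number of system modes

# Hypothetical "clean" PMU block: a few strong modes plus small sensor noise.
clean = 10.0 * rng.standard_normal((T, r)) @ rng.standard_normal((r, m))
M = clean + 0.5 * rng.standard_normal((T, m))

# Naive FDI attack: a constant bias on one channel over a short window.
M[120:140, 7] += 25.0

# LD-style detector: keep the top-r SVD component as the nominal signal,
# flag entries whose residual is far above the typical noise level.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
low_rank = (U[:, :r] * s[:r]) @ Vt[:r]
resid = np.abs(M - low_rank)
flags = resid > 5.0 * np.median(resid)
print(flags[120:140, 7].all())  # the crude attack is exposed
```

An attack that remains inside the low-rank temporal subspace would leave this residual essentially unchanged, which is what makes the LD detector bypassable.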
Tensor Computation: A New Framework for High-Dimensional Problems in EDA
Many critical EDA problems suffer from the curse of dimensionality, i.e., the
rapidly scaling computational burden produced by a large number of parameters
and/or unknown variables. This phenomenon may be caused by multiple spatial or
temporal factors (e.g., discretizations in 3-D field solvers and multi-rate
circuit simulation), nonlinearity of devices and circuits, a large number of
design or optimization parameters (e.g., full-chip routing/placement and
circuit sizing), or extensive process variations (e.g., variability/reliability
analysis and design for manufacturability). The computational challenges
generated by such high-dimensional problems are generally hard to handle
efficiently with
traditional EDA core algorithms that are based on matrix and vector
computation. This paper presents "tensor computation" as an alternative general
framework for the development of efficient EDA algorithms and tools. A tensor
is a high-dimensional generalization of a matrix and a vector, and is a natural
choice for efficiently storing and solving high-dimensional EDA problems.
This paper gives a basic tutorial on tensors, demonstrates some recent examples
of EDA applications (e.g., nonlinear circuit modeling and high-dimensional
uncertainty quantification), and suggests further open EDA problems where the
use of tensor computation could be of advantage. Comment: 14 figures. Accepted
by IEEE Trans. CAD of Integrated Circuits and Systems
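As a concrete illustration of why tensors help with the curse of dimensionality, a low-rank tensor never needs to be stored entry by entry. The sketch below (sizes and rank are invented for illustration) builds a 3-way tensor from a rank-r CP (canonical polyadic) representation and compares storage costs; actually computing such a factorization from real EDA data would require an algorithm like alternating least squares, which is beyond this snippet.

```python
import numpy as np

n, r = 50, 3  # hypothetical mode size and CP rank
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((n, r)) for _ in range(3))

# Rank-r CP tensor: a sum of r outer products of the factor columns.
T = np.einsum('ir,jr,kr->ijk', A, B, C)

full_storage = T.size    # n**3 = 125000 entries, cubic in n
cp_storage = 3 * n * r   # 450 entries in the factor matrices, linear in n

# Any single entry is available directly from the factors,
# without ever materializing the full tensor.
i, j, k = 4, 17, 42
entry = np.sum(A[i] * B[j] * C[k])
print(full_storage, cp_storage, np.isclose(entry, T[i, j, k]))
```

The gap widens with the number of modes: a d-way tensor costs n**d entries to store densely but only d*n*r in CP form, which is the kind of scaling that makes high-dimensional EDA problems tractable.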
Distributed and parallel sparse convex optimization for radio interferometry with PURIFY
Next generation radio interferometric telescopes are entering an era of big
data with extremely large data sets. While these telescopes can observe the sky
with higher sensitivity and resolution than before, computational challenges in
image reconstruction need to be overcome to realize the potential of
forthcoming telescopes. New methods in sparse image reconstruction and convex
optimization techniques (cf. compressive sensing) have been shown to produce
higher-fidelity reconstructions of simulations and real observations than
traditional
methods. This article presents distributed and parallel algorithms and
implementations to perform sparse image reconstruction, with significant
practical considerations that are important for implementing these algorithms
for Big Data. We benchmark the algorithms presented, showing that they are
considerably faster than their serial equivalents. We then pre-sample gridding
kernels to scale the distributed algorithms to larger data sizes, showing
application times for 1 Gb to 2.4 Tb data sets over 25 to 100 nodes for up to
50 billion visibilities, and find that the run-times for the distributed
algorithms range from 100 milliseconds to 3 minutes per iteration. This work
presents an important step in working towards computationally scalable and
efficient algorithms and implementations that are needed to image observations
of both extended and compact sources from next generation radio interferometers
such as the SKA. The algorithms are implemented in the latest versions of the
SOPT (https://github.com/astro-informatics/sopt) and PURIFY
(https://github.com/astro-informatics/purify) software packages (versions
3.1.0), which have been released alongside this article. Comment: 25 pages,
5 figures
Practical recommendations for gradient-based training of deep architectures
Learning algorithms related to artificial neural networks and in particular
for Deep Learning may seem to involve many bells and whistles, called
hyper-parameters. This chapter is meant as a practical guide with
recommendations for some of the most commonly used hyper-parameters, in
particular in the context of learning algorithms based on back-propagated
gradient and gradient-based optimization. It also discusses how to deal with
the fact that more interesting results can be obtained when allowing one to
adjust many hyper-parameters. Overall, it describes elements of the practice
used to successfully and efficiently train and debug large-scale and often deep
multi-layer neural networks. It closes with open questions about the training
difficulties observed with deeper architectures.
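One of the chapter's central recommendations, searching the learning rate on a logarithmic grid because it is usually the single most important hyper-parameter, can be illustrated with a toy experiment. The model, data, and grid values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((256, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true  # noiseless linear data keeps the example tiny

def final_loss(lr, epochs=50, batch=32):
    """Mini-batch SGD on squared error; returns the final training loss."""
    w = np.zeros(4)
    for _ in range(epochs):
        order = rng.permutation(len(X))
        for start in range(0, len(X), batch):
            b = order[start:start + batch]
            grad = X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad
    return float(np.mean((X @ w - y) ** 2))

# Log-spaced learning-rate grid, in the spirit of the chapter's advice:
# too small trains slowly; larger values converge much faster,
# until the rate becomes large enough to cause instability.
for lr in (1e-4, 1e-2, 1e-1):
    print(lr, final_loss(lr))
```

The same log-scale search pattern extends to the other hyper-parameters the chapter discusses (mini-batch size, momentum, number of hidden units), since their useful ranges also span orders of magnitude.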