Sparsity-driven sparse-aperture ultrasound imaging
We propose an image formation algorithm for ultrasound imaging based on sparsity-driven regularization functionals. We consider data collected by synthetic transducer arrays, with the primary motivating application being nondestructive evaluation. Our framework involves the use of a physical optics-based forward model of the observation process; the formulation of an optimization problem for image formation; and the solution of that problem through efficient numerical algorithms. Our sparsity-driven, model-based approach achieves the preservation of physical features while suppressing spurious artifacts. It also provides robust reconstructions in the case of sparse observation apertures. We demonstrate the effectiveness of our imaging strategy on real ultrasound data.
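To make the optimization concrete: the formulation combines a data-fidelity term with a sparsity-inducing penalty, roughly min_x ||y - Ax||^2 + lambda*||x||_1 (the paper uses more general non-quadratic, l_p-type functionals). Below is a minimal NumPy sketch solving the l_1 variant by iterative soft thresholding (ISTA); the linear operator A stands in for the physical optics-based forward model, and all names are illustrative rather than taken from the paper.

```python
# Illustrative sketch only: a generic linear forward model A and an l_1 penalty
# solved by ISTA stand in for the paper's non-quadratic regularization framework.
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Minimize ||y - A x||^2 + lam * ||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2          # squared spectral norm of A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # (half) gradient of the fidelity term
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / (2 * L), 0.0)  # shrinkage
    return x
```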
Interdisciplinary Graduate Training in the Science, Technology, and Applications of Augmented and Virtual Reality
We present the rationale, structure, and components of a new Ph.D. training program on augmented and virtual reality (AR/VR) at the University of Rochester, funded by the National Science Foundation (NSF).
Semi-blind sparse channel estimation with constant modulus symbols
We propose two methods for the estimation of sparse communication channels. In the first method, we consider the problem of channel estimation based on training symbols, and formulate it as an optimization problem. In this formulation, we combine the objective of fidelity to the received data with a non-quadratic constraint reflecting the prior information about the sparsity of the channel. This approach leads to accurate channel estimates with much shorter training sequences than conventional methods. The second method we propose is aimed at taking advantage of any available training-based data, as well as any "blind" data based on unknown, constant modulus symbols. We propose a semi-blind optimization framework making use of these two types of data, and enforcing the sparsity of the channel, as well as the constant modulus property of the symbols. This approach improves upon the channel estimates based only on training sequences, and also produces accurate estimates for the unknown symbols.
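As a rough illustration of the training-based method only, the sketch below poses sparse channel estimation as an l_1-regularized least-squares (lasso) problem over a Toeplitz convolution matrix built from known training symbols; the paper's actual non-quadratic formulation and its semi-blind, constant-modulus extension are not reproduced. The symbol count, tap locations, and regularization weight are hypothetical.

```python
# Hedged sketch: training-based sparse channel estimation via the lasso,
# standing in for the paper's non-quadratic regularized formulation.
import numpy as np
from scipy.linalg import toeplitz
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
Lh, Ns = 40, 120                                 # channel length, training length
h_true = np.zeros(Lh)
h_true[[3, 17, 31]] = [1.0, -0.6, 0.3]           # sparse channel (3 active taps)
s = rng.choice([-1.0, 1.0], size=Ns)             # known BPSK training sequence
X = toeplitz(s, np.r_[s[0], np.zeros(Lh - 1)])   # convolution (Toeplitz) matrix
y = X @ h_true + 0.05 * rng.standard_normal(Ns)  # received training samples

h_hat = Lasso(alpha=0.01, fit_intercept=False).fit(X, y).coef_
print(np.flatnonzero(np.abs(h_hat) > 0.05))      # recovered tap locations
```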
Disjunctive Normal Level Set: An Efficient Parametric Implicit Method
Level set methods are widely used for image segmentation because of their capability to handle topological changes. In this paper, we propose a novel parametric level set method called Disjunctive Normal Level Set (DNLS) and apply it to both two-phase (single-object) and multiphase (multi-object) image segmentation. The DNLS is formed by a union of polytopes, which themselves are formed by intersections of half-spaces. The proposed level set framework has the following major advantages compared to other level set methods available in the literature. First, segmentation using DNLS converges much faster. Second, the DNLS level set function remains regular throughout its evolution. Third, the proposed multiphase version of the DNLS is less sensitive to initialization, and its computational cost and memory requirements remain almost constant as the number of objects to be simultaneously segmented grows. The experimental results show the potential of the proposed method.
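The disjunctive normal construction behind DNLS can be written compactly: each polytope is a sigmoid-smoothed AND of half-space indicators, and the level set function is the OR (union) of the polytopes. A hedged NumPy sketch of that construction follows; the weights are illustrative, and the paper's evolution/update equations are not shown.

```python
# Sketch of a disjunctive-normal level-set-like function: a smoothed union of
# polytopes, each polytope a smoothed intersection of half-spaces.
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def dnls(points, W, b):
    """points: (N, d); W: (P, H, d); b: (P, H) for P polytopes of H half-spaces.
    Returns values in (0, 1); the 0.5 level set approximates the union boundary."""
    # softened half-space indicators, shape (P, H, N)
    half = sigmoid(np.einsum('phd,nd->phn', W, points) + b[..., None])
    poly = half.prod(axis=1)                 # AND (intersection) over half-spaces
    return 1.0 - (1.0 - poly).prod(axis=0)   # OR (union) over polytopes
```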
Region-enhanced passive radar imaging
The authors adapt and apply a recently developed region-enhanced synthetic aperture radar (SAR) image reconstruction technique to the problem of passive radar imaging. One goal in passive radar imaging is to form images of aircraft using signals transmitted by commercial radio and television stations that are reflected from the objects of interest. This involves reconstructing an image from sparse samples of its Fourier transform. Owing to the sparse nature of the aperture, a conventional image formation approach based on direct Fourier transformation results in quite dramatic artefacts in the image, as compared with the case of active SAR imaging. The region-enhanced image formation method considered is based on an explicit mathematical model of the observation process; hence, information about the nature of the aperture is explicitly taken into account in image formation. Furthermore, this framework allows the incorporation of prior information or constraints about the scene being imaged, which makes it possible to compensate for the limitations of the sparse apertures involved in passive radar imaging. As a result, conventional imaging artefacts, such as sidelobes, can be alleviated. Experimental results using data based on electromagnetic simulations demonstrate that this is a promising strategy for passive radar imaging, exhibiting significant suppression of artefacts, preservation of imaged object features, and robustness to measurement noise.
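For intuition, the sketch below sets up the sparse-aperture observation model as a masked 2-D Fourier transform and forms the conventional (direct Fourier, i.e. adjoint) image, which exhibits exactly the aliasing and sidelobe artefacts described above; a model-based method would instead invert this operator with regularization, e.g. with a solver like the ISTA sketch given earlier. The mask pattern and scene are illustrative only.

```python
# Sparse-aperture observation model: data are samples of the scene's 2-D
# Fourier transform on an incomplete frequency set defined by a binary mask.
import numpy as np

def forward(x, mask):
    """Scene x -> sparse Fourier samples (zero outside the aperture mask)."""
    return mask * np.fft.fft2(x)

def adjoint(y, mask):
    """Conventional (direct Fourier) image formation from sparse samples."""
    return np.fft.ifft2(mask * y)

n = 128
mask = np.zeros((n, n))
mask[:, ::4] = 1.0                               # keep every 4th frequency column
scene = np.zeros((n, n))
scene[40:50, 60:70] = 1.0                        # toy reflector
conventional = np.abs(adjoint(forward(scene, mask), mask))  # shows replicas/sidelobes
```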
Modeling differences in the time-frequency representation of EEG signals through HMM’s for classification of imaginary motor tasks
Brain-computer interfaces are systems that allow the control of external devices using information extracted from brain signals. Such systems find applications in rehabilitation, as an alternative communication channel, and in multimedia applications for entertainment and gaming. In this work, we develop a new approach based on the time-frequency (TF) distribution of the signal power, obtained by autoregressive methods, together with hidden Markov models (HMMs). This approach takes into account the changes of power in different frequency bands over time. For that purpose, HMMs are used to model the changes in power during the execution of two different motor tasks. The use of TF methods raises a problem related to the selection of frequency bands, which can lead to overfitting (due to the curse of dimensionality), as well as problems related to the selection of the model parameters. These problems are solved in this work by combining two methods for feature selection: Fisher score and sequential floating forward selection. The results are compared to the three top results of BCI Competition IV. We show that the proposed method outperforms those methods in four subjects, and its average over all subjects equals that obtained by the competition's winning algorithm.
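A hedged sketch of the classification stage: one Gaussian HMM per motor task is fit to sequences of band-power features, and a test trial is assigned to the task whose model scores it highest. The feature extraction (autoregressive spectra, Fisher-score/SFFS band selection) is assumed already done; the array names and model sizes are hypothetical.

```python
# Hedged sketch: per-task Gaussian HMMs over time-frequency band-power features,
# with maximum-likelihood classification of a test trial.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def fit_task_hmm(sequences, n_states=3):
    """sequences: list of (n_frames, n_bands) TF power trajectories for one task."""
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    return GaussianHMM(n_components=n_states, covariance_type='diag',
                       n_iter=50).fit(X, lengths)

def classify(trial, models):
    """Pick the task whose HMM assigns the trial the highest log-likelihood."""
    return int(np.argmax([m.score(trial) for m in models]))

# Hypothetical usage:
# models = [fit_task_hmm(train_left), fit_task_hmm(train_right)]
# label = classify(test_trial, models)
```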
A sparsity-driven approach to multi-camera tracking in visual sensor networks
In this paper, a sparsity-driven approach is presented for multi-camera tracking in visual sensor networks (VSNs). VSNs consist of image sensors, embedded processors, and wireless transceivers, all powered by batteries. Since the energy and bandwidth resources are limited, setting up a tracking system in VSNs is a challenging problem. Motivated by the goal of tracking in a bandwidth-constrained environment, we present a sparsity-driven method to compress the features extracted by the camera nodes, which are then transmitted across the network for distributed inference. We have designed special overcomplete dictionaries that match the structure of the features, leading to very parsimonious yet accurate representations. We have tested our method in indoor and outdoor people tracking scenarios. Our experimental results demonstrate how our approach leads to communication savings without significant loss in tracking performance.
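The compression idea can be sketched as sparse coding: a camera-node feature vector is approximated by a few atoms of an overcomplete dictionary, and only the nonzero coefficients (plus their indices) are transmitted. The random dictionary below is a stand-in for the specially designed, structure-matched dictionaries in the paper; all sizes are illustrative.

```python
# Hedged sketch: sparse coding for feature compression with orthogonal matching
# pursuit; only (index, value) pairs of the sparse code would be transmitted.
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(1)
d, n_atoms, k = 64, 256, 6                       # feature dim, dictionary size, sparsity
D = rng.standard_normal((d, n_atoms))
D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms

feature = D[:, [10, 57, 200]] @ np.array([1.0, -0.5, 0.8])    # toy camera feature
coef = orthogonal_mp(D, feature, n_nonzero_coefs=k)           # sparse code
idx = np.flatnonzero(coef)                       # transmit (idx, coef[idx]) only
reconstructed = D[:, idx] @ coef[idx]            # decoder at the fusion node
```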
Design, implementation and evaluation of a real-time P300-based brain-computer interface system
We present a new end-to-end brain-computer interface system based on electroencephalography (EEG). Our system exploits the P300 signal in the brain, a positive deflection in event-related potentials, caused by rare events. P300 can be used for various tasks, perhaps the most well-known being a spelling device. We have designed a flexible visual stimulus mechanism that can be adapted to user preferences and developed and implemented EEG signal processing, learning and classification algorithms. Our classifier is based on Bayesian linear discriminant analysis, in which we have explored various choices and improvements. We have designed data collection experiments for offline and online decision-making and have proposed modifications in the stimulus and decision-making procedure to increase online efficiency. We have evaluated the performance of our system on 8 healthy subjects on a spelling task and have observed that our system achieves higher average speed than state-of-the-art systems reported in the literature for a given classification accuracy.
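As a rough stand-in for the Bayesian LDA classifier, the sketch below scores P300 epochs with shrinkage-regularized LDA, a closely related regularized discriminant; the data here are synthetic, and the spelling decision is summarized only in a comment.

```python
# Hedged sketch: binary target/non-target scoring of EEG epochs with shrinkage
# LDA, used here as a stand-in for the paper's Bayesian LDA classifier.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n, d = 400, 120                                     # epochs, features per epoch
y = rng.integers(0, 2, n)                           # 1 = target flash (P300 present)
X = rng.standard_normal((n, d)) + 0.5 * y[:, None]  # toy class separation

clf = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto').fit(X, y)
scores = clf.decision_function(X)                   # per-flash P300 evidence
# For spelling, such scores are accumulated over repeated flashes of each
# row/column, and the highest-scoring row/column pair selects the letter.
```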
An efficient Monte Carlo approach for optimizing communication constrained decentralized estimation networks
We consider the design problem of a decentralized estimation network under communication constraints. The underlying low-capacity links are modeled by introducing a directed acyclic graph in which each node corresponds to a sensor platform. The operation of the platforms is constrained by the graph such that each node, based on its measurement and incoming messages from its parents, produces a local estimate and outgoing messages to its children. A Bayesian risk that captures both the estimation error penalty and the cost of communications, e.g. due to consumption of the limited resource of energy, together with the constraint of the feasible set of strategies by the graph, yields a rigorous problem definition. We adopt an iterative solution, previously proposed for decentralized detection networks under a team-theoretic investigation, that converges to an optimal strategy in a person-by-person sense. Provided that some reasonable assumptions hold, the solution admits a message-passing interpretation exhibiting linear complexity in the number of nodes. However, the corresponding expressions in the estimation setting contain integral operators with no closed-form solutions in general. We propose particle representations and approximate computational schemes through Monte Carlo methods so as not to compromise model accuracy, achieving an optimization method that yields an approximation to an optimal strategy for decentralized estimation networks under communication constraints. Through an example, we present a quantification of the trade-off between estimation accuracy and the cost of communications, where the former degrades as the latter is increased.
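A heavily simplified toy sketch of the particle idea, for a two-node chain only: the parent compresses its measurement into a one-bit message (the communication constraint), and the child computes a Monte Carlo (particle) approximation of the conditional mean of the state given its own measurement and that message. The person-by-person optimization of the rules themselves is not shown; all models and numbers are illustrative.

```python
# Toy sketch: particle (Monte Carlo) approximation of a child node's estimator
# conditioned on its own measurement and a 1-bit parent message.
import numpy as np

rng = np.random.default_rng(3)
N = 20000
x = rng.standard_normal(N)                       # prior particles for the state
y1 = x + 0.5 * rng.standard_normal(N)            # parent measurement particles
y2 = x + 0.5 * rng.standard_normal(N)            # child measurement particles

def child_estimate(u, y2_obs, tau=0.0, sigma=0.5):
    """Particle estimate of E[x | message u = 1{y1 > tau}, y2 = y2_obs]."""
    w = np.exp(-0.5 * ((y2_obs - x) / sigma) ** 2)    # likelihood of child's data
    w = w * ((y1 > tau) == u)                         # condition on the message
    return np.sum(w * x) / np.sum(w)

print(child_estimate(u=1, y2_obs=0.7))           # one illustrative realization
```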
Hyper-parameter selection in non-quadratic regularization-based radar image formation
We consider the problem of automatic parameter selection in regularization-based radar image formation techniques. It has previously been shown that non-quadratic regularization produces feature-enhanced radar images; can yield superresolution; is robust to uncertain or limited data; and can generate enhanced images in non-conventional data collection scenarios such as sparse aperture imaging. However, this regularized imaging framework involves some hyper-parameters, whose choice is crucial because it directly affects the characteristics of the reconstruction. Hence there is interest in developing methods for automatic parameter choice. We investigate Stein's unbiased risk estimator (SURE) and generalized cross-validation (GCV) for automatic selection of hyper-parameters in regularized radar imaging. We present experimental results based on the Air Force Research Laboratory (AFRL) "Backhoe Data Dome" to demonstrate and discuss the effectiveness of these methods.
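To make GCV concrete, the sketch below scores a regularization weight for the quadratic (Tikhonov/ridge) special case, min ||y - Ax||^2 + lam*||x||^2, where the hat matrix has a closed form via the SVD; this is a simpler stand-in for the paper's non-quadratic setting. SURE would score the same grid with an unbiased risk estimate instead.

```python
# Hedged sketch: GCV score for a ridge regularization weight, computed from the
# SVD of the forward operator; the lam minimizing this score is selected.
import numpy as np

def gcv_score(A, y, lam):
    """GCV(lam) = (RSS/n) / (1 - dof/n)^2 for ridge with weight lam."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    Uty = U.T @ y
    shrink = s**2 / (s**2 + lam)                  # diagonal of the hat matrix
    resid = np.sum(((1 - shrink) * Uty) ** 2) + (y @ y - Uty @ Uty)
    dof = np.sum(shrink)                          # effective degrees of freedom
    n = len(y)
    return (resid / n) / (1 - dof / n) ** 2

# Hypothetical usage over a grid of candidate weights:
# lam_best = min(lam_grid, key=lambda lam: gcv_score(A, y, lam))
```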