Robustness of Planar Fourier Capture Arrays to Colour Changes and Lost Pixels
Planar Fourier capture arrays (PFCAs) are optical sensors built entirely in
standard microchip manufacturing flows. PFCAs are composed of ensembles of
angle sensitive pixels (ASPs) that each report a single coefficient of the
Fourier transform of the far-away scene. Here we characterize the performance
of PFCAs under the following three non-optimal conditions. First, we show that
PFCAs can operate while sensing light of a wavelength other than the design
point. Second, if only a randomly-selected subset of 10% of the ASPs is
functional, we can nonetheless reconstruct the entire far-away scene using
compressed sensing. Third, if the wavelength of the imaged light is unknown, it
can be inferred by demanding self-consistency of the outputs.
Comment: 15 pages including cover page, 12 figures; associated with the 9th
International Conference on Position Sensitive Detectors.
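The 10%-of-pixels result above rests on standard sparse recovery from random linear measurements. The sketch below is a hypothetical toy stand-in (not the authors' PFCA data or reconstruction code): a sparse "scene" is recovered from a reduced set of random measurements by iterative soft-thresholding; all sizes and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in (not the authors' data): a k-sparse "scene" x of length n,
# observed through m random linear measurements -- playing the role of the
# surviving subset of ASPs, each reporting one coefficient.
n, m, k = 128, 60, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true

# Iterative soft-thresholding (ISTA) for min 0.5*||y - A x||_2^2 + lam*||x||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(3000):
    z = x - A.T @ (A @ x - y) / L      # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # relative error
```

Even with fewer than half as many measurements as unknowns, the sparse scene is recovered accurately, which is the mechanism that tolerates large fractions of dead pixels.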
Image Fusion via Sparse Regularization with Non-Convex Penalties
The L1 norm regularized least squares method is often used for finding sparse
approximate solutions and is widely used in 1-D signal restoration. Basis
pursuit denoising (BPD) performs noise reduction in this way. However, the
shortcoming of using L1 norm regularization is the underestimation of the true
solution. Recently, a class of non-convex penalties has been proposed to
improve this situation. Each such penalty function is non-convex itself, but
preserves the convexity of the overall cost function. This approach has
been confirmed to offer good performance in 1-D signal denoising. This paper
extends the aforementioned method to 2-D signals (images) and applies it
to multisensor image fusion. The problem is posed as an inverse one and a
corresponding cost function is judiciously designed to include two data
attachment terms. The whole cost function is proved to be convex upon suitably
choosing the non-convex penalty, so that the cost function minimization can be
tackled by convex optimization approaches, which comprise simple computations.
The performance of the proposed method is benchmarked against a number of
state-of-the-art image fusion techniques and superior performance is
demonstrated both visually and in terms of various assessment measures.
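The convexity-preserving non-convex penalty idea can be illustrated in its simplest 1-D denoising form. The sketch below uses toy data, not the paper's fusion formulation: it contrasts soft thresholding (the proximal operator of the L1 penalty, which biases large coefficients toward zero) with firm thresholding (the proximal operator of a minimax-concave penalty, which leaves large coefficients unbiased); all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D denoising sketch (hypothetical data, not the paper's fusion setup):
# soft thresholding underestimates large coefficients by lam; firm
# thresholding, the prox of the minimax-concave (MC) penalty, does not.
n = 200
x_true = np.zeros(n)
idx = rng.choice(n, 8, replace=False)
x_true[idx] = rng.uniform(2.0, 4.0, size=8) * rng.choice([-1, 1], size=8)
y = x_true + 0.3 * rng.normal(size=n)

lam, a = 0.9, 3.0   # a > 1 is required by the firm-threshold formula and
                    # keeps the per-sample denoising cost convex

def soft(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def firm(v, lam, a):
    # MC-penalty prox: zero below lam, linear ramp up to a*lam, identity beyond
    return np.where(np.abs(v) <= a * lam,
                    a / (a - 1.0) * soft(v, lam),
                    v)

err_soft = np.linalg.norm(soft(y, lam) - x_true)
err_firm = np.linalg.norm(firm(y, lam, a) - x_true)
print(err_soft, err_firm)
```

The firm threshold yields a smaller error because it removes the systematic amplitude underestimation that the abstract identifies as the shortcoming of L1 regularization.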
New Algorithms for Compressed Sensing of MRI: WTWTS, DWTS, WDWTS
Magnetic resonance imaging (MRI) is one of the most accurate imaging techniques and can detect several diseases where other imaging modalities fail. However, MRI data takes a long time to capture, and remaining still throughout the acquisition is a painstaking process for the patient. It is also hard on the doctor: if the images are not captured correctly, they may lead to a wrong diagnosis that could put the patient's life in danger. Since long scanning time is one of the most serious drawbacks of the MRI modality, reducing acquisition time is a crucial challenge. Compressed Sensing (CS) theory is an appealing framework for addressing this issue, since it provides theoretical guarantees on the reconstruction of sparse signals from projections onto a low-dimensional linear subspace. Further enhancements have extended the CS framework by performing Variable Density Sampling (VDS) or by using the wavelet domain as the sparsity basis. Recent work in this direction considers parent-child relations across wavelet levels.
This paper further extends the prior approach by using the entire wavelet tree structure as evidence of coefficient correlation, and also accounts for the directionality of wavelet coefficients using Hybrid Directional Wavelets (HDW). Incorporating coefficient thresholding in both the wavelet tree structure and the directional wavelet tree structure, the experiments reveal a higher Signal to Noise Ratio (SNR), a higher Peak Signal to Noise Ratio (PSNR), and a lower Mean Square Error (MSE) for the CS-based image reconstruction. Exploiting the sparsity of the wavelet tree with the above-mentioned techniques further reduces the data needed for reconstruction while improving the reconstruction result. These techniques are applied to a variety of images, including both MRI and non-MRI data. The results show the efficacy of our techniques.
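Variable Density Sampling can be sketched in one dimension. The mask below is an illustrative construction (its density profile and parameters are assumptions, not the paper's): low spatial frequencies, which carry most MRI energy, are retained with high probability, while high frequencies are sampled sparsely.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch of a 1-D variable-density sampling mask for k-space (toy parameters):
# keep the centre of k-space almost surely, thin out the periphery.
n = 256
freq = np.abs(np.arange(n) - n // 2) / (n // 2)   # 0 at centre, ~1 at edges
prob = np.minimum(1.0, 0.05 + (1.0 - freq) ** 4)  # dense centre, sparse edge
mask = rng.random(n) < prob

frac = mask.sum() / n
print(frac)                                       # fraction of k-space kept
print(mask[n // 2 - 8:n // 2 + 8].mean(), mask[:16].mean())
```

Only about a quarter of k-space is acquired, yet the densely sampled centre preserves image contrast, which is what makes CS reconstruction from such masks effective.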
Projected Newton Method for noise constrained Tikhonov regularization
Tikhonov regularization is a popular approach to obtain a meaningful solution
for ill-conditioned linear least squares problems. A relatively simple way of
choosing a good regularization parameter is given by Morozov's discrepancy
principle. However, most approaches require the solution of the Tikhonov
problem for many different values of the regularization parameter, which is
computationally demanding for large scale problems. We propose a new and
efficient algorithm which simultaneously solves the Tikhonov problem and finds
the corresponding regularization parameter such that the discrepancy principle
is satisfied. We achieve this by formulating the problem as a nonlinear system
of equations and solving this system using a line search method. We obtain a
good search direction by projecting the problem onto a low dimensional Krylov
subspace and computing the Newton direction for the projected problem. This
projected Newton direction, which is significantly less computationally
expensive to calculate than the true Newton direction, is then combined with a
backtracking line search to obtain a globally convergent algorithm, which we
refer to as the Projected Newton method. We prove convergence of the algorithm
and illustrate the improved performance over current state-of-the-art solvers
with some numerical experiments.
Homotopy based algorithms for ℓ0-regularized least-squares
Sparse signal restoration is usually formulated as the minimization of a
quadratic cost function ‖y − Ax‖₂², where A is a dictionary and x is an
unknown sparse vector. It is well-known that imposing an ℓ0 constraint
leads to an NP-hard minimization problem. The convex relaxation approach has
received considerable attention, where the ℓ0-norm is replaced by the
ℓ1-norm. Among the many efficient solvers, the homotopy
algorithm minimizes ‖y − Ax‖₂² + λ‖x‖₁ with respect to x for a
continuum of λ's. It is inspired by the piecewise regularity of the
ℓ1-regularization path, also referred to as the homotopy path. In this
paper, we address the minimization of ‖y − Ax‖₂² + λ‖x‖₀ for a
continuum of λ's and propose two heuristic search algorithms for
ℓ0-homotopy. Continuation Single Best Replacement is a forward-backward
greedy strategy extending the Single Best Replacement algorithm, previously
proposed for ℓ0-minimization at a given λ. The adaptive search of
the λ-values is inspired by ℓ1-homotopy. Regularization
Path Descent is a more complex algorithm exploiting the structural properties
of the ℓ0-regularization path, which is piecewise constant with respect
to λ. Both algorithms are empirically evaluated for difficult inverse
problems involving ill-conditioned dictionaries. Finally, we show that they can
be easily coupled with usual methods of model order selection.
Comment: 38 pages
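A minimal forward-backward sketch in the spirit of Single Best Replacement, for one fixed λ (the continuation over a continuum of λ's that the paper contributes is omitted): at each step, the single atom insertion or removal that most decreases J(x) = ‖y − Ax‖₂² + λ‖x‖₀ is applied. The data and parameters are toy assumptions, not the paper's benchmarks.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy sparse problem: y = A x_true with a k-sparse x_true (noiseless).
m, n, k = 50, 100, 4
A = rng.normal(size=(m, n))
A /= np.linalg.norm(A, axis=0)                   # unit-norm atoms
support_true = rng.choice(n, k, replace=False)
x_true = np.zeros(n)
x_true[support_true] = rng.uniform(1.0, 2.0, k) * rng.choice([-1, 1], k)
y = A @ x_true

def cost(S, lam):
    # J restricted to support S, with x the least-squares fit on S
    if not S:
        return np.linalg.norm(y) ** 2
    As = A[:, sorted(S)]
    xs, *_ = np.linalg.lstsq(As, y, rcond=None)
    return np.linalg.norm(y - As @ xs) ** 2 + lam * len(S)

lam = 0.1
S = set()
J = cost(S, lam)
while True:
    best_J, best_move = J, None
    for j in range(n):                           # try toggling each atom
        Jt = cost(S ^ {j}, lam)                  # ^ adds j if absent, removes if present
        if Jt < best_J - 1e-12:
            best_J, best_move = Jt, j
    if best_move is None:                        # no single move improves J
        break
    S ^= {best_move}
    J = best_J

print(sorted(S), sorted(support_true))
```

Because removals are allowed, the algorithm can undo an early wrong atom choice, which is what distinguishes this forward-backward scheme from purely greedy pursuits.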
Continuous Action Recognition Based on Sequence Alignment
Continuous action recognition is more challenging than isolated recognition
because classification and segmentation must be simultaneously carried out. We
build on the well known dynamic time warping (DTW) framework and devise a novel
visual alignment technique, namely dynamic frame warping (DFW), which performs
isolated recognition based on per-frame representation of videos, and on
aligning a test sequence with a model sequence. Moreover, we propose two
extensions that enable recognition to be performed concomitantly with
segmentation, namely one-pass DFW and two-pass DFW. These two methods have their roots in the
domain of continuous recognition of speech and, to the best of our knowledge,
their extension to continuous visual action recognition has been overlooked. We
test and illustrate the proposed techniques with a recently released dataset
(RAVEL) and with two public-domain datasets widely used in action recognition
(Hollywood-1 and Hollywood-2). We also compare the performances of the proposed
isolated and continuous recognition algorithms with several recently published
methods.
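The alignment machinery underlying DFW is classical dynamic time warping. A textbook sketch (scalar features for brevity, not the paper's per-frame video representation): the DTW distance is zero for two sequences that differ only by a time warp, and large for sequences with different content.

```python
import numpy as np

# Standard DTW distance between two 1-D sequences via dynamic programming.
def dtw(a, b):
    na, nb = len(a), len(b)
    D = np.full((na + 1, nb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            c = abs(a[i - 1] - b[j - 1])         # per-frame matching cost
            D[i, j] = c + min(D[i - 1, j],       # insertion
                              D[i, j - 1],       # deletion
                              D[i - 1, j - 1])   # match
    return D[na, nb]

seq       = [0.0, 1.0, 2.0, 1.0, 0.0]
stretched = [0.0, 0.0, 1.0, 1.0, 2.0, 1.0, 0.0]  # same shape, warped in time
shifted   = [5.0, 6.0, 7.0, 6.0, 5.0]            # different content

print(dtw(seq, stretched))   # 0.0: warping absorbs the time stretch
print(dtw(seq, shifted))     # large: content differs
```

The one-pass and two-pass continuous variants replace the single model sequence with a network of per-class models, so that segmentation boundaries emerge from the same dynamic program.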