Reconstructing Human Pose from Inertial Measurements: A Generative Model-based Compressive Sensing Approach
The ability to sense, localize, and estimate the 3D position and orientation
of the human body is critical in virtual reality (VR) and extended reality (XR)
applications. This becomes more important and challenging with the deployment
of VR/XR applications over the next generation of wireless systems such as 5G
and beyond. In this paper, we propose a novel framework that can reconstruct
the 3D human body pose of the user given sparse measurements from Inertial
Measurement Unit (IMU) sensors over a noisy wireless environment. Specifically,
our framework enables reliable transmission of compressed IMU signals through
noisy wireless channels and effective recovery of such signals at the receiver,
e.g., an edge server. This task is very challenging due to the constraints of
transmit power, recovery accuracy, and recovery latency. To address these
challenges, we first develop a deep generative model at the receiver to recover
the data from linear measurements of IMU signals. The linear measurements of
the IMU signals are obtained by a linear projection with a measurement matrix
based on the compressive sensing theory. The key to the success of our
framework lies in the novel design of the measurement matrix at the
transmitter, which can not only satisfy power constraints for the IMU devices
but also obtain a highly accurate recovery for the IMU signals at the receiver.
This can be achieved by extending the set-restricted eigenvalue condition of
the measurement matrix and combining it with an upper bound for the power
transmission constraint. Our framework can achieve robust performance for
recovering 3D human poses from noisy compressed IMU signals. Additionally, our
pre-trained deep generative model achieves signal reconstruction accuracy
comparable to an optimization-based approach, i.e., Lasso, but is an order of
magnitude faster.
Structured Sub-Nyquist Sampling with Applications in Compressive Toeplitz Covariance Estimation, Super-Resolution and Phase Retrieval
Sub-Nyquist sampling has received a huge amount of interest in the past decade. In classical compressed sensing theory, if the measurement procedure satisfies a condition known as the Restricted Isometry Property (RIP), signals with low-dimensional intrinsic structure can be stably recovered from an order-wise optimal number of samples. Such low-dimensional structures include sparsity and low rank, in both the vector and matrix cases. The main drawback of conventional compressed sensing theory is that random measurements are required to ensure the RIP. However, in many applications, such as imaging and array signal processing, applying independent random measurements may not be practical because the systems are deterministic. Moreover, compressed sensing with random measurements typically relies on convex programs for signal recovery even in the noiseless case, and solving those programs is computationally intensive when the ambient dimension is large, especially in the matrix case. The main contribution of this dissertation is a deterministic sub-Nyquist sampling framework for compressing structured signals, together with computationally efficient algorithms. Besides the widely studied sparse and low-rank structures, we particularly focus on cases in which the signals of interest are stationary or the measurements are of Fourier type. The key difference between our work and classical compressed sensing theory is that we explicitly exploit the second-order statistics of the signals and study the equivalent quadratic measurement model in the correlation domain. The essential observation made in this dissertation is that a difference/sum coarray structure arises from the quadratic model when the measurements are of Fourier type. With these observations, we are able to achieve a better compression rate for covariance estimation, identify more sources in array signal processing, or recover signals of larger sparsity.
In this dissertation, we first study the problem of Toeplitz covariance estimation. In particular, we show how to achieve an order-wise optimal compression rate using the idea of sparse arrays, in both the general and low-rank cases. Then, an analysis framework for super-resolution with a positivity constraint is established. We present fundamental robustness guarantees, efficient algorithms, and applications in practice. Next, we study the problem of phase retrieval, to which we successfully apply sparse array ideas by fully exploiting the quadratic measurement model. We achieve near-optimal sample complexity in both the sparse and general cases with practical Fourier measurements and provide efficient, deterministic recovery algorithms. We then further elaborate on the essential role of the non-negativity constraint in underdetermined inverse problems. In particular, we analyze the nonlinear co-array interpolation problem and develop a universal upper bound on the interpolation error. Bilinear problems with non-negativity constraints are considered next, and an exact characterization of the ambiguous solutions is established for the first time in the literature. Finally, we show how to apply the nested array idea to practical problems such as Kriging. Using spatial correlation information, we obtain a stable estimate of the field of interest with fewer sensors than classical methodologies require. Extensive numerical experiments are provided to support our theoretical claims.
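The difference-coarray observation above can be illustrated numerically: a nested array with O(N) physical sensors yields O(N^2) contiguous correlation lags, which is what enables the improved compression rates for covariance estimation. The array parameters below are an illustrative choice, not ones from the dissertation.

```python
import numpy as np

# A 2-level nested array: dense inner ULA plus a sparser outer subarray
# (illustrative parameters N1 = N2 = 3).
N1, N2 = 3, 3
inner = np.arange(1, N1 + 1)                 # positions 1, 2, 3
outer = (N1 + 1) * np.arange(1, N2 + 1)      # positions 4, 8, 12
pos = np.concatenate([inner, outer])

# Difference coarray: every pairwise lag n_i - n_j between sensor positions.
lags = np.unique(pos[:, None] - pos[None, :])
print(len(pos), "physical sensors ->", len(lags), "distinct lags")
# prints: 6 physical sensors -> 23 distinct lags

# The lags form a contiguous ULA from -(N2*(N1+1)-1) to N2*(N1+1)-1,
# so O(N^2) correlation values are observed from only O(N) sensors.
```

For a stationary signal, the Toeplitz covariance depends only on these lags, so the 6-sensor array gives access to the same 23 correlation values as a 12-element uniform array.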
Quantitative analysis of algorithms for compressed signal recovery
Compressed Sensing (CS) is an emerging paradigm in which signals are recovered from undersampled
nonadaptive linear measurements taken at a rate proportional to the signal's true
information content as opposed to its ambient dimension. The resulting problem consists in finding a sparse solution to an underdetermined system of linear equations. It has now been
established, both theoretically and empirically, that certain optimization algorithms are able
to solve such problems. Iterative Hard Thresholding (IHT) (Blumensath and Davies, 2007),
which is the focus of this thesis, is an established CS recovery algorithm which is known to
be effective in practice, both in terms of recovery performance and computational efficiency.
However, theoretical analysis of IHT to date suffers from two drawbacks: state-of-the-art worst-case recovery conditions have not yet been quantified in terms of the sparsity/undersampling trade-off, and average-case analysis is needed to understand the behaviour of the algorithm in practice.
In this thesis, we present a new recovery analysis of IHT, which considers the fixed points of
the algorithm. In the context of arbitrary matrices, we derive a condition guaranteeing convergence
of IHT to a fixed point, and a condition guaranteeing that all fixed points are 'close' to
the underlying signal. If both conditions are satisfied, signal recovery is therefore guaranteed.
Next, we analyse these conditions in the case of Gaussian measurement matrices, exploiting
the realistic average-case assumption that the underlying signal and measurement matrix are
independent. We obtain asymptotic phase transitions in a proportional-dimensional framework,
quantifying the sparsity/undersampling trade-off for which recovery is guaranteed. By generalizing
the notion of fixed points, we extend our analysis to the variable-stepsize Normalised IHT
(NIHT) (Blumensath and Davies, 2010). For both stepsize schemes, comparison with previous
results within this framework shows a substantial quantitative improvement.
We also extend our analysis to a related algorithm which exploits the assumption that the
underlying signal exhibits tree-structured sparsity in a wavelet basis (Baraniuk et al., 2010).
We obtain recovery conditions for Gaussian matrices in a simplified proportional-dimensional
asymptotic, deriving bounds on the oversampling rate relative to the sparsity for which recovery
is guaranteed. Our results, which are the first in the phase transition framework for tree-based
CS, show a further significant improvement over results for the standard sparsity model. We
also propose a dynamic programming algorithm which is guaranteed to compute an exact tree
projection in low-order polynomial time.
Compressed Sensing Based Reconstruction Algorithm for X-ray Dose Reduction in Synchrotron Source Micro Computed Tomography
Synchrotron computed tomography requires a large number of angular projections to reconstruct tomographic images with high resolution for detailed and accurate diagnosis. However, this exposes the specimen to a large amount of x-ray radiation. Furthermore, this increases scan time and, consequently, the likelihood of involuntary specimen movements. One approach for decreasing the total scan time and radiation dose is to reduce the number of projection views needed to reconstruct the images. However, the aliasing artifacts that appear in the image due to the reduced number of projections visibly degrade the image quality. According to compressed sensing theory, a signal can be accurately reconstructed from highly undersampled data by solving an optimization problem, provided that the signal can be sparsely represented in a predefined transform domain. Therefore, this thesis is mainly concerned with designing compressed sensing-based reconstruction algorithms to suppress aliasing artifacts while preserving spatial resolution in the resulting reconstructed image. First, the reduced-view synchrotron computed tomography reconstruction is formulated as a total variation regularized compressed sensing problem. The Douglas-Rachford Splitting and the randomized Kaczmarz methods are utilized to solve the optimization problem of the compressed sensing formulation.
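As a toy illustration of the randomized Kaczmarz method named above, the sketch below solves a consistent linear system by projecting the iterate onto one randomly chosen row hyperplane per step, sampling rows with probability proportional to their squared norms. The system here is a generic Gaussian one standing in for the CT projection operator; the sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 300, 50                       # overdetermined, consistent system (illustrative)
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true                       # consistent: b lies in the range of A

def randomized_kaczmarz(A, b, n_iter=5000):
    """Each step projects x onto the hyperplane a_i^T x = b_i for a random row i."""
    row_norms = np.sum(A ** 2, axis=1)
    p = row_norms / row_norms.sum()  # sample rows ~ ||a_i||^2 (Strohmer-Vershynin)
    x = np.zeros(A.shape[1])
    for i in rng.choice(A.shape[0], size=n_iter, p=p):
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]   # orthogonal projection
    return x

x_hat = randomized_kaczmarz(A, b)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Each projection touches a single row, which is why Kaczmarz-type sweeps are attractive for tomography, where a row corresponds to one ray measurement and the full system never needs to be formed at once.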
In contrast with the first part, where consistent simulated projection data are generated for image reconstruction, the reduced-view inconsistent real ex-vivo synchrotron absorption contrast micro computed tomography bone data are used in the second part. A gradient regularized compressed sensing problem is formulated, and the Douglas-Rachford Splitting and the preconditioned conjugate gradient methods are utilized to solve the optimization problem of the compressed sensing formulation. The wavelet image denoising algorithm is used as the post-processing algorithm to attenuate the unwanted staircase artifact generated by the reconstruction algorithm.
Finally, noisy and highly reduced-view inconsistent real in-vivo synchrotron phase-contrast computed tomography bone data are used for image reconstruction. A combination of the prior image constrained compressed sensing framework and wavelet regularization is formulated, and the Douglas-Rachford Splitting and preconditioned conjugate gradient methods are utilized to solve the optimization problem of the compressed sensing formulation. The prior image constrained compressed sensing framework takes advantage of the prior image to promote the sparsity of the target image. However, it may produce an unwanted staircase artifact when applied to noisy and textured images, so wavelet regularization is used to attenuate the staircase artifact generated by the prior image constrained compressed sensing reconstruction algorithm.
The visual and quantitative performance assessments with the reduced-view simulated and real computed tomography data from canine prostate tissue, rat forelimb, and femoral cortical bone samples show that the proposed algorithms produce fewer artifacts and lower reconstruction errors than conventional reconstruction algorithms at the same x-ray dose.
Compressive Sensing Applications in Measurement: Theoretical issues, algorithm characterization and implementation
At its core, signal acquisition is concerned with efficient algorithms and protocols capable of capturing and encoding a signal's information content. For over five decades, the indisputable theoretical benchmark has been the well-known Shannon sampling theorem, and the corresponding notion of information has been indissolubly tied to signal spectral bandwidth.
Contemporary society is founded on the almost instantaneous exchange of information, which is mainly conveyed in digital form. Accordingly, modern communication devices are expected to cope with huge amounts of data in a typical sequence of steps comprising acquisition, processing, and storage. Despite continual technological progress, the conventional acquisition protocol has come under mounting pressure, as it requires a computational effort unrelated to the actual signal information content.
In recent years, a novel sensing paradigm known as Compressive Sensing (CS) has quickly spread across several branches of information theory. It relies on two main principles, signal sparsity and incoherent sampling, and employs them to acquire the signal directly in a condensed form. The sampling rate is related to the signal's information rate rather than to its spectral bandwidth. Given a sparse signal, its information content can be recovered even from what might appear to be an incomplete set of measurements, at the expense of greater computational effort at the reconstruction stage.
My Ph.D. thesis builds on the field of Compressive Sensing and illustrates how sparsity and incoherence properties can be exploited to design efficient sensing strategies, or to intimately understand the sources of uncertainty that affect measurements.
The research activity has dealt with both theoretical and practical issues, inferred from measurement application contexts ranging from radio frequency communications to synchrophasor estimation and neurological activity investigation.
The thesis is organised in four chapters whose key contributions include:
• definition of a general mathematical model for sparse signal acquisition systems,
with particular focus on sparsity and incoherence implications;
• characterization of the main algorithmic families for recovering sparse signals
from reduced set of measurements, with particular focus on the impact of additive noise;
• implementation and experimental validation of a CS-based algorithm for providing accurate preliminary information and suitably preprocessed data to a vector signal analyser or a cognitive radio application;
• design and characterization of a CS-based super-resolution technique for spectral analysis in the discrete Fourier transform (DFT) domain;
• definition of an overcomplete dictionary which explicitly accounts for the spectral leakage effect;
• insight into the so-called off-the-grid estimation approach, obtained by properly combining CS-based super-resolution with polar interpolation of DFT coefficients;
• exploration and analysis of sparsity implications in quasi-stationary operative conditions, emphasizing the importance of time-varying sparse signal models;
• definition of an enhanced spectral content model for spectral analysis applications in dynamic conditions by means of Taylor-Fourier transform (TFT) approaches.
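The overcomplete-dictionary idea in the contributions above can be sketched as follows: refining the frequency grid by an oversampling factor relative to the plain DFT lets a simple correlation search localise a tone that falls between integer DFT bins. The window length, refinement factor, and test tone are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

N = 64                                     # samples in the observation window
over = 8                                   # frequency-grid refinement vs. the plain DFT
freqs = np.arange(N * over) / (N * over)   # normalised frequencies, cycles/sample
n_ = np.arange(N)

# Overcomplete Fourier dictionary: one unit-norm atom per fine-grid frequency
D = np.exp(2j * np.pi * np.outer(n_, freqs)) / np.sqrt(N)

# A tone at 10.25 DFT bins, i.e. between two bins of the plain N-point DFT
y = np.exp(2j * np.pi * n_ * 10.25 / N)

coarse_bin = int(np.argmax(np.abs(np.fft.fft(y))))            # nearest integer bin
fine_bin = N * freqs[int(np.argmax(np.abs(D.conj().T @ y)))]  # fine-grid estimate
print(coarse_bin, fine_bin)   # prints: 10 10.25
```

The plain DFT smears the off-bin tone across neighbouring bins (spectral leakage) and can only report the nearest integer bin, whereas the overcomplete dictionary contains an atom exactly matched to the tone, which is the mechanism the leakage-aware dictionary above exploits.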
Study of Human Muscle Structure and Function with Velocity Encoded Phase Contrast and Diffusion Tensor Magnetic Resonance Imaging Techniques
The disproportionate loss of muscle force with aging and disuse atrophy, compared to the loss of muscle mass, is not yet completely understood. In addition to well-established neural and contractile determinants of force loss, remodeling of the extracellular matrix (ECM) has recently been shown in animal models to be another important contributor. In-vivo human studies exploring the structural remodeling of the ECM and its functional consequences are lacking due to the paucity of appropriate imaging techniques. This study focuses on the development and application of advanced Magnetic Resonance Imaging (MRI) methods to elucidate the mechanisms of force loss with aging and disuse atrophy, with a focus on the ECM. Functional changes are investigated by strain and strain rate tensor mapping of muscle under different contraction paradigms using Velocity Encoded Phase-Contrast MRI. Methodological advances include improvements in hardware and software control of the dynamic studies. To overcome the limitation of long scan times, a compressed sensing MR acquisition and reconstruction framework that reduces scan times to under a minute was developed. A multi-step automated analysis pipeline to extract 3D strain/strain rate tensors from the velocity images was implemented to process the large dynamic volumes. Strain indices reflecting the material properties of the ECM were shown to correlate with force loss, leading to the hypothesis that shear strain may serve as a surrogate marker for lateral transmission of force. Diffusion tensor imaging has previously been applied to study skeletal muscle fiber architecture, but the resolution of the images precludes direct inferences about the microstructure. To address this limitation, bicompartmental and Random Permeable Barrier models of diffusion were applied to diffusion data obtained with spin-echo and custom-developed stimulated-echo echo-planar imaging sequences, respectively.
Model-derived parameters (fiber diameter, wall permeability) obtained by fitting the time-dependent diffusion data were in a physiologically reasonable range, with potential for tracking age-related changes in muscle microstructure. The developed imaging and modeling techniques were applied to a cohort of young and senior subjects and to longitudinal tracking of disuse atrophy induced by Unilateral Limb Suspension. These studies may provide a causal link between age- and disuse-related structural remodeling and its functional consequences.