
    Reconstructing Human Pose from Inertial Measurements: A Generative Model-based Compressive Sensing Approach

    The ability to sense, localize, and estimate the 3D position and orientation of the human body is critical in virtual reality (VR) and extended reality (XR) applications. This becomes even more important, and more challenging, with the deployment of VR/XR applications over next-generation wireless systems such as 5G and beyond. In this paper, we propose a novel framework that reconstructs the 3D human body pose of the user given sparse measurements from Inertial Measurement Unit (IMU) sensors transmitted over a noisy wireless environment. Specifically, our framework enables reliable transmission of compressed IMU signals through noisy wireless channels and effective recovery of such signals at the receiver, e.g., an edge server. This task is very challenging due to constraints on transmit power, recovery accuracy, and recovery latency. To address these challenges, we first develop a deep generative model at the receiver to recover the data from linear measurements of the IMU signals. The linear measurements are obtained by a linear projection with a measurement matrix, following compressive sensing theory. The key to the success of our framework lies in the novel design of the measurement matrix at the transmitter, which not only satisfies the power constraints of the IMU devices but also enables highly accurate recovery of the IMU signals at the receiver. This is achieved by extending the set-restricted eigenvalue condition of the measurement matrix and combining it with an upper bound derived from the power transmission constraint. Our framework achieves robust performance in recovering 3D human poses from noisy compressed IMU signals. Additionally, our pre-trained deep generative model achieves signal reconstruction accuracy comparable to an optimization-based approach, i.e., Lasso, while being an order of magnitude faster.
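
    As a rough illustration of the measurement-and-recovery pipeline, the following numpy sketch projects a sparse signal with a Gaussian measurement matrix scaled to respect a transmit-power budget and recovers it with the Lasso baseline mentioned above, solved here by ISTA. The deep generative decoder and the set-restricted eigenvalue analysis from the paper are not reproduced; the dimensions, power budget P, noise level, and regularization weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8            # signal length, measurements, sparsity (assumed)

# Sparse stand-in for the (suitably represented) IMU signal.
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Gaussian measurement matrix, rescaled so the transmitted energy ||Ax||^2
# meets a power budget P -- a crude stand-in for the paper's power constraint.
P = 1.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
A *= np.sqrt(P) / np.linalg.norm(A @ x)

y = A @ x + 0.01 * rng.standard_normal(m)      # noisy wireless channel (AWGN)

# Lasso recovery via ISTA: minimize 0.5*||y - Az||^2 + lam*||z||_1.
lam = 5e-3                                     # regularization weight (assumed)
step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1/||A||_2^2 ensures convergence
z = np.zeros(n)
for _ in range(2000):
    g = z - step * (A.T @ (A @ z - y))                         # gradient step
    z = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)   # soft threshold

print("relative recovery error:", np.linalg.norm(z - x) / np.linalg.norm(x))
```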

    Quantitative analysis of algorithms for compressed signal recovery

    Compressed Sensing (CS) is an emerging paradigm in which signals are recovered from undersampled nonadaptive linear measurements taken at a rate proportional to the signal's true information content as opposed to its ambient dimension. The resulting problem consists in finding a sparse solution to an underdetermined system of linear equations. It has now been established, both theoretically and empirically, that certain optimization algorithms are able to solve such problems. Iterative Hard Thresholding (IHT) (Blumensath and Davies, 2007), which is the focus of this thesis, is an established CS recovery algorithm which is known to be effective in practice, both in terms of recovery performance and computational efficiency. However, theoretical analysis of IHT to date suffers from two drawbacks: state-of-the-art worst-case recovery conditions have not yet been quantified in terms of the sparsity/undersampling trade-off, and also there is a need for average-case analysis in order to understand the behaviour of the algorithm in practice. In this thesis, we present a new recovery analysis of IHT, which considers the fixed points of the algorithm. In the context of arbitrary matrices, we derive a condition guaranteeing convergence of IHT to a fixed point, and a condition guaranteeing that all fixed points are 'close' to the underlying signal. If both conditions are satisfied, signal recovery is therefore guaranteed. Next, we analyse these conditions in the case of Gaussian measurement matrices, exploiting the realistic average-case assumption that the underlying signal and measurement matrix are independent. We obtain asymptotic phase transitions in a proportional-dimensional framework, quantifying the sparsity/undersampling trade-off for which recovery is guaranteed. By generalizing the notion of fixed points, we extend our analysis to the variable stepsize Normalised IHT (NIHT) (Blumensath and Davies, 2010). For both stepsize schemes, comparison with previous results within this framework shows a substantial quantitative improvement. We also extend our analysis to a related algorithm which exploits the assumption that the underlying signal exhibits tree-structured sparsity in a wavelet basis (Baraniuk et al., 2010). We obtain recovery conditions for Gaussian matrices in a simplified proportional-dimensional asymptotic, deriving bounds on the oversampling rate relative to the sparsity for which recovery is guaranteed. Our results, which are the first in the phase transition framework for tree-based CS, show a further significant improvement over results for the standard sparsity model. We also propose a dynamic programming algorithm which is guaranteed to compute an exact tree projection in low-order polynomial time.
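
    For readers unfamiliar with the algorithm under analysis, here is a minimal numpy sketch of plain IHT with a fixed unit stepsize. The fixed-point and phase-transition analysis, the normalised (variable-stepsize) variant, and the tree-projection algorithm from the thesis are not reproduced, and the problem sizes below are illustrative assumptions.

```python
import numpy as np

def iht(A, y, k, step=1.0, iters=300):
    """Iterative Hard Thresholding: x <- H_k(x + step * A^T (y - A x)),
    where H_k keeps the k largest-magnitude entries and zeroes the rest."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + step * (A.T @ (y - A @ x))      # gradient step toward the data
        keep = np.argsort(np.abs(g))[-k:]       # indices of the k largest entries
        x = np.zeros_like(g)
        x[keep] = g[keep]                       # hard threshold
    return x

rng = np.random.default_rng(1)
n, m, k = 256, 128, 8                           # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)    # Gaussian measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

x_hat = iht(A, y, k)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```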

    Compressed Sensing Based Reconstruction Algorithm for X-ray Dose Reduction in Synchrotron Source Micro Computed Tomography

    Synchrotron computed tomography requires a large number of angular projections to reconstruct tomographic images with high resolution for detailed and accurate diagnosis. However, this exposes the specimen to a large amount of x-ray radiation. Furthermore, it increases scan time and, consequently, the likelihood of involuntary specimen movements. One approach for decreasing the total scan time and radiation dose is to reduce the number of projection views needed to reconstruct the images. However, the aliasing artifacts that appear in the image due to the reduced projection data visibly degrade image quality. According to compressed sensing theory, a signal can be accurately reconstructed from highly undersampled data by solving an optimization problem, provided that the signal can be sparsely represented in a predefined transform domain. Therefore, this thesis is mainly concerned with designing compressed sensing-based reconstruction algorithms that suppress aliasing artifacts while preserving spatial resolution in the resulting reconstructed image. First, the reduced-view synchrotron computed tomography reconstruction is formulated as a total variation regularized compressed sensing problem. The Douglas-Rachford splitting and randomized Kaczmarz methods are used to solve the resulting optimization problem. In contrast with the first part, where consistent simulated projection data are generated for image reconstruction, reduced-view inconsistent real ex-vivo synchrotron absorption-contrast micro computed tomography bone data are used in the second part. A gradient regularized compressed sensing problem is formulated, and the Douglas-Rachford splitting and preconditioned conjugate gradient methods are used to solve it. A wavelet image denoising algorithm is applied as a post-processing step to attenuate the unwanted staircase artifact generated by the reconstruction algorithm. Finally, noisy and highly reduced-view inconsistent real in-vivo synchrotron phase-contrast computed tomography bone data are used for image reconstruction. A combination of the prior image constrained compressed sensing framework and wavelet regularization is formulated, and the Douglas-Rachford splitting and preconditioned conjugate gradient methods are used to solve the resulting optimization problem. The prior image constrained compressed sensing framework exploits the prior image to promote the sparsity of the target image; because it may introduce an unwanted staircase artifact when applied to noisy and textured images, wavelet regularization is used to attenuate this artifact. Visual and quantitative performance assessments with reduced-view simulated and real computed tomography data from canine prostate tissue, rat forelimb, and femoral cortical bone samples show that the proposed algorithms produce fewer artifacts and lower reconstruction errors than other conventional reconstruction algorithms at the same x-ray dose.
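
    As a small illustration of one building block named above, the following numpy sketch implements the randomized Kaczmarz iteration on a toy consistent linear system standing in for the CT projection model. The total-variation and gradient regularization, Douglas-Rachford splitting, and wavelet post-processing from the thesis are not reproduced, and all problem sizes are illustrative assumptions.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Solve A x = b by repeatedly projecting onto a randomly chosen row's
    hyperplane, sampling rows with probability proportional to their squared
    norm (the Strohmer-Vershynin scheme)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms2 = np.sum(A * A, axis=1)
    probs = row_norms2 / row_norms2.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        # Project x onto the hyperplane {z : a_i . z = b_i}.
        x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
    return x

rng = np.random.default_rng(1)
m, n = 200, 50                           # toy overdetermined "projection" system
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true                           # consistent (noise-free) data

x_hat = randomized_kaczmarz(A, b)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```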

    Compressive Sensing Applications in Measurement: Theoretical issues, algorithm characterization and implementation

    At its core, signal acquisition is concerned with efficient algorithms and protocols capable of capturing and encoding the signal information content. For over five decades, the indisputable theoretical benchmark has been the well-known Shannon sampling theorem, and the corresponding notion of information has been indissolubly tied to signal spectral bandwidth. Contemporary society is founded on the almost instantaneous exchange of information, which is mainly conveyed in digital format. Accordingly, modern communication devices are expected to cope with huge amounts of data, in a typical sequence of steps comprising acquisition, processing, and storage. Despite continual technological progress, the conventional acquisition protocol has come under mounting pressure and requires a computational effort not related to the actual signal information content. In recent years, a novel sensing paradigm, known as Compressive Sensing (CS), has been quickly spreading among several branches of Information Theory. It relies on two main principles, signal sparsity and incoherent sampling, and employs them to acquire the signal directly in a condensed form. The sampling rate is related to the signal information rate, rather than to the signal spectral bandwidth. Given a sparse signal, its information content can be recovered even from what could appear to be an incomplete set of measurements, at the expense of a greater computational effort at the reconstruction stage. My Ph.D. thesis builds on the field of Compressive Sensing and illustrates how sparsity and incoherence properties can be exploited to design efficient sensing strategies, or to intimately understand the sources of uncertainty that affect measurements. The research activity has dealt with both theoretical and practical issues, inferred from measurement application contexts ranging from radio frequency communications to synchrophasor estimation and neurological activity investigation. The thesis is organised in four chapters whose key contributions include:
    • definition of a general mathematical model for sparse signal acquisition systems, with particular focus on the implications of sparsity and incoherence;
    • characterization of the main algorithmic families for recovering sparse signals from a reduced set of measurements, with particular focus on the impact of additive noise;
    • implementation and experimental validation of a CS-based algorithm for providing accurate preliminary information and suitably preprocessed data for a vector signal analyser or a cognitive radio application;
    • design and characterization of a CS-based super-resolution technique for spectral analysis in the discrete Fourier transform (DFT) domain;
    • definition of an overcomplete dictionary which explicitly accounts for the spectral leakage effect;
    • insight into the so-called off-the-grid estimation approach, obtained by properly combining CS-based super-resolution and polar interpolation of DFT coefficients;
    • exploration and analysis of the implications of sparsity in quasi-stationary operating conditions, emphasizing the importance of time-varying sparse signal models;
    • definition of an enhanced spectral content model for spectral analysis applications in dynamic conditions by means of Taylor-Fourier transform (TFT) approaches.
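
    To make the super-resolution idea concrete, the following numpy sketch builds an overcomplete dictionary of cosine atoms on a frequency grid several times finer than the DFT bin spacing and recovers two off-grid tones with Orthogonal Matching Pursuit. The polar-interpolation refinement, the leakage-aware dictionary design, and the Taylor-Fourier modelling from the thesis are not reproduced; the signal parameters, oversampling factor, and the choice of OMP as solver are illustrative assumptions.

```python
import numpy as np

N = 128                                    # samples per acquisition window
t = np.arange(N)
f_true = np.array([0.117, 0.254])          # normalized frequencies (cycles/sample)
x = np.cos(2 * np.pi * f_true[0] * t) + 0.6 * np.cos(2 * np.pi * f_true[1] * t)

# Overcomplete dictionary: frequency grid 8x finer than the DFT bin spacing.
K = 8 * N
freqs = np.arange(K) / (2.0 * K)           # grid covering [0, 0.5)
D = np.cos(2 * np.pi * np.outer(t, freqs))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms

def omp(D, x, n_atoms):
    """Orthogonal Matching Pursuit: greedily pick the most correlated atom,
    then refit all selected atoms by least squares and update the residual."""
    residual, support = x.copy(), []
    for _ in range(n_atoms):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    return support, coef

support, coef = omp(D, x, n_atoms=2)
print("estimated frequencies:", sorted(freqs[support]))
```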

    Algorithms for Reconstruction of Undersampled Atomic Force Microscopy Images
