320 research outputs found
Arrayed LiDAR signal analysis for automotive applications
Light detection and ranging (LiDAR) is one of the enabling technologies for advanced driver assistance and autonomy. Advances in solid-state photon detector arrays offer the potential for high-performance LiDAR systems, but require novel signal processing approaches to fully exploit the dramatic increase in data volume an arrayed detector can provide.
This thesis presents two approaches applicable to arrayed solid-state LiDAR.

First, a novel block-independent sparse depth reconstruction framework is developed. It uses a random, very sparse illumination scheme that reduces illumination density and shortens sampling times, which remain constant for any array size. Compressive sensing (CS) principles are used to reconstruct depth information from small measurement subsets. The smaller problem size of individual blocks reduces reconstruction complexity, improves compressive depth reconstruction performance, and enables fast concurrent processing. A feasibility study of a proposed system demonstrates that the required logic could be practically implemented within detector size constraints.
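The abstract does not specify the reconstruction algorithm, so the following is a minimal, hypothetical Python sketch of the block-wise CS idea: each small pixel block is sensed with a random measurement matrix, and its depth values are recovered from far fewer measurements than pixels by assuming sparsity in a 2-D DCT basis and solving with a basic Orthogonal Matching Pursuit routine. The block size, measurement count, and sparsity level are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def dct_basis(n):
    """Orthonormal 1-D DCT-II basis, one basis vector per column."""
    i = np.arange(n)
    D = np.cos(np.pi * (2 * i[:, None] + 1) * i[None, :] / (2 * n))
    D[:, 0] /= np.sqrt(n)
    D[:, 1:] *= np.sqrt(2.0 / n)
    return D

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x with y ~= A @ x."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # re-fit, update residual
    x[support] = coef
    return x

# Illustrative parameters: an 8x8 block (N = 64 pixels) sensed with only
# M = 25 random measurements, assuming K = 4 significant DCT coefficients.
B, M, K = 8, 25, 4
N = B * B
rng = np.random.default_rng(0)

Psi = np.kron(dct_basis(B), dct_basis(B))        # 2-D DCT sparsifying basis
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random sensing matrix

# Synthesize a K-sparse depth block and its compressive measurements.
alpha = np.zeros(N)
alpha[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
block = (Psi @ alpha).reshape(B, B)
y = Phi @ block.ravel()

# Recover the block from the M measurements alone; the error should be
# near machine precision when support recovery succeeds.
alpha_hat = omp(Phi @ Psi, y, K)
block_hat = (Psi @ alpha_hat).reshape(B, B)
print("max reconstruction error:", np.max(np.abs(block_hat - block)))
```

Because each block is reconstructed independently, blocks can be processed concurrently, which is what keeps the sampling and processing effort constant as the detector array grows.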
Second, a novel deep learning architecture called LiDARNet is presented to localise surface returns from LiDAR waveforms at high throughput. This single data-driven processing approach can unify a wide range of scenarios through a training-by-simulation methodology, which augments real datasets with challenging simulated conditions such as multiple returns and high noise variance while enabling rapid prototyping of fast data-driven processing approaches for arrayed LiDAR systems.
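The LiDARNet architecture itself is not detailed in this abstract, so the sketch below is only a generic stand-in: a small 1-D convolutional network (the name WaveformPeakNet, the layer sizes, and the pulse model are all hypothetical) that maps a photon-count waveform to per-bin return logits, together with a toy training-by-simulation generator producing waveforms with multiple Gaussian-pulse returns and Poisson noise.

```python
import torch
import torch.nn as nn

class WaveformPeakNet(nn.Module):
    """Generic 1-D CNN: waveform (batch, 1, n_bins) -> per-bin return logits."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(channels, 1, kernel_size=1),  # logit per histogram bin
        )

    def forward(self, w):
        return self.net(w)

def simulate_waveform(n_bins=256, n_returns=2, background=0.5):
    """Toy training-by-simulation sample: multiple returns plus Poisson noise."""
    t = torch.arange(n_bins, dtype=torch.float32)
    rate = torch.full((n_bins,), background)            # ambient/dark counts
    positions = torch.randint(20, n_bins - 20, (n_returns,))
    for p in positions:
        rate += 8.0 * torch.exp(-0.5 * ((t - p) / 2.0) ** 2)  # pulse shape
    counts = torch.poisson(rate)                        # photon-counting noise
    target = torch.zeros(n_bins)
    target[positions] = 1.0                             # per-bin return labels
    return counts.view(1, 1, -1), target.view(1, 1, -1)

model = WaveformPeakNet()
wave, target = simulate_waveform()
loss = nn.functional.binary_cross_entropy_with_logits(model(wave), target)
loss.backward()  # one illustrative training step (optimizer loop omitted)
```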
Both approaches are fast, practical processing methodologies for arrayed LiDAR systems. They retrieve depth information with excellent depth resolution over wide operating ranges and are demonstrated on real and simulated data. LiDARNet rapidly determines surface locations from LiDAR waveforms for efficient point cloud generation, while block sparse depth reconstruction efficiently produces high-resolution depth maps at high frame rates with reduced power and memory requirements.
Survey of Computational Methods for Inverse Problems
Inverse problems occur in a wide range of scientific applications, such as signal processing, medical imaging, and geophysics. This work aims to present to practitioners in the field, in an accessible and concise way, several established and newer cutting-edge computational methods used for inverse problems, and to explain when and how these techniques should be employed.
Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets.

This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis.

The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops), in contrast to O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
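The two-stage framework described above translates almost line for line into code. The following minimal numpy sketch of a randomized truncated SVD follows the abstract's recipe: random sampling to identify a subspace capturing the action of A, then deterministic factorization of the compressed matrix. The oversampling parameter p and the optional power-iteration count q are standard choices for a sketch like this, not values prescribed by the paper.

```python
import numpy as np

def randomized_svd(A, k, p=10, q=1, rng=None):
    """Rank-k truncated SVD of A via random range sampling (minimal sketch)."""
    if rng is None:
        rng = np.random.default_rng()
    Omega = rng.standard_normal((A.shape[1], k + p))   # random test matrix
    Y = A @ Omega                                      # sample the range of A
    for _ in range(q):                 # power iterations sharpen the subspace
        Y = A @ (A.T @ Y)              # (re-orthonormalization between steps
                                       #  is omitted here for brevity)
    Q, _ = np.linalg.qr(Y)             # orthonormal basis for the subspace
    B = Q.T @ A                        # compress A to the subspace
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)  # small deterministic SVD
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

# Example: recover a rank-50 factorization of a 2000 x 800 matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 50)) @ rng.standard_normal((50, 800))
U, s, Vt = randomized_svd(A, k=50, rng=rng)
print("approximation error:", np.linalg.norm(A - (U * s) @ Vt))
```

Note that with a Gaussian test matrix the sketch above costs O(mn(k + p)) flops for a dense input; the O(mn log(k)) figure quoted in the abstract is achieved by replacing the Gaussian test matrix with a structured (subsampled randomized Fourier transform) one.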
Progressive transmission of medical images
A novel adaptive source-channel coding scheme for progressive transmission of medical images with a feedback system is proposed in this dissertation. The overall design includes the Discrete Wavelet Transform (DWT), Embedded Zerotree Wavelet (EZW) coding, Joint Source-Channel Coding (JSCC), prioritisation of a region of interest (RoI), variable parity length based on feedback, and a corresponding hardware design utilising Simulink. The JSCC achieves efficient transmission by incorporating unequal error protection (UEP) and rate allocation.

An algorithm is also developed to estimate the amount of erroneous data at the receiver. It detects the address in which the number of symbols for each subblock is indicated and, if erroneous data are found, substitutes estimated correct values according to a decision-making criterion. The proposed system is designed in Simulink, from which a netlist for portable devices can be generated.

Compressive Sensing (CS), a newer compression method, is also revisited in this work; our experimental results show that CS exhibits many advantages over EZW. DICOM JPEG2000 is an efficient coding standard for lossy or lossless multi-component image coding, but it provides no mechanism for automatic RoI definition and is more complex than the proposed scheme. With the features described above, the proposed system significantly reduces transmission time, lowers computational cost, and maintains an error-free state in the RoI. A MATLAB-based TCP/IP connection is established to demonstrate the efficacy of the proposed interactive and adaptive progressive transmission system, which is simulated over both a binary symmetric channel (BSC) and a Rayleigh channel. The experimental results confirm the effectiveness of the design.
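As an illustration of the first stage of this pipeline, here is a minimal one-level 2-D Haar DWT in numpy. The dissertation does not state which wavelet family it uses, so Haar is an assumption chosen for brevity; EZW coding would then scan the resulting subbands in decreasing order of significance, with RoI coefficients prioritised.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT (image dimensions assumed even).

    Returns the approximation band LL and the detail bands (LH, HL, HH);
    subsequent levels would recurse on LL.
    """
    img = img.astype(float)
    a = (img[0::2, :] + img[1::2, :]) / np.sqrt(2)   # rows: low-pass
    d = (img[0::2, :] - img[1::2, :]) / np.sqrt(2)   # rows: high-pass
    LL = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)      # columns: low/low
    LH = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    HL = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    HH = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return LL, (LH, HL, HH)

# Example: transform a toy 8x8 "image" and inspect the subband energies.
img = np.arange(64).reshape(8, 8)
LL, details = haar_dwt2(img)
print("LL energy:", np.sum(LL**2), "detail energy:",
      sum(np.sum(b**2) for b in details))
```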
- …