461 research outputs found
A compressed sensing approach to block-iterative equalization: connections and applications to radar imaging reconstruction
The widespread occurrence of underdetermined systems has brought forth a variety of new algorithmic solutions, which capitalize on the Compressed Sensing (CS) of sparse data. Well-known greedy or iterative-thresholding CS recursions take the form of an adaptive filter followed by a proximal operator, which is no different in spirit from block-iterative decision-feedback equalizers (BI-DFE), where structure is roughly exploited by the signal-constellation slicer. By taking advantage of the intrinsic sparsity of signal modulations in a communications scenario, interblock interference (IBI) can be handled more effectively in light of CS concepts, whereby the optimal feedback of detected symbols is devised adaptively. The new DFE takes the form of a more efficient re-estimation scheme, proposed under recursive-least-squares (RLS) adaptations. Whenever suitable, these recursions are derived under a reduced-complexity, widely-linear formulation, which further reduces the minimum mean-square error (MMSE) in comparison with traditional strictly-linear approaches. Besides maximizing system throughput, the new algorithms exhibit significantly higher performance than existing methods. Our reasoning also shows that a properly formulated BI-DFE turns out to be a powerful CS algorithm itself. A new algorithm, referred to as CS-Block DFE (CS-BDFE), exhibits improved convergence and detection compared to first-order methods, thus outperforming the state-of-the-art Complex Approximate Message Passing (CAMP) recursions. The merits of the new recursions are illustrated under a novel 3D MIMO radar formulation, where the CAMP algorithm is shown to fail with respect to important performance measures.
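The adaptive-filter-plus-proximal-operator structure described above is the backbone of iterative shrinkage-thresholding (ISTA). A minimal NumPy sketch of that generic structure (not the thesis's CS-BDFE recursion itself), with a hypothetical random sensing setup:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5                              # measurements, ambient dim, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)      # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                    # noiseless measurements

def ista(A, y, lam=0.05, n_iter=500):
    """Gradient (adaptive-filter) step on ||y - Ax||^2, then a soft-threshold
    proximal step that enforces sparsity."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + (A.T @ (y - A @ x)) / L           # filtering / gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # proximal step
    return x

x_hat = ista(A, y)
```

The slicer in a BI-DFE plays the role of the proximal operator here: both project a noisy estimate onto the structure the signal is known to have.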
Master of Science thesis

Nondestructive evaluation (NDE) is a means of assessing the reliability and integrity of a structural component and provides such information as the presence, location, extent, and type of damage in the component. Structural health monitoring (SHM) is a subfield of NDE that focuses on continuous monitoring of a structure while in use. SHM has been applied to structures such as bridges, buildings, pipelines, and airplanes with the goal of detecting the presence of damage as a means of determining whether a structure is in need of maintenance. SHM can be posed as a modeling problem, where an accurate model allows for a more reliable prediction of structural behavior, and more reliable predictions make it easier to determine if something is out of the ordinary with the structure. Structural models can be designed using analytical or empirical approaches. Most SHM applications use purely analytical models based on finite element analysis and fundamental wave-propagation equations to construct behavioral predictions. Purely empirical models exist but are less common; these often utilize pattern-recognition algorithms to recognize features that indicate damage. This thesis uses a method related to the k-means algorithm, known as dictionary learning, to train a wave-propagation model from full wavefield data. These data are gathered from thin metal plates that exhibit complex wavefields dominated by multipath interference. We evaluate our model for its ability to detect damage in structures on which the model was not trained. These structures are similar to the training structure but vary in material type and thickness. This evaluation demonstrates both how well learned dictionaries detect damage in a complex wavefield with multipath interference and how well the learned model generalizes to structures with slight variations in properties.
The damage detection and generalization results achieved using this empirical model are compared to similar results using both an analytical model and a support vector machine model
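The thesis does not spell out its training recursion in this abstract, but the k-means flavour of dictionary learning it references can be sketched in a few lines: each signal patch is assigned to its single best-matching atom (a 1-sparse code), and each atom is refit as the normalized mean of its assigned patches. All data and names below are illustrative:

```python
import numpy as np

def kmeans_dictionary(patches, n_atoms=4, n_iter=20, seed=0):
    """1-sparse dictionary learning in the k-means style: assign each patch
    to its best-matching unit-norm atom, then refit each atom as the
    normalized mean of its assigned patches."""
    rng = np.random.default_rng(seed)
    D = patches[rng.choice(len(patches), n_atoms, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=1, keepdims=True) + 1e-12
    for _ in range(n_iter):
        assign = np.argmax(np.abs(patches @ D.T), axis=1)   # best atom per patch
        for j in range(n_atoms):
            members = patches[assign == j]
            if len(members):
                D[j] = members.mean(axis=0)
                D[j] /= np.linalg.norm(D[j]) + 1e-12
    return D

def residual_energy(patch, D):
    """Reconstruction residual under the single best atom; a large residual
    on unseen data can flag damage-like anomalies."""
    c = D @ patch
    j = int(np.argmax(np.abs(c)))
    return float(np.linalg.norm(patch - c[j] * D[j]))

# toy 'wavefield patches': noisy scalings of two fixed wave shapes
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 32)
shapes = np.stack([np.sin(2 * np.pi * 3 * t), np.sin(2 * np.pi * 5 * t)])
patches = np.concatenate([
    rng.uniform(0.5, 2.0, (50, 1)) * shapes[0] + 0.01 * rng.standard_normal((50, 32)),
    rng.uniform(0.5, 2.0, (50, 1)) * shapes[1] + 0.01 * rng.standard_normal((50, 32)),
])
D = kmeans_dictionary(patches)
```

Damage detection then amounts to thresholding the residual energy of patches from a structure the dictionary was not trained on.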
Compressive Sensing for Microwave and Millimeter-Wave Array Imaging
PhD thesis

Compressive Sensing (CS) is a recently proposed signal processing technique that has already found many applications in microwave and millimeter-wave imaging. CS theory guarantees that sparse or compressible signals can be recovered from far fewer measurements than were traditionally thought necessary. This property coincides with the goal of personnel surveillance imaging, whose priority is to reduce the scanning time as much as possible. Therefore, this thesis investigates the implementation of CS techniques in personnel surveillance imaging systems with different array configurations.
The first key contribution is a comparative study of CS methods in a switched-array imaging system. Specific attention has been paid to situations where the array element spacing does not satisfy the Nyquist criterion due to physical limitations. CS methods are divided into the Fourier-transform-based CS (FT-CS) method, which relies on the conventional FT, and the direct CS (D-CS) method, which directly utilizes classic CS formulations. The performance of the two CS methods is compared with the conventional FT method in terms of resolution, computational complexity, robustness to noise, and under-sampling. In particular, the resolving power of the two CS methods is studied under various circumstances. Both numerical and experimental results demonstrate the superiority of the CS methods. The FT-CS and D-CS methods are complementary techniques that can be used together for optimized efficiency and image reconstruction.
The second contribution is a novel 3-D compressive phased-array imaging algorithm based on a more general forward model that takes antenna factors into consideration. Imaging results in both range and cross-range dimensions show better performance than the conventional FT method. Furthermore, suggestions on how to design the sensing configurations for better CS reconstruction are provided based on coherence analysis. This work further considers near-field imaging, with a near-field focusing technique integrated into the CS framework. Simulation results show better robustness against noise and interfering targets from the background.
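Coherence-based design advice of this kind is usually quantified through the mutual coherence of the sensing matrix; a small NumPy sketch with a hypothetical random matrix:

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct normalized columns of
    the sensing matrix; lower coherence favours sparse recovery."""
    An = A / np.linalg.norm(A, axis=0, keepdims=True)
    G = np.abs(An.T @ An)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(1)
A = rng.standard_normal((32, 128))   # illustrative random sensing matrix
mu = mutual_coherence(A)             # typically well below 1 for random matrices
```

Comparing this figure across candidate array layouts is one simple way to rank sensing configurations before running a full reconstruction.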
The third contribution presents the effects of array configurations on the performance of the D-CS method. Compressive MIMO array imaging is first derived and demonstrated with a cross-shaped MIMO array. The switched array, MIMO array, and phased array are then investigated together under the compressive imaging framework. All three methods have similar resolution due to the same effective aperture. As an alternative scheme to the switched array, the MIMO array is able to achieve comparable performance with far fewer antenna elements. While all three array configurations are capable of imaging with sub-Nyquist element spacing, the phased array is more sensitive to this element-spacing factor. Nevertheless, the phased-array configuration achieves the best robustness against noise, at the cost of higher computational complexity.
The final contribution is the design of a novel low-cost beam-steering imaging system using a flat Luneburg lens. The idea is to use a switched array at the focal plane of the Luneburg lens to control the beam steering. By sequentially exciting each element, the lens forms directive beams to scan the region of interest. The adoption of CS for image reconstruction enables high resolution and also data under-sampling. Numerical simulations based on mechanically scanned data are conducted to verify the proposed imaging system.

Funding: China Scholarship Council; Engineering and Physical Sciences Research Council (EPSRC) grant EP/I034548/1.
Ultra Wideband
Ultra wideband (UWB) has advanced and emerged as a technology, and many more people are aware of the potential of this exciting technology. The current UWB field is changing rapidly, with new techniques and ideas, and several issues are involved in developing such systems. In UWB system design, the UWB RF transceiver and UWB antenna are the key components. Recently, a considerable amount of research has been devoted to the development of the UWB RF transceiver and antenna, since they enable high data-transmission rates and low power consumption. Our book attempts to present current and emerging trends in research and development of UWB systems, as well as future expectations.
Study on THz Imaging System for Concealed Threats Detection.
PhD thesis

Many research groups have conducted studies on terahertz (THz) technology for various applications over the last decades. THz imaging for personnel screening is one prospective application, due in part to its superior performance compared with imaging in the microwave bands. Because of the demand for accurate detection, it is desirable to devise a high-performance THz imaging system for concealed-threat detection. Therefore, this thesis presents my research on a low-cost THz imaging system for security detection.

The key contributions of this research lie in investigating the linear sparse periodic array (SPA) THz imaging system for concealed-threat detection, improving the traditional reconstruction algorithm of the Generalized Synthetic Aperture Focusing Technique (GSAFT) to suppress ghost images, and applying compressive sensing to the proposed SPA-THz imaging system to reduce the sampled data while maintaining image quality.

The first part of the work investigates the linear SPA and its configuration with large element spacing in simulation, deriving a design guideline for such an SPA-THz imaging system. Meanwhile, an improved GSAFT reconstruction algorithm and a multi-pass interferometric synthetic aperture imaging technique are proposed to suppress ghost images and improve image quality, respectively. Secondly, compressive sensing is investigated to reduce the sampled data; we propose the corresponding discrete CS SPA-THz reconstruction model and verify it in simulation. Finally, we devise a simplified experimental set-up to assess the practical imaging performance, verifying the proposed SPA-THz imaging system. The set-up uses only one transmitter and one receiver scanning on two separate tracks to effectively realize the proposed imaging system. The images reconstructed by the GSAFT and CS approaches from the measured data both show good consistency with the simulated results, and the multi-pass interferometric synthetic aperture imaging has been experimentally proved effective in improving image SNR and contrast.
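GSAFT itself is not reproduced in this abstract, but the synthetic-aperture focusing idea underlying it is delay-and-sum backprojection. A minimal monostatic sketch under simplifying assumptions (envelope-detected echoes, free-space propagation; all names illustrative):

```python
import numpy as np

C = 3e8  # assumed propagation speed (free space), m/s

def saft_backprojection(echoes, t, xs, grid_x, grid_z):
    """Delay-and-sum focusing: every image pixel accumulates each array
    element's echo sampled at the round-trip delay from that element
    to the pixel."""
    img = np.zeros((len(grid_z), len(grid_x)))
    for e, xe in enumerate(xs):                      # element positions
        for j, px in enumerate(grid_x):
            for i, pz in enumerate(grid_z):
                tau = 2.0 * np.hypot(px - xe, pz) / C  # round-trip delay
                img[i, j] += np.interp(tau, t, echoes[e])
    return img
```

A point target produces echoes whose delays align coherently only at its true location, so the summed image peaks there; ghost suppression and sparse-array corrections refine this basic picture.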
Intelligent numerical software for MIMD computers
For most scientific and engineering problems simulated on computers, solving problems of computational mathematics with approximately given initial data constitutes an intermediate or final stage. Basic problems of computational mathematics include the investigation and solution of linear algebraic systems, the evaluation of eigenvalues and eigenvectors of matrices, the solution of systems of non-linear equations, and the numerical integration of initial-value problems for systems of ordinary differential equations.
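The problem classes listed above map directly onto standard numerical routines; a brief NumPy illustration of three of them (the MIMD-specific software itself is not shown):

```python
import numpy as np

# 1) linear algebraic system Ax = b
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.linalg.solve(A, b)

# 2) eigenvalues/eigenvectors of a symmetric matrix
w, V = np.linalg.eigh(A)           # A == V @ diag(w) @ V.T

# 3) one classical Runge-Kutta (RK4) step for the initial-value
#    problem y' = -y, y(0) = 1
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

y1 = rk4_step(lambda t, y: -y, 0.0, 1.0, 0.1)   # approximates exp(-0.1)
```

On a MIMD machine these kernels are distributed across processors; the mathematics per block is unchanged.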
Towards Personalized Healthcare in Cardiac Population: The Development of a Wearable ECG Monitoring System, an ECG Lossy Compression Schema, and a ResNet-Based AF Detector
Cardiovascular diseases (CVDs) are the number one cause of death worldwide. There is growing evidence that atrial fibrillation (AF) has strong associations with various CVDs; this heart arrhythmia is usually diagnosed using electrocardiography (ECG), a risk-free, non-intrusive, and cost-efficient tool. Continuously and remotely monitoring subjects' ECG information unlocks the potential for prompt pre-diagnosis and timely pre-treatment of AF before any life-threatening conditions develop; ultimately, CVD-associated mortality could be reduced. In this manuscript, the design and implementation of a personalized healthcare system embodying a wearable ECG device, a mobile application, and a back-end server are presented. This system continuously monitors the users' ECG information to provide personalized health warnings and feedback, and users are able to communicate with their paired health advisors through the system for remote diagnoses, interventions, etc. The implemented wearable ECG devices have been evaluated and showed excellent intra-consistency (CVRMS = 5.5%), acceptable inter-consistency (CVRMS = 12.1%), and negligible RR-interval errors (ARE < 1.4%). To boost the battery life of the wearable devices, a lossy compression scheme exploiting the quasi-periodic structure of ECG signals was proposed. Compared with recognized schemes, it outperformed the others in terms of compression efficiency and distortion, achieving at least 2x the compression ratio (CR) at a given PRD or RMSE for ECG signals from the MIT-BIH database. To enable automated AF diagnosis/screening in the proposed system, a ResNet-based AF detector was developed. On the ECG records from the 2017 PhysioNet/CinC Challenge, this AF detector obtained an average testing F1 = 85.10% and a best testing F1 = 87.31%, outperforming the state of the art.
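The distortion and efficiency measures quoted above (PRD, RMSE, CR) are standard and easy to compute; a small NumPy sketch (note that some papers mean-subtract the reference signal in the PRD denominator):

```python
import numpy as np

def prd(x, x_rec):
    """Percentage root-mean-square difference between original and
    reconstructed signal (without mean subtraction)."""
    return 100.0 * np.linalg.norm(x - x_rec) / np.linalg.norm(x)

def rmse(x, x_rec):
    """Root-mean-square error of the reconstruction."""
    return np.sqrt(np.mean((x - x_rec) ** 2))

def compression_ratio(n_bits_original, n_bits_compressed):
    """CR: original size over compressed size; 2x CR halves storage."""
    return n_bits_original / n_bits_compressed
```

"At least 2x the CR at a given PRD" then means: for the same distortion score, the compressed bitstream is at most half the size produced by the competing schemes.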
Image Restoration Methods for Retinal Images: Denoising and Interpolation
Retinal imaging provides an opportunity to detect pathological and natural age-related physiological changes in the interior of the eye. Diagnosis of retinal abnormality requires an image that is sharp, clear, and free of noise and artifacts. However, to prevent tissue damage, retinal imaging instruments use low-illumination radiation; hence the signal-to-noise ratio (SNR) is reduced. Furthermore, noise is inherent in some imaging techniques. For example, in Optical Coherence Tomography (OCT), speckle noise is produced by interference between unwanted backscattered light waves. Improving OCT image quality by reducing speckle noise increases the accuracy of analyses and hence the diagnostic sensitivity. The challenge, however, is to preserve image features while reducing speckle noise: there is a clear trade-off between feature preservation and speckle-noise reduction in OCT.

Averaging multiple OCT images taken from a single position provides a high-SNR image, but it drastically increases the scanning time. In this thesis, we develop a multi-frame image denoising method for Spectral Domain OCT (SD-OCT) images extracted from very close locations of an SD-OCT volume. The proposed denoising method was tested using two dictionaries: a nonlinear (NL) dictionary and a KSVD-based adaptive dictionary. The NL dictionary was constructed by adding phase, polynomial, exponential, and boxcar functions to the conventional Discrete Cosine Transform (DCT) dictionary. The proposed method denoises nearby frames of the SD-OCT volume using sparse representation and combines them by selecting median-intensity pixels from the denoised nearby frames. The results showed that both dictionaries reduced the speckle noise in the OCT images; however, the adaptive dictionary gave slightly better results at the cost of higher computational complexity. The NL dictionary was also used for fundus and OCT image reconstruction, where its performance was consistently better than that of analytical dictionaries such as DCT and Haar.
The adaptive dictionary involves a lengthy dictionary-learning process and therefore cannot be used in practical settings. We dealt with this problem by utilizing a low-rank approximation. In this approach, SD-OCT frames are divided into groups of noisy matrices consisting of non-local similar patches. A noise-free patch matrix is obtained from each noisy patch matrix via a low-rank approximation, and the noise-free patches from nearby frames are averaged to enhance the denoising. The denoised image obtained with the proposed approach was better than those obtained by several state-of-the-art methods. The approach was then extended to jointly denoise and interpolate SD-OCT images; the results show that the joint denoising and interpolation method outperforms several existing state-of-the-art denoising methods combined with bicubic interpolation.
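The low-rank step can be illustrated with a plain truncated SVD over a matrix of similar patches (the thesis's patch grouping and weighting details are omitted; the demo data are synthetic):

```python
import numpy as np

def low_rank_denoise(patch_matrix, rank):
    """Keep only the top singular components: for a matrix whose columns are
    similar patches, the shared structure concentrates in a few components
    while noise spreads over all of them."""
    U, s, Vt = np.linalg.svd(patch_matrix, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

# demo: a rank-2 'clean' patch matrix corrupted by additive noise
rng = np.random.default_rng(0)
clean = rng.standard_normal((64, 2)) @ rng.standard_normal((2, 30))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = low_rank_denoise(noisy, rank=2)
```

Because the truncation discards the noise-dominated singular directions, the denoised matrix sits closer to the clean one than the noisy input does.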
Detection and classification of non-stationary signals using sparse representations in adaptive dictionaries
Automatic classification of non-stationary radio frequency (RF) signals is of particular interest in persistent surveillance and remote sensing applications. Such signals are often acquired in noisy, cluttered environments, and may be characterized by complex or unknown analytical models, making feature extraction and classification difficult. This thesis proposes an adaptive classification approach for poorly characterized targets and backgrounds based on sparse representations in non-analytical dictionaries learned from data. Conventional analytical orthogonal dictionaries, e.g., Short Time Fourier and Wavelet Transforms, can be suboptimal for classification of non-stationary signals, as they provide a rigid tiling of the time-frequency space, and are not specifically designed for a particular signal class. They generally do not lead to sparse decompositions (i.e., with very few non-zero coefficients), and use in classification requires separate feature selection algorithms. Pursuit-type decompositions in analytical overcomplete (non-orthogonal) dictionaries yield sparse representations, by design, and work well for signals that are similar to the dictionary elements. The pursuit search, however, has a high computational cost, and the method can perform poorly in the presence of realistic noise and clutter. One such overcomplete analytical dictionary method is also analyzed in this thesis for comparative purposes. The main thrust of the thesis is learning discriminative RF dictionaries directly from data, without relying on analytical constraints or additional knowledge about the signal characteristics. A pursuit search is used over the learned dictionaries to generate sparse classification features in order to identify time windows that contain a target pulse. Two state-of-the-art dictionary learning methods are compared, the K-SVD algorithm and Hebbian learning, in terms of their classification performance as a function of dictionary training parameters. 
Additionally, a novel hybrid dictionary algorithm is introduced, demonstrating better performance and higher robustness to noise. The issue of dictionary dimensionality is explored and this thesis demonstrates that undercomplete learned dictionaries are suitable for non-stationary RF classification. Results on simulated data sets with varying background clutter and noise levels are presented. Lastly, unsupervised classification with undercomplete learned dictionaries is also demonstrated in satellite imagery analysis
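The pursuit search used above to generate sparse classification features can be sketched with Orthogonal Matching Pursuit over an arbitrary dictionary. The demo below uses a synthetic orthonormal dictionary, where recovery is exact; real learned dictionaries are over- or undercomplete and non-orthogonal:

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Orthogonal Matching Pursuit: greedily select the atom most correlated
    with the current residual, then refit all selected coefficients by
    least squares."""
    residual = x.astype(float).copy()
    support, coef = [], np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        c, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ c             # orthogonalized residual
    coef[support] = c
    return coef

# demo: 3-sparse signal over a random orthonormal dictionary
rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.standard_normal((12, 12)))
x = 2.0 * D[:, 1] - 1.5 * D[:, 5] + 1.0 * D[:, 9]
coef = omp(D, x, n_nonzero=3)
```

The vector of selected atoms and coefficients (rather than raw samples) is what serves as the sparse classification feature.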
Remote Sensing Data Compression
A huge amount of data is acquired nowadays by different remote sensing systems installed on satellites, aircraft, and UAVs. The acquired data then have to be transferred to image-processing centres, stored, and/or delivered to customers. In bandwidth- or storage-restricted scenarios, data compression is strongly desired or necessary. A wide diversity of coding methods can be used, depending on the requirements and their priority. In addition, the types and properties of images differ a lot; thus, practical implementation aspects have to be taken into account. The Special Issue paper collection on which this book is based touches on all of the aforementioned items to some degree, giving the reader an opportunity to learn about recent developments and research directions in the field of image compression. In particular, lossless and near-lossless compression of multi- and hyperspectral images remains a current topic, since such images constitute extremely large data arrays with rich information that can be retrieved from them for various applications. Another important aspect is the impact of lossy compression on image classification and segmentation, where a reasonable compromise between the characteristics of compression and the final tasks of data processing has to be achieved. The problems of data transmission from UAV-based acquisition platforms, as well as the use of FPGAs and neural networks, have become very important. Finally, attempts to apply compressive sensing approaches in remote sensing image processing, with positive outcomes, are observed. We hope that readers will find our book useful and interesting.