ISAR image matching and three-dimensional scattering imaging based on extracted dominant scatterers
This paper studies inverse synthetic aperture radar (ISAR) image matching and three-dimensional (3D) scattering imaging based on extracted dominant scatterers. With a long baseline between two radars, significant rotation, scaling, distortion, and shift readily arise between the two-dimensional (2D) radar images. These effects make radar-image matching difficult and cannot be resolved by motion compensation and cross-correlation. Moreover, owing to scattering anisotropy, existing image-matching algorithms, such as the scale-invariant feature transform (SIFT), do not adapt well to ISAR images. In addition, the angle between the target rotation axis and the radar line of sight (LOS) cannot be neglected; if it is, the calibration result will be smaller than the real projection size. Furthermore, this angle cannot be estimated by a monostatic radar. Therefore, instead of matching images directly, this paper proposes a novel ISAR image matching and 3D imaging method based on extracted scatterers. First, taking advantage of ISAR image sparsity, the radar images are converted into scattering point sets. Then, coarse scatterer matching based on the random sample consensus (RANSAC) algorithm is performed, and the scatterer heights and accurate affine transformation parameters are estimated iteratively. From the matched scatterers, information such as the angle and the 3D image can be obtained. Finally, experiments based on the electromagnetic simulation software CADFEKO demonstrate the effectiveness of the proposed algorithm.
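The coarse matching step can be illustrated with a minimal RANSAC sketch: fitting a 2D affine transform between two scatterer point sets in the presence of gross outliers. All data and function names here are illustrative; the paper's full pipeline additionally estimates scatterer heights, which this sketch omits.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src -> dst (both N x 2)."""
    A = np.hstack([src, np.ones((len(src), 1))])     # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)      # 3 x 2 parameter matrix
    return M

def ransac_affine(src, dst, n_iter=500, tol=0.05, seed=None):
    """RANSAC: repeatedly fit on minimal 3-point samples, keep the consensus."""
    rng = np.random.default_rng(seed)
    ones = np.ones((len(src), 1))
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=3, replace=False)
        M = fit_affine(src[idx], dst[idx])
        resid = np.linalg.norm(np.hstack([src, ones]) @ M - dst, axis=1)
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the full consensus set for the final estimate
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers

# Synthetic check: a known affine transform plus a few gross outliers
rng = np.random.default_rng(0)
src = rng.uniform(-1.0, 1.0, (30, 2))
M_true = np.array([[0.9, -0.2], [0.2, 0.9], [0.1, -0.1]])
dst = np.hstack([src, np.ones((30, 1))]) @ M_true
dst[:5] += 0.5                                       # corrupt 5 correspondences
M_est, inliers = ransac_affine(src, dst, seed=1)
```

The corrupted correspondences are rejected as outliers, and the refit on the consensus set recovers the transform parameters.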
Digital Image Processing
This book presents several recent advances that are related to, or fall under the umbrella of, 'digital image processing', with the purpose of providing insight into the possibilities offered by digital image processing algorithms in various fields. The presented mathematical algorithms are accompanied by graphical representations and illustrative examples for enhanced readability. The chapters are written in a manner that allows even a reader with basic experience and knowledge of the digital image processing field to properly understand the presented algorithms. Concurrently, the information in this book is structured so that fellow scientists will be able to use it to push the development of the presented subjects even further.
Sparse machine learning methods with applications in multivariate signal processing
This thesis details theoretical and empirical work that draws from two main subject areas: Machine Learning (ML) and Digital Signal Processing (DSP). A unified general framework is given for the application of sparse machine learning methods to multivariate signal processing. In particular, methods that enforce sparsity are employed for reasons of computational efficiency, regularisation, and compressibility. The methods presented can be seen as modular building blocks that can be applied to a variety of applications. Application-specific prior knowledge can be used in various ways, resulting in a flexible and powerful set of tools. The motivation for the methods is to be able to learn and generalise from a set of multivariate signals.
In addition to testing on benchmark datasets, a series of empirical evaluations was carried out on real-world datasets. These included: the classification of musical genre from polyphonic audio files; a study of how the sampling rate in a digital radar can be reduced through the use of Compressed Sensing (CS); analysis of human perception of different modulations of musical key from electroencephalography (EEG) recordings; and classification of the genre of musical pieces to which a listener is attending from magnetoencephalography (MEG) brain recordings. These applications demonstrate the efficacy of the framework and highlight interesting directions for future research.
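The radar Compressed Sensing study mentioned above rests on sparse recovery from undersampled linear measurements. A minimal, generic sketch of such recovery (not the thesis's implementation) is the iterative soft-thresholding algorithm (ISTA) for the LASSO problem, minimising ||y - Ax||^2 / 2 + lam * ||x||_1:

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=3000):
    """ISTA: gradient step on the data-fit term, then soft-thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L, L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))            # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # shrink
    return x

# Synthetic check: a 5-sparse signal recovered from 60 of 200 "samples"
rng = np.random.default_rng(0)
n, m, k = 200, 60, 5
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 1.0
A = rng.standard_normal((m, n)) / np.sqrt(m)          # random sensing matrix
y = A @ x_true                                        # undersampled measurements
x_hat = ista(A, y)
```

Despite taking far fewer measurements than the signal length, the sparsity prior lets the solver recover the signal accurately, which is the effect exploited to lower the radar's sampling rate.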
A Novel Inpainting Framework for Virtual View Synthesis
Multi-view imaging has stimulated significant research to enhance the user experience of free viewpoint video, allowing interactive navigation between views and the freedom to select a desired view to watch. This usually involves transmitting both textural and depth information captured from different viewpoints to the receiver, to enable the synthesis of an arbitrary view. In rendering these virtual views, perceptual holes can appear due to certain regions, hidden in the original view by a closer object, becoming visible in the virtual view. To provide a high quality experience these holes must be filled in a visually plausible way, in a process known as inpainting. This is challenging because the missing information is generally unknown and the hole-regions can be large. Recently depth-based inpainting techniques have been proposed to address this challenge and while these generally perform better than non-depth assisted methods, they are not very robust and can produce perceptual artefacts.
This thesis presents a new inpainting framework that innovatively exploits depth and textural self-similarity characteristics to construct subjectively enhanced virtual viewpoints. The framework makes three significant contributions to the field: i) the exploitation of view information to jointly inpaint textural and depth hole regions; ii) the introduction of the novel concept of self-similarity characterisation which is combined with relevant depth information; and iii) an advanced self-similarity characterising scheme that automatically determines key spatial transform parameters for effective and flexible inpainting.
The presented inpainting framework has been critically analysed and shown to provide superior performance, both perceptually and numerically, compared to existing techniques, in particular producing fewer visual artefacts. It provides a flexible, robust framework for developing new inpainting strategies for the next generation of interactive multi-view technologies.
Microwave Sensing and Imaging
In recent years, microwave sensing and imaging have acquired an ever-growing importance in several applicative fields, such as non-destructive evaluations in industry and civil engineering, subsurface prospection, security, and biomedical imaging. Indeed, microwave techniques allow, in principle, for information to be obtained directly regarding the physical parameters of the inspected targets (dielectric properties, shape, etc.) by using safe electromagnetic radiation and cost-effective systems. Consequently, a great deal of research activity has recently been devoted to the development of efficient and reliable measurement systems, effective data processing algorithms for solving the underlying electromagnetic inverse scattering problem, and efficient forward solvers to model electromagnetic interactions. Within this framework, this Special Issue aims to provide some insights into recent microwave sensing and imaging systems and techniques.
Remote Sensing of the Oceans
This book covers different topics in the framework of remote sensing of the oceans. The latest research advances and brand-new studies are presented that address the exploitation of remote sensing instruments and simulation tools to improve the understanding of ocean processes and to enable cutting-edge applications, with the aim of preserving the ocean environment and supporting the blue economy. Hence, this book provides a reference framework for state-of-the-art remote sensing methods that deal with the generation of added-value products and the retrieval of geophysical information in related fields, including: oil spill detection and discrimination; analysis of tropical cyclones and sea echoes; shoreline and aquaculture area extraction; monitoring of coastal marine litter and moving vessels; and the processing of SAR, HF radar, and UAV measurements.
Generalizable deep learning based medical image segmentation
Deep learning is revolutionizing medical image analysis and interpretation. However, its real-world deployment is often hindered by poor generalization to unseen domains (new imaging modalities and protocols). This lack of generalization ability is further exacerbated by the scarcity of labeled datasets for training: data collection and annotation can be prohibitively expensive in terms of labor and cost, because label quality depends heavily on the expertise of radiologists. Additionally, unreliable predictions caused by poor model generalization pose safety risks to downstream clinical applications.
To mitigate labeling requirements, we investigate and develop a series of techniques to strengthen the generalization ability and the data efficiency of deep medical image computing models. We further improve model accountability and identify unreliable predictions made on out-of-domain data, by designing probability calibration techniques.
In the first and second parts of the thesis, we discuss two types of problems for handling unexpected domains: unsupervised domain adaptation and single-source domain generalization. For domain adaptation, we present a data-efficient technique that adapts a segmentation model trained on a labeled source domain (e.g., MRI) to an unlabeled target domain (e.g., CT) using only a small number of unlabeled training images from the target domain.
For domain generalization, we focus on both image reconstruction and segmentation. For image reconstruction, we design a simple and effective domain generalization technique for cross-domain MRI reconstruction by reusing image representations learned from natural image datasets. For image segmentation, we perform a causal analysis of the challenging cross-domain image segmentation problem. Guided by this analysis, we propose an effective data-augmentation-based generalization technique for the single-source setting. The proposed method outperforms existing approaches in a large variety of cross-domain image segmentation scenarios.
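Appearance-perturbing augmentation of the kind alluded to can be sketched as follows. The transform choices and parameter ranges here are illustrative assumptions, not the thesis's exact augmentation scheme: a random gamma curve plus a linear intensity scale and shift simulates cross-scanner appearance variation for an intensity-normalised image.

```python
import numpy as np

def random_intensity_augment(img, rng):
    """Random gamma curve plus linear scale/shift on an image in [0, 1]."""
    gamma = rng.uniform(0.5, 2.0)    # nonlinear contrast change
    scale = rng.uniform(0.8, 1.2)    # global brightness scaling
    shift = rng.uniform(-0.1, 0.1)   # global brightness offset
    out = np.clip(img, 0.0, 1.0) ** gamma
    return np.clip(scale * out + shift, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 1.0, (64, 64))            # stand-in for a 2-D scan slice
aug = random_intensity_augment(img, rng)
```

Applying such perturbations during training exposes a segmentation model to appearance shifts it would otherwise only meet at test time, which is the intuition behind single-source domain generalization via data augmentation.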
In the third part of the thesis, we present a novel self-supervised method for learning generic image representations that can be used to analyze unexpected objects of interest. The proposed method is designed together with a novel few-shot image segmentation framework that can segment unseen objects of interest by taking only a few labeled examples as references. Superior flexibility over conventional fully-supervised models is demonstrated by our few-shot framework: it does not require any fine-tuning on novel objects of interest. We further build a publicly available comprehensive evaluation environment for few-shot medical image segmentation.
In the fourth part of the thesis, we present a novel probability calibration model. To ensure safety in clinical settings, a deep model is expected to alert human radiologists when it has low confidence, especially when confronted with out-of-domain data. To this end, we present a plug-and-play model that calibrates prediction probabilities on out-of-domain data, bringing the predicted probability into line with the actual accuracy on the test data. We evaluate our method on both artifact-corrupted images and images from an unforeseen MRI scanning protocol, and it demonstrates improved calibration accuracy compared with the state-of-the-art method.
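For context, the standard post-hoc baseline that calibration work of this kind is typically compared against is temperature scaling, which divides the logits by a single scalar fitted on held-out data. The sketch below shows that baseline on synthetic overconfident predictions; it is not the thesis's proposed model.

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax with temperature T applied to logits z."""
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    """Mean negative log-likelihood of the true labels at temperature T."""
    p = softmax(logits, T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels):
    """Grid-search the single temperature that minimises validation NLL."""
    grid = np.linspace(0.5, 5.0, 91)
    return min(grid, key=lambda T: nll(logits, labels, T))

# Synthetic overconfident classifier: logits are 3x sharper than warranted
rng = np.random.default_rng(0)
base = rng.standard_normal((2000, 4))
p = softmax(base)
u = rng.uniform(size=(2000, 1))
labels = np.minimum((u > np.cumsum(p, axis=1)).sum(axis=1), 3)  # sample from p
T_hat = fit_temperature(3.0 * base, labels)
```

The fitted temperature lands near 3, undoing the artificial sharpening, and the calibrated probabilities achieve a lower NLL than the raw ones. A single scalar cannot fix out-of-domain miscalibration on its own, which is the gap the thesis's plug-and-play model targets.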
Finally, we summarize the major contributions and limitations of our work, and suggest future research directions that will benefit from the work in this thesis.