Radon spectrogram-based approach for automatic IFs separation
The separation of overlapping components is a well-known and difficult problem in multicomponent signal analysis, shared by applications dealing with radar, biosonar, seismic, and audio signals. In order to estimate the instantaneous frequencies of a multicomponent signal, it is necessary to disentangle the signal modes in a proper domain. Unfortunately, if the supports of the signal modes overlap in both time and frequency, separation is possible only through a parametric approach, whenever the signal class is fixed a priori. In this work, time-frequency analysis and the Radon transform are jointly used for the unsupervised separation of the modes of a generic frequency-modulated signal in a noisy environment. The proposed method takes advantage of the ability of the Radon transform of a proper time-frequency distribution to separate overlapping modes. It consists of a blind segmentation of the signal components in the Radon domain by means of a near-to-optimal thresholding operation. Inverting the Radon transform on each detected region isolates the instantaneous frequency curve of each single mode in the time-frequency domain. Experimental results on constant-amplitude chirp signals confirm the effectiveness of the proposed method, opening the way for its extension to more complex frequency-modulated signals.
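The pipeline (time-frequency distribution, Radon transform, blind thresholding) can be sketched as follows. This is a minimal illustration, not the authors' implementation: a spectrogram stands in for the "proper time-frequency distribution", the discrete Radon transform is approximated by rotate-and-sum, and the near-to-optimal threshold is replaced by a fixed fraction of the maximum.

```python
import numpy as np
from scipy.signal import stft
from scipy.ndimage import rotate

# two crossing linear chirps (constant amplitude), as in the experiments
fs = 1024
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * (50 * t + 100 * t**2)) \
  + np.cos(2 * np.pi * (300 * t - 100 * t**2))

# spectrogram as the time-frequency distribution
f, tt, Z = stft(x, fs=fs, nperseg=128)
S = np.abs(Z) ** 2

def radon(img, angles):
    # discrete Radon transform: rotate, then project by summing columns
    return np.array([rotate(img, a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

angles = np.linspace(0.0, 180.0, 60, endpoint=False)
R = radon(S, angles)

# blind segmentation: a fixed-fraction threshold stands in for the
# paper's near-to-optimal threshold operation
mask = R > 0.5 * R.max()
```

Each connected region of `mask` would then be inverted separately (an inverse Radon transform restricted to that region) to recover one mode's instantaneous-frequency curve in the time-frequency plane.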
Directional edge and texture representations for image processing
An efficient representation for natural images is of fundamental importance in image processing and analysis. Commonly used separable transforms such as wavelets are not best suited for images, due to their inability to exploit directional regularities such as edges and oriented textural patterns, while most of the recently proposed directional schemes cannot represent these two types of features in a unified transform. This thesis focuses on the development of directional representations for images which can capture both edges and textures in a multiresolution manner. The thesis first considers the problem of extracting linear features with the multiresolution Fourier transform (MFT). Based on a previous MFT-based linear feature model, the work extends the extraction method to the situation where the image is corrupted by noise. The problem is tackled by the combination of a "Signal+Noise" frequency model, a refinement stage and a robust classification scheme. As a result, the MFT is able to perform linear feature analysis on noisy images on which previous methods failed. A new set of transforms called the multiscale polar cosine transforms (MPCT) is also proposed in order to represent textures. The MPCT can be regarded as a real-valued MFT with similar basis functions of oriented sinusoids. It is shown that the transform can represent textural patches more efficiently than the conventional Fourier basis. With a directional best cosine basis, the MPCT packet (MPCPT) is shown to be an efficient representation for edges and textures, despite its high computational burden. The problem of representing edges and textures in a fixed transform with less complexity is then considered. This is achieved by applying a Gaussian frequency filter, which matches the dispersion of the magnitude spectrum, to the local MFT coefficients. This is particularly effective in denoising natural images, due to its ability to preserve both types of feature.
Further improvements can be made by employing the information given by the linear feature extraction process in the filter's configuration. The denoising results compare favourably against other state-of-the-art directional representations.
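The Gaussian frequency filter matched to the dispersion of the magnitude spectrum can be illustrated on a single global Fourier block; the thesis applies it to local MFT coefficients, so treat this as a toy sketch under that simplification, with an oriented sinusoidal texture standing in for a natural image.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
yy, xx = np.mgrid[0:n, 0:n]
img = np.sin(2 * np.pi * (3 * xx + 5 * yy) / n)   # oriented texture
noisy = img + 0.5 * rng.standard_normal((n, n))

F = np.fft.fftshift(np.fft.fft2(noisy))
mag = np.abs(F)
fy, fx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]

# dispersion (second moment) of the magnitude spectrum about its centroid
w = mag / mag.sum()
cy, cx = (w * fy).sum(), (w * fx).sum()
var = (w * ((fy - cy) ** 2 + (fx - cx) ** 2)).sum()

# Gaussian frequency filter whose spread matches that dispersion
G = np.exp(-((fy - cy) ** 2 + (fx - cx) ** 2) / (2 * var))
den = np.real(np.fft.ifft2(np.fft.ifftshift(F * G)))
```

Because the filter's spread follows the measured spectral dispersion, energy near the texture's dominant frequency is largely preserved while broadband noise is attenuated.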
ISAR Image formation with a combined Empirical Mode Decomposition and Time-Frequency Representation
In this paper, a method for Inverse Synthetic Aperture Radar (ISAR) image formation based on the Complex Empirical Mode Decomposition (CEMD) is proposed. The CEMD [1], which is based on the Empirical Mode Decomposition (EMD), is used in conjunction with a Time-Frequency Representation (TFR) to estimate a 3-D time-range-Doppler cubic image, from which a sequence of 2-D range-Doppler ISAR images can be effectively extracted. The potential of the proposed method to construct ISAR images is illustrated by simulation results on synthetic data, compared against 2-D Fourier transform and TFR methods. The simulation results indicate that the method can provide ISAR images with good resolution, demonstrating its potential for ISAR image formation.
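The cube-building step can be sketched as follows. This is not the paper's CEMD: a per-range-bin STFT stands in for the CEMD-plus-TFR stage, and the data are a hypothetical two-scatterer synthetic, but it shows how a 3-D time-range-Doppler cube yields 2-D range-Doppler frames.

```python
import numpy as np
from scipy.signal import stft

# synthetic data: slow-time pulses x range bins, two point scatterers
n_pulse, n_range = 256, 32
prf = 256.0
t = np.arange(n_pulse) / prf
data = np.zeros((n_pulse, n_range), dtype=complex)
data[:, 10] = np.exp(1j * 2 * np.pi * 20 * t)      # +20 Hz Doppler
data[:, 20] = np.exp(1j * 2 * np.pi * (-35) * t)   # -35 Hz Doppler

# TFR per range bin builds the time-range-Doppler cube
cube = []
for r in range(n_range):
    f, tau, Z = stft(data[:, r], fs=prf, nperseg=64,
                     return_onesided=False)  # complex input: two-sided
    cube.append(np.abs(Z))
cube = np.stack(cube, axis=1)  # (doppler, range, slow-time)

# one 2-D range-Doppler frame, extracted at a mid slow-time index
frame = cube[:, :, cube.shape[2] // 2]
```

The frame shows each scatterer at its range bin and Doppler frequency; sweeping the slow-time index gives the image sequence described in the abstract.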
Measuring gravitational waves from binary black hole coalescences: II. the waves' information and its extraction, with and without templates
We discuss the extraction of information from detected binary black hole (BBH) coalescence gravitational waves, focusing on the merger phase that occurs after the gradual inspiral and before the ringdown. Our results are: (1) If numerical relativity simulations have not produced template merger waveforms before BBH detections by LIGO/VIRGO, one can band-pass filter the merger waves. For BBHs smaller than about 40 solar masses detected via their inspiral waves, the band-pass filtering signal-to-noise ratio indicates that the merger waves should typically be just barely visible in the noise for initial and advanced LIGO interferometers. (2) We derive an optimized (maximum likelihood) method for extracting a best-fit merger waveform from the noisy detector output; one "perpendicularly projects" this output onto a function space (specified using wavelets) that incorporates our prior knowledge of the waveforms. An extension of the method allows one to extract the BBH's two independent waveforms from the outputs of several interferometers. (3) If numerical relativists produce codes for generating merger templates but running the codes is too expensive to allow an extensive survey of the merger parameter space, then a coarse survey of this parameter space, to determine the ranges of the several key parameters and to explore several qualitative issues which we describe, would be useful for data analysis purposes. (4) A complete set of templates could be used to test the nonlinear dynamics of general relativity and to measure some of the binary parameters. We estimate the number of bits of information obtainable from the merger waves (about 10 to 60 for LIGO/VIRGO, up to 200 for LISA), estimate the information loss due to template numerical errors or sparseness in the template grid, and infer approximate requirements on template accuracy and spacing.
Comment: 33 pages, RevTeX 3.1 macros, no figures, submitted to Phys Rev
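Result (2), the "perpendicular projection" onto a wavelet-specified function space, is in essence a least-squares projection of the detector output onto the span of a chosen basis. A minimal numerical sketch, with a hypothetical family of Gaussian-windowed sinusoids standing in for the paper's wavelet space:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
t = np.linspace(0, 1, n)
# toy "merger" burst buried in white noise
true = np.exp(-((t - 0.5) / 0.05) ** 2) * np.sin(2 * np.pi * 40 * t)
data = true + 0.3 * rng.standard_normal(n)

# basis of windowed sinusoids standing in for the wavelet family
B = np.array([np.exp(-((t - c) / 0.05) ** 2) * np.sin(2 * np.pi * f0 * t)
              for c in (0.4, 0.5, 0.6)
              for f0 in (30, 40, 50)]).T          # shape (n, 9)

# "perpendicular projection": least-squares fit onto span(B)
coef, *_ = np.linalg.lstsq(B, data, rcond=None)
best_fit = B @ coef
```

Because only the noise components lying in the 9-dimensional model space survive the projection, the best-fit waveform is far closer to the true burst than the raw data stream.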
Resampling to accelerate cross-correlation searches for continuous gravitational waves from binary systems
Continuous-wave (CW) gravitational waves (GWs) call for computationally intensive methods. Low signal-to-noise ratio signals need templated searches with long coherent integration times and thus fine parameter-space resolution; longer integration increases sensitivity. Low-mass X-ray binaries (LMXBs) such as Scorpius X-1 (Sco X-1) may emit accretion-driven CWs at strains reachable by current ground-based observatories. Binary orbital parameters induce phase modulation. This paper describes how resampling corrects for binary and detector motion, yielding source-frame time series used for cross-correlation. Compared to the previous, detector-frame, templated cross-correlation method, used for Sco X-1 on data from the first Advanced LIGO observing run (O1), resampling is about 20x faster in the costliest, most sensitive frequency bands. Speed-up factors depend on integration time and search setup. The speed could be reinvested into longer integration with a forecast sensitivity gain of approximately 51% (median over 20 to 125 Hz), or 11% (20 to 250 Hz), given the same per-band cost and setup. This paper's timing model enables future setup optimization. Resampling scales well with longer integration, and at 10x unoptimized cost could reach respectively 2.83x and 2.75x median sensitivities, limited by spin wandering. An O1 search could then yield a marginalized-polarization upper limit reaching the torque-balance level at 100 Hz. Frequencies from 40 to 140 Hz might be probed in equal observing time with 2x improved detectors.
Comment: 28 pages, 7 figures, 3 tables
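The core resampling idea, interpolating the detector-frame series onto uniformly spaced source-frame times so the binary phase modulation is removed, can be sketched with a hypothetical circular-orbit Roemer-type delay (all parameter values below are illustrative, not from the paper):

```python
import numpy as np

fs, T = 256.0, 64.0
t = np.arange(0, T, 1 / fs)            # detector-frame sample times (s)
f0 = 50.0                              # CW frequency (Hz)
ap, Porb = 0.005, 20.0                 # projected radius (s), orbital period (s)

# source time = detector time minus the orbital delay
delay = ap * np.sin(2 * np.pi * t / Porb)
x = np.cos(2 * np.pi * f0 * (t - delay))   # phase-modulated CW signal

# resampling: interpolate onto a uniform grid of source-frame times
t_src = t - delay                      # nonuniform source times of the samples
x_src = np.interp(t, t_src, x)         # uniform in the source frame

# after resampling the signal is (nearly) monochromatic
spec = np.abs(np.fft.rfft(x_src * np.hanning(len(x_src))))
```

After resampling, the power that the orbit had smeared across Doppler sidebands collapses back into a single frequency bin, which is what lets long coherent integrations (and hence cross-correlation) recover full sensitivity.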
A Panorama on Multiscale Geometric Representations, Intertwining Spatial, Directional and Frequency Selectivity
The richness of natural images makes the quest for optimal representations in image processing and computer vision challenging. The latter observation has not prevented the design of image representations, which trade off between efficiency and complexity, while achieving accurate rendering of smooth regions as well as reproducing faithful contours and textures. The most recent ones, proposed in the past decade, share a hybrid heritage highlighting the multiscale and oriented nature of edges and patterns in images. This paper presents a panorama of the aforementioned literature on decompositions in multiscale, multi-orientation bases or dictionaries. They typically exhibit redundancy to improve sparsity in the transformed domain and sometimes its invariance with respect to simple geometric deformations (translation, rotation). Oriented multiscale dictionaries extend traditional wavelet processing and may offer rotation invariance. Highly redundant dictionaries require specific algorithms to simplify the search for an efficient (sparse) representation. We also discuss the extension of multiscale geometric decompositions to non-Euclidean domains such as the sphere or arbitrary meshed surfaces. The etymology of panorama suggests an overview, based on a choice of partially overlapping "pictures". We hope that this paper will contribute to the appreciation and apprehension of a stream of current research directions in image understanding.
Comment: 65 pages, 33 figures, 303 references
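The remark that highly redundant dictionaries need specific algorithms to find a sparse representation is classically addressed by greedy pursuits. A minimal matching-pursuit sketch on a hypothetical random 4x-redundant dictionary:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 64, 256                      # signal length, dictionary size
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)      # unit-norm atoms
x = 2 * D[:, 10] - 1.5 * D[:, 100]  # 2-sparse signal in this dictionary

# matching pursuit: greedily peel off the best-correlated atom
r, coef = x.copy(), np.zeros(m)
for _ in range(10):
    k = np.argmax(np.abs(D.T @ r))  # atom most correlated with residual
    a = D[:, k] @ r
    coef[k] += a
    r -= a * D[:, k]
```

Each iteration strictly shrinks the residual, so a few greedy steps suffice here; the curvelet- and dictionary-based schemes surveyed in the paper pair such pursuits with structured, fast-transform dictionaries instead of a random matrix.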
Mathematical Morphology for Quantification in Biological & Medical Image Analysis
Mathematical morphology is an established field of image processing first introduced as an application of set and lattice theories. Originally used to characterise particle distributions, mathematical morphology has gone on to be a core tool required for such important analysis methods as skeletonisation and the watershed transform. In this thesis, I introduce a selection of new image analysis techniques based on mathematical morphology.
Utilising assumptions of shape, I propose a new approach for the enhancement of vessel-like objects in images: the bowler-hat transform. Built upon morphological operations, this approach succeeds at challenges such as junctions and is robust against noise. The bowler-hat transform is shown to give better results than competing methods on challenging data such as retinal/fundus imagery.
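A flavour of this style of morphological enhancement, though not the bowler-hat transform itself, is the classical white top-hat (image minus its grey-scale opening), which likewise suppresses structures wider than the structuring element while keeping thin bright ones:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
img = rng.normal(0.0, 0.1, (64, 64))
img[30:34, 10:54] += 1.0   # a bright, thin, vessel-like ridge

# white top-hat: image minus its morphological opening; the 9x9 flat
# structuring element is wider than the 4-pixel ridge, so the opening
# removes the ridge and the top-hat retains it
tophat = ndimage.white_tophat(img, size=(9, 9))
```

The bowler-hat transform builds on the same primitives but combines banks of differently oriented elements so that junctions and crossings are preserved rather than broken.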
Building further on morphological operations, I introduce two novel methods for particle and blob detection. The first is developed in the context of colocalisation, a standard biological assay; the second, based on Hilbert-Edge Detection And Ranging (HEDAR), addresses nuclei detection and counting in fluorescent microscopy. These methods are shown to produce accurate and informative results for sub-pixel and supra-pixel object counting in complex and noisy biological scenarios.
I propose a new approach for the automated extraction and measurement of object thickness for intricate and complicated vessels, such as brain vasculature in medical images. This pipeline depends on two key technologies: semi-automated segmentation by advanced level-set methods and automatic thickness calculation based on morphological operations. The approach is validated, and results are presented that demonstrate the broad range of challenges posed by these images and the possible limitations of the pipeline.
This thesis represents a significant contribution to the field of image processing using mathematical morphology, and the methods within are transferable to a range of complex challenges present across biomedical image analysis.
Digital Image Processing
Newspapers and the popular scientific press today publish many examples of highly impressive images. These images range, for example, from those showing regions of star birth in the distant Universe to the extent of the stratospheric ozone depletion over Antarctica in springtime, and to those regions of the human brain affected by Alzheimer’s disease. Processed digitally to generate spectacular images, often in false colour, they all make an immediate and deep impact on the viewer’s imagination and understanding.
Professor Jonathan Blackledge’s erudite but very useful new treatise Digital Image Processing: Mathematical and Computational Methods explains both the underlying theory and the techniques used to produce such images in considerable detail. It also provides many valuable example problems - and their solutions - so that the reader can test his/her grasp of the physical, mathematical and numerical aspects of the particular topics and methods discussed. As such, this magnum opus complements the author’s earlier work Digital Signal Processing. Both books are a wonderful resource for students who wish to make their careers in this fascinating and rapidly developing field which has an ever increasing number of areas of application.
The strengths of this large book lie in:
• an excellent explanatory introduction to the subject;
• thorough treatment of the theoretical foundations, dealing with both electromagnetic and acoustic wave scattering and allied techniques;
• comprehensive discussion of all the basic principles, the mathematical transforms (e.g. the Fourier and Radon transforms), their interrelationships and, in particular, Born scattering theory and its application to imaging systems modelling;
• discussion in detail - including the assumptions and limitations - of optical imaging, seismic imaging, medical imaging (using ultrasound), X-ray computer-aided tomography, tomography when the wavelength of the probing radiation is of the same order as the dimensions of the scatterer, Synthetic Aperture Radar (airborne or spaceborne), digital watermarking and holography;
• detail devoted to the methods of implementation of the analytical schemes in various case studies and also as numerical packages (especially in C/C++);
• coverage of deconvolution, de-blurring (or sharpening) an image, maximum entropy techniques, Bayesian estimators, techniques for enhancing the dynamic range of an image, methods of filtering images and techniques for noise reduction;
• discussion of thresholding, techniques for detecting edges in an image and for contrast stretching, stochastic scattering (random walk models) and models for characterizing an image statistically;
• investigation of fractal images, fractal dimension segmentation, image texture, the coding and storing of large quantities of data, and image compression such as JPEG;
• a valuable summary of the important results obtained in each Chapter, given at its end;
• suggestions for further reading at the end of each Chapter.
I warmly commend this text to all readers, and trust that they will find it to be invaluable.
Professor Michael J Rycroft Visiting Professor at the International Space University, Strasbourg, France, and at Cranfield University, England