
    A high reliability survey of discrete Epoch of Reionization foreground sources in the MWA EoR0 field

    Detection of the Epoch of Reionization HI signal requires a precise understanding of the intervening galaxies and AGN, both for instrumental calibration and foreground removal. We present a catalogue of 7394 extragalactic sources at 182 MHz detected in the RA = 0 field of the Murchison Widefield Array Epoch of Reionization observation programme. Motivated by unprecedented requirements for precision and reliability, we develop new methods for source finding and selection. We apply machine learning methods to self-consistently classify the relative reliability of 9490 source candidates. A subset of 7466 are selected based on reliability class and signal-to-noise ratio criteria. These are statistically cross-matched to four other radio surveys using both position and flux density information. We find 7369 sources to have confident matches, including 90 partially resolved sources that split into a total of 192 sub-components. An additional 25 unmatched sources are included as new radio detections. The catalogue sources have a median spectral index of -0.85. Spectral flattening is seen towards lower frequencies, with a median of -0.71 predicted at 182 MHz. The astrometric error is 7 arcsec, compared with a 2.3 arcmin beam FWHM. The resulting catalogue covers ~1400 deg² and is complete to approximately 80 mJy within half beam power. This provides the most reliable discrete source sky model available to date in the MWA EoR0 field for precision foreground subtraction.
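
    For readers unfamiliar with the spectral-index convention used above, the sketch below shows the standard power-law model S(ν) ∝ ν^α, from which a two-point spectral index and an extrapolated 182 MHz flux density can be computed. The frequencies and flux values are purely illustrative and are not taken from the catalogue.

```python
import numpy as np

def spectral_index(s1, nu1, s2, nu2):
    """Two-point spectral index alpha, assuming a power law S(nu) = S0 * nu**alpha."""
    return np.log(s1 / s2) / np.log(nu1 / nu2)

def extrapolate_flux(s_ref, nu_ref, nu_target, alpha):
    """Predict the flux density at nu_target from a reference measurement."""
    return s_ref * (nu_target / nu_ref) ** alpha

# Illustrative values only: a source measured at 408 MHz and 1400 MHz,
# extrapolated down to the 182 MHz observing frequency of the catalogue.
alpha = spectral_index(1.2, 408e6, 0.45, 1400e6)
s_182 = extrapolate_flux(1.2, 408e6, 182e6, alpha)
print(f"alpha = {alpha:.2f}, predicted S(182 MHz) = {s_182:.2f} Jy")
```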

    Least-squares reverse-time migration

    Conventional migration methods, including reverse-time migration (RTM), have two weaknesses: first, they use the adjoint of forward-modelling operators, and second, they usually apply a crosscorrelation imaging condition to extract images from reconstructed wavefields. Adjoint operators, which are an approximation to inverse operators, can only correctly calculate traveltimes (phase), but not amplitudes. To preserve the true amplitudes of migration images, it is necessary to apply the inverse of the forward-modelling operator. Similarly, crosscorrelation imaging conditions also only correct traveltimes (phase) but do not preserve amplitudes. In addition, the examples show that crosscorrelation imaging conditions produce strong sidelobes. Least-squares migration (LSM) uses both inverse operators and deconvolution imaging conditions. As a result, LSM resolves both problems in conventional migration methods and produces images with fewer artefacts, higher resolution and more accurate amplitudes. At the same time, RTM can accurately handle all dips, frequencies and any type of velocity variation. Combining RTM and LSM produces least-squares reverse-time migration (LSRTM), which in turn has all the advantages of RTM and LSM. In this thesis, we implement two types of LSRTM: matrix-based LSRTM (MLSRTM) and non-linear LSRTM (NLLSRTM). MLSRTM is a matrix formulation of LSRTM and is more stable than conventional LSRTM; it can be implemented with linear inversion algorithms but needs a large amount of computer memory. NLLSRTM, by contrast, directly expresses migration as an optimisation which minimises the L2 norm of the residual between the predicted and observed data. NLLSRTM can be implemented using non-linear gradient inversion algorithms, such as non-linear steepest-descent and non-linear conjugate-gradient solvers. We demonstrate that both MLSRTM and NLLSRTM can achieve better images with fewer artefacts, higher resolution and more accurate amplitudes than RTM using three synthetic examples. The power of LSRTM is further illustrated using a field dataset. Finally, a simple synthetic test demonstrates that the objective function used in LSRTM is sensitive to errors in the migration velocity. As a result, it may be possible to use NLLSRTM to both refine the migrated image and estimate the migration velocity.
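
    To make the least-squares formulation concrete, the following minimal sketch minimises the L2 data misfit ||Lm - d||² by steepest descent with an exact line search. The operator L here is a toy 1-D blurring matrix standing in for the Born/RTM forward-modelling operator, so the sketch illustrates only the optimisation structure of NLLSRTM, not the thesis implementation.

```python
import numpy as np

# Minimal sketch: minimise the L2 data misfit ||L m - d||^2 by non-linear
# steepest descent with an exact line search. L is a toy 1-D blurring matrix
# standing in for the Born/RTM forward-modelling operator.
n = 200
L = np.zeros((n, n))
for i in range(n):
    L[i, max(0, i - 2):min(n, i + 3)] = 0.2        # 5-point moving-average row

m_true = np.zeros(n)
m_true[[50, 90, 140]] = [1.0, -0.7, 0.5]           # sparse "reflectivity" model
d = L @ m_true                                      # observed data

m = np.zeros(n)                                     # starting image
for _ in range(200):
    r = L @ m - d                                   # data residual
    g = L.T @ r                                     # gradient: adjoint (RTM) applied to the residual
    Lg = L @ g
    alpha = (g @ g) / (Lg @ Lg + 1e-12)             # exact step length for a quadratic misfit
    m -= alpha * g
print("final data misfit:", np.linalg.norm(L @ m - d))
```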

    Non-Negative Matrix Factorization Based Algorithms to Cluster Frequency Basis Functions for Monaural Sound Source Separation.

    Monophonic sound source separation (SSS) refers to a process that separates out the audio signals produced by the individual sound sources in a given acoustic mixture, when the mixture signal is recorded using one microphone or is directly recorded onto one reproduction channel. Many audio applications such as pitch modification and automatic music transcription would benefit from the availability of segregated sound sources from the mixture of audio signals for further processing. Recently, non-negative matrix factorization (NMF) has found application in monaural audio source separation due to its ability to factorize audio spectrograms into additive part-based basis functions, where the parts typically correspond to individual notes or chords in music. An advantage of NMF is that there can be a single basis function for each note played by a given instrument, thereby capturing changes in timbre with pitch for each instrument or source. However, these basis functions need to be clustered to their respective sources for the reconstruction of the individual source signals. Many clustering methods have been proposed to map the separated signals onto sources, with considerable success. Recently, to avoid the need for clustering, Shifted NMF (SNMF) was proposed, which assumes that the timbre of a note is constant for all the pitches produced by an instrument. SNMF has two drawbacks. Firstly, the assumption that the timbre of the notes played by an instrument remains constant is not true in general. Secondly, the SNMF method uses the Constant Q Transform (CQT), and the lack of a true inverse of the CQT compromises the separation quality of the reconstructed signal. The principal aim of this thesis is to attempt to solve the problem of clustering NMF basis functions. Our first major contribution is the use of SNMF as a method of clustering the basis functions obtained via standard NMF. The proposed SNMF clustering method aims to cluster the frequency basis functions obtained via standard NMF to their respective sources by making use of shift invariance in a log-frequency domain. Further, a minor contribution is made by improving the separation performance of the standard SNMF algorithm (here used directly to separate sources) through the use of an improved inverse CQT. Here, the standard SNMF algorithm finds shift invariance in a CQ spectrogram that contains the frequency basis functions, obtained directly from the spectrogram of the audio mixture. Our next contribution is an improvement in the SNMF clustering algorithm through the incorporation of the CQT matrix inside the SNMF model, in order to avoid the need for an inverse CQT to reconstruct the clustered NMF basis functions. Another major contribution deals with the incorporation of a constraint called group sparsity (GS) into the SNMF clustering algorithm at two stages to improve clustering. The effect of GS is evaluated on the various SNMF clustering algorithms proposed in this thesis. Finally, we introduce a new family of masks to reconstruct the original signal from the clustered basis functions and compare their performance to the generalized Wiener filter masks using three different factorisation-based separation algorithms. We show that better separation performance can be achieved by using the proposed family of masks.
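
    For orientation, the sketch below shows plain Euclidean-distance multiplicative NMF updates on a magnitude spectrogram and a generalized Wiener-style mask built from one hand-picked cluster of basis functions. The spectrogram is a random stand-in, the cluster assignment is fixed by hand, and the SNMF clustering, improved inverse CQT, and group-sparsity constraints contributed by the thesis are not reproduced.

```python
import numpy as np

# Minimal sketch: multiplicative-update NMF on a magnitude spectrogram V,
# followed by a generalized Wiener-style mask built from a subset ("cluster")
# of the learned basis functions.
rng = np.random.default_rng(1)
F, T, K = 128, 64, 8                     # frequency bins, frames, basis functions
V = np.abs(rng.standard_normal((F, T)))  # stand-in magnitude spectrogram
W = np.abs(rng.standard_normal((F, K)))  # frequency basis functions
H = np.abs(rng.standard_normal((K, T)))  # time activations
eps = 1e-12

for _ in range(200):                     # Euclidean-distance multiplicative updates
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

cluster = [0, 1, 2]                      # indices of basis functions assigned to one source (by hand)
S_source = W[:, cluster] @ H[cluster, :] # partial reconstruction for that source
S_total = W @ H + eps
p = 2.0                                  # exponent of the generalized Wiener mask
mask = S_source**p / (S_source**p + (S_total - S_source)**p + eps)
source_estimate = mask * V               # masked spectrogram for the clustered source
```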

    PET/MR imaging of hypoxic atherosclerotic plaque using 64Cu-ATSM

    It is important to accurately identify the factors involved in the progression of atherosclerosis because advanced atherosclerotic lesions are prone to rupture, leading to disability or death. Hypoxic areas are known to be present in human atherosclerotic lesions, and lesion progression is associated with the formation of lipid-loaded macrophages and increased local inflammation, which are potential major factors in the formation of vulnerable plaque. This dissertation work represents a comprehensive investigation of the non-invasive identification of hypoxic atherosclerotic plaque in animal models and human subjects using the PET hypoxia imaging agent 64Cu-ATSM. We first demonstrated the feasibility of 64Cu-ATSM for the identification of hypoxic atherosclerotic plaque and evaluated the relative effects of diet and genetics on hypoxia progression in atherosclerotic plaque in a genetically altered mouse model. We then fully validated the feasibility of using 64Cu-ATSM to image the extent of hypoxia in a rabbit model with atherosclerotic-like plaque using a simultaneous PET/MR system. We also proceeded with a pilot clinical trial to determine whether 64Cu-ATSM PET/MR scanning is capable of detecting hypoxic carotid atherosclerosis in human subjects. In order to improve the 64Cu-ATSM PET image quality, we investigated the Siemens HD (high-definition) PET software and four partial volume correction methods to correct for partial volume effects. In addition, we incorporated the attenuation effect of the carotid surface coil into the MR attenuation correction μ-map to correct for photon attenuation. In the long term, this imaging strategy has the potential to help identify patients at risk for cardiovascular events, guide therapy, and add to the understanding of plaque biology in human patients.
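
    For readers unfamiliar with partial volume correction, the sketch below illustrates one common approach: dividing the measured ROI uptake by a recovery coefficient interpolated from a phantom calibration. The four PVC methods actually evaluated in the dissertation are not specified in the abstract, and all numbers in the example are hypothetical.

```python
import numpy as np

# Minimal sketch of recovery-coefficient-based partial volume correction:
# the measured uptake in a small region is divided by a recovery coefficient
# (RC) interpolated from a phantom calibration of measured/true activity
# versus object size. All values below are illustrative, not study data.
phantom_diam_mm = np.array([4, 6, 8, 10, 13, 17, 22])                  # sphere diameters
phantom_rc      = np.array([0.20, 0.35, 0.50, 0.62, 0.75, 0.85, 0.92])  # calibrated RCs

def pvc_recovery(measured_uptake, lesion_size_mm):
    """Partial-volume-corrected uptake via an interpolated recovery coefficient."""
    rc = np.interp(lesion_size_mm, phantom_diam_mm, phantom_rc)
    return measured_uptake / rc

print(pvc_recovery(1.4, 7.0))   # e.g. a 7 mm region with a measured SUV-like value of 1.4
```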

    Independent component analysis (ICA) applied to ultrasound image processing and tissue characterization

    As a complicated, ubiquitous phenomenon encountered in ultrasound imaging, speckle can be treated either as annoying noise that needs to be reduced or as the source from which diagnostic information can be extracted to reveal the underlying properties of tissue. In this study, the application of Independent Component Analysis (ICA), a relatively new statistical signal processing tool that has appeared in recent years, to both the speckle texture analysis and despeckling problems of B-mode ultrasound images was investigated. It is believed that higher order statistics may provide extra information about the speckle texture beyond the information provided by first and second order statistics alone. However, the higher order statistics of speckle texture are still not clearly understood and are very difficult to model analytically, and any direct treatment of higher order statistics is computationally prohibitive. On the one hand, many conventional ultrasound speckle texture analysis algorithms use only first or second order statistics. On the other hand, many multichannel filtering approaches use pre-defined analytical filters which are not adaptive to the data. In this study, an ICA-based multichannel filtering texture analysis algorithm, which considers both higher order statistics and data adaptation, was proposed and tested on numerically simulated homogeneous speckle textures. The ICA filters were learned directly from the training images. Histogram regularization was conducted to make the speckle images quasi-stationary in the wide sense, so as to be suitable for an ICA algorithm. Both Principal Component Analysis (PCA) and a greedy algorithm were used to reduce the dimension of the feature space. Finally, Support Vector Machines (SVM) with a Radial Basis Function (RBF) kernel were chosen as the classifier to achieve the best classification accuracy. Several representative conventional methods, including both low and high order statistics based methods, and both filtering and non-filtering methods, were chosen for comparison. The numerical experiments show that the proposed ICA-based algorithm in many cases outperforms the comparison algorithms. Two-component texture segmentation experiments were conducted, and the proposed algorithm showed a strong capability of segmenting two visually very similar yet different texture regions with rather fuzzy boundaries and almost the same mean and variance. By simulating speckle whose first order statistics approach the Rayleigh model gradually from different non-Rayleigh models, the experiments to some extent reveal how the behavior of higher order statistics changes with the underlying property of the tissue. It has been demonstrated that when the speckle approaches the Rayleigh model, both the second and higher order statistics lose their texture differentiation capability. However, when the speckle tends towards some non-Rayleigh models, methods based on higher order statistics show a strong advantage over those based solely on first or second order statistics. The proposed algorithm may potentially find clinical application in the early detection of soft tissue disease, and may also be helpful for better understanding the ultrasound speckle phenomenon from the perspective of higher order statistics. For the despeckling problem, an algorithm was proposed which adapts the ICA Sparse Code Shrinkage (ICA-SCS) method to the ultrasound B-mode image despeckling problem by applying an appropriate preprocessing step proposed by other researchers. The preprocessing step makes the speckle noise much closer to true white Gaussian noise (WGN) and hence more amenable to a denoising algorithm such as ICA-SCS, which is designed strictly for additive WGN. A discussion is given on how to obtain the noise-free training image samples in various ways. The experimental results show that the proposed method outperforms several classical methods chosen for comparison, including first or second order statistics based methods (such as the Wiener filter) and multichannel filtering methods (such as wavelet shrinkage), in terms of both speckle reduction and edge preservation.
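
    As a rough illustration of the data-adaptive filtering pipeline described above, the sketch below learns ICA filters from image patches, uses mean absolute filter responses as texture features, reduces them with PCA, and classifies with an RBF-kernel SVM. The surrogate textures, patch sizes, and component counts are arbitrary choices, and the histogram regularization, greedy feature selection, and ICA-SCS despeckling stages are omitted.

```python
import numpy as np
from sklearn.decomposition import FastICA, PCA
from sklearn.svm import SVC

# Minimal sketch of an ICA-filter / SVM texture pipeline on surrogate data:
# learn data-adaptive filters from image patches, use mean absolute filter
# responses as features, compress with PCA, and classify with an RBF SVM.
rng = np.random.default_rng(2)

def patches(img, size=8, n=500):
    """Sample n random size-by-size patches from a 2-D image, flattened to rows."""
    H, W = img.shape
    ys = rng.integers(0, H - size, n)
    xs = rng.integers(0, W - size, n)
    return np.stack([img[y:y + size, x:x + size].ravel() for y, x in zip(ys, xs)])

def texture_features(img, filters, size=8, n=200):
    """Mean absolute response of each ICA filter over random patches."""
    P = patches(img, size, n)
    return np.abs(P @ filters.T).mean(axis=0)

# Two surrogate texture classes: smoothed noise versus white noise.
imgs_a = [np.convolve(rng.standard_normal(256 * 256), np.ones(3) / 3, "same").reshape(256, 256)
          for _ in range(20)]
imgs_b = [rng.standard_normal((256, 256)) for _ in range(20)]

ica = FastICA(n_components=16, random_state=0)
ica.fit(patches(imgs_a[0], n=1000))            # learn filters from a training image
filters = ica.components_                      # rows are data-adaptive ICA filters

X = np.stack([texture_features(im, filters) for im in imgs_a + imgs_b])
y = np.array([0] * 20 + [1] * 20)
X = PCA(n_components=8).fit_transform(X)
clf = SVC(kernel="rbf", gamma="scale").fit(X[::2], y[::2])   # train on half the images
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```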