A randomised primal-dual algorithm for distributed radio-interferometric imaging
Next generation radio telescopes, like the Square Kilometre Array, will
acquire an unprecedented amount of data for radio astronomy. The development of
fast, parallelisable or distributed algorithms for handling such large-scale
data sets is of prime importance. Motivated by this, we investigate herein a
convex optimisation algorithmic structure, based on primal-dual
forward-backward iterations, for solving the radio interferometric imaging
problem. It can encompass any convex prior of interest. It allows for the
distributed processing of the measured data and introduces further flexibility
by employing a probabilistic approach for the selection of the data blocks used
at a given iteration. We study the reconstruction performance with respect to
the data distribution and we propose the use of nonuniform probabilities for
the randomised updates. Our simulations show the feasibility of the
randomisation given a limited computing infrastructure as well as important
computational advantages when compared to state-of-the-art algorithmic
structures.
Comment: 5 pages, 3 figures, Proceedings of the European Signal Processing Conference (EUSIPCO) 2016. Related journal publication available at https://arxiv.org/abs/1601.0402
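The randomised update at the heart of this scheme, in which data blocks are selected with nonuniform probabilities at each iteration, can be illustrated with a toy example. Below is a minimal Python sketch of randomised block activation in a simple gradient scheme; the blocks, probabilities, step size, and the 1/p unbiasing scale are illustrative assumptions, not the paper's exact primal-dual algorithm.

```python
import random

random.seed(0)

# Toy measurement operator split into two data blocks (rows of A).
A_blocks = [
    [[1.0, 0.0], [0.0, 1.0]],   # block 0
    [[1.0, 1.0], [1.0, -1.0]],  # block 1
]
x_true = [2.0, -1.0]

def apply_block(Ab, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in Ab]

y_blocks = [apply_block(Ab, x_true) for Ab in A_blocks]

# Nonuniform selection probabilities: block 1 is visited more often.
probs = [0.4, 0.8]
step = 0.1
x = [0.0, 0.0]

for _ in range(500):
    grad = [0.0, 0.0]
    for Ab, yb, pb in zip(A_blocks, y_blocks, probs):
        if random.random() < pb:        # randomised block activation
            residual = [ri - yi for ri, yi in zip(apply_block(Ab, x), yb)]
            # dividing by pb keeps the stochastic gradient unbiased
            for row, ri in zip(Ab, residual):
                for j, a in enumerate(row):
                    grad[j] += a * ri / pb
    x = [xi - step * g for xi, g in zip(x, grad)]

print(x)  # converges to x_true = [2.0, -1.0]
```

On any iteration only the activated blocks are touched, so the per-iteration cost adapts to the available computing infrastructure while the 1/p scaling compensates for rarely selected blocks.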
Multi-frequency image reconstruction for radio-interferometry with self-tuned regularization parameters
As the world's largest radio telescope, the Square Kilometre Array (SKA) will
provide radio interferometric data with unprecedented detail. Image
reconstruction algorithms for radio interferometry are challenged to scale well
with TeraByte image sizes never seen before. In this work, we investigate one
such 3D image reconstruction algorithm known as MUFFIN (MUlti-Frequency image
reconstruction For radio INterferometry). In particular, we focus on the
challenging task of automatically finding the optimal regularization parameter
values. In practice, finding the regularization parameters using a classical grid search is computationally intensive and nontrivial due to the lack of ground truth. We adopt a greedy strategy where, at each iteration, the optimal
parameters are found by minimizing the predicted Stein unbiased risk estimate
(PSURE). The proposed self-tuned version of MUFFIN involves parallel and computationally efficient steps, and scales well with large-scale data.
Finally, numerical results on a 3D image are presented to showcase the
performance of the proposed approach.
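The SURE-based selection principle can be illustrated in its simplest setting, soft-thresholding denoising, where the risk estimate has a closed form and the threshold can be chosen without any ground truth. This is a simplified stand-in for illustration, not MUFFIN's actual PSURE machinery; the data, noise level, and threshold grid are invented.

```python
def sure(y, sigma, lam):
    """Stein unbiased risk estimate of the MSE of soft-thresholding y at lam."""
    n = len(y)
    risk = -n * sigma ** 2
    risk += sum(min(yi ** 2, lam ** 2) for yi in y)
    risk += 2 * sigma ** 2 * sum(1 for yi in y if abs(yi) > lam)
    return risk

# Pick the threshold minimising SURE over a grid, with no ground truth needed.
y = [3.1, -0.4, 0.2, 2.7, -0.1, 0.05, -3.5, 0.3]
sigma = 0.5
grid = [0.1 * k for k in range(1, 31)]
best_lam = min(grid, key=lambda lam: sure(y, sigma, lam))
print(best_lam)  # a threshold just above the small (noise-like) entries
```

Because SURE is an unbiased estimate of the mean-squared error, minimising it over the grid mimics minimising the true (unobservable) reconstruction error, which is the idea the greedy per-iteration parameter update builds on.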
Robust sparse image reconstruction of radio interferometric observations with purify
Next-generation radio interferometers, such as the Square Kilometre Array
(SKA), will revolutionise our understanding of the universe through their
unprecedented sensitivity and resolution. However, to realise these goals, significant challenges in image and data processing need to be overcome. The
standard methods in radio interferometry for reconstructing images, such as
CLEAN, have served the community well over the last few decades and have
survived largely because they are pragmatic. However, they produce
reconstructed interferometric images that are limited in quality and
scalability for big data. In this work we apply and evaluate alternative
interferometric reconstruction methods that make use of state-of-the-art sparse
image reconstruction algorithms motivated by compressive sensing, which have
been implemented in the PURIFY software package. In particular, we implement
and apply the proximal alternating direction method of multipliers (P-ADMM)
algorithm presented in a recent article. First, we assess the impact of the
interpolation kernel used to perform gridding and degridding on sparse image
reconstruction. We find that the Kaiser-Bessel interpolation kernel performs as
well as prolate spheroidal wave functions, while providing a computational
saving and an analytic form. Second, we apply PURIFY to real interferometric
observations from the Very Large Array (VLA) and the Australia Telescope
Compact Array (ATCA) and find that images recovered by PURIFY are of higher quality than those recovered by CLEAN. Third, we discuss how PURIFY reconstructions
exhibit additional advantages over those recovered by CLEAN. The latest version
of PURIFY, with developments presented in this work, is made publicly
available.
Comment: 22 pages, 10 figures. PURIFY code available at http://basp-group.github.io/purif
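A key practical point above is that the Kaiser-Bessel kernel has an analytic form, unlike prolate spheroidal wave functions, which must be computed numerically. A minimal sketch of evaluating it follows; the support width and the common beta = 2.34 * W heuristic are illustrative choices, not PURIFY's exact settings.

```python
import math

def bessel_i0(x, terms=30):
    # Modified Bessel function I0 via its power series (adequate for moderate x)
    s, t = 1.0, 1.0
    for k in range(1, terms):
        t *= (x / 2.0) ** 2 / k ** 2
        s += t
    return s

def kaiser_bessel(u, width=6.0, beta=2.34 * 6.0):
    """Kaiser-Bessel interpolation kernel, normalised to 1 at u = 0."""
    if abs(u) > width / 2.0:
        return 0.0
    arg = math.sqrt(1.0 - (2.0 * u / width) ** 2)
    return bessel_i0(beta * arg) / bessel_i0(beta)

# Kernel values across its support, from the centre to the edge:
print([round(kaiser_bessel(u), 4) for u in (0.0, 1.5, 3.0)])
```

The closed form is what gives the computational saving mentioned above: gridding and degridding weights can be evaluated on the fly rather than tabulated.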
Cygnus A super-resolved via convex optimisation from VLA data
We leverage the Sparsity Averaging Reweighted Analysis (SARA) approach for interferometric imaging, which is based on convex optimisation, for the super-resolution of Cyg A from observations at the frequencies 8.422 GHz and 6.678 GHz with the Karl G. Jansky Very Large Array (VLA). The associated average
sparsity and positivity priors enable image reconstruction beyond instrumental
resolution. An adaptive Preconditioned Primal-Dual algorithmic structure is
developed for imaging in the presence of unknown noise levels and calibration
errors. We demonstrate the superior performance of the algorithm with respect
to the conventional CLEAN-based methods, reflected in super-resolved images
with high fidelity. The high resolution features of the recovered images are
validated by referring to maps of Cyg A at higher frequencies, more precisely 17.324 GHz and 14.252 GHz. We also confirm the recent discovery of a radio
transient in Cyg A, revealed in the recovered images of the investigated data
sets. Our MATLAB code is available online on GitHub.
Comment: 14 pages, 7 figures (3/7 animated figures), accepted for publication in MNRA
Imaging and uncertainty quantification in radio astronomy via convex optimization : when precision meets scalability
Upcoming radio telescopes such as the Square Kilometre Array (SKA) will provide vast amounts
of data, allowing large images of the sky to be reconstructed at an unprecedented resolution and
sensitivity over thousands of frequency channels. In this regard, wideband radio-interferometric
imaging consists in recovering a 3D image of the sky from incomplete and noisy Fourier data, which is a highly ill-posed inverse problem. To regularize the inverse problem, advanced prior image
models need to be tailored. Moreover, the underlying algorithms should be highly parallelized to
scale with the vast data volumes provided and the Petabyte image cubes to be reconstructed for
SKA. The research developed in this thesis leverages convex optimization techniques to achieve
precise and scalable imaging for wideband radio interferometry and further assess the degree of
confidence in particular 3D structures present in the reconstructed cube.
In the context of image reconstruction, we propose a new approach that decomposes the image cube into regular spatio-spectral facets, each associated with a sophisticated hybrid prior image
model. The approach is formulated as an optimization problem with a multitude of facet-based
regularization terms and block-specific data-fidelity terms. The underpinning algorithmic structure benefits from well-established convergence guarantees and exhibits interesting functionalities
such as preconditioning to accelerate the convergence speed. Furthermore, it allows for parallel processing of all data blocks and image facets over a multiplicity of CPU cores, so that the bottleneck induced by the size of the image and data cubes is efficiently addressed via parallelization. The precision and scalability potential of the proposed approach are confirmed through
the reconstruction of a 15 GB image cube of the Cyg A radio galaxy.
In addition, we propose a new method that enables analyzing the degree of confidence in
particular 3D structures appearing in the reconstructed cube. This analysis is crucial due to the
severe ill-posedness of the inverse problem. Moreover, it can help in making scientific decisions on the structures under scrutiny (e.g., confirming the existence of a second black hole in the Cyg A
galaxy). The proposed method is posed as an optimization problem and solved efficiently with
a modern convex optimization algorithm with preconditioning and splitting functionalities. The
simulation results showcase the potential of the proposed method to scale to big data regimes.
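The facet decomposition described above can be sketched along a single axis: each facet is a contiguous tile extended by a small overlap, and the spatial axes of the cube would each be tiled this way. The facet geometry below is an illustrative assumption, not the thesis's exact faceting scheme.

```python
def facet_ranges(length, n_facets, overlap):
    """Split a 1-D axis of `length` pixels into `n_facets` contiguous facets,
    each extended by `overlap` pixels on each side (clipped at the borders)."""
    base = length // n_facets
    ranges = []
    for f in range(n_facets):
        start = f * base
        stop = length if f == n_facets - 1 else (f + 1) * base
        ranges.append((max(0, start - overlap), min(length, stop + overlap)))
    return ranges

# A 16-pixel axis split into 4 facets with a 2-pixel overlap:
print(facet_ranges(16, 4, 2))  # [(0, 6), (2, 10), (6, 14), (10, 16)]
```

Because each facet only touches its own pixel range (plus the overlap), the per-facet regularization terms can be evaluated independently on separate CPU cores, which is what makes the facet-based prior parallelisable.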
Generation of terahertz-modulated optical signals using AlGaAs/GaAs laser diodes
The Thesis reports on the research activities carried out under the Semiconductor-Laser Terahertz-Frequency Converters Project at the Department of Electronics and Electrical Engineering, University of Glasgow.
The Thesis presents the work leading to the demonstration of reproducible harmonic modelocked operation from a novel design of monolithic semiconductor laser, comprising a compound cavity formed by a 1-D photonic-bandgap (PBG) mirror. Modelocking was achieved at a harmonic of the fundamental round-trip frequency with pulse repetition rates from 131 GHz up to a record-high frequency of 2.1 THz. The devices were fabricated from GaAs/AlGaAs material emitting at a wavelength of 860 nm and incorporated two gain sections with an etched PBG reflector between them, and a saturable absorber section.
Autocorrelation studies are reported, which allow the device behaviour for different modelocking frequencies, compound cavity ratios, and type and number of intra-cavity reflectors to be analyzed. The highly reflective PBG microstructures are shown to be essential for subharmonic-free modelocking operation of the high-frequency devices. It was also demonstrated that the multi-slot PBG reflector can be replaced with two separate slots with smaller reflectivity.
Some work was also done on the realisation of a dual-wavelength source using a broad-area laser diode in an external grating-loaded cavity. However, the source failed to deliver the spectrally narrow lines required for optical heterodyning applications. Photomixer devices incorporating a terahertz antenna for optical-to-microwave down-conversion were fabricated; however, no down-conversion experiments were attempted. Finally, novel device designs are proposed that exploit the remarkable spectral and modelocking properties of compound-cavity lasers.
The ultrafast laser diodes demonstrated in this Project can be developed for applications in terahertz imaging, medicine, ultrafast optical links and atmospheric sensing.
On fundamental computational barriers in the mathematics of information
This thesis is about computational theory in the setting of the mathematics of information. The first goal is to demonstrate that many commonly considered problems
in optimisation theory cannot be solved with an algorithm if the input data is only
known up to an arbitrarily small error (modelling the fact that most real numbers are not expressible to infinite precision with a floating-point-based computational device).
This includes computing the minimisers to basis pursuit, linear programming, lasso
and image deblurring as well as finding an optimal neural network given training data.
These results are somewhat paradoxical given the success that existing algorithms exhibit when tackling these problems with real-world datasets, and a substantial portion of this thesis is dedicated to explaining this apparent disparity, particularly in the context of compressed sensing. Doing so requires the introduction of a variety of new
concepts, including that of a breakdown epsilon, which may have broader applicability
to computational problems outside of the ones central to this thesis. We conclude with a discussion on future research directions opened up by this work.
This work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grant EP/H023348/1 for the University of Cambridge Centre for Doctoral Training, the Cambridge Centre for Analysis.
A Compressed Sensing Approach to Detect Immobilized Nanoparticles Using Superparamagnetic Relaxometry
Superparamagnetic relaxometry (SPMR) is an emerging technology that leverages the unique properties of biologically targeted superparamagnetic iron oxide nanoparticles to detect cancer. The use of ultra-sensitive sensors enables SPMR to detect tumors ten times smaller than current imaging methods can. Reconstructing the distribution of cancer-bound nanoparticles from SPMR measurements is challenging because the inverse problem is ill posed. Current methods of source reconstruction rely on prior knowledge of the number of clusters of bound nanoparticles and their approximate locations, which is not available in clinical applications. In this work, we present a novel reconstruction algorithm based on compressed sensing methods that relies only on clinically feasible information. This approach is based on the hypothesis that the true distribution of cancer-bound nanoparticles consists of only a few highly focal clusters around tumors and metastases, and is therefore the sparsest of all possible distributions with a similar SPMR signal. We tested this hypothesis through three specific aims. First, we calibrated the sensor locations used in the forward model to measured data, and found that the forward model agreed with the data to within 5%. Next, we determined the optimal choice of the data-fidelity parameter and investigated the effect of experimental factors on the reconstruction. Finally, we compared the compressed sensing-based algorithm with the current reconstruction method on SPMR measurements of phantoms. We found that when multiple sources were reconstructed simultaneously, the compressed sensing approach was more frequently able to detect the second source. In a blinded user analysis, our compressed sensing-based reconstruction algorithm correctly classified 80% of the test cases, whereas the current reconstruction method had an accuracy of 43%.
Therefore, our algorithm has the potential to detect early-stage tumors with higher accuracy, advancing the translation of SPMR as a clinical tool for early detection of cancer.
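The sparsity hypothesis above is commonly operationalised as an l1-regularised least-squares problem solved by iterative soft-thresholding (ISTA). The sketch below recovers a 1-sparse source from an underdetermined toy system; the matrix, regularisation weight, and step size are illustrative assumptions, not the dissertation's actual solver.

```python
def soft(v, t):
    # Soft-thresholding operator, the proximal map of the l1 norm
    return v - t if v > t else v + t if v < -t else 0.0

# 3 measurements of a 5-voxel "distribution": an underdetermined system.
A = [
    [1.0, 0.0, 1.0, 0.0, 0.0],
    [0.0, 1.0, 1.0, 0.0, 1.0],
    [0.0, 0.0, 1.0, 1.0, 0.0],
]
y = [3.0, 3.0, 3.0]          # generated by a single source: x_true[2] = 3

lam, step = 0.3, 0.1
x = [0.0] * 5
for _ in range(2000):
    residual = [sum(a * xi for a, xi in zip(row, x)) - yi
                for row, yi in zip(A, y)]
    grad = [sum(A[i][j] * residual[i] for i in range(3)) for j in range(5)]
    x = [soft(xj - step * gj, step * lam) for xj, gj in zip(x, grad)]

print([round(v, 3) for v in x])  # a single active source at index 2
```

Although several non-sparse combinations of voxels explain the same measurements, the l1 penalty drives the iterates to the sparsest consistent distribution, mirroring the few-focal-clusters hypothesis above.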