Foetal echocardiographic segmentation
Congenital heart disease affects just under one per cent of all live births [1].
Defects that manifest as changes to the cardiac chamber volumes
are the motivation for the research presented in this thesis.
Blood volume measurement in vivo requires delineation of the cardiac chambers, and
manual tracing of foetal cardiac chambers is time-consuming and operator-dependent.
This thesis presents a multi-region level-set snake deformable model, applied in
both 2D and 3D, which can adapt automatically, to some extent, to ultrasound
artefacts such as attenuation, speckle and partial occlusion.
The algorithm presented is named Mumford–Shah–Sarti Collision Detection (MSSCD).
The level set methods presented in this thesis have an optional shape-prior term for
constraining the segmentation by a template registered to the image in the presence
of shadowing and heavy noise.
When applied to real data in the absence of the template, the MSSCD algorithm is
initialised from seed primitives placed at the centre of each cardiac chamber. The
voxel statistics inside each chamber are determined before evolution. The MSSCD stops
at open boundaries between two chambers as the two approaching level set fronts
meet. This is significant when determining volumes for all cardiac compartments,
since cardiac indices assume that each chamber is treated in isolation. Comparison
of the segmentation results from the implemented snakes, including a previous level
set method from the foetal cardiac literature, shows that in both 2D and 3D, on both
real and synthetic data, the MSSCD formulation is better suited to these types of data.
All the algorithms tested in this thesis are within 2 mm of manually traced
segmentations of the foetal cardiac datasets, which corresponds to less than 10% of
the length of a foetal heart. In addition to comparison with manual tracings, all the
amorphous deformable model segmentations in this thesis are validated using a
physical phantom. The volume estimate of the phantom from the MSSCD
segmentation is within 13% of the physically determined volume.
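The collision-detection behaviour can be illustrated with a toy sketch (hypothetical code, not the thesis implementation): two labelled fronts grow outwards from their seed primitives, and a cell approached by two different labels is never claimed, so the fronts halt at the open boundary where they meet rather than merging.

```python
import numpy as np

def grow_with_collision(n, seeds, steps):
    """Toy 1-D front growing with collision detection: each labelled
    front expands by one cell per step; a cell approached by two
    different labels at once is left unclaimed, so the fronts stop
    where they meet instead of merging."""
    label = np.zeros(n, dtype=int)
    for k, s in enumerate(seeds, start=1):
        label[s] = k                      # one seed primitive per chamber
    for _ in range(steps):
        new = label.copy()
        for i in np.flatnonzero(label == 0):
            left = label[i - 1] if i > 0 else 0
            right = label[i + 1] if i < n - 1 else 0
            if left and right and left != right:
                continue                  # collision: keep the open boundary
            new[i] = left or right
        label = new
    return label

print(grow_with_collision(9, seeds=[1, 7], steps=5))  # → [1 1 1 1 0 2 2 2 2]
```

The unclaimed cell between the two labels plays the role of the open boundary between chambers, so each region's volume can be measured in isolation.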
Blind image deconvolution: nonstationary Bayesian approaches to restoring blurred photos
High-quality digital images have become pervasive in modern scientific and everyday life,
in areas from photography to astronomy, CCTV, microscopy, and medical imaging. However,
there are always limits to the quality of these images due to uncertainty and imprecision
in the measurement systems. Modern signal processing methods offer the promise of
overcoming some of these problems by post-processing these blurred and noisy images.
In this thesis, novel methods using nonstationary statistical models are developed for
the removal of blurs from out-of-focus and other types of degraded photographic images.
The work tackles the fundamental problem of blind image deconvolution (BID): its goal is
to restore a sharp image from a blurred observation when the blur itself is completely unknown.
This is a “doubly ill-posed” problem; the extreme lack of information must be countered
by strong prior constraints on sensible types of solution. In this work, the hierarchical
Bayesian methodology is used as a robust and versatile framework to impart the required prior
knowledge.
The thesis is arranged in two parts. In the first part, the BID problem is reviewed, along
with techniques and models for its solution. Observation models are developed, with an
emphasis on photographic restoration, concluding with a discussion of how these are reduced
to the common linear spatially invariant (LSI) convolutional model. Classical methods for the
solution of ill-posed problems are summarised to provide a foundation for the main theoretical
ideas that will be used under the Bayesian framework. This is followed by an in-depth review
and discussion of the various prior image and blur models appearing in the literature, and then
their applications to solving the problem with both Bayesian and non-Bayesian techniques.
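In discrete form, the LSI observation model is simply y = h ∗ x + n. A minimal numpy sketch, with a 1-D stand-in for the image and an illustrative kernel and noise level of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(64)                          # unknown sharp signal (1-D stand-in for the image)
h = np.ones(5) / 5.0                        # unknown blur kernel (here a simple box blur)
n = 0.01 * rng.standard_normal(64 + 5 - 1)  # additive observation noise
y = np.convolve(x, h) + n                   # observation: y = h * x + n
# Blind deconvolution: recover both x and h given only y.
```

The "blind" qualifier is what makes the problem doubly ill-posed: both factors of the convolution must be inferred from the single observation y.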
The second part covers novel restoration methods, making use of the theory presented in Part I.
First, two new nonstationary image models are presented. The first models local variance in
the image, and the second extends this with locally adaptive non-causal autoregressive (AR)
texture estimation and local mean components. These models allow for recovery of image
details, including edges and texture, whilst preserving smooth regions. Most existing methods
do not model the boundary conditions correctly for deblurring of natural photographs, and a
chapter is devoted to exploring Bayesian solutions to this topic.
Due to the complexity of the models used and the problem itself, there are many challenges
which must be overcome for tractable inference. Using the new models, three different inference
strategies are investigated: first, the Bayesian maximum marginalised a posteriori
(MMAP) method with deterministic optimisation; then the variational Bayesian (VB)
distribution approximation; and finally, stochastic simulation of the posterior distribution
using the Gibbs sampler. Of these, we find the Gibbs sampler to be the most effective
way to deal with a variety of different types of unknown blurs. Along the way, details are given
of the numerical strategies developed to give accurate results and to accelerate performance.
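As a rough illustration of the third strategy (a textbook toy, not the thesis's sampler, which targets the joint posterior over image, blur and hyperparameters), a Gibbs sampler alternately draws each unknown from its conditional distribution given the current values of the others:

```python
import numpy as np

# Toy Gibbs sampler for a bivariate Gaussian with correlation rho.
# Each sweep draws one variable from its exact conditional given the
# other; the chain's samples then approximate the joint distribution.
rng = np.random.default_rng(1)
rho, n_iter, burn_in = 0.8, 20000, 5000
x = y = 0.0
samples = np.empty((n_iter, 2))
for t in range(n_iter):
    x = rng.normal(rho * y, np.sqrt(1.0 - rho**2))  # draw from p(x | y)
    y = rng.normal(rho * x, np.sqrt(1.0 - rho**2))  # draw from p(y | x)
    samples[t] = x, y
est_rho = np.corrcoef(samples[burn_in:].T)[0, 1]    # ≈ 0.8 after burn-in
```

In the deconvolution setting the same alternation applies, with the conditionals being over the image, the blur and the hyperparameters in turn.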
Finally, the thesis demonstrates state-of-the-art results in blind restoration of synthetic and
real degraded images, such as recovering details in out-of-focus photographs.
Time and frequency domain algorithms for speech coding
The promise of digital hardware economies (due to recent advances in
VLSI technology) has focussed much attention on more complex and sophisticated
speech coding algorithms which offer improved quality at relatively
low bit rates.
This thesis describes the results (obtained from computer simulations)
of research into various efficient (time and frequency domain) speech
encoders operating at a transmission bit rate of 16 Kbps.
In the time domain, Adaptive Differential Pulse Code Modulation (ADPCM)
systems employing both forward and backward adaptive prediction were
examined. A number of algorithms were proposed and evaluated, including
several variants of the Stochastic Approximation Predictor (SAP). A
Backward Block Adaptive (BBA) predictor was also developed and found to
outperform the conventional stochastic methods, even though its complexity
in terms of signal processing requirements is lower. A simplified
Adaptive Predictive Coder (APC) employing a single-tap pitch predictor,
considered next, provided a slight improvement in performance over ADPCM,
but with rather greater complexity.
The ultimate test of any speech coding system is the perceptual performance
of the received speech. Recent research has indicated that this
may be enhanced by suitable control of the noise spectrum according to
the theory of auditory masking. Various noise shaping ADPCM
configurations were examined, and it was demonstrated that a proposed
pre-/post-filtering arrangement which exploits advantageously the
predictor-quantizer interaction, leads to the best subjective
performance in both forward and backward prediction systems.
Adaptive quantization is instrumental to the performance of ADPCM systems.
Both the forward adaptive quantizer (AQF) and the backward one-word
memory adaptation (AQJ) were examined. In addition, a novel method
of decreasing quantization noise in ADPCM-AQJ coders, which involves the
application of correction to the decoded speech samples, provided
reduced output noise across the spectrum, with considerable high frequency
noise suppression.
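The backward one-word-memory idea can be sketched as follows (a hypothetical, heavily simplified 1-bit coder in the spirit of a Jayant quantizer, not the AQJ configuration evaluated in the thesis): the step size is rescaled using only the most recently transmitted code word, so the decoder can track it without any side information.

```python
import numpy as np

def dpcm_1bit_adaptive(x, m_expand=1.5, m_shrink=0.75):
    """Toy 1-bit DPCM with backward (one-word-memory) step adaptation:
    quantize the prediction error to +/- one step, then grow or shrink
    the step based only on the transmitted code history, so encoder and
    decoder stay in lock-step without side information."""
    step, pred, prev_c = 1.0, 0.0, 0
    codes, recon = [], []
    for s in x:
        c = 1 if s - pred >= 0 else -1     # 1-bit code word
        pred += c * step                   # decoder-reproducible reconstruction
        codes.append(c)
        recon.append(pred)
        # Backward adaptation: repeated codes suggest slope overload, so
        # expand the step; alternating codes suggest granular noise, so shrink.
        step *= m_expand if c == prev_c else m_shrink
        step = min(max(step, 1e-3), 1e3)   # keep the step size bounded
        prev_c = c
    return np.array(codes), np.array(recon)

t = np.linspace(0.0, 1.0, 200)
codes, recon = dpcm_1bit_adaptive(10.0 * np.sin(2 * np.pi * 3 * t))
```

Because the adaptation depends only on past code words, no quantizer state need be transmitted, which is the appeal of backward adaptation in these systems.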
More powerful (and inevitably more complex) frequency domain speech
coders such as the Adaptive Transform Coder (ATC) and the Sub-band Coder
(SBC) offer good quality speech at 16 Kbps. To reduce complexity and
coding delay, whilst retaining the advantage of sub-band coding, a novel
transform based split-band coder (TSBC) was developed and found to compare
closely in performance with the SBC.
To prevent the heavy side information requirement associated with a
large number of bands in split-band coding schemes from impairing coding
accuracy, without forgoing the efficiency provided by adaptive bit
allocation, a method employing AQJs to code the sub-band signals together
with vector quantization of the bit allocation patterns was also
proposed.
Finally, 'pipeline' methods of bit allocation and step size estimation
(using the Fast Fourier Transform (FFT) on the input signal) were examined.
Such methods, although less accurate, are nevertheless useful in
limiting the coding delay associated with SBC schemes employing Quadrature
Mirror Filters (QMF).
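A hedged sketch of the adaptive bit allocation idea underlying these schemes (a generic greedy allocator driven by per-band energies, e.g. estimated from an FFT of the input block; the details here are illustrative, not the thesis's method):

```python
import numpy as np

def greedy_bit_allocation(band_energy, total_bits):
    """Toy greedy bit allocation: each extra bit roughly quarters a
    band's quantization noise power, so repeatedly award one bit to
    the band whose estimated noise (energy / 4**bits) is largest."""
    band_energy = np.asarray(band_energy, dtype=float)
    bits = np.zeros(len(band_energy), dtype=int)
    for _ in range(total_bits):
        noise = band_energy / 4.0 ** bits
        bits[int(np.argmax(noise))] += 1   # give a bit to the noisiest band
    return bits

print(greedy_bit_allocation([16.0, 4.0, 1.0, 1.0], total_bits=6))  # → [3 2 1 0]
```

High-energy bands receive more bits, which is also why the allocation pattern itself becomes costly side information when the number of bands is large, motivating the vector quantization of allocation patterns described above.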
Time in Music and Culture
From Aristotle to Heidegger, philosophers distinguished two orders of time, ‘before and after’ and ‘past, present and future’, presenting them in a wide range of interpretations. It was only around the turn of the 1970s that two theories of time which deliberately went beyond that tradition, enhancing our notional apparatus, were produced independently of one another. The nature philosopher Julius T. Fraser, founder of the interdisciplinary International Society for the Study of Time, distinguished temporal levels in the evolution of the Cosmos and the structure of the human mind: atemporality, prototemporality, eotemporality, biotemporality and nootemporality. The author of the book distinguishes two ‘dimensions’ in time: the dimension of the sequence of time (syntagmatic) and the dimension of the sizes of duration or frequency (systemic). On the systemic scale, the author distinguishes, in human ways of existing and acting, a visual zone, a zone of the psychological present, a zone of works and performances, a zone of the natural and cultural environment, a zone of individual and social life, and a zone of history, myth and tradition. In this book, the author provides a synthesis of these theories.
Musical Forces in Claude Vivier’s Wo bist du Licht! and Trois airs pour un opéra imaginaire
Claude Vivier’s (1947–1983) idiosyncratic and moving composition style often evades traditional, pitch-centred approaches to music-theoretical analysis; however, the somatic and sensual qualities of his style encourage a metaphorical appreciation of his music. This study analyses Wo bist du Licht! (1981) and the first two airs from Trois airs pour un opéra imaginaire (1982), which both feature his technique sinusoïdale, from the perspective of conceptual metaphor and musical forces. At the centre of this study are the dominant conceptual metaphors that linguist George Lakoff and philosopher Mark Johnson identify as being integral to our understanding of time, and which music theorist Arnie Cox demonstrates also underlie our concept of motion and change in music.
My approach builds on Steve Larson’s theory of musical forces, which qualifies the musical motion metaphor by invoking musical analogues to gravity, magnetism, and inertia. These, Larson demonstrates, operate in a predictable way in tonal music. The post-tonal context of Vivier’s music requires modification of Larson’s approach. To this end, I incorporate concepts borrowed from Robert Hatten and Matthew BaileyShea. From Hatten, I borrow the notion of a musical agent, and analogues to friction and momentum, only I qualify musical momentum as a combined perception of musical mass (manifested as register, density, and texture) and velocity (manifested as tempo). From BaileyShea, I borrow the concept of water and wind as non-sentient, unpredictable environmental forces. The wave and wind metaphors are particularly adept at conveying the changes in texture and intensity that the technique sinusoïdale affords. Because they complement force metaphors, I also include energy and other embodied, non-motion metaphors (e.g., kinetic/potential energy, pressure, timbre). Although not forces-based, timbre metaphors have corporeal connotations that are helpful in conveying the changing mental states suggested in the second air of Trois airs.
These metaphors rely on our intuitive understanding of motion and embodied experience to convey musical change. They enable us to discuss more phenomenological, abstract musical attributes by drawing on a familiar vocabulary rooted in sensorimotor experience. This approach resonates particularly well with the sensual nature of Vivier’s music.
Recent advances on the reduction and analysis of big and high-dimensional data
In an era of remarkable advancements in computer engineering, computational algorithms, and mathematical modeling, data scientists are inevitably faced with the challenge of working with big and high-dimensional data. For many problems, data reduction is a necessary first step; such reduction allows for storage and portability of big data, and enables the computation of expensive downstream quantities. The next step then involves the analysis of big data -- the use of such data for modeling, inference, and prediction. This thesis presents new methods for big data reduction and analysis, with a focus on solving real-world problems in statistics, machine learning and engineering.
Ultracold atoms in flexible holographic traps
This thesis details the design, construction and characterisation of an ultracold atoms system, developed in conjunction with a flexible optical trapping scheme which utilises a Liquid Crystal Spatial Light Modulator (LC SLM). The ultracold atoms system uses a hybrid trap formed of a quadrupole magnetic field and a focused far-detuned laser beam to form a Bose-Einstein Condensate of 2×10⁵ ⁸⁷Rb atoms. Cold atoms confined in several arbitrary optical trapping geometries are created by overlaying the LC SLM trap on to the hybrid trap, where a simple feedback process using the atomic distribution as a metric is shown to be capable of compensating for optical aberrations.
Two novel methods for creating flexible optical traps with the LC SLM are also detailed, the first of which is a multi-wavelength technique which allows several wavelengths of light to be smoothly shaped and applied to the atoms. The second method uses a computationally efficient minimisation algorithm to create light patterns which are constrained in both amplitude and phase, where the extra phase constraint was shown to be crucial for controlling propagation effects of the LC SLM trapping beam.
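As a rough illustration of this kind of holographic beam shaping (a standard Gerchberg–Saxton-style loop, not the constrained minimisation algorithm developed in the thesis), one can iterate between the SLM plane and the focal plane (modelled here by an FFT), imposing the phase-only constraint on one side and the target amplitude on the other; the grid size and two-spot target below are arbitrary choices for the sketch:

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=50):
    """Toy Gerchberg-Saxton loop: find a phase-only SLM pattern whose
    far field (modelled by an FFT) approximates a target amplitude.
    Note this constrains amplitude only; the thesis's algorithm adds a
    far-field phase constraint to control propagation effects."""
    phase = np.zeros_like(target_amp)                  # flat initial SLM phase
    for _ in range(n_iter):
        far = np.fft.fft2(np.exp(1j * phase))          # propagate SLM -> focal plane
        far = target_amp * np.exp(1j * np.angle(far))  # impose target amplitude
        near = np.fft.ifft2(far)                       # propagate back to the SLM
        phase = np.angle(near)                         # keep phase only (SLM constraint)
    return phase

# Target: two spots in a 16x16 focal plane, scaled to match the
# far-field energy of a unit-amplitude phase-only SLM field.
target = np.zeros((16, 16))
target[2, 5] = target[9, 12] = 1.0
target *= np.sqrt(target.size**2 / np.sum(target**2))
slm_phase = gerchberg_saxton(target)
```

In the experiment the feedback metric is the measured atomic distribution rather than a simulated far field, which is what allows optical aberrations to be compensated.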