A Hybrid Similarity Measure Framework for Multimodal Medical Image Registration
Medical imaging is widely used today to facilitate both disease diagnosis and treatment planning, with a key prerequisite being the systematic process of medical image registration (MIR) to align either mono- or multimodal images of different anatomical parts of the human body. MIR utilises a similarity measure (SM) to quantify the level of spatial alignment and is particularly demanding due to inherent modality characteristics such as intensity non-uniformities (INU) in magnetic resonance images and large homogeneous non-vascular regions in retinal images. While various intensity- and feature-based SMs exist for MIR, mutual information (MI) has become established because of its computational efficiency and ability to register multimodal images. It is, however, very sensitive to interpolation artefacts in the presence of INU with noise, and can be compromised when overlapping areas are small. Recently, MI-based hybrid variants which combine regional features with intensity have emerged, though these incur high dimensionality and large computational overheads.
To address these challenges and secure accurate, efficient and robust registration of images containing high INU, noise and large homogeneous regions, this thesis presents a new hybrid SM framework for 2D multimodal rigid MIR. The framework consistently provides superior quantitative and qualitative performance, while offering a uniquely flexible design trade-off between registration accuracy and computational time. It makes three significant technical contributions to the field: i) An expectation maximisation-based principal component analysis with mutual information (EMPCA-MI) framework incorporating neighbourhood feature information; ii) Two innovative enhancements to reduce information redundancy and improve MI computational efficiency; and iii) An adaptive algorithm to select the most significant principal components for feature selection.
The thesis findings conclusively confirm that the hybrid SM framework offers an accurate and robust 2D registration solution for challenging multimodal medical imaging datasets, while its inherent flexibility means it can also be extended to the 3D registration domain.
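Since MI is the quantity every similarity measure above builds on, a minimal histogram-based sketch in plain NumPy may help fix ideas. This is the standard textbook estimator, not the EMPCA-MI method contributed by the thesis:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two equally sized images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                # joint intensity probability
    px = pxy.sum(axis=1, keepdims=True)      # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)      # marginal of img_b
    nz = pxy > 0                             # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

In rigid registration this score is maximised over translations and rotations of the moving image; the hybrid SMs above enrich the intensity histograms with regional feature information.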
Novel transmission and beamforming strategies for multiuser MIMO with various CSIT types
In multiuser multi-antenna wireless systems, the transmission and beamforming strategies that achieve the sum rate capacity depend critically on the acquisition of perfect Channel State Information at the Transmitter (CSIT).
Accordingly, a high-rate low-latency feedback link between the receiver and the transmitter is required to keep the latter accurately and instantaneously informed about the CSI.
In realistic wireless systems, however, only imperfect CSIT is achievable due to pilot contamination, estimation error, limited feedback and delay, etc.
As an intermediate solution, this thesis investigates novel transmission strategies suitable for various imperfect CSIT scenarios and the associated beamforming techniques to optimise the rate performance.
First, we consider a two-user Multiple-Input-Single-Output (MISO) Broadcast Channel (BC) under statistical and delayed CSIT.
We mainly focus on linear beamforming and power allocation designs for ergodic sum rate maximisation.
The proposed designs enable higher sum rate than the conventional designs.
Interestingly, we propose a novel transmission framework which makes better use of statistical and delayed CSIT and smoothly bridges between statistical CSIT-based strategies and delayed CSIT-based strategies.
Second, we consider a multiuser massive MIMO system under partial and statistical CSIT.
In order to tackle multiuser interference incurred by partial CSIT, a Rate-Splitting (RS) transmission strategy has been proposed recently.
We generalise the idea of RS into the large-scale array.
By further exploiting statistical CSIT, we propose a novel framework Hierarchical-Rate-Splitting that is particularly suited to massive MIMO systems.
Third, we consider a multiuser Millimetre Wave (mmWave) system with hybrid analog/digital precoding under statistical and quantised CSIT.
We leverage statistical CSIT to design digital precoder for interference mitigation while all feedback overhead is reserved for precise analog beamforming.
For very limited feedback and/or very sparse channels, the proposed precoding scheme yields higher sum rate than the conventional precoding schemes under a fixed total feedback constraint.
Moreover, a RS transmission strategy is introduced to further tackle the multiuser interference, enabling remarkable saving in feedback overhead compared with conventional transmission strategies.
Finally, we investigate the downlink hybrid precoding for physical layer multicasting with a limited number of RF chains.
We propose a low complexity algorithm to compute the analog precoder that achieves near-optimal max-min performance.
Moreover, we derive a simple condition under which the hybrid precoding driven by a limited number of RF chains incurs no loss of optimality with respect to the fully digital precoding case.
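As a point of reference for the beamforming designs discussed above, the conventional zero-forcing baseline under perfect CSIT (against which the proposed designs are compared; this is not the RS or hybrid scheme itself) can be sketched as:

```python
import numpy as np

def zf_sum_rate(H, snr):
    """Sum rate (bits/s/Hz) of zero-forcing beamforming in a K-user MISO
    broadcast channel. H is a K x Nt complex matrix, one row per user;
    transmit power is split equally across users, unit noise variance."""
    K = H.shape[0]
    W = np.linalg.pinv(H)                 # ZF directions: H @ pinv(H) = I
    W = W / np.linalg.norm(W, axis=0)     # unit-norm beamformer per column
    P = snr / K                           # equal power allocation
    rate = 0.0
    for k in range(K):
        sig = P * abs(H[k] @ W[:, k]) ** 2
        intf = sum(P * abs(H[k] @ W[:, j]) ** 2 for j in range(K) if j != k)
        rate += np.log2(1 + sig / (1 + intf))
    return float(rate)
```

With perfect CSIT the inter-user interference terms vanish by construction; the strategies in the thesis address exactly the regime where H is only partially or statistically known and this cancellation fails.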
Interference modelling and management for cognitive radio networks
Radio spectrum is becoming increasingly scarce as more and more devices go wireless. Meanwhile, studies indicate that the assigned spectrum is not fully utilised. Cognitive radio (CR) technology is envisioned to be a promising solution to address the imbalance between spectrum scarcity and spectrum underutilisation. It improves spectrum utilisation by reusing the unused or underutilised spectrum owned by incumbent systems (primary systems). With the introduction of CR networks, two types of interference originating from CR networks are introduced: the interference from CR to primary networks (CR-primary interference) and the interference among spectrum-sharing CR nodes (CR-CR interference). This interference should be well controlled and managed in order not to jeopardise the operation of the primary network and to improve the performance of CR systems. This thesis investigates the interference in CR networks by modelling and mitigating the CR-primary interference and analysing the CR-CR interference channels.
Firstly, the CR-primary interference is modelled for multiple CR nodes sharing the spectrum with the primary system. The probability density functions of CR-primary interference are derived for CR networks adopting different interference management schemes. The relationship between CR operating parameters and the resulting CR-primary interference is investigated, shedding light on the deployment of CR networks to better protect the primary system.
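While the thesis derives the probability density functions analytically, such densities can also be estimated empirically. The Monte Carlo sketch below is purely illustrative: uniform node placement in an annulus, a power-law path loss and a guard radius are assumptions for the sake of example, not the interference model of the thesis:

```python
import numpy as np

def cr_interference_samples(n_nodes=10, radius=100.0, guard=10.0,
                            tx_power=1.0, alpha=3.5, n_trials=100_000, seed=0):
    """Monte Carlo samples of aggregate CR-to-primary interference.
    CR nodes are placed uniformly (by area) in an annulus [guard, radius]
    around the primary receiver; each transmits tx_power over a path-loss
    channel with exponent alpha."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_trials, n_nodes))
    # inverse-CDF sampling of uniform-in-area radii within the annulus
    r = np.sqrt(guard**2 + u * (radius**2 - guard**2))
    return (tx_power * r**(-alpha)).sum(axis=1)   # aggregate interference
```

A histogram of the returned samples approximates the interference PDF, which can then be compared against a closed-form derivation or used to set protection thresholds for the primary receiver.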
Secondly, various interference mitigation techniques that are applicable to CR networks are reviewed. Two novel precoding schemes for CR multiple-input multiple-output (MIMO) systems are proposed to mitigate the CR-primary interference and maximise the CR throughput. To further reduce the CR-primary interference, we also approach interference mitigation from a cross-layer perspective by jointly considering channel allocation in the media access control layer and precoding in the physical layer of CR MIMO systems.
Finally, we analyse the underlying interference channels among spectrum-sharing CR users when they interfere with each other. The Pareto rate region for multi-user MIMO interference systems is characterised, and various schemes for convexifying the rate region are examined. Game theory is then applied to the interference system to coordinate the operation of each CR user, and Nash bargaining over MIMO interference systems is characterised as well.
The research presented in this thesis reveals the impact of CR operation on the primary network, how to mitigate the CR-primary interference, and how to coordinate the spectrum-sharing CR users. It forms the fundamental basis for interference management in CR systems and consequently gives insights into the design and deployment of CR networks.
Probabilistic multiple kernel learning
The integration of multiple and possibly heterogeneous information sources for an overall decision-making process has been an open and unresolved research direction in computing science since its very beginning. This thesis attempts to address parts of that direction by proposing probabilistic data integration algorithms for multiclass decisions, where an observation of interest is assigned to one of many categories based on a plurality of information channels.
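The general idea of multiple kernel learning can be shown in miniature: each information channel contributes a kernel, and a combined kernel drives a multiclass decision. The sketch below uses fixed weights and a kernel ridge classifier purely as a toy; the thesis develops probabilistic methods in which such combinations are learned:

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian RBF kernel matrix between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mkl_predict(X_train, y_train, X_test, gammas, weights, reg=1e-3):
    """Combine several RBF kernels with fixed weights, then classify by
    kernel ridge regression on one-hot labels (argmax over classes)."""
    K = sum(w * rbf_kernel(X_train, X_train, g) for w, g in zip(weights, gammas))
    K_test = sum(w * rbf_kernel(X_test, X_train, g) for w, g in zip(weights, gammas))
    Y = np.eye(y_train.max() + 1)[y_train]            # one-hot targets
    alpha = np.linalg.solve(K + reg * np.eye(len(K)), Y)
    return (K_test @ alpha).argmax(axis=1)            # predicted class per row
```

In a probabilistic formulation the per-kernel weights become random variables with priors, so that each channel's relevance to the decision is inferred rather than fixed in advance.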
Continuous Variable Optimisation of Quantum Randomness and Probabilistic Linear Amplification
In the past decade, quantum communication protocols based on continuous variables (CV) have seen considerable development in both theoretical and experimental aspects. Nonetheless, challenges remain in both the practical security and the operating range of CV systems before such systems may be used extensively. In this thesis, we present the optimisation of experimental parameters for secure randomness generation and propose a non-deterministic approach to enhance the amplification of CV quantum states.
The first part of this thesis examines the security of quantum devices: in particular, we investigate quantum random number generators (QRNG) and quantum key distribution (QKD) schemes. In a realistic scenario, the output of a quantum random number generator is inevitably tainted by classical technical noise, which potentially compromises the security of such a device. To safeguard against this, we propose and experimentally demonstrate an approach that produces side-information-independent randomness, and we present a method for maximising the randomness contained in a number sequence generated from a given quantum-to-classical-noise ratio. The photocurrent detected in our experiment is shown to support a real-time random-number generation rate of 14 (Mbit/s)/MHz.
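The notion of side-information-independent randomness can be illustrated with a toy calculation. The model below (independent zero-mean Gaussian quantum and classical noise, an ideal n-bit digitiser) is an illustrative simplification, not the security analysis of the thesis: only the quantum part of the noise is independent of an eavesdropper's side information, so the secure rate comes from the min-entropy of the quantum-noise distribution alone.

```python
import numpy as np
from math import erf, sqrt, log2

def min_entropy_per_sample(sigma, n_bits=8, span=5.0):
    """Min-entropy (bits per sample) of a zero-mean Gaussian signal digitised
    by an ideal n-bit ADC over [-span, span]: H_min = -log2(max bin prob)."""
    edges = np.linspace(-span, span, 2 ** n_bits + 1)
    cdf = np.array([0.5 * (1 + erf(e / (sigma * sqrt(2)))) for e in edges])
    return -log2(np.diff(cdf).max())

# The total photocurrent looks more random than the quantum part alone,
# but only the quantum contribution counts as secure randomness.
h_apparent = min_entropy_per_sample(sigma=1.2)   # quantum + classical noise
h_secure = min_entropy_per_sample(sigma=1.0)     # quantum noise only
```

The gap between the apparent and secure rates is exactly what the quantum-to-classical-noise ratio controls, which is why the thesis optimises the experiment around that ratio.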
Next, we study the one-sided device-independent (1sDI) quantum key distribution scheme in the context of continuous variables. By exploiting recently proven entropic uncertainty relations, one may bound the information leaked to an eavesdropper. We use such a bound to derive the secret key rate, which depends only upon the conditional Shannon entropies accessible to Alice and Bob, the two honest communicating parties. We identify and experimentally demonstrate such a protocol using only coherent states as the resource, and we measure the correlations necessary for 1sDI key distribution up to an applied loss equivalent to 3.5 km of fibre transmission.
The second part of this thesis concerns improving the transmission of a quantum state. We study two approximate implementations of a probabilistic noiseless linear amplifier (NLA): a physical implementation that truncates the working space of the NLA, and a measurement-based implementation that realises the truncation by a bounded postselection filter. We conduct a full analysis of the measurement-based NLA (MB-NLA), making explicit the relationship between its various operating parameters, such as the amplification gain and the cut-off of the operating domain, and we compare it with its physical counterpart in terms of the Husimi Q-distribution and the probability of success.
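One commonly used form of such a bounded postselection filter appears in the measurement-based NLA literature; the sketch below assumes that form (gain g, amplitude cutoff c) for illustration and is not necessarily the exact filter analysed in the thesis:

```python
import numpy as np

def mbnla_accept_prob(alpha_m, gain, cutoff):
    """Bounded postselection filter for a measurement-based NLA: weight a
    measured amplitude alpha_m by exp((1 - 1/g^2) * (|a|^2 - c^2)) below the
    cutoff c, and accept with certainty at or beyond it."""
    mag2 = np.abs(np.asarray(alpha_m, dtype=complex)) ** 2
    p = np.exp((1 - 1 / gain ** 2) * (mag2 - cutoff ** 2))
    return np.minimum(p, 1.0)
```

Raising the gain or the cutoff suppresses the acceptance probability near the origin, which is precisely the trade-off between amplification strength and probability of success discussed above.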
We take our investigations further by combining a probabilistic NLA with an ideal deterministic linear amplifier (DLA). In particular, we show that when the NLA gain is strictly less than the DLA gain, this combination can be realised by integrating an MB-NLA into an optical DLA setup. This results in a hybrid device which we refer to as the heralded hybrid quantum amplifier. A quantum cloning machine based on this hybrid amplifier is constructed through an amplify-then-split method. We perform probabilistic cloning of arbitrary coherent states and demonstrate the production of up to five clones, with the fidelity of each clone clearly exceeding the corresponding no-cloning limit.
Evaluation and analysis of hybrid intelligent pattern recognition techniques for speaker identification
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The rapid momentum of technological progress in recent years has led to a tremendous rise in the use of biometric authentication systems. The objective of this research is to investigate the problem of identifying a speaker from their voice regardless of the content (i.e. text-independent), and to design efficient methods of combining face and voice to produce a robust authentication system.
A novel approach towards speaker identification is developed using wavelet analysis and multiple neural networks, including the Probabilistic Neural Network (PNN), General Regression Neural Network (GRNN) and Radial Basis Function Neural Network (RBF NN), combined with an AND voting scheme. This approach is tested on the GRID and VidTIMIT corpora, and comprehensive test results have been validated against state-of-the-art approaches. The system was found to be competitive: it improved the recognition rate by 15% compared to classical Mel-frequency Cepstral Coefficients (MFCC), and reduced the recognition time by 40% compared to the Back Propagation Neural Network (BPNN), Gaussian Mixture Models (GMM) and Principal Component Analysis (PCA).
Another novel approach using vowel formant analysis is implemented using Linear Discriminant Analysis (LDA). Vowel formant-based speaker identification is well suited to real-time implementation and requires only a few bytes of information to be stored for each speaker, making it both storage- and time-efficient. Tested on GRID and VidTIMIT, the proposed scheme was found to be 85.05% accurate when Linear Predictive Coding (LPC) is used to extract the vowel formants, which is much higher than the accuracy of BPNN and GMM. Since the proposed scheme does not require any training time other than creating a small database of vowel formants, it is faster as well. Furthermore, an increasing number of speakers makes it difficult for BPNN and GMM to sustain their accuracy, but the proposed score-based methodology stays almost linear.
Finally, a novel audio-visual fusion-based identification system is implemented using GMM and MFCC for speaker identification and PCA for face recognition. The results of speaker identification and face recognition are fused at different levels, namely the feature, score and decision levels. Both the score-level and decision-level (with OR voting) fusions were shown to outperform the feature-level fusion in terms of accuracy and error resilience. This result is in line with the distinct nature of the two modalities, which is lost when they are combined at the feature level. The GRID and VidTIMIT test results validate that the proposed scheme is one of the best candidates for the fusion of face and voice due to its low computational time and high recognition accuracy.
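The score-level and decision-level (OR voting) fusions compared above can be sketched as follows; the min-max normalisation and the 50/50 weighting are illustrative choices, not the settings used in the thesis:

```python
import numpy as np

def score_fusion(face_scores, voice_scores, w=0.5):
    """Score-level fusion: min-max normalise each modality's match scores,
    take a weighted sum, and identify the best-scoring speaker index."""
    norm = lambda s: (s - s.min()) / (s.max() - s.min() + 1e-12)
    fused = w * norm(np.asarray(face_scores)) + \
            (1 - w) * norm(np.asarray(voice_scores))
    return int(np.argmax(fused))

def decision_fusion_or(face_id, voice_id, claimed_id):
    """Decision-level fusion with OR voting: accept the claimed identity
    if either modality on its own identifies it."""
    return face_id == claimed_id or voice_id == claimed_id
```

Fusing after each modality has committed to a score or decision preserves their distinct discriminative behaviour, which is the intuition behind the result reported above.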
Optimisation of flow chemistry: tools and algorithms
The coupling of flow chemistry with automated laboratory equipment has become increasingly common and is used to support the efficient manufacturing of chemicals. A variety of reactors and analytical techniques have been used in such configurations for investigating and optimising the processing conditions of different reactions. However, the integrated reactors used thus far have been constrained to single-phase mixing, greatly limiting the scope of reactions for such studies. This thesis presents the development and integration of a millilitre-scale CSTR, the fReactor, that is able to process multiphase flows, thus broadening the range of reactions that can be investigated in this way.
Following a thorough review of the literature covering the uses of flow chemistry and lab-scale reactor technology, insights are given on the design of a temperature-controlled version of the fReactor with an accuracy of ±0.3 °C, capable of cutting waiting times by 44% compared to the previous reactor. A demonstration of its use is provided in which the product of a multiphase reaction is analysed automatically under different reaction conditions according to a sampling plan. Metamodeling and cross-validation techniques are applied to these results, and single- and multi-objective optimisations are carried out over the response surface models of different metrics to illustrate the trade-offs between them. The use of such techniques reduced the error incurred by common least-squares polynomial fitting by over 12%. Additionally, the fReactor is demonstrated as a tool for synchrotron X-ray diffraction by successfully assessing the change in polymorph caused by solvent switching, this being the first synchrotron experiment using this sort of device.
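The metamodeling-plus-cross-validation workflow can be shown in miniature. The sketch below fits a one-dimensional quadratic response surface and scores it by plain k-fold RMSE; the thesis applies richer metamodels over several reaction variables:

```python
import numpy as np

def fit_quadratic_surface(X, y):
    """Least-squares quadratic response surface y ~ c0 + c1*x + c2*x^2."""
    A = np.column_stack([np.ones_like(X), X, X ** 2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def kfold_rmse(X, y, k=5):
    """k-fold cross-validated RMSE of the quadratic response surface."""
    idx = np.arange(len(X))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)        # hold out one fold at a time
        c = fit_quadratic_surface(X[train], y[train])
        pred = c[0] + c[1] * X[fold] + c[2] * X[fold] ** 2
        errs.append(np.mean((pred - y[fold]) ** 2))
    return float(np.sqrt(np.mean(errs)))
```

Cross-validated error, rather than the training residual, is what guards the optimisation against an over-fitted response surface when the sampling plan is small.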
The remainder of the thesis focuses on applying the same metamodeling and cross-validation techniques to the optimisation of the design of a miniaturised continuous oscillatory baffled reactor. Rather than being driven by physical experimentation, however, these techniques are used in conjunction with computational fluid dynamics. This reactor shows a better residence time distribution (RTD) than its CSTR counterparts. Notably, baffle offsetting in a plate design of the reactor is identified as a key parameter in giving a narrow residence time distribution and good mixing. Under this configuration it is possible to reduce the RTD variance by 45% and increase the mixing efficiency by 60% compared to the best-performing opposing-baffles geometry.
Optimisation Method for Training Deep Neural Networks in Classification of Non-functional Requirements
Non-functional requirements (NFRs) are regarded as critical to a software system's success. The majority of NFR detection and classification solutions have relied on supervised machine learning models, which are hindered by the lack of labelled data for training and necessitate a significant amount of time spent on feature engineering.
In this work we explore emerging deep learning techniques to reduce the burden of feature engineering. The goal of this study is to develop an autonomous system that can classify NFRs into multiple classes based on a labelled corpus. In the first section of the thesis, we standardise the NFR ontology and annotations to produce a corpus based on five attributes: usability, reliability, efficiency, maintainability, and portability. In the second section, the design and implementation of four neural networks, including the artificial neural network, convolutional neural network, long short-term memory, and gated recurrent unit, are examined to classify NFRs.
These models necessitate a large corpus. To overcome this limitation, we propose a new paradigm for data augmentation. This method uses a sort-and-concatenate strategy to combine two phrases from the same class, resulting in a two-fold increase in data size while keeping the domain vocabulary intact. We compared our method to a baseline (no augmentation) and to an existing approach, Easy Data Augmentation (EDA), with pre-trained word embeddings. All training was performed under two modifications to the data: augmentation on the entire data before the train/validation split versus augmentation on the train set only. Our findings show that, compared to EDA and the baseline, the NFR classification model improved greatly, and CNN outperformed the others when trained using our suggested technique in the first setting. However, we saw only a slight boost in the second experimental setup with train-set augmentation alone. As a result, we can determine that augmentation of the validation set is required in order to achieve acceptable results with our proposed approach. We hope that our ideas will inspire new data augmentation techniques, whether generic or task-specific. Furthermore, it would also be useful to implement this strategy in other languages.
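One plausible reading of the sort-and-concatenate strategy is sketched below; pairing each phrase with its successor in sorted order within a class is our guess at the pairing rule, not a detail confirmed by the abstract:

```python
def sort_concat_augment(corpus):
    """Sort-and-concatenate augmentation (one plausible reading): within each
    class, sort the phrases and concatenate each with its successor, roughly
    doubling the data while keeping the domain vocabulary intact."""
    augmented = list(corpus)
    by_class = {}
    for text, label in corpus:
        by_class.setdefault(label, []).append(text)
    for label, texts in by_class.items():
        s = sorted(texts)
        for a, b in zip(s, s[1:] + s[:1]):   # pair each phrase with the next
            augmented.append((a + " " + b, label))
    return augmented
```

Because every synthetic sample is built only from phrases already labelled with the same class, the label stays valid and no out-of-domain vocabulary is introduced, in contrast to substitution-based schemes such as EDA.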