10,710 research outputs found
Multimodal spatio-temporal deep learning framework for 3D object detection in instrumented vehicles
This thesis presents the use of multiple modalities, such as image and lidar, to incorporate spatio-temporal information from sequence data into deep learning architectures for 3D object detection in instrumented vehicles. The race to autonomy in instrumented vehicles or self-driving cars has stimulated significant research into advanced driver assistance system (ADAS) technologies, particularly perception systems. Object detection plays a crucial role in perception systems by providing spatial information to subsequent modules; hence, accurate detection is a significant task supporting autonomous driving. The advent of deep learning in computer vision applications and the availability of multiple sensing modalities such as 360° imaging, lidar, and radar have led to state-of-the-art 2D and 3D object detection architectures. Most current state-of-the-art 3D object detection frameworks consider only a single frame of reference and therefore do not exploit the temporal information associated with objects or scenes in sequence data. The present research thus hypothesizes that multimodal temporal information can help bridge the gap between 2D and 3D metric space by improving the accuracy of deep learning frameworks for 3D object estimation. First, the thesis examines multimodal data representations and hyper-parameter selection using public datasets such as KITTI and nuScenes, with Frustum-ConvNet as the baseline architecture. Secondly, an attention mechanism is employed along with a convolutional LSTM to extract spatio-temporal information from sequence data, improving 3D estimates and helping the architecture focus on salient lidar point cloud features. Finally, various fusion strategies are applied to fuse the modalities and temporal information into the architecture, and their effect on performance and computational complexity is assessed.
Overall, this thesis has established the importance and utility of multimodal systems for refined 3D object detection and proposed a pipeline incorporating spatial, temporal, and attention mechanisms to improve both class-specific and overall accuracy, demonstrated on key autonomous driving datasets.
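The attention mechanism over lidar point features described above can be illustrated with a minimal numpy sketch of scaled dot-product attention. This is not the thesis's architecture (which uses a convolutional LSTM and Frustum-ConvNet); the function names and the choice of a single query vector are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attend_to_points(point_feats, query):
    """Weight per-point lidar features by similarity to a query vector
    (e.g. a pooled image feature), then attention-pool them.

    point_feats: (N, D) array of per-point features
    query:       (D,) context vector
    Returns the pooled (D,) feature and the (N,) attention weights.
    """
    # Scaled dot-product scores, as in standard attention
    scores = point_feats @ query / np.sqrt(point_feats.shape[1])
    weights = softmax(scores)
    pooled = weights @ point_feats
    return pooled, weights
```

The weights sum to one, so salient points (high similarity to the query) dominate the pooled feature, which is the intuition behind focusing on salient point cloud regions.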
Evaluation of image quality and reconstruction parameters in recent PET-CT and PET-MR systems
In this PhD dissertation, we propose to evaluate the impact of using different PET isotopes
on the National Electrical Manufacturers Association (NEMA) performance tests of the
GE Signa integrated PET/MR. The methods were divided into three closely related categories:
NEMA performance measurements, system modelling and evaluation of the image quality of
state-of-the-art clinical PET scanners. NEMA performance measurements for
characterizing spatial resolution, sensitivity, image quality, the accuracy of attenuation and
scatter corrections, and noise equivalent count rate (NECR) were performed using clinically
relevant and commercially available radioisotopes. Then we modelled the GE Signa integrated
PET/MR system using a realistic GATE Monte Carlo simulation and validated it with the results of
the NEMA measurements (sensitivity and NECR). Next, the effect of the 3T MR field on the
positron range was evaluated for F-18, C-11, O-15, N-13, Ga-68 and Rb-82. Finally, to evaluate the image
quality of the state-of-the-art clinical PET scanners, a noise reduction study was performed
using a Bayesian Penalized-Likelihood reconstruction algorithm on a time-of-flight PET/CT
scanner to investigate whether and to what extent noise can be reduced. The outcome of this
thesis will allow clinicians to reduce the PET dose which is especially relevant for young
patients. In addition, the Monte Carlo simulation platform for PET/MR developed for this thesis will
allow physicists and engineers to better understand and design integrated PET/MR systems.
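The noise equivalent count rate (NECR) mentioned above has a standard NEMA definition, NECR = T²/(T + S + R), with T, S, and R the true, scatter, and random coincidence rates. A small sketch (the function name is ours; some protocols scale R by a factor depending on the randoms-estimation method):

```python
def necr(trues, scatters, randoms):
    """Noise equivalent count rate per the NEMA definition:
    NECR = T^2 / (T + S + R), all rates in counts per second."""
    total = trues + scatters + randoms
    if total <= 0:
        raise ValueError("total count rate must be positive")
    return trues ** 2 / total
```

For example, T = 100 kcps, S = 20 kcps, R = 30 kcps gives NECR = 100²/150 ≈ 66.7 kcps, showing how scatter and randoms degrade the effective signal.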
Machine learning and large scale cancer omic data: decoding the biological mechanisms underpinning cancer
Many of the mechanisms underpinning cancer risk and tumorigenesis are still not
fully understood. However, the next-generation sequencing revolution and the
rapid advances in big data analytics allow us to study cells
and complex phenotypes at unprecedented depth and breadth. While experimental
and clinical data are still fundamental to validate findings and confirm
hypotheses, computational biology is key for the analysis of system- and
population-level data for detection of hidden patterns and the generation of
testable hypotheses.
In this work, I tackle two main questions regarding cancer risk and tumorigenesis
that require novel computational methods for the analysis of system-level omic
data. First, I focused on how frequent, low-penetrance inherited variants modulate
cancer risk in the broader population. Genome-Wide Association Studies (GWAS)
have shown that Single Nucleotide Polymorphisms (SNP) contribute to cancer risk
with multiple subtle effects, but still fail to give further insight
into their synergistic effects. I developed a novel hierarchical Bayesian
regression model, BAGHERA, to estimate heritability at the gene-level from GWAS
summary statistics. I then used BAGHERA to analyse data from 38 malignancies in
the UK Biobank. I showed that genes with high heritable risk are involved in key
processes associated with cancer and often overlap with somatically mutated
driver genes.
Heritability analysis, like many other omic analysis methods, studies the effects of DNA
variants on single genes in isolation. However, most biological
processes require the interplay of multiple genes, on which we often lack a broad
perspective. In the second part of this thesis, I therefore worked on the
integration of Protein-Protein Interaction (PPI) graphs and omics data, which
bridges this gap and recapitulates these interactions at a system level. First,
I developed a modular and scalable Python package, PyGNA, that enables
robust statistical testing of genesets' topological properties. PyGNA complements
the literature with a tool that can be routinely integrated into automated
bioinformatics pipelines. With PyGNA I processed multiple genesets obtained from
genomics and transcriptomics data. However, topological properties alone have
proven to be insufficient to fully characterise complex phenotypes.
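The statistical testing of a geneset's topological properties can be sketched as a permutation test: compare the geneset's internal connectivity in the PPI graph against same-size random genesets. This is the general idea behind such tests, not PyGNA's actual API; all names here are hypothetical.

```python
import random

def internal_edges(graph, genes):
    """Count edges of the PPI graph with both endpoints in the geneset.
    graph: list of (u, v) edge tuples."""
    gs = set(genes)
    return sum(1 for u, v in graph if u in gs and v in gs)

def connectivity_pvalue(graph, geneset, all_genes, n_perm=1000, seed=0):
    """Empirical p-value: is the geneset more internally connected
    than random genesets of the same size?"""
    rng = random.Random(seed)
    observed = internal_edges(graph, geneset)
    k = len(geneset)
    hits = sum(
        internal_edges(graph, rng.sample(all_genes, k)) >= observed
        for _ in range(n_perm)
    )
    # Add-one correction avoids a p-value of exactly zero
    return (hits + 1) / (n_perm + 1)
```

A tightly connected geneset (e.g. a triangle in a sparse graph) yields a small p-value, flagging a topologically coherent module.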
Therefore, I focused on a model that combines topological and functional
data to detect multiple communities associated with a phenotype. Detecting
cancer-specific submodules is still an open problem, but it has the potential to
elucidate mechanisms detectable only by integrating multi-omics data. Building
on the recent advances in Graph Neural Networks (GNN), I present a supervised
geometric deep learning model that combines GNNs and Stochastic Block Models
(SBM). The model is able to learn multiple graph-aware representations, as
multiple joint SBMs, of the attributed network, accounting for nodes
participating in multiple processes. The simultaneous estimation of structure
and function provides an interpretable picture of how genes interact in specific
conditions and allows the detection of novel putative pathways associated with
cancer.
Predicting potential drugs and drug-drug interactions for drug repositioning
The purpose of drug repositioning is to predict novel indications for existing drugs. It saves time and reduces cost in drug discovery, especially in preclinical procedures. The key challenge in drug repositioning is to identify plausible candidate drugs supported by strong evidence. Recently, benefiting from diverse data types and computational strategies, many methods have been proposed to predict potential drugs.
Signature-based methods use signatures to describe a specific disease condition and match it with drug-induced transcriptomic profiles. For a disease signature, a list of potential drugs is produced based on matching scores. In many studies, the top drugs on the list are identified as potential drugs and verified in various ways. However, there are a few limitations in existing methods: (1) For many diseases, especially cancers, the tissue samples are often heterogeneous and multiple subtypes are involved. It is challenging to identify a signature from such a group of profiles. (2) Genes are treated as independent elements in many methods, while they may associate with each other in the given condition. (3) The disease signatures cannot identify potential drugs for personalized treatments.
In order to address those limitations, I propose three strategies in this dissertation. (1) I employ clustering methods to identify sub-signatures from the heterogeneous dataset, then use a weighting strategy to concatenate them together. (2) I utilize human protein complex (HPC) information to reflect the dependencies among genes and identify an HPC signature to describe a specific type of cancer. (3) I use an HPC strategy to identify signatures for drugs, then predict a list of potential drugs for each patient.
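The signature-matching step described above can be illustrated with a minimal sketch: score each drug by how strongly its induced expression profile reverses the disease signature, then rank. Plain Pearson correlation is used here as a stand-in; the dissertation's actual scoring (and rank-based connectivity scores used in the literature) differ.

```python
import numpy as np

def match_scores(disease_sig, drug_profiles):
    """Rank drugs by anti-correlation between a disease signature and
    drug-induced expression profiles (reversal => candidate drug).

    disease_sig:   (G,) differential-expression signature
    drug_profiles: dict name -> (G,) drug-induced profile
    Returns (name, score) pairs, most negative correlation first.
    """
    scored = []
    for name, prof in drug_profiles.items():
        r = np.corrcoef(disease_sig, prof)[0, 1]
        scored.append((name, r))
    return sorted(scored, key=lambda t: t[1])
```

A drug whose profile is the mirror image of the disease signature scores r = -1 and tops the list, capturing the "reversal" intuition of signature-based repositioning.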
Besides predicting potential drugs directly, further indications are essential to deepen understanding in drug repositioning studies. The interactions between biological and biomedical entities, such as drug-drug interactions (DDIs) and drug-target interactions (DTIs), help to study the mechanisms behind repurposed drugs. Machine learning (ML), and especially deep learning (DL), is at the frontier of predicting such interactions. Network strategies, such as constructing a network from interactions and studying its topological properties, are commonly combined with other methods to make predictions. However, the interactions may have different functions, and merging them into a single network may introduce biases. To address this, I construct two networks for two types of DDIs and employ a graph convolutional network (GCN) model to concatenate them.
In this dissertation, the first chapter introduces background information, objectives of studies, and structure of the dissertation. After that, a comprehensive review is provided in Chapter 2. Biological databases, methods and applications in drug repositioning studies, and evaluation metrics are discussed. I summarize three application scenarios in Chapter 2.
The first method, proposed in Chapter 3, addresses the problem of identifying a cancer gene signature and predicting potential drugs. The k-means clustering method is used to identify highly reliable gene signatures. The identified signature is used to match drug profiles and identify potential drugs for the given disease. The second method, proposed in Chapter 4, uses human protein complex (HPC) information to identify a protein complex signature instead of a gene signature. This strategy improves prediction accuracy in the cancer experiments. Chapter 5 introduces the signature-based method in personalized cancer medicine. The profiles of a given drug are used to identify a drug signature under the HPC strategy. Each patient has a profile, which is matched with the drug signature, so each patient receives a different list of potential drugs. Chapter 6 proposes a multi-kernel graph convolutional network to predict DDIs. This method constructs two DDI kernels and concatenates them in the GCN model. It achieves higher performance in predicting DDIs than three state-of-the-art methods.
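The two-kernel GCN idea of Chapter 6 can be sketched as a single propagation layer: propagate node features over each DDI kernel separately, then concatenate. This is a minimal numpy illustration with untrained weights, not the dissertation's trained model; the symmetric normalization is the standard GCN form.

```python
import numpy as np

def normalize_adj(A):
    """Symmetric GCN normalization: D^-1/2 (A + I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def two_kernel_gcn_layer(A1, A2, X, W1, W2):
    """One layer that propagates node features X over two DDI
    adjacency kernels and concatenates the results.

    A1, A2: (N, N) adjacency matrices for the two DDI types
    X:      (N, F) node features;  W1, W2: (F, H) weight matrices
    Returns an (N, 2H) concatenated representation.
    """
    H1 = np.maximum(normalize_adj(A1) @ X @ W1, 0.0)  # ReLU
    H2 = np.maximum(normalize_adj(A2) @ X @ W2, 0.0)
    return np.concatenate([H1, H2], axis=1)
```

Keeping the two interaction types in separate kernels until the concatenation step is what avoids the single-network bias the text describes.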
In summary, this dissertation has proposed several computational algorithms for drug repositioning. Experimental results show that the proposed methods achieve strong performance.
Quantum materials for energy-efficient neuromorphic computing
Neuromorphic computing approaches are becoming increasingly important as we address
future needs for efficiently processing massive amounts of data. The unique
attributes of quantum materials can help address these needs by enabling new
energy-efficient device concepts that implement neuromorphic ideas at the
hardware level. In particular, strong correlations give rise to highly
non-linear responses, such as conductive phase transitions that can be
harnessed for short and long-term plasticity. Similarly, magnetization dynamics
are strongly non-linear and can be utilized for data classification. This paper
discusses select examples of these approaches, and provides a perspective for
the current opportunities and challenges for assembling quantum-material-based
devices for neuromorphic functionalities into larger emergent complex network
systems.
Boron-Tethered Auxiliaries for Directed C-H functionalisation of Aryl Boronic Esters: Feasibility Studies
Transient interactions between a pinacol boronic ester and a bifunctional template bearing a Lewis basic moiety were investigated to assess the feasibility of performing directed C-H functionalisation on organoboronic esters whilst maintaining the boronic moiety.
Quantitative conversion was detected for alkoxides coordinating to a fluoroaromatic pinacol boronic ester, characterised by 1H, 19F{1H} and 11B NMR spectroscopy; a general upfield shift was observed, indicative of boronate formation.
DFT calculations (B3LYP/6-31G*) showed that steric factors outweigh trends in pKaH, rendering tert-butoxide (47 kcal/mol) more weakly binding than methoxide (63 kcal/mol). Templates were synthesised, and coordination of these bifunctional alkoxide templates to fluoroaromatic pinacol boronic esters was demonstrated.
A high-throughput screening protocol was developed, utilising GC-MS to rapidly assess the components of crude mixtures. No desired product was observed in these studies, but they directed work towards polydentate ligand-substrate binding. A series of N-templated iminodiacetic acid (TIDA) ligands were designed.
Real-time quantum error correction beyond break-even
The ambition of harnessing the quantum for computation is at odds with the
fundamental phenomenon of decoherence. The purpose of quantum error correction
(QEC) is to counteract the natural tendency of a complex system to decohere.
This cooperative process, which requires participation of multiple quantum and
classical components, creates a special type of dissipation that removes the
entropy caused by the errors faster than the rate at which these errors corrupt
the stored quantum information. Previous experimental attempts to engineer such
a process faced an excessive generation of errors that overwhelmed the
error-correcting capability of the process itself. Whether it is practically
possible to utilize QEC for extending quantum coherence thus remains an open
question. We answer it by demonstrating a fully stabilized and error-corrected
logical qubit whose quantum coherence is significantly longer than that of all
the imperfect quantum components involved in the QEC process, beating the best
of them with a net coherence gain. We achieve this
performance by combining innovations in several domains including the
fabrication of superconducting quantum circuits and model-free reinforcement
learning.
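The core QEC principle in this abstract, removing error entropy faster than errors accumulate, can be illustrated with a toy classical example: the 3-qubit repetition code with majority-vote correction. This is only a pedagogical sketch; the paper's superconducting-circuit experiment uses a far more sophisticated quantum encoding and real-time feedback.

```python
import random

def logical_error_rate(p, n_trials=20000, seed=1):
    """Monte Carlo estimate of the logical error rate of a 3-bit
    repetition code under independent bit-flips with probability p.
    Majority-vote correction fails only when 2 or 3 bits flip, so
    the logical rate is 3p^2(1-p) + p^3, below p for small p."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        flips = sum(rng.random() < p for _ in range(3))
        if flips >= 2:  # majority vote picks the wrong value
            failures += 1
    return failures / n_trials
```

For p = 0.1 the logical rate is about 0.028, beating the uncoded rate, which is the classical analogue of the "beyond break-even" gain reported here; the quantum case is much harder because the correction circuitry itself injects errors.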