323 research outputs found
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Computational Approaches to Drug Profiling and Drug-Protein Interactions
Despite substantial increases in R&D spending within the pharmaceutical industry, de novo drug design has become a time-consuming endeavour. High attrition rates have led to a
long period of stagnation in drug approvals. Due to the extreme costs associated with
introducing a drug to the market, locating and understanding the reasons for clinical failure
is key to future productivity. As part of this PhD, three main contributions were made in
this respect. First, the web platform LigNFam enables users to interactively explore
similarity relationships between ‘drug-like’ molecules and the proteins they bind. Secondly,
two deep-learning-based binding site comparison tools were developed, competing with
the state-of-the-art on benchmark datasets. The models can predict off-target interactions and potential candidates for target-based drug repurposing. Finally, the
open-source ScaffoldGraph software was presented for the analysis of hierarchical scaffold
relationships and has already been used in multiple projects, including integration into a
virtual screening pipeline to increase the tractability of ultra-large screening experiments.
Together, and with existing tools, the contributions made will aid in the understanding of
drug-protein relationships, particularly in the fields of off-target prediction and drug
repurposing, helping to design better drugs faster.
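Similarity exploration between drug-like molecules, as offered by platforms of this kind, typically rests on fingerprint comparison. A minimal sketch in plain Python, where the bit sets and the Tanimoto choice are illustrative assumptions rather than LigNFam's actual method:

```python
def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto (Jaccard) coefficient between two fingerprint bit sets."""
    if not fp_a and not fp_b:
        return 0.0
    inter = len(fp_a & fp_b)
    return inter / (len(fp_a) + len(fp_b) - inter)

# Hypothetical fingerprints: sets of "on" bit indices for three molecules.
mol_a = {1, 4, 9, 12, 20}
mol_b = {1, 4, 9, 15, 22}
mol_c = {3, 7, 30}

print(tanimoto(mol_a, mol_b))  # 3 shared of 7 total bits -> ~0.429
print(tanimoto(mol_a, mol_c))  # no shared bits -> 0.0
```

In practice the fingerprints would come from a cheminformatics toolkit, but the pairwise score driving a similarity view is this simple.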
A Review of the Role of Causality in Developing Trustworthy AI Systems
State-of-the-art AI models largely lack an understanding of the cause-effect
relationship that governs human understanding of the real world. Consequently,
these models do not generalize to unseen data, often produce unfair results,
and are difficult to interpret. This has led to efforts to improve the
trustworthiness aspects of AI models. Recently, causal modeling and inference
methods have emerged as powerful tools. This review aims to provide the reader
with an overview of causal methods that have been developed to improve the
trustworthiness of AI models. We hope that our contribution will motivate
future research on causality-based solutions for trustworthy AI.
Comment: 55 pages, 8 figures. Under review
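Among the simplest causal inference methods such a review covers is backdoor adjustment for an observed confounder. A toy sketch on hypothetical count data (all numbers invented for illustration) shows how adjustment recovers an effect that a naive comparison overstates:

```python
# Toy backdoor adjustment: confounder z affects both treatment t and outcome y.
# Rows: (z, t, mean_outcome, count) summarizing a hypothetical observational study.
data = [
    (0, 0, 0.0, 40), (0, 1, 1.0, 10),   # z=0: +1.0 outcome under treatment
    (1, 0, 2.0, 10), (1, 1, 3.0, 40),   # z=1: same +1.0 effect, higher baseline
]

def mean_y(rows):
    return sum(y * n for _, _, y, n in rows) / sum(n for *_, n in rows)

# Naive comparison ignores z: treated units are mostly from the high-baseline stratum.
treated = [r for r in data if r[1] == 1]
control = [r for r in data if r[1] == 0]
naive = mean_y(treated) - mean_y(control)

def adjusted(data):
    """Backdoor adjustment: weight within-stratum effects by P(z)."""
    n_all = sum(n for *_, n in data)
    effect = 0.0
    for z in {r[0] for r in data}:
        stratum = [r for r in data if r[0] == z]
        p_z = sum(n for *_, n in stratum) / n_all
        t1 = [r for r in stratum if r[1] == 1]
        t0 = [r for r in stratum if r[1] == 0]
        effect += p_z * (mean_y(t1) - mean_y(t0))
    return effect

print(naive)           # biased estimate, well above 1.0
print(adjusted(data))  # recovers the true per-stratum effect, 1.0
```

The same stratify-and-reweight idea underlies the more elaborate causal estimators the review surveys.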
Big Data - Supply Chain Management Framework for Forecasting: Data Preprocessing and Machine Learning Techniques
This article intends to systematically identify and comparatively analyze
state-of-the-art supply chain (SC) forecasting strategies and technologies. A
novel framework has been proposed incorporating Big Data Analytics in SC
Management (problem identification, data sources, exploratory data analysis,
machine-learning model training, hyperparameter tuning, performance evaluation,
and optimization), forecasting effects on human-workforce, inventory, and
overall SC. Initially, the article discusses what data to collect according to the
SC strategy and how to collect them. It then discusses the need for
different types of forecasting according to the period or SC objective. The SC
KPIs and the error-measurement systems have been recommended to optimize the
top-performing model. The adverse effects of phantom inventory on forecasting
and the dependence of managerial decisions on the SC KPIs for determining model
performance parameters and improving operations management, transparency, and
planning efficiency have been illustrated. The cyclic connection within the
framework introduces preprocessing optimization based on the post-process KPIs,
optimizing the overall control process (inventory management, workforce
determination, cost, production and capacity planning). The contribution of
this research lies in the proposed standard SC process framework, the
recommended forecasting data analysis, the examination of forecasting effects
on SC performance, the machine-learning algorithm optimization followed, and in
shedding light on future research.
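The performance-evaluation step of such a framework reduces to scoring candidate forecasts with an error measure and keeping the best. A minimal sketch with MAPE and two naive forecasters, on hypothetical weekly demand figures (both the data and the choice of MAPE are illustrative):

```python
def mape(actual, forecast):
    """Mean absolute percentage error; actual values must be non-zero."""
    return 100.0 * sum(abs(a - f) / abs(a)
                       for a, f in zip(actual, forecast)) / len(actual)

history = [100, 120, 110, 130, 125, 140]   # hypothetical weekly demand
actual_next = [135, 150, 145]              # demand observed later

naive_last = [history[-1]] * 3             # repeat the last observation
moving_avg = [sum(history[-3:]) / 3] * 3   # 3-week moving average

print(mape(actual_next, naive_last))
print(mape(actual_next, moving_avg))
```

The KPI with the lower score selects the model to carry forward; in the cyclic framework described above, that score also feeds back into preprocessing choices.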
Behavior quantification as the missing link between fields: Tools for digital psychiatry and their role in the future of neurobiology
The great behavioral heterogeneity observed between individuals with the same
psychiatric disorder and even within one individual over time complicates both
clinical practice and biomedical research. However, modern technologies are an
exciting opportunity to improve behavioral characterization. Data from existing
psychiatry methods that are qualitative or unscalable, such as patient surveys
or clinical interviews, can now be collected at greater capacity and analyzed
to produce new quantitative measures. Furthermore, recent capabilities for
continuous collection of passive sensor streams, such as phone GPS or
smartwatch accelerometer, open avenues of novel questioning that were
previously entirely unrealistic. Their temporally dense nature enables a
cohesive study of real-time neural and behavioral signals.
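Turning a passive sensor stream into a behavioral measure can be very direct. A toy sketch: a crude activity index from smartwatch accelerometer samples, taken as mean deviation of acceleration magnitude from gravity (the samples and the feature itself are illustrative assumptions, not a method from the thesis):

```python
import math

def activity_index(samples):
    """Mean deviation of acceleration magnitude from 1 g, as a crude activity score."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    return sum(abs(m - 1.0) for m in mags) / len(mags)

# Hypothetical (x, y, z) readings in units of g.
resting = [(0.0, 0.0, 1.0)] * 4                    # device lying flat
walking = [(0.3, 0.1, 1.1), (0.4, -0.2, 0.9),
           (-0.3, 0.2, 1.2), (0.2, -0.1, 0.8)]

print(activity_index(resting))   # 0.0: magnitude stays at 1 g
print(activity_index(walking))   # > 0: movement deviates from 1 g
```

Dense streams of such per-window features are what make real-time alignment with neural signals feasible.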
To develop comprehensive neurobiological models of psychiatric disease, it
will be critical to first develop strong methods for behavioral quantification.
There is huge potential in what can theoretically be captured by current
technologies, but this in itself presents a large computational challenge --
one that will necessitate new data processing tools, new machine learning
techniques, and ultimately a shift in how interdisciplinary work is conducted.
In my thesis, I detail research projects that take different perspectives on
digital psychiatry, subsequently tying ideas together with a concluding
discussion on the future of the field. I also provide software infrastructure
where relevant, with extensive documentation.
Major contributions include scientific arguments and proof of concept results
for daily free-form audio journals as an underappreciated psychiatry research
datatype, as well as novel stability theorems and pilot empirical success for a
proposed multi-area recurrent neural network architecture.
Comment: PhD thesis
Mass spectral imaging of clinical samples using deep learning
A better interpretation of tumour heterogeneity and variability is vital for the improvement of novel diagnostic techniques and personalized cancer treatments. Tumour tissue heterogeneity is characterized by biochemical heterogeneity, which can be investigated by unsupervised metabolomics.
Mass Spectrometry Imaging (MSI) combined with Machine Learning techniques has generated increasing interest as an analytical and diagnostic tool for the analysis of spatial molecular patterns in tissue samples. Considering the high complexity of data produced by the application of MSI, which can consist of many thousands of spectral peaks, statistical analysis and in particular machine learning and deep learning have been investigated as novel approaches to deduce the relationships between the measured molecular patterns and the local structural and biological properties of the tissues.
Machine learning methods have historically been divided into two main categories: supervised and unsupervised learning. In MSI, supervised learning methods may be used to segment tissues into histologically relevant areas, e.g. the classification of tissue regions in H&E (Haematoxylin and Eosin) stained samples. Initial classification by an expert histopathologist through visual inspection enables the development of univariate or multivariate models, based on tissue regions that have significantly up/down-regulated ions. However, complex data may result in underdetermined models, and alternative methods that can cope with high dimensionality and noisy data are required.
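The supervised route described above can be sketched at its simplest as a nearest-centroid rule over per-pixel spectra. Everything below (the three-peak "spectra", the labels, the classifier choice) is a toy illustration, not the deep-learning workflow of the thesis:

```python
# Toy supervised classification of MSI pixels: each "spectrum" is a short
# intensity vector; a nearest-centroid rule assigns a tissue class label.
def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(spectrum, centroids):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(spectrum, centroids[label]))

# Hypothetical annotated training spectra (intensities at three peaks).
train = {
    "tumour": [[5.0, 1.0, 0.2], [4.5, 1.2, 0.1]],   # up-regulated first peak
    "normal": [[1.0, 4.8, 0.3], [0.8, 5.1, 0.2]],
}
centroids = {label: centroid(vs) for label, vs in train.items()}

print(classify([4.8, 0.9, 0.2], centroids))  # falls near the tumour centroid
print(classify([0.9, 5.0, 0.1], centroids))
```

Real MSI spectra have thousands of peaks, which is exactly why such simple linear prototypes become underdetermined and deep models are brought in.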
Here, we describe, apply, and test a novel diagnostic procedure built using a combination of MSI and deep learning with the objective of delineating and identifying biochemical differences between cancerous and non-cancerous tissue in metastatic liver cancer and epithelial ovarian cancer. The workflow investigates the robustness of single (1D) to multidimensional (3D) tumour analyses and also highlights possible biomarkers which are not accessible from classical visual analysis of the H&E images. The identification of key molecular markers may provide a deeper understanding of tumour heterogeneity and potential targets for intervention.
Ghost In the Grid: Challenges for Reinforcement Learning in Grid World Environments
The current state-of-the-art deep reinforcement learning techniques require agents to gather large amounts of diverse experience to train effective and general models. In addition, many other factors have to be taken into consideration: for example, how the agent interacts with its environment; parameter optimization techniques; environment exploration methods; and the diversity of environments provided to an agent. In this thesis, we investigate several of these factors. Firstly, we introduce Griddly, a grid-world game engine that provides a state-of-the-art combination of high performance and flexibility. We demonstrate that grid worlds provide a principled and expressive substrate for fundamental research questions in reinforcement learning, whilst filtering out noise inherent in physical systems. We show that although grid worlds are constructed with simple rules-based mechanics, they can be used to construct complex, open-ended, and procedurally generated environments. We improve upon Griddly with GriddlyJS, a web-based tool for designing and testing grid-world environments for reinforcement learning research. GriddlyJS provides a rich suite of features that assist researchers across a multitude of different learning approaches. To highlight the features of GriddlyJS, we present a dataset of 100 complex escape-room puzzle levels. In addition to these puzzle levels, we provide human-generated trajectories and a baseline policy that can be run in a web browser. We show that this tooling enables significantly faster research iteration in many sub-fields. We then explore several areas of RL research made accessible by the features introduced by Griddly. Firstly, we explore learning grid-world game mechanics using deep neural networks. The {\em neural game engine} is introduced, which has competitive sample efficiency and predicts states accurately over long time horizons.
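The "simple rules-based mechanics" of such engines can be made concrete in a few lines. A minimal grid-world step function in plain Python, where the grid, move set, and reward scheme are illustrative and not Griddly's actual API:

```python
# Minimal rules-based grid-world: walls ('#') block movement, 'g' is the goal.
MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def step(grid, pos, action):
    """Apply one move; return (new_pos, reward, done)."""
    dx, dy = MOVES[action]
    x, y = pos[0] + dx, pos[1] + dy
    if not (0 <= y < len(grid) and 0 <= x < len(grid[0])) or grid[y][x] == "#":
        return pos, 0.0, False                      # blocked: stay in place
    return (x, y), (1.0 if grid[y][x] == "g" else 0.0), grid[y][x] == "g"

grid = ["...",
        ".#g",
        "..."]
print(step(grid, (0, 1), "right"))   # bumps into the wall, no movement
print(step(grid, (2, 0), "down"))    # reaches the goal, reward 1.0
```

Because the whole transition function is a lookup plus a bounds check, such environments run fast and stay fully deterministic, which is what makes them a clean substrate for RL experiments.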
Secondly, {\em conditional action trees} are introduced, which describe a method for compactly expressing complex hierarchical action spaces. Expressing hierarchical action spaces as trees leads to action spaces that are additive rather than multiplicative over the factors of the action space. It is shown that these compressed action spaces reduce the required output size of neural networks without compromising performance. This makes the interfaces to complex environments significantly simpler to implement. Finally, we explore the inherent symmetry in common observation spaces using the concept of {\em geometric deep learning}. We show that certain geometric data augmentation methods do not conform to the underlying assumptions in several training algorithms. We provide solutions to these problems in the form of novel regularization functions and demonstrate that these methods restore the underlying assumptions.
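The additive-versus-multiplicative point is plain arithmetic: a flat factored action head needs one output per combination, while a tree of conditional heads needs one output per factor value. A sketch with illustrative factor sizes (the names and numbers are assumptions, not from the thesis):

```python
# Output-size comparison for a factored action space with three factors.
from math import prod

factors = {"unit": 8, "verb": 5, "target": 10}   # hypothetical factor sizes

flat_outputs = prod(factors.values())   # one output per combination: 8*5*10
tree_outputs = sum(factors.values())    # one head per factor: 8+5+10

print(flat_outputs)   # 400
print(tree_outputs)   # 23
```

The gap widens multiplicatively as factors are added, which is why the tree form keeps network output sizes manageable for complex environments.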