Automatic inference of cross-modal connection topologies for X-CNNs
This paper introduces a way to learn cross-modal convolutional neural network
(X-CNN) architectures from a base convolutional network (CNN) and the training
data to reduce the design cost and enable applying cross-modal networks in
sparse data environments. Two approaches for building X-CNNs are presented. The
base approach learns the topology in a data-driven manner, by using
measurements performed on the base CNN and supplied data. The iterative
approach performs further optimisation of the topology through a combined
learning procedure, simultaneously learning the topology and training the
network. The approaches were evaluated against examples of hand-designed X-CNNs
and their base variants, showing superior performance and, in some cases,
gaining an additional 9% in accuracy. From further considerations, we conclude
that the presented methodology takes less time than any manual approach would,
whilst also significantly reducing the design complexity. The application of
the methods is fully automated and implemented in the Xsertion library.
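As a concrete (if simplified) illustration of the base approach, the sketch below scores each input modality with a cheap linear probe and then sizes the cross-modal connections in proportion to those scores. All function and variable names are assumptions made for exposition, not the Xsertion library's actual API.

    # Hypothetical sketch of the data-driven ("base") approach: estimate how
    # informative each modality is, then allocate cross-connection capacity
    # proportionally. Illustrative only; not the Xsertion API.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def score_modalities(X_by_modality, y):
        """Quick linear probe per modality: higher accuracy => more signal."""
        scores = {}
        for name, X in X_by_modality.items():
            clf = LogisticRegression(max_iter=1000)
            # 3-fold CV accuracy on flattened inputs as a cheap informativeness proxy
            scores[name] = cross_val_score(clf, X.reshape(len(X), -1), y, cv=3).mean()
        return scores

    def propose_topology(scores, total_filters=64):
        """Split a filter budget across cross-modal connections by relative score."""
        total = sum(scores.values())
        return {name: max(1, round(total_filters * s / total))
                for name, s in scores.items()}

    # Toy example: two modalities (e.g. luminance vs. chrominance channels)
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 200)
    X_by_modality = {
        "luma": rng.normal(size=(200, 8, 8)) + y[:, None, None],  # informative
        "chroma": rng.normal(size=(200, 8, 8)),                   # pure noise
    }
    print(propose_topology(score_modalities(X_by_modality, y)))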
ChronoMID—Cross-modal neural networks for 3-D temporal medical imaging data
ChronoMID—neural networks for temporally-varying, hence Chrono, Medical Imaging Data—marks a novel application of cross-modal convolutional neural networks (X-CNNs) to the medical domain. In this paper, we present multiple approaches for incorporating temporal information into X-CNNs and compare their performance in a case study on the classification of abnormal bone remodelling in mice. Previous work developing medical models has predominantly focused on either spatial or temporal aspects, but rarely both. Our models seek to unify these complementary sources of information and derive insights in a bottom-up, data-driven manner. As with many medical datasets, the case study herein exhibits deep rather than wide data; we apply various techniques, including extensive regularisation, to account for this. After training on a balanced set of approximately 70,000 images, two of the models—those using difference maps from known reference points—outperformed a state-of-the-art convolutional neural network baseline by over 30 percentage points (> 99% vs. 68.26%) on an unseen, balanced validation set comprising around 20,000 images. These models are expected to perform well with sparse data sets, based on both previous findings with X-CNNs and the representations of time used, which permit arbitrarily large and irregular gaps between data points. Our results highlight the importance of identifying a suitable description of time for a problem domain, as unsuitable descriptors may not only fail to improve a model but may in fact confound it.
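To make the best-performing representation of time concrete, here is a minimal sketch of a difference map computed against a known reference scan; the rate normalisation by the time gap is an illustrative assumption, not the ChronoMID code itself.

    # Minimal sketch: encode time as per-pixel change relative to a reference
    # scan, so arbitrarily large and irregular gaps between scans are handled.
    import numpy as np

    def difference_map(scan, reference, dt):
        """Per-pixel change since the reference scan, normalised per unit time."""
        diff = scan.astype(np.float32) - reference.astype(np.float32)
        return diff / dt if dt > 0 else diff

    # Toy example: a reference slice and a later scan taken 17 days apart
    reference = np.zeros((64, 64), dtype=np.uint8)
    later = np.full((64, 64), 3, dtype=np.uint8)  # uniform remodelling signal
    dmap = difference_map(later, reference, dt=17.0)

    # An X-CNN would then consume (scan, dmap) as two cross-connected streams
    print(dmap.mean())  # average change per day since the reference point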
Scientific Workflows on Clouds with Heterogeneous and Preemptible Instances
Teaching sustainability as complex systems approach: a sustainable development goals workshop
Purpose
Approaches to solving sustainability problems require a specific problem-solving mode, encompassing the complexity, fuzziness and interdisciplinary nature of the problem. This paper aims to promote a complex-systems view of addressing sustainability problems, in particular through the tool of network science, and provides an outline of an interdisciplinary training workshop.
Design/methodology/approach
The topic of the workshop is the analysis of the Sustainable Development Goals (SDGs) as a political action plan. The authors are interested in the synergies and trade-offs between the goals, which are investigated through the structure of the underlying network. The authors use a teaching approach aligned with sustainable education and transformative learning.
Findings
Methodologies from network science are experienced as valuable tools to familiarise students with complexity and to handle the proposed case study.
Originality/value
To the best of the authors’ knowledge, this is the first work which uses network terminology and approaches to teach sustainability problems. This work highlights the potential of network science in sustainability education and contributes to accessible material.
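To give a flavour of the case study, the sketch below builds a toy SDG interaction network and ranks goals by degree centrality; the edges and weights are invented for illustration and are not findings from the workshop.

    # Toy SDG network: positive edges for synergies, negative for trade-offs
    import networkx as nx

    G = nx.Graph()
    G.add_nodes_from(range(1, 18))  # the 17 Sustainable Development Goals

    # Hypothetical interactions (goal_a, goal_b, weight)
    interactions = [(1, 2, +1), (2, 15, -1), (3, 6, +1), (7, 13, +1), (8, 13, -1)]
    for a, b, w in interactions:
        G.add_edge(a, b, weight=w)

    # Degree centrality highlights the goals most entangled with others, a
    # natural entry point for discussing synergies and trade-offs in class
    centrality = nx.degree_centrality(G)
    print(sorted(centrality.items(), key=lambda kv: -kv[1])[:3])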
Adaptive Gaussian processes on graphs via spectral graph wavelets
Graph-based models require aggregating information in the graph from neighbourhoods of different sizes. In particular, when the data exhibit varying levels of smoothness on the graph, a multi-scale approach is required to capture the relevant information. In this work, we propose a Gaussian process model using spectral graph wavelets, which can naturally aggregate neighbourhood information at different scales. Through maximum likelihood optimisation of the model hyperparameters, the wavelets automatically adapt to the different frequencies in the data, and as a result our model goes beyond capturing low frequency information. We achieve scalability to larger graphs by using a spectrum-adaptive polynomial approximation of the filter function, which is designed to yield a low approximation error in dense areas of the graph spectrum. Synthetic and real-world experiments demonstrate the ability of our model to infer scales accurately and produce competitive performance against state-of-the-art models in graph-based learning tasks.
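A minimal sketch of the underlying construction follows, assuming a graph small enough for a full Laplacian eigendecomposition (the paper's polynomial approximation replaces this step for scalability); the band-pass generator g and the fixed scales are illustrative stand-ins for quantities the model adapts.

    # Wavelet-based GP covariance on a small graph (exact eigendecomposition)
    import numpy as np

    def laplacian(A):
        return np.diag(A.sum(1)) - A

    def wavelet_kernel(A, scales, g=lambda x: x * np.exp(-x)):
        """K = sum_s U g(s*Lambda)^2 U^T: band-pass filters at several scales."""
        lam, U = np.linalg.eigh(laplacian(A))
        K = sum(U @ np.diag(g(s * lam) ** 2) @ U.T for s in scales)
        return K + 1e-6 * np.eye(len(A))  # jitter for numerical stability

    # Toy graph: a 4-cycle; in the paper the scales adapt to the data through
    # maximum likelihood optimisation rather than being fixed by hand
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], float)
    K = wavelet_kernel(A, scales=[0.5, 2.0])
    print(np.all(np.linalg.eigvalsh(K) > 0))  # a valid positive-definite kernel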
Graph classification Gaussian processes via spectral features
Graph classification aims to categorise graphs based on their structure and node attributes. In this work, we propose to tackle this task using tools from graph signal processing by deriving spectral features, which we then use to design two variants of Gaussian process models for graph classification. The first variant uses spectral features based on the distribution of energy of a node feature signal over the spectrum of the graph. We show that even such a simple approach, having no learned parameters, can yield competitive performance compared to strong neural network and graph kernel baselines. A second, more sophisticated variant is designed to capture multi-scale and localised patterns in the graph by learning spectral graph wavelet filters, obtaining improved performance on synthetic and real-world data sets. Finally, we show that both models produce well-calibrated uncertainty estimates, enabling reliable decision making based on the model predictions.
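A sketch of the first, parameter-free variant: project a node feature signal onto the Laplacian eigenbasis and histogram its energy across the spectrum; the combinatorial Laplacian and the binning scheme here are assumptions made for illustration.

    # Spectral energy features: energy of a node signal per frequency band
    import numpy as np

    def spectral_energy_features(A, x, n_bins=8):
        L = np.diag(A.sum(1)) - A                  # graph Laplacian
        lam, U = np.linalg.eigh(L)
        x_hat = U.T @ x                            # graph Fourier transform
        energy = x_hat ** 2 / (x_hat ** 2).sum()   # normalised energy spectrum
        bins = np.linspace(lam.min(), lam.max() + 1e-9, n_bins + 1)
        idx = np.digitize(lam, bins) - 1           # frequency band per eigenvalue
        return np.bincount(idx, weights=energy, minlength=n_bins)

    # Toy graph and signal; the fixed-length vector could feed a GP classifier
    A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)
    x = np.array([1.0, -1.0, 0.5])
    print(spectral_energy_features(A, x))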
Now you see me (CME): Concept-based model extraction
Deep Neural Networks (DNNs) have achieved remarkable performance on a range
of tasks. A key step to further empowering DNN-based approaches is improving
their explainability. In this work we present CME: a concept-based model
extraction framework, used for analysing DNN models via concept-based extracted
models. Using two case studies (dSprites and Caltech-UCSD Birds), we demonstrate how CME can be used to (i) analyse the concept information learned by a DNN model, (ii) analyse how a DNN uses this concept information when predicting output labels, and (iii) identify key concept information that can further improve DNN predictive performance (for one of the case studies, we show how model accuracy can be improved by over 14% using only 30% of the available concepts).
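The two-stage idea can be sketched as follows; the probe and label models here are assumptions for exposition, not the CME framework's actual components.

    # Stage 1: predict concepts from hidden activations; Stage 2: predict the
    # label from concepts alone with an interpretable model. Illustrative only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    H = rng.normal(size=(500, 32))           # stand-in for hidden activations
    concepts = (H[:, :3] > 0).astype(int)    # three binary concept annotations
    y = concepts[:, 0] ^ concepts[:, 1]      # label depends only on concepts

    # Stage 1: one probe per concept, mapping activations -> concept value
    probes = [LogisticRegression(max_iter=1000).fit(H, concepts[:, j])
              for j in range(concepts.shape[1])]
    C_hat = np.column_stack([p.predict(H) for p in probes])

    # Stage 2: an interpretable label model over the extracted concepts;
    # inspecting it reveals how concept information drives the predictions
    label_model = DecisionTreeClassifier(max_depth=3).fit(C_hat, y)
    print(label_model.score(C_hat, y))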
Modelling trait-dependent speciation with approximate Bayesian computation
Phylogenetics is the field of modelling the temporal, discrete dynamics of
speciation. Complex models can nowadays be studied using the Approximate
Bayesian Computation (ABC) approach, which avoids likelihood calculations. The field's
progression is hampered by the lack of robust software to estimate the numerous
parameters of the speciation process. In this work we present an R package,
pcmabc, based on Approximate Bayesian Computation, which implements three novel
phylogenetic algorithms for trait-dependent speciation modelling. Our
phylogenetic comparative methodology takes into account both the simulated
traits and phylogeny, attempting to estimate the parameters of the processes
generating the phenotype and the trait. The user is not restricted to a
predefined set of models and can specify a variety of evolutionary and
branching models. We illustrate the software with a simulation-reestimation
study focused around the branching Ornstein-Uhlenbeck process, where the
branching rate depends non-linearly on the value of the driving
Ornstein-Uhlenbeck process. Included in this work is a tutorial on how to use
the software.
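For readers new to the approach, a minimal ABC rejection sketch follows; pcmabc itself is an R package, so this Python toy (kept in the same language as the other examples here) mirrors only the simulate-compare-accept principle, not the package's interface.

    # ABC rejection: draw parameters from the prior, simulate data, and accept
    # parameters whose summary statistics fall close to the observed ones.
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate(theta, n=200):
        """Stand-in simulator; pcmabc instead simulates a tree plus a trait process."""
        return rng.normal(loc=theta, scale=1.0, size=n)

    def summary(x):
        return np.array([x.mean(), x.std()])

    s_obs = summary(simulate(theta=2.0))      # pretend this is observed data

    accepted = []
    for _ in range(5000):
        theta = rng.uniform(-5, 5)            # draw from the prior
        dist = np.linalg.norm(summary(simulate(theta)) - s_obs)
        if dist < 0.2:                        # tolerance epsilon
            accepted.append(theta)            # no likelihood evaluation needed

    print(np.mean(accepted))                  # posterior mean, close to 2.0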