73 research outputs found
Exploiting Spatio-Temporal Coherence for Video Object Detection in Robotics
This paper proposes a method to enhance video object detection in indoor robotic environments. Concretely, it exploits knowledge of the camera motion between frames to propagate previously detected objects to successive frames. The proposal is rooted in the concepts of planar homography, to propose regions of interest in which to find objects, and recursive Bayesian filtering, to integrate observations over time. The proposal is evaluated on six virtual indoor environments, accounting for the detection of nine object classes over a total of ∼7k frames. Results show that our proposal improves recall and F1-score by factors of 1.41 and 1.27, respectively, and achieves a significant reduction (58.8%) of the object categorization entropy when compared to a two-stage video object detection method used as a baseline, at the cost of a small time overhead (120 ms) and a slight precision loss (0.92).
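The ROI-proposal step described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: the homography `H` and the bounding-box values are invented, and real pipelines would estimate `H` from camera motion.

```python
# Propagate a detected bounding box from frame t to frame t+1 through a
# planar homography H (3x3, given as nested lists), producing the region
# of interest to search in the next frame.

def apply_homography(H, x, y):
    """Map a point (x, y) through homography H in homogeneous coordinates."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def propagate_box(H, box):
    """Warp the four corners of box = (x0, y0, x1, y1) and return the
    axis-aligned bounding box of the warped corners (the ROI proposal)."""
    x0, y0, x1, y1 = box
    corners = [(x0, y0), (x1, y0), (x1, y1), (x0, y1)]
    warped = [apply_homography(H, x, y) for (x, y) in corners]
    xs = [p[0] for p in warped]
    ys = [p[1] for p in warped]
    return (min(xs), min(ys), max(xs), max(ys))

# A pure translation by (5, -3): the box shifts accordingly.
H = [[1, 0, 5], [0, 1, -3], [0, 0, 1]]
print(propagate_box(H, (10, 10, 50, 40)))  # → (15.0, 7.0, 55.0, 37.0)
```

Under a general homography the warped corners are no longer axis-aligned, which is why the sketch takes the enclosing box of the four mapped points.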
Improving The Diagnosis And Risk Stratification Of Prostate Cancer
The current diagnostic and stratification pathway for prostate cancer has led to over-diagnosis and over-treatment. This thesis aims to improve the prostate cancer diagnosis pathway by developing a minimally invasive blood test to inform diagnosis alongside mpMRI, and to understand the true Gleason 4 burden, which will help better stratify disease and guide clinicians in treatment planning. To reduce the number of patients who have to undergo prostate biopsy after an indeterminate or false-positive prostate mpMRI, we aimed to develop a new panel of mRNAs detectable in blood or urine that could improve the detection of clinically significant prostate cancer (Gleason 4+3 or ≥6mm) in combination with prostate mpMRI. mRNA expression of 28 candidate genes was studied in four prostate cancer cell lines and, using publicly available datasets, a new seven-gene biomarker panel was developed using machine learning techniques. The signature was then validated in blood and urine samples from the ProMPT, PROMIS and INNOVATE trials. To redefine the classification of Gleason 4 disease in prostate cancer patients, digital pathology was used to contour and accurately assess the burden and spread of Gleason 4 in a cohort of PROMIS patients, compared with the gold-standard manual pathology. There was a significant difference between observed and objective Gleason 4 burden, with implications for patient risk stratification and biomarker discovery. The work presented in this thesis makes a significant step toward improving the patient diagnostic and risk classification pathways by ensuring that only the right patients are biopsied when necessary, improving the current pathological reference standard.
Contributions to evaluation of machine learning models. Applicability domain of classification models
Artificial intelligence (AI) and machine learning (ML) present application opportunities and challenges that can be framed as learning problems. The performance of machine learning models depends on both the algorithms and the data. Learning algorithms build a model of reality by training and testing on data, and their performance reflects the degree of agreement between the assumed model and reality. ML algorithms have been used successfully in numerous classification problems. With the growing popularity of ML models for many purposes in different domains, more formal validation of such predictive models is now required. Traditionally, many studies have addressed model evaluation, robustness, reliability, and the quality of data and data-driven models. However, those studies do not yet consider the concept of the applicability domain (AD); the issue is that the AD is often not well defined, or not defined at all, in many fields. This work investigates the robustness of ML classification models from the applicability domain perspective. A standard definition of the applicability domain is the region of input space in which the model provides results with a specified reliability.
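One common way to operationalise such a definition is a distance-based applicability domain. The following sketch is illustrative only (the thesis's exact AD procedure is not given here): a test point is considered in-domain if its mean distance to its k nearest training points is below a threshold r.

```python
# Distance-based applicability domain check: a query point is "in domain"
# if its mean Euclidean distance to the k nearest training points is <= r.

import math

def mean_knn_distance(x, train, k=3):
    """Mean distance from x to its k nearest neighbours in the training set."""
    dists = sorted(math.dist(x, t) for t in train)
    return sum(dists[:k]) / k

def in_domain(x, train, r, k=3):
    return mean_knn_distance(x, train, k) <= r

train = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(in_domain((0.5, 0.5), train, r=1.0))  # True: inside the training cloud
print(in_domain((5.0, 5.0), train, r=1.0))  # False: far outside it
```

Predictions for out-of-domain points would then be flagged as unreliable rather than reported alongside in-domain results.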
The main aim of this study is to investigate the connection between the applicability domain approach and classification model performance. We examine the usefulness of assessing the AD for a classification model in terms of the reliability, reuse, and robustness of classifiers. The work follows three approaches: first, assessing the applicability domain of the classification model; second, investigating the robustness of the classification model based on the applicability domain approach; and third, selecting an optimal model using Pareto optimality. The experiments consider different machine learning algorithms for binary and multi-class classification on healthcare datasets from public benchmark data repositories. In the first approach, the decision tree (DT) algorithm is used for classification, with a feature selection method applied to choose the features. The resulting classifiers are reused in the third approach for model selection via Pareto optimality. The second approach is implemented in three steps: building the classification model, generating synthetic data, and evaluating the obtained results.
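The robustness check in the second approach can be sketched as follows. This is a hypothetical stand-in, not the thesis's procedure: a toy one-dimensional threshold classifier replaces the decision tree, and "synthetic data" is generated by perturbing test points with noise of magnitude r to see how often predictions stay stable.

```python
# Robustness via synthetic perturbations: perturb each test point by noise
# of magnitude r and measure the fraction of predictions that are unchanged.

import random

def classify(x):
    """Toy 1-D threshold classifier standing in for a trained model."""
    return 1 if x >= 0.5 else 0

def stability(points, r, trials=200, seed=0):
    """Fraction of random perturbations of magnitude <= r that leave the
    predicted class unchanged."""
    rng = random.Random(seed)
    stable = 0
    for _ in range(trials):
        x = rng.choice(points)
        if classify(x) == classify(x + rng.uniform(-r, r)):
            stable += 1
    return stable / trials

points = [0.1, 0.2, 0.45, 0.55, 0.8, 0.9]
# Smaller perturbation radii cannot be less stable than larger ones here.
print(stability(points, r=0.10) >= stability(points, r=0.40))  # → True
```

Points near the decision boundary (0.45, 0.55) flip class under small perturbations, so stability degrades as r grows, mirroring the accuracy-versus-r results reported below.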
The results obtained from the study provide an understanding of how the proposed approach can help define a model's robustness and applicability domain, providing reliable outputs. These approaches open opportunities for classification data and model management. The proposed algorithms are evaluated through a set of experiments on the classification accuracy of instances that fall within the model's domain. For the first approach, considering all features, the highest accuracy obtained is 0.98 with an average threshold of 0.34 for the Breast Cancer dataset; after applying the recursive feature elimination (RFE) method, the accuracy is 0.96 with an average threshold of 0.27. For the robustness of the classification model based on the applicability domain approach, the minimum accuracy is 0.62 for the Indian Liver Patient dataset at r = 0.10, and the maximum accuracy is 0.99 for the Thyroid dataset at r = 0.10. For the selection of an optimal model using Pareto optimality, the optimally selected classifier gives an accuracy of 0.94 with an average threshold of 0.35.
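Pareto-optimal model selection can be sketched as below. The candidate names and scores are invented for illustration; the second objective stands for any quantity to be maximised alongside accuracy, such as applicability-domain coverage.

```python
# Pareto-optimal model selection over two objectives to maximise:
# a model survives if no other model is at least as good on both
# objectives and strictly better on at least one.

def pareto_front(models):
    """models: list of (name, objective1, objective2) tuples."""
    def dominated(m, others):
        return any(o[1] >= m[1] and o[2] >= m[2] and (o[1] > m[1] or o[2] > m[2])
                   for o in others if o is not m)
    return [m for m in models if not dominated(m, models)]

candidates = [
    ("model_a", 0.98, 0.60),  # most accurate, modest coverage
    ("model_b", 0.96, 0.80),  # best coverage
    ("model_c", 0.94, 0.70),  # dominated by model_b on both objectives
]
front = pareto_front(candidates)
print([m[0] for m in front])  # → ['model_a', 'model_b']
```

Neither surviving model dominates the other, so the final choice between them is a trade-off left to the practitioner.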
This research investigates critical aspects of the applicability domain as they relate to the robustness of ML classification algorithms. The performance of machine learning techniques depends on the degree to which the model's predictions are reliable. In the literature, the robustness of an ML model is defined as its ability to keep the testing error close to the training error; such properties also describe the stability of model performance when tested on new datasets. In conclusion, this thesis introduced the concept of the applicability domain for classifiers and tested its use in case studies on health-related public benchmark datasets.
Geometric data understanding: deriving case-specific features
One tradition uses precise geometric modeling, in which uncertainties in the data are treated as noise. Another tradition relies on the statistical nature of vast quantities of data, where geometric regularity is intrinsic to the data and statistical models usually grasp this level only indirectly. This work focuses on point cloud data of natural resources and silhouette recognition from video input, two real-world examples of problems whose geometric content is intangible at the raw-data level.
This content could be discovered and modeled to some degree by machine learning (ML) approaches such as deep learning, but either direct coverage of the geometry in the samples or the addition of a special geometry-invariant layer is necessary. Geometric content is central when there is a need for direct observation of spatial variables, or when one needs a mapping to a geometrically consistent data representation in which, e.g., outliers or noise can be easily discerned.
In this thesis we consider the transformation of original input data into a geometric feature space in two example problems. The first example is the curvature of surfaces, which has met renewed interest since the introduction of ubiquitous point cloud data and the maturation of discrete differential geometry. Curvature spectra can characterize a spatial sample rather well and provide useful features for ML purposes. The second example involves projective methods applied to video stereo-signal analysis in swimming analytics.
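One concrete discrete-curvature notion that such feature spaces can build on is Menger curvature, sketched below for three consecutive points of a planar polyline. This is a generic illustration of the concept, not the thesis's specific estimator.

```python
# Menger curvature of three points: 1/R, where R is the radius of their
# circumscribed circle, computed as 4 * triangle_area / (|ab| * |bc| * |ca|).

import math

def menger_curvature(a, b, c):
    # Twice the signed triangle area via the 2-D cross product, then abs().
    area2 = abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
    ab = math.dist(a, b)
    bc = math.dist(b, c)
    ca = math.dist(c, a)
    if ab * bc * ca == 0:
        return 0.0  # degenerate: coincident points
    return 2 * area2 / (ab * bc * ca)  # = 4 * area / (|ab| * |bc| * |ca|)

# Three points on the unit circle: curvature should be 1 (radius 1).
a, b, c = (1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)
print(menger_curvature(a, b, c))  # ≈ 1.0
```

Sliding this window along a sampled curve yields a curvature sequence whose distribution (a curvature spectrum) can serve directly as an ML feature vector.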
The aim is to find meaningful local geometric representations for feature generation, which also facilitate additional analysis based on a geometric understanding of the model. First, the features are associated directly with some geometric quantity, which makes it easier to express geometric constraints in a natural way, as shown in the thesis. Second, visualization and further feature generation become much easier. Third, the approach provides sound baseline methods for more traditional ML approaches, e.g. neural network methods. Fourth, most ML methods can utilize the geometric features presented in this work as additional features.
Deep learning in food category recognition
Integrating artificial intelligence with food category recognition has been a field of research interest for the past few decades, and it is potentially one of the next steps in revolutionizing human interaction with food. The modern advent of big data and the development of data-oriented fields like deep learning have driven advances in food category recognition. With increasing computational power and ever-larger food datasets, the approach's full potential has yet to be realized. This survey provides an overview of methods that can be applied to various food category recognition tasks, including detecting type, ingredients, quality, and quantity. We survey the core components for constructing a machine learning system for food category recognition, including datasets, data augmentation, hand-crafted feature extraction, and machine learning algorithms. We place particular focus on deep learning, including the use of convolutional neural networks, transfer learning, and semi-supervised learning. We provide an overview of relevant studies to promote further developments in food category recognition for research and industrial applications.
Generative Models for Preprocessing of Hospital Brain Scans
I will in this thesis present novel computational methods for processing routine clinical brain scans. Such scans were originally acquired for qualitative assessment by trained radiologists, and present a number of difficulties for computational models, such as those within common neuroimaging analysis software. The overarching objective of this work is to enable efficient and fully automated analysis of large neuroimaging datasets, of the type currently present in many hospitals worldwide. The methods presented are based on probabilistic, generative models of the observed imaging data, and therefore rely on informative priors and realistic forward models. The first part of the thesis will present a model for image quality improvement, whose key component is a novel prior for multimodal datasets. I will demonstrate its effectiveness for super-resolving thick-sliced clinical MR scans and for denoising CT images and MR-based, multi-parametric mapping acquisitions. I will then show how the same prior can be used for within-subject, intermodal image registration, for more robustly registering large numbers of clinical scans. The second part of the thesis focusses on improved, automatic segmentation and spatial normalisation of routine clinical brain scans. I propose two extensions to a widely used segmentation technique. First, a method for this model to handle missing data, which allows me to predict entirely missing modalities from one, or a few, MR contrasts. Second, a principled way of combining the strengths of probabilistic, generative models with the unprecedented discriminative capability of deep learning. By introducing a convolutional neural network as a Markov random field prior, I can model nonlinear class interactions and learn these using backpropagation. I show that this model is robust to sequence and scanner variability. 
Finally, I show examples of fitting a population-level generative model to various neuroimaging data, which can model, e.g., CT scans with haemorrhagic lesions.
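The generative-modelling principle underlying this work can be illustrated with the simplest possible case: a MAP estimate of a true intensity from one noisy observation under a Gaussian prior. The numbers below are invented, and the thesis's actual priors and forward models are far richer than this scalar toy.

```python
# MAP estimation with a Gaussian likelihood and Gaussian prior:
# observation y = x + noise (variance s2), prior x ~ N(mu, t2).
# The posterior mean is the precision-weighted average of data and prior.

def map_gaussian(y, s2, mu, t2):
    """Closed-form MAP (= posterior mean) for the scalar Gaussian model."""
    return (y / s2 + mu / t2) / (1 / s2 + 1 / t2)

# A noisy voxel observed at 120 with a prior centred at 100: the estimate
# is pulled toward the prior in proportion to the observation noise.
print(map_gaussian(120.0, s2=4.0, mu=100.0, t2=12.0))  # ≈ 115.0
```

Denoising, super-resolution, and missing-modality prediction all follow this pattern at scale: an informative prior regularises an inverse problem defined by a realistic forward model of the acquisition.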
Gaining Insight into Determinants of Physical Activity using Bayesian Network Learning
BNAIC/BeneLearn 202