Seeing into Darkness: Scotopic Visual Recognition
Images are formed by counting how many photons traveling from a given set of
directions hit an image sensor during a given time interval. When photons are
few and far between, the concept of 'image' breaks down and it is best to
consider directly the flow of photons. Computer vision in this regime, which we
call 'scotopic', is radically different from the classical image-based paradigm
in that visual computations (classification, control, search) have to take
place while the stream of photons is captured and decisions may be taken as
soon as enough information is available. The scotopic regime is important for
biomedical imaging, security, astronomy and many other fields. Here we develop
a framework that allows a machine to classify objects with as few photons as
possible, while maintaining the error rate below an acceptable threshold. A
dynamic and asymptotically optimal speed-accuracy tradeoff is a key feature of
this framework. We propose and study an algorithm to optimize the tradeoff of a
convolutional network directly from low-light images and evaluate it on simulated
images from standard datasets. Surprisingly, scotopic systems can achieve
classification performance comparable to that of traditional vision systems while
using less than 0.1% of the photons in a conventional image. In addition, we
demonstrate that our algorithms work even when the illuminance of the
environment is unknown and varying. Last, we outline a spiking neural network
coupled with photon-counting sensors as a power-efficient hardware realization
of scotopic algorithms.
Comment: 23 pages, 6 figures
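The decide-while-counting idea can be sketched as a sequential probability ratio test over Poisson photon counts. This is an illustrative toy, not the paper's convolutional model: the two-class per-pixel rates, the 4-pixel "image", and the decision threshold below are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-pixel photon arrival rates for two object classes
# (assumption: a 4-pixel sensor with Poisson photon counts per interval).
rates = {0: np.array([0.2, 0.8, 0.8, 0.2]),
         1: np.array([0.8, 0.2, 0.2, 0.8])}

def classify_stream(true_class, threshold=5.0, max_steps=1000):
    """Accumulate photons and declare a class as soon as the Poisson
    log-likelihood ratio between the two classes crosses +/- threshold."""
    counts = np.zeros(4)
    llr = 0.0
    for t in range(1, max_steps + 1):
        counts += rng.poisson(rates[true_class])  # photons this interval
        # LLR of class 1 vs class 0 after t intervals:
        # sum_i counts_i * log(r1_i / r0_i) - t * sum_i (r1_i - r0_i)
        llr = float((counts * np.log(rates[1] / rates[0])).sum()
                    - t * (rates[1] - rates[0]).sum())
        if llr >= threshold:
            return 1, t
        if llr <= -threshold:
            return 0, t
    return int(llr > 0), max_steps  # forced decision at the time budget
```

Raising `threshold` spends more photons (longer exposure) to buy a lower error rate, which is the speed-accuracy tradeoff the framework optimizes.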
Multiscale Machine Learning and Numerical Investigation of Ageing in Infrastructures
Infrastructure is a critical component of a country’s economic growth. Interaction with extreme service environments can adversely affect the long-term performance of infrastructure and accelerate ageing. This research focuses on using machine learning to improve the efficiency of analysing the multiscale ageing impact on infrastructure.
First, a data-driven campaign is developed to analyse the condition of an ageing infrastructure. A machine learning-based framework is proposed to predict the state of various assets across a railway system.
The ageing of the bond in fibre-reinforced polymer (FRP)-strengthened concrete elements is investigated using machine learning. Different machine learning models are developed to characterise the long-term performance of the bond.
The environmental ageing of composite materials is investigated by a micromechanics-based machine learning model. A mathematical framework is developed to automatically generate microstructures. The microstructures are analysed by the finite element (FE) method. The generated data is used to develop a machine learning model to study the degradation of the transverse performance of composites under humid conditions.
Finally, a multiscale FE and machine learning framework is developed to expand the understanding of composite material ageing. A moisture diffusion analysis is performed to simulate the water uptake of composites under water immersion conditions. The results are downscaled to obtain micromodel stress fields. Numerical homogenisation is used to obtain the composite transverse behaviour. A machine learning model is developed based on the multiscale simulation results to model the ageing process of composites under water immersion.
The frameworks developed in this thesis demonstrate how machine learning improves the analysis of ageing across multiple scales of infrastructure. The resulting understanding can help develop more efficient strategies for the rehabilitation of ageing infrastructure.
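As a rough illustration of the final step, the sketch below fits a surrogate mapping moisture content to transverse stiffness retention. All data are synthetic stand-ins for the multiscale FE results: the Fickian uptake curve and the 0.3 degradation coefficient are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for multiscale FE results (assumption: Fickian
# moisture uptake drives a linear loss of transverse stiffness).
t = rng.uniform(0.0, 100.0, 200)             # immersion time [days]
moisture = 1.0 - np.exp(-0.05 * t)           # normalised water uptake
stiffness = 1.0 - 0.3 * moisture + rng.normal(0.0, 0.01, 200)

# Surrogate: a polynomial in moisture content plays the role of the
# machine-learning model trained on the homogenised FE data.
coeffs = np.polyfit(moisture, stiffness, deg=2)
surrogate = np.poly1d(coeffs)

print(round(float(surrogate(0.5)), 3))  # predicted retention at 50% uptake
```

Once trained, the surrogate replaces the expensive moisture-diffusion plus micromodel simulation when sweeping over immersion conditions.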
DropIn: Making Reservoir Computing Neural Networks Robust to Missing Inputs by Dropout
The paper presents a novel, principled approach to train recurrent neural
networks from the Reservoir Computing family that are robust to missing part of
the input features at prediction time. By building on the ensembling properties
of Dropout regularization, we propose a methodology, named DropIn, which
efficiently trains a neural model as a committee machine of subnetworks, each
capable of predicting with a subset of the original input features. We discuss
the application of the DropIn methodology in the context of Reservoir Computing
models and targeting applications characterized by input sources that are
unreliable or prone to be disconnected, such as in pervasive wireless sensor
networks and ambient intelligence. We provide an experimental assessment using
real-world data from such application domains, showing how the DropIn
methodology makes it possible to maintain predictive performance comparable
to that of a model without missing features, even when 20-50% of the inputs
are not available.
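A minimal sketch of the DropIn idea, assuming a toy linear model in place of the reservoir: each training step drops a random subset of input features (rescaling the survivors), so the learned weights behave like a committee over feature subsets and still predict when an input is zeroed out at test time. The data, dropout rate, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the target is a linear function of 4 input features.
X = rng.normal(size=(500, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0])

w = np.zeros(4)
lr = 0.01
for epoch in range(50):
    for xi, yi in zip(X, y):
        # DropIn-style mask: drop each input with prob 0.3, rescaling
        # the survivors so the expected input is unchanged.
        mask = (rng.random(4) > 0.3) / 0.7
        xm = xi * mask
        err = w @ xm - yi
        w -= lr * err * xm          # SGD step on the masked input

# At prediction time a disconnected sensor is simply zeroed out; the
# model still produces a usable estimate from the remaining inputs.
x_test = np.array([1.0, 1.0, 1.0, 1.0])
x_missing = x_test.copy()
x_missing[2] = 0.0                  # sensor 2 unavailable
full, partial = w @ x_test, w @ x_missing
```

Because every implicit subnetwork sees only a subset of inputs during training, zeroing a missing feature degrades the estimate gracefully instead of breaking it.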
Distributed classifier based on genetically engineered bacterial cell cultures
We describe a conceptual design of a distributed classifier formed by a
population of genetically engineered microbial cells. The central idea is to
create a complex classifier from a population of weak or simple classifiers. We
create a master population of cells with randomized synthetic biosensor
circuits that have a broad range of sensitivities towards chemical signals of
interest that form the input vectors subject to classification. The randomized
sensitivities are achieved by constructing a library of synthetic gene circuits
with randomized control sequences (e.g. ribosome-binding sites) in the front
element. The training procedure consists of re-shaping the master population
in such a way that it collectively responds to the "positive" patterns of input
signals by producing above-threshold output (e.g. fluorescent signal), and
below-threshold output in case of the "negative" patterns. The population
re-shaping is achieved by presenting sequential examples and pruning the
population using either graded selection/counterselection or by
fluorescence-activated cell sorting (FACS). We demonstrate the feasibility of
experimental implementation of such a system computationally, using a realistic
model of the synthetic sensing gene circuits.
Comment: 31 pages, 9 figures
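The selection/counterselection loop can be simulated in a few lines. The population below is a stand-in for the cell library, with uniformly randomized sensitivity thresholds (playing the role of randomized ribosome-binding sites) and a one-dimensional chemical signal; all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Population of "cells": each fluoresces when the chemical signal
# exceeds its randomized sensitivity threshold.
n_cells = 2000
pop = rng.uniform(0.0, 1.0, n_cells)

def population_output(population, signal):
    # Collective output: fraction of cells firing above threshold.
    return float((signal > population).mean())

# Training examples: signals above 0.6 are the "positive" patterns.
examples = rng.uniform(0.0, 1.0, 50)
labels = examples > 0.6

# Pruning (selection/counterselection): discard cells that fire on a
# negative example or stay silent on a positive one.
for x, positive in zip(examples, labels):
    fires = x > pop
    keep = fires if positive else ~fires
    if keep.sum() > 20:             # avoid collapsing the population
        pop = pop[keep]
```

After pruning, the surviving cells cluster near the decision boundary, so the population's collective fluorescence is high for positive signals and low for negative ones.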
A Novel Deep Reinforcement Learning (DRL) Algorithm to Apply Artificial Intelligence-Based Maintenance in Electrolysers
Hydrogen provides a clean source of energy that can be produced with the aid of electrolysers.
For electrolysers to operate cost-effectively and safely, it is necessary to define an appropriate
maintenance strategy. Predictive maintenance is one such strategy but often relies on data from
sensors which can also become faulty, resulting in false information. Consequently, maintenance
will not be performed at the right time and failure will occur. To address this problem, the artificial
intelligence concept is applied to make predictions on sensor readings based on data obtained from
another instrument within the process. In this study, a novel algorithm is developed using Deep
Reinforcement Learning (DRL) to select the best feature(s) among measured data of the electrolyser,
which can best predict the target sensor data for predictive maintenance. The features are used as
input into a type of deep neural network called long short-term memory (LSTM) to make predictions.
The DRL algorithm developed here has been compared with those reported in the literature within the scope of this study.
The results are the best among them: the correlation coefficient with the target variable is nearly perfect (0.99),
and the root-mean-square error (RMSE) between the experimental sensor data and the predicted variable is only 0.1351.
This research was funded by the Spanish Government, grant (1) Ref: PID2020-116616RBC31
and grant (2) Ref: RED2022-134588-T REDGENERA
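The feature-selection step can be caricatured as a bandit-style learner: each action picks a candidate feature, the reward is the negative RMSE of a predictor built on that feature alone, and the learner converges to the most informative input. A linear fit stands in for the paper's LSTM predictor, and the synthetic process data are an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic process data (assumption): the target sensor correlates
# strongly with feature 2 and only weakly with the others.
n = 400
X = rng.normal(size=(n, 4))
target = 2.0 * X[:, 2] + 0.1 * rng.normal(size=n)

def reward_of_feature(j):
    """Fit the target from feature j alone and return the negative RMSE.
    (A linear fit stands in for the LSTM predictor of the paper.)"""
    a, b = np.polyfit(X[:, j], target, 1)
    pred = a * X[:, j] + b
    return -float(np.sqrt(np.mean((pred - target) ** 2)))

# Epsilon-greedy bandit: a minimal stand-in for the DRL feature selector.
q = np.zeros(4)                      # estimated reward per feature
counts = np.zeros(4)
for step in range(200):
    j = int(rng.integers(4)) if rng.random() < 0.2 else int(np.argmax(q))
    r = reward_of_feature(j)         # lower RMSE -> higher reward
    counts[j] += 1
    q[j] += (r - q[j]) / counts[j]   # incremental mean update

best = int(np.argmax(q))             # feature chosen for the predictor
```

The selected feature would then feed the downstream sequence model that reconstructs the faulty sensor's readings.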