DropIn: Making Reservoir Computing Neural Networks Robust to Missing Inputs by Dropout
The paper presents a novel, principled approach to training recurrent neural
networks from the Reservoir Computing family that are robust to missing input
features at prediction time. By building on the ensembling properties
of Dropout regularization, we propose a methodology, named DropIn, which
efficiently trains a neural model as a committee machine of subnetworks, each
capable of predicting with a subset of the original input features. We discuss
the application of the DropIn methodology in the context of Reservoir Computing
models and targeting applications characterized by input sources that are
unreliable or prone to be disconnected, such as in pervasive wireless sensor
networks and ambient intelligence. We provide an experimental assessment using
real-world data from such application domains, showing how the DropIn
methodology maintains predictive performance comparable to that of a model
with no missing features, even when 20%-50% of the inputs are unavailable.
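The core idea of DropIn can be illustrated with a minimal sketch: randomly mask input features during training (Dropout applied to the input layer) so the trained model tolerates missing inputs at prediction time. The linear readout, feature count, and learning rate below are illustrative assumptions, not the paper's actual reservoir model.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropin_mask(n_features, keep_prob, rng):
    """Sample a binary mask over input features, i.e. Dropout
    applied to the input layer ("DropIn")."""
    return (rng.random(n_features) < keep_prob).astype(float)

# Toy data: linear target over 10 features (shapes are illustrative).
X = rng.normal(size=(500, 10))
w_true = rng.normal(size=10)
y = X @ w_true

# Train a simple linear readout with SGD, dropping a random subset of
# inputs on every step so the model learns to cope with missing features.
w = np.zeros(10)
for epoch in range(200):
    for i in range(len(X)):
        m = dropin_mask(10, keep_prob=0.7, rng=rng)
        x = X[i] * m / 0.7          # inverted-dropout scaling
        err = x @ w - y[i]
        w -= 0.01 * err * x

# At prediction time, simulate ~30% of the inputs being unavailable.
mask = dropin_mask(10, keep_prob=0.7, rng=rng)
pred = (X * mask) @ w
```

Because every training step sees a different feature subset, the model behaves like a committee of subnetworks, each specialized for a subset of inputs, which is the ensembling effect the abstract refers to.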
Behavioural pattern identification and prediction in intelligent environments
In this paper, the application of soft computing techniques to the prediction of an occupant's behaviour in an inhabited intelligent environment is addressed. This research studies the daily activities of elderly people with dementia who live in their own homes. Occupancy sensors are used to extract the movement patterns of the occupant. The occupancy data is then converted into temporal sequences of activities, which are in turn used to predict the occupant's behaviour. To build the prediction model, different dynamic recurrent neural networks are investigated. Recurrent neural networks have shown a strong ability to capture the temporal relationships in input patterns. The experimental results show that a non-linear autoregressive network with exogenous inputs (NARX) model correctly extracts the long-term prediction patterns of the occupant and outperforms the Elman network. The results presented here are validated using data generated from a simulator and from real environments.
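The preprocessing step this abstract describes, turning a sensor-derived activity sequence into inputs for a NARX-style predictor, amounts to building a tapped delay line: each training example holds the previous few activities, and the target is the next one. A minimal sketch, with hypothetical activity codes and window length:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical activity codes derived from occupancy sensors
# (e.g. 0=bedroom, 1=kitchen, 2=living room, 3=bathroom) -- illustrative only.
seq = rng.integers(0, 4, size=300)

def lagged_features(seq, n_lags):
    """Build a tapped-delay-line design matrix: each row holds the
    previous n_lags activities, and the target is the next activity.
    This sliding window is what feeds a NARX-style predictor."""
    X = np.array([seq[i:i + n_lags] for i in range(len(seq) - n_lags)])
    y = seq[n_lags:]
    return X, y

X, y = lagged_features(seq, n_lags=5)
```

Any sequence model (NARX, Elman, or even a static classifier) can then be trained on `(X, y)`; the abstract's comparison is over which model best exploits these temporal dependencies.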
Towards Deep Learning Models for Psychological State Prediction using Smartphone Data: Challenges and Opportunities
There is an increasing interest in exploiting mobile sensing technologies and
machine learning techniques for mental health monitoring and intervention.
Researchers have effectively used contextual information, such as mobility,
communication and mobile phone usage patterns for quantifying individuals' mood
and wellbeing. In this paper, we investigate the effectiveness of neural
network models for predicting users' level of stress by using the location
information collected by smartphones. We characterize the mobility patterns of
individuals using the GPS metrics presented in the literature and employ these
metrics as input to the network. We evaluate our approach on the open-source
StudentLife dataset. Moreover, we discuss the challenges and trade-offs
involved in building machine learning models for digital mental health and
highlight potential future work in this direction.
Comment: 6 pages, 2 figures. In Proceedings of the NIPS Workshop on Machine
Learning for Healthcare 2017 (ML4H 2017), co-located with NIPS 2017
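The mobility metrics this abstract mentions are typically simple statistics computed from a GPS trace. A sketch of two common ones, location variance and total distance covered; the synthetic trace and the equirectangular distance approximation are illustrative assumptions, not the paper's exact feature set:

```python
import numpy as np

# Toy GPS trace of (lat, lon) pairs in degrees; values are synthetic.
rng = np.random.default_rng(2)
trace = np.cumsum(rng.normal(scale=1e-4, size=(100, 2)), axis=0) + [51.5, -0.1]

def location_variance(trace):
    """Log of the combined latitude/longitude variance, a mobility
    metric commonly used in the mobile-sensing literature."""
    return np.log(trace[:, 0].var() + trace[:, 1].var() + 1e-12)

def total_distance(trace):
    """Approximate distance covered in metres, using an equirectangular
    approximation (adequate for short hops; not a full haversine)."""
    dlat = np.diff(trace[:, 0])
    dlon = np.diff(trace[:, 1]) * np.cos(np.radians(trace[:-1, 0]))
    return 111_320.0 * np.sum(np.hypot(dlat, dlon))

features = np.array([location_variance(trace), total_distance(trace)])
```

Metrics like these, computed per day or per sensing window, form the fixed-length input vector that a neural stress predictor consumes.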
Automated Architecture Design for Deep Neural Networks
Machine learning has made tremendous progress in recent years and received
large amounts of public attention. Though we are still far from designing a
full artificially intelligent agent, machine learning has brought us many
applications in which computers solve human learning tasks remarkably well.
Much of this progress comes from a recent trend within machine learning, called
deep learning. Deep learning models are responsible for many state-of-the-art
applications of machine learning. Despite their success, deep learning models
are hard to train, difficult to understand, and often so complex that
training is only possible on very large GPU clusters. Much work has
been done on enabling neural networks to learn efficiently. However, the design
and architecture of such neural networks is often done manually through trial
and error and expert knowledge. This thesis examines different approaches,
existing and novel, to automating the design of deep feedforward neural
networks, in an attempt to create less complex models with good performance
that take away the burden of deciding on an architecture and make it more
efficient to design and train such deep networks.
Comment: Undergraduate Thesis
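The simplest baseline for the automated architecture design the thesis surveys is random search over a space of feedforward architectures. A hedged sketch, where the width choices, depth bound, and the stand-in `evaluate` objective are all hypothetical placeholders for a real train-and-validate loop:

```python
import random

random.seed(3)

def sample_architecture(max_depth=4, widths=(16, 32, 64, 128)):
    """Sample a feedforward architecture as a list of hidden-layer
    widths -- the search space here is purely illustrative."""
    depth = random.randint(1, max_depth)
    return [random.choice(widths) for _ in range(depth)]

def random_search(evaluate, n_trials=20):
    """Keep the architecture with the best score under `evaluate`,
    which stands in for training and validating on real data."""
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = sample_architecture()
        score = evaluate(arch)
        if score > best_score:
            best, best_score = arch, score
    return best, best_score

# Dummy objective: prefer moderate total capacity (illustrative only).
best, score = random_search(lambda a: -abs(sum(a) - 150))
```

More sophisticated strategies (evolutionary methods, Bayesian optimization, reinforcement learning) replace the sampling and selection steps but keep this same propose-evaluate loop.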