Fast human motion prediction for human-robot collaboration with wearable interfaces
In this paper, we aim at improving human motion prediction during human-robot
collaboration in industrial facilities by exploiting contributions from both
physical and physiological signals. Improved human-machine collaboration could
prove useful in several areas, and it is crucial for interacting robots to
understand human movement as soon as possible to avoid accidents and injuries.
With this in mind, we propose a novel human-robot interface capable of
anticipating the user's intention while performing reaching movements on a work
bench, in order to plan the action of a collaborative robot. The proposed
interface can find many applications in the Industry 4.0 framework, where
autonomous and collaborative robots will be an essential part of innovative
facilities. Two prediction levels, one for motion intention and one for motion
direction, have been developed to improve detection speed and accuracy. A Gaussian
Mixture Model (GMM) has been trained with IMU and EMG data following an
evidence accumulation approach to predict reaching direction. Novel dynamic
stopping criteria have been proposed to flexibly adjust the trade-off between
early anticipation and accuracy according to the application. The output of the
two predictors has been used as external inputs to a Finite State Machine (FSM)
to control the behaviour of a physical robot according to the user's action or
inaction. Results show that our system outperforms previous methods, achieving
high real-time classification accuracy shortly after movement onset.
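As a rough illustration of the evidence-accumulation scheme described above, the sketch below trains one scikit-learn GaussianMixture per reaching direction, sums per-sample log-likelihoods online, and stops as soon as the leading class outruns the runner-up by a margin. The feature dimensionality, margin value, and synthetic data are all assumptions for illustration; this does not reproduce the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
N_DIRECTIONS, N_FEATURES = 4, 6  # e.g. fused IMU + EMG features (assumed)

# Train one GMM per reaching direction on synthetic training features.
gmms = []
for d in range(N_DIRECTIONS):
    X = rng.normal(loc=d, scale=1.0, size=(200, N_FEATURES))
    gmms.append(GaussianMixture(n_components=2, random_state=0).fit(X))

def predict_direction(stream, margin=10.0, max_samples=50):
    """Accumulate per-class log-likelihood over incoming samples and
    stop early once the best class leads the runner-up by `margin`."""
    evidence = np.zeros(N_DIRECTIONS)
    n_seen = 0
    for x in stream[:max_samples]:
        evidence += [g.score_samples(x[None, :])[0] for g in gmms]
        n_seen += 1
        top2 = np.sort(evidence)[-2:]
        if top2[1] - top2[0] >= margin:  # dynamic stopping criterion
            break
    return int(np.argmax(evidence)), n_seen

# Simulate a movement toward direction 2 and classify it online.
test_stream = rng.normal(loc=2, scale=1.0, size=(50, N_FEATURES))
label, n_used = predict_direction(test_stream)
print(label, n_used)
```

Raising `margin` trades anticipation speed for accuracy, which is the trade-off the dynamic stopping criteria in the abstract are designed to tune.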
Probability density estimation of photometric redshifts based on machine learning
Photometric redshifts (photo-z's) provide an alternative way to estimate the
distances of large samples of galaxies and are therefore crucial to a large
variety of cosmological problems. Among the various methods proposed over the
years, supervised machine learning (ML) methods, capable of interpolating the
knowledge gained from spectroscopic data, have proven to be very
effective. METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric
Redshifts) is a novel method designed to provide a reliable Probability
Density Function (PDF) of the error distribution of photometric redshifts predicted
by ML methods. The method is implemented as a modular workflow, whose internal
engine for photo-z estimation makes use of the MLPQNA neural network (Multi
Layer Perceptron with Quasi Newton learning rule), with the possibility to
easily replace the specific machine learning model chosen to predict photo-z's.
After a short description of the software, we present a summary of results on
public galaxy data (Sloan Digital Sky Survey - Data Release 9) and a comparison
with a completely different method based on Spectral Energy Distribution (SED)
template fitting. Comment: 2016 IEEE Symposium Series on Computational Intelligence, SSCI 2016
METAPHOR: Probability density estimation for machine learning based photometric redshifts
We present METAPHOR (Machine-learning Estimation Tool for Accurate
PHOtometric Redshifts), a method able to provide a reliable PDF for photometric
galaxy redshifts estimated through empirical techniques. METAPHOR is a modular
workflow, mainly based on the MLPQNA neural network as internal engine to
derive photometric galaxy redshifts, but giving the possibility to easily
replace MLPQNA with any other method to predict photo-z's and their PDF. We
present here the results of a validation test of the workflow on the
galaxies from SDSS-DR9, showing also the universality of the method by
replacing MLPQNA with KNN and Random Forest models. The validation test also
includes a comparison with the PDFs derived from a traditional SED template
fitting method (Le Phare). Comment: proceedings of the International Astronomical Union, IAU-325
symposium, Cambridge University Press
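One way to realize a modular, model-agnostic photo-z PDF of the kind METAPHOR aims at is to perturb each object's photometry and histogram the regressor's predictions on the perturbed copies. The sketch below illustrates this idea under loud assumptions: scikit-learn's MLPRegressor stands in for MLPQNA, and the data, noise level, and binning are synthetic, not the paper's pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_train, n_bands = 2000, 5

# Synthetic, standardized photometric features and a redshift that
# depends on them; real survey magnitudes would need similar rescaling.
X = rng.normal(0.0, 1.0, size=(n_train, n_bands))
z = 0.5 + 0.05 * X.sum(axis=1) + rng.normal(0, 0.02, n_train)

# Any regressor can be swapped in here, mirroring METAPHOR's modularity.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                     random_state=0).fit(X, z)

def photoz_pdf(features, sigma=0.1, n_perturb=300,
               bins=np.linspace(0.0, 1.0, 41)):
    """Build a photo-z PDF by perturbing the input photometry and
    binning the regressor's predictions on the perturbed copies."""
    perturbed = features + rng.normal(0, sigma,
                                      size=(n_perturb, features.size))
    preds = model.predict(perturbed)
    hist, _ = np.histogram(preds, bins=bins, density=True)
    return hist, preds.mean()

pdf, z_mean = photoz_pdf(X[0])
print(round(z_mean, 3))
```

Because the PDF machinery only calls `model.predict`, replacing the neural network with KNN or a Random Forest, as the validation test above does, requires no other change.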
A Critical Review of Stall Control Techniques in Industrial Fans
This paper reviews modelling and interpretation advances of industrial fan stall phenomena, related stall detection methods, and control technologies. Competing theories have helped engineers refine fan stability and control technology. With the development of these theories, three major issues have emerged. In this paper, we first consider the interplay between aerodynamic perturbations and instability inception. An understanding of the key physical phenomena that occur with stall inception is critical to alleviating stall by design or through active or passive control methods. We then review the use of passive and active control strategies to improve fan stability. Whilst historically compressor design engineers have used passive control techniques, recent technologies have prompted them to install high-response stall detection and control systems that provide industrial fan designers with new insight into how they may detect and control stall. Finally, the paper reviews the methods and prospects for early stall detection to complement control systems with a warning capability. Engineers may use an effective real-time stall warning system to extend a fan's operating range by allowing it to operate safely at a reduced stall margin. This may also enable the fan to operate in service at a more efficient point on its characteristic.
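By way of illustration only (this toy is not drawn from the paper), one simple family of real-time stall warning indicators tracks the revolution-to-revolution correlation of a casing pressure signal: healthy operation is near-periodic, while the aperiodic disturbances that often precede stall depress the correlation below a threshold. The signal shape, window length, and threshold below are invented assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
SAMPLES_PER_REV = 100  # assumed sampling: 100 points per rotor revolution

def rev_correlation(signal):
    """Pearson correlation of each rotor revolution with the previous one."""
    revs = signal.reshape(-1, SAMPLES_PER_REV)
    return np.array([np.corrcoef(revs[i], revs[i - 1])[0, 1]
                     for i in range(1, len(revs))])

# Healthy operation: near-periodic blade-passing signal plus small noise.
t = np.arange(30 * SAMPLES_PER_REV)
healthy = (np.sin(2 * np.pi * 10 * t / SAMPLES_PER_REV)
           + 0.05 * rng.normal(size=t.size))

# Pre-stall: an aperiodic disturbance grows in the last revolutions.
prestall = healthy.copy()
prestall[-5 * SAMPLES_PER_REV:] += rng.normal(size=5 * SAMPLES_PER_REV)

# A correlation dropping below a chosen threshold would raise the warning.
healthy_corr = rev_correlation(healthy)
prestall_corr = rev_correlation(prestall)
print(round(healthy_corr.min(), 2), round(prestall_corr[-3:].max(), 2))
```

In a deployed system the threshold would be calibrated against the acceptable reduced stall margin mentioned in the review.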
Anomaly detection in Astrophysics: a comparison between unsupervised Deep and Machine Learning on KiDS data
Every field of Science is undergoing unprecedented changes in the discovery
process, and Astronomy has been a main player in this transition since the
beginning. The ongoing and future large and complex multi-messenger sky surveys
demand the extensive use of robust and efficient automated methods to classify
the observed structures and to detect and characterize peculiar and unexpected
sources. We performed a preliminary experiment on KiDS DR4 data by applying
two different unsupervised machine learning algorithms, considered potentially
promising for detecting peculiar sources, to the problem of anomaly detection:
a Disentangled Convolutional Autoencoder and an Unsupervised Random
Forest. The former method, working directly on images, is considered
potentially able to identify peculiar objects like interacting galaxies and
gravitational lenses. The latter, working instead on catalogue data, could
identify objects with unusual values of magnitudes and colours, which in turn
could indicate the presence of singularities. Comment: Preprint version of the manuscript to appear in the Volume
"Intelligent Astrophysics" of the series "Emergence, Complexity and
Computation", Book eds. I. Zelinka, D. Baron, M. Brescia, Springer Nature
Switzerland, ISSN: 2194-728
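To make the catalogue-based branch concrete, the sketch below scores a synthetic magnitude/colour table for anomalies. scikit-learn's IsolationForest is used purely as a stand-in for the paper's Unsupervised Random Forest, and the catalogue and injected outliers are fabricated for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic catalogue: magnitudes in four bands plus derived colours.
mags = rng.normal(loc=[20.0, 19.5, 19.2, 19.0], scale=0.3, size=(1000, 4))
colours = np.diff(mags, axis=1)          # adjacent-band colour indices
catalogue = np.hstack([mags, colours])

# Inject a handful of objects with strongly unusual colours.
outliers = catalogue[:5].copy()
outliers[:, 4:] += 3.0                   # shift all three colours
data = np.vstack([catalogue, outliers])  # injected rows are 1000..1004

forest = IsolationForest(random_state=0).fit(data)
scores = forest.score_samples(data)      # lower score = more anomalous

# The injected objects should rank among the most anomalous.
suspects = np.argsort(scores)[:10]
print(sorted(suspects.tolist()))
```

In practice the lowest-scoring objects would be passed to visual inspection, since an unusual colour may signal either a genuine peculiar source or a photometric artefact.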
Statistical analysis of probability density functions for photometric redshifts through the KiDS-ESO-DR3 galaxies
Despite the high accuracy of photometric redshifts (zphot) derived using
Machine Learning (ML) methods, the quantification of errors through reliable
and accurate Probability Density Functions (PDFs) is still an open problem.
First, it is difficult to accurately assess the contribution from different
sources of error, namely those internal to the method itself and those arising
from the photometric features defining the available parameter space. Second,
defining a robust statistical method, always able to quantify and qualify the
validity of the PDF estimation, is still an open issue. We present a
comparison among PDFs obtained using three different methods on the same data
set: two ML techniques, METAPHOR (Machine-learning Estimation Tool for Accurate
PHOtometric Redshifts) and ANNz2, plus the spectral energy distribution
template fitting method, BPZ. The photometric data were extracted from the KiDS
(Kilo Degree Survey) ESO Data Release 3, while the spectroscopy was obtained
from the GAMA (Galaxy and Mass Assembly) Data Release 2. The statistical
evaluation of both individual and stacked PDFs was done through quantitative
and qualitative estimators, including a dummy PDF, useful to verify whether
different statistical estimators can correctly assess PDF quality. We conclude
that, in order to quantify the reliability and accuracy of any zphot PDF
method, a combined set of statistical estimators is required. Comment: Accepted for publication by MNRAS, 20 pages, 14 figures
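One widely used quantitative estimator for photo-z PDFs, which may or may not coincide with the exact set used above, is the probability integral transform (PIT): for well-calibrated PDFs, the CDF evaluated at the true redshift is uniform on [0, 1]. The sketch below demonstrates this on synthetic Gaussian PDFs; all data and the noise model are assumptions.

```python
import numpy as np
from scipy.stats import norm, kstest

rng = np.random.default_rng(0)
n = 2000

# True redshifts and well-calibrated Gaussian PDFs around noisy estimates.
z_true = rng.uniform(0.1, 1.0, n)
sigma = 0.05
z_est = z_true + rng.normal(0, sigma, n)

# PIT value = CDF of each object's PDF evaluated at the true redshift;
# for calibrated PDFs these values are uniform on [0, 1].
pit = norm.cdf(z_true, loc=z_est, scale=sigma)

# A one-sample KS test against the uniform law quantifies calibration.
stat, pvalue = kstest(pit, "uniform")
print(round(stat, 3))
```

A single estimator like this can be fooled (e.g. by the dummy PDF mentioned above), which is exactly why the abstract argues for a combined set of statistical estimators.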
Effect of lower limb exoskeleton on the modulation of neural activity and gait classification
Neurorehabilitation with robotic devices requires a paradigm shift to enhance human-robot interaction. The coupling of robot assisted gait training (RAGT) with a brain-machine interface (BMI) represents an important step in this direction but requires better elucidation of the effect of RAGT on the user's neural modulation. Here, we investigated how different exoskeleton walking modes modify brain and muscular activity during exoskeleton assisted gait. We recorded electroencephalographic (EEG) and electromyographic (EMG) activity from ten able-bodied volunteers walking with an exoskeleton with three modes of user assistance (i.e., transparent, adaptive and full assistance) and during free overground gait. Results showed that exoskeleton walking (irrespective of the exoskeleton mode) induces a stronger modulation of central mid-line mu (8-13 Hz) and low-beta (14-20 Hz) rhythms compared to free overground walking. These modifications are accompanied by a significant re-organization of the EMG patterns in exoskeleton walking. On the other hand, we observed no significant differences in neural activity during exoskeleton walking with the different assistance levels. We subsequently implemented four gait classifiers based on deep neural networks trained on the EEG data during the different walking conditions. Our hypothesis was that exoskeleton modes could impact the creation of a BMI-driven RAGT. We demonstrated that all classifiers achieved an average accuracy of 84.13 ± 3.49% in classifying swing and stance phases on their respective datasets. In addition, we demonstrated that the classifier trained on the transparent mode exoskeleton data can classify gait phases during adaptive and full modes with an accuracy of 78.3 ± 4.8%, while the classifier trained on free overground walking data fails to classify the gait during exoskeleton walking (accuracy of 59.4 ± 11.8%).
These findings provide important insights into the effect of robotic training on neural activity and contribute to the advancement of BMI technology for improving robotic gait rehabilitation therapy.
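The cross-condition evaluation idea above (train a gait-phase classifier in one walking condition, test it in others) can be caricatured with synthetic data: a classifier transfers to a mildly shifted condition but fails on a strongly shifted one. The "band-power" features, shift magnitudes, and small network below are assumptions, not the paper's EEG pipeline or results.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_condition(shift, n=600):
    """Two-class (swing/stance) band-power-like features; `shift`
    moves the whole condition's feature distribution."""
    y = rng.integers(0, 2, n)
    X = rng.normal(loc=y[:, None] * 1.5 + shift, scale=1.0, size=(n, 4))
    return X, y

X_tr, y_tr = make_condition(shift=0.0)    # training walking condition
X_sim, y_sim = make_condition(shift=0.2)  # similar assistance mode
X_far, y_far = make_condition(shift=3.0)  # very different condition

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)

acc_similar = clf.score(X_sim, y_sim)     # transfers to a nearby mode
acc_far = clf.score(X_far, y_far)         # transfer breaks down
print(round(acc_similar, 2), round(acc_far, 2))
```

The same asymmetry, good transfer between exoskeleton modes but poor transfer from free walking, is what the study reports, consistent with the shift in neural features between the two regimes.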