Using accelerometer, high sample rate GPS and magnetometer data to develop a cattle movement and behaviour model
The study described in this paper developed a model of animal movement, which explicitly recognised each individual as the central unit of measure. The model was developed by learning from a real dataset that measured and calculated, for individual cows in a herd, their linear and angular positions and directional and angular speeds. Two learning algorithms were implemented: a Hidden Markov model (HMM) and a long-term prediction algorithm. It is shown that an HMM can be used to describe the animal's movement and state-transition behaviour within several “stay” areas where cows remained for long periods. Model parameters were estimated for hidden behaviour states such as relocating, foraging and bedding. For cows’ movement between the “stay” areas, a long-term prediction algorithm was implemented. By combining these two algorithms it was possible to develop a successful model whose output closely matched the collected animal behaviour data. This modelling methodology could easily be applied to the interactions of other animal species.
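The hidden-state formulation above lends itself to a standard HMM decoding sketch. The three behaviour states are taken from the abstract, but every transition, emission, and initial probability below is an illustrative placeholder, not a parameter estimated by the study:

```python
import numpy as np

# Illustrative 3-state HMM over cattle behaviour states; the state names
# come from the abstract, but all probabilities are made-up placeholders.
states = ["relocating", "foraging", "bedding"]
# Transition matrix: rows = current state, columns = next state.
A = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.7, 0.1],
              [0.1, 0.2, 0.7]])
# Emission matrix over discretised speed observations: slow, medium, fast.
B = np.array([[0.1, 0.3, 0.6],    # relocating -> mostly fast
              [0.3, 0.6, 0.1],    # foraging   -> mostly medium
              [0.8, 0.15, 0.05]]) # bedding    -> mostly slow
pi = np.array([0.3, 0.4, 0.3])    # initial state distribution

def viterbi(obs):
    """Most likely hidden behaviour sequence for observation indices."""
    T = len(obs)
    delta = pi * B[:, obs[0]]
    back = np.zeros((T, len(states)), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] * A * B[:, obs[t]]
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0)
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [states[s] for s in reversed(path)]

# Three slow frames, two medium, one fast: the decoder infers a bedding
# period, then foraging, then a relocation.
print(viterbi([0, 0, 0, 1, 1, 2]))
```

In a real system the matrices would be fitted to the measured position and speed data rather than set by hand.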
Articulatory and bottleneck features for speaker-independent ASR of dysarthric speech
The rapid population aging has stimulated the development of assistive
devices that provide personalized medical support to those suffering from
various etiologies. One prominent clinical application is a computer-assisted
speech training system which enables personalized speech therapy to patients
impaired by communicative disorders in the patient's home environment. Such a
system relies on the robust automatic speech recognition (ASR) technology to be
able to provide accurate articulation feedback. With the long-term aim of
developing off-the-shelf ASR systems that can be incorporated in clinical
context without prior speaker information, we compare the ASR performance of
speaker-independent bottleneck and articulatory features on dysarthric speech
used in conjunction with dedicated neural network-based acoustic models that
have been shown to be robust against spectrotemporal deviations. We report ASR
performance of these systems on two dysarthric speech datasets of different
characteristics to quantify the achieved performance gains. Despite the
remaining performance gap between the dysarthric and normal speech, significant
improvements have been reported on both datasets using speaker-independent ASR
architectures.
Comment: to appear in Computer Speech & Language -
https://doi.org/10.1016/j.csl.2019.05.002 - arXiv admin note: substantial
text overlap with arXiv:1807.1094
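As a rough illustration of what a "bottleneck" feature is, the sketch below passes a synthetic acoustic frame through a tiny MLP whose narrow hidden layer supplies a compact representation; the layer sizes and random weights are placeholders, not the paper's trained architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch of bottleneck feature extraction: the activation of a narrow
# hidden layer serves as a compact feature vector for downstream acoustic
# modelling. Weights are random placeholders; a real system trains them.
def relu(x):
    return np.maximum(0.0, x)

W1 = rng.normal(size=(40, 128))   # input: assumed 40-dim filterbank frame
W2 = rng.normal(size=(128, 13))   # narrow "bottleneck" layer: 13 units
W3 = rng.normal(size=(13, 256))   # network widens again after the bottleneck

def bottleneck_features(frame):
    h = relu(frame @ W1)
    return relu(h @ W2)           # the 13-dim bottleneck activation

frame = rng.normal(size=40)       # one synthetic acoustic frame
feat = bottleneck_features(frame)
print(feat.shape)                 # (13,)
```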
Efficient spike-sorting of multi-state neurons using inter-spike intervals information
We demonstrate the efficacy of a new spike-sorting method based on a Markov
Chain Monte Carlo (MCMC) algorithm by applying it to real data recorded from
Purkinje cells (PCs) in young rat cerebellar slices. This algorithm is unique
in its capability to estimate and make use of the firing statistics as well as
the spike amplitude dynamics of the recorded neurons. PCs exhibit multiple
discharge states, giving rise to multimodal interspike interval (ISI)
histograms and to correlations between successive ISIs. The amplitude of the
spikes generated by a PC in an "active" state decreases, a feature typical of
many neurons from both vertebrates and invertebrates. These two features
constitute a major and recurrent problem for all the presently available
spike-sorting methods. We first show that a Hidden Markov Model with 3
log-Normal states provides a flexible and satisfying description of the complex
firing of single PCs. We then incorporate this model into our previous MCMC
based spike-sorting algorithm (Pouzat et al, 2004, J. Neurophys. 91, 2910-2928)
and test this new algorithm on multi-unit recordings of bursting PCs. We show
that our method successfully classifies the bursty spike trains fired by PCs by
using an independent single unit recording from a patch-clamp pipette.
Comment: 25 pages, to be published in Journal of Neuroscience Methods
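The multi-state firing description can be sketched generatively: a Markov chain over discharge states, each state emitting log-Normally distributed ISIs, which yields the multimodal ISI histograms the abstract mentions. All parameters below are illustrative assumptions, not values fitted to Purkinje cell data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3-state discharge model with log-Normal ISI emissions;
# every number here is a placeholder, not a fitted parameter.
A = np.array([[0.90, 0.08, 0.02],   # state-transition matrix
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])
mu    = np.array([np.log(0.005), np.log(0.05), np.log(0.5)])  # log-median ISI (s)
sigma = np.array([0.3, 0.3, 0.3])                             # log-sd per state

def simulate_isis(n, state=0):
    """Draw n ISIs from the hidden Markov chain of discharge states."""
    isis, states = [], []
    for _ in range(n):
        isis.append(rng.lognormal(mu[state], sigma[state]))
        states.append(state)
        state = rng.choice(3, p=A[state])
    return np.array(isis), np.array(states)

isis, states = simulate_isis(5000)
# Pooling across states produces a multimodal ISI histogram: per-state
# medians sit near exp(mu), i.e. roughly 5 ms, 50 ms and 500 ms here.
for s in range(3):
    print(s, np.median(isis[states == s]))
```

The slow state persistence (0.9 self-transition) also induces the correlations between successive ISIs that defeat sorters assuming independent spikes.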
Fast human motion prediction for human-robot collaboration with wearable interfaces
In this paper, we aim at improving human motion prediction during human-robot
collaboration in industrial facilities by exploiting contributions from both
physical and physiological signals. Improved human-machine collaboration could
prove useful in several areas, while it is crucial for interacting robots to
understand human movement as soon as possible to avoid accidents and injuries.
From this perspective, we propose a novel human-robot interface capable of
anticipating the user's intention during reaching movements on a workbench
in order to plan the action of a collaborative robot. The proposed
interface can find many applications in the Industry 4.0 framework, where
autonomous and collaborative robots will be an essential part of innovative
facilities. Two prediction levels, motion intention and motion direction,
have been developed to improve detection speed and accuracy. A Gaussian
Mixture Model (GMM) has been trained with IMU and EMG data following an
evidence accumulation approach to predict reaching direction. Novel dynamic
stopping criteria have been proposed to flexibly adjust the trade-off between
early anticipation and accuracy according to the application. The output of the
two predictors has been used as external inputs to a Finite State Machine (FSM)
to control the behaviour of a physical robot according to the user's action or
inaction. Results show that our system outperforms previous methods, achieving
a real-time classification accuracy of after
from movement onset
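The evidence-accumulation idea with a dynamic stopping rule can be sketched in a stripped-down form. This is not the paper's GMM/IMU/EMG pipeline: the two "directions", their one-dimensional Gaussian models, and the threshold below are all hypothetical placeholders chosen only to show the speed/accuracy trade-off:

```python
import math

# Minimal sketch of evidence accumulation with a dynamic stopping rule.
# Two hypothetical reaching directions are modelled as 1-D Gaussians over
# a streaming feature; means, sds and the threshold are illustrative only.
CLASSES = {"left": (-1.0, 0.5), "right": (1.0, 0.5)}  # (mean, sd)

def log_pdf(x, mu, sd):
    return -0.5 * math.log(2 * math.pi * sd * sd) - (x - mu) ** 2 / (2 * sd * sd)

def accumulate(stream, threshold=20.0, max_samples=50):
    """Accumulate log-evidence per class; decide once the best class leads
    the runner-up by `threshold` nats. Lowering the threshold gives earlier
    but less reliable decisions, which is the trade-off tuned in practice."""
    ev = {c: 0.0 for c in CLASSES}
    for t, x in enumerate(stream, start=1):
        for c, (mu, sd) in CLASSES.items():
            ev[c] += log_pdf(x, mu, sd)
        ranked = sorted(ev, key=ev.get, reverse=True)
        if ev[ranked[0]] - ev[ranked[1]] >= threshold:
            return ranked[0], t          # early decision
        if t >= max_samples:
            break
    return max(ev, key=ev.get), t        # forced decision at timeout

# Samples drifting toward the "right" prototype trigger an early stop
# before the stream is exhausted.
decision, n_used = accumulate([0.8, 1.1, 0.9, 1.2, 1.0, 0.7, 1.1])
print(decision, n_used)
```

A decision emitted this way could then drive a finite state machine, as in the interface described above.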