
    Human Movement Recognition Based on the Stochastic Characterisation of Acceleration Data

    Human activity recognition algorithms based on information obtained from wearable sensors are successfully applied in detecting many basic activities. Activities with time-stationary features are characterised within a predefined temporal window by applying different machine learning algorithms to features extracted from the measured data. Better accuracy, precision and recall levels can be achieved by combining the information from different sensors. However, detecting short and sporadic human movements, gestures and actions is still a challenging task. In this paper, a novel algorithm to detect basic human movements from wearable sensor data is proposed and evaluated. The proposed algorithm is designed to minimise computational requirements while achieving acceptable accuracy levels, based on characterising particular points in the temporal series obtained from a single sensor. The underlying idea is that the algorithm would be implemented in the sensor device itself in order to pre-process the sensed data stream before sending the information to a central point, which combines the information from different sensors to improve accuracy levels. Intra- and inter-person validation is used for two particular cases: single step detection, and fall detection and classification, using a single tri-axial accelerometer. Relevant results for the above cases and pertinent conclusions are also presented.
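    A minimal sketch of the point-characterisation idea described in this abstract: candidate points are located in the acceleration-magnitude series and each is described with a few cheap statistics, suitable for running on the sensor node itself. The use of scipy's find_peaks, the thresholds and the feature set are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy.signal import find_peaks

def candidate_points(acc, fs=50, min_height=12.0):
    """acc: (N, 3) tri-axial acceleration in m/s^2; return candidate indices and features."""
    mag = np.linalg.norm(acc, axis=1)                        # acceleration magnitude series
    # Candidate points: local maxima above a threshold, at least ~0.3 s apart (assumed values).
    peaks, _ = find_peaks(mag, height=min_height, distance=int(0.3 * fs))
    half = int(0.25 * fs)                                    # ~0.25 s of context on each side
    feats = []
    for p in peaks:
        w = mag[max(0, p - half): p + half]
        feats.append([mag[p], w.mean(), w.std(), w.max() - w.min()])  # cheap per-point features
    return peaks, np.asarray(feats)
```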

    Segmentation of intentional human gestures for sports video annotation

    We present results on the recognition of intentional human gestures for video annotation and retrieval. We define a gesture as a particular, repeatable human movement with a predefined meaning. An obvious application of the work is sports video annotation, where umpire gestures indicate specific events. Our approach is to augment video with data obtained from accelerometers worn as wrist bands by one or more officials. We present the recognition performance of a hidden Markov model approach to gesture modeling, for both isolated gestures and gestures segmented from a stream.
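    A minimal sketch of isolated-gesture recognition with one hidden Markov model per gesture, in the spirit of the approach above. It assumes the hmmlearn library; the number of hidden states and the diagonal covariance are illustrative choices, not the paper's configuration.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed library

def train_gesture_models(examples_by_gesture, n_states=5):
    """examples_by_gesture: {label: [(T_i, 3) accelerometer sequences]} -> {label: fitted HMM}."""
    models = {}
    for label, seqs in examples_by_gesture.items():
        X = np.vstack(seqs)                     # stack all sequences for this gesture
        lengths = [len(s) for s in seqs]        # hmmlearn needs per-sequence lengths
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def recognise(segment, models):
    """Assign an isolated segment to the gesture whose model gives the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(segment))
```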

    Hierarchical recognition of intentional human gestures for sports video annotation

    We present a novel technique for the recognition of complex human gestures for video annotation using accelerometers and the hidden Markov model. Our extension to the standard hidden Markov model allows us to consider gestures at different levels of abstraction through a hierarchy of hidden states. Accelerometers in the form of wrist bands are worn by people performing intentional gestures, such as umpires in sports. Video annotation is then performed by populating the video with time stamps indicating significant events, i.e. where a particular gesture occurs. The novelty of the technique lies in the development of a probabilistic hierarchical framework for complex gesture recognition and in the use of accelerometers to extract gestures and significant events for video annotation.
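    A hedged sketch of the hierarchical idea as a two-level scoring scheme: a segment is first matched against coarse family-level HMMs and then against the gesture-level HMMs of the winning family. This only approximates the paper's extension of the HMM state space; the models are assumed to be trained as in the previous sketch.

```python
def recognise_hierarchical(segment, family_models, gesture_models_by_family):
    """family_models: {family: HMM}; gesture_models_by_family: {family: {gesture: HMM}}."""
    # Level 1: pick the coarse gesture family with the best log-likelihood.
    family = max(family_models, key=lambda f: family_models[f].score(segment))
    # Level 2: refine within the winning family only.
    gestures = gesture_models_by_family[family]
    gesture = max(gestures, key=lambda g: gestures[g].score(segment))
    return family, gesture
```

    A detection at sample index i of an fs Hz accelerometer stream would then yield a video time stamp at roughly i / fs seconds, which is how the annotation step described above could be populated.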

    Detecting Steps Walking at very Low Speeds Combining Outlier Detection, Transition Matrices and Autoencoders from Acceleration Patterns

    In this paper, we develop and validate a new algorithm to detect steps while walking at speeds between 30 and 40 steps per minute, based on the data sensed from a single tri-axial accelerometer. The algorithm chains three consecutive phases. First, outlier detection based on the Mahalanobis distance is performed on the sensed data to pre-detect candidate points in the acceleration time series that may contain a ground-contact segment of data while walking. Second, the acceleration segment around each pre-detected point is used to calculate a transition matrix in order to capture the time dependencies. Finally, autoencoders, trained with ground-contact transition matrices computed from acceleration series of labelled steps, are used to reconstruct the transition matrix at each pre-detected point. A similarity index is then used to assess whether the pre-selected point contains a true step in the 30-40 steps per minute speed range. Our experimental results, based on a database from three different participants performing activities similar to the target one, achieve a recall of 0.88 with a precision of 0.50, improving on the results obtained when directly applying the autoencoders to acceleration patterns (recall of 0.77 with precision of 0.50).
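    A minimal sketch of the three-phase pipeline described above, assuming a tri-axial acceleration array of shape (N, 3). The window handling, the number of quantisation states and the use of scikit-learn's MLPRegressor as a stand-in autoencoder are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # stand-in autoencoder (assumption)

def mahalanobis_outliers(acc, threshold=3.0):
    """Phase 1: indices whose Mahalanobis distance to the global mean exceeds a threshold."""
    mu = acc.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(acc, rowvar=False))
    d = np.sqrt(np.einsum("ij,jk,ik->i", acc - mu, cov_inv, acc - mu))
    return np.where(d > threshold)[0]

def transition_matrix(segment, n_states=8):
    """Phase 2: quantise the acceleration magnitude and count state-to-state transitions."""
    mag = np.linalg.norm(segment, axis=1)
    states = np.digitize(mag, np.linspace(mag.min(), mag.max(), n_states - 1))
    T = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        T[a, b] += 1
    return T / max(T.sum(), 1.0)                 # normalised transition matrix

# Phase 3: an autoencoder trained only on transition matrices from labelled steps;
# a low reconstruction error at a candidate point suggests a true step.
autoencoder = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000)
# autoencoder.fit(step_matrices.reshape(len(step_matrices), -1),
#                 step_matrices.reshape(len(step_matrices), -1))   # step_matrices: labelled data
```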

    Time-Elastic Generative Model for Acceleration Time Series in Human Activity Recognition

    Body-worn sensors in general, and accelerometers in particular, have been widely used to detect human movements and activities. The execution of each type of movement by each particular individual generates sequences of time series of sensed data from which specific movement-related patterns can be assessed. Several machine learning algorithms have been used over windowed segments of sensed data in order to detect such patterns in activity recognition, based on intermediate features (either hand-crafted or automatically learned from data). The underlying assumption is that the computed features will capture statistical differences that can properly classify different movements and activities after a training phase based on sensed data. In order to achieve high accuracy and recall rates (and guarantee the generalization of the system to new users), the training data have to contain enough information to characterize all possible ways of executing the activity or movement to be detected. This could imply large amounts of data and a complex and time-consuming training phase, which has been shown to be even more relevant when the optimal features are learned automatically. In this paper, we present a novel generative model that is able to generate sequences of time series characterizing a particular movement, based on the time-elasticity properties of the sensed data. The model is used to train a stack of auto-encoders in order to learn the particular features able to detect human movements. The results of movement detection using a newly generated database with information from five users performing six different movements are presented. The generalization of results using an existing database is also presented. The results show that the proposed mechanism is able to obtain acceptable recognition rates (F = 0.77) even when different people execute a different sequence of movements using different hardware.
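    A minimal sketch of time-elastic augmentation in the spirit of the generative model above: new training sequences are produced by locally stretching and compressing a labelled acceleration series. The piecewise-linear warping scheme and its parameters are illustrative assumptions, not the paper's model.

```python
import numpy as np

def time_elastic_warp(seq, n_knots=4, max_stretch=0.2, seed=None):
    """Resample a (T, 3) acceleration sequence along a smoothly warped time axis."""
    rng = np.random.default_rng(seed)
    T = len(seq)
    # Random local "speeds" around 1.0 decide how much each chunk of time is stretched.
    speeds = 1.0 + rng.uniform(-max_stretch, max_stretch, n_knots)
    knot_pos = np.concatenate(([0.0], np.cumsum(speeds)))
    knot_pos = knot_pos / knot_pos[-1] * (T - 1)              # warped positions of the knots
    orig_knots = np.linspace(0, T - 1, n_knots + 1)
    warped_t = np.interp(np.arange(T), orig_knots, knot_pos)  # fractional sample times
    return np.column_stack([np.interp(warped_t, np.arange(T), seq[:, k]) for k in range(3)])
```

    Repeatedly applying such a warp to each labelled example would yield the enlarged training set on which the stacked auto-encoders could then be trained.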

    Automatic detection of traffic lights, street crossings and urban roundabouts combining outlier detection and deep learning classification techniques based on GPS traces while driving

    The automatic generation of street networks is attracting the attention of research and industry communities in areas such as routable map generation. This paper presents a novel mechanism that focuses on the automatic detection of street elements such as traffic lights, street crossings and roundabouts, which could be used to generate street maps and populate them with traffic-influencing infrastructural elements such as traffic lights. In order to minimize the system requirements and simplify the data collection from many users with minimal impact on them, only traces of GPS data from a mobile device while driving are used. Speed and acceleration time series are derived from the GPS data. An outlier detection algorithm is used first in order to detect abnormal driving locations (which can be due to infrastructural elements or particular traffic conditions). Using deep learning, speed and acceleration patterns are automatically analyzed at each outlier in order to extract relevant features, which are then classified into a traffic light, street crossing, urban roundabout or other element. The classification results are enhanced by adding the degree of atypicity for each point, calculated as the percentage of times that a particular location is detected as an outlier over several drives. The proposed algorithm achieves a combined recall of 0.89 and a combined precision of 0.88 for classification. The research leading to these results has received funding from the “HERMES-SMART DRIVER” project TIN2013-46801-C4-2-R (MINECO), funded by the Spanish Agencia Estatal de Investigación (AEI), and the “ANALYTICS USING SENSOR DATA FOR FLATCITY” project TIN2016-77158-C4-1-R (MINECO/ERDF, EU), funded by the Spanish Agencia Estatal de Investigación (AEI) and the European Regional Development Fund (ERDF).
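    A minimal sketch of the first stage described above: speed and acceleration series are derived from raw GPS fixes and abnormal driving locations are flagged. The haversine helper, the z-score outlier rule and the threshold are illustrative assumptions, not the paper's method.

```python
import numpy as np

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between WGS84 points (vectorised)."""
    R = 6371000.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi, dlmb = np.radians(lat2 - lat1), np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * R * np.arcsin(np.sqrt(a))

def abnormal_points(lat, lon, t, z_thresh=2.5):
    """Indices of GPS fixes where speed or acceleration deviates strongly from the trace's norm."""
    dist = haversine_m(lat[:-1], lon[:-1], lat[1:], lon[1:])
    dt = np.diff(t)
    speed = dist / dt                            # m/s between consecutive fixes
    accel = np.diff(speed) / dt[1:]              # m/s^2
    feats = np.column_stack([speed[1:], accel])
    z = np.abs((feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-9))
    return np.where((z > z_thresh).any(axis=1))[0] + 2  # shift back to original fix indices
```

    Segments of the speed and acceleration series around the flagged fixes would then be passed to the deep-learning classifier, together with the per-location atypicity rate accumulated over several drives.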

    A study on virtual reality and developing the experience in a gaming simulation

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Masters by Research. Virtual Reality (VR) is an experience in which a person is given the freedom to view and move within a virtual world [1]. The experience is not constrained to a limited set of controls; here, it is triggered interactively by the user’s physical movement [1] [2]. The user therefore feels as if they are seeing the real world; in addition, 3D technologies allow the viewer to experience the volume of an object and its perspective in the virtual world [1]. The human brain generates depth when each eye receives the image from its own point of view. In the course of learning and developing the project using the university’s facilities, some of the core parts of the research were accomplished, such as designing the VR motion controller and the VR HMD (Head-Mounted Display) using an open-source microcontroller. The VR HMD together with the VR controller gives an immersive feel and a complete VR system [2]. The motive was to demonstrate a working model that creates a VR experience on a mobile platform. In particular, the VR system uses a micro-electro-mechanical system (MEMS) to track motion without a tracking camera. A VR experience has also been developed as a gaming simulation. To produce this, Maya, Unity, Motion Analysis System, MotionBuilder, Arduino and programming have been used. The lessons and code taken or adapted from [33], [44], [25] and [45] have been studied and implemented.

    Perception of tactile vibrations and a putative neuronal code

    We devised a delayed comparison task, appropriate for humans and rats, in which subjects discriminate between pairs of vibrations delivered either to their whiskers, in rats, or fingertips, in humans, with a delay inserted between the two stimuli. Stimuli were composed of a random time series of velocity values (“noise”) taken from a Gaussian distribution with zero mean and a standard deviation referred to as σ1 for the first stimulus and σ2 for the second stimulus. The subject must select a response depending on the two vibrations’ relative standard deviations, σ1 > σ2 or σ1 < σ2. In the standard condition, the base and comparison stimuli both had a duration of 400 ms and were separated by an 800 ms pause. In this condition, humans performed better than rats on average, yet the best rats were better than the worst humans. To learn how signals are integrated over time, we varied the duration of the second stimulus. In rats, performance progressively improved when the comparison stimulus duration increased from 200 to 400 and then to 600 ms. In humans, the effect of comparison stimulus duration was different: an increase in duration did not improve their performance but biased their choice. Stimuli of longer duration were perceived as having a larger value of σ. We employed a novel psychophysical reverse correlation method to find out which kinematic features of the stochastic stimulus influenced the choices of the subjects. This analysis revealed that rats rely principally on features related to velocity and speed values normalized by stimulus duration, that is, the rate of velocity and speed features per unit time. In contrast, while human subjects used velocity- and speed-related features, they tended to be influenced by the summated values of those features over time. The summation strategy in humans versus the rate strategy in rats accounts for both (i) the lack of improvement in humans for greater stimulus durations and (ii) the bias by which they judged longer stimuli as having a greater value of σ. Next, we focused on the capacity of rats to accomplish a task of parametric working memory, a capacity until now not found in rodents. For delays between the base and comparison stimuli of up to 6-10 seconds, humans and rats showed similar performance. However, when the difference in σ was small, the rats’ performance began to decay over long inter-stimulus delays more markedly than did the humans’ performance. The next chapter reports the analyses of the activity of barrel cortex neurons during the vibration comparison task. 35% of sampled neuron clusters showed a significant change in firing rate as σ varied, and the change was positive in every case: the slope of firing rate versus σ was positive. We used methods related to signal detection theory to estimate the behavioral performance that could be supported by single neuron clusters and found that the resulting “neurometric” curve was much less steep than the psychometric curve (the performance of the whole rat). This led to the notion that stimuli are encoded by larger populations. A general linear model (GLM) that combined multiple simultaneously recorded clusters performed much better than single clusters and began to approach animal performance.
    We conclude that a potential code for the stimulus is the variation in firing rate according to σ, distributed across large populations. In conclusion, this thesis characterizes the perceptual capacities of humans and rats in a novel working memory task. Both humans and rats can extract the statistical structure of a “noisy” tactile vibration, but they appear to integrate the signals through different operations. A major finding is that rats are endowed with a capacity to hold stimulus parameters in working memory with a proficiency that, until now, could be ascribed only to primates. The statistical properties of the stimulus appear to be encoded by a distributed population.
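    A hedged sketch of the population-decoding step described above, using a logistic-regression GLM to predict the choice from the trial-by-trial firing rates of several simultaneously recorded clusters. The surrogate data, shapes and settings are illustrative assumptions, not the thesis' analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_clusters = 200, 12
# Surrogate trial-by-trial firing rates and choices, standing in for the recorded data.
rates = rng.poisson(5.0, size=(n_trials, n_clusters)).astype(float)
choice = (rates.mean(axis=1) + rng.normal(0.0, 1.0, n_trials) > 5.0).astype(int)

glm = LogisticRegression(max_iter=1000)          # logistic GLM over the cluster firing rates
accuracy = cross_val_score(glm, rates, choice, cv=5).mean()
print(f"cross-validated decoding accuracy from {n_clusters} clusters: {accuracy:.2f}")
```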