
    IMU-based Modularized Wearable Device for Human Motion Classification

    Human motion analysis is used in many different fields and applications. Existing systems typically focus on either a single limb or a single class of movements, are designed for use in indoor controlled environments, and require good technical know-how to operate. To improve mobility, a less restrictive, modularized, and simple Inertial Measurement Unit (IMU) based system is proposed whose modules can be worn separately or combined. This allows the user to measure individual limb movements separately and to monitor whole-body movements over a prolonged period at any given time, without being restricted to a controlled environment. For proper analysis, data is conditioned and pre-processed through five possible stages, namely power-based, clustering-index-based, Kalman filtering, distance-measure-based, and PCA-based dimension reduction. Different combinations of these stages are analyzed using machine learning algorithms for selected case studies, namely hand gesture recognition and environment- and shoe-parameter-based walking pattern analysis, to validate the performance of the proposed wearable device and multi-stage algorithms. The results of the case studies show that distance-measure-based and PCA-based dimension reduction significantly improve human motion identification accuracy. This is further improved with the introduction of the Kalman filter. An LSTM neural network is proposed as an alternative classifier, and the results indicate that it is a robust classifier for human motion recognition. As the results indicate, the proposed wearable device architecture and multi-stage algorithms are capable of distinguishing between subtle human limb movements, making the system a viable tool for human motion analysis. Comment: 10 pages, 12 figures, 28 references
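    The abstract names the preprocessing stages but gives no implementation details. A minimal sketch of two of them, assuming a scalar per-channel Kalman filter and a plain SVD-based PCA (all parameter values here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def kalman_smooth(signal, process_var=1e-3, meas_var=1e-1):
    """Scalar Kalman filter: smooths one IMU channel sample by sample."""
    x, p = float(signal[0]), 1.0   # state estimate and its variance
    out = np.empty(len(signal))
    for i, z in enumerate(signal):
        p += process_var           # predict: constant-state model, variance grows
        k = p / (p + meas_var)     # Kalman gain
        x += k * (z - x)           # update with measurement z
        p *= (1.0 - k)
        out[i] = x
    return out

def pca_reduce(X, n_components=2):
    """Project a (samples x features) matrix onto its top principal components."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T

rng = np.random.default_rng(0)
raw = np.cumsum(rng.normal(size=(200, 6)), axis=0)  # 6-channel IMU-like signal
smoothed = np.column_stack([kalman_smooth(raw[:, c]) for c in range(6)])
reduced = pca_reduce(smoothed, n_components=2)
print(reduced.shape)  # (200, 2)
```

In a pipeline like the one described, the reduced features would then feed a classifier (e.g. the LSTM mentioned in the abstract).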

    Lidar-based Gait Analysis and Activity Recognition in a 4D Surveillance System

    This paper presents new approaches for gait and activity analysis based on the data streams of a Rotating Multi-Beam (RMB) Lidar sensor. The proposed algorithms are embedded into an integrated 4D vision and visualization system, which is able to analyze and interactively display real scenarios in natural outdoor environments with walking pedestrians. The main focus of the investigations is gait-based person re-identification during tracking, and the recognition of specific activity patterns such as bending, waving, making phone calls, and checking the time by looking at a wristwatch. The descriptors for training and recognition are observed and extracted from realistic outdoor surveillance scenarios, where multiple pedestrians walk in the field of interest along possibly intersecting trajectories, so the observations may often be affected by occlusions or background noise. Since no public database is available for such scenarios, we created and published a new Lidar-based outdoor gait and activity dataset on our website, which contains point cloud sequences of 28 different persons extracted and aggregated from 35-minute-long measurements. The presented results confirm that both efficient gait-based identification and activity recognition are achievable in the sparse point clouds of a single RMB Lidar sensor. After extracting the people's trajectories, we synthesized a free-viewpoint video in which moving avatar models follow the trajectories of the observed pedestrians in real time, ensuring that the leg movements of the animated avatars are synchronized with the real gait cycles observed in the Lidar stream.
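    The abstract does not specify the descriptor format. One common, simple choice for sparse Lidar point clouds of a person, shown here purely as an illustrative assumption, is a normalized 2D occupancy histogram of the side-view projection:

```python
import numpy as np

def silhouette_descriptor(points, bins=(16, 16)):
    """Project a person's 3D Lidar points onto the side (x, z) plane and
    return a normalized 2D occupancy histogram as a fixed-length descriptor."""
    hist, _, _ = np.histogram2d(points[:, 0], points[:, 2], bins=bins)
    hist = hist.flatten()
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

rng = np.random.default_rng(2)
cloud = rng.normal(0, 0.3, (500, 3)) + [0.0, 0.0, 1.0]  # sparse points around a torso
desc = silhouette_descriptor(cloud)  # 256-dimensional unit vector
```

Descriptors of this kind can be compared across frames (e.g. by cosine similarity) for re-identification; the paper's actual features may differ.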

    A Study and Estimation of Lost Person Behavior in Crowded Areas Using Accelerometer Data from Smartphones

    As smartphones become more popular, applications are being developed with new and innovative ways to solve problems in the day-to-day lives of users. One area of smartphone technology that has developed in recent years is human activity recognition (HAR). This technology uses various sensors built into the smartphone to sense a person's activity in real time. Applications that incorporate HAR can be used to track a person's movements and are very useful in areas such as health care. We use this type of motion sensing technology, specifically data collected from the accelerometer sensor. The purpose of this study is to estimate whether a person may have become lost in a crowded area. The application is capable of estimating the movements of people in a crowded area, and of determining whether or not a person is lost based on his/her movements as detected by the smartphone. This will be of great benefit to anyone interested in crowd management strategies. In this paper, we review related literature and research that has given us the basis for our own research. We also detail research on lost person behavior: we looked at the typical movements a person will likely make when he/she is lost and used these movements to indicate lost person behavior. We then evaluate and describe the creation of the application, all of its components, and the testing process.
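    The abstract does not describe the detection rule. A minimal sketch of one plausible heuristic, assuming that erratic, high-variance acceleration (repeated stopping and direction changes) indicates lost-person behavior; the window size and threshold are illustrative assumptions, not the paper's values:

```python
import numpy as np

def movement_features(acc, window=50):
    """Windowed mean and variance of acceleration magnitude from an (N, 3) array."""
    mag = np.linalg.norm(acc, axis=1)
    n = len(mag) // window
    windows = mag[: n * window].reshape(n, window)
    return windows.mean(axis=1), windows.var(axis=1)

def looks_lost(acc, window=50, var_threshold=0.5):
    """Heuristic flag per window: True where movement is erratic (high variance)."""
    _, var = movement_features(acc, window)
    return var > var_threshold

# Synthetic data: 200 samples of steady standing, then 200 of erratic movement
rng = np.random.default_rng(1)
steady = np.tile([0.0, 0.0, 9.8], (200, 1)) + rng.normal(0, 0.05, (200, 3))
erratic = np.tile([0.0, 0.0, 9.8], (200, 1)) + rng.normal(0, 2.0, (200, 3))
flags = looks_lost(np.vstack([steady, erratic]))  # False for steady, True for erratic windows
```

A real application would combine several such cues (pauses, turns, retraced paths) rather than a single variance threshold.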

    Multistatic human micro-Doppler classification of armed/unarmed personnel

    Classification of different human activities using multistatic micro-Doppler data and features is considered in this paper, focusing on the distinction between unarmed and potentially armed personnel. A database of real radar data with more than 550 recordings from 7 different human subjects has been collected in a series of field experiments with a multistatic radar system. Four key features were extracted from the micro-Doppler signature after Short Time Fourier Transform analysis. The resulting feature vectors were then used individually, in pairs, in triplets, and all together as input to different types of classifiers based on the discriminant analysis method. The performance of different classifiers and feature combinations is discussed, aiming to identify the most appropriate features for the unarmed vs. armed personnel classification, as well as the benefit of combining multistatic data rather than using monostatic data only.
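    The abstract names STFT analysis but not the four features themselves. A minimal sketch of the spectrogram step, assuming spectral centroid and bandwidth of the Doppler axis as two illustrative micro-Doppler features (the paper's actual features may differ):

```python
import numpy as np

def stft_spectrogram(x, nperseg=64, hop=32):
    """Magnitude spectrogram via a Hann-windowed short-time Fourier transform."""
    win = np.hanning(nperseg)
    frames = [x[i:i + nperseg] * win
              for i in range(0, len(x) - nperseg + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T  # (freq, time)

def doppler_features(spec):
    """Two example features averaged over time frames:
    spectral centroid and bandwidth along the Doppler (frequency) axis."""
    freqs = np.arange(spec.shape[0])[:, None]
    power = spec / (spec.sum(axis=0, keepdims=True) + 1e-12)
    centroid = (freqs * power).sum(axis=0)
    bandwidth = np.sqrt(((freqs - centroid) ** 2 * power).sum(axis=0))
    return centroid.mean(), bandwidth.mean()

# Synthetic radar return with a slow sinusoidal micro-Doppler modulation
t = np.linspace(0, 1, 1024)
sig = np.sin(2 * np.pi * (50 * t + 5 * np.sin(2 * np.pi * 2 * t)))
spec = stft_spectrogram(sig)
centroid, bandwidth = doppler_features(spec)
```

In the pipeline described, such feature vectors would then be fed to a discriminant-analysis classifier.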

    The role of human body movements in mate selection

    It is common scientific knowledge that most of what we say within a conversation is expressed not through the words' meaning alone, but also through our gestures, postures, and body movements. This non-verbal mode is possibly rooted firmly in our human evolutionary heritage, and as such, some scientists argue that it serves as a fundamental assessment and expression tool for our inner qualities. Studies of nonverbal communication have established that a universal, culture-free, non-verbal sign system exists that is available to all individuals for negotiating social encounters. Thus, it is not only the kind of gestures and expressions humans use in social communication that matters, but also the way these movements are performed, as this seems to convey key information about an individual's quality. Dance, for example, is a special form of movement that can be observed in human courtship displays. Recent research suggests that people are sensitive to variation in dance movements, and that dance performance provides information about an individual's mate quality in terms of health and strength. This article reviews the role of body movement in human non-verbal communication, and highlights its significance in human mate preferences in order to promote future work in this research area within the evolutionary psychology framework.

    Which One is Me?: Identifying Oneself on Public Displays

    While user representations are extensively used on public displays, it remains unclear how well users can recognize their own representation among those of surrounding users. We study the most widely used representations: abstract objects, skeletons, silhouettes, and mirrors. In a prestudy (N=12), we identify five strategies that users follow to recognize themselves on public displays. In a second study (N=19), we quantify the users' recognition time and accuracy with respect to each representation type. Our findings suggest that there is a significant effect of (1) the representation type, (2) the strategies performed by users, and (3) the combination of both on recognition time and accuracy. We discuss the suitability of each representation for different settings and provide specific recommendations as to how user representations should be applied in multi-user scenarios. These recommendations guide practitioners and researchers in selecting the representation that best matches the deployment's requirements and the user strategies that are feasible in that environment.