
    Deep state-space modeling for explainable representation, analysis, and generation of professional human poses

    The analysis of human movements has been extensively studied due to its wide variety of practical applications. Nevertheless, the state of the art still faces scientific challenges in modeling human movements. Firstly, new models that account for the stochasticity of human movement and the physical structure of the human body are required to accurately predict the evolution of full-body motion descriptors over time. Secondly, the explainability of existing deep learning algorithms regarding their body posture predictions while generating human movements still needs to be improved, as they lack comprehensible representations of human movement. This paper addresses these challenges by introducing three novel approaches for creating explainable representations of human movement. In this work, full-body movement is formulated as a state-space model of a dynamic system whose parameters are estimated using deep learning and statistical algorithms. The representations adhere to the structure of the Gesture Operational Model (GOM), which describes movement through its spatial and temporal assumptions. Two approaches correspond to deep state-space models that apply nonlinear network parameterization to provide interpretable posture predictions. The third method trains GOM representations using one-shot training with Kalman Filters. This training strategy enables users to model single movements and estimate their mathematical representation using procedures that require less computational power than deep learning algorithms. Ultimately, two applications of the generated representations are presented. The first is the accurate generation of human movements, and the second is body dexterity analysis of professional movements, where dynamic associations between body joints and meaningful motion descriptors are identified.
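
    As a hedged illustration of the one-shot training idea, the sketch below estimates the coefficients of a simple autoregressive representation of a single joint trajectory by treating the unknown coefficients as the hidden state of a Kalman Filter; the two-lag linear form and all names are illustrative assumptions, not the paper's exact GOM structure.

    ```python
    import numpy as np

    def one_shot_ar_fit(y, order=2, q=1e-5, r=1e-2):
        """Estimate AR coefficients of one joint trajectory with a Kalman
        Filter: the unknown coefficients are the filter's hidden state, so a
        single pass over a single recording yields the representation."""
        theta = np.zeros(order)           # running coefficient estimate
        P = np.eye(order)                 # estimate covariance
        Q = q * np.eye(order)             # slow random-walk drift of coefficients
        for t in range(order, len(y)):
            h = y[t - order:t][::-1]      # regressor: the last `order` samples
            P = P + Q                     # predict step (coefficients drift)
            k = P @ h / (h @ P @ h + r)   # Kalman gain
            theta = theta + k * (y[t] - h @ theta)  # correct with innovation
            P = P - np.outer(k, h) @ P
        return theta

    # Toy usage: recover the dynamics of a noisy damped oscillation.
    t = np.linspace(0.0, 10.0, 500)
    y = np.sin(2 * np.pi * 0.8 * t) * np.exp(-0.1 * t) + 0.01 * np.random.randn(500)
    print(one_shot_ar_fit(y))
    ```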

    Towards the Design of a Natural User Interface for Performing and Learning Musical Gestures

    A large variety of musical instruments, either acoustic or digital, are based on a keyboard scheme. Keyboard instruments can produce sounds through acoustic means, but in today's music they are increasingly used to control digital sound synthesis processes. Interestingly, across all these different possibilities of sonic outcome, the input remains a musical gesture. In this paper we present the conceptualization of a Natural User Interface (NUI), named the Intangible Musical Instrument (IMI), aiming to support both the learning of expert musical gestures and the performance of music as a unified user experience. The IMI is designed to recognize metaphors of pianistic gestures, focusing on subtle uses of the fingers and upper body. Based on a typology of musical gestures, a gesture vocabulary has been created, hierarchized from basic to complex. These piano-like gestures are finally recognized and transformed into sounds.
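
    One way to picture the hierarchized gesture vocabulary is as a plain mapping from each recognized gesture to its complexity level and the sound event it triggers. The labels, levels, and sound names below are invented for illustration; the IMI's actual vocabulary is only described qualitatively here.

    ```python
    # Hypothetical vocabulary: each recognized gesture maps to its level in
    # the basic-to-complex hierarchy and to the sound event it triggers.
    GESTURE_VOCABULARY = {
        "single_key_press":  {"level": 1, "sound": "note_on"},
        "legato_transition": {"level": 2, "sound": "note_glide"},
        "chord_grasp":       {"level": 3, "sound": "chord_on"},
        "arpeggio_sweep":    {"level": 4, "sound": "arpeggio"},
    }

    class PrintSynth:
        """Stand-in sound engine that just logs what it would play."""
        def play(self, sound):
            print(f"play {sound}")

    def trigger(gesture_label, synth):
        """Route a recognized gesture to the sound engine."""
        entry = GESTURE_VOCABULARY.get(gesture_label)
        if entry is not None:
            synth.play(entry["sound"])

    trigger("chord_grasp", PrintSynth())  # -> play chord_on
    ```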

    Gesture recognition using a depth camera for human robot collaboration on assembly line

    We present a framework and preliminary experimental results for technical gesture recognition using an RGB-D camera. We have studied a collaborative task between a robot and an operator: the assembly of motor hoses. The goal is to enable the robot to understand which task has just been executed by a human operator, in order to anticipate the operator's actions, adapt its speed, and react properly if an unusual event occurs. The depth camera is placed above the operator to minimize possible occlusions on an assembly line, and we track the head and hands of the operator using the geodesic distance between the head and the pixels of the torso. To describe the operator's movements, we use the shape of the shortest routes joining the head and the hands. We then use a discrete HMM to learn and recognize five gestures performed during the motor hose assembly. By using gestures from the same operator for both learning and recognition, we reach a good recognition rate of 93%. These results are encouraging, and ongoing work will lead us to test our setup on a larger pool of operators and to recognize the gestures in real time.
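
    A compact sketch of the discrete-HMM recognition step, assuming the shortest-route shape descriptors have already been quantized into a finite symbol alphabet; the five gesture names and the randomly initialized models are placeholders (real models would be trained, e.g., with Baum-Welch).

    ```python
    import numpy as np
    from scipy.special import logsumexp

    def log_forward(obs, log_pi, log_A, log_B):
        """Log-likelihood of a discrete symbol sequence under one HMM
        (forward algorithm in log space)."""
        alpha = log_pi + log_B[:, obs[0]]
        for o in obs[1:]:
            alpha = log_B[:, o] + logsumexp(alpha[:, None] + log_A, axis=0)
        return logsumexp(alpha)

    def recognize(obs, models):
        """Pick the gesture whose HMM explains the sequence best."""
        return max(models, key=lambda g: log_forward(obs, *models[g]))

    rng = np.random.default_rng(0)

    def random_model(n_states=3, n_symbols=8):
        pi = rng.dirichlet(np.ones(n_states))
        A = rng.dirichlet(np.ones(n_states), size=n_states)   # transitions
        B = rng.dirichlet(np.ones(n_symbols), size=n_states)  # emissions
        return np.log(pi), np.log(A), np.log(B)

    models = {g: random_model() for g in
              ["grab_hose", "position_hose", "clip_hose", "check", "idle"]}
    print(recognize([0, 3, 2, 7, 1], models))
    ```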

    Real-time recognition of human gestures for collaborative robots on assembly-line

    We present a framework and preliminary experimental results for real-time recognition of human operator actions. The goal is, for a collaborative industrial robot operating on the same assembly line as workers, to allow adaptation of its behavior and speed for smooth human-robot cooperation. To this end, the robot must monitor and understand the behavior of the humans around it. Real-time motion capture is performed using a "MoCap suit" of 12 inertial sensors estimating the joint angles of the upper half of the human body (neck, wrists, elbows, shoulders, etc.). In our experiment, we consider one particular assembly operation on car doors, which we have further subdivided into 4 successive steps: removing the adhesive protection from the waterproofing sheet, positioning the waterproofing sheet on the door, pre-sticking the sheet on the door, and finally installing the window "sealing strip". Gesture recognition is achieved continuously in real time, using a technique combining an automatic time-rescaling similar to Dynamic Time Warping (DTW) with a Hidden Markov Model (HMM) for estimating the respective probabilities of the 4 learnt actions. A preliminary evaluation, conducted in real-world conditions on an experimental assembly cell of the car manufacturer PSA, shows a very promising correct recognition rate of 96% over several repetitions of the same assembly operation by a single operator. Ongoing work aims at evaluating our framework on the same actions but over more executions by a larger pool of human operators, and at estimating false recognition rates on unrelated gestures. Another interesting perspective is the use of workers' motion capture to estimate effort and stress, helping to prevent physical causes of some musculoskeletal disorders.
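
    The time-rescaling component can be pictured with a textbook DTW distance. The sketch below is a plain nearest-template classifier over the four assembly steps, not the paper's exact DTW-plus-HMM combination; the 12-dimensional feature vectors stand in for the joint angles from the inertial suit.

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Classic dynamic-time-warping cost between two feature sequences
        (rows = time steps, columns = joint-angle features)."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def classify(live, templates):
        """Nearest-template decision over the learnt assembly steps."""
        return min(templates, key=lambda name: dtw_distance(live, templates[name]))

    rng = np.random.default_rng(1)
    templates = {s: rng.standard_normal((50, 12)) for s in
                 ["remove_protection", "position_sheet", "pre_stick", "install_strip"]}
    print(classify(rng.standard_normal((60, 12)), templates))
    ```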

    A User-Adaptive Gesture Recognition System Applied to Human-Robot Collaboration in Factories

    Enabling Human-Robot Collaboration (HRC) requires a robot with the capacity to understand its environment and the actions performed by the persons interacting with it. In this paper we deal with industrial collaborative robots on assembly lines in automotive factories. These robots have to work with operators on common tasks. We work on technical gesture recognition to allow the robot to understand which task is being executed by the operator, in order to synchronize its actions. We use a depth camera with a top view, track the hand positions of the worker, and use discrete HMMs to learn and recognize technical gestures. We are also interested in a gesture recognition system that can adapt itself to the operator. Indeed, the same technical gesture looks very similar from one operator to another, but each operator has his/her own way of performing it. In this paper, we study an adaptation of the recognition system that modifies the learning database by adding a very small number of gestures. Our research shows that by adding 2 sets of gestures from the operator who is working with the robot, which represents less than 1% of the database, we can improve the correct recognition rate by ~3.5%. When we add 10 sets of gestures, 2.6% of the database, the improvement reaches 5.7%.
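
    The adaptation scheme boils down to appending a handful of the current operator's recordings to the generic training database before re-training the per-gesture HMMs. The helper below is a hypothetical sketch of that bookkeeping, not the authors' code.

    ```python
    def adapt_database(base_db, operator_samples):
        """Append a small set of operator-specific recordings (here on the
        order of 1-3% of the data) to the generic training database.

        base_db, operator_samples: dict gesture_label -> list of sequences.
        Returns a new database on which the per-gesture HMMs are re-trained.
        """
        adapted = {g: list(seqs) for g, seqs in base_db.items()}
        for g, seqs in operator_samples.items():
            adapted.setdefault(g, []).extend(seqs)
        added = sum(len(s) for s in operator_samples.values())
        total = sum(len(s) for s in adapted.values())
        print(f"added {added} sequences ({100 * added / total:.1f}% of database)")
        return adapted
    ```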

    Music gestural skills development engaging teachers, learners and expert performers

    This article presents a platform for learning the theoretical knowledge and practical motor skills of musical gestures by combining the functionalities of Learning Management Systems (LMS) and Serious Gaming (SG). The teacher designs his/her educational scenario, which can articulate both theoretical and practical activities. The learner accesses online multimedia courses using his/her LMS client, which can be a computer, tablet, or smartphone, and the serious game using his/her computer and the motion capture sensors. While practicing, the learner's gestures are compared in real time with the expert's gestures, and s/he is evaluated both in terms of correct fingerings and kinematics. Finally, the platform offers the learner a single profile covering theoretical and practical activities.
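
    A rough sketch of the dual evaluation, combining fingering correctness with kinematic closeness to the expert; the score form, the equal weighting, and the assumption that the sequences are already time-aligned are all illustrative choices.

    ```python
    import numpy as np

    def practice_score(learner_fingering, expert_fingering,
                       learner_motion, expert_motion, w=0.5):
        """Blend fingering accuracy with kinematic similarity in [0, 1].

        *_fingering: equal-length sequences of finger labels.
        *_motion: equal-length arrays (time, joint_angles), pre-aligned.
        """
        fingering_acc = np.mean(np.asarray(learner_fingering) ==
                                np.asarray(expert_fingering))
        rmse = np.sqrt(np.mean((learner_motion - expert_motion) ** 2))
        kinematic_sim = 1.0 / (1.0 + rmse)   # 1 = identical, -> 0 as error grows
        return w * fingering_acc + (1 - w) * kinematic_sim
    ```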

    Capture, modeling and recognition of expert technical gestures in wheel-throwing art of pottery

    This research has been conducted in the context of the ArtiMuse project, which aims at the modeling and renewal of rare gestural knowledge and skills involved in traditional craftsmanship, and more precisely in the art of wheel-throwing pottery. This knowledge and these skills constitute Intangible Cultural Heritage: the fruit of diverse expertise founded and propagated over the centuries thanks to the ingeniousness of the gesture and the creativity of the human spirit. Nowadays, this expertise is very often threatened with disappearance because of the difficulty of resisting globalization and the fact that most of these "expertise holders" are not easily accessible due to geographical or other constraints. In this paper, a methodological framework for capturing and modeling gestural knowledge and skills in wheel-throwing pottery is proposed. It is based on capturing gestures with wireless inertial sensors and on statistical modeling. In particular, we used a system that allows for online alignment of gestures using a modified Hidden Markov Model. This methodology is implemented in a Human-Computer Interface, which permits both the modeling and the recognition of expert technical gestures. The system could be used to assist in the learning of these gestures by giving continuous feedback in real time, measuring the difference between expert and learner gestures. It has been tested and evaluated on different potters with a rare expertise that is strongly related to their local identity.
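
    The online-alignment idea can be sketched as a left-to-right HMM whose states are the frames of a single expert recording: one forward update per incoming frame yields both the current position within the gesture and a feedback distance. The Gaussian emissions and stay/advance transitions below are simplifying assumptions in the spirit of, not identical to, the modified HMM used here.

    ```python
    import numpy as np

    class GestureFollower:
        """Left-to-right HMM built from one expert recording: state i emits a
        Gaussian centred on expert frame i; transitions stay or advance."""

        def __init__(self, expert, sigma=1.0, p_stay=0.5):
            self.expert = np.asarray(expert)   # (n_frames, n_features)
            self.sigma2 = sigma ** 2
            self.p_stay = p_stay
            self.alpha = None                  # forward probabilities

        def step(self, frame):
            n = len(self.expert)
            if self.alpha is None:             # start at the first frame
                self.alpha = np.zeros(n)
                self.alpha[0] = 1.0
            pred = self.p_stay * self.alpha    # stay in the same state...
            pred[1:] += (1 - self.p_stay) * self.alpha[:-1]   # ...or advance
            lik = np.exp(-np.sum((self.expert - frame) ** 2, axis=1)
                         / (2 * self.sigma2))
            self.alpha = pred * lik
            self.alpha /= self.alpha.sum() + 1e-12
            pos = int(np.argmax(self.alpha))   # estimated position in gesture
            dist = float(np.linalg.norm(frame - self.expert[pos]))  # feedback
            return pos, dist
    ```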

    Motion Capture Benchmark of Real Industrial Tasks and Traditional Crafts for Human Movement Analysis

    Human movement analysis is a key area of research in robotics, biomechanics, and data science. It encompasses tracking, posture estimation, and movement synthesis. While numerous methodologies have evolved over time, a systematic and quantitative evaluation of these approaches using verifiable ground truth data of three-dimensional human movement is still required to define the current state of the art. This paper presents seven datasets recorded using inertial-based motion capture. The datasets contain professional gestures carried out by industrial operators and skilled craftsmen, performed in real conditions in situ. The datasets were created with the intention of being used for research in human motion modeling, analysis, and generation. The protocols for data collection are described in detail, and a preliminary analysis of the collected data is provided as a benchmark. The Gesture Operational Model, a hybrid stochastic-biomechanical approach based on kinematic descriptors, is used to model the dynamics of the experts' movements and to create mathematical representations of their motion trajectories for analyzing and quantifying their body dexterity. The models allowed the accurate generation of professional human poses and an intuitive description of how body joints cooperate and change over time during the performance of the task.
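
    As a toy illustration of the modeling and generation steps, one joint's trajectory can be regressed on its own past and on the past of cooperating joints, then rolled forward to synthesize motion. The two-lag linear form below is an assumption for illustration, not the full Gesture Operational Model.

    ```python
    import numpy as np

    def fit_joint_model(y, others):
        """Least-squares fit of y[t] from its own two past values and the
        previous values of other joints (a crude GOM-like representation)."""
        X = np.column_stack([y[1:-1], y[:-2]] + [o[1:-1] for o in others])
        coeffs, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
        return coeffs

    def generate(coeffs, y0, y1, others, steps):
        """Roll the fitted model forward to synthesize a new trajectory."""
        y = [y0, y1]
        for t in range(2, steps):
            x = np.array([y[-1], y[-2]] + [o[t - 1] for o in others])
            y.append(float(x @ coeffs))
        return np.array(y)
    ```

    Inspecting the fitted coefficients then gives a first-order picture of how strongly each cooperating joint influences the modeled one, in the spirit of the dynamic associations discussed above.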

    Hand gesture recognition for driver vehicle interaction

    In this paper, we present a new driver-vehicle interface based on hand gestures that uses a hierarchical model to minimize resource requirements. Depth information is provided by a time-of-flight sensor with automotive certification. In particular, we develop our implementation of Random Forest-based posture classification for two subcases: micro gestures at the wheel and macro gestures in front of the touch screen.
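
    A minimal sketch of the hierarchical scheme with scikit-learn, where a first forest routes a depth-image feature vector to the micro- or macro-gesture subcase and a dedicated forest per subcase classifies the posture; feature extraction from the time-of-flight depth map is omitted, and all names are assumptions.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Stage 1 routes to the subcase (micro at the wheel vs macro at the
    # screen); stage 2 keeps one smaller posture classifier per subcase.
    router = RandomForestClassifier(n_estimators=100)
    posture = {"micro": RandomForestClassifier(n_estimators=100),
               "macro": RandomForestClassifier(n_estimators=100)}

    def train(X, subcase_labels, posture_labels):
        """X: feature matrix; labels: numpy arrays of strings."""
        router.fit(X, subcase_labels)
        for s, clf in posture.items():
            mask = subcase_labels == s
            clf.fit(X[mask], posture_labels[mask])

    def predict(x):
        """x: one depth-image feature vector."""
        s = router.predict(x.reshape(1, -1))[0]
        return s, posture[s].predict(x.reshape(1, -1))[0]
    ```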

    Towards a Hand Skeletal Model for Depth Images Applied to Capture Music-like Finger Gestures

    Intangible Cultural Heritage (ICH) encompasses gestural knowledge and skills in the performing arts, such as music, and its preservation and transmission are a worldwide challenge according to UNESCO. This paper presents ongoing research that aims at developing a computer vision methodology for the recognition of music-like complex hand and finger gestures performed in space. This methodology can contribute both to the analysis of classical music playing schools, such as the European and the Russian, and to finger-gesture control of sound as a new interface for musical expression. An implementation of a generic method for building body-subpart classification models, applied to musical gestures, is presented. A robust classification model learned from a reduced training dataset is developed, together with a method for spatial aggregation of the classification results that provides a confidence measure on each hand-subpart location. An 80% pixel-wise classification accuracy and a 95% point-wise subpart-location accuracy are achieved when musical finger gestures with a semi-closed hand are performed in front of the camera and the rotation around the camera axis is not too large.
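
    The spatial-aggregation step can be sketched as follows, assuming a per-pixel classifier has already produced one probability map per hand subpart: the subpart location is taken as the probability-weighted centroid, and the confidence is the fraction of probability mass concentrated around it. This is an illustrative reconstruction, not the paper's exact aggregation method.

    ```python
    import numpy as np

    def aggregate_part(prob_map, radius=5):
        """prob_map: (H, W) per-pixel probabilities for one hand subpart.
        Returns the (row, col) location and a confidence measure in [0, 1]."""
        H, W = prob_map.shape
        rows, cols = np.mgrid[0:H, 0:W]
        mass = prob_map.sum() + 1e-12
        r = (rows * prob_map).sum() / mass               # weighted centroid
        c = (cols * prob_map).sum() / mass
        near = (rows - r) ** 2 + (cols - c) ** 2 <= radius ** 2
        confidence = float(prob_map[near].sum() / mass)  # mass near centroid
        return (r, c), confidence
    ```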