15 research outputs found

    Can we Take Advantage of Time-Interval Pattern Mining to Model Students Activity?

    Get PDF
    Analyzing students' activities in their learning process is an issue that has received significant attention in the educational data mining research field. Many approaches have been proposed, including the popular sequential pattern mining. However, the vast majority of these works do not consider the time at which events occur within the activities. This paper relies on the hypothesis that we can get a better understanding of students' activities, and design more accurate models, if time is taken into account. With this in mind, we propose to study time-interval patterns. To highlight the benefits of managing time, we analyze data collected from 113 first-year university students interacting with their LMS. Experiments reveal that frequent time-interval patterns are indeed identified, which means that some students' activities are regulated not only by the order of learning resources but also by time. In addition, the experiments emphasize that the chosen sets of intervals strongly influence the patterns mined, and that the set of intervals representing natural human time (minute, hour, day, etc.) seems to be the most appropriate one to represent the time gap between resources. Finally, we show that time-interval pattern mining brings additional information compared to sequential pattern mining: not only is the view of students' possible future activities less uncertain (in terms of learning resources and their temporal gaps), but as soon as two students differ in their time intervals, this difference indicates that their subsequent activities are likely to diverge.
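    To make the notion of time-interval patterns concrete, the sketch below counts length-2 patterns of the form (resource, time-gap bucket, resource) in toy LMS logs, discretizing gaps into natural-time intervals (minute, hour, day). The logs, bucket boundaries, and support threshold are illustrative assumptions; this is not the mining algorithm or dataset used in the paper.

```python
# Minimal sketch of time-interval pattern counting over student LMS logs.
# The data, bucket boundaries, and pattern length are illustrative assumptions,
# not the algorithm or dataset used in the paper.
from collections import Counter
from datetime import datetime

# Hypothetical logs: one list of (resource, timestamp) per student, time-ordered.
logs = {
    "s1": [("quiz", datetime(2023, 1, 9, 10, 0)),
           ("video", datetime(2023, 1, 9, 10, 40)),
           ("forum", datetime(2023, 1, 10, 9, 0))],
    "s2": [("quiz", datetime(2023, 1, 9, 14, 0)),
           ("video", datetime(2023, 1, 9, 14, 25))],
}

def gap_bucket(seconds):
    """Map a time gap to a 'natural time' interval (minute, hour, day, ...)."""
    for label, limit in [("<1min", 60), ("<1h", 3600), ("<1day", 86400)]:
        if seconds < limit:
            return label
    return ">=1day"

def length2_patterns(events):
    """Yield (resource_a, gap_bucket, resource_b) for consecutive events."""
    for (r1, t1), (r2, t2) in zip(events, events[1:]):
        yield (r1, gap_bucket((t2 - t1).total_seconds()), r2)

# Support = number of students whose activity contains the pattern at least once.
support = Counter()
for events in logs.values():
    for pattern in set(length2_patterns(events)):
        support[pattern] += 1

min_support = 2
frequent = {p: s for p, s in support.items() if s >= min_support}
print(frequent)  # {('quiz', '<1h', 'video'): 2} with the toy logs above
```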

    Prediction of Intention during Interaction with iCub with Probabilistic Movement Primitives

    Get PDF
    This article describes our open-source software for predicting the intention of a user physically interacting with the humanoid robot iCub. Our goal is to allow the robot to infer the intention of the human partner during collaboration, by predicting the future intended trajectory: this capability is critical to design anticipatory behaviors that are crucial in human–robot collaborative scenarios, such as in co-manipulation, cooperative assembly, or transportation. We propose an approach to endow the iCub with basic capabilities of intention recognition, based on Probabilistic Movement Primitives (ProMPs), a versatile method for representing, generalizing, and reproducing complex motor skills. The robot learns a set of motion primitives from several demonstrations, provided by the human via physical interaction. During training, we model the collaborative scenario using human demonstrations. During the reproduction of the collaborative task, we use the acquired knowledge to recognize the intention of the human partner. Using a few early observations of the state of the robot, we can not only infer the intention of the partner but also complete the movement, even if the user breaks the physical interaction with the robot. We evaluate our approach in simulation and on the real iCub. In simulation, the iCub is driven by the user using the Geomagic Touch haptic device. In the real robot experiment, we directly interact with the iCub by grabbing and manually guiding the robot’s arm. We realize two experiments on the real robot: one with simple reaching trajectories, and one inspired by collaborative object sorting. The software implementing our approach is open source and available on the GitHub platform. In addition, we provide tutorials and videos.
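    As a rough illustration of how ProMP-based intention prediction works, the sketch below learns a Gaussian distribution over basis-function weights from synthetic one-dimensional demonstrations and conditions it on a few early observations to complete the movement. The basis count, noise level, and demonstrations are assumptions for illustration; this is not the open-source iCub software described above.

```python
# Minimal one-DoF ProMP sketch (NumPy only): learn a weight distribution from
# demonstrations, then condition on a few early observations to predict the
# rest of the movement. The basis count, noise level, and synthetic demos are
# illustrative assumptions, not the paper's iCub setup or released code.
import numpy as np

rng = np.random.default_rng(0)
T, n_basis, sigma_y = 100, 10, 1e-2
t = np.linspace(0, 1, T)
centers = np.linspace(0, 1, n_basis)
Phi = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / 0.08) ** 2)  # (T, n_basis)

# Synthetic demonstrations of a reaching-like profile with small variations.
demos = [np.sin(np.pi * t) * (1.0 + 0.1 * rng.standard_normal())
         + 0.01 * rng.standard_normal(T) for _ in range(8)]

# Fit one weight vector per demonstration (ridge regression), then a Gaussian over weights.
W = np.array([np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(n_basis), Phi.T @ y)
              for y in demos])
mu_w, Sigma_w = W.mean(axis=0), np.cov(W.T) + 1e-6 * np.eye(n_basis)

# Early observations: the first 20 samples of a new (partially observed) movement.
n_obs = 20
y_obs = np.sin(np.pi * t[:n_obs]) * 1.05
Phi_o = Phi[:n_obs]

# Gaussian conditioning of the weight distribution on the observations.
S = Phi_o @ Sigma_w @ Phi_o.T + sigma_y ** 2 * np.eye(n_obs)
K = Sigma_w @ Phi_o.T @ np.linalg.solve(S, np.eye(n_obs))
mu_post = mu_w + K @ (y_obs - Phi_o @ mu_w)

# Predicted completion of the movement after the partner lets go.
y_pred = Phi @ mu_post
print(y_pred[-1], "predicted end point (true profile ends near 0)")
```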

    Multi-modal Prediction using Probabilistic Movement Primitives Learning

    Get PDF
    This paper proposes a method for multi-modal prediction of intention based on a probabilistic description of movement primitives and goals. We target dyadic interaction between a human and a robot in a collaborative scenario. The robot acquires multi-modal models of collaborative action primitives that combine gaze cues from the human partner with kinetic information about the manipulation primitives of its arm. We show that if the partner guides the robot with a gaze cue, the robot recognizes the intended action primitive even in the case of ambiguous actions. Furthermore, this prior knowledge acquired through gaze greatly improves the prediction of the future intended trajectory during a physical interaction. Results with the humanoid iCub are presented and discussed.
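    A minimal sketch of the multi-modal idea: a gaze cue provides a prior over candidate primitives, early observations of the guided movement provide a likelihood, and the two are fused with Bayes' rule. The primitive shapes, the gaze prior values, and the noise level are invented for illustration and do not come from the paper.

```python
# Hedged sketch of the multi-modal fusion idea: a gaze cue gives a prior over
# candidate motion primitives, early observations of the guided arm give a
# likelihood, and the two are combined with Bayes' rule. Primitive shapes, the
# gaze prior, and the noise level are illustrative, not the iCub implementation.
import numpy as np

t = np.linspace(0, 1, 100)
# Two candidate (deliberately ambiguous) primitives: they only diverge late.
primitives = {
    "reach_left":  np.sin(np.pi * t),
    "reach_right": np.sin(np.pi * t) * (1 - 0.6 * t),
}
gaze_prior = {"reach_left": 0.8, "reach_right": 0.2}  # partner looked left

# Early observations: first 15 samples of a movement that actually goes left.
n_obs, sigma = 15, 0.05
y_obs = np.sin(np.pi * t[:n_obs]) + sigma * np.random.default_rng(1).standard_normal(n_obs)

def log_likelihood(y, mean, sigma):
    """Gaussian log-likelihood of the observed prefix under one primitive."""
    return -0.5 * np.sum(((y - mean[: len(y)]) / sigma) ** 2)

# Posterior over primitives: prior from gaze x likelihood from the observed prefix.
log_post = {k: np.log(gaze_prior[k]) + log_likelihood(y_obs, m, sigma)
            for k, m in primitives.items()}
z = max(log_post.values())
post = {k: np.exp(v - z) for k, v in log_post.items()}
norm = sum(post.values())
post = {k: v / norm for k, v in post.items()}
print(post)  # the gaze prior helps disambiguate the nearly identical early prefixes
```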

    Developmental Learning of Audio-Visual Integration From Facial Gestures Of a Social Robot

    Get PDF
    We present a robot head with facial gestures, audio, and vision capabilities, aimed at the emergence of infant-like social features. To this end, we propose a neural architecture that integrates these three modalities following a developmental stage of social interaction with a caregiver. During dyadic interaction with the experimenter, the robot learns to categorize audio-speech gestures for the vowels /a/, /i/, /o/, as a baby would, by linking someone else's facial expressions to its own movements. We show that multimodal integration in the neural network is more robust than unimodal learning, so that it compensates for erroneous or noisy information coming from each modality. As a result, facial mimicry with a partner can be reproduced using redundant audiovisual signals or noisy information from one modality only. Statistical experiments with 24 naive participants show the robustness of our algorithm during human-robot interactions in a public environment where many people move and talk all the time. We then discuss our model in the light of human-robot communication and the development of social skills and language in infants.
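    The following toy example only illustrates the general claim that multimodal integration can compensate for a noisy modality; it fuses two simple Gaussian classifiers by summing log-likelihoods and is not the developmental neural architecture described in the paper. All features and noise levels are made up.

```python
# Hedged toy illustration of why multimodal fusion can compensate a noisy
# modality: two unimodal Gaussian classifiers for vowels /a/, /i/, /o/ are
# fused by summing log-likelihoods. The features and noise levels are made up;
# this is not the neural architecture described in the paper.
import numpy as np

rng = np.random.default_rng(0)
vowels = ["a", "i", "o"]
proto = {"audio": {"a": 0.0, "i": 1.0, "o": 2.0},   # 1-D class prototypes
         "vision": {"a": 0.0, "i": 1.0, "o": 2.0}}
noise = {"audio": 0.9, "vision": 0.3}               # audio is the noisy modality

def log_scores(x, modality):
    """Per-class Gaussian log-likelihoods for one modality."""
    return np.array([-0.5 * ((x - proto[modality][v]) / noise[modality]) ** 2
                     for v in vowels])

n, correct = 2000, {"audio": 0, "vision": 0, "fused": 0}
for _ in range(n):
    true = rng.integers(3)
    x_a = proto["audio"][vowels[true]] + noise["audio"] * rng.standard_normal()
    x_v = proto["vision"][vowels[true]] + noise["vision"] * rng.standard_normal()
    s_a, s_v = log_scores(x_a, "audio"), log_scores(x_v, "vision")
    for name, s in [("audio", s_a), ("vision", s_v), ("fused", s_a + s_v)]:
        correct[name] += int(np.argmax(s) == true)

print({k: v / n for k, v in correct.items()})  # fused accuracy >= best single modality
```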

    One-shot Evaluation of the Control Interface of a Robotic Arm by Non-Experts

    Get PDF
    In this paper we study the relation between performance of use and user preference for robotic arm control interfaces. We are interested in the preferences of non-experts after a one-shot evaluation of the interfaces on a test task. We also probe the possible relation between user performance and individual factors. After a focus group study, we chose to compare the robotic arm's joystick and a graphical user interface. We then studied user performance and subjective evaluations of the interfaces in an experiment with the Jaco robot arm and N=23 healthy adults. Our preliminary results show that the user preference for a particular interface does not seem to depend on their performance in using it: for example, many users expressed a preference for the joystick even though they performed better with the graphical interface. Contrary to our expectations, this result does not seem to relate to the individual factors we evaluated, namely desire for control and negative attitude towards robots.

    The CoDyCo Project achievements and beyond: Towards Human Aware Whole-body Controllers for Physical Human Robot Interaction

    Get PDF
    The success of robots in real-world environments largely depends on their ability to interact with both humans and the environment itself. The FP7 EU project CoDyCo focused on the latter of these two challenges by exploiting both rigid and compliant contact dynamics in the robot control problem. Regarding the former, properly managing interaction dynamics on the robot control side requires an estimation of the human partner's behaviours and intentions. In this paper we present the building blocks of such a human-in-the-loop controller and validate them both in simulation and on the iCub humanoid robot, using a human-robot interaction scenario in which a human assists the robot in standing up from being seated on a bench.

    Human Movement Prediction for Collaborative Robotics: From Guided Gestures to Whole-Body Movement

    No full text
    This thesis lies at the intersection of machine learning and humanoid robotics, under the theme of human-robot interaction and within the field of cobotics (collaborative robotics). It focuses on prediction for non-verbal human-robot interactions, with an emphasis on gestural interaction. The prediction of intention, and the understanding and reproduction of gestures, are therefore central topics of this thesis. First, the robot learns gestures by demonstration: a user grabs its arm and makes it perform the gestures to be learned several times. The robot must then be able to reproduce these different movements while generalizing them to adapt to the situation. To do so, using its proprioceptive sensors, it interprets the perceived signals to understand the movement made by the user, in order to generate similar movements later on. Second, the robot learns to recognize the intention of the human partner based on the gestures that the human initiates: the robot then has to perform the gestures adapted to the situation and corresponding to the user's expectations. This requires the robot to understand the user's gestures, and to this end different perceptual modalities have been explored. Using proprioceptive sensors, the robot feels the user's gestures through its own body: this is physical human-robot interaction. Using visual sensors, the robot interprets the movement of the user's head. Finally, using external sensors, the robot recognizes and predicts the user's whole-body movement; in that case, the user wears sensors (in our case, a wearable motion-tracking suit by Xsens) that transmit their posture to the robot. The coupling of these modalities was also studied. From a methodological point of view, the learning and recognition of time series (gestures) are central to this thesis, and two approaches have been developed. The first is based on the statistical modeling of movement primitives (corresponding to gestures): ProMPs. The second adds deep learning to the first, using auto-encoders to model whole-body gestures that contain a large amount of information while still allowing prediction in soft real time. Several issues guided the creation and development of our methods: the prediction of trajectory durations, the reduction of the cognitive and motor load imposed on the user, and the need for speed (soft real time) and accuracy in the predictions.
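    As a hedged sketch of the second approach mentioned above (auto-encoders combined with ProMPs for whole-body gestures), the snippet below compresses high-dimensional posture frames into a small latent space; the latent trajectories could then be modeled and conditioned exactly as in a standard ProMP. The dimensions, network architecture, and synthetic data are assumptions, not the thesis implementation.

```python
# Hedged sketch of the latent-space idea: compress high-dimensional whole-body
# postures with an auto-encoder, then model the low-dimensional latent
# trajectories with ProMPs for soft real-time prediction. Dimensions,
# architecture, and training data are illustrative assumptions only.
import torch
import torch.nn as nn

posture_dim, latent_dim = 66, 6   # e.g. a joint-angle representation -> small latent

class PostureAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(posture_dim, 32), nn.ReLU(),
                                     nn.Linear(32, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                     nn.Linear(32, posture_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Synthetic "demonstrations": smooth random whole-body trajectories (T x posture_dim).
torch.manual_seed(0)
demos = [torch.cumsum(0.01 * torch.randn(100, posture_dim), dim=0) for _ in range(5)]
frames = torch.cat(demos)

model = PostureAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                       # short training loop for the sketch
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(frames), frames)
    loss.backward()
    opt.step()

# Each demonstration becomes a T x latent_dim trajectory; one ProMP per latent
# dimension can then be learned and conditioned on early frames, and the
# predicted latent trajectory decoded back to full-body postures.
with torch.no_grad():
    latent_demo = model.encoder(demos[0])
    print(latent_demo.shape)               # torch.Size([100, 6])
```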

    Movement Prediction for human-robot collaboration: from simple gesture to whole-body movement

    No full text
    This thesis lies at the intersection of machine learning and humanoid robotics, under the theme of human-robot interaction and within the field of cobotics (collaborative robotics). It focuses on prediction for non-verbal human-robot interactions, with an emphasis on gestural interaction. The prediction of intention, and the understanding and reproduction of gestures, are therefore central topics of this thesis. First, the robot learns gestures by demonstration: a user grabs its arm and makes it perform the gestures to be learned several times. The robot must then be able to reproduce these different movements while generalizing them to adapt to the situation. To do so, using its proprioceptive sensors, it interprets the perceived signals to understand the user's movement, in order to generate similar movements later on. Second, the robot learns to recognize the intention of the human partner based on the gestures that the human initiates. The robot can then perform gestures adapted to the situation and corresponding to the user's expectations. This requires the robot to understand the user's gestures, and to this end different perceptual modalities have been explored. Using proprioceptive sensors, the robot feels the user's gestures through its own body: this is physical human-robot interaction. Using visual sensors, the robot interprets the movement of the user's head. Finally, using external sensors, the robot recognizes and predicts the user's whole-body movement; in that case, the user wears sensors (in our case, a wearable motion-tracking suit by Xsens) that transmit their posture to the robot. The coupling of these modalities was also studied. From a methodological point of view, the learning and recognition of time series (gestures) are central to this thesis, and two approaches have been developed. The first is based on the statistical modeling of movement primitives (corresponding to gestures): ProMPs. The second adds deep learning to the first, using auto-encoders to model whole-body gestures that contain a large amount of information while still allowing prediction in soft real time. Several issues guided the creation and development of our methods: the prediction of trajectory durations, the reduction of the cognitive and motor load imposed on the user, and the need for speed (soft real time) and accuracy in the predictions.

    Can We Take Advantage of Temporal Pattern Mining to Model Students' Activity?

    No full text
    Analyzing students' activities in their learning process is an issue that has received significant attention in the educational data mining research field. Many approaches have been proposed, including the popular sequential pattern mining. However, the vast majority of these works do not consider the time at which events occur within the activities. This paper relies on the hypothesis that we can get a better understanding of students' activities, and design more accurate models, if time is taken into account. With this in mind, we propose to study time-interval patterns. To highlight the benefits of managing time, we analyze data collected from 113 first-year university students interacting with their LMS. Experiments reveal that frequent time-interval patterns are indeed identified, which means that some students' activities are regulated not only by the order of learning resources but also by time. In addition, the experiments emphasize that the chosen sets of intervals strongly influence the patterns mined, and that the set of intervals representing natural human time (minute, hour, day, etc.) seems to be the most appropriate one to represent the time gap between resources. Finally, we show that time-interval pattern mining brings additional information compared to sequential pattern mining: not only is the view of students' possible future activities less uncertain (in terms of learning resources and their temporal gaps), but as soon as two students differ in their time intervals, this difference indicates that their subsequent activities are likely to diverge.