
    A Deep Learning Approach for Multi-View Engagement Estimation of Children in a Child-Robot Joint Attention Task

    In this work we tackle the problem of child engagement estimation while children freely interact with a robot in a friendly, room-like environment. We propose a deep-learning-based multi-view solution that takes advantage of recent developments in human pose detection. We extract the child's pose from different RGB-D cameras placed regularly around the room, fuse the results, and feed them to a deep neural network trained to classify engagement levels. The network contains a recurrent layer in order to exploit the rich temporal information contained in the pose data. The resulting method outperforms a number of baseline classifiers and provides a promising tool for better automatic understanding of a child's attitude, interest and attention while cooperating with a robot. The goal is to integrate this model into next-generation social robots as an attention-monitoring tool during various Child-Robot Interaction (CRI) tasks, both for Typically Developing (TD) children and children with Autism Spectrum Disorder (ASD).
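The abstract does not detail how the per-camera pose estimates are combined; a minimal sketch of one plausible fusion step, assuming confidence-weighted averaging of 3D joint positions already expressed in a shared world frame (the function name and array shapes are illustrative, not from the paper):

```python
import numpy as np

def fuse_poses(poses, confidences):
    """Fuse per-camera 3D joint estimates by confidence-weighted averaging.

    poses:        (n_cameras, n_joints, 3) joint positions in a common world frame
    confidences:  (n_cameras, n_joints) per-joint detection confidences
    Returns a (n_joints, 3) fused skeleton.
    """
    w = confidences[:, :, None]                    # broadcast weights over x, y, z
    return (poses * w).sum(axis=0) / w.sum(axis=0)
```

The fused skeleton sequence would then be fed frame by frame to the recurrent classifier described in the abstract.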

    Multi Agent Systems

    Research on multi-agent systems is expanding our technical capabilities as humans and as an intelligent society. In recent years many effective applications have been implemented and have become part of our daily life, with agent-based models and methods as an important ingredient. Markets, finance, robotics, medical technology, social negotiation, video games and big-data science are some of the branches where the knowledge gained through multi-agent simulations is needed, and where new software-engineering tools are continuously created and tested to achieve an effective technology transfer that impacts our lives. This book brings together researchers from several fields, covering the techniques, challenges and applications of multi-agent systems in a wide variety of areas: learning algorithms for devices such as vehicles, robots and drones; computational optimization for more efficient energy distribution in power grids; and the use of social networks and decision strategies in smart learning and education environments in emerging countries. We hope this book can serve as a guide and reference for readers interested in the developments and applications of multi-agent systems.

    Neuromorphic Computing Systems for Tactile Sensing Perception

    Touch sensing plays an important role in humans' daily lives. Tasks like exploring, grasping and manipulating objects rely deeply on it. Robots and hand prostheses endowed with the sense of touch can manipulate objects better and more easily, and physically collaborate with other agents. Towards this goal, information about touched objects and surfaces has to be inferred from raw sensor data. The orientation of edges, which is employed as a pre-processing stage in both artificial vision and touch, is a key cue for object discrimination. Inspired by the encoding of edges in human first-order tactile afferents, we developed a biologically inspired spiking network architecture that mimics human tactile perception with computational primitives implementable on low-power subthreshold neuromorphic hardware. The architecture uses three layers of Leaky Integrate-and-Fire neurons to distinguish different orientations of a bar pressed on the artificial skin of the iCub robot. We demonstrated that the network can learn the appropriate connectivity through unsupervised spike-based learning, and that the number and spatial distribution of sensitive areas within receptive fields are important for edge-orientation discrimination. The unconstrained, random structure of the connectivity among layers can produce unbalanced activity in the output neurons, which are driven by a variable amount of synaptic input. We explored two mechanisms of synaptic normalization (weight normalization and homeostasis), showing how they help during both the learning and inference phases. With homeostasis and weight normalization, the network successfully discriminates 35 of the 36 edge orientations (0 to 180 degrees in 5-degree steps). Beyond edge-orientation discrimination, we modified the architecture to classify six touch modalities (poke, press, grab, squeeze, push, and rolling a wheel), achieving a total accuracy of 88.3 %. Furthermore, we considered tactile object-shape recognition because of its importance in robotic manipulation: a network with two layers of spiking neurons discriminated object shapes with 100 % accuracy after integrating an array of 160 piezoresistive tactile sensors on which the shapes were applied.
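As a rough illustration of the computational primitives involved, a minimal Leaky Integrate-and-Fire update with multiplicative weight normalization might look as follows; the time constant, threshold and normalization target here are illustrative assumptions, not the parameters used in the work:

```python
import numpy as np

def lif_step(v, spikes_in, w, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.

    v:         current membrane potential (scalar)
    spikes_in: binary vector of presynaptic spikes this step
    w:         synaptic weight vector (same length as spikes_in)
    Returns (new potential, fired flag); the potential is reset on a spike.
    """
    v = v + dt * (-v / tau + w @ spikes_in)  # leak plus weighted synaptic input
    fired = bool(v >= v_thresh)
    if fired:
        v = v_reset
    return v, fired

def normalize_weights(w, total=1.0):
    """Multiplicative normalization: rescale incoming weights to a fixed sum,
    one simple way to keep output activity balanced during learning."""
    return w * (total / w.sum())
```

In a learning loop, `normalize_weights` would be applied to each output neuron's incoming weight vector after every plasticity update.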

    From Constraints to Opportunities: Efficient Object Detection Learning for Humanoid Robots

    Reliable perception and efficient adaptation to novel conditions are priority skills for robots that operate in ever-changing environments. Operating autonomously in real-world scenarios requires identifying the different states of the context and acting accordingly. Moreover, the requested tasks might not be known a priori, requiring the system to update on-line. Robotic platforms can gather various types of perceptual information thanks to the multiple sensory modalities they are equipped with. Nonetheless, the latest results in computer vision motivate a particular interest in visual perception. In this thesis I focus mainly on the object detection task, since it can underpin more sophisticated capabilities. The vast advances brought to computer vision by deep learning methods are appealing in a robotic setting. However, their adoption in applied domains is not straightforward, since adapting them to new tasks is strongly demanding in terms of annotated data, optimization time and computational resources, requirements that generally do not fit current robotics constraints. Nevertheless, robotic platforms, and humanoids in particular, present opportunities that can be exploited: their sensors are precious sources of additional information, and their embodiment in the workspace and their motion capabilities allow for natural interaction with the environment. Motivated by these considerations, in this Ph.D. project I aimed at devising and developing solutions that integrate the worlds of computer vision and robotics, focusing on the task of object detection. Specifically, I dedicated a large amount of effort to alleviating the requirements of state-of-the-art methods in terms of annotated data and training time, while preserving their accuracy by exploiting these robotic opportunities.

    Activity Report 2020 : Automatic Control Lund University


    Acquisition and distribution of synergistic reactive control skills

    Learning from demonstration is an efficient way to attain a new skill. In the context of autonomous robots, using a demonstration to teach a robot accelerates the learning process significantly. It helps to identify feasible solutions as starting points for future exploration, or to avoid actions that lead to failure. But the acquisition of pertinent observations is predicated on first segmenting the data into meaningful sequences. These segments form the basis for learning models capable of recognising future actions and reconstructing the motion to control a robot. Furthermore, learning algorithms for generative models are generally not tuned to produce stable trajectories and suffer from parameter redundancy for high-degree-of-freedom robots. This thesis addresses these issues by first investigating algorithms, based on dynamic programming and mixture models, for segmentation sensitivity and recognition accuracy on human motion-capture data sets of repetitive and categorical motion classes. A stability analysis of the non-linear dynamical systems derived from the resulting mixture-model representations aims to ensure that trajectories converge to the target motion observed in the demonstrations. Finally, these concepts are extended to humanoid robots by deploying a factor analyser for each mixture-model component and coordinating the structure into a low-dimensional representation of the demonstrated trajectories. This representation can be constructed while a correspondence map is learned between the demonstrator and the robot for joint-space actions. Applying these algorithms to demonstrate movement skills to robots is a further step towards autonomous incremental robot learning.
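The thesis relies on dynamic programming and mixture models for segmentation; as a much simpler stand-in for the underlying idea of splitting demonstrations into meaningful sequences, one can cut a trajectory at low-speed (rest) frames. The thresholds and function name below are hypothetical, not taken from the work:

```python
import numpy as np

def segment_by_rest(positions, speed_thresh=0.05, min_len=5):
    """Split a motion trajectory into segments separated by low-speed frames.

    positions: (T, D) array of joint positions sampled at a fixed rate.
    Returns a list of (start, end) frame-index pairs for the moving segments.
    """
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    moving = speed > speed_thresh
    segments, start = [], None
    for t, m in enumerate(moving):
        if m and start is None:
            start = t                       # motion onset
        elif not m and start is not None:
            if t - start >= min_len:        # drop spurious short segments
                segments.append((start, t))
            start = None
    if start is not None and len(moving) - start >= min_len:
        segments.append((start, len(moving)))
    return segments
```

Each returned segment would then be a candidate training sequence for the recognition and reconstruction models described above.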

    Visual Prediction of Rover Slip: Learning Algorithms and Field Experiments

    Perception of the surrounding environment is an essential tool for intelligent navigation in any autonomous vehicle. In the context of Mars exploration, there is a strong motivation to enhance the perception of the rovers beyond geometry-based obstacle avoidance, so as to be able to predict potential interactions with the terrain. In this thesis we propose to remotely predict the amount of slip, which reflects the mobility of the vehicle on future terrain. The method is based on learning from experience and uses visual information from stereo imagery as input. We test the algorithm on several robot platforms and in different terrains. We also demonstrate its usefulness in an integrated system, onboard a Mars prototype rover in the JPL Mars Yard. Another desirable capability for an autonomous robot is to be able to learn about its interactions with the environment in a fully automatic fashion. We propose an algorithm which uses the robot's sensors as supervision for vision-based learning of different terrain types. This algorithm can work with noisy and ambiguous signals provided from onboard sensors. To be able to cope with rich, high-dimensional visual representations we propose a novel, nonlinear dimensionality reduction technique which exploits automatic supervision. The method is the first to consider supervised nonlinear dimensionality reduction in a probabilistic framework using supervision which can be noisy or ambiguous. Finally, we consider the problem of learning to recognize different terrains, which addresses the time constraints of an onboard autonomous system. We propose a method which automatically learns a variable-length feature representation depending on the complexity of the classification task. The proposed approach achieves a good trade-off between decrease in computational time and recognition performance.
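A deliberately simplified sketch of the learning-from-experience idea, assuming a plain linear least-squares model from visual terrain features to measured slip (the actual work uses richer nonlinear learning, and the feature descriptors here are placeholders):

```python
import numpy as np

def fit_slip_model(features, slip):
    """Least-squares fit of slip as a linear function of visual terrain features.

    features: (N, d) descriptors computed from stereo imagery
    slip:     (N,) slip measured while traversing the corresponding terrain
    Returns the (d + 1,) coefficient vector, last entry being the bias.
    """
    X = np.hstack([features, np.ones((len(features), 1))])  # append bias column
    coeffs, *_ = np.linalg.lstsq(X, slip, rcond=None)
    return coeffs

def predict_slip(coeffs, features):
    """Predict slip for unvisited terrain from its visual features."""
    X = np.hstack([features, np.ones((len(features), 1))])
    return X @ coeffs
```

At run time, `predict_slip` would be evaluated on features of candidate paths ahead of the rover, so high-slip terrain can be avoided before it is ever driven on.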

    Autonomous driving support systems for a 4-wheel skid-steer robotic platform: perception, motion and simulation

    Mobile robotics competitions play an important role in bringing science and engineering to the general public. They are also a space dedicated to testing and comparing different strategies and approaches to the many challenges of mobile robotics. One of the formats that has attracted the most interest from organizers and from the general public is the autonomous driving competition. Typically, Autonomous Driving Competitions attempt to replicate an environment similar to a traditional road structure, in which autonomous systems must respond to a wide variety of challenges, ranging from lane detection to interaction with the distinct elements of a typical road structure, and from trajectory planning to localization. The aim of this master's thesis is to document the process of designing and equipping a 4-wheel skid-steer mobile robotic platform to carry out autonomous driving tasks in a structured environment, on a track that replicates a motor roadway with basic signage and some obstacles. In parallel, the dissertation makes a qualitative analysis of the simulation process and of the transfer of the developed algorithms to a physical robotic platform, analysing the differences in performance and behavior.
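As a small illustration of the lane-detection challenge mentioned above, the lateral offset of bright lane markings in a single grayscale image row can be estimated from the centroid of above-threshold pixels; the threshold value and function name are illustrative, not from the thesis:

```python
import numpy as np

def lane_center_offset(row, thresh=200):
    """Estimate the lateral offset of lane markings in one image row.

    row: 1-D array of grayscale pixel intensities; markings are assumed bright.
    Returns the offset of the markings' centroid from the image centre, in
    pixels (negative = left of centre), or None if no marking pixels are found.
    """
    cols = np.flatnonzero(row >= thresh)
    if cols.size == 0:
        return None
    return float(cols.mean() - (len(row) - 1) / 2.0)
```

In a steering loop, this offset (averaged over several rows) would feed a controller that keeps the platform centred in the lane.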