
    Neuro-Controllers, scalability and adaptation

    A Layered Evolution (LE) paradigm based method for the generation of a neuro-controller is developed and verified through simulations and experiments. It is intended to solve scalability issues in systems with many behavioral modules, where each module is a genetically evolved neuro-controller specialized in performing a different task. The main goal is to combine different basic behavioral elements, implemented with different artificial neural-network paradigms, for mobile robot navigation in an unknown environment. The obtained controller is evaluated over different scenarios in a structured environment, ranging from a detailed simulation model to a real experiment. Finally, the most important implications are highlighted from several perspectives.
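
    As a rough illustration of the layered-evolution idea described above (separately evolved neural modules combined by a priority-based arbiter), the following Python sketch uses assumed module names (goto_goal, avoid_obstacle), random stand-in weights in place of genetically evolved ones, and a subsumption-style rule that is not necessarily the paper's arbitration mechanism:

```python
import numpy as np

class EvolvedNet:
    """Tiny feedforward network whose weights would come from a genetic algorithm."""
    def __init__(self, w_in, w_out):
        self.w_in, self.w_out = w_in, w_out

    def forward(self, sensors):
        hidden = np.tanh(sensors @ self.w_in)
        return np.tanh(hidden @ self.w_out)      # e.g. [left_speed, right_speed]

class LayeredController:
    """Higher layers subsume lower ones when their trigger fires (assumed arbitration rule)."""
    def __init__(self, layers):
        # layers: list of (trigger, module) ordered from lowest to highest priority
        self.layers = layers

    def act(self, sensors):
        command = np.zeros(2)
        for trigger, module in self.layers:      # later (higher) layers overwrite earlier ones
            if trigger(sensors):
                command = module.forward(sensors)
        return command

rng = np.random.default_rng(0)
goto_goal      = EvolvedNet(rng.normal(size=(8, 6)), rng.normal(size=(6, 2)))
avoid_obstacle = EvolvedNet(rng.normal(size=(8, 6)), rng.normal(size=(6, 2)))

controller = LayeredController([
    (lambda s: True,              goto_goal),        # base behavior: always active
    (lambda s: s[:4].max() > 0.8, avoid_obstacle),   # overrides when an obstacle is close
])

print(controller.act(rng.uniform(size=8)))
```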

    A Hierarchical Extension of the D* Algorithm

    In this paper a contribution to the practice of path planning is introduced, based on a new hierarchical extension of the D* algorithm. A hierarchical graph, stratified into several abstraction levels, is used to model environments for path planning. The hierarchical D* algorithm uses a down-top strategy and a set of pre-calculated trajectories in order to improve performance, which preserves optimality and, especially, reduces computational time. It is experimentally shown that hierarchical search algorithms and on-line path planning algorithms based on topological abstractions can be combined successfully.
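
    A minimal sketch of planning on a stratified graph: search coarsely over abstract regions, then refine only through nodes belonging to regions on the coarse path. The graph layout, the node naming and the use of plain Dijkstra in place of D* are illustrative assumptions, not the algorithm from the paper:

```python
import heapq

def dijkstra(graph, start, goal, allowed=None):
    """Plain Dijkstra stand-in for D*; 'allowed' optionally restricts the nodes searched."""
    frontier, seen = [(0.0, start, [start])], set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, []):
            if nxt in seen or (allowed and nxt not in allowed):
                continue
            heapq.heappush(frontier, (cost + c, nxt, path + [nxt]))
    return float("inf"), []

# Detailed graph: nodes tagged with the abstract region ("room") they belong to.
detailed = {
    "A1": [("A2", 1)], "A2": [("A1", 1), ("B1", 2)],
    "B1": [("A2", 2), ("B2", 1)], "B2": [("B1", 1), ("C1", 2)],
    "C1": [("B2", 2), ("C2", 1)], "C2": [("C1", 1)],
}
region_of = {n: n[0] for n in detailed}                      # A1 -> region A, etc.
regions   = {"A": [("B", 2)], "B": [("A", 2), ("C", 2)], "C": [("B", 2)]}

# 1) Plan coarsely over regions, 2) refine on the detailed graph but only
#    through nodes whose region appears on the coarse path.
_, coarse = dijkstra(regions, "A", "C")
allowed_nodes = {n for n, r in region_of.items() if r in coarse}
cost, fine = dijkstra(detailed, "A1", "C2", allowed=allowed_nodes)
print("coarse:", coarse, "| fine:", fine, "| cost:", cost)
```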

    Hierarchical D* algorithm with materialization of costs for robot path planning

    In this paper a new hierarchical extension of the D* algorithm for robot path planning is introduced. The hierarchical D* algorithm uses a down-top strategy and a set of pre-calculated paths (materialization of path costs) in order to improve performance. This on-line path planning algorithm preserves optimality and, especially, reduces computational time. H-Graphs (hierarchical graphs) are modified and adapted to support on-line path planning with materialization of costs and multiple hierarchical levels. Traditional on-line robot path planning, focused on horizontal spaces, is also extended to vertical and inter-building spaces. Experimental results are shown and compared with other path planning algorithms.
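
    A minimal sketch of what materializing path costs can mean in practice: all-pairs costs inside a region are precomputed once (here with Floyd-Warshall) so that, at the upper hierarchical level, a doorway-to-doorway traversal becomes a single cached edge. The room layout and node names are assumptions for illustration only:

```python
from itertools import product

def floyd_warshall(nodes, edges):
    """Precompute all-pairs shortest-path costs inside one region (the 'materialized' costs)."""
    INF = float("inf")
    dist = {(a, b): (0 if a == b else INF) for a, b in product(nodes, repeat=2)}
    for a, b, c in edges:                      # undirected edges
        dist[a, b] = min(dist[a, b], c)
        dist[b, a] = min(dist[b, a], c)
    for k, i, j in product(nodes, repeat=3):   # k is the outermost loop
        if dist[i, k] + dist[k, j] < dist[i, j]:
            dist[i, j] = dist[i, k] + dist[k, j]
    return dist

# One room with two doorway ("boundary") nodes d1 and d2 (assumed layout).
room_nodes = ["d1", "x", "y", "d2"]
room_edges = [("d1", "x", 1), ("x", "y", 2), ("y", "d2", 1), ("d1", "y", 4)]

# Materialize once: the doorway-to-doorway cost becomes a single abstract edge,
# so later queries at the upper hierarchical level never re-expand the room.
materialized = floyd_warshall(room_nodes, room_edges)
abstract_edge_cost = materialized["d1", "d2"]
print("materialized d1->d2 cost:", abstract_edge_cost)   # 1 + 2 + 1 = 4
```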

    Towards an Interactive Humanoid Companion with Visual Tracking Modalities

    The idea of robots acting as human companions is not a particularly new or original one. Since the notion of “robot” was created, the idea of robots replacing humans in dangerous, dirty and dull activities has been inseparably tied with the fantasy of human-like robots being friends and existing side by side with humans. In 1989, Engelberger (Engelberger

    Body gestures recognition for human robot interaction

    In this project, a solution for human gesture classification is proposed. The solution uses a Deep Learning model and is meant to be useful for non-verbal communication between humans and robots. The state of the art is reviewed in an effort to achieve a model ready to work with natural gestures without restrictions. The research focuses on the creation of a temPoral bOdy geSTUre REcognition model (POSTURE) that can recognise continuous gestures performed in real-life situations. The suggested model takes into account spatial and temporal components so as to achieve the recognition of more natural and intuitive gestures. In a first step, a framework extracts from all the images the corresponding landmarks for each of the body joints. Next, some data filtering techniques are applied with the aim of avoiding problems related to the data. Afterwards, the filtered data is fed into a state-of-the-art neural network. Finally, different neural network configurations and approaches are tested to find the optimal performance. The obtained outcome shows that the research is on the right track and that, despite the dataset problems found, even better results can be achieved.
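
    A minimal sketch of the kind of pipeline the abstract outlines (landmark extraction, temporal filtering, a recurrent classifier). The pose extractor is mocked with random landmarks, and the PyTorch LSTM model, the moving-average filter and all sizes are assumptions rather than the POSTURE architecture:

```python
import torch
import torch.nn as nn

NUM_JOINTS, NUM_GESTURES, SEQ_LEN = 17, 5, 30   # assumed sizes, not the paper's

def extract_landmarks(frames):
    """Stand-in for a pose-estimation framework: one (x, y) pair per body joint per frame."""
    return torch.rand(len(frames), NUM_JOINTS * 2)

def smooth(landmarks, window=5):
    """Simple moving-average filter over time to reduce landmark jitter."""
    kernel = torch.ones(1, 1, window) / window
    x = landmarks.T.unsqueeze(1)                         # (features, 1, time)
    return nn.functional.conv1d(x, kernel, padding=window // 2).squeeze(1).T

class GestureClassifier(nn.Module):
    """Recurrent model over the joint-landmark sequence (spatial + temporal information)."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(NUM_JOINTS * 2, 64, batch_first=True)
        self.head = nn.Linear(64, NUM_GESTURES)

    def forward(self, sequence):                         # (batch, time, features)
        _, (hidden, _) = self.lstm(sequence)
        return self.head(hidden[-1])                     # logits per gesture class

frames = [f"frame_{i}" for i in range(SEQ_LEN)]
landmarks = smooth(extract_landmarks(frames))
logits = GestureClassifier()(landmarks.unsqueeze(0))     # add batch dimension
print("predicted gesture:", logits.argmax(dim=1).item())
```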

    iGrace – Emotional Computational Model for EmI Companion Robot.

    Chapter 4. In this chapter we discuss research in the field of emotional interaction, aimed at maintaining a non-verbal interaction with children from 4 to 8 years old. This work fits into the EmotiRob project, whose goal is to comfort vulnerable and/or hospitalized children with an emotional robot companion. The use of robots in hospitals is still limited; we therefore decided to put forward a simple robot architecture and, consequently, simple emotional expression. In this context, a robot that is too complex or too voluminous must be avoided. After a study of current research on emotional perception and synthesis, it was important to determine the most appropriate way to express emotions in order to obtain a recognition rate acceptable to our target audience. Following an experiment on this subject, we were able to determine the degrees of freedom needed for the robot to express the six primary emotions. The second step was the definition and description of our emotional model. In order to have a wide range of expressions while respecting the number of degrees of freedom, we use the concept of emotional experiences, which provides almost two hundred different behaviors for the model. However, as a first step we decided to limit ourselves to only fifty behaviors. This diversification is possible thanks to a mix of emotions linked to the dynamics of emotions. With this theoretical model established, we started various experiments on a variety of audiences in order to validate both its relevance and the recognition rate of the emotions. The first experiment was performed using a simulator for the capture of speech and for the emotional and behavioral synthesis of the robot. It validates the model assumptions that will be integrated into EMI (Emotional Model of Interaction). Future phases of the project will evaluate the robot, both in its expressiveness and in the comfort it provides to children. We describe the protocols used and present the results for EMI. These experiments will allow us to adjust and adapt the model. We finish the chapter with a brief description of the robot's architecture and the improvements to be made for the second version of EMI.
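
    A loosely hedged sketch of how a mix of decaying primary-emotion intensities could be mapped to a small set of expressive behaviors for a robot with few degrees of freedom; the decay rule, the behavior table and the servo targets are invented for illustration and are not the iGrace model:

```python
PRIMARY = ["joy", "sadness", "anger", "fear", "surprise", "disgust"]

class EmotionalState:
    """Intensities of the six primary emotions, decaying over time (assumed dynamics)."""
    def __init__(self, decay=0.8):
        self.intensity = {e: 0.0 for e in PRIMARY}
        self.decay = decay

    def stimulate(self, emotion, amount):
        self.intensity[emotion] = min(1.0, self.intensity[emotion] + amount)

    def step(self):
        for e in PRIMARY:
            self.intensity[e] *= self.decay          # emotions fade if not re-stimulated

    def dominant_mix(self, k=2):
        """The top-k emotions and their weights: the 'mix' that selects a behavior."""
        top = sorted(self.intensity.items(), key=lambda kv: -kv[1])[:k]
        return [(e, round(i, 2)) for e, i in top if i > 0]

# Hypothetical mapping from an emotion mix to one of the robot's expressive behaviors
# (eyebrow and mouth servo targets for a low-degree-of-freedom head).
BEHAVIORS = {
    ("joy",): {"eyebrows": +0.3, "mouth": +0.8},
    ("joy", "surprise"): {"eyebrows": +0.9, "mouth": +0.6},
    ("sadness",): {"eyebrows": -0.6, "mouth": -0.7},
}

state = EmotionalState()
state.stimulate("joy", 0.9)
state.stimulate("surprise", 0.4)
state.step()
mix = tuple(e for e, _ in state.dominant_mix())
print("mix:", mix, "-> servo targets:", BEHAVIORS.get(mix, BEHAVIORS[("joy",)]))
```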