
    Personalizing Human-Robot Dialogue Interactions using Face and Name Recognition

    Task-oriented dialogue systems are computer systems that aim to provide an interaction indistinguishable from ordinary human conversation while completing user-defined tasks. They achieve this by analysing the intents of users and choosing appropriate responses. Recent studies show that personalising conversations with these systems can positively affect their perception and long-term acceptance. Personalised social robots have been widely applied in different fields to provide assistance. In this thesis we work on the development of a scientific conference assistant. The goal of this assistant is to provide conference participants with conference information and to suggest activities for their spare time during the conference. Moreover, to increase engagement with the robot, our team personalised the human-robot interaction by means of face and name recognition. To achieve this personalisation, the name-recognition ability of the available physical robot was first improved; next, with the consent of the participants, their pictures were taken and used to memorise returning users. As acquiring consent for personal data storage is not an optimal solution, an alternative method for recognising participants using QR codes on their badges was developed and compared to the pre-trained model in terms of speed. Lastly, the personal details of each participant, such as university and country of origin, were acquired prior to the conference or during the conversation and used in dialogues. The developed robot, called DAGFINN, was displayed at two conferences held this year in Stavanger, where the first installment did not include the personalisation feature. Hence, we conclude this thesis by discussing the influence of personalisation on dialogues with the robot and participants' satisfaction with the developed social robot.
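The QR-code alternative described above amounts to a lookup keyed by the badge payload, which sidesteps storing face images. A minimal sketch, assuming each badge's QR code encodes a JSON payload with a participant ID (the payload format, registry contents, and greeting wording are all illustrative assumptions, not the thesis implementation):

```python
import json

# Hypothetical participant registry, populated before the conference.
REGISTRY = {
    "p-017": {"name": "Alice", "university": "University of Stavanger", "country": "Norway"},
    "p-042": {"name": "Bob", "university": "NTNU", "country": "Norway"},
}

def greet_from_badge(qr_payload: str, seen: set) -> str:
    """Decode a badge QR payload, look the participant up, build a greeting.

    `seen` tracks IDs already greeted, so returning users get a
    welcome-back message instead of the first-time greeting.
    """
    record = json.loads(qr_payload)
    participant = REGISTRY.get(record.get("id"))
    if participant is None:
        return "Welcome to the conference!"  # unknown badge: generic fallback
    if record["id"] in seen:
        return f"Welcome back, {participant['name']}!"
    seen.add(record["id"])
    return (f"Hello {participant['name']} from "
            f"{participant['university']}, {participant['country']}!")

seen_ids: set = set()
first = greet_from_badge('{"id": "p-017"}', seen_ids)
second = greet_from_badge('{"id": "p-017"}', seen_ids)
```

Because decoding a QR string and a dictionary lookup are constant-time operations, this kind of recognition is naturally faster than running a face-recognition model per frame, which is presumably what the speed comparison in the thesis measures.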

    Solving Multi-agent planning tasks by using automated planning

    This dissertation consists of developing a control system for an autonomous multi-agent system that uses Automated Planning and Computer Vision to solve warehouse organization tasks. The work presents a heterogeneous multi-agent system in which each robot has different capabilities, so the robots must collaborate to complete the proposed task. On one hand, coordinator robots collect information about the boxes using Computer Vision to determine their destination storage positions. On the other hand, cargo robots push the boxes to their destinations more easily than the coordinators, but they have no cameras with which to identify the boxes. Both robots must therefore collaborate to solve the warehouse problem, given the different sensors and actuators they have available. This work has been developed in Java. It uses JNAOqi to communicate with the NAO robots (coordinators) and rosjava to communicate with the P3DX robots (cargo robots). The control modules are deployed in the PELEA architecture. The empirical evaluation has been conducted in a real environment using two robots: one NAO robot and one P3DX robot.
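The division of labour between sensing and acting robots can be sketched as a two-phase loop: the coordinator resolves each box's destination (standing in for the Computer Vision step), and the camera-less cargo robot executes the push. The class names and one-dimensional world model are illustrative assumptions; the actual system is written in Java on JNAOqi, rosjava and PELEA.

```python
class Coordinator:
    """Perceiving agent: knows each box's destination (stands in for vision)."""
    def __init__(self, destinations):
        self.destinations = destinations  # box label -> storage position

    def identify(self, box):
        return self.destinations[box]

class CargoRobot:
    """Acting agent: pushes boxes to a goal it is told, without a camera."""
    def __init__(self):
        self.positions = {}  # box label -> final position

    def push(self, box, start, goal):
        # Move the box one cell at a time toward the goal position.
        pos = start
        while pos != goal:
            pos += 1 if goal > pos else -1
        self.positions[box] = pos

def solve_warehouse(boxes, coordinator, cargo):
    """Coordinator perceives each box's goal; cargo executes the push."""
    for box, start in boxes.items():
        goal = coordinator.identify(box)  # perception step (coordinator)
        cargo.push(box, start, goal)      # actuation step (cargo)
    return cargo.positions

coordinator = Coordinator({"A": 5, "B": 2})
cargo = CargoRobot()
final = solve_warehouse({"A": 0, "B": 7}, coordinator, cargo)
```

The point of the sketch is the interface: neither agent can complete the task alone, because perception and actuation live on different robots.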

    Perspectives and approaches for the internet of things

    Dissertation for obtaining the Master's degree in Electrical and Computer Engineering. This thesis was developed based on a scenario in which the CEO of a certain company asked the author to conduct an exploratory work evaluating the potential opportunities and limitations of an emerging area described as the future of the Internet: the Internet of Things (IoT). The objective is thus to provide the reader with a wide view of the points vital to the implementation and exploitation of the IoT, a technology that promises to deliver a new and wider range of applications to society. In this subject there is a need to gather and organize information produced by several researchers and contributors. Because this is a new area and researchers work independently of each other, the work is scattered and inconsistencies can be found among different projects and publications. As such, in a first stage some definitions are provided and an attempt is made to clarify concepts. To support and emphasize the exponential growth of the IoT, a brief historical overview is provided, based on the new trends and expectations that arise every day through news, potential businesses and important tools such as Google Trends. Several examples of applications in the context of the IoT illustrate the benefits, not only for society, but also for business opportunities, safety, and well-being. The main areas of interest for achieving the IoT, such as hardware, software, modeling, connection methods, security and integration, are studied in this work in order to provide some insight into current strong and weak points. As the Internet of Things becomes a matter of large interest, various research groups are active in exploring and organizing projects in this area. Some of these projects, namely the ones considered most important, are also presented in this thesis.
Taking into account the facts surrounding this new technology, it becomes quite important to bring them together, clarify them and try to open new perspectives for further studies and improvements. Finally, in order to allow a practical evaluation of the technology, a prototype is developed around the connection of an intelligent object – a small mobile robot – to the Internet. A set of conclusions and future work directions are then presented, taking into account the findings of the bibliographic analysis as well as the experience acquired while implementing the prototype.
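The prototype's idea of an "intelligent object on the Internet" can be illustrated as a robot that publishes its state as a small machine-readable document that a remote client could fetch. The state fields and JSON encoding are illustrative assumptions; the thesis abstract does not specify the prototype's actual interface.

```python
import json

class ConnectedRobot:
    """Toy networked object: local state plus a serializable status report."""
    def __init__(self, name):
        self.name = name
        self.battery = 100    # percent, illustrative
        self.heading_deg = 0  # current heading in degrees

    def drive(self, turn_deg, battery_cost):
        # Update local state as the robot moves; sensors would do this in reality.
        self.heading_deg = (self.heading_deg + turn_deg) % 360
        self.battery -= battery_cost

    def status_document(self) -> str:
        """Serialize current state for publication over the network."""
        return json.dumps({
            "name": self.name,
            "battery": self.battery,
            "heading_deg": self.heading_deg,
        }, sort_keys=True)

bot = ConnectedRobot("rover-1")
bot.drive(turn_deg=90, battery_cost=5)
doc = json.loads(bot.status_document())
```

In a full IoT deployment this document would be served over HTTP or a publish/subscribe protocol such as MQTT; the sketch only shows the state-to-document boundary.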

    Automation and Robotics: Latest Achievements, Challenges and Prospects

    This Special Issue presents the latest achievements, challenges and prospects for drives, actuators, sensors, controls and robot navigation with reverse validation and applications in the field of industrial automation and robotics. Automation, supported by robotics, can effectively speed up and improve production. The industrialization of complex mechatronic components, especially robots, requires a large number of special processes already in the pre-production stage, provided by modelling and simulation. This area of research has from the very beginning included drives, process technology, actuators, sensors, control systems and all connections in mechatronic systems. Automation and robotics form broad-spectrum, tightly interconnected areas of research. To reduce costs in the pre-production stage and to shorten production preparation time, it is necessary to solve complex tasks through simulation, using standard software products and new technologies that allow, for example, machine vision and other imaging tools to examine new physical contexts, dependencies and connections.

    Human-Robot Perception in Industrial Environments: A Survey

    Perception capability assumes significant importance for human–robot interaction. Forthcoming industrial environments will require a high level of automation to be flexible and adaptive enough to comply with increasingly fast and low-cost market demands. Autonomous and collaborative robots able to adapt to varying and dynamic conditions of the environment, including the presence of human beings, will have an ever-greater role in this context. However, if the robot is not aware of human position and intention, a workspace shared between robots and humans may decrease productivity and lead to human safety issues. This paper presents a survey on sensory equipment useful for human detection and action recognition in industrial environments. An overview of different sensors and perception techniques is presented. Various types of robotic systems commonly used in industry, such as fixed-base manipulators, collaborative robots, mobile robots and mobile manipulators, are considered, analyzing the most useful sensors and methods to perceive and react to the presence of human operators in industrial cooperative and collaborative applications. The paper also introduces two proofs of concept, developed by the authors for future collaborative robotic applications that benefit from enhanced capabilities of human perception and interaction. The first concerns fixed-base collaborative robots and proposes a solution for human safety in tasks requiring human collision avoidance or moving-obstacle detection. The second proposes a collaborative behavior implementable on autonomous mobile robots pursuing assigned tasks within an industrial space shared with human operators.
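A common way to realise the "perceive and react to human presence" behaviour surveyed above is a speed-and-separation policy, in which the robot's commanded speed is scaled by the distance to the nearest detected human. A minimal sketch, where the zone thresholds and linear scaling are illustrative assumptions rather than values from the paper:

```python
def safe_speed(human_distance_m: float, nominal_speed: float) -> float:
    """Scale robot speed by proximity of the nearest detected human.

    Three zones (thresholds are illustrative):
      < 0.5 m   -> protective stop
      0.5-1.5 m -> speed scaled linearly with remaining clearance
      >= 1.5 m  -> full nominal speed
    """
    if human_distance_m < 0.5:
        return 0.0
    if human_distance_m < 1.5:
        return nominal_speed * (human_distance_m - 0.5) / 1.0
    return nominal_speed

stopped = safe_speed(0.3, 1.0)
slowed = safe_speed(1.0, 1.0)
full = safe_speed(2.0, 1.0)
```

Any of the sensors the survey covers (safety-rated laser scanners, depth cameras, wearable trackers) could supply `human_distance_m`; the policy layer is deliberately sensor-agnostic.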

    Multimodal sensor-based human-robot collaboration in assembly tasks

    This work presents a framework for Human-Robot Collaboration (HRC) in assembly tasks that uses multimodal sensors, perception and control methods. First, vision sensing is employed for user identification to determine the collaborative task to be performed. Second, assembly actions and hand gestures are recognised using wearable inertial measurement units (IMUs) and convolutional neural networks (CNNs) to identify when robot collaboration is needed and to bring the next object to the user for assembly. If collaboration is not required, the robot performs a solo task. Third, the robot arm uses time-domain features from tactile sensors to detect when an object has been touched and grasped for handover actions in the assembly process. These multimodal sensors and computational modules are integrated in a layered control architecture for HRC assembly tasks. The proposed framework is validated in real time using a Universal Robots arm (UR3) to collaborate with humans in assembling two types of objects, 1) a box and 2) a small chair, and to work on a solo task of moving a stack of Lego blocks when collaboration with the user is not needed. The experiments show that the robot is capable of sensing and perceiving the state of the surrounding environment using multimodal sensors and computational methods to act and collaborate with humans to complete assembly tasks successfully.
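The three sensing stages described above suggest a layered decision rule: vision selects the task, gesture recognition decides when to bring a part, and tactile contact confirms the handover. A sketch of one control-loop decision, where the input names and returned action strings are illustrative assumptions, not the paper's actual modules:

```python
def hrc_step(user_task: str, gesture: str, object_touched: bool) -> str:
    """One control-loop decision combining the three sensing modalities.

    user_task      -- task inferred from vision-based user identification
                      (None when no collaborator is identified)
    gesture        -- label from the IMU/CNN gesture recogniser
    object_touched -- tactile-sensor flag for contact during handover
    """
    if user_task is None:
        return "solo_task"             # no collaborator: work alone
    if gesture == "request_part":
        if object_touched:
            return "release_object"    # tactile confirms the user's grasp
        return "bring_next_object"     # move the part toward the user
    return "wait"                      # collaborating, but no request yet

a = hrc_step("assemble_box", "request_part", object_touched=False)
b = hrc_step("assemble_box", "request_part", object_touched=True)
c = hrc_step(None, "", object_touched=False)
```

Layering the modalities this way keeps each sensor's failure mode contained: a missed gesture only delays a handover, while the tactile check still gates the actual release.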

    Exploring multimedia and interactive technologies

    The goal of multimedia design strategies and innovation is to produce meaningful learning environments that relate to and build upon what the learner already knows and what the learner seeks. The multimedia tools used to achieve knowledge transfer should activate recall of prior knowledge and help the learner alter and encode new structures. Traditionally, multimedia has been localized to specific delivery systems and demographics based on government, industry, or academic concentration. The presenter will explore the introduction of immersive telecommunications technologies, constructivist learning methodologies, and adult learning models to standardize networking and multimedia-based services and products capable of adapting to wired and wireless environments, different devices and conditions on a global scale.
