1,810 research outputs found

    Advances in Human-Robot Interaction

    Get PDF
    Rapid advances in the field of robotics have made it possible to use robots not just in industrial automation but also in entertainment, rehabilitation, and home service. Since robots will likely affect many aspects of human existence, fundamental questions of human-robot interaction must be formulated and, if at all possible, resolved. Some of these questions are addressed in this collection of papers by leading HRI researchers.

    Towards a framework for socially interactive robots

    Get PDF
    In recent decades, research in the field of social robotics has grown considerably. Different types of robots are being developed and their roles within society are gradually expanding. Robots endowed with social skills are intended for a variety of applications; for example, as interactive teachers and educational assistants, to support diabetes management in children, to help elderly people with special needs, as interactive actors in the theatre, or even as assistants in hotels and shopping centres. The RSAIT research team has been working in several areas of robotics, in particular control architectures, robot exploration and navigation, machine learning, and computer vision. The work presented in this research aims to add a new layer to that previous development: the human-robot interaction layer, which focuses on the social capabilities a robot should display when interacting with people, such as expressing and perceiving emotions, sustaining high-level dialogue, learning models of other agents, establishing and maintaining social relationships, using natural means of communication (gaze, gestures, etc.), displaying a distinctive personality and character, and learning social competencies. In this doctoral thesis we try to contribute to the basic questions that arise when we think about social robots: (1) How do humans communicate with (or operate) social robots? and (2) How should social robots act with us? Along these lines, the work was carried out in two phases: in the first, we focused on exploring, from a practical point of view, several ways in which humans communicate with robots in a natural manner; in the second, we investigated how social robots should act with the user. Regarding the first phase, we developed three natural user interfaces intended to make interaction with social robots more natural. To test these interfaces, two applications with different purposes were developed: guide robots and a humanoid-robot control system for entertainment. Working on these applications allowed us to equip our robots with some basic abilities, such as navigation, robot-to-robot communication, and speech recognition and understanding. In the second phase, we focused on identifying and developing the basic behaviour modules that this kind of robot needs in order to be socially believable and trustworthy while acting as a social agent. A framework for socially interactive robots was developed that allows robots to express different types of emotions and to display natural, human-like body language according to the task at hand and the environmental conditions. The different stages of development of our social robots were validated through public performances; exposing our robots to the public in these performances became an essential tool for qualitatively measuring the social acceptance of the prototypes we are developing.
    In the same way that robots need a physical body to interact with the environment and become intelligent, social robots need to engage socially in the real tasks for which they were developed in order to improve their sociability.
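
    As a purely illustrative aid (not the framework described in the thesis), the sketch below shows one simple way a socially interactive robot might choose an expressed emotion and an accompanying body-language gesture from the current task outcome and an environmental condition; all names, thresholds, and rules are assumptions introduced here.

```python
# Hypothetical sketch (not the thesis framework): pick an expressed emotion
# and a body-language gesture from the task outcome and crowd level.

from dataclasses import dataclass
from typing import Tuple


@dataclass
class Context:
    task_outcome: str    # "success", "failure", or "in_progress"
    crowd_level: float   # 0.0 (empty room) .. 1.0 (very crowded)


def choose_expression(ctx: Context) -> Tuple[str, str]:
    """Return (emotion, gesture) for the robot to display."""
    if ctx.task_outcome == "failure":
        return "sadness", "lowered_head"
    if ctx.task_outcome == "success":
        # Tone the celebration down in a crowded space.
        return ("joy", "small_nod") if ctx.crowd_level > 0.7 else ("joy", "raise_arms")
    return "neutral", "idle_sway"


if __name__ == "__main__":
    print(choose_expression(Context(task_outcome="success", crowd_level=0.2)))
    # ('joy', 'raise_arms')
```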

    Robotics 2010

    Get PDF
    Without a doubt, robotics has made incredible progress over the last decades. The vision of developing, designing, and creating technical systems that help humans achieve hard and complex tasks has led to an incredible variety of solutions. Few technical fields exhibit more interdisciplinary interconnections than robotics. This stems from the highly complex challenges posed by robotic systems, especially the requirement for intelligent and autonomous operation. This book tries to give an insight into the evolutionary process taking place in robotics. It provides articles covering a wide range of this exciting area. The progress of technical challenges and concepts may illuminate the relationship between developments that seem completely different at first sight. Robotics remains an exciting scientific and engineering field. The community looks ahead optimistically and looks forward to future challenges and new developments.

    Enhanced Living Environments

    Get PDF
    This open access book was prepared as the Final Publication of the COST Action IC1303 “Algorithms, Architectures and Platforms for Enhanced Living Environments (AAPELE)”. The concept of Enhanced Living Environments (ELE) refers to the area of Ambient Assisted Living (AAL) that is most closely related to Information and Communication Technologies (ICT). Effective ELE solutions require appropriate ICT algorithms, architectures, platforms, and systems, taking into account the advance of science and technology in this area and the development of new and innovative solutions that can improve the quality of life of people in their homes and reduce the financial burden on healthcare providers' budgets. The aim of this book is to serve as a state-of-the-art reference, discussing progress made as well as prompting future directions on theories, practices, standards, and strategies related to the ELE area. The book contains 12 chapters and can serve as a valuable reference for undergraduate students, post-graduate students, educators, faculty members, researchers, engineers, medical doctors, healthcare organizations, insurance companies, and research strategists working in this area.

    State of the art of audio- and video based solutions for AAL

    Get PDF
    Working Group 3. Audio- and Video-based AAL Applications
    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than the hindrance other wearable sensors may cause to one's activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
    A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethically aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potentials offered by the silver economy are overviewed.
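
    As a purely illustrative aid (not material from the report), the sketch below shows how one of the video-based AAL functions listed above, fall detection, can be prototyped once a pose estimator already provides per-frame hip-keypoint heights; all thresholds and names are assumptions introduced here.

```python
# Minimal illustrative sketch of a video-based AAL function (fall detection).
# It assumes a pose estimator has already produced per-frame hip heights (in
# metres) and flags a fall when the height drops quickly and then stays low.

from dataclasses import dataclass
from typing import List


@dataclass
class FallDetector:
    drop_threshold: float = 0.6      # metres the hip must drop
    drop_window_s: float = 1.0       # ... within this many seconds
    lying_height: float = 0.4        # hip height considered "on the floor"
    lying_duration_s: float = 3.0    # must stay low this long to confirm

    def detect(self, hip_heights: List[float], fps: float) -> bool:
        """Return True if the height series contains a fall-like event."""
        drop_frames = max(1, int(self.drop_window_s * fps))
        lying_frames = max(1, int(self.lying_duration_s * fps))
        for i in range(len(hip_heights) - drop_frames):
            start, end = hip_heights[i], hip_heights[i + drop_frames]
            if start - end < self.drop_threshold:
                continue  # no rapid drop starting at this frame
            after = hip_heights[i + drop_frames:i + drop_frames + lying_frames]
            if len(after) == lying_frames and max(after) < self.lying_height:
                return True  # rapid drop followed by a sustained low posture
        return False


if __name__ == "__main__":
    # Synthetic example: standing (~0.9 m), sudden drop, then lying (~0.2 m).
    series = [0.9] * 30 + [0.6, 0.4, 0.25] + [0.2] * 120
    print(FallDetector().detect(series, fps=30))  # True
```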

    Formation control of autonomous vehicles with emotion assessment

    Get PDF
    Autonomous driving is a major state-of-the-art step that has the potential to fundamentally transform the mobility of individuals and goods. Most developed autonomous ground vehicles (AGVs) aim to sense the surroundings and control the vehicle autonomously with limited or no driver intervention. However, humans are a vital part of such vehicle operations. Therefore, an approach to understanding human emotions and creating trust between humans and machines is necessary. This thesis proposes a novel approach for multiple AGVs, consisting of a formation controller and human emotion assessment for autonomous driving and collaboration. As the interaction between multiple AGVs is essential, the performance of two multi-robot control algorithms is analysed and a platoon formation controller is proposed. As the interaction between AGVs and humans is equally essential to creating trust between them, a human emotion assessment method is proposed and used as feedback for autonomous AGV decisions. To realise this concept, a novel simulation platform is developed for navigating multiple AGVs and testing the controllers. In addition to this simulation tool, a method is proposed to assess human emotion using the affective dimension (valence-arousal) model and physiological signals such as the electrocardiogram (ECG) and photoplethysmogram (PPG). Experiments verify that humans' felt arousal and valence levels can be measured and translated into different emotions for autonomous driving operations. The per-subject classification accuracy is statistically significant and validates the proposed emotion assessment method. Finally, a simulation is conducted to verify the effect of the different emotions on AGVs' velocity control during driving tasks.
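
    To make the idea of using emotion assessment as feedback for vehicle control concrete, the following is a minimal, hypothetical sketch (not taken from the thesis): it maps a valence-arousal reading, such as one derived from ECG/PPG features, onto a coarse emotion quadrant and adjusts a platoon's cruise speed and spacing accordingly. The quadrant labels, thresholds, and scaling factors are illustrative assumptions.

```python
# Hypothetical sketch: map a valence-arousal reading onto a coarse emotion
# quadrant and use it to adapt platoon cruise speed and inter-vehicle gap.
# Labels, thresholds, and scaling factors are assumptions, not thesis values.

from dataclasses import dataclass
from typing import Tuple


def emotion_quadrant(valence: float, arousal: float) -> str:
    """Classify a (valence, arousal) pair in [-1, 1]^2 into a quadrant."""
    if valence >= 0:
        return "excited" if arousal >= 0 else "relaxed"
    return "stressed" if arousal >= 0 else "bored"


@dataclass
class PlatoonPolicy:
    base_speed_mps: float = 20.0   # nominal cruise speed
    base_gap_m: float = 10.0       # nominal inter-vehicle gap

    def adapt(self, valence: float, arousal: float) -> Tuple[float, float]:
        """Return (target speed, target gap) given the passenger's state."""
        emotion = emotion_quadrant(valence, arousal)
        if emotion == "stressed":       # slow down and open the gap
            return self.base_speed_mps * 0.8, self.base_gap_m * 1.5
        if emotion == "excited":        # keep speed, slightly larger gap
            return self.base_speed_mps, self.base_gap_m * 1.2
        return self.base_speed_mps, self.base_gap_m  # relaxed or bored


if __name__ == "__main__":
    policy = PlatoonPolicy()
    print(policy.adapt(valence=-0.6, arousal=0.7))   # stressed -> (16.0, 15.0)
```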

    Social Emotions in Multiagent Systems

    Full text link
    Thesis by compendium of publications. Over the past few years, multi-agent systems (MAS) have proven to be a powerful and versatile paradigm, with great potential for solving complex problems in dynamic and distributed environments. This potential is not due solely to agents' individual characteristics (such as autonomy and the capacity for perception, reaction and reasoning), but also to their ability to communicate and cooperate in achieving a goal. In fact, it is this social capacity, this social behaviour, that gives multi-agent systems their potential. These characteristics have made MAS the artificial intelligence (AI) tool most widely used for the design of intelligent virtual environments (IVEs), which are complex agent-based simulation tools. IVEs incorporate physical constraints (such as gravity, forces and friction), as well as a 3D representation of what is to be simulated. Moreover, these tools are not used only for simulations: new applications such as the Internet of Things (IoT), Ambient Intelligence (AmI) and assistant robots, among others, are in direct contact with humans, and this contact poses new challenges for interacting with them. A new form of interaction that has aroused special interest is the detection and/or simulation of emotional states. It allows these applications not only to detect our emotional states, but also to simulate and express their own emotions, thus improving the user experience. In order to improve the human-machine experience, the main objective of this thesis is the creation of social emotional models that can be used in MAS applications, allowing agents to interpret and/or emulate different emotional states and, in addition, to emulate emotional contagion phenomena. These models will enable complex emotion-based simulations and more realistic applications in domains such as IoT, AmI and SH.
    Rincón Arango, J.A. (2018). Social Emotions in Multiagent Systems [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/98090
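
    As a purely illustrative aid (not the model proposed in the thesis), the sketch below shows one common way emotional contagion is simulated in an agent population: each agent's emotional state drifts toward the average state of its neighbours, weighted by a susceptibility parameter. All names and parameter values are assumptions introduced here.

```python
# Illustrative sketch of emotional contagion in a multi-agent population:
# each agent's (valence, arousal) state drifts toward the mean state of its
# neighbours, scaled by a per-agent susceptibility. This is a generic
# contagion scheme, not the specific model proposed in the thesis.

from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class Agent:
    valence: float
    arousal: float
    susceptibility: float = 0.3   # 0 = immune, 1 = fully adopts neighbours' mean


def contagion_step(agents: List[Agent], neighbours: Dict[int, List[int]]) -> None:
    """One synchronous update of every agent's emotional state."""
    snapshot: List[Tuple[float, float]] = [(a.valence, a.arousal) for a in agents]
    for i, agent in enumerate(agents):
        ids = neighbours.get(i, [])
        if not ids:
            continue  # isolated agents keep their current state
        mean_v = sum(snapshot[j][0] for j in ids) / len(ids)
        mean_a = sum(snapshot[j][1] for j in ids) / len(ids)
        agent.valence += agent.susceptibility * (mean_v - agent.valence)
        agent.arousal += agent.susceptibility * (mean_a - agent.arousal)


if __name__ == "__main__":
    # Three agents on a line: a calm agent between two agitated ones.
    agents = [Agent(0.8, 0.9), Agent(0.0, 0.0), Agent(0.7, 0.8)]
    graph = {0: [1], 1: [0, 2], 2: [1]}
    for _ in range(5):
        contagion_step(agents, graph)
    print([(round(a.valence, 2), round(a.arousal, 2)) for a in agents])
```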

    ACII 2009: Affective Computing and Intelligent Interaction. Proceedings of the Doctoral Consortium 2009

    Get PDF