
    Software architecture for smart emotion recognition and regulation of the ageing adult

    This paper introduces the architecture of an emotion-aware ambient intelligent and gerontechnological project named “Improvement of the Elderly Quality of Life and Care through Smart Emotion Regulation”. The objective of the proposal is to find solutions for improving the quality of life and care of older adults who can or want to continue living at home by using emotion regulation techniques. A series of sensors monitors the older adults’ facial and gestural expressions, activity and behaviour, as well as relevant physiological data; from these signals, their emotions are inferred and recognised. Music, colour and light are the stimuli used to regulate their emotions towards a positive and pleasant mood. The paper then proposes a gerontechnological software architecture that enables real-time, continuous monitoring of the elderly and provides the best-tailored ambient reactions to steer the older person’s emotions towards a positive mood. After describing the benefits of the approach for emotion recognition and regulation in the elderly, the eight levels that compose the architecture are described. This work was partially supported by the Spanish Ministerio de Economía y Competitividad/FEDER under grant TIN2013-47074-C2-1-R. José Carlos Castillo was partially supported by a grant from Iceland, Liechtenstein and Norway through the EEA Financial Mechanism, operated by Universidad Complutense de Madrid.
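
    A minimal sketch of the sensing-to-regulation loop the abstract describes, with entirely hypothetical signal names, thresholds and stimulus mappings; the real architecture distributes this across eight levels:

        # Minimal sketch (hypothetical names): a layered loop from sensing to regulation.
        from dataclasses import dataclass

        @dataclass
        class SensorFrame:
            facial_valence: float      # -1 (negative) .. 1 (positive), from facial analysis
            activity_level: float      # 0 .. 1, from activity/behaviour monitoring
            heart_rate: float          # physiological signal (beats per minute)

        def infer_emotion(frame: SensorFrame) -> str:
            """Toy fusion rule standing in for the recognition layers."""
            if frame.facial_valence < -0.3 or frame.heart_rate > 100:
                return "distressed"
            if frame.facial_valence > 0.3:
                return "pleasant"
            return "neutral"

        def choose_regulation(emotion: str) -> dict:
            """Map the inferred emotion to ambient stimuli (music, colour, light)."""
            if emotion == "distressed":
                return {"music": "calm", "colour": "warm", "light": "dimmed"}
            if emotion == "neutral":
                return {"music": "soft", "colour": "neutral", "light": "normal"}
            return {"music": "keep", "colour": "keep", "light": "keep"}

        frame = SensorFrame(facial_valence=-0.5, activity_level=0.2, heart_rate=105)
        print(choose_regulation(infer_emotion(frame)))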

    Human activity monitoring by local and global finite state machines

    There are a number of solutions to automate the monotonous task of watching a monitor for suspicious behaviors in video surveillance scenarios. Detecting strange objects and intruders, or tracking people and objects, is essential for surveillance and safety in crowded environments. The present work deals with the idea of jointly modeling simple and complex behaviors to report local and global human activities in natural scenes. Modeling human activities with state machines remains common today and is the approach adopted in this paper. We incorporate knowledge about the problem domain into an expected structure of the activity model. Motion-based image features are linked explicitly to a symbolic notion of hierarchical activity through several layers of increasingly abstract activity descriptions. Atomic actions are detected at a low level and fed to hand-crafted grammars to detect activity patterns of interest. We also use shape and trajectory information to detect events related to moving objects. In order to validate our proposal we have performed several tests on CAVIAR test cases.
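
    A minimal sketch of the state-machine idea, using hypothetical states and atomic actions rather than the paper's hand-crafted grammars:

        # Minimal sketch (hypothetical states/events): a finite state machine over atomic actions.
        TRANSITIONS = {
            ("idle", "person_enters"): "walking",
            ("walking", "stops"): "inactive",
            ("inactive", "stays_still"): "loitering",   # local activity of interest
            ("walking", "runs"): "running",
            ("running", "leaves_scene"): "idle",
        }

        def monitor(events):
            """Feed a stream of atomic actions to the FSM and report recognised activities."""
            state = "idle"
            for event in events:
                state = TRANSITIONS.get((state, event), state)
                if state in ("loitering", "running"):
                    print(f"activity detected: {state}")
            return state

        monitor(["person_enters", "stops", "stays_still"])  # prints "activity detected: loitering"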

    A proposal for local and global human activities identification

    There are a number of solutions to automate the monotonous task of watching a monitor for suspicious behaviors in video surveillance scenarios. Detecting strange objects and intruders, or tracking people and objects, is essential for surveillance and safety in crowded environments. The present work deals with the idea of jointly modeling simple and complex behaviors to report local and global human activities in natural scenes. In order to validate our proposal we have performed several tests with CAVIAR test cases. In this paper we show relevant results for study cases related to visual surveillance, namely "speed detection", "position and direction analysis", and "possible cashpoint holdup detection".

    Detecting and Classifying Human Touches in a Social Robot Through Acoustic Sensing and Machine Learning

    An important aspect of Human-Robot Interaction is responding to different kinds of touch stimuli. To date, several technologies have been explored to determine how a touch is perceived by a social robot, usually by placing a large number of sensors throughout the robot's shell. In this work, we introduce a novel approach in which the audio acquired from contact microphones located in the robot's shell is processed using machine learning techniques to distinguish between different types of touches. The system is able to determine when the robot is touched (touch detection) and to ascertain the kind of touch performed among a set of possibilities: stroke, tap, slap, and tickle (touch classification). The proposal is cost-effective: a single contact microphone is enough to cover each solid part of the robot, so just a few microphones cover the whole shell. It is also easy to install and configure, as it only requires attaching the microphone to a contact surface on the robot's shell and plugging it into the robot's computer. Results show high accuracy in touch gesture recognition. The testing phase revealed that Logistic Model Trees achieved the best performance, with an F-score of 0.81. The dataset was built with information from 25 participants performing a total of 1981 touch gestures. The research leading to these results has received funding from the projects: Development of social robots to help seniors with cognitive impairment (ROBSEN), funded by the Ministerio de Economia y Competitividad; and RoboCity2030-III-CM, funded by Comunidad de Madrid and cofunded by Structural Funds of the EU.
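
    A minimal sketch of the audio-based classification idea, with made-up features and synthetic windows; the paper uses Logistic Model Trees, for which a scikit-learn decision tree stands in here:

        # Minimal sketch (hypothetical features, synthetic data): classify touch gestures
        # from contact-microphone audio. A decision tree stands in for Logistic Model Trees.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        def extract_features(signal: np.ndarray, sample_rate: int = 44100) -> list:
            """Simple per-window descriptors: energy, duration, zero-crossing rate."""
            energy = float(np.mean(signal ** 2))
            duration = len(signal) / sample_rate
            zcr = float(np.mean(np.abs(np.diff(np.sign(signal)))) / 2)
            return [energy, duration, zcr]

        # Toy training data: one synthetic window per labelled gesture.
        rng = np.random.default_rng(0)
        windows = [rng.normal(0, a, 4410) for a in (0.05, 0.2, 0.8, 0.1)]
        labels = ["stroke", "tap", "slap", "tickle"]
        X = [extract_features(w) for w in windows]

        clf = DecisionTreeClassifier().fit(X, labels)
        print(clf.predict([extract_features(rng.normal(0, 0.7, 4410))]))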

    Detecting, locating and recognising human touches in social robots with contact microphones

    There are many situations in our daily life where touch gestures occur during natural human–human interaction: meeting people (shaking hands), personal relationships (caresses), moments of celebration or sadness (hugs), etc. Considering that robots are expected to form part of our daily life in the future, they should be endowed with the capacity to recognise these touch gestures, as well as the part of the body that has been touched, since the gesture's meaning may differ. Therefore, this work presents a learning system for both purposes: detecting and recognising the type of touch gesture (stroke, tickle, tap and slap) and its localisation. The interpretation of the meaning of the gesture is out of the scope of this paper. Different technologies have been applied to perceive touch in social robots, commonly using a large number of sensors. Instead, our approach uses three contact microphones installed inside some parts of the robot. The audio signals generated when the user touches the robot are sensed by the contact microphones and processed using machine learning techniques. We acquired information from sensors installed in two social robots, Maggie and Mini (both developed by the RoboticsLab at the Carlos III University of Madrid), and a real-time version of the whole system has been deployed in the robot Mini. The system allows the robot to sense whether it has been touched, to recognise the kind of touch gesture, and to estimate its approximate location. The main advantage of using contact microphones as touch sensors is that a single microphone can “cover” a whole solid part of the robot. Besides, the sensors are unaffected by ambient noise such as human voice, TV or music. Nevertheless, using several contact microphones means that a touch gesture may be detected by all of them, with each recognising a different gesture at the same time. The results show that the system is robust against this phenomenon. Moreover, the accuracy obtained for both robots is about 86%. The research leading to these results has received funding from the projects: “Robots Sociales para Estimulación Física, Cognitiva y Afectiva de Mayores (ROSES)”, funded by the Spanish Ministerio de Ciencia, Innovación y Universidades, and from RoboCity2030-DIH-CM, Madrid Robotics Digital Innovation Hub, S2018/NMT-4331, funded by “Programas de Actividades I+D en la Comunidad de Madrid” and cofunded by Structural Funds of the EU.
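
    A minimal sketch of how conflicting per-microphone detections might be fused into a single gesture and an approximate location; the detections and the fusion rule below are assumptions for illustration, not the paper's method:

        # Minimal sketch (hypothetical data): resolve the case where several contact microphones
        # fire for the same touch, each with its own gesture prediction and confidence.
        from collections import defaultdict

        def fuse(detections):
            """detections: list of (location, gesture, confidence) tuples, one per microphone."""
            if not detections:
                return None
            # Approximate location: the microphone that sensed the touch most strongly.
            location = max(detections, key=lambda d: d[2])[0]
            # Gesture: confidence-weighted vote across microphones.
            votes = defaultdict(float)
            for _, gesture, confidence in detections:
                votes[gesture] += confidence
            return location, max(votes, key=votes.get)

        print(fuse([("head", "tap", 0.4), ("shoulder", "stroke", 0.9), ("back", "stroke", 0.3)]))
        # -> ('shoulder', 'stroke')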

    High-Energy Li-ion Batteries Combining Nanometric Silicon and High-Voltage Spinels

    2nd Meeting on Nanoscience and Nanotechnology of Researchers and Technologists of the Universidad de Córdoba (NANOUC).

    Four-features evaluation of text to speech systems for three social robots

    The success of social robots is directly linked to their ability to interact with people. Humans possess verbal and non-verbal communication skills, and both are therefore essential for social robots to achieve natural human-robot interaction. This work focuses on the first of them, since the majority of social robots implement an interaction system endowed with verbal capacities. To implement it, social robots must be equipped with an artificial voice system. In robotics, a Text to Speech (TTS) system is the most common speech synthesis technique. The performance of a speech synthesizer is mainly evaluated by its similarity to the human voice in terms of intelligibility and expressiveness. In this paper, we present a comparative study of eight off-the-shelf TTS systems used in social robots. To carry out the study, 125 participants evaluated the performance of the following TTS systems: Google, Microsoft, Ivona, Loquendo, Espeak, Pico, AT&T, and Nuance. The evaluation was performed after watching videos in which a social robot communicates verbally using one TTS system. The participants completed a questionnaire to rate each TTS system on four features: intelligibility, expressiveness, artificiality, and suitability. Four research questions were posed to determine whether it is possible to rank the TTS systems on each evaluated feature or whether, on the contrary, there are no significant differences between them. Our study shows that participants found differences between the TTS systems in terms of intelligibility, expressiveness, and artificiality. The experiments also indicated that there was a relationship between the physical appearance of the robots (embodiment) and the suitability of TTS systems. The research leading to these results has received funding from the projects: “Development of social robots to help seniors with cognitive impairment (ROBSEN)”, funded by the Ministerio de Economía y Competitividad; “RoboCity2030-DIH-CM”, funded by Comunidad de Madrid and co-funded by Structural Funds of the EU; and “Robots Sociales para estimulación física, cognitiva y afectiva de mayores (ROSES)”, funded by Agencia Estatal de Investigación (AEI).
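
    A minimal sketch of the kind of analysis the research questions call for, ranking systems by mean rating on one feature and testing whether differences are significant; the ratings below are randomly generated, not the study's data:

        # Minimal sketch (made-up ratings): rank TTS systems on one feature and test whether
        # the differences between them are statistically significant.
        import numpy as np
        from scipy.stats import kruskal

        rng = np.random.default_rng(1)
        systems = ["Google", "Microsoft", "Ivona", "Loquendo", "Espeak", "Pico", "AT&T", "Nuance"]
        # Hypothetical 1-5 intelligibility ratings from 125 participants per system.
        ratings = {name: rng.integers(1, 6, size=125) for name in systems}

        ranking = sorted(systems, key=lambda s: ratings[s].mean(), reverse=True)
        print("ranking by mean rating:", ranking)

        stat, p = kruskal(*ratings.values())  # non-parametric test across the eight systems
        print(f"Kruskal-Wallis H={stat:.2f}, p={p:.3f}")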

    Identification and distance estimation of users and objects by means of electronic beacons in social robotics

    Social robots are intended to coexist and communicate with humans in a natural way. This requires them to identify the people (and objects) around them and to use that information during human-robot dialogs. In this work we present how electronic beacons can benefit interactions between humans and social robots. In particular, Bluetooth 4.0 Low Energy beacons are presented as the most suitable option among currently available technologies. In order to show the advantages of the system during human-robot interaction, we first present the integration of the information provided by these devices into the robot's dialog system, and then describe a hidden-toy hunt game as a case study of a scenario where electronic beacons ease the interaction between humans and a social robot. The research leading to these results has received funding from the projects: Development of social robots to help seniors with cognitive impairment (ROBSEN), funded by the Ministerio de Economia y Competitividad (DPI2014-57684-R); and RoboCity2030-III-CM, funded by Comunidad de Madrid and cofunded by Structural Funds of the EU (S2013/MIT-2748).
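
    A minimal sketch of distance estimation from beacon RSSI using the common log-distance path-loss approximation; the paper does not necessarily use this exact model, and the calibration constants below are illustrative:

        # Minimal sketch: log-distance path-loss approximation for BLE beacon ranging.
        def estimate_distance(rssi_dbm: float, tx_power_dbm: float = -59.0, n: float = 2.0) -> float:
            """tx_power_dbm: calibrated RSSI at 1 m; n: path-loss exponent (~2 in free space)."""
            return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * n))

        for rssi in (-59, -70, -80):
            print(f"RSSI {rssi} dBm -> ~{estimate_distance(rssi):.1f} m")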

    A Bio-inspired Motivational Decision Making System for Social Robots Based on the Perception of the User

    Nowadays, many robotic applications require robots to make their own decisions and to adapt to different conditions and users. This work presents a biologically inspired decision-making system, based on drives, motivations, wellbeing, and self-learning, that governs the behavior of the robot considering both internal and external circumstances. In this paper we state the biological foundations that drove the design of the system, as well as how it has been implemented in a real robot. Following a homeostatic approach, the ultimate goal of the robot is to keep its wellbeing as high as possible. In order to achieve this goal, our decision-making system uses learning mechanisms to assess the best action to execute at any moment. Since the proposed system has been implemented in a real social robot, human-robot interaction is of paramount importance and the learned behaviors of the robot are oriented to foster interactions with the user. The operation of the system is shown in a scenario where the robot Mini plays games with a user. In this context, we have included a robust user detection mechanism tailored for short-distance interactions. After the learning phase, the robot has learned how to lead the user to interact with it in a natural way. The research leading to these results has received funding from the projects: Development of social robots to help seniors with cognitive impairment (ROBSEN), funded by the Ministerio de Economia y Competitividad; and RoboCity2030-III-CM, funded by Comunidad de Madrid and cofunded by Structural Funds of the EU.
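
    A minimal sketch of a homeostatic decision loop in the spirit of the abstract, with hypothetical drives, actions and an incremental value-learning rule; it is not the paper's exact formulation:

        # Minimal sketch (hypothetical drives/actions): the robot learns, by trial and error,
        # which action best restores its wellbeing.
        import random

        drives = {"social": 0.5, "energy": 0.2}          # deficits: 0 = satisfied, 1 = urgent
        effects = {"play_game": {"social": -0.4},         # how each action reduces a deficit
                   "rest": {"energy": -0.4}}
        q = {a: 0.0 for a in effects}                     # learned value of each action

        def wellbeing():
            return 1.0 - sum(drives.values()) / len(drives)

        for step in range(50):
            before = wellbeing()
            # epsilon-greedy selection over the learned action values
            action = random.choice(list(q)) if random.random() < 0.2 else max(q, key=q.get)
            for drive, delta in effects[action].items():
                drives[drive] = min(1.0, max(0.0, drives[drive] + delta))
            for drive in drives:                          # deficits slowly grow back over time
                drives[drive] = min(1.0, drives[drive] + 0.05)
            reward = wellbeing() - before
            q[action] += 0.1 * (reward - q[action])       # incremental value update

        print("learned action values:", q)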

    Planning the Implementation of a Quality Management System in RAAA 74: Definition of Phases and Responsibilities

    The importance of quality in the Spanish Army (Ejército de Tierra) is widely recognised today; without it, land force units would be unable to carry out their missions with minimum guarantees of safety and readiness. That quality function, however, must be managed in order to achieve its objective. This project therefore investigates how a Quality Management System (QMS) would be implemented in the Unidad de Reparaciones III/74, San Roque, the unit responsible for the logistic functions of the Regimiento de Artillería Antiaérea 74, particularly everything related to the HAWK system. The work begins with basic notions of the current definitions and principles that the ISO 9001:2015 standard establishes for a QMS, since the rest of the project builds on them. It then describes the objectives and methodology followed in this undergraduate thesis, as well as the context of the unit where it was developed; the unit's current situation regarding quality management and its associated problems are also analysed, in order to better understand the work to be done and where it should focus. First, the processes that make up the QMS are studied: they are classified into process types, since not all processes serve the same purpose within the system; each process is then briefly described, indicating its inputs and outputs, and technical sheets are produced covering every aspect that the standard requires of a QMS process; finally, the interaction of these processes within the system is shown, indicating the links between the processes and their types. In addition, an implementation procedure for the QMS is developed, one of the most important parts of this thesis. It is an adaptation of the time management cycle (Ciclo Gestor del Tiempo) [13]: first, the phases of the procedure are defined, together with the tasks and tools needed to carry them out; second, the available resources are identified and assigned to the responsibilities of each phase; next, the duration of each phase and of the overall procedure is estimated, and these durations are used to build the implementation schedule, including the sequencing of the phases; finally, the main risks to which the procedure is exposed are analysed, and response measures are proposed where appropriate. The thesis closes with conclusions on the project, analysing the key points and the problems that arose in each section, and outlining possible future lines of work and recommendations.