7 research outputs found

    ENRICHME integration of ambient intelligence and robotics for AAL

    Technological advances and the affordability of recent smart sensors, together with the consolidation of common software platforms for integrating these and robotic sensors, are enabling the creation of complex active and assisted living environments that improve the quality of life of elderly and less able people. One such example is the integrated system developed by the European project ENRICHME, the aim of which is to monitor and prolong the independent living of older people affected by mild cognitive impairment through a combination of smart-home, robotics, and web technologies. This paper presents in particular the design and technological solutions adopted to integrate, process, and store the information provided by a set of fixed smart sensors and mobile robot sensors in a domestic scenario, including presence and contact detectors, environmental sensors, and RFID-tagged objects, for long-term user monitoring and adaptation
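The abstract describes integrating and storing readings from heterogeneous fixed and mobile sensors for long-term monitoring. A minimal sketch of that idea, assuming a unified timestamped record type and an append-only store (the names and sensor identifiers below are illustrative, not ENRICHME's actual data model):

```python
from dataclasses import dataclass, field
import time

@dataclass
class SensorReading:
    """One record covering both fixed smart-home and mobile robot sensors."""
    source: str        # hypothetical sensor id, e.g. "pir_livingroom", "robot_rfid"
    kind: str          # "presence", "contact", "environment", or "rfid"
    value: object      # raw payload: bool, float, tag id, ...
    timestamp: float = field(default_factory=time.time)

class MonitoringStore:
    """Append-only log supporting long-term queries by sensor kind."""
    def __init__(self):
        self._log = []

    def add(self, reading: SensorReading):
        self._log.append(reading)

    def history(self, kind: str):
        """Return all readings of a given kind, oldest first."""
        return [r for r in self._log if r.kind == kind]

store = MonitoringStore()
store.add(SensorReading("pir_livingroom", "presence", True))
store.add(SensorReading("robot_rfid", "rfid", "keys_tag_042"))
print(len(store.history("presence")))  # 1
```

A real deployment would persist such records to a database and attach provenance (room, robot pose) to each reading, but the common-record approach is what lets fixed and robot-mounted sensors share one processing pipeline.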

    Framework for controlling automation devices based on gestures

    People's routines are increasingly busy: at home or in the office we face constant activities, appointments, and meetings. As technology advances, it can assist and even replace people in certain tasks. Nevertheless, many mundane tasks cannot yet be done by a computer or robot alone, and machines and people must work together to achieve the objective. In particular, devices such as sensors and actuators have become very common in daily life and help make our homes and workplaces more comfortable, since they allow us to control lighting, air conditioning, multimedia, and so on. With this in mind, it is important to develop adaptive interfaces that can instantly adjust to the needs and conditions of each user, making people's activities more efficient. This dissertation presents a framework that uses human actions, namely gestures and/or poses, to activate and control devices that implement the standard KNX protocol. A pose-detection algorithm recognises different gestures/poses; each gesture, or group of gestures, integrated with KNX allows easy and universal communication with various types of existing automation devices. The algorithm provides standard gestures/poses, detected by comparing the coordinates (keypoints) obtained from pose estimation, for example comparing the wrist and shoulder keypoints to detect a gesture in which the user raises an arm vertically. In addition to the standard gestures/poses, the algorithm can be trained to learn new gestures, making it adaptive to each type of user.
    In this way, once the user's gesture/pose is detected, the framework interacts with different types of home-automation equipment over the KNX protocol: each gesture triggers an action on the equipment, such as activating it, deactivating it, or changing its intensity or mode of operation. The results show that the framework is capable of effortlessly controlling different devices with different functionalities
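The wrist-versus-shoulder comparison described above can be sketched as follows. This is a minimal illustration, not the dissertation's implementation: the joint names, the normalised coordinate convention (y grows downward, as in most pose estimators), and the KNX group addresses are all assumptions for the example.

```python
def arm_raised(keypoints, side="right", margin=0.05):
    """Detect a vertically raised arm by comparing wrist and shoulder keypoints.

    keypoints maps a joint name to (x, y) in normalised image coordinates,
    where y grows downward; a wrist clearly above the shoulder (smaller y,
    by more than `margin`) counts as raised.
    """
    wrist_y = keypoints[f"{side}_wrist"][1]
    shoulder_y = keypoints[f"{side}_shoulder"][1]
    return wrist_y < shoulder_y - margin

# Hypothetical mapping from recognised gestures to KNX group-address commands.
GESTURE_ACTIONS = {
    "right_arm_raised": ("1/0/1", "toggle"),  # e.g. a living-room light
}

def dispatch(keypoints, send):
    """Call send(group_address, command) for each gesture detected in a frame."""
    if arm_raised(keypoints, "right"):
        ga, cmd = GESTURE_ACTIONS["right_arm_raised"]
        send(ga, cmd)

pose = {"right_wrist": (0.52, 0.20), "right_shoulder": (0.50, 0.40)}
sent = []
dispatch(pose, lambda ga, cmd: sent.append((ga, cmd)))
print(sent)  # [('1/0/1', 'toggle')]
```

In practice the `send` callback would be backed by a KNX library rather than a list, and the detector would debounce across frames so a single raised arm does not toggle the device repeatedly.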

    An ambient agent model for reading companion robot

    Reading is essentially a problem-solving task: like problem solving, it requires effort, planning, self-monitoring, strategy selection, and reflection. As readers tackle harder problems, reading materials become more complex, demanding more effort and challenging cognition. To address this, companion robots can assist readers with difficult reading tasks by making the reading process more enjoyable and meaningful. Such robots require an ambient agent model that monitors the reader's cognitive demand, since reading may involve complex tasks and dynamic interactions between the human and the environment. Current cognitive-load models are not formulated with reasoning qualities and are not integrated into companion robots. This study was therefore conducted to develop an ambient agent model of cognitive load and reading performance for integration into a reading companion robot. The research activities were based on the Design Science Research Process, Agent-Based Modelling, and the Ambient Agent Framework. The proposed model was evaluated through a series of verification and validation approaches. Verification included equilibria evaluation and automated trace analysis to ensure the model exhibits realistic behaviours in accordance with related empirical data and literature. Validation, which involved a human experiment, showed that a reading companion robot was able to reduce cognitive load during demanding reading tasks. Moreover, the experimental results indicated that integrating the ambient agent model enabled the robot to be perceived as a social, intelligent, useful, and motivational digital sidekick.
    The study's contribution opens the way for new endeavours that aim to design ambient applications based on humans' physical and cognitive processes, as an ambient agent model of cognitive load and reading performance was developed. It also helps in designing more realistic reading companion robots in the future


    Determining the effect of human cognitive biases in social robots for human-robot interactions

    The research presented in this thesis describes a model for aiding human-robot interaction in which a robot displays behaviours derived from 'human' cognitive biases. The aim of this work is to study how cognitive biases can affect human-robot interactions in the long term. Currently, most human-robot interactions follow a set of well-ordered, structured rules that repeat regardless of the person or social situation. This tends to produce an unrealistic interaction, which can make it difficult for humans to relate 'naturally' to the social robot after a number of encounters. The core problem is that the social robot shows a very structured set of behaviours and, as such, acts unnaturally and mechanically in social interactions. In contrast, fallible behaviours (e.g. forgetfulness, inability to understand others' emotions, bragging, blaming others) are common in humans and appear in everyday social interactions; some of these fallible behaviours are caused by various cognitive biases. Researchers have studied and developed various humanlike skills (e.g. personality, emotional expression, traits) in social robots to make their behaviours more humanlike, and as a result social robots can perform various humanlike actions, such as walking, talking, gazing, or expressing emotion. But common human behaviours such as forgetfulness, inability to understand others' emotions, bragging, or blaming are absent from current social robots; such behaviours, which exist in and influence people, have not been explored in social robots. The study presented in this thesis implemented five cognitive biases in three different robots across four separate experiments to understand the influence of such biases on human-robot interactions.
    The results show that participants initially preferred interacting with the robot showing cognitively biased behaviours over the robot without them. In the first two experiments, the robots (ERWIN and MyKeepon) each interacted with participants using a single cognitive bias (misattribution and the empathy gap, respectively), and participants enjoyed the resulting bias effects, such as forgetfulness, source confusion, or exaggerated displays of happiness or sadness. In the later experiments, participants interacted with the robot (MARC) three times, with an interval between interactions, and the results show that liking for the interactions in which the robot displayed biased behaviours declined less than for the interactions in which it did not. In this thesis, I describe the investigation of these traits of forgetfulness, inability to understand others' emotions, and bragging and blaming behaviours, which are influenced by cognitive biases, and I also analyse people's responses to robots displaying such biased behaviours in human-robot interactions