1,937 research outputs found

    Touch Screen Avatar English Learning System For University Students Learning Simplicity

    This paper discusses a touch-screen avatar for an English language learning application system. The system combines an avatar acting as an Animated Pedagogical Agent (APA) with a touch-screen application that adopts current gesture-based computing, which has the potential to change the way we learn by reducing the number of Information and Communication Technology (ICT) devices used during teaching and learning. The key is the interaction between university students and the touch-screen avatar application, together with learning resources that can be accessed anytime, anywhere (24/7), so students can study according to their own time preferences and in their own comfort, outside the traditional classroom. The students are provided with a learning tool that helps them learn interactively, following current trends and personalized to their own interests. In addition, their performance is monitored remotely and evaluated so as not to disrupt their learning process or make them feel controlled. The students are thus expected to have a lower affective filter, which may enhance unconscious learning. Keywords: Gesture-Based Computing, Avatar, Portable Learning Tool, Interactivity, Language Learning

    Developing an Autonomous Mobile Robotic Device for Monitoring and Assisting Older People

    The progressive increase of the elderly population worldwide calls for technological solutions capable of improving the life prospects of people suffering from senile dementias such as Alzheimer's disease. Socially Assistive Robotics (SAR) in the field of elderly care is a solution that can ensure, through observation and monitoring of behaviors, the safety of older adults and improve their physical and cognitive health. A social robot can autonomously and tirelessly monitor a person daily, providing assistive tasks such as reminders to take medication and suggesting activities that keep the assisted person active both physically and cognitively. However, many projects in this area have not considered the preferences, needs, personality, and cognitive profiles of older people. Moreover, other projects have developed application-specific robotic software that is difficult to reuse and adapt to other hardware devices and other functional contexts. This thesis presents the development of a scalable, modular, multi-tenant robotic application and its testing in real-world environments. This work is part of the UPA4SAR project ``User-centered Profiling and Adaptation for Socially Assistive Robotics''. The UPA4SAR project aimed to develop a low-cost robotic application for faster deployment among the elderly population. The architecture of the proposed robotic system is modular, robust, and scalable because its functionality is implemented as microservices with event-based communication. To improve robot acceptance, the functionalities delivered by these microservices adapt the robot's behaviors to the preferences and personality of the assisted person. A key part of the assistance is the monitoring of activities, recognized through the deep neural network models proposed in this work. The final experimentation of the project, carried out in the homes of elderly volunteers, was performed with complete autonomy of the robotic system. Daily care plans customized to the person's needs and preferences were executed, including notification tasks to remind when to take medication, tasks to check whether basic nutrition activities were accomplished, and entertainment and companionship tasks with games, videos, and music for the cognitive and physical stimulation of the patient.
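    The microservice architecture with event-based communication described above can be illustrated with a minimal in-process publish/subscribe sketch. The topic names, payloads, and service names below are hypothetical illustrations, not the UPA4SAR project's actual API.

```python
# Minimal sketch of event-based communication between robot microservices.
# Topic names and payloads are invented for illustration only.
from collections import defaultdict
from typing import Callable

class EventBus:
    """In-process publish/subscribe bus that decouples microservices."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(event)

# Two hypothetical microservices: an activity monitor publishes events,
# and a care-plan service reacts by scheduling a notification task.
bus = EventBus()
notifications = []

def care_plan_service(event: dict) -> None:
    if event.get("activity") == "medication_due":
        notifications.append(f"Remind user: {event['detail']}")

bus.subscribe("activity_recognized", care_plan_service)
bus.publish("activity_recognized",
            {"activity": "medication_due", "detail": "take evening pills"})
print(notifications)  # ['Remind user: take evening pills']
```

    In a deployed system the bus would typically be an external broker (e.g. a message queue) so that services can run on separate devices, which is what makes the architecture scalable and reusable across hardware.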

    ROS2 gesture classification pipeline towards gamified neuro-rehabilitation therapy

    Rehabilitation is an essential tool that aids individuals in restoring mobility in limbs affected by various conditions, such as neurological diseases. Conventional therapies, including occupational, physical, and speech therapy, have been improved by new technologies, such as assistive robotic systems, along with virtual and augmented reality games, to enhance engagement and, consequently, effectiveness. This research focuses on implementing an eight-channel electromyogram (EMG) wearable sensor device, the Mindrove armband, for gesture recognition. The objective is to develop a classifier model using the Support Vector Machine (SVM) algorithm to distinguish eight different hand gestures and apply it in a gesture recognition system. The study demonstrates the feasibility of this recognition system and explores the potential application of this technology in interactive Unity games for rehabilitation therapy. The results show promising classification accuracy, and further research is needed to address challenges related to user specificity and gesture recognition accuracy. Future work involves expanding the repertoire of recognized gestures, incorporating additional sensor data, and exploring more advanced feature extraction techniques to enhance the overall performance of the gesture recognition system in rehabilitation therapies.
    Funding: Ministerio de Ciencia e Innovación; PID2020-113508RBI0
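    The SVM classification step described above can be sketched as follows. This is a minimal illustration with synthetic data in place of real Mindrove recordings; the feature choices (mean absolute value, waveform length) are common time-domain EMG features, but the actual pipeline's features and parameters are assumptions here.

```python
# Hedged sketch of an SVM gesture classifier over windowed EMG features.
# The EMG data below is synthetic; no armband is actually read.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_gestures, windows_per_gesture, channels, samples = 8, 40, 8, 200

def features(window):
    # Per-channel time-domain features: mean absolute value and
    # waveform length, concatenated into one feature vector.
    mav = np.mean(np.abs(window), axis=1)
    wl = np.sum(np.abs(np.diff(window, axis=1)), axis=1)
    return np.concatenate([mav, wl])

X, y = [], []
for gesture in range(n_gestures):
    for _ in range(windows_per_gesture):
        # Synthetic EMG: each gesture has its own channel-gain pattern.
        gains = 0.5 + np.roll(np.linspace(0.1, 1.0, channels), gesture)[:, None]
        window = gains * rng.standard_normal((channels, samples))
        X.append(features(window))
        y.append(gesture)
X, y = np.array(X), np.array(y)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X[::2], y[::2])            # train on even-indexed windows
acc = clf.score(X[1::2], y[1::2])  # evaluate on the held-out half
print(f"held-out accuracy: {acc:.2f}")
```

    The trained model's `predict` method could then be polled inside a game loop (e.g. from Unity via a local socket) to map recognized gestures onto game actions.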

    Conversational affective social robots for ageing and dementia support

    Socially assistive robots (SAR) hold significant potential to assist older adults and people with dementia in human engagement and clinical contexts by supporting mental health and independence at home. While SAR research has recently experienced prolific growth, long-term trust, clinical translation and patient benefit remain immature. Affective human-robot interactions are unresolved, and the deployment of robots with conversational abilities is fundamental for robustness and human-robot engagement. In this paper, we review the state of the art within the past two decades, design trends, and current applications of conversational affective SAR for ageing and dementia support. A horizon scanning of AI voice technology for healthcare, including ubiquitous smart speakers, is further introduced to address current gaps inhibiting home use. We discuss the role of user-centred approaches in the design of voice systems, including the capacity to handle communication breakdowns for effective use by target populations. We summarise the state of development in interactions using speech and natural language processing, which forms a baseline for longitudinal health monitoring and cognitive assessment. Drawing from this foundation, we identify open challenges and propose future directions to advance conversational affective social robots for: 1) user engagement, 2) deployment in real-world settings, and 3) clinical translation.

    Dwell-free input methods for people with motor impairments

    Millions of individuals affected by disorders or injuries that cause severe motor impairments have difficulty performing compound manipulations using traditional input devices. This thesis first explores how effective various assistive technologies are for people with motor impairments. The following questions are studied: (1) What activities are performed? (2) What tools are used to support these activities? (3) What are the advantages and limitations of these tools? (4) How do users learn about and choose assistive technologies? (5) Why do users adopt or abandon certain tools? A qualitative study of fifteen people with motor impairments indicates that users have strong needs for efficient text entry and communication tools that are not met by existing technologies. To address these needs, this thesis proposes three dwell-free input methods, designed to improve the efficacy of target selection and text entry based on eye-tracking and head-tracking systems. They yield: (1) the Target Reverse Crossing selection mechanism, (2) the EyeSwipe eye-typing interface, and (3) the HGaze Typing interface. With Target Reverse Crossing, a user moves the cursor into a target and reverses over a goal to select it. This mechanism is significantly more efficient than dwell-time selection. Target Reverse Crossing is then adapted in EyeSwipe to delineate the start and end of a word that is eye-typed with a gaze path connecting the intermediate characters (as with traditional gesture typing). When compared with a dwell-based virtual keyboard, EyeSwipe affords higher text entry rates and a more comfortable interaction. Finally, HGaze Typing adds head gestures to gaze-path-based text entry to enable simple and explicit command activations. Results from a user study demonstrate that HGaze Typing has better performance and user satisfaction than a dwell-time method.
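    One plausible reading of the reverse-crossing idea, move into the target, then reverse back over a goal line to confirm, can be sketched in one dimension. The geometry below (goal taken as the target's entry edge, straight-line cursor path) is an illustrative assumption, not the thesis' exact design.

```python
# Hedged 1-D sketch of reverse-crossing selection: the cursor enters
# the target interval and then reverses back out over the same edge
# it entered through, which triggers selection.
def reverse_crossing_selected(xs, target=(100.0, 140.0)):
    """Return True if the 1-D cursor path xs performs a reverse crossing."""
    left, right = target
    entered_from_left = False
    for prev, cur in zip(xs, xs[1:]):
        # Entry: crossing the left edge moving rightwards into the target.
        if prev < left <= cur <= right:
            entered_from_left = True
        # Selection: having entered, reversing back out over that edge.
        elif entered_from_left and cur < left <= prev:
            return True
        # Passing out through the far edge cancels the pending entry.
        elif cur > right:
            entered_from_left = False
    return False

# The cursor sweeps into the target and reverses out the way it came:
path = [80, 95, 110, 125, 115, 98, 90]
print(reverse_crossing_selected(path))  # True
```

    The appeal over dwell-time selection is visible even in this sketch: selection is signalled by a deliberate motion pattern rather than by holding the cursor still, so no timeout has to elapse.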

    Personalized robot assistant for support in dressing

    Robot-assisted dressing is performed in close physical interaction with users who may have a wide range of physical characteristics and abilities. The design of user-adaptive and personalized robots in this context still shows limited or no consideration of specific user-related issues. This paper describes the development of a multi-modal robotic system for a specific dressing scenario, putting on a shoe, where users' personalized inputs contribute to a much improved task success rate. We have developed: 1) user tracking, gesture recognition and posture recognition algorithms relying on images provided by a depth camera; 2) a shoe recognition algorithm from RGB and depth images; 3) speech recognition and text-to-speech algorithms implemented to allow verbal interaction between the robot and the user. The interaction is further enhanced by calibrated recognition of the users' pointing gestures and an adjusted shoe delivery position of the robot. A series of shoe fitting experiments have been performed on two groups of users, with and without previous robot personalization, to assess how personalization affects interaction performance. Our results show that the shoe fitting task with the personalized robot is completed in a shorter time, with a smaller number of user commands and reduced workload.