494 research outputs found

    An investigation of effects of the partial active assistance in a virtual environment based rehabilitation system

    Get PDF
    This thesis describes a study of a new form of active assistance for robotic rehabilitation of post-stroke patients in a haptic virtual environment. The novelty of the assistance system is that assistance is rendered directly on the result of task performance. Active assistance generally raises patients' confidence in performing a rehabilitation exercise, but an overly high assistance level may induce cognitive fatigue and thereby reduce their motivation to exercise. The thesis hypothesizes that appropriate active assistance can improve performance of a rehabilitation exercise without reducing patients' motivation. Because a sufficient number of patients could not be recruited for the experiment, the study was conducted with healthy participants instead. The hypothesis was therefore revised: active assistance does not improve task performance in healthy people and does not reduce their motivation. First, a test-bed with a haptic virtual environment was designed and constructed; it included a simple task, namely following a predefined circular trajectory. A statistical experiment was then designed and conducted on the test-bed, and the results supported the hypothesis. The main contributions of this thesis are: (1) the development of a new active assistance system for rehabilitation in a virtual environment and (2) an experimental study of the motivation of healthy people using the developed active assistance system. Care must be taken, however, that the experiment was conducted on healthy people, so the conclusions drawn from the study may not be valid for patients.
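    The abstract does not state how the result-based assistance is rendered; purely as a hypothetical illustration, the Python sketch below treats the task error as the radial deviation of the cursor from the target circle and applies a proportional corrective force. The gain k_assist and all names are assumptions for illustration, not details from the thesis.

    import numpy as np

    def assistance_force(pos, centre, radius, k_assist=0.5):
        """Illustrative partial-assistance law for a circle-tracing task.

        The force pulls the cursor radially back towards the target circle,
        so the assistance acts on the task result (the tracking error)
        rather than enforcing a prescribed motion along the path.
        """
        offset = np.asarray(pos, dtype=float) - np.asarray(centre, dtype=float)
        dist = np.linalg.norm(offset)
        if dist < 1e-9:
            return np.zeros(2)            # at the centre there is no radial direction
        radial_error = dist - radius      # signed deviation from the circle
        direction = offset / dist         # outward unit vector
        return -k_assist * radial_error * direction

    # Example: a cursor slightly outside a 10 cm circle receives a small inward pull.
    force = assistance_force(pos=(0.12, 0.0), centre=(0.0, 0.0), radius=0.10)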

    Neuroplasticity of Ipsilateral Cortical Motor Representations, Training Effects and Role in Stroke Recovery

    Get PDF
    This thesis examines the contribution of the ipsilateral hemisphere to motor control, with the aim of evaluating the potential of the contralesional hemisphere to contribute to motor recovery after stroke. Predictive algorithms based on neurobiological principles emphasize integrity of the ipsilesional corticospinal tract as the strongest prognostic indicator of good motor recovery. In contrast, extensive lesions that force reliance on alternative contralesional ipsilateral motor pathways are associated with poor recovery. Within the predictive algorithms are elements of motor control that rely on contributions from ipsilateral motor pathways, suggesting that balanced, parallel contralesional contributions can be beneficial. Current therapeutic approaches have focussed on the maladaptive potential of the contralesional hemisphere and sought to inhibit its activity with neuromodulation. Using Transcranial Magnetic Stimulation, I seek examples of beneficial plasticity in the ipsilateral cortical motor representations of expert performers, who have accumulated vast amounts of deliberate practice training skilled bilateral activation of muscles habitually under ipsilateral control. I demonstrate that ipsilateral cortical motor representations reorganize in response to training and the acquisition of skilled motor performance. Features of this reorganization are compatible with evidence suggesting ipsilateral importance in synergy representations, controlled through corticoreticulopropriospinal pathways. I demonstrate that ipsilateral plasticity can be positively associated with motor recovery after stroke. Features of plastic change in ipsilateral cortical representations are shown in response to robotic training of chronic stroke patients. These findings have implications for the individualization of motor rehabilitation after stroke, and prompt a reappraisal of the approach to therapeutic intervention in the chronic phase of stroke.

    Fabricate 2020

    Get PDF
    Fabricate 2020 is the fourth title in the FABRICATE series on the theme of digital fabrication, published in conjunction with a triennial conference (London, April 2020). The book features cutting-edge built projects and work-in-progress from both academia and practice. It brings together pioneers in design and making from across the fields of architecture, construction, engineering, manufacturing, materials technology and computation. Fabricate 2020 includes 32 illustrated articles punctuated by four conversations between world-leading experts, from design to engineering, discussing themes such as drawing-to-production, behavioural composites, robotic assembly, and digital craft.

    Upper limb movement control after stroke and in healthy ageing: does intensive upper limb neurorehabilitation improve motor control and reduce motor impairment in the chronic phase of stroke?

    Get PDF
    Stroke affects people of all ages, but many of those affected are elderly. Seventy-five per cent of stroke survivors have residual upper limb motor impairment and resultant disability. This thesis firstly examines upper limb motor control in chronic stroke. Evidence is emerging that high-dose, high-intensity complex neurorehabilitation interventions in chronic stroke patients produce unprecedented gains on clinical outcome scores of motor impairment, function and activity. Whether these clinical improvements represent behavioural repair or merely behavioural compensation, however, remains undetermined. To address this question, upper limb movement kinematics, strength, joint range and clinical scores were measured in 52 chronic stroke patients before and after an intensive three-week treatment intervention. Twenty-nine chronic stroke patients who had not undergone treatment were similarly assessed, three weeks apart. Significant improvements in motor control, arm strength and joint range, in addition to gains on clinical scores, were observed in the impaired arm of the intervention group. Crucially, changes in motor control occurred independently of changes in strength and joint range. Improvements in motor control were retained in a cohort of 28 patients in the intervention group who were also assessed six weeks and six months after treatment had ended, demonstrating persistent changes in motor behaviour. These results suggest that behavioural restitution has occurred. Secondly, knowledge of the effects of normal healthy ageing on upper limb motor control is essential to informing research and the delivery of clinical services. To this end, movement kinematics were measured in both arms of 57 healthy adults aged 22 to 82 years. A decline in motor control was observed with increasing age, particularly in the non-dominant arm. However, motor control in healthy adults of all ages remained significantly better than in chronic stroke patients pre- and post-intervention. This thesis provides new evidence that treatment-driven improvements in motor control are achievable in the chronic post-stroke upper limb, which strongly suggests that motor control should remain a therapeutic target well beyond the current three-to-six-month post-stroke window. It will inform the continued development and delivery of high-dose, high-intensity upper limb neurorehabilitation treatment interventions for stroke patients of all ages.

    Artificial Intelligence for Hospital Health Care: Application Cases and Answers to Challenges in European Hospitals

    Get PDF
    The development and implementation of artificial intelligence (AI) applications in health care contexts is a current research and management question. Especially for hospitals, expectations of improved efficiency and effectiveness from the introduction of novel AI applications are high. However, experience with real-life AI use cases is still scarce. As a first step towards structuring and comparing such experiences, this paper presents a comparative account of eleven use cases from nine European hospitals, covering possible application areas and benefits of hospital AI technologies. It is structured as a review and opinion article drawing on a diverse range of researchers and health care professionals. It also points to improvement options for pandemic crises, such as the current COVID-19 situation. The expected advantages, as well as challenges regarding data protection, privacy and human acceptance, are reported. Altogether, the diversity of application cases is a core characteristic of AI applications in hospitals, and it requires a specific approach for successful implementation in the health care sector. This can include specialized solutions for hospitals regarding human-computer interaction, data management, and communication in AI implementation projects.

    State of the art of audio- and video-based solutions for AAL

    Get PDF
    Working Group 3: Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care, due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, highlighting the need to take action. Active and Assisted Living (AAL) technologies offer a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL can be understood as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply people in need with smart assistance, responding to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, owing to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them into their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Cameras and microphones are far less obtrusive than wearable sensors, which can hinder one's activities, and a single camera placed in a room can record most of the activities performed there, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, movements and overall condition of assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals, owing to the richness of the information they convey and the intimate settings in which they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
    A multidisciplinary debate among experts and stakeholders is paving the way towards AAL solutions that ensure ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with an outline of a new generation of ethics-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gain an overview of the main components of an AAL system and how these function and interact with end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects, and highlights the open challenges. The report ends with an overview of the challenges, hindrances and opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, it illustrates the current procedural and technological approaches to acceptability, usability and trust in AAL technology, surveying strategies for co-design, privacy preservation in video and audio data, transparency and explainability in data processing, and data transmission and communication. User acceptance and ethical considerations are also discussed. Finally, the potential arising from the silver economy is reviewed.

    The re-education of upper limb movement post stroke using iterative learning control mediated by electrical stimulation

    No full text
    An inability to perform tasks involving reaching is a common problem following stroke. Evidence supports the use of robotic therapy and electrical stimulation (ES) to reduce upper limb impairments following stroke, but current systems may not encourage maximal voluntary contribution from the participant. This study developed and tested iterative learning control (ILC) algorithms mediated by ES, using a purpose-designed robotic workstation, for upper limb rehabilitation post stroke. Surface electromyography (EMG), which may be related to impaired performance and function, was used to investigate the activation patterns of seven shoulder and elbow muscles in eight neurologically intact and five chronic stroke participants during nine tracking tasks. The participants' forearm was supported in a hinged arm-holder, which constrained the hand to move in a two-dimensional horizontal plane. Outcome measures taken prior to and after an intervention consisted of the Fugl-Meyer Assessment (FMA) and the Action Research Arm Test (ARAT), isometric force, and tracking error. The intervention for stroke participants consisted of eighteen sessions in which a similar range of tracking tasks was performed, with the addition of responsive electrical stimulation of the triceps muscle. A question set was developed to understand participants' perceptions of the ILC system. Statistically significant improvements (p ≤ 0.05) were measured in FMA motor score, unassisted tracking, and isometric force. Statistically significant differences in muscle activation patterns were observed between stroke and neurologically intact participants in timing, amplitude and coactivation; after the intervention, significant changes towards neurologically intact ranges were observed in many of these measures. The robot-assisted therapy was well accepted and tolerated by the stroke participants. This study has demonstrated the feasibility of using ILC mediated by ES for upper limb rehabilitation in stroke patients with upper limb hemiplegia.
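    The abstract does not give the specific ILC law used; as a generic, hedged illustration, the sketch below shows a first-order (P-type) ILC update, in which the stimulation applied on the next tracking trial is the previous stimulation plus a gain times the previous trial's tracking error. The gain and saturation limits are illustrative assumptions, not values from the study.

    import numpy as np

    def ilc_update(u_prev, error_prev, learning_gain=0.4, u_min=0.0, u_max=1.0):
        """Generic P-type iterative learning control update (illustrative).

        Implements u_{k+1} = u_k + L * e_k over the samples of one trial and
        clips the result to the admissible stimulation range, so the
        stimulation only adds what voluntary effort did not achieve on the
        previous attempt.
        """
        u_next = np.asarray(u_prev, dtype=float) + learning_gain * np.asarray(error_prev, dtype=float)
        return np.clip(u_next, u_min, u_max)

    In the study the stimulation was applied to the triceps during tracking; here the arrays simply stand for per-sample stimulation levels and tracking errors recorded on one trial.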

    Human-Robot Interaction architecture for interactive and lively social robots

    Get PDF
    Society is experiencing a series of demographic changes that can result in an imbalance between the working-age and non-working-age populations. One of the solutions being considered to mitigate this problem is the introduction of robots in multiple sectors, including the service sector. For this to be a viable solution, however, robots need, among other capabilities, to be able to interact with humans successfully. In the context of applying social robots to the care of older adults, this thesis seeks to endow a social robot with the abilities required for natural human-robot interaction. The main objective is to contribute to the body of knowledge in the area of Human-Robot Interaction with a new, platform-independent, modular approach that focuses on giving roboticists the tools required to develop applications that involve interactions with humans. In particular, the thesis addresses three problems: (i) modelling interactions between a robot and a user; (ii) endowing the robot with the expressive capabilities required for successful communication; and (iii) giving the robot a lively appearance. The approach to dialogue modelling proposes to model dialogues as sequences of atomic interaction units, called Communicative Acts (CAs). CAs can be parametrized at runtime to achieve different communicative goals, and are equipped with mechanisms for handling some of the uncertainties that arise during interaction. Two dimensions are used to identify the required CAs: initiative (held by the robot or the user) and intention (to retrieve information or to convey it). These basic CAs can be combined hierarchically to create more complex, reusable structures. This simplifies the creation of new interactions by allowing developers to focus exclusively on designing the flow of the dialogue, without having to re-implement functionality that is common to all dialogues (such as error handling). The expressiveness of the robot is based on a library of predefined multimodal gestures, or expressions, modelled as state machines. The module managing expressiveness receives requests to perform expressions, schedules their execution to avoid conflicts, loads them, and ensures that they complete without problems. The system can also generate expressions at runtime from a list of unimodal actions (an utterance, the motion of a joint, and so on). A key feature of the proposed expressiveness manager is the integration of a set of modulation techniques that can modify the robot's expressions at runtime. This allows the robot to adapt its expressions to the particularities of a given situation (which also increases the variability of its expressiveness) and to display different internal states, such as its emotional state, with a limited set of gestures. Considering that being recognized as a living being is a prerequisite for engaging in social encounters, the perception of a social robot as a living entity is a key requirement for fostering human-robot interaction. Two approaches are proposed. The first method generates actions on the robot's different interfaces at intervals; the frequency and intensity of these actions are defined by a signal representing the robot's pulse, which can be adapted to the context of the interaction or to the robot's internal state. The second method enriches the robot's utterances by predicting the non-verbal expressions that should accompany them, based on the content of the robot's message and its communicative intention. A deep learning model receives the transcription of the robot's utterance, predicts which expressions should accompany it, and synchronizes them so that each selected gesture starts at the appropriate time. The model combines a Long Short-Term Memory network-based encoder with a Conditional Random Field to generate the sequence of gestures that accompany the utterance. All of these elements form the core of a modular Human-Robot Interaction architecture that has been integrated on multiple platforms and tested under different conditions.
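    The thesis describes Communicative Acts only at the level of the abstract above; the following is a minimal structural sketch in Python, assuming only that a CA is characterized by initiative and intention, can be parametrized at runtime, and can be composed hierarchically. All class, field and value names are illustrative and are not taken from the thesis.

    from dataclasses import dataclass, field
    from enum import Enum, auto
    from typing import List, Optional

    class Initiative(Enum):
        ROBOT = auto()   # the robot starts the exchange
        USER = auto()    # the user starts the exchange

    class Intention(Enum):
        GET_INFO = auto()   # the CA tries to obtain information
        GIVE_INFO = auto()  # the CA conveys information

    @dataclass
    class CommunicativeAct:
        """Atomic dialogue unit, combinable into larger reusable structures."""
        initiative: Initiative
        intention: Intention
        prompt: Optional[str] = None                       # bound at runtime
        children: List["CommunicativeAct"] = field(default_factory=list)

    # Hierarchical composition: a small, reusable "ask the user's name" exchange.
    ask_name = CommunicativeAct(
        initiative=Initiative.ROBOT,
        intention=Intention.GET_INFO,
        prompt="What is your name?",
        children=[
            CommunicativeAct(Initiative.ROBOT, Intention.GIVE_INFO,
                             prompt="Nice to meet you."),
        ],
    )

    Under this reading, a dialogue designer would only specify the flow between such units, while error handling and speech input/output would live inside the CA implementation.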

    Proceedings of the 2021 DigitalFUTURES

    Get PDF
    This open access book is a compilation of selected papers from 2021 DigitalFUTURES, the 3rd International Conference on Computational Design and Robotic Fabrication (CDRF 2021). The work focuses on novel techniques for computational design and robotic fabrication. The contents will be valuable to academic researchers, designers, and engineers in industry, and readers will also encounter new ideas about understanding material intelligence in architecture.

    Motion-Based Video Games for Stroke Rehabilitation with Reduced Compensatory Motions

    Get PDF
    Stroke is the leading cause of long-term disability among adults in industrialized nations, with 80% of stroke survivors experiencing motor disabilities. Recovery requires daily exercise with a high number of repetitions, often without therapist supervision. We explore the design space of video games for stroke rehabilitation using Wii remotes and webcams as input devices, and share the lessons we learned about what makes games therapeutically useful. We demonstrate the feasibility of using games for home-based stroke therapy with a six-week case study, showing that exercise with games can help recovery even 17 years after a stroke, and we share lessons for game systems to be used at home as part of outpatient therapy. As a major issue with home-based therapy, we identify that unsupervised exercises lead to compensatory motions that can impede recovery and create new health issues. We reliably detect torso compensation in shoulder exercises using a custom harness, and develop a game that meaningfully uses both exercise and compensation as inputs, providing in-game feedback that reduces compensation in a number of ways. We evaluate alternative ways of reducing compensation in controlled experiments and show that techniques from operant conditioning are effective in significantly reducing compensatory behavior compared to existing approaches.
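    The paper detects torso compensation with a custom harness, but the abstract does not give the detection rule; as a minimal hypothetical sketch, the check below flags trunk lean that exceeds a calibrated resting posture by more than a fixed threshold and returns a magnitude the game could use as a second input. The threshold, units and names are assumptions for illustration only.

    def torso_compensation(lean_deg, baseline_deg, threshold_deg=10.0):
        """Hypothetical compensation check for shoulder exercises.

        lean_deg      current trunk lean in degrees (e.g. from a harness sensor)
        baseline_deg  lean recorded during a resting calibration
        threshold_deg allowed excursion before compensation is flagged
        """
        excess = lean_deg - baseline_deg
        return excess > threshold_deg, max(excess, 0.0)

    # Example: use both the flag and its magnitude inside the game loop.
    compensating, magnitude = torso_compensation(lean_deg=18.0, baseline_deg=4.0)
    if compensating:
        pass  # e.g. dim the screen or scale the score down in proportion to magnitude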