544 research outputs found
Adaptable videogame platform for interactive upper extremity rehabilitation
The primary objective of this work is to design a recreational rehabilitation videogame platform for customizing motivating games that interactively encourage purposeful upper extremity gross motor movements. Virtual reality (VR) technology is a popular application for rehabilitation therapies, but there is a constant need for more accessible and affordable systems. We have developed a recreational VR game platform that can be used as an independent therapy supplement without laboratory equipment and is inexpensive, motivating, and adaptable. Its behaviors and interactive features can be easily modified and customized based on players' limitations or progress.
A real-time method of capturing hand movements using programmed color-detection mechanisms to create the simulated virtual environments (VEs) is implemented. Color markers are tracked and simultaneously given coordinates in the VE, where the player sees representations of their hands and other interacting objects whose behaviors can be customized and adapted to fit therapeutic objectives and players' interests. After gross motor task repetition and involvement in the adaptable games, mobility of the upper extremities may improve. The videogame platform is expanded and optimized to allow modifications to base inputs and algorithms for object interactions through graphical user interfaces, thus meeting the need for adaptability in VR rehabilitation.
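The colour-marker tracking idea can be sketched in a few lines. The following is a minimal illustration, not the platform's actual code: it thresholds a frame against an RGB range and returns the marker's centroid, which would then be mapped to a coordinate in the virtual environment. The function name and the colour bounds are assumptions made for the example.

```python
import numpy as np

def track_marker(frame, lo, hi):
    """Return the (x, y) centroid of pixels whose RGB values fall
    inside the [lo, hi] colour range, or None if no pixel matches.
    frame: H x W x 3 uint8 array; lo, hi: length-3 colour bounds."""
    lo, hi = np.asarray(lo), np.asarray(hi)
    mask = np.all((frame >= lo) & (frame <= hi), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Synthetic 8x8 frame with a red 2x2 "marker" at columns 4-5, rows 2-3.
frame = np.zeros((8, 8, 3), dtype=np.uint8)
frame[2:4, 4:6] = (200, 30, 30)
print(track_marker(frame, (150, 0, 0), (255, 80, 80)))  # → (4.5, 2.5)
```

In a real pipeline the same centroid computation would run per video frame, and the returned coordinates would drive the player's hand representation in the VE.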
Robotic Platforms for Assistance to People with Disabilities
People with congenital and/or acquired disabilities constitute a large share of dependent people today. Robotic platforms to help people with disabilities are being developed with the aim of providing both rehabilitation treatment and assistance to improve their quality of life. A high demand for robotic platforms that provide assistance during rehabilitation is expected because of the state of world health due to the COVID-19 pandemic. The pandemic has left countries facing major challenges in ensuring the health and autonomy of their disabled populations. Robotic platforms are necessary to ensure assistance and rehabilitation for disabled people in the current global situation. The capacity of robotic platforms in this area must be continuously improved to benefit the healthcare sector in terms of chronic disease prevention, assistance, and autonomy. For this reason, research on human–robot interaction in these robotic assistance environments must grow and advance, because this topic demands sensitive and intelligent robotic platforms equipped with complex sensory systems, high handling functionality, safe control strategies, and intelligent computer vision algorithms. This Special Issue has published eight papers covering recent advances in the field of robotic platforms to assist disabled people in daily or clinical environments. The papers address innovative solutions in this field, including affordable assistive robotic devices, new techniques in computer vision for intelligent and safe human–robot interaction, and advances in mobile manipulators for assistive tasks.
Space Science Opportunities Augmented by Exploration Telepresence
Since the end of the Apollo missions to the lunar surface in December 1972, humanity has exclusively conducted scientific studies on distant planetary surfaces using teleprogrammed robots. Operations and science return for all of these missions are constrained by two issues related to the great distances between terrestrial scientists and their exploration targets: high communication latencies and limited data bandwidth.
Despite the proven successes of in-situ science being conducted using teleprogrammed robotic assets such as Spirit, Opportunity, and Curiosity rovers on the surface of Mars, future planetary field research may substantially overcome latency and bandwidth constraints by employing a variety of alternative strategies that could involve: 1) placing scientists/astronauts directly on planetary surfaces, as was done in the Apollo era; 2) developing fully autonomous robotic systems capable of conducting in-situ field science research; or 3) teleoperation of robotic assets by humans sufficiently proximal to the exploration targets to drastically reduce latencies and significantly increase bandwidth, thereby achieving effective human telepresence.
This third strategy has been the focus of experts in telerobotics, telepresence, planetary science, and human spaceflight during two workshops held from October 3–7, 2016, and July 7–13, 2017, at the Keck Institute for Space Studies (KISS). Based on findings from these workshops, this document describes the conceptual and practical foundations of low-latency telepresence (LLT), opportunities for using derivative approaches for scientific exploration of planetary surfaces, and circumstances under which employing telepresence would be especially productive for planetary science. An important finding of these workshops is that there has been limited study of the advantages of planetary science via LLT. A major recommendation is that space agencies such as NASA should substantially increase science return with greater investments in this promising strategy for conducting science at distant exploration sites.
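The latency constraint is easy to quantify. The short sketch below computes one-way light-travel times for illustrative Earth–Mars distances, contrasting the minutes-long delays of ground-based teleoperation with the millisecond-scale latency a teleoperator in low Mars orbit would experience; the distances are round figures, not mission values.

```python
C_KM_S = 299_792.458  # speed of light, km/s

def one_way_latency_s(distance_km):
    """One-way signal travel time in seconds over a given distance."""
    return distance_km / C_KM_S

# Earth–Mars distance ranges roughly from 54.6e6 km (closest approach)
# to about 401e6 km (greatest separation).
print(round(one_way_latency_s(54.6e6), 1))      # ≈ 182.1 s (~3 minutes)
print(round(one_way_latency_s(401e6), 1))       # ≈ 1337.6 s (~22 minutes)
# An operator ~400 km above the surface asset, by contrast:
print(round(one_way_latency_s(400) * 1000, 2))  # ≈ 1.33 ms
```

The three-orders-of-magnitude gap between these numbers is the core motivation for low-latency telepresence from planetary orbit.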
Soft Morphological Computation
Soft Robotics is a relatively new area of research, where progress in material science has powered the next generation of robots, exhibiting biological-like properties such as soft/elastic tissues, compliance, resilience and more. One of the issues when employing soft robotics technologies is the soft nature of the interactions arising between the robot and its environment. These interactions are complex, and their dynamics are non-linear and hard to capture with known models. In this thesis we argue that complex soft interactions can actually be beneficial to the robot, and give rise to rich stimuli which can be used for the resolution of robot tasks. We further argue that the usefulness of these interactions depends on statistical regularities, or structure, that appear in the stimuli. To this end, robots should appropriately employ their morphology and their actions to influence the system–environment interactions such that structure can arise in the stimuli. In this thesis we show that learning processes can be used to perform such a task. Following this rationale, this thesis proposes and supports the theory of Soft Morphological Computation (SoMComp), by which a soft robot should appropriately condition, or 'affect', the soft interactions to improve the quality of the physical stimuli arising from them. SoMComp is composed of four main principles: Soft Proprioception, Soft Sensing, Soft Morphology and Soft Actuation. Each of these principles is explored in the context of haptic object recognition or object handling in soft robots. Finally, this thesis provides an overview of this research and its future directions.
AHDB CP17
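As an illustration of the idea that statistical regularities in soft-sensor stimuli can support haptic object recognition, the toy sketch below classifies synthetic pressure traces with a nearest-centroid learner. The signal model, feature set, and object classes are all invented for this example and do not reproduce the thesis's sensors or learning method.

```python
import numpy as np

rng = np.random.default_rng(0)

def trace(stiffness, n=100):
    """Simulated pressure trace from squeezing an object:
    stiffer objects ramp up pressure faster (toy model)."""
    t = np.linspace(0, 1, n)
    return stiffness * t + rng.normal(0, 0.05, n)

def features(sig):
    # Simple statistics capturing the "structure" in the stimulus.
    return np.array([sig.mean(), sig.std(), sig[-1] - sig[0]])

# Train a nearest-centroid classifier on two object classes.
train = {"sponge": [features(trace(0.5)) for _ in range(10)],
         "wood":   [features(trace(2.0)) for _ in range(10)]}
centroids = {k: np.mean(v, axis=0) for k, v in train.items()}

def classify(sig):
    f = features(sig)
    return min(centroids, key=lambda k: np.linalg.norm(f - centroids[k]))

print(classify(trace(1.9)))  # → wood
print(classify(trace(0.6)))  # → sponge
```

Even this crude feature set separates the classes, because the robot's squeezing action imposes structure (a stiffness-dependent ramp) on the raw stimulus.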
Perception and intelligent localization for autonomous driving
Master's in Computer Engineering and Telematics (Mestrado em Engenharia de Computadores e Telemática). Computer vision and sensor fusion are relatively recent subjects, yet widely adopted in the development of autonomous robots that must adapt to their surrounding environment. This thesis approaches both in order to achieve perception in the scope of autonomous driving. The use of cameras for this purpose is a rather complex subject. Unlike classic sensing devices, which always provide the same type of precise information obtained deterministically, the successive images acquired by a camera are filled with highly varied information, all of it ambiguous and extremely difficult to extract. The use of cameras for robotic sensing is the closest we have come to the system of greatest importance in human perception: the vision system. Computer vision is a scientific discipline that encompasses areas such as signal processing, artificial intelligence, mathematics, control theory, neurobiology and physics.
The support platform on which the study in this thesis was developed is ROTA (RObô Triciclo Autónomo), together with all the elements comprising its environment. In this context, the thesis describes the approaches introduced to solve the challenges the robot faces in its environment: detection and perception of lane markings, and detection of obstacles, traffic lights, the crosswalk zone, and the roadwork zone. It also describes a calibration system and an implementation of image perspective removal, developed to map the perceived elements to real-world distances. Building on the perception system, the development of self-localization is addressed, integrated into a distributed architecture that includes navigation with intelligent planning. All the work developed in the course of this thesis is essentially centred on robotic perception in the context of autonomous driving.
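The perspective-removal step described above amounts to applying a ground-plane homography: a 3x3 matrix maps an image pixel to metric coordinates on the road plane. The sketch below uses a made-up calibration matrix purely for illustration; in practice the matrix would come from the calibration procedure the thesis describes.

```python
import numpy as np

# Hypothetical ground-plane homography (values are made up; a real H
# is obtained by calibrating against markers at known road distances).
H = np.array([[0.01, 0.000, -3.2],
              [0.000, 0.02, -4.8],
              [0.000, 0.001, 1.0]])

def image_to_ground(u, v):
    """Map pixel (u, v) to (x, y) in metres on the road plane."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w  # divide by w: homogeneous -> Cartesian

x, y = image_to_ground(320, 400)
print(round(x, 3), round(y, 3))
```

Once lane markings or obstacles are detected in the image, the same mapping converts their pixel positions into the real-world distances the planner needs.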
Improving Human-Robot Cooperation and Safety In The Shared Automated Workplace
Modern industries take advantage of human-robot interaction to facilitate better manufacturing processes, particularly in applications where a human works in a shared workplace with robots. In manufacturing settings where separation barriers, such as fences, are not used to protect human workers, methods must be implemented to guarantee human safety. Despite existing methods, which define specifications and scenarios for human-robot cooperation in industry, new approaches are needed to provide a safer workplace while enhancing productivity. This thesis provides collision-free techniques for safe human-robot collaboration in an industrial setting. Human-robot interaction in industry is studied to develop novel solutions and provide a secure and productive industrial environment. Maintaining a safe distance between a human worker and a manipulating robot, to prevent a collision, is an important subject of this work. This thesis presents a safe workplace by proposing an effective human-tracking method using a sensor network. The proposed technique utilizes a non-linear Kalman filter and Gaussian optimization to reduce the risk of collision between humans and robots. In this regard, selecting the most sensitive sensors to update the Kalman filter's gain in a noisy environment is crucial. To this end, reliable sensor-selection schemes are investigated, and a strategy based on multi-objective optimization is implemented. Finally, safe human-robot cooperation is investigated where humans work close to the robot or directly manipulate it in a shared task. In this case, the human's hand is the most vulnerable limb and should be protected to achieve safe interaction. In this thesis, force myography (FMG) is used to detect human hand activities in order to recognize hand gestures, detect the force exerted by a worker's hand, and predict human intention. This information is then used to control robot parameters, such as the gripper's force.
Furthermore, a human intention prediction scheme using FMG features and based on a recurrent neural network (RNN) topology is proposed, to ensure safety during several industrial collaboration scenarios. The validity of the proposed approaches and the performance of the suggested control techniques are demonstrated through extensive simulation and practical experimentation. The results show that the proposed approaches reduce the collision risk in human-robot collaboration.
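The human-tracking idea can be illustrated with a textbook Kalman filter. The sketch below uses a linear constant-velocity model for a single position coordinate rather than the thesis's non-linear filter, sensor network, and Gaussian optimization; all parameters and the simulated worker trajectory are illustrative.

```python
import numpy as np

np.random.seed(0)               # deterministic demo
dt = 0.1
F = np.array([[1, dt], [0, 1]])  # state transition for [position, velocity]
H = np.array([[1, 0]])           # only position is measured
Q = 1e-3 * np.eye(2)             # process-noise covariance
R = np.array([[0.05]])           # measurement-noise covariance

def kf_step(x, P, z):
    # Predict the state forward one time step.
    x = F @ x
    P = F @ P @ F.T + Q
    # Correct with the noisy position measurement z.
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros((2, 1)), np.eye(2)
for t in range(50):              # worker walking at ~1 m/s, noisy sensor
    z = np.array([[t * dt * 1.0 + np.random.normal(0, 0.2)]])
    x, P = kf_step(x, P, z)
print(round(x[1, 0], 2))         # estimated velocity, near 1.0 m/s
```

A safety monitor would compare the filtered human position against the robot's workspace and slow or stop the manipulator when the separation distance drops below a threshold.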
Design of a Multiple-User Intelligent Feeding Robot for Elderly and Disabled
The number of elderly people around the world is growing rapidly. This has led to an increase in the number of people who seek assistance and adequate service, either at home or in long-term care institutions, to successfully accomplish their daily activities. Responding to these needs has been a burden on the health care system in terms of labour and associated costs and has motivated research into developing alternative services using new technologies.
Various intelligent, and non-intelligent, machines and robots have been developed to help the elderly and people with upper limb disabilities or dysfunctions gain independence in eating, which is one of the most frequent and time-consuming everyday tasks. However, in almost all cases, the proposed systems are designed only for the personal use of one individual, and little effort has previously been made to design a multiple-user feeding robot. The feeding requirements of the elderly in environments such as senior homes, where many residents dine together at least three times per day, have not been extensively researched before.
The aim of this research was to develop a machine to feed multiple elderly people based on their characteristics and feeding needs, as determined through observations at a nursing home. Observations of the elderly during meal times revealed that almost 40% of the population was totally dependent on nurses or caregivers to be fed. Most of those remaining suffered from hand tremors, joint pain or lack of hand muscle strength, which made utensil manipulation and coordination very difficult and the eating process both messy and lengthy. In addition, more than 43% of the elderly were very slow in eating because of chewing and swallowing problems, and most of the rest were slow in scooping and directing utensils toward their mouths. Consequently, one nurse could only attend to a maximum of two diners simultaneously. In order to manage the needs of all elderly diners, additional staff members were required. The limited time allocated for each meal and the daily progression of the seniors' disabilities also made mealtime very challenging.
Based on the caregivers’ opinion, many of the elderly in such environments can benefit from a machine capable of feeding multiple users simultaneously. Since eating is a slow procedure, the idle state of the robot during one user’s chewing and swallowing time can be allotted for feeding another person who is sitting at the same table.
The observations and studies resulted in the design of a food tray and the selection of an appropriate robot and applicable user interface. The proposed system uses a 6-DOF serial articulated robot in the center of a four-seat table, along with a specifically designed food tray, to feed one to four people. It employs a vision interface for food detection and recognition. Deriving the dynamic equations of the robotic system and simulating it were used to verify its dynamic behaviour before any prototyping and real-time testing.
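The core scheduling idea, using the robot's idle time during one diner's chewing to feed another, can be sketched as a greedy round-robin. The function below is a hypothetical illustration, not the thesis's controller; the chewing and scooping times are made-up numbers.

```python
def feeding_schedule(chew_times, horizon, scoop_time=1):
    """Greedy round-robin feeder: at each step, serve the diner who has
    been ready longest; if nobody is ready, idle until someone is.
    chew_times: seconds each diner needs to chew one bite (illustrative).
    Returns a log of (time_bite_delivered, diner_index) pairs."""
    ready_at = [0] * len(chew_times)  # time each diner can accept a bite
    t, log = 0, []
    while t < horizon:
        waiting = [i for i in range(len(chew_times)) if ready_at[i] <= t]
        if not waiting:
            t = min(ready_at)          # robot idles until a diner is ready
            continue
        i = min(waiting, key=lambda i: ready_at[i])  # longest-ready diner
        t += scoop_time                # scoop and deliver one bite
        ready_at[i] = t + chew_times[i]
        log.append((t, i))
    return log

# Four diners with different chewing speeds (made-up numbers).
sched = feeding_schedule([4, 6, 5, 8], horizon=20)
print(sched)
```

Because a bite takes far less time to deliver than to chew, the schedule interleaves all four diners instead of leaving the robot idle, which is the rationale given for a multiple-user design.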
Rehabilitation Engineering
Population ageing has major consequences and implications in all areas of our daily life as well as in other important aspects, such as economic growth, savings, investment and consumption, labour markets, pensions, property and care from one generation to another. Additionally, health and related care, family composition and lifestyle, housing and migration are also affected. Given the rapid increase in the ageing of the population and the further increase expected in the coming years, an important problem that must be faced is the corresponding increase in chronic illness, disabilities, and loss of functional independence endemic to the elderly (WHO 2008). For this reason, novel methods of rehabilitation and care management are urgently needed. This book covers many rehabilitation support systems and robots developed for the upper limbs, the lower limbs, and the visually impaired. Beyond the upper limbs, lower-limb research is also discussed, such as a motorized footrest for an electric-powered wheelchair and a standing assistance device.