Mobile assistive technologies for the visually impaired
There are around 285 million visually impaired people worldwide, and around 370,000 people are registered as blind or partially sighted in the UK. Ongoing advances in information technology (IT) are increasing the scope for IT-based mobile assistive technologies to facilitate the independence, safety, and improved quality of life of the visually impaired. Research is being directed at making mobile phones and other handheld devices accessible via our haptic (touch) and audio sensory channels. We review research and innovation within the field of mobile assistive technology for the visually impaired and, in so doing, highlight the need for successful collaboration between clinical expertise, computer science, and domain users to fully realize the potential benefits of such technologies. We initially reflect on research that has been conducted to make mobile phones more accessible to people with vision loss. We then discuss innovative assistive applications designed for the visually impaired that are either delivered via mainstream devices and can be used while in motion (e.g., mobile phones) or are embedded within an environment that may be in motion (e.g., public transport) or within which the user may be in motion (e.g., smart homes).
A simple 5-DOF walking robot for space station application
Robots on the NASA space station have a potential range of applications, from assisting astronauts during EVA (extravehicular activity), to replacing astronauts in the performance of simple, dangerous, and tedious tasks, to performing routine tasks such as inspections of structures and utilities. To provide a vehicle for demonstrating the pertinent technologies, a simple robot is being developed for locomotion and basic manipulation on the proposed space station. In addition to the robot, an experimental testbed was developed, including a 1/3-scale (1.67-meter modules) truss and a gravity compensation system to simulate a zero-gravity environment. The robot comprises two flexible links connected by a rotary joint, with a two-degree-of-freedom wrist joint and gripper at each end. The grippers screw into threaded holes in the nodes of the space station truss, enabling the robot to walk by alternately shifting the base of support from one foot (gripper) to the other. Present efforts are focused on mechanical design, application of sensors, and development of control algorithms for lightweight, flexible structures. Long-range research will emphasize the development of human interfaces to permit a range of control modes from teleoperated to semiautonomous, and the coordination of robot/astronaut and multiple-robot teams.
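The alternating-gripper locomotion described in this abstract can be sketched as a simple step-cycle routine: the free gripper detaches, moves to the next truss node, and reattaches, after which the roles of base and swing gripper are exchanged. This is a minimal illustrative sketch; the controller callbacks and gripper names are hypothetical, not from the original system.

```python
# Minimal sketch of the alternating-gripper walking cycle. The 1/3-scale
# truss module pitch (1.67 m) is taken from the abstract; the callback
# interface (attach/detach/move_to) is an illustrative assumption.

MODULE_PITCH = 1.67  # metres between adjacent truss nodes on the testbed

def step_cycle(base, swing, nodes, attach, detach, move_to):
    """One step: the swing gripper releases, moves to the next node, and
    reattaches; the swing gripper then becomes the new base of support."""
    detach(swing)
    target = nodes.pop(0)      # next threaded node along the truss
    move_to(swing, target)
    attach(swing)              # screw into the threaded hole at the node
    return swing, base         # exchange the roles of the two grippers

def walk(nodes, attach, detach, move_to):
    """Walk the robot along a list of truss nodes, one node per step."""
    base, swing = "gripper_A", "gripper_B"
    while nodes:
        base, swing = step_cycle(base, swing, nodes, attach, detach, move_to)
    return base
```

Because each step strictly alternates which gripper supports the robot, the sequencing logic alone guarantees that one gripper is always attached, which is the key safety property of this gait.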
Haptic Media Scenes
The aim of this thesis is to apply new media phenomenology and enactive embodied cognition approaches to explain the role of haptic sensitivity and communication in personal computer environments for productivity. Prior theory has given little attention to the role of the haptic senses in influencing cognitive processes and does not frame the richness of haptic communication in interaction design, as haptic interactivity in HCI has historically tended to be designed and analyzed from a perspective on communication as transmission, the sending and receiving of haptic signals. The haptic sense may mediate not only contact confirmation and affirmation but also rich semiotic and affective messages; yet there is a strong contrast between this inherent ability of haptic perception and current support for such haptic communication in interfaces. I therefore ask: How do the haptic senses (touch and proprioception) impact our cognitive faculties when mediated through digital and sensor technologies? How may these insights be employed in interface design to facilitate rich haptic communication? To answer these questions, I use theoretical close readings that embrace two research fields, new media phenomenology and enactive embodied cognition. The theoretical discussion is supported by neuroscientific evidence and tested empirically through case studies centered on digital art. I use these insights to develop the concept of the haptic figura, an analytical tool to frame the communicative qualities of haptic media. The concept gauges rich machine-mediated haptic interactivity and communication in systems with a material solution supporting active haptic perception and the mediation of semiotic and affective messages that are understood and felt. As such, the concept may function as a design tool for developers, but also for media critics evaluating haptic media.
The tool is used to frame a discussion of the opportunities and shortcomings of haptic interfaces for productivity, differentiating between media systems for the hand and for the full body. The significance of this investigation lies in demonstrating that haptic communication is an underutilized element in personal computer environments for productivity, and in providing an analytical framework for a more nuanced understanding of haptic communication as enabling the mediation of a range of semiotic and affective messages beyond notification and confirmation interactivity.
Collaborative robot control with hand gestures
Double-degree master's programme with the Université Libre de Tunis. This thesis focuses on hand gesture recognition, proposing an architecture to control a collaborative robot in real time based on vision: hand detection, tracking, and gesture recognition for interaction with an application via hand gestures. The first stage of our system detects and tracks a bare hand against a cluttered background using skin detection and contour comparison. The second stage recognizes hand gestures using a machine-learning algorithm. Finally, an interface was developed to control the robot.
Our hand gesture recognition system consists of two parts. In the first part, for every frame captured from a camera we extract the keypoints of every training image using a machine-learning algorithm and assemble the keypoints from every image into a keypoint map. This map is treated as the input to our processing algorithm, which uses several methods to recognize the fingers of each hand.
In the second part, we use a 3D camera with infrared capability to obtain a 3D model of the hand and integrate it into our system. We then track and recognize the fingers of each hand, which makes it possible to count the extended fingers and to distinguish each finger's pattern.
An interface to control the robot was built on the previous steps, providing real-time processing and a dynamic 3D representation.
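The finger-counting stage described above can be illustrated with a small geometric sketch: given a palm centre and fingertip keypoints (such as those recovered from the 3D/infrared camera), a finger is counted as extended when its tip lies beyond a distance threshold from the palm. The keypoint names, the palm-radius normalization, and the threshold value are illustrative assumptions, not details from the thesis.

```python
import numpy as np

# Hypothetical threshold: a fingertip counts as extended when it lies more
# than this many palm-radii from the palm centre.
EXTENDED_RATIO = 1.6

def count_extended_fingers(palm_center, palm_radius, fingertips):
    """Return the names of fingers whose tips lie far enough from the palm
    centre to be considered extended.

    palm_center: (x, y, z) of the palm centroid
    palm_radius: approximate palm radius, used to normalize distances
    fingertips:  mapping of finger name -> (x, y, z) tip position
    """
    palm = np.asarray(palm_center, dtype=float)
    extended = []
    for name, tip in fingertips.items():
        dist = np.linalg.norm(np.asarray(tip, dtype=float) - palm)
        if dist > EXTENDED_RATIO * palm_radius:
            extended.append(name)
    return extended
```

Normalizing by palm radius makes the rule roughly scale-invariant, so the same threshold works for hands at different distances from the camera.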
Tangible user interfaces: past, present and future directions
In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real, non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy, and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs, including perspectives from the cognitive sciences, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.
A demonstration and comparative analysis of haptic performance using a Gough-Stewart platform as a wearable haptic feedback device
In many hazardous work environments, contact tasks ranging from manufacturing to disassembly to emergency response are performed by industrial manipulators. Due to the hazardous and complex nature of these environments, teleoperation is often employed. When such is the case, the operator is left to interpret a large amount of data during task completion due to the complexity of modern robotic systems and the possible complexity of the tasks. This information is usually processed visually, which can lead to sensory overload. To mitigate this, the information processing can also be distributed across other sensory modalities, such as audition or haptics. The University of Texas at Austin's TeMoto hands-free interface reduces the burden on the operator of commanding remote systems by enabling the use of gestural and verbal commands to complete a range of tasks, but the removal of a mechanical interactive device from the operator interface complicates the inclusion of haptic feedback. In this work, a standalone Gough-Stewart platform, previously configured as a wearable haptic feedback device for the Nuclear and Applied Robotics Group at the University of Texas at Austin, provides real-time haptic feedback to the unconstrained hand(s) of the operator. In doing so, this haptic interface can be employed with the intent of enhancing situational awareness and minimizing operator stress by imparting forces and torques to the user based on those imparted on the end-effector of the industrial manipulator. While multiple technical and human-factors issues must be addressed, this effort focuses on integrating the system and evaluating its performance for various industrial manipulator designs and sensor modalities.
After testing various digital signal processing techniques, functionality was demonstrated on one series-elastic and two rigid industrial manipulators, each with different force/torque data acquisition characteristics, and a comparative analysis of haptic performance was performed. Furthermore, the system was demonstrated with the TeMoto hands-free teleoperation system. Overall, the demonstrations and experiments performed in this work prove the system to be a viable, hardware-agnostic means of haptic feedback and a strong basis for future efforts.
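The kinematic core of a Gough-Stewart platform like the one used here is its inverse kinematics: given the pose of the moving plate, each of the six actuated leg lengths follows directly from the distance between paired base and platform anchor points. The sketch below uses a regular-hexagon stand-in geometry; the real wearable device's anchor coordinates are not given in the abstract.

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the z-axis (plate yaw)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def leg_lengths(base_pts, plat_pts, R, t):
    """Inverse kinematics of a Gough-Stewart platform.

    base_pts, plat_pts: (6, 3) anchor coordinates in the base and plate frames
    R: 3x3 rotation of the plate; t: (3,) translation of the plate origin
    Returns the six leg lengths |t + R p_i - b_i|.
    """
    world_plat = plat_pts @ R.T + t          # plate anchors in the base frame
    return np.linalg.norm(world_plat - base_pts, axis=1)

# Illustrative stand-in geometry: hexagonal anchors, plate half the base radius
ang = np.arange(6) * np.pi / 3
base = np.stack([np.cos(ang), np.sin(ang), np.zeros(6)], axis=1)
plat = 0.5 * base.copy()
lengths = leg_lengths(base, plat, rot_z(0.1), np.array([0.0, 0.0, 1.0]))
```

In a force-feedback configuration, the same geometry appears transposed: the Jacobian built from the unit leg directions maps the six actuator forces to the wrench (force and torque) delivered to the operator's hand.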
User-oriented evaluation of two age-appropriate assistive robots for supporting activities of daily living ("ambient assisted living" robots) in older adults with functional impairments: the MOBOT rollator and the I-SUPPORT shower robot
The aim of this work is the user-oriented evaluation of two prototype age-appropriate assistive robots for supporting activities of daily living ("ambient assisted living" [AAL] robots) in older adults with functional impairments. The prototypes are (1) a robot-assisted rollator for mobility support (MOBOT) and (2) an assistive robot for supporting showering activities (I-SUPPORT).
Manuscript I documents a systematic literature review of the methods used in previous studies evaluating robot-assisted rollators from the user perspective. Most studies show considerable methodological shortcomings, such as insufficient sample sizes or sample descriptions, participants not representative of the user group of robot-assisted rollators, no suitable, standardized, and validated assessment methods, and/or no inferential statistics. No generic methodological approach for the evaluation of robot-assisted rollators could be identified. Manuscript I concludes with recommendations for designing and conducting future studies evaluating robot-assisted rollators as well as other AAL systems.
Manuscript II analyzes the findings of the studies identified in Manuscript I. The results regarding the added value of the innovative assistance functions of robot-assisted rollators are very heterogeneous, although users generally perceive them positively. The great heterogeneity and the methodological shortcomings of the studies severely limit the interpretability of their findings. Overall, Manuscript II shows that the evidence on the effectiveness and positive perception of robot-assisted rollators from the user perspective is still insufficient.
Based on the findings and recommendations of the systematic literature reviews in Manuscripts I and II, the user-oriented evaluation studies of the MOBOT rollator were designed and conducted (Manuscripts III-VI).
Manuscript III examines the effectiveness of the navigation system integrated into the MOBOT rollator in potential users (older persons with gait disorders or who use a rollator as a walking aid in everyday life). It provides the first statistical evidence that such an assistance function is effective in improving users' navigation performance (e.g., shorter stopping time, shorter distance traveled), particularly for those with cognitive impairments, in a realistic application scenario.
Manuscript IV investigates the concurrent validity of the MOBOT-integrated gait analysis system in potential users. Compared with an established reference standard (the GAITRite® system), it shows high concurrent validity for capturing temporal, but not spatial, gait parameters. The latter can also be measured with high consistency, but only with limited absolute accuracy.
Manuscript V covers the user-oriented evaluation of the obstacle-avoidance assistance function integrated into the MOBOT rollator and demonstrates for the first time the effectiveness of such a function in potential users. Using the obstacle-avoidance approach newly developed for the MOBOT rollator, participants showed significant improvements in negotiating an obstacle course (fewer collisions and lower approach speed toward the obstacles).
Manuscript VI documents the effectiveness of, and user satisfaction with, the MOBOT rollator's stand-up assistance in potential users. It shows that the success rate of the sit-to-stand transfer of older persons with motor impairments can be significantly improved by the stand-up assistance. The results also demonstrate high user satisfaction with this assistance function, particularly among persons with a higher body mass index.
Manuscript VII investigates the human-robot interaction between the I-SUPPORT shower robot and its potential users (older persons with difficulties bathing or showering) and examines their effectiveness and satisfaction with three operating modes of differing autonomy. The results document that increasing user control (i.e., decreasing autonomy of the shower robot) reduces not only the effectiveness of showering a defined body region but also user satisfaction.
Manuscript VIII covers the evaluation of a specific user training program for gesture-based human-robot interaction with the I-SUPPORT shower robot. It shows that such training significantly improves both the execution of gestures by potential users and the gesture recognition rate of the shower robot, indicating an overall optimized human-robot interaction as a result of the training. Participants with the poorest initial gesture performance and the greatest fear of technology benefited most from the user training.
Overall, the results of the user-oriented evaluation studies of the MOBOT rollator confirm the effectiveness and validity of its innovative subfunctions. They point to a high potential of the assistance functions (navigation system, obstacle avoidance, stand-up assistance) for improving the mobility of older adults with motor impairments. Against the background of the methodological shortcomings and the insufficient evidence base in this area, this dissertation provides the first statistical evidence of the added value of such subfunctions in potential users and thus makes an important contribution to closing the research gap regarding user-oriented proof of the effectiveness and validity of robot-assisted rollators and their innovative subfunctions.
The results of the I-SUPPORT shower robot studies provide important insights into human-robot interaction in older age. They show that, for effective interaction with older users, operating modes with a high degree of shower-robot autonomy are necessary. Despite their limited control over the robot, users were in fact most satisfied with the most autonomous operating mode. Furthermore, the results on gesture-based interaction with the I-SUPPORT shower robot underscore that future development of age-appropriate assistive robots with gesture-based interaction should address not only technical improvements but also the assurance and improvement of the quality of users' gestures for human-robot interaction through suitable training or instruction measures. The user training presented here could serve as a possible model.
I-Support: A robotic platform of an assistive bathing robot for the elderly population
In this paper we present a prototype integrated robotic system, the I-Support bathing robot, that aims at supporting new aspects of assisted daily-living activities in a real-life scenario. The paper focuses on describing and evaluating key novel technological features of the system, with emphasis on cognitive human-robot interaction modules and their evaluation through a series of clinical validation studies. The I-Support project as a whole has envisioned the development of an innovative, modular, ICT-supported service robotic system that assists frail seniors to safely and independently complete an entire sequence of physically and cognitively demanding bathing tasks, such as properly washing their back and their lower limbs. A variety of innovative technologies have been researched, and a set of advanced modules for sensing, cognition, actuation, and control have been developed and seamlessly integrated to enable the system to adapt to the target population's abilities. These technologies include: human activity monitoring and recognition, adaptation of a motorized chair for safe transfer of the elderly in and out of the bathing cabin, a context awareness system that provides full environmental awareness, as well as a prototype soft robotic arm and a set of user-adaptive robot motion planning and control algorithms. This paper focuses in particular on the multimodal action recognition system, developed to monitor, analyze, and predict user actions with a high level of accuracy and detail in real time, which are then interpreted as robotic tasks. In the same framework, the analysis of human actions that have become available through the project's multimodal audio-gestural dataset has led to the successful modeling of human-robot communication, achieving an effective and natural interaction between users and the assistive robotic platform.
To evaluate the I-Support system, two multinational validation studies were conducted under realistic operating conditions at two clinical pilot sites. Some of the findings of these studies are presented and analyzed in the paper, showing good results in terms of (i) high acceptability of the system's usability by this particularly challenging target group, the elderly end-users, and (ii) overall task effectiveness of the system in different operating modes.