17,831 research outputs found
Overcoming barriers and increasing independence: service robots for elderly and disabled people
This paper discusses the potential for service robots to overcome barriers and increase the independence of elderly and disabled people. It gives a brief overview of existing uses of service robots by disabled and elderly people and of advances in technology that will make new uses possible, and suggests some of these new applications. The paper also considers the design and other conditions to be met for user acceptance, discusses the complementarity of assistive service robots and personal assistance, and considers the types of applications and users for which service robots are and are not suitable.
Empowering and assisting natural human mobility: The simbiosis walker
This paper presents the complete development of the Simbiosis Smart Walker. The device is equipped with a set of sensor subsystems to acquire the user-machine interaction forces and the temporal evolution of the user's feet during gait. The authors present an adaptive filtering technique used for the identification and separation of the different components found in the human-machine interaction forces. This technique allowed isolating the components related to the navigational commands and developing a fuzzy logic controller to guide the device. The Smart Walker was clinically validated at the Spinal Cord Injury Hospital of Toledo, Spain, showing great acceptability among spinal cord injury patients and clinical staff.
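The adaptive-filtering step described above lends itself to a brief illustration. The sketch below is a minimal LMS adaptive filter that separates a slowly varying guidance component from gait-induced oscillations in a measured handle force; the function name, the reference signal, and all parameters are assumptions made for this sketch, not the authors' implementation.

```python
import numpy as np

def separate_command_component(force, gait_reference, mu=0.01, n_taps=16):
    """Illustrative LMS adaptive filter (assumed, not the paper's code):
    estimate the gait-induced part of the measured force from a reference
    signal correlated with the step cycle, and keep the residual as the
    navigational-command component."""
    force = np.asarray(force, dtype=float)
    gait_reference = np.asarray(gait_reference, dtype=float)
    w = np.zeros(n_taps)                          # adaptive filter weights
    command = np.zeros_like(force)                # residual = guidance component
    for n in range(n_taps, len(force)):
        x = gait_reference[n - n_taps:n][::-1]    # most recent reference samples
        gait_estimate = w @ x                     # predicted gait-induced force
        command[n] = force[n] - gait_estimate     # remove it from the measurement
        w += 2.0 * mu * command[n] * x            # LMS weight update
    return command
```

The residual command component could then be fed to a fuzzy logic controller that maps forward and lateral force intent onto the walker's linear and angular velocity, in the spirit of the approach the abstract describes.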
Viia-hand: a Reach-and-grasp Restoration System Integrating Voice interaction, Computer vision and Auditory feedback for Blind Amputees
Visual feedback plays a crucial role when amputees complete grasps under prosthesis control. For blind and visually impaired (BVI) amputees, however, the loss of both visual and grasping abilities turns the "easy" reach-and-grasp task into a real challenge. In this paper, we propose a novel multi-sensory prosthesis system that helps BVI amputees with sensing, navigation and grasp operations. It combines modules for voice interaction, environmental perception, grasp guidance, collaborative control, and auditory/tactile feedback. In particular, the voice interaction module receives user instructions and invokes the other functional modules accordingly. The environmental perception and grasp guidance module obtains environmental information through computer vision and feeds it back to the user through auditory feedback modules (voice prompts and spatial sound sources) and tactile feedback modules (vibration stimulation). The prosthesis collaborative control module obtains the context of the grasp guidance process and, in conjunction with the user's control intention, completes the collaborative control of grasp gestures and wrist angles of the prosthesis in order to achieve a stable grasp of various objects. This paper details a prototype design (named viia-hand) and presents its preliminary experimental verification on healthy subjects completing specific reach-and-grasp tasks. Our results show that, with the help of our new design, the subjects were able to achieve a precise reach and a reliable grasp of the target objects in a relatively cluttered environment. Additionally, the system is very user-friendly, as users can quickly adapt to it with minimal training.
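As a rough illustration of how the voice interaction module could invoke the other modules described above, the sketch below wires hypothetical perception, guidance, control and feedback components behind a single command handler. All class, method and intent names are assumptions made for this sketch; the paper does not publish an API.

```python
# Hypothetical sketch of the module orchestration described in the abstract.
# Every dependency (perception, guidance, control, feedback) is an assumed
# interface, not part of the published system.
class ViiaHandSketch:
    def __init__(self, perception, guidance, control, feedback):
        self.perception = perception      # computer-vision environment module
        self.guidance = guidance          # grasp-guidance module
        self.control = control            # prosthesis collaborative control
        self.feedback = feedback          # auditory/tactile feedback module

    def on_voice_command(self, intent, target_name=None):
        """Dispatch a recognized voice intent to the relevant modules."""
        if intent == "find":
            objects = self.perception.detect_objects()
            self.feedback.speak(f"{len(objects)} objects detected")
        elif intent == "grasp" and target_name is not None:
            target = self.perception.locate(target_name)
            if target is None:
                self.feedback.speak(f"{target_name} not found")
                return
            # Spatialized audio and vibration cues guide the hand to the target,
            # then the controller closes the hand with a suitable grasp gesture.
            self.guidance.guide_hand_to(target, cues=self.feedback)
            self.control.close_hand(grasp_type=target.suggested_grasp)
            self.feedback.speak("grasp complete")
```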
Vision Science and Technology at NASA: Results of a Workshop
A broad review is given of vision science and technology within NASA. The subject is defined and its applications in both NASA and the nation at large are noted. A survey of current NASA efforts is given, noting strengths and weaknesses of the NASA program.
A pathway to independence : wayfinding systems which adapt to a visually impaired person's context
Despite an increasing number of technologies and systems designed to address the navigational requirements of the visually impaired community of approximately 7.4 million in Europe, current research has failed to sufficiently address the human issues associated with their design and use. As more types of sensing technologies are developed to assist visually impaired travellers with different navigational purposes (local vs. distant and indoor vs. outdoor), an effective process of synchronisation is required. This synchronisation is represented through context-aware computing, which allows contextual information not just to be sensed (as in most current wayfinding systems), but also adapted, discovered and augmented. In this paper, three user studies concerning the suitability of different types of navigational information for visually impaired and sighted people are described. For such systems to be effective, human cognitive maps, models and intentions need to be the focus of further research, in order to provide information that is tailored to a user's task, situation or environment. Methodologies aimed at establishing these issues need to be demonstrated through a multidisciplinary framework.
Vision-Based Tactile Paving Detection Method in Navigation Systems for Visually Impaired Persons
In general, a visually impaired person relies on a guide cane to walk outside, while depending on tactile pavement as a warning and directional tool to avoid obstructions or hazardous situations. However, considerable training is still needed to recognize the tactile patterns, and this is quite difficult for people who have recently become visually impaired. This chapter describes the development and evaluation of a vision-based tactile paving detection method for visually impaired persons. Experiments are conducted on how the method detects the tactile pavement and identifies the shape of the tactile pattern. The vision-based method is implemented in MATLAB, with an Arduino platform and a speaker as guidance tools. Based on the tactile detection result from MATLAB, the system produces auditory output and notifies the visually impaired user of the type of tactile pavement detected. Consequently, the tactile pavement detection system can be used by visually impaired persons for easy detection and navigation purposes.
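The chapter's pipeline is built in MATLAB with an Arduino and speaker for output. As a rough, hypothetical illustration of the same idea in Python/OpenCV, the sketch below thresholds yellowish regions, keeps sufficiently large contours, and guesses whether each region is directional or warning paving from its elongation. The HSV thresholds, size limit and classification rule are assumptions that would need tuning for real scenes.

```python
import cv2
import numpy as np

def detect_tactile_paving(frame_bgr, min_area=2000):
    """Illustrative sketch, not the chapter's implementation: find yellowish
    tactile-paving candidates in a camera frame and label each as
    'directional' (elongated bars) or 'warning' (blob-like)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))   # yellowish regions
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    detections = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue                                  # ignore small specks
        (_, _), (w, h), _ = cv2.minAreaRect(c)
        elongation = max(w, h) / max(min(w, h), 1e-6)
        kind = "directional" if elongation > 2.0 else "warning"
        detections.append((kind, c))
    return detections
```

In a setup like the one described, the detected type could then be sent over a serial link to the Arduino to trigger the corresponding spoken notification.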
VANET Applications: Hot Use Cases
Current challenges for car manufacturers are to make roads safe, to achieve free-flowing traffic with few congestions, and to reduce pollution through effective fuel use. To reach these goals, many improvements are made in-car, but more and more approaches rely on connected cars with communication capabilities between cars, with an infrastructure, or with IoT devices. Monitoring and coordinating vehicles then makes it possible to compute intelligent modes of transportation. Connected cars have introduced a new way of thinking about cars: not only as a means for a driver to go from A to B, but as smart cars, a user extension like the smartphone today. In this report, we introduce concepts and specific vocabulary in order to classify current innovations and ideas on the emerging topic of smart cars. We present a graphical categorization showing this evolution as a function of societal evolution. Different perspectives are adopted: a vehicle-centric view, a vehicle-network view, and a user-centric view, described by simple and complex use cases and illustrated by a list of emerging and current projects from the academic and industrial worlds. We identified an empty space in innovation between the user and the car: paradoxically, even though they are in constant interaction, they are separated across different application uses. The future challenge is to interweave the social concerns of the user with intelligent and efficient driving.
High speed research system study. Advanced flight deck configuration effects
In mid-1991 NASA contracted with industry to study the high-speed civil transport (HSCT) flight deck challenges and assess the benefits, prior to initiating its High Speed Research Program (HSRP) Phase 2 efforts, then scheduled for FY-93. The results of this nine-month effort are presented, and a number of the most significant findings for the specified advanced concepts are highlighted: (1) a no nose-droop configuration; (2) a far forward cockpit location; and (3) advanced crew monitoring and control of complex systems. The results indicate that the no nose-droop configuration is critically dependent upon the design and development of a safe, reliable, and certifiable Synthetic Vision System (SVS). The droop-nose configuration would cause significant weight, performance, and cost penalties. The far forward cockpit location with conventional side-by-side seating provides little economic advantage; however, a configuration with a tandem seating arrangement provides a substantial increase in either additional payload (i.e., passengers) or potential downsizing of the vehicle, with resulting increases in performance efficiencies and associated reductions in emissions. Without a droop nose, forward external visibility is negated and takeoff/landing guidance and control must rely on the use of the SVS. The technologies enabling such capabilities, which de facto provide for Category 3 all-weather operations on every flight independent of weather, represent a dramatic benefits multiplier in a 2005 global ATM network, both in terms of enhanced economic viability and environmental acceptability.
An Orientation & Mobility Aid for People with Visual Impairments
Orientation & Mobility (O&M) comprises a set of techniques for people with visual impairments that help them find their way in everyday life. Even so, they need extensive and very time-consuming one-on-one instruction with O&M teachers in order to integrate these techniques into their daily routines. While some of these techniques use assistive technologies, such as the long white cane, points-of-interest databases, or a compass-based orientation system, an inconspicuous communication gap exists between available aids and navigation systems.
In recent years, mobile computing systems, especially smartphones, have become ubiquitous. This gives modern computer vision techniques the opportunity to support the human sense of sight with everyday problems caused by non-accessible design. Nevertheless, particular care must be taken not to clash with individuals' specific personal skills and trained behaviors, or, in the worst case, even to contradict O&M techniques.
In this dissertation, we identify a spatial and a systemic gap between orientation aids and navigation systems for people with visual impairments. The spatial gap exists mainly because assistive orientation aids, such as the long white cane, can only help to perceive the environment within a limited range, while navigation information is provided only at a very coarse level. In addition, the gap also arises systemically between these two components: the long white cane does not know the route, while a navigation system does not consider nearby obstacles or O&M techniques. We therefore propose several approaches to closing this gap in order to improve the connection and communication between orientation aids and navigation information, addressing the problem from both directions. To provide useful, relevant information, we first identify the most important requirements for assistive systems and derive several key concepts that we follow in our algorithms and prototypes.
Existing assistive orientation systems are based mainly on global navigation satellite systems. We try to improve on these by creating a guideline-based routing algorithm that can be adapted to and takes into account individual needs. The generated routes are imperceptibly longer but much safer, according to objective criteria developed in collaboration with O&M teachers. Furthermore, we improve the availability of the relevant georeferenced databases needed for such needs-based routing. To this end, we create a machine learning approach that detects zebra crossings in aerial images, which also works across national borders, thereby improving on the state of the art.
To maximize the benefit of computer-vision-based mobility assistance, we create approaches modeled on O&M techniques to increase spatial awareness of the immediate surroundings. First, we consider the available free space and also provide information about possible obstacles. Furthermore, we create a novel approach to detect and precisely localize the available guidelines, and generate virtual guidelines that bridge interruptions and provide information about the next guideline early on. Finally, we improve the accessibility of pedestrian crossings, in particular zebra crossings and pedestrian traffic lights, with a deep learning approach.
To analyze whether our approaches and algorithms provide real added value for people with visual impairments, we conduct a small Wizard-of-Oz experiment on our needs-based routing, with a very encouraging result. Furthermore, we carry out a more extensive study with several components, focusing on pedestrian crossings. Although our statistical evaluations show only a slight improvement, affected by technical problems with the first prototype and too little time for participants to become familiar with the system, we receive very promising comments from almost all study participants. This shows that we have already taken an important first step towards closing the identified gap and have thereby been able to improve orientation & mobility for people with visual impairments.
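The needs-based, guideline-aware routing described in this abstract can be illustrated with a small sketch: a standard shortest-path search whose edge costs grow when an edge lacks a tactile or structural guideline or crosses a road without a marked crossing. The attribute names, penalty factors and the use of networkx are assumptions for this sketch, not the dissertation's algorithm.

```python
# Hypothetical sketch of guideline-aware routing: inflate the cost of edges
# that lack guidance or cross roads without a marked crossing, then run a
# standard shortest-path search. All attribute names and penalties are assumed.
import networkx as nx

PENALTY = {"no_guideline": 3.0, "unmarked_crossing": 10.0}

def accessible_cost(u, v, data):
    cost = data["length"]                       # base cost: edge length in metres
    if not data.get("has_guideline", False):
        cost *= PENALTY["no_guideline"]         # prefer kerbs, walls, tactile paving
    if data.get("is_crossing", False) and not data.get("is_marked", False):
        cost *= PENALTY["unmarked_crossing"]    # avoid unmarked road crossings
    return cost

def safe_route(graph: nx.Graph, start, goal):
    """Return the node sequence of the cheapest route under accessible_cost."""
    return nx.shortest_path(graph, start, goal, weight=accessible_cost)
```

Routes found this way would be slightly longer than the geometric shortest path but would favour edges with usable guidance, mirroring the trade-off reported in the abstract.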
- …