788 research outputs found
Design, modeling and analysis of object localization through acoustical signals for cognitive electronic travel aid for blind people
The aim of this thesis is the study and analysis of object localization in real environments by means of sound, together with the subsequent integration and testing of a real device based on that technique and intended for visually impaired people.
To understand and analyze object localization, an in-depth state of the art was compiled covering the navigation systems developed over recent decades for people with different degrees of visual impairment. In that review, existing navigation devices were analyzed and organized, classifying them according to the components they use to acquire data about the environment. To date, three classes of navigation devices are known. 'Obstacle detectors' rely on ultrasonic devices and sensors installed on electronic travel aids to detect objects appearing in the system's working area. 'Environment sensors' aim to detect both the object and the user: such devices are installed at bus, metro, and train stations, pedestrian crossings, etc., so that when the user's sensor enters the range of the sensors installed at a station, those sensors inform the user of the station's presence. The user's sensor also detects vehicles fitted with the corresponding laser- or ultrasound-based device, giving the user information such as the bus number and route. The third class of electronic navigation systems are 'navigation devices'. These are based on GPS, indicating to the user both their location and the route to follow to reach their destination.
Dunai, L. (2010). Design, modeling and analysis of object localization through acoustical signals for cognitive electronic travel aid for blind people [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/8441
Rehabilitation Engineering
Population ageing has major consequences and implications in all areas of our daily life, as well as in other important areas such as economic growth, savings, investment and consumption, labour markets, pensions, property, and care from one generation to another. Health and related care, family composition and lifestyle, housing, and migration are also affected. Given the rapid ageing of the population and the further increase expected in the coming years, an important problem that must be faced is the corresponding increase in chronic illness, disability, and loss of functional independence endemic to the elderly (WHO 2008). For this reason, novel methods of rehabilitation and care management are urgently needed. This book covers many rehabilitation support systems and robots developed for the upper limbs, the lower limbs, and the visually impaired. Lower-limb research is also discussed, such as a motorized footrest for electric-powered wheelchairs and a standing assistance device.
Multi-Sensory Interaction for Blind and Visually Impaired People
This book conveys the visual elements of artwork to the visually impaired through various sensory elements, opening a new perspective for appreciating visual artwork. In addition, a technique for expressing a color code by integrating patterns, temperatures, scents, music, and vibrations is explored, and future research topics are presented. A holistic experience using multi-sensory interaction, acquired by people with visual impairment, conveys the meaning and contents of the work through rich multi-sensory appreciation. A method that allows people with visual impairments to engage with artwork using a variety of senses, including touch, temperature, tactile pattern, and sound, helps them appreciate artwork at a deeper level than can be achieved with hearing or touch alone. The development of such art-appreciation aids for the visually impaired will ultimately improve their cultural enjoyment and strengthen their access to culture and the arts. The development of these new-concept aids ultimately expands opportunities for the non-visually impaired as well as the visually impaired to enjoy works of art, and breaks down the boundaries between the disabled and the non-disabled in the field of culture and the arts through continuous efforts to enhance accessibility. In addition, the developed multi-sensory expression and delivery tool can be used as an educational tool to increase product and artwork accessibility and usability through multi-modal interaction. Training the multi-sensory experiences introduced in this book may lead to more vivid visual imagery, or seeing with the mind's eye.
Supporting Multi-User Interaction in Co-Located and Remote Augmented Reality by Improving Reference Performance and Decreasing Physical Interference
One of the most fundamental components of our daily lives is social interaction, ranging from simple activities, such as purchasing a donut in a bakery on the way to work, to complex ones, such as instructing a remote colleague how to repair a broken automobile. While we interact with others, various challenges may arise, such as miscommunication or physical interference. In a bakery, a clerk may misunderstand the donut at which a customer was pointing due to the uncertainty of their finger direction. In a repair task, a technician may remove the wrong bolt and accidentally hit another user while replacing broken parts due to unclear instructions and lack of attention while communicating with a remote advisor.
This dissertation explores techniques for supporting multi-user 3D interaction in augmented reality in a way that addresses these challenges. Augmented Reality (AR) refers to interactively overlaying geometrically registered virtual media on the real world. In particular, we address how an AR system can use overlaid graphics to assist users in referencing local objects accurately and remote objects efficiently, and prevent co-located users from physically interfering with each other. My thesis is that our techniques can provide more accurate referencing for co-located and efficient referencing for remote users and lessen interference among users.
First, we present and evaluate an AR referencing technique for shared environments that is designed to improve the accuracy with which one user (the indicator) can point out a real physical object to another user (the recipient). Our technique is intended for use in otherwise unmodeled environments in which objects in the environment, and the hand of the indicator, are interactively observed by a depth camera, and both users wear tracked see-through displays. This technique allows the indicator to bring a copy of a portion of the physical environment closer and indicate a selection in the copy. At the same time, the recipient gets to see the indicator's live interaction represented virtually in another copy that is brought closer to the recipient, and is also shown the mapping between their copy and the actual portion of the physical environment. A formal user study confirms that our technique performs significantly more accurately than comparison techniques in situations in which the participating users have sufficiently different views of the scene.
Second, we extend the idea of using a copy (virtual replica) of a physical object to help a remote expert assist a local user in performing a task in the local user's environment. We develop an approach that uses Virtual Reality (VR) or AR for the remote expert, and AR for the local user. It allows the expert to create and manipulate virtual replicas of physical objects in the local environment to refer to parts of those physical objects and to indicate actions on them. The expert demonstrates actions in 3D by manipulating virtual replicas, supported by constraints and annotations. We performed a user study of a 6DOF alignment task, a key operation in many physical task domains. We compared our approach with another 3D approach that also uses virtual replicas, in which the remote expert identifies corresponding pairs of points to align on a pair of objects, and with a 2D approach in which the expert uses a 2D tablet-based drawing system similar to sketching systems developed in prior work by others on remote assistance. The study shows the 3D demonstration approach to be faster than the others.
Third, we present an interference avoidance technique (Redirected Motion) intended to lessen the chance of physical interference among users with tracked hand-held displays, while minimizing their awareness that the technique is being applied. This interaction technique warps virtual space by shifting the virtual location of a user's hand-held display. We conducted a formal user study to evaluate Redirected Motion against other approaches that either modify what a user sees or hears, or restrict the interaction capabilities users have. Our study was performed using a game we developed, in which two players moved their hand-held displays rapidly in the space around a shared gameboard. Our analysis showed that Redirected Motion effectively and imperceptibly kept players further apart physically than the other techniques.
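The core idea of Redirected Motion, warping virtual space by shifting the virtual location of a hand-held display away from the other user, can be sketched roughly as follows. The falloff function, thresholds, and constants below are illustrative assumptions, not values from the study:

```python
import math

def redirected_offset(own_pos, other_pos, min_dist=0.5, max_shift=0.15):
    """Shift to apply to the virtual location of one user's hand-held
    display, pushing it away from the other user's display. Below
    `min_dist` metres the shift grows smoothly up to `max_shift`, so
    the warp stays small enough to go unnoticed. All names and
    constants here are hypothetical, chosen only for illustration.
    """
    delta = tuple(a - b for a, b in zip(own_pos, other_pos))
    dist = math.sqrt(sum(d * d for d in delta))
    if dist >= min_dist or dist == 0.0:
        return (0.0, 0.0, 0.0)  # far enough apart: no warping applied
    strength = max_shift * (1.0 - dist / min_dist)  # stronger when closer
    return tuple(d / dist * strength for d in delta)

# Virtual (warped) display position used for rendering:
own, other = (0.1, 1.2, 0.3), (0.4, 1.2, 0.3)
virtual_pos = tuple(p + o for p, o in zip(own, redirected_offset(own, other)))
```

Because the offset is applied gradually and only while the displays converge, each player sees a slightly displaced virtual scene and steers away from the other without perceiving the manipulation.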
These interaction techniques were implemented using an extensible programming framework we developed for supporting a broad range of multi-user immersive AR applications. This framework, Goblin XNA, integrates a 3D scene graph with support for 6DOF tracking, rigid body physics simulation, networking, shaders, particle systems, and 2D user interface primitives.
In summary, we showed that our referencing approaches can enhance multi-user AR by improving accuracy for co-located users and increasing efficiency for remote users. In addition, we demonstrated that our interference-avoidance approach can lessen the chance of unwanted physical interference between co-located users, without their being aware of its use
A gaze-contingent framework for perceptually-enabled applications in healthcare
Patient safety and quality of care remain the focus of the smart operating room of the future. Some of the most influential factors with a detrimental effect are related to suboptimal communication among the staff, poor flow of information, staff workload and fatigue, and ergonomics and sterility in the operating room. While technological developments constantly transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arises for the design of systems and approaches that can enhance patient safety and improve workflow and efficiency.
The aim of this research is to develop a real-time gaze-contingent framework towards a "smart" operating suite that will enhance operators' ergonomics by allowing perceptually-enabled, touchless, and natural interaction with the environment. The main feature of the proposed framework is the ability to acquire and utilise the plethora of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope, and a gaze-guided assistive robotic system are proposed. Firstly, the gaze-guided robotic scrub nurse is presented; surgical teams performed a simulated surgical task with the assistance of a robot scrub nurse, which complements the human scrub nurse in the delivery of surgical instruments, following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced; experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze. Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living. The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions.
“I Want That”: Human-in-the-Loop Control of a Wheelchair-Mounted Robotic Arm
Wheelchair-mounted robotic arms have been commercially available for a decade. In order to operate these robotic arms, a user must have a high level of cognitive function. Our research focuses on replacing a manufacturer-provided, menu-based interface with a vision-based system while adding autonomy to reduce the cognitive load. Instead of manual task decomposition and execution, the user explicitly designates the end goal, and the system autonomously retrieves the object. In this paper, we present the complete system which can autonomously retrieve a desired object from a shelf. We also present the results of a 15-week study in which 12 participants from our target population used our system, totaling 198 trials
A Sound Approach Toward a Mobility Aid for Blind and Low-Vision Individuals
Reduced independent mobility of blind and low-vision individuals (BLVIs) causes considerable societal cost, burden on relatives, and reduced quality of life for the individuals, including increased anxiety, depression symptoms, need of assistance, risk of falls, and mortality. Despite the numerous electronic travel aids proposed since at least the 1940s, along with ever-advancing technology, the mobility issues persist. A substantial reason for this is likely the several severe shortcomings of the field, with regard to both aid design and evaluation.

In this work, these shortcomings are addressed with a generic design model called Desire of Use (DoU), which describes the desire of a given user to use an aid for a given activity. It is then applied to mobility of BLVIs (DoU-MoB), to systematically illuminate and structure possibly all related aspects that such an aid needs to aptly deal with in order to become an adequate aid for the objective. These aspects can then guide both user-centered design and the choice of test methods and measures.

One such measure is demonstrated in the Desire of Use Questionnaire for Mobility of Blind and Low-Vision Individuals (DoUQ-MoB), an aid-agnostic and comprehensive patient-reported outcome measure. The question construction originates from the DoU-MoB to ensure an encompassing focus on the mobility of BLVIs, something that has been missing in the field. Since it is aid-agnostic it facilitates aid comparison, which it also actively promotes. To support the reliability of the DoUQ-MoB, it follows best known practices of questionnaire design and has been validated once with eight orientation and mobility professionals and six BLVIs. Based on this, the questionnaire has also been revised once.

To allow for relevant and reproducible methodology, another tool presented herein is a portable virtual reality (VR) system called the Parrot-VR.
It uses a hybrid control scheme: absolute rotation, by tracking the user's head in reality, affords intuitive turning; relative movement, where simple button presses on a controller move the virtual avatar forward and backward, allows large-scale traversal without walking physically. VR provides excellent reproducibility, making various aggregate movement analyses feasible, and it is also inherently safe. Meanwhile, the portability of the system facilitates testing near the participants, substantially increasing the number of potential blind and low-vision recruits for user tests.

The thesis also gives a short account of the state of long-term testing in the field; it being short is mainly due to there not being much to report. It then provides an initial investigation into possible outcome measures for such tests, taking instruments used by Swedish orientation and mobility professionals as a starting point. Two of these were piloted in an initial single-session trial with 19 BLVIs, and could plausibly be used for long-term tests after further evaluation.

Finally, a discussion is presented regarding the Audomni project: the development of a primary mobility aid for BLVIs. Audomni is a visuo-auditory sensory substitution device, which aims to take visual information and translate it to sound. A wide field-of-view 3D-depth camera records the environment, which is transformed to audio through the sonification algorithms of Audomni and presented in a pair of open-ear headphones that do not block out environmental sounds. The design of Audomni leverages the DoU-MoB to ensure user-centric development and evaluation, with the aim of reaching an aid with such form and function that it grants users better mobility while they still want to use it.

Audomni has been evaluated in user tests twice: once in pilot tests with two BLVIs, and once in VR with a heterogeneous set of 19 BLVIs, utilizing the Parrot-VR and the DoUQ-MoB.
76% of responders (13/17) answered that it was very or extremely likely that they would want to use Audomni alongside their current aid. This may be the first result in the field in which a majority of blind and low-vision participants report that they actually want to use a new electronic travel aid. It shows promise that eventual long-term tests will demonstrate increased mobility of blind and low-vision users, the overarching project aim. Such results would ultimately mean that Audomni can become an aid that alleviates societal cost, reduces burden on relatives, and improves users' quality of life and independence.
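The visual-to-audio pipeline described above (depth camera, sonification, open-ear headphones) can be illustrated with a minimal mapping. Audomni's actual sonification algorithms are not specified here, so the function below, its parameter names, and its constants are purely hypothetical assumptions:

```python
def sonify_column(x, depth_m, width=160, max_depth=5.0,
                  f_low=200.0, f_high=2000.0):
    """Map one depth-image column to a tone: horizontal position becomes
    stereo pan, and distance controls pitch and loudness (nearer objects
    sound higher and louder). This is NOT Audomni's published algorithm;
    every mapping and constant here is an assumption for illustration.
    """
    pan = x / (width - 1)                  # 0.0 = left ear, 1.0 = right ear
    closeness = max(0.0, 1.0 - depth_m / max_depth)
    freq = f_low * (f_high / f_low) ** closeness   # exponential pitch scale
    gain = closeness                               # silent at max_depth
    return pan, freq, gain

# A near obstacle on the left vs. a distant one on the right:
left_near = sonify_column(0, 0.5)
right_far = sonify_column(159, 4.5)
```

Summing such tones across columns would yield a stereo soundscape in which direction is carried by panning and proximity by pitch and loudness, one plausible way a visuo-auditory substitution device could encode a scene.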
Investigando Natural User Interfaces (NUIs) : tecnologias e interação em contexto de acessibilidade
Advisor: Maria Cecília Calani Baranauskas. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação.

Natural User Interfaces (NUIs) represent a new interaction paradigm, with the promise of being more intuitive and easier to use than its predecessor, which uses mouse and keyboard. In a context where technology is becoming ever more invisible and pervasive, not only the number but also the diversity of people participating in this context is growing. It must therefore be studied how this new interaction paradigm can, in fact, be accessible to all the people who may use it in their daily routine. Furthermore, it is also necessary to characterize the paradigm itself, to understand what makes it, in fact, natural. In this thesis we present the path we took in search of these two answers: how to characterize NUIs in the current technological context, and how to make NUIs accessible to all. To do so, we first present a systematic literature review covering the state of the art. We then show a set of heuristics for the design and evaluation of NUIs, which were applied in practical case studies. Afterwards, we structure the ideas of this research within the artifacts of Organizational Semiotics, obtaining insights into how to design NUIs with Accessibility, whether through Universal Design or by proposing Assistive Technologies. We then present three case studies with NUI systems that we designed. From these case studies, we expanded our theoretical framework and were finally able to identify three elements that sum up our characterization of NUI: differences, affordances, and enaction.

Doctorate in Computer Science. Grant 160911/2015-0, CAPES/CNPq.