226 research outputs found

    Poster: Indoor Navigation for Visually Impaired People with Vertex Colored Graphs

    Visually impaired people face many daily encumbrances, and traditional visual enhancements do not suffice for navigating indoor environments. In this paper, we explore path-finding algorithms such as Dijkstra and A*, combined with graph coloring, to find the safest and shortest path for visually impaired people navigating indoors. Our mobile application is based on a database that stores the locations of several spots in the building and their corresponding labels. Visually impaired users select the start and destination when they want to find their way, and our mobile application shows the appropriate path, one that guarantees their safety.
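The abstract does not spell out how graph coloring is combined with the shortest-path search; a minimal sketch of one plausible reading, in which each vertex color maps to a numeric safety penalty that Dijkstra's relaxation adds on top of edge length (the COLOR_PENALTY values, color names, and example graph are illustrative assumptions, not taken from the paper):

```python
import heapq

# Hypothetical safety penalties per vertex color (illustrative, not from the paper):
# green = safe corridor, yellow = caution, red = hazardous area.
COLOR_PENALTY = {"green": 0, "yellow": 5, "red": 20}

def safest_shortest_path(graph, colors, start, goal):
    """Dijkstra over edge length plus a color-based safety penalty per vertex.

    graph:  dict mapping vertex -> list of (neighbor, edge_length) pairs
    colors: dict mapping vertex -> color string
    Returns the lowest-cost path as a list of vertices, or None if unreachable.
    """
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if u in visited:
            continue
        visited.add(u)
        for v, length in graph.get(u, []):
            # Standard relaxation, with the neighbor's color penalty folded in.
            nd = d + length + COLOR_PENALTY[colors[v]]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if goal not in dist:
        return None
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# Example: the staircase is marked red, so the equally long hallway route wins.
graph = {"lobby": [("hall", 1), ("stairs", 1)],
         "hall": [("exit", 1)], "stairs": [("exit", 1)]}
colors = {"lobby": "green", "hall": "green", "stairs": "red", "exit": "green"}
```

With this cost function, a shorter but hazard-colored route is only taken when its length advantage outweighs the accumulated penalties, which matches the paper's stated goal of a path that is both short and safe.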

    Wonder Vision: A Hybrid Way-finding System to Assist People with Visual Impairment

    We use multi-sensory information to find our way around environments. Among the senses, vision plays a crucial part in way-finding tasks such as perceiving landmarks and layouts. People with impaired vision may find it difficult to move around unfamiliar environments because they are unable to use their eyesight to capture critical information. Limited vision affects how people interact with their environment, especially during navigation, and individuals with varying degrees of vision require different levels of way-finding aids. Blind people rely heavily on white canes, whereas low-vision patients can choose from magnifiers for amplifying signs, or even GPS mobile applications to acquire knowledge before arrival. The purpose of this study is to investigate the in-situ challenges of way-finding for persons with visual impairments. Using the methodologies of Research through Design (RTD) and User-centered Design (UCD), I conducted online user research and created a series of iterative prototypes leading to a final one: Wonder Vision. It is a hybrid way-finding system that combines Augmented Reality (AR) and a Voice User Interface (VUI) to assist people with visual impairments. A descriptive evaluation suggests Wonder Vision as a possible solution for helping people with visual impairments find their way toward their goals.

    A Systematic Review of Extended Reality (XR) for Understanding and Augmenting Vision Loss

    Over the past decade, extended reality (XR) has emerged as an assistive technology not only to augment the residual vision of people losing their sight but also to study the rudimentary vision restored to blind people by a visual neuroprosthesis. To make the best use of these emerging technologies, it is valuable and timely to understand the state of this research and identify any shortcomings that are present. Here we present a systematic literature review of 227 publications from 106 different venues assessing the potential of XR technology to further visual accessibility. In contrast to other reviews, we sample studies from multiple scientific disciplines, focus on augmentation of a person's residual vision, and require studies to feature a quantitative evaluation with appropriate end users. We summarize prominent findings from different XR research areas, show how the landscape has changed over the last decade, and identify scientific gaps in the literature. Specifically, we highlight the need for real-world validation, the broadening of end-user participation, and a more nuanced understanding of the suitability and usability of different XR-based accessibility aids. By broadening end-user participation to early stages of the design process and shifting the focus from behavioral performance to qualitative assessments of usability, future research has the potential to develop XR technologies that may not only allow for studying vision loss, but also enable novel visual accessibility aids with the potential to impact the lives of millions of people living with vision loss.

    Planetree: An Architectural Toolbox for the Future of Healing Environments


    IoT Bracelets for Guiding Blind People in an Indoor Environment

    Every day, we engage in a variety of activities such as shopping, reading, and swimming. Many people in our community, however, are unable to participate in such activities due to a variety of eye problems. Directing a blind person to the optimal position (the center of a spot with enough space in all directions that the person avoids various obstacles) is a challenge. This paper proposes wireless bracelets that can guide a blind person to the optimal position. The proposed system employs ultrasonic sensors to detect obstacles in an indoor environment. It also makes use of the Firebase database and a NodeMCU WiFi module to enable real-time communication with the blind individual. Furthermore, the suggested system includes a novel fall-detection mechanism. The proposed Internet of Things (IoT) system was evaluated in an indoor environment, and experimental results showed that it could efficiently direct a blind person to the optimal position. In comparison to the current state of the art, the proposed system is simpler, less expensive, and more efficient at determining the optimal position to which a blind person must navigate.
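The abstract defines the optimal position as the center of the surrounding free space. The paper's actual guidance algorithm is not given; a minimal sketch of the centering idea, assuming four hypothetical ultrasonic clearance readings (the function name, units, and axis convention are illustrative):

```python
def centering_offsets(front, back, left, right):
    """Compute how far to move so clearance is equal on opposite sides.

    front, back, left, right: ultrasonic distance readings (cm) to the
    nearest obstacle in each direction.
    Returns (forward, rightward) offsets in cm; positive forward means
    step forward, positive rightward means step to the right.
    """
    # Moving by half the difference equalizes the two opposing clearances,
    # placing the person at the midpoint of the free space on each axis.
    forward = (front - back) / 2.0
    rightward = (right - left) / 2.0
    return forward, rightward
```

For instance, readings of 300 cm ahead, 100 cm behind, 150 cm left, and 50 cm right suggest stepping 100 cm forward and 50 cm to the left; at the optimal position both offsets are zero.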

    Evaluation of Multi-Level Cognitive Maps for Supporting Between-Floor Spatial Behavior in Complex Indoor Environments

    People often become disoriented when navigating in complex, multi-level buildings. To efficiently find destinations located on different floors, navigators must refer to a globally coherent mental representation of the multi-level environment, which is termed a multi-level cognitive map. However, there is a surprising dearth of research into underlying theories of why integrating multi-level spatial knowledge into a multi-level cognitive map is so challenging and error-prone for humans. This overarching problem is the core motivation of this dissertation. We address this vexing problem in a two-pronged approach combining study of both basic and applied research questions. Of theoretical interest, we investigate questions about how multi-level built environments are learned and structured in memory. The concept of multi-level cognitive maps and a framework of multi-level cognitive map development are provided. We then conducted a set of empirical experiments to evaluate the effects of several environmental factors on users’ development of multi-level cognitive maps. The findings of these studies provide important design guidelines that can be used by architects and help to better understand the research question of why people get lost in buildings. Related to application, we investigate questions about how to design user-friendly visualization interfaces that augment users’ capability to form multi-level cognitive maps. An important finding of this dissertation is that increasing visual access with an X-ray-like visualization interface is effective for overcoming the disadvantage of limited visual access in built environments and assists the development of multi-level cognitive maps. These findings provide important human-computer interaction (HCI) guidelines for visualization techniques to be used in future indoor navigation systems. 
In sum, this dissertation adopts an interdisciplinary approach, combining theories from the fields of spatial cognition, information visualization, and HCI, to address a long-standing and ubiquitous problem faced by anyone who navigates indoors: why people get lost inside multi-level buildings. The results provide knowledge generation and explanation at both theoretical and applied levels, and contribute to the growing field of real-time indoor navigation systems.

    Enhancing interaction in mixed reality

    With continuous technological innovation, we observe mixed reality emerging from research labs into the mainstream. The arrival of capable mixed reality devices transforms how we are entertained, consume information, and interact with computing systems, with the most recent devices able to present synthesized stimuli to any of the human senses and substantially blur the boundaries between the real and virtual worlds. To build expressive and practical mixed reality experiences, designers, developers, and stakeholders need to understand and meet the challenges ahead. This research contributes a novel taxonomy for categorizing mixed reality experiences and guidelines for designing them. We present the results of seven studies examining the challenges and opportunities of mixed reality experiences, the impact of modalities and interaction techniques on the user experience, and how to enhance these experiences. We begin with a study determining user attitudes towards mixed reality in domestic and educational environments, followed by six research probes that each investigate an aspect of reality or virtuality. In the first, a levitating steerable projector enables us to investigate how the real world can be enhanced without instrumenting the user. We show that the presentation of in-situ instructions for navigational tasks leads to a significantly higher ability to observe and recall real-world landmarks. With the second probe, we enhance the perception of reality by superimposing information usually not visible to the human eye. By amplifying human vision, we enable users to perceive thermal radiation visually. Further, we examine the effect of substituting physical components with non-functional tangible proxies or entirely virtual representations. With the third research probe, we explore how to enhance virtuality to enable a user to input text on a physical keyboard while immersed in the virtual world.
Our prototype tracked the user's hands and keyboard to enable generic text input. Our analysis of text-entry performance showed the importance and effect of different hand representations. We then investigate how to touch virtuality by simulating generic haptic feedback for virtual reality, and show how tactile feedback delivered through quadcopters can significantly increase the sense of presence. Our final research probe investigates the usability and input space of smartphones within mixed reality environments, pairing the user's smartphone as an input device with a secondary physical screen. Based on our learnings from these individual research probes, we developed a novel taxonomy for categorizing mixed reality experiences and guidelines for designing them. The taxonomy is based on the human sensory system and human capabilities of articulation. We showcase its versatility and set our research probes into perspective by organizing them inside the taxonomic space. The design guidelines are divided into user-centered and technology-centered. It is our hope that these will contribute to the bright future of mixed reality systems while emphasizing the new underlying interaction paradigm.

    Design Architecture in Virtual Reality

    Virtual Reality (VR) technology has recently been introduced to architectural representation, providing architects with a medium to showcase unbuilt designs as immersive experiences. Designers can use specialized VR headsets and equipment to give a client or a member of their design team the illusion of being within the digital space presented on screen. This mode of representation is unprecedented in the architectural field, as VR can create the sensation of being encompassed in an environment at full scale, potentially eliciting a visceral response from users similar to the response physical architecture produces. While this premise makes the technology highly applicable to architectural practice, it might not be the most practical medium for communicating design intent. Since VR's conception, the primary software for VR content creation has been geared towards programmers rather than architects. The practicality of integrating virtual reality within a traditional architectural design workflow is often overlooked in the discussion surrounding the use of VR to represent design projects. This thesis investigates the practicality of VR as part of a design methodology, through the assessment of efficacy and efficiency, while studying the integration of VR into the architectural workflow. This is done by examining the creation of stereoscopic renderings, walkthrough animations, interactive iterations, and quick demonstrations as explorations of common architectural visualization techniques using VR. Experimentation with each visualization method is supplemented with documentation of the VR scene-creation process across an approximated time frame to measure efficiency, and a set of evaluation parameters to measure efficacy.
Each experiment either yielded a successful experience that exceeded the time constraints a typical fast-paced architectural firm might allow for the task (low efficiency), or produced a limited experience whose interaction and functionality did not meet industry standards (low efficacy). This impracticality in terms of time and effort demonstrates that a successfully immersive VR simulation cannot be produced simplistically; a great deal of thought must go into design intent. Although impractical at present, the documentation suggests that the experience of creating VR content may engage new ways of design thinking and affect how architects conceptualize space, encouraging further research.

    Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)

    The aim of this paper is to present the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was to train students attending Architecture and Engineering courses in geospatial data acquisition and processing, in order to start up a team of "volunteer mappers". The project aims to document environmental and built heritage subject to disaster; the purpose is to improve the capabilities of the actors involved in geospatial data collection, integration, and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, registered in the World Heritage List since 1997, which was affected by a flood on 25 October 2011. In line with other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatic methods and techniques such as terrestrial and aerial Lidar, close-range and aerial photogrammetry, and topographic and GNSS instruments, or by non-conventional systems and instruments such as UAVs and mobile mapping. The ultimate goal is to implement a WebGIS platform to share all the collected data with local authorities and the Civil Protection.