
    An Empirical Research on Pilgrims Wayfinding Satisfaction Study: A Consideration for Improving Wayfinding Experience in Al Masjid Al Haram

    Millions of Muslims visit Makkah Al-Mukarramah every year for Hajj and Umrah. Visiting Al Masjid Al Haram for various activities is a mandatory part of both Hajj and Umrah rituals. Huge crowds and a lack of prominent wayfinding signs force Hajis to spend more time inside the Haram finding their way, often in a tense and panicky state of mind. This paper assesses the gravity of the wayfinding challenges faced by Hajis by applying the well-known Customer Satisfaction Model and proposes possible solutions to minimize the adverse effects. Convenience sampling was used to collect data from a proposed sample of 2,000 respondents of various nationalities and regions; a total of 618 responses were received. Structural equation modeling (SEM) was used for path analysis with the AMOS 21 analytical tool, and the results revealed a reasonable fit between the data collected and the model used: χ² (485.95), χ²/df (3.77), RMSEA (0.07), CFI (0.92), with all Cronbach's alpha values greater than 0.78. The results substantiated that Hajis face wayfinding problems inside the Haram, which make navigating the Haram area challenging. When the respondents were presented with alternative solutions to improve their wayfinding inside the Haram, the results showed a statistically significant improvement in satisfaction level.
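
    The fit statistics above come from an AMOS path analysis, and the scale reliabilities are Cronbach's alpha values. As a rough, self-contained illustration of how that reliability coefficient is computed (not the authors' AMOS workflow), here is a minimal NumPy sketch using hypothetical Likert-scale responses:

```python
import numpy as np

def cronbachs_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses for one satisfaction scale
# (618 respondents, 6 items); real survey data would be used in practice.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(618, 6)).astype(float)
print(f"alpha = {cronbachs_alpha(responses):.2f}")
```

    With real, internally consistent scale items this value would be expected to exceed the 0.78 threshold the paper reports; the random data here only demonstrate the calculation.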

    A HoloLens Application to Aid People who are Visually Impaired in Navigation Tasks

    Day-to-day activities such as navigation and reading can be particularly challenging for people with visual impairments. Reading text on signs may be especially difficult for people who are visually impaired because signs have variable color, contrast, and size. Indoors, signage may include office, classroom, restroom, and fire evacuation signs. Outdoors, it may include street signs, bus numbers, and store signs. Depending on the level of visual impairment, just identifying where signs exist can be a challenge. Using Microsoft's HoloLens, an augmented reality device, I designed and implemented the TextSpotting application, which helps those with low vision identify and read indoor signs so that they can navigate text-heavy environments. The application can provide both visual and auditory information. In addition to developing the application, I conducted a user study to test its effectiveness. Participants were asked to find a room in an unfamiliar hallway. Those who used the TextSpotting application completed the task less quickly yet reported higher levels of ease, comfort, and confidence, indicating the application's limitations and potential in providing an effective means to navigate unknown environments via signage.
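
    The abstract describes an identify-highlight-speak pipeline running on the HoloLens. The sketch below is not the author's HoloLens implementation; it imitates the same pipeline on a desktop using pytesseract for OCR and pyttsx3 for speech, with a hypothetical image file as input:

```python
import pytesseract              # OCR wrapper around Tesseract
import pyttsx3                  # offline text-to-speech engine
from PIL import Image

def spot_and_speak(image_path: str) -> list[dict]:
    """Locate text in a scene image, return its bounding boxes, and read it aloud."""
    image = Image.open(image_path)
    data = pytesseract.image_to_data(image, output_type=pytesseract.Output.DICT)

    detections = []
    for text, x, y, w, h, conf in zip(data["text"], data["left"], data["top"],
                                      data["width"], data["height"], data["conf"]):
        if text.strip() and float(conf) > 60:   # keep confident, non-empty words
            detections.append({"text": text, "box": (x, y, w, h)})

    if detections:
        engine = pyttsx3.init()
        engine.say(" ".join(d["text"] for d in detections))
        engine.runAndWait()
    return detections

# Example: read any signage visible in a captured frame (hypothetical file name)
print(spot_and_speak("hallway_sign.jpg"))
```

    The bounding boxes returned here stand in for the visual highlighting the HoloLens application overlays on real signs; the spoken output corresponds to its text-to-speech mode.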

    Improving the acquisition of spatial knowledge when navigating with an augmented reality navigation system

    Navigation is a process humans use whenever they move, spanning complex tasks like finding one's way in a new city and simple ones like getting a cup of coffee. Daniel Montello (2005, p. 2) defines navigation as “the coordinated and goal-directed movement through the environment by organisms or intelligent machines”. When navigating an unknown environment, humans often rely on assisted wayfinding with some sort of navigation aid. In recent years, the preferred navigation aid has shifted from printed maps to electronic and thus dynamic navigation systems on our smartphones. More recently, mixed reality and virtual reality approaches such as augmented reality (AR) have become an interesting alternative to classical smartphone navigation, even though the first attempts at AR were made as early as the middle of the last century. The major advantages of AR navigation systems are that localisation and, above all, tracking are handled by the system and that navigation instructions are laid directly into the environment. The main drawback, on the other hand, is that the more tasks the system performs, the less spatial learning the human achieves. The goal of this thesis is to examine ways to improve spatial learning during assisted wayfinding. An experiment in which participants were guided through a test environment by an AR system was set up to test these ways. After completing the route, the participants filled out a questionnaire about landmarks and intersections they had encountered on the route. The concrete goals of the thesis are to find out (1) whether giving more spatial information improves spatial learning, (2) whether the placement of navigation instructions has an influence (positive or negative) on spatial learning, (3) whether the type of landmark has an influence on how well it is recalled, and (4) how well landmark and route knowledge is built after completing the route once. The results of the experiment suggest that giving background information about certain landmarks does not lead to a significantly different spatial-learning performance (p = .691). The results also show no difference between landmarks that are highlighted by a navigation instruction and those that are not (p = .330). The analysis of landmark and route knowledge shows that the participants built less landmark knowledge than route knowledge after the run, recalling approximately 50% of the landmarks correctly but 67% of the intersections. Interesting, and in this case significant, is the difference between the types of landmarks (p = .018): 3D objects are recalled much better than other landmarks. Also significant (p = 6.14e-3), but unfortunately not very robust, is the influence of age on the acquisition of route knowledge; as the age distribution is very unbalanced, this result has to be interpreted with caution. Following the findings of this thesis, it is suggested to conduct a series of experiments with an eye tracker to learn more about the visual focus of people using AR as a wayfinding aid.
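
    The thesis reports p-values for between-condition comparisons of spatial-learning performance; the specific tests are not named in this abstract. A generic sketch of such a comparison with scipy.stats, using hypothetical per-participant recall scores, might look as follows:

```python
import numpy as np
from scipy import stats

# Hypothetical landmark-recall scores (proportion correct) for two instruction conditions
with_background = np.array([0.50, 0.58, 0.42, 0.55, 0.47, 0.52, 0.60, 0.45])
without_background = np.array([0.48, 0.53, 0.44, 0.57, 0.49, 0.51, 0.55, 0.46])

# Welch's t-test does not assume equal variances between the two groups
t_stat, p_value = stats.ttest_ind(with_background, without_background, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A non-parametric alternative if the scores are not approximately normal
u_stat, p_mw = stats.mannwhitneyu(with_background, without_background)
print(f"U = {u_stat:.1f}, p = {p_mw:.3f}")
```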

    Intellectual Disability, Digital Technologies, And Independent Transportation – A Scoping Review

    Transportation is an essential aspect of everyday life. For people with intellectual disabilities, transportation is one of the largest barriers to community participation and a cause of inequality. However, digital technologies can reduce barriers to transportation use for people with intellectual disabilities and increase community mobility. The aim of this scoping review was to identify and map existing research on digital technology support for independent transport for people with intellectual disabilities and to identify knowledge gaps relevant to further research. The authors conducted a scoping review of articles presenting digital technologies designed to assist in outdoor navigation for people with intellectual disabilities. The search yielded 3,195 items, of which 45 were reviewed and 13 included in this study. The results show that, while a variety of design elements was utilized, digital technologies can effectively support individuals with intellectual disability in transport. Further research should focus on multiple contexts and types of transportation, different support needs during independent travel, real-world settings, participatory approaches, and the role of user training in enhancing the adoption of digital technologies.

    Taux: a system for evaluating sound feedback in navigational tasks

    This thesis presents the design and development of an evaluation system for generating audio displays that provide feedback to persons performing navigation tasks. It first develops the need for such a system by describing existing wayfinding solutions, investigating new electronic location-based methods that have the potential to change these solutions, and examining research on relevant audio information representation techniques. An evaluation system that supports the manipulation of two basic classes of audio display is then described. Based on prior work on wayfinding with audio display, research questions are developed that investigate the viability of different audio displays. These are used to generate hypotheses and develop an experiment that evaluates four variations of audio display for wayfinding. Questions are also formulated to evaluate a baseline condition that utilizes visual feedback. An experiment that tests these hypotheses on sighted users is then described. Results from the experiment suggest that spatial audio combined with spoken hints is the best of the spatial audio approaches compared. The results also suggest that muting a varying audio signal while a subject is on course did not improve performance. The system and method are then refined, and a second experiment is conducted with improved displays and an improved experimental methodology. After blindfolding the sighted subjects and increasing the difficulty of the navigation tasks by reducing the arrival radius, similar results were observed. Overall, the two experiments demonstrate the viability of the prototyping tool for testing and refining multiple audio display combinations for navigational tasks. The detailed contributions of this work and future research opportunities conclude this thesis.
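
    One of the audio displays evaluated is spatial audio that steers the listener toward a waypoint. A minimal sketch of the underlying idea, mapping the target's relative bearing onto constant-power stereo panning, is shown below; it illustrates the general technique rather than the Taux implementation:

```python
import math

def pan_gains(user_heading_deg: float, bearing_to_target_deg: float) -> tuple[float, float]:
    """Constant-power left/right gains from the target's bearing relative to the user."""
    # Relative bearing in (-180, 180]: negative = target to the left, positive = right
    rel = (bearing_to_target_deg - user_heading_deg + 180) % 360 - 180
    # Map [-90, 90] degrees onto a pan position in [0, 1]; clamp targets behind the user
    pan = (max(-90.0, min(90.0, rel)) + 90.0) / 180.0
    left = math.cos(pan * math.pi / 2)
    right = math.sin(pan * math.pi / 2)
    return left, right

# Target 30 degrees to the walker's right: the right channel is louder than the left
print(pan_gains(user_heading_deg=0.0, bearing_to_target_deg=30.0))
```

    Constant-power panning keeps the summed energy of the two channels constant, so the cue's loudness does not change as the target moves around the listener.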

    Trends and perspectives in augmented reality training


    An augmented reality sign-reading assistant for users with reduced vision

    People typically rely heavily on visual information when finding their way to unfamiliar locations. For individuals with reduced vision, there are a variety of navigational tools available to assist with this task if needed. However, for wayfinding in unfamiliar indoor environments the applicability of existing tools is limited. One potential approach to assist with this task is to enhance visual information about the location and content of existing signage in the environment. With this aim, we developed a prototype software application, which runs on a consumer head-mounted augmented reality (AR) device, to assist visually impaired users with sign-reading. The sign-reading assistant identifies real-world text (e.g., signs and room numbers) on command, highlights the text location, converts it to high contrast AR lettering, and optionally reads the content aloud via text-to-speech. We assessed the usability of this application in a behavioral experiment. Participants with simulated visual impairment were asked to locate a particular office within a hallway, either with or without AR assistance (referred to as the AR group and control group, respectively). Subjective assessments indicated that participants in the AR group found the application helpful for this task, and an analysis of walking paths indicated that these participants took more direct routes compared to the control group. However, participants in the AR group also walked more slowly and took more time to complete the task than the control group. The results point to several specific future goals for usability and system performance in AR-based assistive tools.
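
    The walking-path analysis compared how directly participants reached the target office. One common way to quantify this is the ratio of straight-line distance to walked path length; the sketch below uses that ratio as an assumed stand-in, since the paper's exact metric is not given in this abstract:

```python
import numpy as np

def path_directness(points: np.ndarray) -> float:
    """Straight-line distance from start to goal divided by the walked path length.

    `points` is an (n, 2) array of positions; 1.0 means a perfectly direct route.
    """
    steps = np.diff(points, axis=0)
    path_length = np.linalg.norm(steps, axis=1).sum()
    straight_line = np.linalg.norm(points[-1] - points[0])
    return float(straight_line / path_length)

# Hypothetical walking trace in metres: a slight detour before reaching the office door
trace = np.array([[0.0, 0.0], [2.0, 0.5], [4.0, -0.5], [6.0, 0.0]])
print(f"directness = {path_directness(trace):.2f}")
```

    The same trace also supports the speed comparison the study reports: dividing the walked path length by the elapsed time gives each participant's average walking speed.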