1,253 research outputs found
An Empirical Research on Pilgrims Wayfinding Satisfaction Study: A Consideration for Improving Wayfinding Experience in Al Masjid Al Haram
Millions of Muslims visit Makkah Al-Mukarramah every year for Hajj and Umrah. Visiting Al Masjid Al Haram for various activities is a mandatory part of both the Hajj and Umrah rituals. Huge crowds and a lack of prominent wayfinding signs force Hajis to spend more time inside the Haram trying to find their way, often in a tense and anxious state of mind. This paper aims to assess the severity of the wayfinding challenges faced by Hajis by applying the well-known Customer Satisfaction Model and to propose possible solutions to minimize potential adverse effects. Convenience sampling was used to collect data toward a proposed sample size of 2,000 respondents from various nationalities and regions; a total of 618 responses were received. The structural equation modeling (SEM) method was used for path analysis in the AMOS 21 analytical tool, and the results revealed a reasonable fit between the collected data and the model: χ² = 485.95, χ²/df = 3.77, RMSEA = 0.07, CFI = 0.92, and all Cronbach's alpha values greater than 0.78. The results substantiated that Hajis face wayfinding problems inside the Haram, which makes the area challenging to navigate. When the respondents were presented with alternative solutions to improve their wayfinding inside the Haram, the results showed a statistically significant improvement in the satisfaction level.
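The reported fit indices are internally consistent, which can be checked with a short sketch. The degrees of freedom are not stated in the abstract; the value below is inferred from χ² and χ²/df, and the RMSEA formula is the standard one, so this is a plausibility check rather than a reproduction of the authors' analysis.

```python
import math

# Statistics reported in the abstract; df is an inferred assumption,
# recovered from chi2 / (chi2/df), not a reported value.
chi2 = 485.95
chi2_per_df = 3.77
n = 618  # number of responses received

df = round(chi2 / chi2_per_df)  # inferred degrees of freedom, ~129

# Standard RMSEA formula: sqrt(max(chi2 - df, 0) / (df * (n - 1)))
rmsea = math.sqrt(max(chi2 - df, 0) / (df * (n - 1)))

print(df, round(rmsea, 3))  # 129 0.067 -- consistent with the reported RMSEA of 0.07
```

With these values the computed RMSEA of about 0.067 rounds to the reported 0.07, so the abstract's figures hang together.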
A HoloLens Application to Aid People who are Visually Impaired in Navigation Tasks
Day-to-day activities such as navigation and reading can be particularly challenging for people with visual impairments. Reading text on signs may be especially difficult for people who are visually impaired because signs have variable color, contrast, and size. Indoors, signage may include office, classroom, restroom, and fire evacuation signs. Outdoors, they may include street signs, bus numbers, and store signs. Depending on the level of visual impairment, just identifying where signs exist can be a challenge. Using Microsoft's HoloLens, an augmented reality device, I designed and implemented the TextSpotting application that helps those with low vision identify and read indoor signs so that they can navigate text-heavy environments. The application can provide both visual information and auditory information. In addition to developing the application, I conducted a user study to test its effectiveness. Participants were asked to find a room in an unfamiliar hallway. Those that used the TextSpotting application completed the task less quickly yet reported higher levels of ease, comfort, and confidence, indicating the application's limitations and potential in providing an effective means to navigate unknown environments via signage.
Improving the acquisition of spatial knowledge when navigating with an augmented reality navigation system
Navigation is a process humans use whenever they move, from complex tasks like finding our way in a new city to simple tasks like getting a cup of coffee. Daniel Montello (2005, p. 2) defines navigation as "the coordinated and goal-directed movement through the environment by organisms or intelligent machines". When navigating in an unknown environment, humans often rely on assisted wayfinding with some sort of navigation aid. In recent years, the preferred navigation system has shifted from printed maps to electronic, and thus dynamic, navigation systems on our smartphones. Recently, mixed-reality and virtual-reality approaches such as augmented reality (AR) have become an interesting alternative to classical smartphone navigation, although the first attempts at AR were made as early as the middle of the last century. The major advantages of AR navigation systems are that localisation and, above all, tracking are handled by the system, and that the navigation instructions are overlaid directly onto the environment. The main drawback, on the other hand, is that the more tasks the system takes over, the less spatial learning the human achieves.
The goal of this thesis is to examine ways to improve spatial learning during assisted wayfinding. An experiment was set up in which participants were guided through a test environment by an AR system. After completing the route, the participants filled out a questionnaire about the landmarks and intersections they had encountered along the way. The concrete goals of the thesis are to find out (1) whether giving more spatial information improves spatial learning, (2) whether the placement of navigation instructions has an influence (positive or negative) on spatial learning, (3) whether the type of landmark influences how well it is recalled, and (4) how well landmark and route knowledge are built after completing the route once.
The results of the experiment suggest that giving background information about certain landmarks does not lead to a significantly different performance in spatial learning (p = .691). The results also showed no difference between landmarks that were highlighted by a navigation instruction and those that were not (p = .330). The analyses of landmark and route knowledge showed that the participants built less landmark knowledge than route knowledge after the run: they recalled approximately 50 % of the landmarks correctly, but 67 % of the intersections. Interesting, and in this case significant, is the difference between the types of landmarks (p = .018): 3D objects are recalled much better than other landmarks. Also significant (p = 6.14e-3), but unfortunately not very robust, is the influence of age on the acquisition of route knowledge. As the age distribution is very unbalanced, these results have to be interpreted with caution. Following the findings of this thesis, it is suggested to conduct a series of experiments with an eye tracker to learn more about the visual focus of people using AR as a wayfinding aid.
Technological framework for ubiquitous interactions using context-aware mobile devices
This report presents the research and development of a dedicated system architecture designed to enable its users to interact with each other and to access information on Points of Interest that exist in their immediate environment. This is accomplished by managing personal preferences and contextual information in a distributed manner and in real time. The advantage of this system architecture is that it uses mobile devices, heterogeneous sensors, and a selection of user-interface paradigms to produce a sociotechnical framework that enhances perception of the environment and promotes intuitive interactions. The thrust of the work has been on software development and component integration. Iterative prototyping was adopted as the development method in order to effectively incorporate the users' feedback and establish a platform for collaboration that closely meets their requirements and aids their decision-making process. Requirement acquisition was followed by a system-modelling phase in order to produce a robust software prototype. The implementation includes component-based development and extensive use of design patterns over native programming. In conclusion, the software product has become the means to evaluate differences in the use of mixed reality technologies in a ubiquitous scenario.
The prototype can query a number of context sources, such as sensors or details of the personal profile, to acquire relevant data. The data (and metadata) are stored in open-source structures, so that they are accessible at every layer of the system architecture and at any time. By proactively processing the acquired context, the system can assist the users in their tasks (e.g. navigation) without explicit input, e.g. by simply making a gesture with the device. However, more advanced interaction with the application via the user interface is available for requests that are more complex.
Representations of real-world objects, their spatial relations, and other captured features of interest are visualised on scalable interfaces, ranging from 2D to 3D models and from photorealism to stylised clues and symbols. Two principal modes of operation have been implemented: one using geo-referenced virtual reality models of the environment, updated in real time, and the other using the overlay of descriptive annotations and graphics on video images of the surroundings, captured by a video camera. The latter is referred to as augmented reality.
The continuous feed of device position and orientation data from the GPS receiver and the digital compass into the application makes the framework fit for use in unknown environments and therefore suitable for ubiquitous operation. This is one of the novelties of the proposed framework, because it enables a whole range of social, peer-to-peer interactions to take place. Scenarios of how the system could be employed to pursue these remote interactions and collaborative efforts on mobile devices are addressed in the context of urban navigation. The conceptual design and implementation of the novel location- and orientation-based algorithm for mobile AR are presented in detail. The system is, however, multifaceted and capable of supporting peer-to-peer exchange of information in a pervasive fashion, usable in various contexts. The modalities of these interactions are explored and laid out in several scenarios, particularly in the context of user adoption. Two evaluation tasks took place: a preliminary evaluation examined certain aspects that influence user interaction while immersed in a virtual environment, whereas a second, summative evaluation compared the utility and certain usability aspects of the AR and VR interfaces.
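The core of such a location- and orientation-based AR overlay is computing the bearing from the device to a Point of Interest and mapping the angle between that bearing and the compass heading to a screen position. The sketch below illustrates this under stated assumptions; the function names, field of view, and screen width are illustrative choices, not the report's actual implementation.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the observer (lat1, lon1) to the
    target POI (lat2, lon2), in degrees clockwise from true north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

def screen_x(bearing, heading, fov_deg=60.0, width_px=640):
    """Horizontal pixel position at which to draw a POI annotation, given the
    compass heading, or None if the POI lies outside the camera's field of view.
    fov_deg and width_px are assumed device parameters."""
    rel = (bearing - heading + 180.0) % 360.0 - 180.0  # signed angle, -180..180
    if abs(rel) > fov_deg / 2:
        return None  # POI is off-screen; no annotation drawn this frame
    return width_px / 2 + rel / fov_deg * width_px
```

Each frame, the application would recompute `screen_x` for every nearby POI from the latest GPS fix and compass reading, which is what keeps the overlay anchored as the user turns.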
Intellectual Disability, Digital Technologies, and Independent Transportation: A Scoping Review
Transportation is an essential aspect of everyday life. For people with intellectual disabilities, transportation is one of the largest barriers to community participation and a cause of inequality. However, digital technologies can reduce barriers to transportation use for people with intellectual disabilities and increase community mobility. The aim of this scoping review was to identify and map existing research on digital technology support for independent transport for people with intellectual disabilities and to identify knowledge gaps relevant for further research. The authors conducted a scoping review of articles presenting digital technologies designed to assist in outdoor navigation for people with intellectual disabilities. The search yielded 3,195 items, of which 45 were reviewed and 13 included in this study. The results show that, while a variety of design elements were utilized, digital technologies can effectively support individuals with intellectual disability in transport. Further research should focus on multiple contexts and types of transportation, different support needs during independent travel, real-world settings, participatory approaches, and the role of user training in enhancing the adoption of digital technologies.
Taux: a system for evaluating sound feedback in navigational tasks
This thesis presents the design and development of an evaluation system for generating audio displays that provide feedback to persons performing navigation tasks. It first develops the need for such a system by describing existing wayfinding solutions, investigating new electronic location-based methods that have the potential to change these solutions, and examining research on relevant audio information representation techniques. An evaluation system that supports the manipulation of two basic classes of audio display is then described. Based on prior work on wayfinding with audio display, research questions are developed that investigate the viability of different audio displays. These are used to generate hypotheses and develop an experiment that evaluates four variations of audio display for wayfinding. Questions are also formulated to evaluate a baseline condition that utilizes visual feedback. An experiment testing these hypotheses on sighted users is then described. Results from the experiment suggest that spatial audio combined with spoken hints performed best among the spatial-audio approaches compared. The results also suggest that muting a varying audio signal when a subject is on course did not improve performance. The system and method are then refined, and a second experiment is conducted with improved displays and an improved experimental methodology. After adding blindfolds for sighted subjects and increasing the difficulty of the navigation tasks by reducing the arrival radius, similar comparisons were observed. Overall, the two experiments demonstrate the viability of the prototyping tool for testing and refining multiple audio display combinations for navigational tasks. The detailed contributions of this work and future research opportunities conclude the thesis.
An augmented reality sign-reading assistant for users with reduced vision
People typically rely heavily on visual information when finding their way to unfamiliar locations. For individuals with reduced vision, there are a variety of navigational tools available to assist with this task if needed. However, for wayfinding in unfamiliar indoor environments the applicability of existing tools is limited. One potential approach to assist with this task is to enhance visual information about the location and content of existing signage in the environment. With this aim, we developed a prototype software application, which runs on a consumer head-mounted augmented reality (AR) device, to assist visually impaired users with sign-reading. The sign-reading assistant identifies real-world text (e.g., signs and room numbers) on command, highlights the text location, converts it to high contrast AR lettering, and optionally reads the content aloud via text-to-speech. We assessed the usability of this application in a behavioral experiment. Participants with simulated visual impairment were asked to locate a particular office within a hallway, either with or without AR assistance (referred to as the AR group and control group, respectively). Subjective assessments indicated that participants in the AR group found the application helpful for this task, and an analysis of walking paths indicated that these participants took more direct routes compared to the control group. However, participants in the AR group also walked more slowly and took more time to complete the task than the control group. The results point to several specific future goals for usability and system performance in AR-based assistive tools.
- …