
    Augmented Reality-based Indoor Navigation: A Comparative Analysis of Handheld Devices vs. Google Glass

    © 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
    U. Rehman & S. Cao. (2017). IEEE Transactions on Human-Machine Systems, 47(1), 140–151. https://doi.org/10.1109/THMS.2016.2620106
    Navigation systems have been widely used in outdoor environments, but indoor navigation systems are still in the early stages of development. In this paper, we introduced an augmented-reality-based indoor navigation application to assist people in navigating indoor environments. The application can be implemented on electronic devices such as a smartphone or a head-mounted device. In particular, we examined Google Glass as a wearable head-mounted device in comparison with handheld navigation aids, including a smartphone and a paper map. We conducted both a technical assessment study and a human factors study. The technical assessment established the feasibility and reliability of the system. The human factors study evaluated human-machine system performance measures, including perceived accuracy, navigation time, subjective comfort, subjective workload, and route memory retention. The results showed that the wearable device was perceived to be more accurate, but other performance and workload results indicated that the wearable device was not significantly different from the handheld smartphone. We also found that both digital navigation aids were better than the paper map in terms of shorter navigation time and lower workload, but the digital navigation aids resulted in worse route retention. These results could provide empirical evidence supporting future designs of indoor navigation systems. Implications and future research are also discussed.
    This work was supported in part by NSERC Discovery Grant RGPIN-2015-04134.

    Data Efficient Learning: Towards Reducing Risk and Uncertainty of Data Driven Learning Paradigm

    The success of Deep Learning in various tasks is highly dependent on large amounts of domain-specific annotated data, which are expensive to acquire and may contain varying degrees of noise. In this doctoral journey, our research goal is first to identify and then to tackle the data-related issues that cause significant performance degradation in real-world applications of Deep Learning algorithms. Human Activity Recognition from RGB data is challenging due to the lack of relative motion parameters. To address this issue, we propose a novel framework that derives skeleton information from RGB data for activity recognition. Through experimentation, we demonstrate that our RGB-only solution surpasses the state-of-the-art methods, all of which exploit RGB-D video streams, by a notable margin. The predictive uncertainty of Deep Neural Networks (DNNs) makes them unreliable for real-world deployment. Moreover, available labeled data may contain noise. We aim to address these two issues holistically by proposing a unified density-driven framework, which can effectively denoise training data as well as avoid predicting on uncertain test data points. Our plug-and-play framework is easy to deploy in real-world applications while achieving superior performance over state-of-the-art techniques. To assess the effectiveness of our proposed framework in a real-world scenario, we experimented with X-ray images from COVID-19 patients. Supervised learning of DNNs inherits the limitation of a very narrow field of view in terms of known data distributions. Moreover, annotating data is costly. Hence, we explore self-supervised Siamese networks to avoid these constraints. Through extensive experimentation, we demonstrate that the self-supervised method performs surprisingly comparably to its supervised counterpart in a real-world use case. We also delve deeper, with activation mapping and feature distribution visualization, to understand the causality of this method. Through our research, we achieve a better understanding of the issues relating to data-driven learning, solve some of the core problems of this paradigm, and expose some novel and intriguing research questions to the community.
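    The abstract does not specify how its density-driven denoising works, but the general idea can be illustrated with a minimal sketch: estimate a per-class kernel density for each training sample and drop the lowest-density samples, on the assumption that mislabeled points sit in low-density regions of their assigned class. The function names, the Gaussian bandwidth, and the drop quantile below are illustrative assumptions, not the thesis's actual method.

```python
import numpy as np

def density_scores(feats, bandwidth=1.0):
    # Leave-one-out Gaussian kernel density estimate for each point.
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2 * bandwidth ** 2))
    np.fill_diagonal(k, 0.0)                 # ignore self-similarity
    return k.sum(axis=1) / (len(feats) - 1)

def denoise(feats, labels, quantile=0.2):
    """Return a boolean mask keeping samples whose density within their
    own class is above the given quantile (likely correctly labeled)."""
    keep = np.ones(len(feats), dtype=bool)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        s = density_scores(feats[idx])
        keep[idx[s < np.quantile(s, quantile)]] = False
    return keep
```

A sample far from the rest of its class (e.g. a mislabeled outlier) receives a near-zero density score and is filtered out, while tightly clustered samples are kept.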

    Enhancing Proprioception and Regulating Cognitive Load in Neurodiverse Populations through Biometric Monitoring with Wearable Technologies

    This paper considers the realm of wearable technologies and their prospective applications for individuals with neurodivergent conditions, specifically Autism Spectrum Disorders (ASDs). The study undertakes a multifaceted analysis that encompasses biomarker sensing technologies, AI-driven biofeedback mechanisms, and haptic devices, focusing on their implications for enhancing proprioception and social interaction among neurodivergent populations. While wearables offer a range of opportunities for societal advancement, a discernible gap remains: a scarcity of consumer-oriented applications tailored to the unique physiological and psychological needs of these individuals. Key takeaways underscore the emergent promise of tailored auditory stimuli in workplace dynamics and the efficacy of haptic feedback in sensory substitution. The investigation concludes with an urgent call for multidisciplinary research aimed at the development of specific consumer applications, rigorous empirical validation, and an ethical framework encompassing data privacy and user consent. As the pervasiveness of technology in daily life continues to expand, the article posits an imperative for future research to shift from generalized solutions to individualized applications, thereby ensuring that the spectrum of wearable technology truly accommodates the full scope of human neurodiversity.

    Cognitive Performance Enhancement for Multi-domain Operations

    Despite its desire to achieve cognitive dominance for multi-domain operations, the Army has yet to fully develop and adopt the concept of cognitive performance enhancement. This article provides a comprehensive assessment of the Army’s efforts in this area, explores increasing demands on soldier cognition, and compares the Army’s current approach to those of its adversaries. Its conclusions will help US military and policy practitioners establish the culture and behaviors that promote cognitive dominance and success across multiple domains.

    Memory Manipulations in Extended Reality

    Human memory has notable limitations (e.g., forgetting) which have necessitated a variety of memory aids (e.g., calendars). As we grow closer to mass adoption of everyday Extended Reality (XR), which frequently leverages perceptual limitations (e.g., redirected walking), it becomes pertinent to consider how XR could leverage memory limitations (forgetting, distortion, persistence) to induce memory manipulations. As memories highly impact our self-perception, social interactions, and behaviors, there is a pressing need to understand XR Memory Manipulations (XRMMs). We ran three speculative design workshops (n=12) in which XR and memory researchers created 48 XRMM scenarios. Through thematic analysis, we define XRMMs, present a framework of their core components, and reveal three classes (at encoding, pre-retrieval, at retrieval). Each class differs in terms of technology (AR, VR) and impact on memory (influencing the quality of memories, inducing forgetting, distorting memories). We raise ethical concerns and discuss the opportunities of perceptual and memory manipulations in XR.

    The Effectiveness of Monitor-Based Augmented Reality Paradigms for Learning Space-Related Technical Tasks

    Today there are many types of media that can help individuals learn and excel in the ongoing effort to acquire knowledge for a specific trait or function in a workplace, laboratory, or learning facility. Technology has advanced in the fields of transportation, information gathering, and education. The need for better recall of information is in demand in a wide variety of areas. Augmented reality (AR) is a technology that may help meet this demand. AR is a hybrid of reality and virtual reality (VR) that uses the three-dimensional location viewed through a video or optical see-through medium to capture the object's coordinates and superimpose virtual images, objects, or text on the scene (Azuma, 1997). The purpose of this research is to investigate four different modes of presentation and the effect of those modes on learning and recall of information using monitor-based augmented reality. The four modes of presentation are the Select, Observe, Interact, and Print modes. Each mode possesses different attributes that may affect learning and recall. The Select mode can be described as a mode of presentation that allows movement of the work piece in front of the tracking camera. The Observe mode involves information presentation using a pre-recorded video scene with no interaction with the work piece. The Interact mode allows the user to view a pre-recorded video scene and to point and click on the components of the work piece with a computer mouse on the monitor. The Print mode consists of printed material for each work piece component. It was hypothesized that the Select mode would provide the user with the richest presentation of information due to its information access capabilities, helping to decrease work time, reduce the likelihood of error during usage, enhance the user's motivation for learning tasks, and increase concurrent learning and performance through recall and retention. It was predicted that the Select mode would result in trainees who would recall the greatest amount of information even after extended periods of time had elapsed. This hypothesis was not supported: no significant differences between the four groups were found.

    The role of context in human memory augmentation

    Technology has always had a direct impact on what humans remember. In the era of smartphones and wearable devices, people easily capture information and videos on a daily basis, which can help them remember past experiences and attained knowledge, or simply evoke memories for reminiscing. The increasing use of such ubiquitous devices and technologies produces a sheer volume of pictures and videos that, in combination with additional contextual information, could significantly improve one’s ability to recall a past experience and prior knowledge. Calendar entries, application use logs, social media posts, and activity logs are only a few examples of such potentially memory-supportive additional information. This work explores how such memory-supportive information can be collected, filtered, and eventually utilized to generate memory cues: fragments of past experience or prior knowledge intended to trigger one’s memory recall. In this thesis, we showcase how we leverage modern ubiquitous technologies as a vessel for transferring established psychological methods from the lab into the real world, significantly and measurably augmenting human memory recall in a diverse set of often challenging contexts. We combine experimental evidence garnered from numerous field and lab studies with knowledge amassed from an extensive literature review to inform the design and development of future pervasive memory augmentation systems. Ultimately, this work contributes to the fundamental understanding of human memory and of how today’s modern technologies can be actuated to augment it.
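    As a toy illustration of how contextual information might be combined with captured media to generate memory cues, the sketch below ranks photos by how many contextual records (calendar entries, activity logs, posts) co-occur within a time window, on the assumption that richer surrounding context makes a better cue. This is not the thesis's actual pipeline; the data layout, function name, and one-hour window are assumptions for illustration.

```python
from datetime import datetime, timedelta

def rank_cues(photos, context_events, window=timedelta(hours=1)):
    """Rank candidate photos as memory cues by the amount of contextual
    information captured near each one.

    photos: list of (photo_id, datetime taken)
    context_events: list of (kind, datetime) from calendars, logs, posts
    """
    scored = []
    for pid, taken_at in photos:
        # count contextual records falling within the window around the photo
        score = sum(1 for _, t in context_events if abs(t - taken_at) <= window)
        scored.append((pid, score))
    # highest-scoring photos are presented first as recall triggers
    return sorted(scored, key=lambda item: -item[1])
```

A photo taken during a calendar event surrounded by social media activity would rank above one captured with no co-occurring context.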

    An open learning system for special needs education

    The field of special needs education, in the case of speech and language deficiencies, has seen great success utilizing a number of paper-based systems to help young children experiencing difficulty in language acquisition and the understanding of language. These systems employ card- and paper-based illustrations, which are combined to create scenarios for children in order to expose them to new vocabulary in context. While this success has encouraged the use of such systems for a long time, problems have been identified that need addressing. This paper presents research toward the application of an Open Learning system for special needs education that aims to provide an evolution in language learning in the context of understanding spoken instruction. Users of this Open Learning system benefit from open content with a novel presentation of keywords and associated context. The learning algorithm is derived from the field of applied computing in human biology, using the concept of spaced repetition and providing a novel augmentation of the memorization process for special needs education in a global Open Education setting.
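    The abstract names spaced repetition as the basis of its learning algorithm but gives no details. A widely used concrete instance is the SM-2 scheduling rule, sketched below as an assumption about the general technique rather than this paper's implementation; the field names and constants follow SM-2 conventions, not the Open Learning system.

```python
from dataclasses import dataclass

@dataclass
class Card:
    interval: int = 1    # days until the next review
    ease: float = 2.5    # growth factor applied to the interval
    reps: int = 0        # consecutive successful reviews

def review(card, quality):
    """Update a card's schedule after a review; quality is 0-5 (SM-2 scale)."""
    if quality < 3:                      # failed recall: restart the schedule
        card.reps = 0
        card.interval = 1
    else:
        card.reps += 1
        if card.reps == 1:
            card.interval = 1
        elif card.reps == 2:
            card.interval = 6
        else:                            # intervals grow geometrically
            card.interval = round(card.interval * card.ease)
        # adjust ease: perfect recall raises it, hesitant recall lowers it
        card.ease = max(1.3, card.ease + 0.1
                        - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return card
```

Three perfect reviews schedule a keyword at 1, 6, and then roughly 16 days out, so well-known vocabulary is shown ever less often while failed items return immediately.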

    A cognitive ego-vision system for interactive assistance

    With increasing computational power and decreasing size, computers nowadays are already wearable and mobile, and they have become companions in people's everyday lives. Personal digital assistants and mobile phones equipped with adequate software attract a lot of public interest, although the functionality they provide in terms of assistance is little more than a mobile database for appointments, addresses, to-do lists, and photos. Compared to the assistance a human can provide, such systems can hardly be called real assistants. The motivation to construct more human-like assistance systems that develop a certain level of cognitive capability leads to the exploration of two central paradigms in this work. The first paradigm is termed cognitive vision systems. Such systems take human cognition as a design principle for their underlying concepts and develop learning and adaptation capabilities to be more flexible in their application. They are embodied, active, and situated. Second, the ego-vision paradigm is introduced as a very tight interaction scheme between a user and a computer system that especially eases close collaboration and assistance between the two. Ego-vision systems (EVSs) take a user's (visual) perspective and integrate the human into the system's processing loop by means of shared perception and augmented reality. EVSs adopt techniques from cognitive vision to identify objects, interpret actions, and understand the user's visual perception, and they articulate their knowledge and interpretations by augmenting the user's own view. These two paradigms are studied as rather general concepts, but always with the goal of realizing more flexible assistance systems that closely collaborate with their users. This work provides three major contributions. First, a definition and explanation of ego-vision as a novel paradigm is given; the benefits and challenges of this paradigm are discussed as well. Second, a configuration of different approaches that permit an ego-vision system to perceive its environment and its user is presented, in terms of object and action recognition, head gesture recognition, and mosaicing. These account for the specific challenges identified for ego-vision systems, whose perception capabilities are based on wearable sensors only. Finally, a visual active memory (VAM) is introduced as a flexible conceptual architecture for cognitive vision systems in general, and for assistance systems in particular. It adopts principles of human cognition to develop a representation for the information stored in this memory. So-called memory processes continuously analyze, modify, and extend the content of the VAM, and the functionality of the integrated system emerges from the coordinated interplay of these memory processes. An integrated assistance system applying the approaches and concepts outlined above is implemented on the basis of the visual active memory. The system architecture is discussed, and some exemplary processing paths in the system are presented. The system assists users in object manipulation tasks and has reached a maturity level that allows user studies to be conducted. Quantitative results from the different integrated memory processes are presented, as well as an assessment of the interactive system by means of these user studies.