
    Directing Attention in an Augmented Reality Environment: An Attentional Tunneling Evaluation

    Augmented Reality applications use explicit cuing to support visual search. Explicit cues can improve visual search performance, but they can also cause perceptual issues such as attentional tunneling. An experiment was conducted to evaluate the relationship between directing attention and attentional tunneling in a dual-task structure: one task was tracking a moving target, the other was detecting non-target elements. Three conditions were tested: a baseline without cuing the target, cuing the target with the average scene color, and cuing it with a red cue. A different color was used for the cue in order to vary the level of attentional tunneling. The results show that directing attention induced attentional tunneling only in the red-cue condition, and that this effect is attributable to the color used for the cue.

    Integrative visual augmentation content and its optimization based on human visual processing

    In many daily visual tasks, our brain is remarkably good at prioritizing visual information. Nonetheless, it is not always capable of performing optimally, all the more so in an ever-evolving, demanding world. Supplementary visual guidance could enrich our lives from many perspectives, at both the individual and population scale. Through rapid technological advancements such as VR and AR systems, diverse visual cues show a powerful potential to deliberately guide attention and improve users’ performance in daily tasks. Existing solutions, however, face the challenge of overloading and overruling the user’s natural strategy with excessive visual information once digital content is superimposed on the real-world environment. Augmentation content of a subtle nature, designed with human visual processing in mind, is an essential milestone towards AR systems that are adaptive and supportive rather than overwhelming. The focus of the present thesis was, thus, to investigate how manipulating the spatial and temporal properties of visual cues affects human performance. Based on the findings of three studies published in peer-reviewed journals, I consider various challenging everyday settings and propose perceptually optimal augmentation solutions. I furthermore discuss possible extensions of the present work and recommendations for future research in this exciting field.

    Virtual visual cues: vice or virtue?

    Neural Imaginings: experiential and enactive approaches to contemporary psychologies, philosophies, and visual art as imagined navigations of the mind

    Neural Imaginings is a Master of Fine Arts project that culminated in this research paper, which accompanied the Post-Graduate Show held in December 2014 at Sydney College of the Arts Gallery, University of Sydney, Australia. The cluster of large ceramic sculptures presented a network – Mind Labyrinth (visceral ingress) – on which sat various conical objects – Mind Flowerings. A wall-mounted sculpture titled Cosmic Dance of the Dendrite accompanied the installation. This paper asked: why does art move us? This trans-disciplinary paper and my creative, process-led practice examine the contemporary role of the art object, the first-person perspective as the reality of the Virtual, and the dialogic functioning that occurs during an art encounter. An art encounter is an engagement of audiences aimed at invoking an individual’s bodily sense experience and the concomitant emotions and thoughts. Artists may harness these somato-sensory communications to activate a viewer’s awareness of their own self-agency and dialogic encounter with sculpture, which are self-evident in a viewer’s visual, tactile, somatic and/or kinaesthetic responses. The artwork employed aesthetic means to activate sensorial engagement from a viewer’s art encounter, enacting perceptual responses as part of a self-authenticating, meaning-making process. Neural Imaginings is both a presentation of, and a metaphor for, individual agency – in sculpting oneself into existence within one’s own mental space. This paper draws on the work of contemporary theorists from art, science and philosophy, striking at the core process of consciousness. A trans-disciplinary approach pivoted around a neuro-physiological paradigm, drawing on the following theorists: American philosopher Alva Noë; American neuroscientist Antonio Damasio; New Zealand art theorist Gregory Minissale; and scholars of Gilles Deleuze’s theory, Brian Massumi and Peter Gaffney. Symbiotically, the paper’s explanation and art themes oscillated between intrapersonal and disciplinary narratives in art and science, in pursuit of current approaches to trans-disciplinary confluence about the functions of the mind. This paper references artworks by contemporary artists from Australia, including Julie Rrap, Stelarc, Bill Henson, Helen Pynor and Jill Orr. Contemporary international artists included Hsu Yunghsu from Taiwan, Marc Leuthold from New York, and Gabriel Orozco from Mexico.

    Eyewear Computing – Augmenting the Human with Head-Mounted Wearable Assistants

    The seminar was composed of workshops and tutorials on head-mounted eye tracking, egocentric vision, optics, and head-mounted displays. The seminar welcomed 30 academic and industry researchers from Europe, the US, and Asia with diverse backgrounds, including wearable and ubiquitous computing, computer vision, developmental psychology, optics, and human-computer interaction. In contrast to several previous Dagstuhl seminars, we used an ignite talk format to reduce the time devoted to talks to one half-day and to leave the rest of the week for hands-on sessions, group work, general discussions, and socialising. The key results of this seminar are 1) the identification of key research challenges and summaries of breakout groups on multimodal eyewear computing, egocentric vision, security and privacy issues, skill augmentation and task guidance, eyewear computing for gaming, as well as prototyping of VR applications, 2) a list of datasets and research tools for eyewear computing, 3) three small-scale datasets recorded during the seminar, 4) an article in ACM Interactions entitled “Eyewear Computers for Human-Computer Interaction”, as well as 5) two follow-up workshops on “Egocentric Perception, Interaction, and Computing” at the European Conference on Computer Vision (ECCV) and “Eyewear Computing” at the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp).

    Augmented reality and scene examination

    The research presented in this thesis explores the impact of Augmented Reality on human performance and compares this technology with Virtual Reality and with a head-mounted video feed, for a variety of tasks related to scene examination. The motivation for the work was the question of whether Augmented Reality could provide a vehicle for training in crime scene investigation. The Augmented Reality application was developed using fiducial markers in the Windows Presentation Foundation, running on a wearable computer platform; Virtual Reality was developed using the Crytek game engine to present a photo-realistic 3D environment; and the video feed was provided through a head-mounted webcam. All media were presented through head-mounted displays of similar resolution as the sole source of visual information to participants in the experiments. The experiments were designed to increase the amount of mobility required to conduct the search task, i.e., from rotation in the horizontal or vertical plane through to movement around a room. In each experiment, participants were required to find objects and subsequently recall their locations. It is concluded that human performance is affected not merely by the medium through which the world is perceived but also by the constraints governing how movement in the world is controlled.
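
    The abstract above mentions a fiducial-marker pipeline (markers detected in a camera feed and used to anchor virtual content). As a rough illustration of how such marker tracking commonly works, and not as the thesis's own WPF implementation, the sketch below uses OpenCV's ArUco module; it assumes opencv-contrib-python 4.7+ and a webcam at index 0.

```python
# Illustrative sketch only: detect ArUco fiducial markers in a live camera
# feed and overlay their outlines. Assumes opencv-contrib-python >= 4.7 and
# a webcam at index 0; the thesis's own pipeline (fiducial markers in the
# Windows Presentation Foundation) is not reproduced here.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)  # head-mounted or desktop webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    if ids is not None:
        # Draw marker outlines and IDs; a full AR system would instead anchor
        # virtual content to the markers' estimated poses.
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("fiducial markers", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```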

    Visualizing a Task Performer’s Gaze to Foster Observers’ Performance and Learning: A Systematic Literature Review on Eye Movement Modeling Examples

    Eye movement modeling examples (EMMEs) are instructional videos (e.g., tutorials) that visualize another person’s gaze location while they demonstrate how to perform a task. This systematic literature review provides a detailed overview of studies on the effects of EMMEs in fostering observers’ performance and learning, and highlights their differences in EMME design. Through a broad, systematic search of four relevant databases, we identified 72 EMME studies (78 experiments). First, we created an overview of the different study backgrounds: studies most often taught tasks from the domains of sports/physical education, medicine, aviation, and STEM areas, and had different rationales for displaying EMME. Second, we outlined how studies differed in terms of participant characteristics, task types, and the design of the EMME materials, which makes it hard to infer how these differences affect performance and learning. Third, we concluded that the vast majority of the experiments showed at least some positive effects of EMME during learning, on tests directly after learning, and on tests after a delay. Finally, our results provide a first indication of which EMME characteristics may positively influence learning. Future research should more systematically examine the effects of specific EMME design choices for specific participant populations and task types.

    ‘Subtle’ Technology: Design for Facilitating Face-to-Face Interaction for Socially Anxious People

    PhD thesis.
    Shy people have a desire for social interaction but fear being scrutinised and rejected. This conflict results in attention deficits during face-to-face situations; it can cause the social atmosphere to become ‘frozen’ and shy persons to appear reticent. Many of them avoid such challenges, taking the ‘electronic extroversion’ route and experiencing real-world social isolation. This research aims to improve the social skills and experience of shy people. It establishes conceptual frameworks and guidelines for designing computer-mediated tools that amplify shy users’ social cognition while extending their conversational resources. Drawing on the theories of Social Objects, ‘natural’ HCI and unobtrusive Ubiquitous Computing, it proposes the Icebreaker Cognitive-Behavioural Model for applying user psychology to the systems’ features and functioning behaviour. Two initial design approaches were developed in the form of wearable computers and evaluated in a separate user-centred study. One emphasised the users’ privacy concerns in the form of a direct but covert display, the Vibrosign Armband. The other focused on low attention demand and low-key interaction preferences, rendered through a peripheral but overt visual display, the Icebreaker T-shirt, triggered by the users’ handshake and disguised in the system’s subtle operation. Quantitative feedback from vibrotactile experts indicated that the armband was effective in signalling various types of abstract information; however, it added to the mental load and required a disproportionate amount of training time. In contrast, qualitative feedback from shy users revealed unexpected benefits of the information display made public on the shirt front. It encouraged immediate and fluid interaction by providing a mutual ‘ticket to talk’ and an interpretative gap in the users’ relationship, although the rapid prototype compromised the technology’s subtle characteristics and impeded the users’ social experience. An iterative design extended the Icebreaker approach through systematic refinement and resulted in the Subtle Design Principle, implemented in the Icebreaker Jacket. Its subtle interaction and display modalities were compared with those of a focal-demand social aid using a mixed-method evaluation. Inferential analysis indicated that the subtle technology engaged better with users’ social aspirations and facilitated a higher degree of unobtrusive experience. Through the Icebreaker model and the Subtle Design Principle, together with the exploratory research framework and study outcomes, this thesis demonstrates the advantages of using subtle technology to help shy users cope with the challenges of face-to-face interaction and improve their social experience.
    Funding: RCUK under the Digital Economy Doctoral Training scheme, through the MAT programme, EPSRC Doctoral Training Centre EP/G03723X/1.