Beaming Displays
Existing near-eye display designs struggle to balance multiple trade-offs such as form factor, weight, computational
requirements, and battery life. These design trade-offs are major obstacles on the path towards an all-day usable near-eye display.
In this work, we address these trade-offs by, paradoxically, removing the display from near-eye displays. We present the beaming displays, a new type of near-eye display system that uses a projector and an all-passive wearable headset. We modify an off-the-shelf projector with additional lenses and install it in the environment to beam images from a distance to a passive wearable
headset. The beaming projection system tracks the current position of a wearable headset to project distortion-free images with
correct perspectives. In our system, the wearable headset guides the beamed images to the user's retina, where they are perceived as an augmented scene within the user's field of view. In addition to the system design of the beaming display, we provide a physical prototype and show that it can achieve resolutions as high as those of consumer-level near-eye displays. We also discuss different aspects of the design space for our proposal.
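The projection step described above, rendering a distortion-free image for a tracked headset, reduces in the simplest planar case to warping the source image by a plane-induced homography: the projector pixels that land on the tracked headset optics are found, and the source image is mapped onto them. The following is a minimal sketch under assumed values (toy projector matrix, hypothetical combiner corners), not the authors' implementation:

```python
import numpy as np

def project(P, X):
    """Project 3D world points X (N,3) through a 3x4 projection matrix P to pixels (N,2)."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

def homography(src, dst):
    """Direct Linear Transform: the 3x3 homography mapping four src points to four dst points."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Hypothetical calibration: projector intrinsics K, projector at the world origin.
K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
P_proj = np.hstack([K, np.zeros((3, 1))])

# Tracked pose: the four corners of the headset's (assumed planar) optics in world space.
combiner = np.array([[-0.02, 0.01, 1.0], [0.02, 0.01, 1.0],
                     [0.02, -0.01, 1.0], [-0.02, -0.01, 1.0]])

# Map source-image corners to the projector pixels hitting those tracked corners;
# warping the source image by H yields the perspective-correct beamed image.
image_corners = np.array([[0.0, 0.0], [1920.0, 0.0], [1920.0, 1080.0], [0.0, 1080.0]])
H = homography(image_corners, project(P_proj, combiner))
```

At runtime the tracked corner positions would be refreshed every frame and the warp recomputed, so the beamed image stays perspective-correct as the wearer moves.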
Multimodal Data Analysis of Dyadic Interactions for an Automated Feedback System Supporting Parent Implementation of Pivotal Response Treatment
Parents fulfill a pivotal role in early childhood development of social and communication
skills. In children with autism, the development of these skills can be delayed. Applied
behavioral analysis (ABA) techniques have been created to aid in skill acquisition.
Among these, pivotal response treatment (PRT) has been empirically shown to foster
improvements. Research into PRT implementation has also shown that parents can be
trained to be effective interventionists for their children. The current difficulty in PRT
training is how to disseminate training to parents who need it, and how to support and
motivate practitioners after training.
Evaluation of the parents’ fidelity to implementation is often undertaken using video
probes that depict the dyadic interaction occurring between the parent and the child during
PRT sessions. These videos are time consuming for clinicians to process, and often result
in only minimal feedback for the parents. Current trends in technology could be utilized to
alleviate the manual cost of extracting data from the videos, affording greater
opportunities for providing clinician-created feedback as well as automated assessments.
The naturalistic context of the video probes along with the dependence on ubiquitous
recording devices creates a difficult scenario for classification tasks. The domain of the
PRT video probes can be expected to have high levels of both aleatory and epistemic
uncertainty. Addressing these challenges requires examination of the multimodal data
along with implementation and evaluation of classification algorithms. This is explored
through the use of a new dataset of PRT videos.
The relationship between the parent and the clinician is important. The clinician can
provide support and help build self-efficacy in addition to providing knowledge and
modeling of treatment procedures. Facilitating this relationship along with automated
feedback not only provides the opportunity to present expert feedback to the parent, but
also allows the clinician to aid in personalizing the classification models. By utilizing a
human-in-the-loop framework, clinicians can aid in addressing the uncertainty in the
classification models by providing additional labeled samples. This will allow the system
to improve classification and provides a person-centered approach to extracting
multimodal data from PRT video probes.
Dissertation/Thesis: Doctoral Dissertation, Computer Science, 201
Creating Bio-adaptive Visual Cues for a Social Virtual Reality Meditation Environment
This thesis examines the design and implementation of adaptive visual cues for a social virtual reality meditation environment. The system described here adapts to the user's bio- and neurofeedback and uses those data in visual cues to convey information about physiological and affective states during meditation exercises, supporting two simultaneous users.
The thesis shows the development process of different kinds of visual cues and attempts to pinpoint best practices, design principles, and pitfalls of visual cue development in this context. Also examined are the criteria for selecting appropriate visual cues and how to convey information about biophysical synchronization between users.
The visual cues examined here are created specifically for a virtual reality environment, which differs as a platform from traditional two-dimensional content such as user interfaces on a computer display. Points of interest are how to embed the visual cues into the virtual reality environment so that the user experience remains immersive and the visual cues convey information correctly and intuitively.
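A recurring pattern in bio-adaptive cue design is turning a noisy physiological stream into a slowly varying visual parameter, since a cue that tracks raw readings flickers and breaks immersion. A minimal sketch of such a mapping (hypothetical class and calibration values, not the thesis implementation), normalizing a reading such as heart rate into a 0-to-1 cue intensity with exponential smoothing:

```python
class BioCueMapper:
    """Map raw physiological readings to a 0..1 visual-cue intensity.

    lo/hi are per-user calibration bounds; alpha controls how quickly the
    cue follows the signal (smaller alpha = smoother cue, less flicker).
    """

    def __init__(self, lo: float, hi: float, alpha: float = 0.1):
        self.lo, self.hi, self.alpha = lo, hi, alpha
        self.value = 0.0  # current smoothed intensity

    def update(self, reading: float) -> float:
        # Normalize into [0, 1], then exponentially smooth toward the target.
        target = min(max((reading - self.lo) / (self.hi - self.lo), 0.0), 1.0)
        self.value += self.alpha * (target - self.value)
        return self.value

# Simulated heart-rate samples driving one cue; a synchronization cue between
# two users could then compare smoothed values, e.g. 1 - abs(a.value - b.value).
hr_cue = BioCueMapper(lo=55.0, hi=95.0)
for hr in [60, 62, 65, 70, 78]:
    intensity = hr_cue.update(hr)
```

The same mapper could drive any visual channel (opacity, particle density, glow color), keeping the signal-processing and rendering concerns separate.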
Need Finding for an Embodied Coding Platform: Educators’ Practices and Perspectives
Eight middle- and high-school Computer Science (CS) teachers in San Diego County were interviewed about the major challenges their students commonly encounter in learning computer programming. We identified strategic design opportunities -- that is, challenges and needs that can be addressed in innovative ways through the affordances of Augmented and Virtual Reality (AR/VR). Thematic Analysis of the interviews yielded six thematic clusters: Tools for Learning, Visualization and Representation, Pedagogical Approaches, Classroom Culture, Motivation, and Community Connections. Within the theme of visualization, focal clusters centered on visualizing problem spaces and using metaphors to explain computational concepts, indicating that an AR/VR coding system could help users represent computational problems by allowing them to build from existing embodied experiences and knowledge. Additionally, codes clustered within the theme of learning tools reflected educators' preference for web-based IDEs, which involve minimal start-up costs, as well as concern over the degree of transfer in learning between block- and text-based interfaces. Finally, themes related to motivation, community, and pedagogical practices indicated that the design of an AR coding platform should support collaboration, self-expression, and autonomy in learning. It should also foster self-efficacy and learners' ability to address lived experience and real-world problems through computational means.
Invisible but Understandable: In Search of the Sweet Spot between Technology Invisibility and Transparency in Smart Spaces and Beyond
Smart technology is already present in many areas of everyday life. People rely on algorithms in crucial life domains such as finance and healthcare, and the smart car promises a more relaxed driving experience; all the while, the technology recedes further into the background. The smarter the technology, the more opaque it tends to become. Users no longer understand how the technology works, what its limits are, and what consequences for autonomy and privacy emerge. Both extremes, total invisibility and total transparency, come with specific challenges and do not form reasonable design goals. This research explores the potential tension between smart and invisible versus transparent and understandable technology. We discuss related theories from the fields of explainable AI (XAI) and trust psychology, and then introduce transparency in smart spaces as a special field of application. A case study explores specific challenges and design approaches through the example of a so-called room intelligence (RI), i.e., a special kind of smart living room. We conclude with research perspectives for more general design approaches and implications for future research.
A mixed reality telepresence system for collaborative space operation
This paper presents a Mixed Reality system that results from the integration of a telepresence system and an application to improve collaborative space exploration. The system combines free viewpoint video with immersive projection technology to support non-verbal communication, including eye gaze, inter-personal distance and facial expression. Importantly, these can be interpreted together as people move around the simulation, maintaining natural social distance. The application is a simulation of Mars, within which the collaborators must come to agreement over, for example, where the Rover should land and go.
The first contribution is the creation of a Mixed Reality system supporting contextualization of non-verbal communication. Two technological contributions are prototyping a technique to subtract a person from a background that may contain physical objects and/or moving images, and a lightweight texturing method for multi-view rendering which balances visual and temporal quality. A practical contribution is the demonstration of pragmatic approaches to sharing space between display systems of distinct levels of immersion. A research-tool contribution is a system that allows comparison of conventional authored and video-based reconstructed avatars, within an environment that encourages exploration and social interaction. Aspects of system quality, including the communication of facial expression and end-to-end latency, are reported.
DualStream: Spatially Sharing Selves and Surroundings using Mobile Devices and Augmented Reality
In-person human interaction relies on our spatial perception of each other
and our surroundings. Current remote communication tools partially address each
of these aspects. Video calls convey real user representations but without
spatial interactions. Augmented and Virtual Reality (AR/VR) experiences are
immersive and spatial but often use virtual environments and characters instead
of real-life representations. Bridging these gaps, we introduce DualStream, a
system for synchronous mobile AR remote communication that captures, streams,
and displays spatial representations of users and their surroundings.
DualStream supports transitions between user and environment representations
with different levels of visuospatial fidelity, as well as the creation of
persistent shared spaces using environment snapshots. We demonstrate how
DualStream can enable spatial communication in real-world contexts, and support
the creation of blended spaces for collaboration. A formative evaluation of
DualStream revealed that users valued the ability to interact spatially and
move between representations, and could see DualStream fitting into their own
remote communication practices in the near future. Drawing from these findings,
we discuss new opportunities for designing more widely accessible spatial
communication tools, centered around the mobile phone.
Comment: 10 pages, 4 figures, 1 table; to appear in the proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR) 202
Perceptual Allowances of Anamorphic Interaction Cues in Spatial Augmented Reality
Spatial Augmented Reality (SAR) enables the projection of digital content directly on the physical environment without the use of wearable displays. In spaces where viewers are encouraged to explore different locations, perspective anamorphosis techniques can be used to guide them through the physical environment. We propose a design space for describing anamorphic SAR interaction cues based on the continuity of the image when projected onto the environment, and the need for movement in order to understand the cue. We conduct two perceptual studies using virtual reality (VR) to simulate a SAR environment, to explore how well viewers identify ocular points on various surface geometries. We also present a system approach and experiment design for a future study to compare participants' ability to find the ocular point in a VR setting versus a SAR setting. This work can enable designers to create anamorphic content that takes advantage of the geometry in their physical space.
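The geometry underlying perspective anamorphosis can be stated compactly: to make content appear at a desired location when seen from the ocular point, draw each point where the ray from the eye through that location meets the projection surface. A minimal sketch for a planar surface (illustrative names and values; the surfaces studied in the paper are more general):

```python
import numpy as np

def anamorphic_point(eye, virtual_pt, plane_pt, plane_n):
    """Intersect the ray from `eye` through `virtual_pt` with the plane given by
    point `plane_pt` and normal `plane_n`. The result is where to draw the point
    on the surface so that it appears at `virtual_pt` from the ocular point."""
    d = virtual_pt - eye
    denom = float(np.dot(plane_n, d))
    if abs(denom) < 1e-9:
        return None  # ray parallel to the surface: no valid drawing position
    t = float(np.dot(plane_n, plane_pt - eye)) / denom
    return eye + t * d

# Viewer at the ocular point, a virtual marker floating ahead of the viewer,
# and a wall 3 m away: the marker is drawn on the wall along the same sight ray.
eye = np.array([0.0, 1.6, 0.0])
virtual = np.array([0.4, 1.2, 1.0])
on_wall = anamorphic_point(eye, virtual,
                           plane_pt=np.array([0.0, 0.0, 3.0]),
                           plane_n=np.array([0.0, 0.0, 1.0]))
```

From any viewpoint other than the ocular point the drawn content no longer lines up with the intended location, which is the property such cues exploit to encourage viewer movement.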