Emerging Linguistic Functions in Early Infancy
This paper presents results from experimental studies on early language acquisition in infants and attempts to interpret the experimental results within the framework of the Ecological Theory of Language Acquisition (ETLA) recently proposed by Lacerda et al. (2004a). From this perspective, the infant's first steps in the acquisition of the ambient language are seen as a consequence of the infant's general capacity to represent sensory input and of the infant's interaction with other actors in its immediate ecological environment. On the basis of available experimental evidence, it will be argued that ETLA offers a productive alternative to traditional descriptive views of the language acquisition process by presenting an operative model of how early linguistic function may emerge through interaction.
ARTIFICIAL REALITY INTERACTION MODELS
In some implementations, the technology can render a dense layout of interactive mechanisms (e.g., selectable text and/or graphics) that are responsive to low-accuracy input methods on the XR device. In some implementations, an XR device can associate a shortcut with a physical object (e.g., an action relative to the physical object, an option to perform an action relative to the physical object, etc.). In some implementations, a workload manager can select an augmentation workload using one or more of: captured environment data (e.g., captured visual frames, audio, etc.), output from the initial stage model(s), and any other suitable data.
The Effects of Sharing Awareness Cues in Collaborative Mixed Reality
Augmented and Virtual Reality provide unique capabilities for Mixed Reality collaboration. This paper explores how different combinations of virtual awareness cues can provide users with valuable information about their collaborator's attention and actions. In a user study (n = 32, 16 pairs), we compared different combinations of three cues: Field-of-View (FoV) frustum, Eye-gaze ray, and Head-gaze ray against a baseline condition showing only virtual representations of each collaborator's head and hands. Through a collaborative object finding and placing task, the results showed that awareness cues significantly improved user performance, usability, and subjective preferences, with the combination of the FoV frustum and the Head-gaze ray being best. This work establishes the feasibility of room-scale MR collaboration and the utility of providing virtual awareness cues.
Ocular attention-sensing interface system
The purpose of the research was to develop an innovative human-computer interface based on eye movement and voice control. By eliminating the manual interface (keyboard, joystick, etc.), OASIS provides a control mechanism that is natural, efficient, accurate, and low in workload.