Ambient Gestures
We present Ambient Gestures, a novel gesture-based system designed to support ubiquitous "in the environment" interactions with everyday computing technology. Hand gestures and audio feedback allow users to control computer applications without reliance on a graphical user interface, and without having to switch from the context of a non-computer task to the context of the computer. The Ambient Gestures system is composed of a vision recognition software application, a set of gestures to be processed by a scripting application, and a navigation and selection application that is controlled by the gestures. This system allows us to explore gestures as the primary means of interaction within a multimodal, multimedia environment. In this paper we describe the Ambient Gestures system, define the gestures and the interactions that can be achieved in this environment, and present a formative study of the system. We conclude with a discussion of our findings and future applications of Ambient Gestures in ubiquitous computing.
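The abstract does not include an implementation, but the pipeline it names (vision recognizer → scripting layer → navigation-and-selection application) can be sketched as a simple event chain. The class names, gesture labels, and menu items below are illustrative assumptions, not the authors' actual components:

```python
# Minimal sketch of the recognizer -> scripting layer -> application chain
# described above. All names here are assumptions, not the authors' code.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class GestureEvent:
    name: str  # e.g. "swipe_left", "hold" (hypothetical gesture labels)


class ScriptingLayer:
    """Maps recognized gestures to application commands."""

    def __init__(self) -> None:
        self._bindings: Dict[str, Callable[[], None]] = {}

    def bind(self, gesture: str, command: Callable[[], None]) -> None:
        self._bindings[gesture] = command

    def dispatch(self, event: GestureEvent) -> None:
        # Unbound gestures are simply ignored, so incidental hand
        # movements do not trigger the application.
        handler = self._bindings.get(event.name)
        if handler:
            handler()


class NavigationApp:
    """Stand-in for the navigation-and-selection application."""

    def __init__(self, items: List[str]) -> None:
        self.items, self.index = items, 0

    def next_item(self) -> None:
        self.index = (self.index + 1) % len(self.items)
        print(f"focus: {self.items[self.index]}")  # audio feedback in the real system

    def select(self) -> None:
        print(f"selected: {self.items[self.index]}")


app = NavigationApp(["play music", "lights", "thermostat"])
layer = ScriptingLayer()
layer.bind("swipe_left", app.next_item)
layer.bind("hold", app.select)
layer.dispatch(GestureEvent("swipe_left"))  # would come from the vision recognizer
layer.dispatch(GestureEvent("hold"))
```

Keeping the binding table in a separate scripting layer, as the abstract suggests, is what lets the same recognizer drive different applications without either side knowing about the other.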
Gesture production and comprehension in children with specific language impairment
Children with specific language impairment (SLI) have difficulties with spoken language. However, some recent research suggests that these impairments reflect underlying cognitive limitations. Studying gesture may inform us clinically and theoretically about the nature of the association between language and cognition. A total of 20 children with SLI and 19 typically developing (TD) peers were assessed on a novel measure of gesture production. Children were also assessed for sentence comprehension errors in a speech-gesture integration task. Children with SLI performed as well as their peers on gesture production but performed less well when comprehending integrated speech and gesture. Error patterns revealed a significant group interaction: children with SLI made more gesture-based errors, whilst TD children made semantically based ones. Children with SLI accessed and produced lexically encoded gestures despite having impaired spoken vocabulary, and this group also showed stronger associations between gesture and language than TD children. When comprehension breaks down in SLI, gesture may be relied on over speech, whilst TD children have a preference for spoken cues. The findings suggest that for children with SLI, gesture still scaffolds language development, whereas TD peers have outgrown their earlier reliance on gestures. Future clinical implications may include standardized assessment of symbolic gesture and classroom-based gesture support for clinical groups.
Ambient hues and audible cues: An approach to automotive user interface design using multi-modal feedback
The use of touchscreen interfaces for in-vehicle information, entertainment, and the control of comfort settings is proliferating. However, using these interfaces requires the same visual and manual resources needed for safe driving. Guided by prevalent research on the human visual system, attention, and multimodal redundancy, the Hues and Cues design paradigm was developed to make touchscreen automotive user interfaces more suitable for use while driving. This paradigm was applied to a prototype automotive user interface and evaluated with respect to driver performance using the dual-task Lane Change Test (LCT). Each level of the design paradigm was evaluated in light of possible gender differences. The results of the repeated-measures experiment suggest that, compared to interfaces without the Hues and Cues paradigm applied, the Hues and Cues interface requires less mental effort to operate, is more usable, and is preferred. However, driver-performance results differed: interfaces with only visual feedback produced better task times, and interfaces with only auditory feedback showed significant gender differences in the driving task. Overall, the results show that the presentation of multimodal feedback can be useful in designing automotive interfaces, but it must be flexible enough to account for individual differences.
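The paradigm's core idea, redundant visual and auditory confirmation of the same touch event, might look like the following sketch; the event names and feedback channels are illustrative assumptions rather than the study's prototype:

```python
# Minimal sketch of redundant multimodal feedback ("hues and cues"):
# every touch event yields both a visual hue change and an audible cue,
# so the driver can confirm input without looking. Names are assumptions.
from typing import Callable, List


def on_touch(control: str, channels: List[Callable[[str], None]]) -> None:
    # Redundant presentation: every channel fires for the same event,
    # so the interface degrades gracefully when the driver's eyes are busy.
    for channel in channels:
        channel(control)


def hue(control: str) -> None:
    print(f"[visual] highlight '{control}' with a confirmation hue")


def cue(control: str) -> None:
    print(f"[audio] play earcon for '{control}'")


on_touch("fan_speed_up", [hue, cue])
```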
Accessibility and dimensionality: enhanced real time creative independence for digital musicians with quadriplegic cerebral palsy
Inclusive music activities for people with physical disabilities commonly emphasise facilitated processes, based both on constrained gestural capabilities and on the simplicity of the available interfaces. Inclusive music processes employ consumer controllers, computer access tools and/or specialized digital musical instruments (DMIs). The first category reveals a design ethos identified by the authors as artefact multiplication -- many sliders, buttons, dials and menu layers; the latter types offer ergonomic accessibility through artefact magnification. We present a prototype DMI that eschews artefact multiplication in pursuit of enhanced real-time creative independence. We reconceptualise the universal click-drag interaction model via a single sensor type, which affords both binary and continuous performance control. Accessibility is optimized via a familiar interaction model and through customized ergonomics, but it is the mapping strategy that emphasizes transparency and sophistication in the hierarchical correspondences between the available gesture dimensions and expressive musical cues. Through a participatory and progressive methodology, we identify an ostensibly simple targeting gesture rich in dynamic and reliable features: (1) contact location; (2) contact duration; (3) momentary force; (4) continuous force; and (5) dyad orientation. These features are mapped onto dynamic musical cues, most notably via new mappings for vibrato and arpeggio execution.
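As a rough illustration of the kind of hierarchical mapping the abstract describes, the sketch below routes the five listed gesture features to plausible musical parameters. The scalings, ranges, and parameter names are assumptions for illustration, not the instrument's actual mapping:

```python
# Hypothetical sketch of mapping the five gesture features listed above
# onto musical cues. Ranges and parameter names are illustrative only.
from dataclasses import dataclass


@dataclass
class Gesture:
    location: float     # (1) contact location, normalized 0..1
    duration: float     # (2) contact duration in seconds
    peak_force: float   # (3) momentary force, normalized 0..1
    force: float        # (4) continuous force, normalized 0..1
    orientation: float  # (5) dyad orientation in degrees


def map_to_cues(g: Gesture) -> dict:
    """One plausible hierarchy: binary cues gate the continuous ones."""
    note_on = g.peak_force > 0.1                        # click: binary onset
    return {
        "note_on": note_on,
        "pitch": int(48 + g.location * 24),             # location -> scale position
        "sustain": g.duration > 0.5,                    # long contact -> held note
        "vibrato_depth": g.force if note_on else 0.0,   # continuous force -> vibrato
        "arpeggio_spread": g.orientation / 90.0,        # dyad angle -> arpeggio execution
    }


print(map_to_cues(Gesture(0.5, 0.8, 0.6, 0.4, 45.0)))
```

Gating the continuous dimensions behind the binary onset mirrors the click-drag model the authors reconceptualise: one sensor type supplies both the discrete trigger and the expressive continuum.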
Wii are out of Control: Bodies, Game Screens and the Production of Gestural Excess
This paper looks at the ways that the Nintendo Wii might shift the locus of game analysis away from the screen and towards players' corporeal relationship to the screen. The Wii hardware and software, the television screen, the physical space and players' bodies constitute an intriguing form of kinaesthetic play that borrows from cultural fantasies about virtual reality. This play, while conditioned by the goal-driven and control logics of gameplay, nevertheless leads to a production of "gestural excess" as bodies twist, contort and perform in ways that the game as such neither demands nor necessarily accommodates.
Neural correlates of the processing of co-speech gestures
In communicative situations, speech is often accompanied by gestures. For example, speakers tend to illustrate certain contents of speech by means of iconic gestures, which are hand movements that bear a formal relationship to the contents of speech. The meaning of an iconic gesture is determined both by its form and by the speech context in which it is performed. Thus, gesture and speech interact in comprehension. Using fMRI, the present study investigated which brain areas are involved in this interaction process. Participants watched videos in which sentences containing an ambiguous word (e.g. "She touched the mouse") were accompanied by either a meaningless grooming movement, a gesture supporting the more frequent dominant meaning (e.g. animal) or a gesture supporting the less frequent subordinate meaning (e.g. computer device). We hypothesized that brain areas involved in the interaction of gesture and speech would show greater activation to gesture-supported sentences as compared to sentences accompanied by a meaningless grooming movement. The main results are that, when contrasted with grooming, both types of gestures (dominant and subordinate) activated an array of brain regions consisting of the left posterior superior temporal sulcus (STS), the inferior parietal lobule bilaterally and the ventral precentral sulcus bilaterally. Given the crucial role of the STS in audiovisual integration processes, this activation might reflect the interaction between the meaning of gesture and the ambiguous sentence. The activations in inferior frontal and inferior parietal regions may reflect a mechanism of determining the goal of co-speech hand movements through an observation-execution matching process.
The evolution of human-dog communication mechanisms during the domestication process
Two theories have tried to explain the divergences between dogs and their ancestral progenitors: the "Domestication hypothesis", which claims that the origin of most of the dog's behaviors is linked to the genetic processes involved in domestication, and the "Two-stage hypothesis", which emphasizes the role of behaviors acquired through individual experience. This research project examined the ontogenetic mechanisms that underlie the dog-human relationship and communication in the most ancient domestic species. The first aim was to assess whether water rescue training affects the human-dog attachment bond, using an adapted version of the "Strange Situation Test". The second aim was to clarify whether the ability to follow human gestures could be influenced by living in a low-socialization regime. The third aim was to evaluate how much dogs weight the information given by human (familiar and unfamiliar) posture and voice when asked to perform transitive and intransitive actions, and how much this is related to the domestication process. The fourth aim was to assess whether emotional chemosignals contained in human sweat could affect dogs' physiology and behavior. Finally, an overview of sex differences in dogs' personality traits, as well as in their cognitive and perceptual processes, was made to explore whether such dissimilarities were shaped by the domestication process or whether the sex-specific differences existing in wild animals have been maintained. All the results presented in this doctoral dissertation converge in emphasising the central role of ontogenetic processes in the acquisition of socio-cognitive skills, cognitive processes and perception in dogs.
Struggling for Structure: cognitive origins of grammatical diversity and their implications for the Human Faculty of Language
There are between 5,000 and 8,000 distinct living languages spoken in the world today, characterized both by exceptional diversity and by significant similarities. Many researchers believe that at least part of this ability to communicate with language arises from a uniquely human Faculty of Language (cf. Hauser, Chomsky, & Fitch, 2002; Pinker & Jackendoff, 2005).
Constraints on Distribution of Palatalized Stops: Evidence for Licensing by Cue
…