Novel Multimodal Feedback Techniques for In-Car Mid-Air Gesture Interaction
This paper presents an investigation into the effects of different feedback modalities on mid-air gesture interaction for in-car infotainment systems. Driver distraction is a leading cause of car crashes and near-crash events, and mid-air interaction can reduce that distraction by lowering the visual demand of the infotainment system. Despite a range of available modalities, feedback in mid-air gesture systems is generally provided through visual displays. We conducted a simulated driving study to investigate how different types of multimodal feedback can support in-air gestures, considering their effects on eye-gaze behaviour and on the driving and gesturing tasks. We found that feedback modality influenced gesturing behaviour; moreover, drivers corrected falsely executed gestures more often in non-visual conditions. Our findings show that non-visual feedback can reduce visual distraction significantly
Design and semantics of form and movement (DeSForM 2006)
Design and Semantics of Form and Movement (DeSForM) grew from applied research exploring emerging design methods and practices to support new-generation product and interface design. The products and interfaces are concerned with the context of ubiquitous computing and ambient technologies and the need for greater empathy in the pre-programmed behaviour of the "machines" that populate our lives. Such explorative research in the CfDR has been led by Young, supported by Kyffin, Visiting Professor from Philips Design, and sponsored by Philips Design over a period of four years (research funding £87k). DeSForM1 was the first of a series of three conferences that enabled the presentation and debate of international work within this field:
• 1st European conference on Design and Semantics of Form and Movement (DeSForM1), Baltic, Gateshead, 2005, Feijs L., Kyffin S. & Young R.A. eds.
• 2nd European conference on Design and Semantics of Form and Movement (DeSForM2), Evoluon, Eindhoven, 2006, Feijs L., Kyffin S. & Young R.A. eds.
• 3rd European conference on Design and Semantics of Form and Movement (DeSForM3), New Design School Building, Newcastle, 2007, Feijs L., Kyffin S. & Young R.A. eds.
Philips' sponsorship of practice-based enquiry led to research by three teams of research students over three years and to ongoing sponsorship of research through the Northumbria University Design and Innovation Laboratory (nuDIL). Young has been invited onto the steering panel of the UK Thinking Digital Conference concerning the latest developments in digital and media technologies. Informed by this research is the work of PhD student Yukie Nakano, who examines new technologies in relation to eco-design textiles
Gesture production and comprehension in children with specific language impairment
Children with specific language impairment (SLI) have difficulties with spoken language. However, some recent research suggests that these impairments reflect underlying cognitive limitations. Studying gesture may inform us clinically and theoretically about the nature of the association between language and cognition. A total of 20 children with SLI and 19 typically developing (TD) peers were assessed on a novel measure of gesture production. Children were also assessed for sentence comprehension errors in a speech-gesture integration task. Children with SLI performed equally to peers on gesture production but performed less well when comprehending integrated speech and gesture. Error patterns revealed a significant group interaction: children with SLI made more gesture-based errors, whilst TD children made semantically based ones. Children with SLI accessed and produced lexically encoded gestures despite having impaired spoken vocabulary and this group also showed stronger associations between gesture and language than TD children. When SLI comprehension breaks down, gesture may be relied on over speech, whilst TD children have a preference for spoken cues. The findings suggest that for children with SLI, gesture scaffolds are still more related to language development than for TD peers who have out-grown earlier reliance on gestures. Future clinical implications may include standardized assessment of symbolic gesture and classroom based gesture support for clinical groups
Do you understand what I want to tell you? Early sensitivity in bilinguals' iconic gesture perception and production
Previous research has shown differences in monolingual and bilingual communication. We explored whether monolingual and bilingual pre-schoolers (N = 80) differ in their ability to understand others' iconic gestures (gesture perception) and produce intelligible iconic gestures themselves (gesture production), and how these two abilities are related to differences in parental iconic gesture frequency. In a gesture perception task, the experimenter replaced the last word of every sentence with an iconic gesture. The child was then asked to choose one of four pictures that matched the gesture as well as the sentence. In a gesture production task, children were asked to indicate "with their hands" to a deaf puppet which objects to select. Finally, parental gesture frequency was measured while parents answered three different questions. In the iconic gesture perception task, monolingual and bilingual children did not differ. In contrast, bilinguals produced more intelligible gestures than their monolingual peers. Finally, bilingual children's parents gestured more while they spoke than monolingual children's parents. We suggest that bilinguals' heightened sensitivity to their interaction partner supports their ability to produce intelligible gestures and results in a bilingual advantage in iconic gesture production
Speech and gesture in spatial language and cognition among the Yucatec Mayas
In previous analyses of the influence of language on cognition, speech has been the main channel examined. In studies conducted among Yucatec Mayas, efforts to determine the preferred frame of reference in use in this community have failed to reach an agreement (Bohnemeyer & Stolz, 2006; Levinson, 2003 vs. Le Guen, 2006, 2009). This paper argues for a multimodal analysis of language that encompasses gesture as well as speech, and shows that the preferred frame of reference in Yucatec Maya is only detectable through the analysis of co-speech gesture and not through speech alone. A series of experiments compares knowledge of the semantics of spatial terms, performance on nonlinguistic tasks and gestures produced by men and women. The results show a striking gender difference in the knowledge of the semantics of spatial terms, but an equal preference for a geocentric frame of reference in nonverbal tasks. In a localization task, participants used a variety of strategies in their speech, but they all exhibited a systematic preference for a geocentric frame of reference in their gestures
Safe Driving using Vision-based Hand Gesture Recognition System in Non-uniform Illumination Conditions
Nowadays, there is tremendous growth in in-car interfaces for driver safety and comfort, but controlling these devices while driving requires the driver's attention. One of the solutions to reduce the number of glances at these interfaces is to design an advanced driver assistance system (ADAS). A vision-based touch-less hand gesture recognition system is proposed here for in-car human-machine interfaces (HMI). The performance of such systems is unreliable under ambient illumination conditions, which change during the course of the day. Thus, the main focus of this work was to design a system that is robust towards changing lighting conditions. For this purpose, a homomorphic filter with adaptive thresholding binarization is used. Also, gray-level edge-based segmentation ensures that it is generalized for users of different skin tones and background colors. This work was validated on selected gestures from the Cambridge Hand Gesture Database captured in five sets of non-uniform illumination conditions that closely resemble in-car illumination conditions, yielding an overall system accuracy of 91%, an average frame-by-frame accuracy of 81.38%, and a latency of 3.78 milliseconds. A prototype of the proposed system was implemented on a Raspberry Pi 3 interface together with an Android application, which demonstrated its suitability for non-critical in-car interfaces like infotainment systems
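The illumination-robust pipeline described above (homomorphic filtering followed by adaptive-threshold binarization) can be sketched roughly as follows. This is a minimal NumPy illustration of the general technique under assumed parameter values, not the authors' implementation; all function names and constants here are illustrative.

```python
import numpy as np

def homomorphic_filter(img, gamma_l=0.5, gamma_h=2.0, c=1.0, d0=10.0):
    """Suppress slowly varying illumination (low frequencies) and boost
    reflectance detail (high frequencies) in the log domain."""
    img = img.astype(np.float64)
    log_img = np.log1p(img)                      # multiplicative -> additive
    F = np.fft.fftshift(np.fft.fft2(log_img))
    rows, cols = img.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2       # squared distance from DC
    # Gaussian high-frequency-emphasis filter: gamma_l at DC, gamma_h far out
    H = (gamma_h - gamma_l) * (1.0 - np.exp(-c * D2 / d0 ** 2)) + gamma_l
    filtered = np.fft.ifft2(np.fft.ifftshift(H * F)).real
    out = np.expm1(filtered)                     # back from log domain
    return np.clip(out, 0, 255).astype(np.uint8)

def adaptive_threshold(img, block=15, offset=5):
    """Binarize each pixel against its local mean (box window), computed
    quickly with an integral image."""
    pad = block // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))            # zero row/col for window sums
    r, c = img.shape
    s = (ii[block:block + r, block:block + c] - ii[:r, block:block + c]
         - ii[block:block + r, :c] + ii[:r, :c])
    local_mean = s / (block * block)
    return (img > local_mean - offset).astype(np.uint8) * 255
```

A gray-level edge-based segmentation stage (e.g. gradient magnitude on the binarized hand region) would follow before gesture classification; it is omitted here for brevity.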
Playing with Literacy
This is a qualitative research study that examines how the act of play influences the development of literacy skills. The research is formatted as a case study of a single five-year-old child from Western New York. Data were collected over the course of five weeks, with ten total observations of the focal child in his home environment. Data collection methods include interviews, observations of the focal child playing independently, observations of the focal child playing collaboratively, as well as the use of double entry journals and a cell phone for audio recordings. From the data, three major findings were discovered: a) play situations are created through imitation; b) self-directed speech is used during independent play; and c) imaginative play is promoted through social interaction with peers. Based on these major findings, conclusions and implications were made based on the way in which the focal child develops his literacy skills through various play interactions
The Intersection of Culture and ICF-CY Personal and Environmental Factors for Alternative and Augmentative Communication
Clinicians facilitate successful use of Alternative and Augmentative Communication (AAC). The most clinically competent providers, however, address needs that extend beyond technical AAC use to help clients experience full participation. This can only be achieved for all clients by considering the individual cultural factors that affect their participation. This article describes how the Personal and Environmental Factors of the World Health Organization's (WHO's) International Classification of Functioning, Disability and Health: Children & Youth Version (ICF-CY; WHO, 2007) encompass how cultural characteristics (e.g., family/home, school, recreational, social, or spiritual) impact participation. The ICF-CY can provide a structured way for Speech-Language Pathologists to consider culture so as to maximize children's full participation in activities
Doctor of Philosophy
Prior to the onset of spoken words, infants acquire gestures through early social interactions with their parents. Research on typically developing children has demonstrated an important relationship between maternal gesture use and child gesture and language development. Specifically, the variety and frequency of maternal gesture use has been shown to function as a scaffold for the development of language and an infant's own gesture development. This study examined gesture use in mothers of toddlers with expressive and receptive language delay during a naturalistic interaction with their young children. Maternal gestures were coded using a detailed coding scheme, according to category, specific type, and the presence or absence of co-occurring speech. The relationship between maternal gesture, child language, child gesture, and autism spectrum disorder (ASD) risk status was also examined. Participants included 54 parents of toddlers enrolled in a longitudinal study of language delay as a risk factor for ASD (language delay (LD) = 27, typically developing (TD) = 27). Results suggested similar gesture profiles across groups of mothers. Mothers of toddlers in the LD and TD groups were found to use gestures at the same frequency and convey a similar number of meanings through gesture (Wilks' Λ = 0.99, F(2, 51) = 0.273, p = 0.76, partial η² = 0.01). Mothers in both groups used more deictic gestures than other gesture types, F(1.39, 72.23) = 88.63, p 70%), and the gestures tended to emphasize the message conveyed in speech. Results for mothers in the language delay group revealed a significant negative relationship between maternal gesture and concurrent child receptive language (p = 0.04) as well as a significant negative relationship to change in expressive language over time (p = 0.02). Maternal gesture in the TD group was positively related to concurrent child gesture (p = 0.04).
This research demonstrated that mothers of toddlers with severe language delays are similar in their gestural communication to mothers of typically developing infants
May the Force Be with You: Ultrasound Haptic Feedback for Mid-Air Gesture Interaction in Cars
The use of ultrasound haptic feedback for mid-air gestures in cars has been proposed to provide a sense of control over the user's intended actions and to add touch to a touchless interaction. However, the impact of ultrasound feedback on the gesturing hand with regard to lane deviation, eyes-off-the-road time (EORT) and perceived mental demand has not yet been measured. This paper investigates the impact of uni- and multimodal presentation of ultrasound feedback on the primary driving task and the secondary gesturing task in a simulated driving environment. The multimodal combinations of ultrasound included visual, auditory, and peripheral lights. We found that ultrasound feedback presented uni-modally and bi-modally resulted in significantly less EORT than visual feedback. Our results suggest that multimodal ultrasound feedback for mid-air interaction decreases EORT without compromising driving performance or mental demand, and thus can increase safety while driving