    Towards an in-vehicle sonically-enhanced gesture control interface: A pilot study

    A pilot study was conducted to explore the potential of sonically-enhanced gestures as controls for future in-vehicle information systems (IVIS). Four concept menu systems were developed using a LEAP Motion and Pure Data: (1) 2x2 with auditory feedback, (2) 2x2 without auditory feedback, (3) 4x4 with auditory feedback, and (4) 4x4 without auditory feedback. Seven participants drove in a simulator while completing simple target-acquisition tasks using each of the four prototype systems. Driving performance and eye glance behavior were collected, as well as subjective ratings of workload and system preference. Results from driving performance and eye tracking measures strongly indicate that the 2x2 grids yield better driving safety outcomes than the 4x4 grids. Subjective ratings show similar patterns for driver workload and preferences. Auditory feedback led to similar improvements in driving performance and eye glance behavior, as well as in subjective ratings of workload and preference, compared to visual-only feedback.
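    To make the prototype setup concrete, here is a minimal, hypothetical sketch (in Python rather than the Pure Data patches used in the study) of how a tracked hand position could be mapped onto a 2x2 or 4x4 menu grid, with an optional earcon when the selected cell changes. The coordinate normalisation, class names and feedback hook are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: mapping a normalized hand position from a LEAP-style
# tracker onto an N x N menu grid, with an optional earcon on cell changes.
# Coordinates, grid sizes, and the feedback hook are assumptions, not the
# study's actual Pure Data patch.

def cell_for_position(x, y, grid_size):
    """Map normalized coordinates in [0, 1) to a (row, col) cell."""
    col = min(int(x * grid_size), grid_size - 1)
    row = min(int(y * grid_size), grid_size - 1)
    return row, col

class GestureMenu:
    def __init__(self, grid_size=2, auditory_feedback=True):
        self.grid_size = grid_size          # 2 for the 2x2 menu, 4 for 4x4
        self.auditory_feedback = auditory_feedback
        self.current_cell = None

    def update(self, x, y):
        cell = cell_for_position(x, y, self.grid_size)
        if cell != self.current_cell:
            self.current_cell = cell
            if self.auditory_feedback:
                self.play_earcon(cell)
        return cell

    def play_earcon(self, cell):
        # Placeholder: in the study, auditory feedback was generated in Pure Data;
        # here we only report which earcon would be triggered.
        print(f"earcon for cell {cell}")

# Example: a hand hovering at (0.7, 0.2) selects the top-right cell of a 2x2 menu.
menu = GestureMenu(grid_size=2, auditory_feedback=True)
print(menu.update(0.7, 0.2))
```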

    May the Force Be with You: Ultrasound Haptic Feedback for Mid-Air Gesture Interaction in Cars

    The use of ultrasound haptic feedback for mid-air gestures in cars has been proposed to provide a sense of control over the user's intended actions and to add touch to a touchless interaction. However, the impact of ultrasound feedback delivered to the gesturing hand on lane deviation, eyes-off-the-road time (EORT) and perceived mental demand has not yet been measured. This paper investigates the impact of uni- and multimodal presentation of ultrasound feedback on the primary driving task and the secondary gesturing task in a simulated driving environment. The multimodal combinations of ultrasound included visual, auditory, and peripheral lights. We found that ultrasound feedback presented uni-modally and bi-modally resulted in significantly less EORT compared to visual feedback. Our results suggest that multimodal ultrasound feedback for mid-air interaction decreases EORT whilst not compromising driving performance or mental demand, and thus can increase safety while driving.

    The cyber-guitar system: a study in technologically enabled performance practice

    A thesis submitted to the Faculty of Humanities, University of the Witwatersrand, in fulfilment of the requirements for the degree of Doctor of Philosophy, March 2017. This thesis documents the development and realisation of an augmented instrument, expressed through the processes of artistic practice as research. The research project set out to extend my own creative practice on the guitar by technologically enabling and extending the instrument. This process was supported by a number of creative outcomes (performances, compositions and recordings), running parallel to the interrogation of theoretical areas emerging out of the research. In the introduction I present a timeline for the project and situate the work in the field of artistic practice as research, explaining the relationship between the traditional and creative practices. Following on from this, chapter one, Notation, Improvisation and the Cyber-Guitar System, discusses the impact of notation on my own education as a musician, unpacking how the nature of notation shaped improvisation both historically and within my own creative work. Analysis of fields such as graphic notation led to the creation of the composition Hymnus Caesus Obcessiones, a central work in this research. In chapter two, Noise, Music and the Creative Boundary, I consider the boundary and relationship between noise and music, beginning with the futurist composer Luigi Russolo. The construction of the augmented instrument was informed by this boundary and aimed to bring the lens onto it in my own practice, recognising what I have termed the ephemeral noise boundary. I argue that the boundary line between them yields the most fertile place of sonic and technological engagement. Chapter three focuses on the instrumental development and a new understanding of organology. It locates an understanding of the position of the musical instrument historically, with reference to the values emerging from the studies of notation and noise. It also considers the impacts of technology and gestural interfacing. Chapter four documents the physical process of designing and building the guitar. Included in the Appendix are three CDs and a live DVD of the various performances undertaken across the years of research.

    Skyler and Bliss

    Hong Kong remains the backdrop to the science fiction movies of my youth. The city reminds me of my former training in the financial sector. It is a city in which I could have succeeded in finance, but as far as art goes it is a young city, and I am a young artist. A frustration emerges; much like the mould, the artist also had to develop new skills by killing off his former desires and manipulating technology. My new series entitled HONG KONG surface project shows a new direction in my artistic research in which my technique becomes ever simpler, reducing the traces of pixelation until objects appear almost as they were found and photographed. Skyler and Bliss presents tectonic plates based on satellite images of the Arctic. Working in a hot and humid Hong Kong where mushrooms grow ferociously, a city artificially refrigerated by climate control, this series provides a conceptual image of an imaginary typographic map for survival. (Laurent Segretier)

    Multimodal feedback for mid-air gestures when driving

    Mid-air gestures in cars are being used by an increasing number of drivers on the road. Usability concerns mean good feedback is important, but a balance needs to be found between supporting interaction and reducing distraction in an already demanding environment. Visual feedback is most commonly used, but takes visual attention away from driving. This thesis investigates novel non-visual alternatives to support the driver during mid-air gesture interaction: Cutaneous Push, Peripheral Lights, and Ultrasound feedback. These modalities lack the expressive capabilities of high resolution screens, but are intended to allow drivers to focus on the driving task. A new form of haptic feedback — Cutaneous Push — was defined. Six solenoids were embedded along the rim of the steering wheel, creating three bumps under each palm. Studies 1, 2, and 3 investigated the efficacy of novel static and dynamic Cutaneous Push patterns, and their impact on driving performance. In simulated driving studies, the cutaneous patterns were identified correctly at rates of up to 81.3% for static patterns and 73.5% for dynamic patterns, with 100% recognition of directional cues. Cutaneous Push notifications did not affect driving behaviour or workload and showed very high user acceptance. Cutaneous Push patterns have the potential to make driving safer by providing non-visual and instantaneous messages, for example to indicate an approaching cyclist or obstacle. Studies 4 & 5 looked at novel uni- and bimodal feedback combinations of Visual, Auditory, Cutaneous Push, and Peripheral Lights for mid-air gestures and found that non-visual feedback modalities, especially when combined bimodally, offered just as much support for interaction without negatively affecting driving performance, visual attention and cognitive demand. These results provide compelling support for non-visual feedback from in-car systems, which supports input whilst letting drivers focus on driving. Studies 6 & 7 investigated the above bimodal combinations as well as uni- and bimodal Ultrasound feedback during the Lane Change Task to assess the impact of gesturing and feedback modality on car control during more challenging driving. The results of Study 7 suggest that Visual and Ultrasound feedback are not appropriate for in-car usage unless combined multimodally. If Ultrasound is used unimodally, it is more useful in a binary scenario. Findings from Studies 5, 6, and 7 suggest that multimodal feedback significantly reduces eyes-off-the-road time compared to Visual feedback without compromising driving performance or perceived user workload, and thus can potentially reduce crash risks. Novel design recommendations for providing feedback during mid-air gesture interaction in cars are provided, informed by the experiment findings.
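    As an illustration only, the following Python sketch shows one plausible way to encode static and dynamic Cutaneous Push patterns as frames of solenoid states. The solenoid indexing, frame timing and the actuate() stub are hypothetical and stand in for the thesis's actual steering-wheel hardware.

```python
# Illustrative sketch: representing static and dynamic Cutaneous Push patterns
# as frames of solenoid states. Six solenoids are assumed (indices 0-2 under the
# left palm, 3-5 under the right palm); the timing and the actuate() stub are
# hypothetical, not the thesis's hardware interface.
import time

NUM_SOLENOIDS = 6

def actuate(frame):
    """Placeholder hardware call: frame is a tuple of six 0/1 solenoid states."""
    print("solenoids:", frame)

def play_pattern(frames, frame_duration_s=0.1):
    """A static pattern is a single frame; a dynamic pattern is a sequence."""
    for frame in frames:
        assert len(frame) == NUM_SOLENOIDS
        actuate(frame)
        time.sleep(frame_duration_s)
    actuate((0,) * NUM_SOLENOIDS)  # release all bumps at the end

# Static pattern: the two outer bumps raised under each palm.
STATIC_OUTER = [(1, 0, 1, 1, 0, 1)]

# Dynamic directional cue: a bump sweeping across both palms, e.g. to hint at
# something approaching from one side.
SWEEP_LEFT_TO_RIGHT = [
    (1, 0, 0, 1, 0, 0),
    (0, 1, 0, 0, 1, 0),
    (0, 0, 1, 0, 0, 1),
]

play_pattern(STATIC_OUTER)
play_pattern(SWEEP_LEFT_TO_RIGHT)
```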

    Sound modeling issues in interactive sonification - From basic contact events to synthesis and manipulation tools

    The work presented in this thesis ranges over a variety of research topics, spanning from human-computer interaction to physical modeling. What unites such broad areas of interest is the idea of using physically based computer simulations of acoustic phenomena in order to provide human-computer interfaces with sound feedback that is consistent with the user's interaction. In this regard, recent years have seen the emergence of several new disciplines that go under the name of -- to cite a few -- auditory display, sonification and sonic interaction design. This thesis deals with the design and implementation of efficient sound algorithms for interactive sonification. To this end, the physical modeling of everyday sounds is taken into account, that is, sounds not belonging to the families of speech and musical sounds.
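    As a rough illustration of the physically based approach described above, the sketch below synthesises a simple contact (impact) sound as a bank of damped sinusoidal modes. The modal frequencies, decay times and gains are invented example values, not parameters from the thesis.

```python
# Minimal sketch of physically inspired contact-sound synthesis: an impact
# excites a bank of exponentially decaying modal resonators. All numeric values
# below are invented for illustration.
import numpy as np

def modal_impact(modes, duration=0.5, sample_rate=44100):
    """Sum of damped sinusoids; modes is a list of (freq_hz, decay_s, gain)."""
    t = np.arange(int(duration * sample_rate)) / sample_rate
    signal = np.zeros_like(t)
    for freq, decay, gain in modes:
        signal += gain * np.exp(-t / decay) * np.sin(2 * np.pi * freq * t)
    peak = np.max(np.abs(signal))
    return signal / peak if peak > 0 else signal

# Example: a small struck object with three modes.
sound = modal_impact([(523.0, 0.30, 1.0), (1187.0, 0.12, 0.5), (2519.0, 0.05, 0.25)])
print(len(sound), "samples generated")
```

    In an interactive sonification setting, such a model would be driven in real time by contact events (impact velocity, material parameters) rather than rendered as a fixed buffer, which is the design concern the thesis addresses.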

    Arguably augmented reality: relationships between the virtual and the real

    This thesis is about augmented reality (AR). AR is commonly considered a technology that integrates virtual images into a user’s view of the real world. Yet, this thesis is not about such technologies. We believe a technology-based notion of AR is incomplete. In this thesis, we challenge the technology-oriented view, provide new perspectives on AR and propose a different understanding. We argue that AR is characterized by the relationships between the virtual and the real and approach AR from a fundamental, experience-focused view. By doing so, we create an unusually broad and diverse image of what AR is, or arguably could be. We discuss the fundamental characteristics of AR and the many possible manifestations it can take and propose new, imaginative AR environments that have no counterpart in a purely physical world.