
    Violin Augmentation Techniques for Learning Assistance

    Learning the violin is a challenging task: the left hand must execute pitch tasks, relying on a strong aural feedback loop to correctly adjust pitch, while the right hand moves the bow precisely, with correct pressure, across the strings. Real-time technological assistance can give a student feedback and understanding that help with learning and with maintaining motivation. This thesis presents real-time, low-cost, low-latency violin augmentations that can assist violin learning as well as other real-time performance tasks. To capture bow performance, we demonstrate a new means of bow tracking that measures how the bow hair deflects when pressed against the string. Using near-field optical sensors placed along the bow, we estimate bow position and pressure through linear regression from training samples. For left-hand pitch tracking, we introduce low-cost means of tracking finger position and show how combining the sensed results with audio processing achieves high-accuracy, low-latency pitch tracking. We then verify our new tracking methods' effectiveness and usefulness by demonstrating low-latency note onset detection and control of real-time performance visuals. To help tackle the challenge of intonation, we used our pitch estimation to develop low-latency pitch correction. With expert performers, we verified that fully correcting pitch is not only disconcerting but breaks a violinist's learned pitch feedback loop, resulting in worse as-played performance. Partial pitch correction, though also linked to worse as-played performance, did not lead to a significantly negative experience, confirming its potential for temporarily reducing barriers to success.
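The abstract names linear regression from training samples as the map from near-field optical sensor readings to bow position and pressure. The following is a minimal sketch of that idea; the sensor count, synthetic calibration data, and function names are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def fit_bow_model(sensor_readings, targets):
    """Least-squares map from optical-sensor readings to (position, pressure).

    sensor_readings: (n_samples, n_sensors) array of near-field readings.
    targets: (n_samples, 2) array of known bow position and pressure.
    Returns an (n_sensors + 1, 2) weight matrix including a bias row.
    """
    # Append a constant column so the fit includes a bias term.
    X = np.hstack([sensor_readings, np.ones((sensor_readings.shape[0], 1))])
    W, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return W

def predict_bow_state(W, reading):
    """Estimate (position, pressure) for one frame of sensor readings."""
    x = np.append(reading, 1.0)
    return x @ W

# Synthetic calibration data: 4 sensors, targets linear in the readings.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 4))
true_W = np.array([[0.5, 0.1], [0.2, 0.4], [0.1, 0.3], [0.2, 0.2]])
y = X @ true_W  # noise-free for the sketch
W = fit_bow_model(X, y)
position, pressure = predict_bow_state(W, X[0])
```

In practice the training samples would come from a calibration pass with known bow positions and pressures, and the readings would be noisy, so the fit recovers the mapping only approximately.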
Subsequently, in a study with beginners, we verified that when the pitch feedback loop is underdeveloped, automatic pitch correction did not significantly hinder performance but offered an enjoyable, low-pitch-error experience, and that providing an automatic target guide pitch helped in correcting performed pitch error.
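The distinction between full and partial pitch correction can be sketched as an interpolation between the played and target frequencies. The thesis does not give its correction formula; the cent-domain blend and the `strength` parameter below are assumptions for illustration.

```python
import math

def correct_pitch(played_hz, target_hz, strength):
    """Shift a played frequency toward the target by `strength` in [0, 1].

    strength=0 leaves the note as played; strength=1 snaps fully to target
    (full correction). The blend is done in cents (log domain) so that it
    is musically uniform across the pitch range.
    """
    error_cents = 1200.0 * math.log2(target_hz / played_hz)
    return played_hz * 2.0 ** (strength * error_cents / 1200.0)

played = 446.0   # a slightly sharp A4
target = 440.0
half_corrected = correct_pitch(played, target, 0.5)   # partial correction
fully_corrected = correct_pitch(played, target, 1.0)  # snaps to 440 Hz
```

With partial correction the performer still hears some of their own error, which is consistent with the finding that it disrupts the learned feedback loop less than full correction.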

    Universal design of an automatic page-turner

    This thesis deals with the effectiveness of automatic page-turners as one form of assistive technology. It examines several of the existing commercially available products with a view to developing a universal system that could satisfy both the special-needs and musician sectors. It explores current trends in the collection of statistical data on people with a physical disability, which is intended to identify the present and future needs for such assistive technology devices. The project uses a user-centric approach to document the requirements of the end users of such a device before conceptualising a model with the potential to satisfy the expanded target market. It explains in detail the development process of the working model, which employs two anthropomorphic finger-like mechanisms, both of which incorporate force feedback. These finger-mimetic components are used to separate and turn the pages of the reading material. A functional prototype was built, and a report of the preliminary testing carried out, together with a fully documented illustration of the final working engineering model, is included. The test results show that the system has great potential for the successful development of a more universal automatic page-turner that could satisfy both identified markets.

    Presence studies as an evaluation method for user experiences in multimodal virtual environments


    Imagining & Sensing: Understanding and Extending the Vocalist-Voice Relationship Through Biosignal Feedback

    The voice is body and instrument. Third-person interpretation of the voice by listeners, vocal teachers, and digital agents is centred largely around audio feedback. For a vocalist, physical feedback from within the body provides an additional interaction. The vocalist's understanding of their multi-sensory experiences is through tacit knowledge of the body. This knowledge is difficult to articulate, yet awareness and control of the body are innate. In the ever-increasing emergence of technology which quantifies or interprets physiological processes, we must remain conscious also of embodiment and human perception of these processes. Focusing on the vocalist-voice relationship, this thesis expands knowledge of human interaction and how technology influences our perception of our bodies. To unite these different perspectives in the vocal context, I draw on mixed methods from cognitive science, psychology, music information retrieval, and interactive system design. Objective methods such as vocal audio analysis provide a third-person observation. Subjective practices such as micro-phenomenology capture the experiential, first-person perspectives of the vocalists themselves. This quantitative-qualitative blend provides details not only on novel interaction, but also an understanding of how technology influences existing understanding of the body. I worked with vocalists to understand how they use their voice through abstract representations, use mental imagery to adapt to altered auditory feedback, and teach fundamental practice to others. Vocalists use multi-modal imagery, for instance understanding physical sensations through auditory sensations. The understanding of the voice exists in a pre-linguistic representation which draws on embodied knowledge and lived experience from outside contexts. I developed a novel vocal interaction method which uses measurement of laryngeal muscular activations through surface electromyography.
Biofeedback was presented to vocalists through sonification. Acting as an indicator of vocal activity for both conscious and unconscious gestures, this feedback allowed vocalists to explore their movement through sound. This formed new perceptions but also questioned existing understanding of the body. The thesis also uncovers ways in which vocalists are in control of and controlled by their bodies, work with and against them, and feel as a single entity at times and as totally separate entities at others. I conclude this thesis by demonstrating a nuanced account of human interaction and perception of the body through vocal practice, as an example of how technological intervention enables exploration of, and influence over, embodied understanding. This further highlights the need for understanding of the human experience in embodied interaction, rather than relying solely on digital interpretation, when introducing technology into these relationships.
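The pipeline described above, surface-EMG measurement sonified as audible feedback, can be sketched in two stages: an activation envelope extracted from the raw signal, and a mapping from activation level to pitch. The RMS window size, pitch range, and exponential mapping below are illustrative assumptions, not the study's actual parameters.

```python
import math

def rms_envelope(samples, window):
    """Root-mean-square envelope over non-overlapping windows of the signal."""
    envelope = []
    for i in range(0, len(samples) - window + 1, window):
        chunk = samples[i:i + window]
        envelope.append(math.sqrt(sum(s * s for s in chunk) / window))
    return envelope

def envelope_to_pitch(level, low_hz=110.0, high_hz=880.0):
    """Map a normalised activation level (0..1) onto a frequency range.

    The mapping is exponential so equal activation steps sound like
    equal musical intervals.
    """
    level = min(max(level, 0.0), 1.0)
    return low_hz * (high_hz / low_hz) ** level

# Toy EMG trace: quiet at first, then stronger muscular activation.
emg = [0.0, 0.1, -0.1, 0.5, -0.5, 1.0, -1.0, 0.2]
env = rms_envelope(emg, 4)
pitches = [envelope_to_pitch(e) for e in env]
```

A real system would stream windows continuously and drive an oscillator, so the vocalist hears laryngeal activity rise and fall in pitch as they gesture.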

    Bringing the Physical to the Digital

    This dissertation describes an exploration of digital tabletop interaction styles, with the ultimate goal of informing the design of a new model for tabletop interaction. In the context of this thesis the term digital tabletop refers to an emerging class of devices that afford many novel ways of interaction with the digital, allowing users to directly touch information presented on large, horizontal displays. Being a relatively young field, many developments are in flux; hardware and software change at a fast pace and many interesting alternative approaches are available at the same time. In our research we are especially interested in systems that are capable of sensing multiple contacts (e.g., fingers) and richer information such as the outline of whole hands or other physical objects. New sensor hardware enables new ways to interact with the digital. When embarking on the research for this thesis, the question of which interaction styles would be appropriate for this new class of devices was open, with many equally promising answers. Many everyday activities rely on our hands' ability to skillfully control and manipulate physical objects. We seek to open up different possibilities to exploit our manual dexterity and provide users with richer interaction possibilities. This could be achieved through the use of physical objects as input mediators or through virtual interfaces that behave in a more realistic fashion. In order to gain a better understanding of the underlying design space we chose an approach organized into two phases. First, two different prototypes, each representing a specific interaction style (namely gesture-based interaction and tangible interaction), were implemented. The flexibility of use afforded by the interface and the level of physicality afforded by the interface elements are introduced as criteria for evaluation.
Each approach's suitability to support the highly dynamic and often unstructured interactions typical of digital tabletops is analyzed based on these criteria. In a second stage, the lessons from these initial explorations are applied to inform the design of a novel model for digital tabletop interaction. This model is based on the combination of rich multi-touch sensing and a three-dimensional environment enriched by a gaming physics simulation. The proposed approach enables users to interact with the virtual through richer quantities such as collision and friction, enabling a variety of fine-grained interactions using multiple fingers, whole hands, and physical objects. Our model makes digital tabletop interaction even more "natural". However, because the interaction (the sensed input and the displayed output) is still bound to the surface, there is a fundamental limitation in manipulating objects using the third dimension. To address this issue, we present a technique that allows users to conceptually pick objects off the surface and control their position in 3D. Our goal has been to define a technique that completes our model for on-surface interaction and allows for interactions that are as direct as possible. We also present two hardware prototypes capable of sensing the users' interactions beyond the table's surface. Finally, we present visual feedback mechanisms to give users the sense that they are actually lifting objects off the surface. This thesis contributes on various levels. We present several novel prototypes that we built and evaluated. We use these prototypes to systematically explore the design space of digital tabletop interaction. The flexibility of use afforded by the interaction style is introduced as a criterion alongside the physicality of the user interface elements. Each approach's suitability to support the highly dynamic and often unstructured interactions typical of digital tabletops is analyzed.
We present a new model for tabletop interaction that increases the fidelity of interaction possible in such settings. Finally, we extend this model to enable as-direct-as-possible interactions with 3D data from above the table's surface.
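The core of the model above is that touch contacts act on virtual objects through physics quantities such as friction rather than direct assignment of position. The toy one-dimensional update below illustrates that idea; the `grip` coefficient, frame loop, and numbers are illustrative assumptions, not the thesis's physics engine.

```python
def apply_contact(obj_vel, finger_vel, grip=0.8):
    """Drag an object's velocity toward a touching finger's velocity.

    A contact exerts a friction-like impulse proportional to the relative
    velocity, so objects tend to follow fingers but can also slide under
    them, unlike a rigid attach-to-finger mapping.
    """
    relative = finger_vel - obj_vel
    return obj_vel + grip * relative

# A finger moving right at 100 px/s touches a resting object for 10 frames.
vel = 0.0
for _ in range(10):
    vel = apply_contact(vel, 100.0)
```

After a few frames the object's velocity converges toward the finger's, while a lower `grip` would let the finger slip over the object, which is the kind of fine-grained, friction-mediated behaviour the model aims for.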

    More playful user interfaces: interfaces that invite social and physical interaction


    Development of an augmented reality guided computer assisted orthopaedic surgery system

    Previously held under moratorium from 1st December 2016 until 1st December 2021. This body of work documents the development of a proof-of-concept augmented reality guided computer assisted orthopaedic surgery system (ARgCAOS). After initial investigation, a visible-spectrum single-camera tool-mounted tracking system based upon fiducial planar markers was implemented. The use of visible-spectrum cameras, as opposed to the infra-red cameras typically used by surgical tracking systems, allowed the captured image to be streamed to a display in an intelligible fashion. The tracking information defined the location of physical objects relative to the camera, and therefore allowed virtual models to be overlaid onto the camera image. This produced a convincing augmented experience, whereby the virtual objects appeared to be within the physical world, moving with both the camera and markers as expected of physical objects. Analysis of the first-generation system identified both accuracy and graphical inadequacies, prompting the development of a second-generation system. This too was based upon a tool-mounted fiducial marker system, and improved performance to near-millimetre probing accuracy. A resection system was incorporated, and controlled resection utilising the tracking information was performed, producing sub-millimetre accuracies. Several complications resulted from the tool-mounted approach, so a third-generation system was developed. This final generation deployed a stereoscopic visible-spectrum camera system affixed to a head-mounted display worn by the user.
The system allowed the augmentation of the natural view of the user, providing convincing and immersive three-dimensional augmented guidance, with probing and resection accuracies of 0.55±0.04 mm and 0.34±0.04 mm, respectively.
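Overlaying virtual models from fiducial tracking, as described above, reduces to projecting model points through the marker-to-camera pose and the camera intrinsics. The sketch below shows that projection under a standard pinhole model; the intrinsic matrix, pose values, and marker size are illustrative assumptions, not the system's calibration.

```python
import numpy as np

def project_points(points_3d, R, t, K):
    """Project 3-D model points (in marker coordinates) into pixel space.

    R, t: marker-to-camera rotation (3x3) and translation (3,), as
          recovered from a planar fiducial marker.
    K:    3x3 camera intrinsic matrix (pinhole model, no distortion).
    """
    cam = points_3d @ R.T + t          # marker frame -> camera frame
    pix = cam @ K.T                    # apply intrinsics
    return pix[:, :2] / pix[:, 2:3]    # perspective divide

# Assumed intrinsics for a 640x480 camera with ~800 px focal length.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # marker facing the camera
t = np.array([0.0, 0.0, 0.5])          # half a metre in front
corners = np.array([[-0.02, -0.02, 0.0],   # a 4 cm square marker
                    [ 0.02, -0.02, 0.0],
                    [ 0.02,  0.02, 0.0],
                    [-0.02,  0.02, 0.0]])
uv = project_points(corners, R, t, K)
```

Rendering the virtual model at these projected pixel locations each frame is what makes it appear fixed to the physical marker as the camera and tools move.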