
    A Soft touch: wearable dielectric elastomer actuated multi-finger soft tactile displays

    PhD thesis. The haptic modality in human-computer interfaces is significantly underutilised compared with vision and sound. A potential reason for this is the difficulty of turning computer-generated signals into realistic sensations of touch. Moreover, wearable solutions that can be mounted onto multiple fingertips whilst still allowing free, dexterous movement of the user's hand bring an even higher level of complexity. To be wearable, such devices should not only be compact, lightweight and energy efficient, but also able to render compelling tactile sensations. Current solutions are unable to meet these criteria, typically due to the actuation mechanisms employed. Aimed at addressing these needs, this work presents research into non-vibratory multi-finger wearable tactile displays, through an improved configuration of a dielectric elastomer actuator. The described displays render forces through a soft bubble-like interface worn on the fingertip. Owing to the improved design, forces of up to 1 N can be generated in a form factor of 20 x 12 x 23 mm at a weight of only 6 g, a significant increase in force output and wearability over existing tactile rendering systems. Furthermore, it is shown how these compact wearable devices can be used in conjunction with low-cost commercial optical hand-tracking sensors to enable simple yet accurate tactile interactions within virtual environments using affordable instrumentation. The whole system makes it possible for users to interact with virtually generated soft-body objects with programmable tactile properties. Through a 15-participant study, the system has been validated for three distinct types of touch interaction, including palpation and pinching of virtual deformable objects. It is believed that this approach could have a significant impact on virtual and augmented reality interaction for medical simulation, professional training and improved tactile feedback in telerobotic control systems. Engineering and Physical Sciences Research Council (EPSRC) Doctoral Training Centre EP/G03723X/
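A minimal sketch of the interaction loop this abstract implies, assuming a simple linear-spring contact model (the thesis' actual rendering algorithm is not given here): a tracked fingertip is tested against a virtual soft body, and the penetration depth becomes a commanded force clamped to the display's reported 1 N peak output. All names and the spherical body are illustrative.

```python
import numpy as np

MAX_FORCE_N = 1.0  # peak force the wearable display is reported to render

def contact_force(fingertip, centre, radius, stiffness):
    """Commanded force for a spherical soft body under a linear spring model."""
    offset = np.asarray(fingertip, dtype=float) - np.asarray(centre, dtype=float)
    penetration = radius - np.linalg.norm(offset)   # metres; > 0 when inside the body
    if penetration <= 0.0:
        return 0.0                                  # fingertip not in contact
    # Programmable tactile property: stiffness in N/m sets how "soft" the body feels.
    return min(stiffness * penetration, MAX_FORCE_N)

# e.g. a 5 cm radius body with 200 N/m stiffness, fingertip 1 cm inside it
force = contact_force([0.0, 0.0, 0.04], [0.0, 0.0, 0.0], 0.05, 200.0)  # -> 1.0 N (clamped)
```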

    Human skill capturing and modelling using wearable devices

    Industrial robots are delivering more and more manipulation services in manufacturing. However, when the task is complex, it is difficult to programme a robot to fulfil all the requirements, because even a relatively simple task such as a peg-in-hole insertion contains many uncertainties, e.g. clearance, initial grasping position and insertion path. Humans, on the other hand, can deal with these variations using their vision and haptic feedback. Although humans adapt to uncertainties easily, the skill-based performances that draw on their tacit knowledge usually cannot be easily articulated. Even though an automation solution need not fully imitate human motion, since some of it is unnecessary, it would be useful if the skill-based performance of a human could first be interpreted and modelled, so that it can then be transferred to the robot. This thesis aims to reduce robot programming effort significantly by developing a methodology to capture, model and transfer manual manufacturing skills from a human demonstrator to the robot. Recently, Learning from Demonstration (LfD) has been gaining interest as a framework for transferring skills from a human teacher to a robot, using probabilistic encoding approaches to model observations and state-transition uncertainties. In close- or actual-contact manipulation tasks, it is difficult to reliably record state-action examples without interfering with the human's senses and activities. Therefore, wearable sensors are investigated as a promising means of recording state-action examples without restricting the human experts during the skilled execution of their tasks. Firstly, to track human motions accurately and reliably in a defined three-dimensional workspace, a hybrid system of Vicon and IMUs is proposed to compensate for the known limitations of each individual system. The data-fusion method was able to overcome the occlusion and frame-flipping problems of the two-camera Vicon setup and the drift problem associated with the IMUs. The results indicated that the occlusion and frame-flipping problems associated with Vicon can be mitigated by using the IMU measurements. Furthermore, the proposed method improves the Mean Square Error (MSE) tracking accuracy by 0.8° to 6.4° compared with the IMU-only method. Secondly, to record haptic feedback from a teacher without physically obstructing their interactions with the workpiece, wearable surface electromyography (sEMG) armbands were used as an indirect means of indicating contact feedback during manual manipulations. A muscle-force model using a Time Delayed Neural Network (TDNN) was built to map the sEMG signals to the known contact force. The results indicated that the model was capable of estimating the force from the sEMG armbands in the applications of interest, namely peg-in-hole and beater-winding tasks, with MSEs of 2.75 N and 0.18 N respectively. Finally, given the force estimates and the motion trajectories, a Hidden Markov Model (HMM) based approach was utilised as a state-recognition method to encode and generalise the spatial and temporal information of the skilled executions, allowing a more representative control policy to be derived. A modified Gaussian Mixture Regression (GMR) method was then applied to enable motion reproduction using the learned state-action policy.
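As a rough illustration of the GMR step just described (a minimal sketch of standard GMR, not the thesis' modified method): fit a GMM over (time, position) pairs pooled from demonstrations, then condition on time to regress an expected trajectory. The one-dimensional position, component count and toy data are assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def gmr(gmm, t_query):
    """Condition a fitted 2-D GMM p(t, x) on time t to get E[x | t]."""
    xs = []
    for t in np.atleast_1d(t_query):
        # Responsibility of each Gaussian component for this time step.
        h = np.array([pi * norm.pdf(t, mu[0], np.sqrt(cov[0, 0]))
                      for pi, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_)])
        h /= h.sum()
        # Conditional mean of x given t, mixed over components.
        xs.append(sum(hk * (mu[1] + cov[1, 0] / cov[0, 0] * (t - mu[0]))
                      for hk, mu, cov in zip(h, gmm.means_, gmm.covariances_)))
    return np.array(xs)

# Toy demonstrations: five noisy repetitions of a reaching motion x(t).
t = np.tile(np.linspace(0.0, 1.0, 100), 5)
x = np.sin(np.pi * t) + 0.05 * np.random.randn(t.size)
gmm = GaussianMixture(n_components=5, covariance_type="full").fit(np.column_stack([t, x]))
trajectory = gmr(gmm, np.linspace(0.0, 1.0, 50))   # reproduced reference motion
```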
    To simplify the validation procedure, additional demonstrations from the teacher, rather than the robot, were used to verify the reproduction performance of the policy, under the assumption that the human teacher and robot learner are physically identical systems. The results confirmed the generalisation capability of the HMM across a number of demonstrations from different subjects, and the motions reproduced by GMR were acceptable in these additional tests. The proposed methodology provides a framework for producing a state-action model from skilled demonstrations that can be translated into robot kinematics and joint states for the robot to execute. The implication for industry is reduced effort and time in programming robots for applications where skilled human performance is required to cope robustly with the various uncertainties that arise during task execution.
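The Vicon/IMU fusion described above can be pictured, in its simplest one-dimensional form, as a complementary filter: dead-reckon the angle from the IMU gyroscope, and whenever a valid (non-occluded, non-flipped) Vicon sample arrives, pull the estimate toward it to cancel drift. The filter form and gain value are illustrative assumptions, not the thesis' fusion algorithm.

```python
def fuse_angle(theta_prev, gyro_rate, dt, vicon_angle=None, gain=0.05):
    """One fusion step; pass vicon_angle=None when the marker is occluded."""
    theta = theta_prev + gyro_rate * dt   # IMU dead-reckoning: smooth short-term, but drifts
    if vicon_angle is not None:           # Vicon: drift-free but intermittently unavailable
        theta += gain * (vicon_angle - theta)
    return theta
```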

    Creativity, Exploration and Control in Musical Parameter Spaces.

    PhD thesis. This thesis investigates the use of multidimensional control of synthesis parameters in electronic music, and the impact of controller-mapping techniques on creativity. The theoretical contribution of this work, the EARS model, provides a rigorous application of creative-cognition research to this topic. EARS provides a cognitive model of creative interaction with technology, retrodicting numerous prior findings in musical-interaction research. The model proposes four interaction modes and characterises them in terms of parameter-space traversal mechanisms. Recommendations are given for properties of controller-synthesiser mappings that support each of the modes. This thesis also proposes a generalisation of Fitts' law that enables throughput-based evaluation of multidimensional control devices. Three experiments were run in which musicians performed sound-design tasks with various interfaces, quantitatively evaluating mappings suited to three of the four EARS modes. Experiment one investigated the notion of a 'divergent interface'. A mapping geometry that caters to early-stage exploratory creativity was developed and evaluated via a publicly available tablet application. Dimension reduction of a 10D synthesiser parameter space to a 2D surface was achieved using Hilbert space-filling curves. Interaction data indicated that this divergent mapping was used for early-stage creativity, and that the traditional sliders were used for late-stage fine-tuning. Experiment two established a 'minimal experimental paradigm' for sound-design interface evaluation, showing that multidimensional controllers were faster than 1D sliders for locating a target sound in two and three timbre dimensions. The final study tested a novel embodied interaction technique: ViBEAMP. This system utilised a hand tracker and a 3D visualisation to train users to control six synthesis parameters simultaneously. Throughput was recorded as triple that of six sliders, and working-memory load was significantly reduced. This experiment revealed that musical, time-targeted interactions obey a different speed-accuracy trade-off law from accuracy-targeted interactions. Electronic Engineering and Computer Science at Queen Mary
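One plausible reading of the Hilbert-curve dimension reduction mentioned above (a sketch only; the abstract does not give the exact construction) is to compose two space-filling curves: a 2-D curve turns a touch position into a 1-D index, and a 10-D curve turns that index into a point in the synthesiser parameter space. This sketch assumes the third-party `hilbertcurve` package (v2 API); the curve orders and normalisation are illustrative choices.

```python
# pip install hilbertcurve
from hilbertcurve.hilbertcurve import HilbertCurve

P2, P10 = 8, 2                    # curve iterations: 8 bits/axis in 2-D, 2 bits/axis in 10-D
curve2d = HilbertCurve(P2, 2)     # 2-D control surface -> 1-D curve index
curve10d = HilbertCurve(P10, 10)  # 1-D curve index -> 10-D parameter point

def surface_to_params(x, y):
    """Map an integer touch position on the 2-D surface to ten synth parameters in [0, 1]."""
    d = curve2d.distance_from_point([x, y])              # x, y in [0, 2**P2 - 1]
    # Rescale the 2-D curve's index range onto the 10-D curve's index range.
    d10 = d * (curve10d.max_h + 1) // (curve2d.max_h + 1)
    point = curve10d.point_from_distance(d10)
    return [v / (2**P10 - 1) for v in point]             # normalise each axis to [0, 1]

params = surface_to_params(130, 57)   # e.g. one touch position on a 256 x 256 grid
```

Because nearby points on a Hilbert curve stay nearby in space, small movements on the surface tend to produce small timbre changes, which is the property that makes such a mapping usable for exploration.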

    Sonic Interactions in Virtual Environments

    This open access book tackles the design of 3D spatial interactions from an audio-centered, audio-first perspective, providing the fundamental notions related to the creation and evaluation of immersive sonic experiences. The key elements that enhance the sensation of place in a virtual environment (VE) are:
    - Immersive audio: the computational aspects of the acoustical-space properties of Virtual Reality (VR) technologies
    - Sonic interaction: the human-computer interplay through auditory feedback in VEs
    - VR systems: natural support for multimodal integration, impacting different application domains
    Sonic Interactions in Virtual Environments will feature state-of-the-art research on real-time auralization, sonic interaction design in VR, quality of experience in multimodal scenarios, and applications. Contributors and editors include interdisciplinary experts from the fields of computer science, engineering, acoustics, psychology, design, humanities, and beyond. Their mission is to shape an emerging field of study at the intersection of sonic interaction design and immersive media, embracing an archipelago of existing research spread across different audio communities, and to raise awareness among VR communities, researchers, and practitioners of the importance of sonic elements when designing immersive environments.