    Electro-Mechanical Data Fusion for Heart Health Monitoring

    Heart disease is a major public health problem and one of the leading causes of death worldwide. Cardiac monitoring is therefore of great importance for the early detection and prevention of adverse conditions. Recently, there has been extensive research interest in long-term, continuous, and non-invasive cardiac monitoring using wearable technology. Here we introduce a wearable device for monitoring heart health. The prototype consists of three sensors that monitor electrocardiogram (ECG), phonocardiogram (PCG), and seismocardiogram (SCG) signals, integrated with a microcontroller module with Bluetooth wireless connectivity. We also created a custom printed circuit board (PCB) to integrate all the sensors into a compact design, and 3D printed flexible housing for the electronic components using thermoplastic polyurethane (TPU). In addition, we developed peak detection algorithms and filtering programs to analyze the recorded cardiac signals. Our preliminary results show that the device can record all three signals in real time. Initial signal-interpretation results come from a Long Short-Term Memory (LSTM) network, a recurrent neural network (RNN) based machine learning algorithm used to monitor and identify key features in the ECG data. The next phase of our research will include cross-examination of all three sensor signals, development of machine learning algorithms for the PCG and SCG signals, and continuous improvement of the wearable device.
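
    The abstract does not specify the peak detection and filtering algorithms used; a minimal sketch of one standard approach to ECG R-peak detection, assuming SciPy and a hypothetical 250 Hz sampling rate, might look like this:

```python
# Illustrative sketch only: band-pass filtering plus peak picking, a common
# baseline for ECG R-peak detection. The sampling rate and thresholds are
# assumptions, not values from the thesis.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 250  # assumed ECG sampling rate in Hz

def detect_r_peaks(ecg, fs=FS):
    # Band-pass 0.5-40 Hz to suppress baseline wander and high-frequency noise
    b, a = butter(2, [0.5 / (fs / 2), 40.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    # R-peaks: prominent maxima at least 0.3 s apart (i.e. below 200 bpm)
    peaks, _ = find_peaks(filtered,
                          distance=int(0.3 * fs),
                          prominence=0.5 * np.std(filtered))
    return filtered, peaks

# Heart rate then follows from the peak-to-peak intervals:
# rr = np.diff(peaks) / FS; bpm = 60.0 / rr.mean()
```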

    The integration of vision and touch for locating objects

    The ability of the sensory system to create a stable representation of the world from an ever-changing stream of multi-modal information is still not well understood. The aim of this thesis was to investigate the underlying rules the sensory system uses to achieve this in the context of locating objects using vision and touch (haptics). We tested the well-established “optimal” combination model (Maximum Likelihood Estimation, MLE) against four other plausible combination strategies for locating objects in three-dimensional space. We used a novel methodology that combined immersive Virtual Reality with spatially coaligned haptic robotics and real-world objects. Participants were asked to judge the depth of a target sphere relative to a plane defined by three reference spheres in a two-alternative forced-choice discrimination task. A robotic arm was used to vary the depth of the target relative to the reference plane. Spatially coincident virtual renderings of the spheres were presented on a Head-Mounted Display (HMD), and haptic feedback was provided when participants reached out and touched real-world objects aligned with the virtual ones. The variability of the single-modality estimates (vision alone, haptics alone) was used to calculate predictions for performance in the combined-cue condition under five cue combination models. We find that none of the models predicts the data well, nor is any one model substantially better than the others. Thresholds for the combined-cue condition generally fell between the values of the single-cue thresholds rather than following the minimum-variance (MLE) prediction. Similarly, biases in the combined-cue case did not fall in the range between those for the individual cues, as would be predicted by most cue combination models. The failure of the MLE model in this task has important implications for cue combination theory more widely.
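
    For reference, the standard MLE prediction tested here combines the single-cue estimates weighted by their relative reliabilities and predicts a combined-cue variance below either single-cue variance:

```latex
\hat{S}_{VH} = w_V \hat{S}_V + w_H \hat{S}_H, \qquad
w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_H^2}, \quad w_H = 1 - w_V, \qquad
\sigma_{VH}^2 = \frac{\sigma_V^2 \, \sigma_H^2}{\sigma_V^2 + \sigma_H^2} \leq \min(\sigma_V^2, \sigma_H^2)
```

    It is this predicted reduction in combined-cue variance (and hence threshold) that the data failed to show.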

    XR, music and neurodiversity: design and application of new mixed reality technologies that facilitate musical intervention for children with autism spectrum conditions

    This thesis, accompanied by the practice outputs, investigates sensory integration, social interaction and creativity through a newly developed VR musical interface designed exclusively for children with a high-functioning autism spectrum condition (ASC). The results aim to contribute to the limited body of literature and research surrounding Virtual Reality (VR) musical interventions and Immersive Virtual Environments (IVEs) designed to support individuals with neurodevelopmental conditions. The author has developed bespoke hardware, software and a new methodology to conduct field investigations. These outputs include a Virtual Immersive Musical Reality Intervention (ViMRI) protocol, a Supplemental Personalised immersive Musical Experience (SPiME) programme, the ‘Assisted Real-time Three-dimensional Immersive Musical Intervention System’ (ARTIMIS) and a bespoke, fully configurable ‘Creative immersive interactive Musical Software’ application (CiiMS). The outputs are each implemented within a series of institutional investigations of 18 autistic child participants. Four groups are evaluated using newly developed virtual assessment and scoring mechanisms devised exclusively from long-established rating scales. Key quantitative indicators from the datasets demonstrate consistent findings and significant improvements for individual preferences (likes), fear-reduction efficacy, and social interaction. Six individual case studies present positive qualitative results demonstrating improved decision-making and sensorimotor processing. The preliminary research trials further indicate that using this virtual-reality music technology system and the newly developed protocols produces notable improvements for participants with an ASC. More significantly, there is evidence that the supplemental technology facilitates a reduction in psychological anxiety and improvements in dexterity. The virtual music composition and improvisation system presented here requires further extensive testing in different spheres for proof of concept.

    Neuronal representation of sound source location in the auditory cortex during active navigation

    The ability to localize sounds is crucial for the survival of both predators and prey: the former rely on their senses to lead them to the latter, which in turn benefit from locating a nearby predator in order to escape. In such cases, the sound localization process typically takes place while the animals are in motion. Since the cues that the brain uses to localize sounds are head-centered (egocentric), they can change very rapidly when an animal moves and rotates. This constitutes an even bigger challenge than sound localization in a static environment. Up to now, however, both aspects have mostly been studied separately in neuroscience, limiting our understanding of active sound localization during navigation. This thesis reports on the development of a novel behavioral paradigm – the Sensory Island Task (SIT) – to promote sound localization during unrestricted motion. By attributing a different behavioral meaning (associated with different outcomes) to two spatially separated sound sources, Mongolian gerbils (Meriones unguiculatus) were trained to forage for an area (target island) in the arena that triggered a change in the active sound source to the target loudspeaker, and to report its detection by remaining within the island for 6 s. Importantly, the two loudspeakers played identical sounds, and the location of the target island in the arena was changed randomly on every trial. Once the probability of successfully identifying the target island exceeded chance level, a tetrode bundle was implanted in the primary auditory cortex of the gerbils to record neuronal responses during task performance. Canonically, the auditory cortex (AC) is described as possessing neurons with broad hemispheric tuning; nonetheless, context and behavioral state have been shown to modulate neuronal responses in the AC. The experiments described in this thesis demonstrate the existence of a large variety of additional, previously unreported (or underreported) spatial tuning types. In particular, neurons that were sensitive to the midline and, most intriguingly, neurons that were sensitive to the task identity of the active loudspeaker were observed. The latter comprise neurons that were spatially tuned to only one of the two loudspeakers, neurons that exhibited a large difference in the preferred egocentric sound-source location for the two loudspeakers, and spatially untuned neurons whose firing rate changed depending on the active loudspeaker. Additionally, temporal complexity was observed in the neuronal responses, with neurons changing their preferred egocentric sound-source location over the course of their response to a sound. Corroborating earlier studies, the task-specific choice of the animal was found to be reflected in the neuronal responses: the neuronal firing rate decreased before the animal successfully finished a trial, in comparison to situations in which the gerbil incorrectly left the target island before trial completion. Furthermore, the differential behavioral meaning of the two loudspeakers was represented in the neuronal tuning acuity, with neurons more sharply tuned to sounds from the target than from the background loudspeaker. Lastly, by implementing an artificial neural network, all of the observed phenomena could be studied in a common framework, enabling a more comprehensive understanding of the computational relevance of the diversity of observed neuronal responses. Strikingly, the algorithm was capable of predicting not only the egocentric sound-source location but also which sound source was active, both with high accuracy. Taken together, the results presented in this thesis suggest an interlaced coding of egocentric and allocentric information in neurons of the primary auditory cortex. These novel findings contribute towards a better understanding of how sound sources remain perceptually stable during self-motion, an effect that could be advantageous for selective hearing.
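
    The architecture of the network is not described in this abstract; a minimal sketch of the decoding idea, with placeholder data and a simple scikit-learn classifier standing in for the thesis's actual model, could look like this:

```python
# Illustrative sketch only: decode both egocentric location and active-speaker
# identity from population firing rates. The data below are random placeholders,
# and logistic regression stands in for the thesis's (unspecified) network.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 400, 32
X = rng.poisson(5.0, size=(n_trials, n_neurons))    # spike counts per trial (placeholder)
azimuth_bin = rng.integers(0, 8, size=n_trials)     # egocentric sound-source location, 8 bins
active_speaker = rng.integers(0, 2, size=n_trials)  # target vs background loudspeaker

# Two decoders trained on the same population response
loc_decoder = LogisticRegression(max_iter=1000)
spk_decoder = LogisticRegression(max_iter=1000)
print("location accuracy:", cross_val_score(loc_decoder, X, azimuth_bin, cv=5).mean())
print("speaker accuracy:", cross_val_score(spk_decoder, X, active_speaker, cv=5).mean())
```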

    Aircraft Attitude Estimation Using Panoramic Images

    This thesis investigates the problem of reliably estimating attitude from panoramic imagery in cluttered environments. Accurate attitude is an essential input to the stabilisation systems of autonomous aerial vehicles. A new camera system is designed that combines a CCD camera, ultraviolet (UV) filters and a panoramic mirror-lens. Drawing on biological inspiration from the ocelli possessed by certain insects, UV-filtered images are used to enhance the contrast between sky and ground and to mitigate the effect of the sun. A novel method is developed for real-time horizon-based attitude estimation from panoramic images, capable of estimating an aircraft's pitch and roll at low altitude in the presence of the sun, clouds and occluding features such as trees and buildings. A new method is also proposed for panoramic sky/ground thresholding, consisting of a horizon-tracking and a sun-tracking system that works effectively even when the horizon line is difficult to detect by normal thresholding methods due to flares and other effects caused by the presence of the sun in the image. An algorithm is developed for estimating the attitude from a three-dimensional mapping of the horizon projected onto a 3D plane. The use of optic flow to determine pitch and roll rates is investigated using the panoramic image and the image interpolation algorithm (I2A). Two sensor fusion methods, an Extended Kalman Filter (EKF) and Artificial Neural Networks (ANNs), are used to fuse unfiltered measurements from inertial sensors and the vision system: the EKF estimates the gyroscope biases as well as the attitude, while the ANN fuses the optic flow and horizon-based attitude estimates to provide smooth attitude estimation. The results obtained from the different parts of the research are tested and validated through simulations and real flight tests.
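
    As an illustrative sketch of the plane-fitting idea (not the thesis's actual implementation): horizon pixels, once unprojected to unit view rays, can be fitted with a plane through the origin whose normal approximates the local "up" direction, from which roll and pitch follow. The axis conventions and unprojection step below are assumptions:

```python
# Illustrative sketch only. Assumes `rays` is an (N, 3) array of unit view
# vectors in the camera frame (x right, y forward, z up) for detected horizon
# pixels, e.g. after UV sky/ground thresholding and mirror-model unprojection.
import numpy as np

def attitude_from_horizon(rays):
    """Fit a plane through the origin to the horizon rays; its normal
    approximates the 'up' direction in the camera frame."""
    # Least-squares plane normal = right singular vector with smallest
    # singular value, i.e. the direction minimizing ||rays @ n||
    _, _, vt = np.linalg.svd(rays)
    n = vt[-1]
    if n[2] < 0:                   # orient the normal toward the sky
        n = -n
    roll = np.arctan2(n[0], n[2])  # tilt of 'up' in the lateral plane
    pitch = np.arctan2(n[1], n[2]) # tilt of 'up' in the forward plane
    return np.degrees(roll), np.degrees(pitch)
```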

    Sonic Interactions in Virtual Environments

    This book tackles the design of 3D spatial interactions from an audio-centered, audio-first perspective, providing the fundamental notions related to the creation and evaluation of immersive sonic experiences. The key elements that enhance the sensation of place in a virtual environment (VE) are: immersive audio, the computational aspects of the acoustical-space properties of Virtual Reality (VR) technologies; sonic interaction, the human-computer interplay through auditory feedback in VEs; and VR systems, which naturally support multimodal integration, impacting different application domains. Sonic Interactions in Virtual Environments will feature state-of-the-art research on real-time auralization, sonic interaction design in VR, quality of experience in multimodal scenarios, and applications. Contributors and editors include interdisciplinary experts from the fields of computer science, engineering, acoustics, psychology, design, humanities, and beyond. Their mission is to shape an emerging field of study at the intersection of sonic interaction design and immersive media, embracing an archipelago of existing research spread across different audio communities, and to raise awareness among VR communities, researchers, and practitioners of the importance of sonic elements when designing immersive environments.
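
    As a toy illustration of auditory feedback in a VE (not drawn from the book itself), a mono source can be crudely spatialized with distance attenuation plus interaural time and level differences; real auralization would use measured HRTFs and room modelling:

```python
# Toy sketch only: inverse-distance gain, a Woodworth-style interaural time
# difference, and constant-power panning as a crude level difference.
import numpy as np

FS = 48000
SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, an average head radius

def spatialize(mono, azimuth_rad, distance_m, fs=FS):
    gain = 1.0 / max(distance_m, 0.1)        # clamp so near sources stay finite
    # Woodworth approximation of the interaural time difference
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + np.sin(azimuth_rad))
    delay = int(round(abs(itd) * fs))
    # Constant-power panning as a crude interaural level difference
    pan = 0.5 * (1.0 + np.sin(azimuth_rad))  # 0 = full left, 1 = full right
    left = gain * np.cos(pan * np.pi / 2) * mono
    right = gain * np.sin(pan * np.pi / 2) * mono
    # Delay the ear farther from the source
    if azimuth_rad > 0:
        left = np.concatenate([np.zeros(delay), left])[: len(mono)]
    else:
        right = np.concatenate([np.zeros(delay), right])[: len(mono)]
    return np.stack([left, right], axis=1)   # stereo output, shape (N, 2)
```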

    Ubiquitous Integration and Temporal Synchronisation (UbiITS) framework: a solution for building complex multimodal data capture and interactive systems

    Get PDF
    Contemporary Data Capture and Interactive Systems (DCIS) involve various technical complexities, such as multimodal data types, diverse hardware and software components, time synchronisation issues and distributed deployment configurations. Building these systems is inherently difficult and requires addressing these complexities before the intended and purposeful functionalities can be attained. The technical issues are often common and similar across diverse applications. This thesis presents the Ubiquitous Integration and Temporal Synchronisation (UbiITS) framework, a generic solution to address the technical complexities of building DCISs. The proposed solution is an abstract software framework that can be extended and customised to any application's requirements. UbiITS includes all the fundamental software components, techniques, system-level layer abstractions and a reference architecture, as a collection, to enable the systematic construction of complex DCISs. This work details four case studies that showcase the versatility and extensibility of the UbiITS framework's functionalities and demonstrate how it was employed to successfully solve a range of technical requirements. In each case, UbiITS operated as the core element of the application. Additionally, these case studies are novel systems in their own right in each of their domains. Long-standing technical issues, such as flexibly integrating and interoperating multimodal tools and precise time synchronisation, were resolved in each application by employing UbiITS. The framework enabled a functional system infrastructure to be established in these cases, essentially opening up new lines of research in each discipline; these research approaches would not have been possible without the infrastructure provided by the framework. The thesis further presents a sample implementation of the framework in device firmware, exhibiting its capability to be implemented directly on a hardware platform. Summary metrics are also produced to establish the complexity, reusability, extendibility, implementation and maintainability characteristics of the framework. Engineering and Physical Sciences Research Council (EPSRC) grants: EP/F02553X/1, 114433 and 11394.
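
    The framework's synchronisation mechanism is not detailed in this abstract; one common baseline for the time synchronisation problem it addresses is to map each device clock onto a master timeline with a linear offset-plus-drift model fitted over shared sync events. The values below are hypothetical:

```python
# Illustrative sketch only: fit master ~ a * local + b over shared sync pulses,
# modelling a constant clock offset plus linear drift, then re-timestamp samples.
import numpy as np

def fit_clock_map(local_ts, master_ts):
    """Least-squares fit of the master timeline as a linear function
    of a device's local clock."""
    a, b = np.polyfit(local_ts, master_ts, 1)
    return a, b

# Sync pulses observed on both clocks (seconds); hypothetical values
local = np.array([0.000, 10.002, 20.001, 30.005])
master = np.array([5.000, 15.001, 25.003, 35.004])
a, b = fit_clock_map(local, master)

# Re-timestamp a sensor sample recorded at local time 12.34 s
print("master time:", a * 12.34 + b)
```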