Using a novel visualization tool for rapid survey of long-duration acoustic recordings for ecological studies of frog chorusing
Continuous recording of environmental sounds could allow long-term monitoring of vocal wildlife, and the extension of ecological studies to large temporal and spatial scales. However, such opportunities are currently limited by constraints in the analysis of large acoustic data sets. Computational methods and automation of call detection require specialist expertise and are time consuming to develop, so most biological researchers continue to rely on manual listening and inspection of spectrograms to analyze their sound recordings. False-color spectrograms were recently developed as a tool for visualizing long-duration sound recordings, intended to aid ecologists in navigating their audio data and detecting species of interest. This paper explores the efficacy of using this visualization method to identify multiple frog species in a large set of continuous sound recordings and to gather data on the chorusing activity of the frog community. We found that, after a phase of observer training, frog choruses could be visually identified to species with high accuracy. We present a method to analyze such data, including a simple R routine to interactively select short segments on the false-color spectrogram for rapid manual checking of visually identified sounds. We propose these methods could fruitfully be applied to large acoustic data sets to analyze calling patterns in other chorusing species.
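The paper's segment-selection routine is written in R; purely as an illustration of the interaction it describes, the Python sketch below lets a user click on a rendered false-colour spectrogram image and recovers the corresponding offset in the audio. The image path and the one-column-per-minute rendering resolution are assumptions, not the paper's actual tool.

```python
# Illustrative sketch (not the paper's R routine): click on a long-duration
# false-colour spectrogram image to get the matching offset in the audio.
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

SECONDS_PER_COLUMN = 60  # assumed rendering: one spectrogram column per minute

img = mpimg.imread("false_colour_spectrogram.png")  # hypothetical image path
fig, ax = plt.subplots()
ax.imshow(img, aspect="auto")
ax.set_title("Click a chorus to locate it in the recording")

(x, _), = plt.ginput(1)                 # block until one mouse click
offset_s = x * SECONDS_PER_COLUMN
print(f"Inspect audio at {offset_s / 3600:.2f} h into the recording")
plt.close(fig)
```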
Analysing Interlinked Frequency Dynamics of the Urban Acoustic Environment
As sustainable metropolitan regions require more densely built-up areas, a comprehensive understanding of the urban acoustic environment (AE) is needed. However, comprehensive datasets of the urban AE and well-established research methods for the AE are scarce. Datasets of audio recordings tend to be large and require substantial storage space as well as computationally expensive analyses. Thus, knowledge about the long-term urban AE is limited. In recent years, however, these limitations have been steadily overcome, allowing a more comprehensive analysis of the urban AE. In this respect, the objective of this work is to contribute to a better understanding of the time-frequency domain of the urban AE, analysing automated audio recordings from nine urban settings over ten months. We compute median power spectra as well as normalised spectrograms for all settings. Additionally, we demonstrate the use of frequency correlation matrices (FCMs) as a novel approach to accessing large audio datasets. Our results show site-dependent patterns in frequency dynamics. Normalised spectrograms reveal that frequency bins with low power hold relevant information and that the AE changes considerably over a year. We demonstrate that this information can be captured by using FCMs, which also unravel communities of interlinked frequency dynamics for all settings.
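As a rough illustration of the FCM idea under stated assumptions (a mono WAV recording, dB power, Pearson correlation; the file name and window length are placeholders, not the authors' pipeline), one could correlate the temporal power trajectories of all spectrogram frequency bins:

```python
# Minimal FCM sketch: correlate the temporal power trajectories of all
# spectrogram frequency bins. File name, window length, and dB scaling
# are illustrative assumptions; assumes a mono WAV recording.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, audio = wavfile.read("urban_recording.wav")     # hypothetical file
f, t, Sxx = spectrogram(audio, fs=fs, nperseg=1024)

Sxx_db = 10 * np.log10(Sxx + 1e-12)                 # power per bin, in dB
median_spectrum = np.median(Sxx_db, axis=1)         # median power spectrum

# Frequency correlation matrix: Pearson correlation between every pair of
# frequency-bin time series; rows of Sxx_db are the bin trajectories.
fcm = np.corrcoef(Sxx_db)                           # shape (n_bins, n_bins)
```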
A Classification Scheme Based on Directed Acyclic Graphs for Acoustic Farm Monitoring
Intelligent farming, as part of the green revolution, is advancing agriculture so that farms become continuously evolving systems, with the goal of optimizing animal production in an eco-friendly way. In this direction, we propose exploiting the acoustic modality for farm monitoring. Such information could be used in a stand-alone or complementary mode to constantly monitor animal populations and behavior. To this end, we designed a scheme classifying the vocalizations produced by farm animals. More precisely, we propose a directed acyclic graph, where each node carries out a binary classification task using hidden Markov models. The topological ordering follows a criterion derived from the Kullback-Leibler divergence. During the experimental phase, we employed a publicly available dataset including vocalizations of seven animals typically encountered in farms, where we report promising recognition rates outperforming state-of-the-art classifiers.
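The paper's nodes are binary HMM classifiers ordered by a Kullback-Leibler criterion; the sketch below shows only the generic DAG elimination walk such schemes share, with `decide(a, b, x)` standing in for whatever trained binary classifier sits at each node. The class names and the dummy decision rule are purely for demonstration.

```python
# Sketch of multi-class decision over a DAG of pairwise binary classifiers.
# `decide(a, b, x)` is a stand-in for a trained binary model (HMMs in the
# paper) returning whichever of the two labels wins on sample x.
def dag_classify(x, classes, decide):
    """Walk the DAG: each binary test eliminates one candidate class."""
    candidates = list(classes)
    while len(candidates) > 1:
        a, b = candidates[0], candidates[-1]
        winner = decide(a, b, x)
        candidates.remove(b if winner == a else a)
    return candidates[0]

# Example with a trivial stand-in classifier:
label = dag_classify(
    x=None,
    classes=["cow", "sheep", "chicken", "pig"],
    decide=lambda a, b, x: min(a, b),   # dummy rule for demonstration
)
print(label)  # -> "chicken"
```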
The assessment and development of methods in (spatial) sound ecology
As vital ecosystems across the globe come under uncharted pressure from climate change and industrial land use, understanding the processes driving ecosystem viability has never been more critical. Nuanced ecosystem understanding comes from well-collected field data and a wealth of associated interpretations. In recent years, the most popular methods of ecosystem monitoring have shifted from often damaging and labour-intensive manual data collection to automated methods of data collection and analysis. Sound ecology describes the school of research that uses information transmitted through sound to infer properties about an area's species, biodiversity, and health. In this thesis, we explore and develop state-of-the-art automated monitoring with sound, specifically relating to data storage practice and to spatial acoustic recording and data analysis.
In the first chapter, we explore the necessity and methods of ecosystem monitoring, focusing on acoustic monitoring, and later examine how and why sound is recorded and the current state of the art in acoustic monitoring. Chapter one concludes by setting out the aims and overall content of the following chapters. We begin the second chapter by exploring methods used to mitigate data storage expense, a widespread issue as automated methods quickly amass vast amounts of data which can be expensive and impractical to manage. Importantly, I explain how these data management practices are often used without known consequence, something I then address. Specifically, I present evidence that the most commonly used data reduction methods (namely compression and temporal subsetting) have a surprisingly small impact on the information content of recorded sound compared to the choice of analysis method. This work also adds to the increasing evidence that deep learning-based methods of environmental sound quantification are more powerful and robust to experimental variation than more traditional acoustic indices.
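To make the kind of comparison involved concrete, here is a minimal sketch: compute a simplified Acoustic Complexity Index on a recording and on a temporally subset version of it. The textbook ACI formulation, the file name, and the 10-seconds-per-minute subsetting scheme are all assumptions for illustration, not the thesis code.

```python
# Illustrative check of how temporal subsetting shifts an acoustic index.
# This is a simplified textbook Acoustic Complexity Index, not the thesis
# code; the file name and subsetting scheme are assumptions.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def acoustic_complexity_index(audio, fs):
    _, _, S = spectrogram(audio, fs=fs, nperseg=512)
    diffs = np.abs(np.diff(S, axis=1)).sum(axis=1)   # intensity change per bin
    return float((diffs / (S.sum(axis=1) + 1e-12)).sum())

fs, audio = wavfile.read("dawn_chorus.wav")          # hypothetical mono file
full_aci = acoustic_complexity_index(audio, fs)

# Temporal subsetting: keep only the first 10 s of every minute.
subset = np.concatenate([audio[i:i + 10 * fs]
                         for i in range(0, len(audio), 60 * fs)])
subset_aci = acoustic_complexity_index(subset, fs)
print(f"ACI full: {full_aci:.1f}  ACI subset: {subset_aci:.1f}")
```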
In the latter chapters, I focus on using multichannel acoustic recording for sound-source localisation. Knowing where a sound originated has a range of ecological uses, including counting individuals, locating threats, and monitoring habitat use. While an exciting application of acoustic technology, spatial acoustics has had minimal uptake owing to the expense, impracticality and inaccessibility of equipment. In my third chapter, I introduce MAARU (Multichannel Acoustic Autonomous Recording Unit), a low-cost, easy-to-use and accessible solution to this problem. I explain the software and hardware necessary for spatial recording and show how MAARU can accurately localise the direction of a sound to within ±10°. In the fourth chapter, I explore how MAARU devices deployed in the field can be used for enhanced ecosystem monitoring by spatially clustering individuals by calling direction, for more accurate abundance approximations and crude species-specific monitoring of habitat use. Most literature on spatial acoustics cites the need for many accurately synced recording devices over an area; this chapter provides the first evidence of advances made with just one recorder.
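Direction-of-arrival estimation of this kind is commonly built on the time difference of arrival between microphone pairs. As a hedged illustration (not MAARU's actual pipeline), a GCC-PHAT sketch for a two-microphone pair might look like the following, where the mic spacing, sample rate, and synthetic signals are all assumed:

```python
# Hedged sketch of two-microphone direction-of-arrival via GCC-PHAT.
# Mic spacing, sample rate, and the synthetic signals are assumptions;
# MAARU's actual localisation pipeline may differ.
import numpy as np

def gcc_phat_tdoa(sig, ref, fs):
    """Time difference of arrival (seconds) of sig relative to ref."""
    n = len(sig) + len(ref)
    X = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    cc = np.fft.irfft(X / (np.abs(X) + 1e-12), n=n)  # PHAT weighting
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

C, D, FS = 343.0, 0.10, 48_000        # speed of sound (m/s), spacing (m), Hz
rng = np.random.default_rng(0)
mic_b = rng.standard_normal(FS)       # 1 s of noise at the reference mic
mic_a = np.roll(mic_b, 7)             # 7-sample delay stands in for geometry

tdoa = gcc_phat_tdoa(mic_a, mic_b, FS)
angle = np.degrees(np.arcsin(np.clip(tdoa * C / D, -1.0, 1.0)))
print(f"estimated bearing: {angle:.1f} degrees")   # ~30 for this delay
```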
Finally, I conclude this thesis by restating my aims and discussing my success in achieving them. Specifically, in the thesis' conclusion, I reiterate the contributions made to the field as a direct result of this work and outline some possible development avenues.
Exploring Robot Teleoperation in Virtual Reality
This thesis presents research on VR-based robot teleoperation, focusing on remote environment visualisation in virtual reality, the effects of remote environment reconstruction scale on the human operator's ability to control the robot, and the operator's visual attention patterns when teleoperating a robot from virtual reality.
A VR-based robot teleoperation framework was developed; it is compatible with various robotic systems and cameras, allowing teleoperation and supervised control with any ROS-compatible robot and visualisation of the environment through any ROS-compatible RGB and RGBD cameras. The framework includes mapping, segmentation, tactile exploration, and non-physically demanding VR interface navigation and controls through any Unity-compatible VR headset and controllers or haptic devices.
Point clouds are a common way to visualise remote environments in 3D, but they often suffer from distortions and occlusions, making it difficult to accurately represent objects' textures. This can lead to poor decision-making during teleoperation if objects are inaccurately represented in the VR reconstruction. A study was conducted using an end-effector-mounted RGBD camera with OctoMap mapping, allowing the remote environment to be explored with fewer point cloud distortions and occlusions while using relatively little bandwidth. Additionally, a tactile exploration study proposed a novel method for visually presenting information about objects' materials in the VR interface, to improve the operator's decision-making and address the challenges of point cloud visualisation.
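For context on the data involved, the sketch below shows the standard pinhole back-projection from a depth frame to the XYZ points an occupancy map such as OctoMap is built from. The camera intrinsics and the random stand-in frame are illustrative assumptions, not the thesis setup.

```python
# Pinhole back-projection from a depth frame to the XYZ point cloud an
# occupancy map such as OctoMap consumes. Intrinsics and the random
# stand-in frame are illustrative assumptions, not the thesis camera.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """depth: (H, W) array in metres; returns an (N, 3) array of XYZ points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                       # drop invalid readings

depth = np.random.uniform(0.5, 3.0, (480, 640))     # stand-in depth frame
cloud = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)                                  # (307200, 3)
```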
Two studies were conducted to understand the effect of dynamic virtual world scaling on teleoperation flow. The first study investigated the use of rate mode control with constant and variable mapping of the operator's joystick position to the speed (rate) of the robot's end-effector, depending on the virtual world scale. The results showed that variable mapping allowed participants to teleoperate the robot more effectively, but at the cost of increased perceived workload.
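As a rough sketch of the distinction between the two mappings (the gain values and the inverse-scale form are assumptions for illustration, not the thesis implementation):

```python
# Sketch of the constant vs variable joystick-to-rate mappings. The gain
# values and the inverse-scale form are assumptions for illustration.
def end_effector_rate(joystick, world_scale, variable=True):
    """joystick in [-1, 1]; returns a commanded end-effector speed in m/s."""
    base_gain = 0.10                    # m/s at full deflection (assumed)
    gain = base_gain / world_scale if variable else base_gain
    return gain * joystick

# At 4x zoom the variable mapping commands a quarter of the constant speed,
# trading raw speed for finer control:
print(end_effector_rate(1.0, 4.0), end_effector_rate(1.0, 4.0, variable=False))
```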
The second study examined how operators used the virtual world scale in supervised control, comparing the scales participants chose at the beginning and at the end of a 3-day experiment. The results showed that as operators became more skilled at the task, they, as a group, used a different virtual world scale; participants' prior video gaming experience also affected the scale they chose.
Finally, the visual attention study investigated how operators' visual attention changes as they become better at teleoperating a robot using the framework.
The results revealed the most important objects in the VR-reconstructed remote environment, as indicated by operators' visual attention patterns, as well as how their visual priorities shifted as they improved. The study also demonstrated that operators' prior video gaming experience affects their ability to teleoperate the robot and their visual attention behaviours.
Sonic Skills
It is common for us today to associate the practice of science primarily with the act of seeing: with staring at computer screens, analyzing graphs, and presenting images. We may notice that physicians use stethoscopes to listen for disease, that biologists tune into sound recordings to understand birds, or that engineers have created Geiger counters warning us of radiation through sound. But in the sciences overall, we think, seeing is believing. This open access book explains why, indeed, listening for knowledge plays an ambiguous, if fascinating, role in the sciences. For what purposes have scientists, engineers and physicians listened to the objects of their interest? How did they listen exactly? And why has listening often been contested as a legitimate form of access to scientific knowledge? This concise monograph combines historical and ethnographic evidence about the practices of listening on shop floors, in laboratories, field stations, hospitals, and conference halls, between the 1920s and today. It shows how scientists have used sonic skills (skills required for making, recording, storing, retrieving, and listening to sound) in ensembles: sets of instruments and techniques for particular situations of knowledge making. Yet rather than pleading for the emancipation of hearing at the expense of seeing, this essay investigates when, how, and under which conditions the ear has contributed to science dynamics, either in tandem with or without the eye.
EVA London 2022: Electronic Visualisation and the Arts
The Electronic Visualisation and the Arts London 2022 Conference (EVA London 2022) is co-sponsored by the Computer Arts Society (CAS) and BCS, the Chartered Institute for IT, of which the CAS is a Specialist Group. Of course, this has been a difficult time for all conferences, with the Covid-19 pandemic. For the first time since 2019, the EVA London 2022 Conference is a physical conference. It is also an online conference, as it was in the previous two years. We continue with publishing the proceedings, both online, with open access via ScienceOpen, and also in our traditional printed form, for the second year in full colour. Over recent decades, the EVA London Conference on Electronic Visualisation and the Arts has established itself as one of the United Kingdom's most innovative and interdisciplinary conferences. It brings together a wide range of research domains to celebrate a diverse set of interests, with a specialised focus on visualisation. The long and short papers in this volume cover varied topics concerning the arts, visualisations, and IT, including 3D graphics, animation, artificial intelligence, creativity, culture, design, digital art, ethics, heritage, literature, museums, music, philosophy, politics, publishing, social media, and virtual reality, as well as other related interdisciplinary areas.
The EVA London 2022 proceedings present a wide spectrum of papers, demonstrations, Research Workshop contributions, other workshops, and, for the seventh year, the EVA London Symposium, in the form of an opening morning session with three invited contributors. The conference includes a number of other associated evening events, including ones organised by the Computer Arts Society, Art in Flux, and EVA International. As in previous years, there are Research Workshop contributions in this volume, aimed at encouraging participation by postgraduate students and early-career artists, accepted either through the peer-review process or directly by the Research Workshop chair. The Research Workshop contributors are offered bursaries to aid participation. In particular, EVA London liaises with Art in Flux, a London-based group of digital artists. The EVA London 2022 proceedings include long papers and short "poster" papers from international researchers inside and outside academia, from graduate artists, PhD students, industry professionals, established scholars, and senior researchers, who value EVA London for its interdisciplinary community. The conference also features keynote talks. A special feature this year is support for Ukrainian culture after the invasion of Ukraine earlier in the year. This publication has resulted from a selective peer review process, fitting as many excellent submissions as possible into the proceedings.
This year, submission numbers were lower than in previous years, most likely due to the pandemic and a new requirement to submit drafts of long papers for review as well as abstracts. It is still pleasing to have so many good proposals from which to select the papers that have been included. EVA London is part of a larger network of EVA international conferences. EVA events have been held in Athens, Beijing, Berlin, Brussels, California, Cambridge (both UK and USA), Canberra, Copenhagen, Dallas, Delhi, Edinburgh, Florence, Gifu (Japan), Glasgow, Harvard, Jerusalem, Kiev, Laval, London, Madrid, Montreal, Moscow, New York, Paris, Prague, St Petersburg, Thessaloniki, and Warsaw. Further venues for EVA conferences are very much encouraged by the EVA community. As noted earlier, this volume is a record of accepted submissions to EVA London 2022. Associated online presentations are in general recorded and made available online after the conference.
A Multidimensional Sketching Interface for Visual Interaction with Corpus-Based Concatenative Sound Synthesis
The present research sought to investigate the correspondence between auditory and visual feature dimensions and to utilise this knowledge in order to inform the design of audio-visual mappings for visual control of sound synthesis. The first stage of the research involved the design and implementation of Morpheme, a novel interface for interaction with corpus-based concatenative synthesis. Morpheme uses sketching as a model for interaction between the user and the computer. The purpose of the system is to facilitate the expression of sound design ideas by describing the qualities of the sound to be synthesised in visual terms, using a set of perceptually meaningful audio-visual feature associations. The second stage of the research involved the preparation of two multidimensional mappings for the association between auditory and visual dimensions. The third stage of this research involved the evaluation of the audio-visual (A/V) mappings and of Morpheme's user interface. The evaluation comprised two controlled experiments, an online study and a user study. Our findings suggest that the strength of the perceived correspondence between the A/V associations prevails over the timbre characteristics of the sounds used to render the complementary polar features. Hence, the empirical evidence gathered by previous research is generalizable/applicable to different contexts, and the overall dimensionality of the sound used for rendering should not have a very significant effect on the comprehensibility and usability of an A/V mapping. However, the findings of the present research also show that there is a non-linear interaction between the harmonicity of the corpus and the perceived correspondence of the audio-visual associations. For example, strongly correlated cross-modal cues such as size-loudness or vertical position-pitch are affected less by the harmonicity of the audio corpus in comparison to more weakly correlated dimensions (e.g. texture granularity-sound dissonance). No significant differences were revealed as a result of musical/audio training. The third study consisted of an evaluation of Morpheme's user interface, where participants were asked to use the system to design a sound for a given video footage. The usability of the system was found to be satisfactory. An interface for drawing visual queries was developed for high-level control of the retrieval and signal processing algorithms of concatenative sound synthesis. This thesis elaborates on previous research findings and proposes two methods for empirically driven validation of audio-visual mappings for sound synthesis. These methods could be applied to a wide range of contexts in order to inform the design of cognitively useful multi-modal interfaces and the representation and rendering of multimodal data. Moreover, this research contributes to the broader understanding of multimodal perception by gathering empirical evidence about the correspondence between auditory and visual feature dimensions and by investigating which factors affect the perceived congruency between aural and visual structures.
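As a toy illustration of the strongly correlated cross-modal pairs the thesis names (size-loudness, vertical position-pitch), a mapping function might look like the sketch below. The ranges and the exponential pitch law are assumptions for illustration, not Morpheme's actual mapping.

```python
# Toy version of the strongly correlated cross-modal pairs the thesis
# evaluates: stroke size -> loudness, vertical position -> pitch. The
# ranges and the exponential pitch law are assumptions, not Morpheme's
# actual mapping.
def av_mapping(size_norm, ypos_norm):
    """Inputs in [0, 1]; returns (gain in [0, 1], pitch in Hz)."""
    gain = size_norm                          # bigger stroke, louder sound
    pitch_hz = 110.0 * 2 ** (ypos_norm * 4)   # 4 octaves above A2
    return gain, pitch_hz

print(av_mapping(0.8, 0.5))   # large stroke at mid-height -> (0.8, 440.0)
```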