
    Sensing and mapping for interactive performance

    This paper describes a trans-domain mapping (TDM) framework for translating meaningful activities from one creative domain onto another. The multi-disciplinary framework is designed to facilitate an intuitive and non-intrusive interactive multimedia performance interface that offers users or performers real-time control of multimedia events using their physical movements. It is intended to be a highly dynamic real-time performance tool, sensing and tracking activities and changes in order to provide interactive multimedia performances. Starting from a straightforward definition of the TDM framework, this paper reports several implementations and multi-disciplinary collaborative projects using the proposed framework, including a motion- and colour-sensitive system, a sensor-based system for triggering musical events, and a distributed multimedia server for audio mapping of a real-time face tracker, and discusses different aspects of mapping strategies in their context. Plausible future directions, developments and explorations with the proposed framework, including stage augmentation and virtual and augmented reality, which involve sensing and mapping of physical and non-physical changes onto multimedia control events, are also discussed.
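
    A trans-domain mapping of this kind can be pictured as rescaling features sensed in one domain onto control parameters in another. The sketch below is purely illustrative: the feature names, ranges and target parameters are assumptions, not taken from the paper.

    # Illustrative sketch of a trans-domain mapping: normalised movement
    # features from a tracker are rescaled onto multimedia control values.
    # All names and ranges here are hypothetical, not from the paper.

    def map_range(value, in_lo, in_hi, out_lo, out_hi):
        """Linearly rescale a value from one domain's range to another's."""
        value = max(in_lo, min(in_hi, value))          # clamp to the input range
        t = (value - in_lo) / (in_hi - in_lo)
        return out_lo + t * (out_hi - out_lo)

    def movement_to_controls(x, y, speed):
        """Map tracked position (pixels) and speed to sound/image controls."""
        return {
            "pan":        map_range(x, 0, 640, -1.0, 1.0),    # horizontal position -> stereo pan
            "brightness": map_range(y, 0, 480, 0.0, 1.0),     # vertical position -> video brightness
            "volume":     map_range(speed, 0, 50, 0.2, 1.0),  # movement speed -> loudness
        }

    print(movement_to_controls(320, 120, 35))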

    Spatial audio in small display screen devices

    Our work addresses the problem of (visual) clutter in mobile device interfaces. The solution we propose involves the translation of a technique, from the graphical to the audio domain, for exploiting space in information representation. This article presents an illustrative example in the form of a spatialised audio progress bar. In usability tests, participants performed background monitoring tasks significantly more accurately using this spatialised audio (as compared with a conventional visual) progress bar. Moreover, their performance in a simultaneously running, visually demanding foreground task was significantly improved in the eyes-free monitoring condition. These results have important implications for the design of multi-tasking interfaces for mobile devices.
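
    One common way to place a progress indicator in auditory space is to pan its sound from one ear to the other as the task completes. The sketch below assumes a simple constant-power stereo pan, which may differ from the spatialisation actually used in the study.

    import math

    def progress_to_pan_gains(progress):
        """Constant-power pan: progress 0.0 (hard left) .. 1.0 (hard right)."""
        angle = progress * math.pi / 2
        return math.cos(angle), math.sin(angle)   # (left gain, right gain)

    for p in (0.0, 0.25, 0.5, 0.75, 1.0):
        left, right = progress_to_pan_gains(p)
        print(f"progress {p:.2f}: L={left:.2f} R={right:.2f}")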

    Sonification of Network Traffic Flow for Monitoring and Situational Awareness

    Maintaining situational awareness of what is happening within a network is challenging, not least because the behaviour of interest happens inside computers and communications networks, but also because data traffic speeds and volumes are beyond human ability to process. Visualisation is widely used to present information about the dynamics of network traffic. Although it provides operators with an overall view and specific information about particular traffic or attacks on the network, it often fails to represent the events in an understandable way. Visualisations require visual attention and so are not well suited to continuous monitoring scenarios in which network administrators must carry out other tasks. Situational awareness is critical for decision-making in the domain of computer network monitoring, where it is vital to be able to identify and recognise network environment behaviours. Here we present SoNSTAR (Sonification of Networks for SiTuational AwaReness), a real-time sonification system to be used in the monitoring of computer networks to support the situational awareness of network administrators. SoNSTAR provides an auditory representation of all the TCP/IP protocol traffic within a network based on the different traffic flows between network hosts. SoNSTAR raises situational awareness levels for computer network defence by allowing operators to achieve better understanding and performance while imposing less workload compared to visual techniques. SoNSTAR identifies the features of network traffic flows by inspecting the status flags of TCP/IP packet headers and mapping traffic events to recorded sounds to generate a soundscape representing the real-time status of the network traffic environment. Listening to the soundscape allows the administrator to recognise anomalous behaviour quickly and without having to continuously watch a computer screen. Comment: 17 pages, 7 figures, plus supplemental material in GitHub repository.
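
    The flag-inspection and event-to-sound mapping can be illustrated with a small sketch. The event names, flag rules and sound file names below are invented for illustration and are not SoNSTAR's actual mappings.

    from collections import Counter

    # Hypothetical event-to-sound mapping in the spirit of the approach described:
    # combinations of TCP header flags observed on a flow become named events,
    # and each event triggers a recorded sound. Sound file names are invented.
    EVENT_SOUNDS = {
        "syn_only": "birds.wav",    # bare connection attempts (possible scan)
        "syn_ack":  "water.wav",    # completed handshakes
        "fin":      "wind.wav",     # normal teardown
        "rst":      "thunder.wav",  # resets / refused or abnormal traffic
    }

    def classify_packet(flags):
        """Reduce a set of TCP flag names to one of the named traffic events."""
        if "SYN" in flags and "ACK" not in flags:
            return "syn_only"
        if "SYN" in flags and "ACK" in flags:
            return "syn_ack"
        if "RST" in flags:
            return "rst"
        if "FIN" in flags:
            return "fin"
        return None

    def soundscape_for_window(packets):
        """Count events in a capture window and return the sounds to trigger."""
        events = Counter(classify_packet(p) for p in packets)
        events.pop(None, None)
        return {EVENT_SOUNDS[e]: n for e, n in events.items()}

    window = [{"SYN"}, {"SYN"}, {"SYN", "ACK"}, {"FIN", "ACK"}, {"RST"}]
    print(soundscape_for_window(window))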

    Towards musical interaction : 'Schismatics' for e-violin and computer.

    This paper discusses the evolution of the Max/MSP patch used in schismatics (2007, rev. 2010) for electric violin (Violectra) and computer, by composer Sam Hayden in collaboration with violinist Mieko Kanno. schismatics involves a standard performance paradigm of a fixed notated part for the e-violin with sonically unfixed live computer processing. Hayden was unsatisfied with the early version of the piece: the use of attack detection on the live e-violin playing to trigger stochastic processes led to an essentially reactive behaviour in the computer, resulting in a somewhat predictable one-to-one sonic relationship between them. It demonstrated little internal relationship between the two beyond an initial e-violin ‘action’ causing a computer ‘event’. The revisions in 2010, enabled by an AHRC Practice-Led research award, aimed to achieve 1) a more interactive performance situation and 2) a subtler and more ‘musical’ relationship between live and processed sounds. This was realised through the introduction of sound analysis objects, in particular machine listening and learning techniques developed by Nick Collins. One aspect of the programming was the mapping of analysis data to synthesis parameters, enabling the computer transformations of the e-violin to be directly related to Kanno’s interpretation of the piece in performance.
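
    Mapping analysis data to synthesis parameters typically amounts to rescaling machine-listening features onto the ranges the processing expects. The following sketch is a generic illustration of that idea; the feature names, ranges and synthesis parameters are assumptions and do not reproduce the actual Max/MSP patch described.

    # Minimal sketch of analysis-to-synthesis mapping: features extracted
    # from a live instrument signal drive parameters of the computer
    # transformation. Names and ranges are illustrative assumptions.

    def scale(value, in_lo, in_hi, out_lo, out_hi):
        """Clamp and linearly rescale a feature value onto a parameter range."""
        t = (min(max(value, in_lo), in_hi) - in_lo) / (in_hi - in_lo)
        return out_lo + t * (out_hi - out_lo)

    def analysis_to_synthesis(features):
        """Map machine-listening features onto processing parameters."""
        return {
            "filter_cutoff_hz": scale(features["spectral_centroid"], 200, 4000, 300, 8000),
            "grain_density":    scale(features["onset_rate"], 0, 10, 1, 50),
            "reverb_mix":       scale(features["rms"], 0.0, 1.0, 0.1, 0.8),
        }

    print(analysis_to_synthesis({"spectral_centroid": 1500, "onset_rate": 3, "rms": 0.4}))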

    CABE : a cloud-based acoustic beamforming emulator for FPGA-based sound source localization

    Microphone arrays are gaining in popularity thanks to the availability of low-cost microphones. Applications including sonar, binaural hearing aid devices, acoustic indoor localization techniques and speech recognition are proposed by several research groups and companies. In most of the available implementations, the microphones utilized are assumed to offer an ideal response in a given frequency domain. Several toolboxes and software can be used to obtain a theoretical response of a microphone array with a given beamforming algorithm. However, a tool facilitating the design of a microphone array taking into account the non-ideal characteristics could not be found. Moreover, generating packages facilitating the implementation on Field Programmable Gate Arrays has, to our knowledge, not been carried out yet. Visualizing the responses in 2D and 3D also poses an engineering challenge. To alleviate these shortcomings, a scalable Cloud-based Acoustic Beamforming Emulator (CABE) is proposed. The non-ideal characteristics of microphones are considered during the computations and results are validated with acoustic data captured from microphones. It is also possible to generate hardware description language packages containing delay tables facilitating the implementation of Delay-and-Sum beamformers in embedded hardware. Truncation error analysis can also be carried out for fixed-point signal processing. The effects of disabling a given group of microphones within the microphone array can also be calculated. Results and packages can be visualized with a dedicated client application. Users can create and configure several parameters of an emulation, including sound source placement, the shape of the microphone array and the required signal processing flow. Depending on the user configuration, 2D and 3D graphs showing the beamforming results, waterfall diagrams and performance metrics can be generated by the client application. The emulations are also validated with captured data from existing microphone arrays.
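
    The delay tables such an emulator exports for a delay-and-sum beamformer come down to geometry: each microphone's signal is delayed so that a plane wave from the steering direction adds coherently. The sketch below assumes an idealised uniform linear array, a 48 kHz sample rate and illustrative steering angles, none of which are taken from the paper; rounding the delays to whole samples is one source of the kind of truncation error the tool analyses.

    import math

    SPEED_OF_SOUND = 343.0   # m/s
    SAMPLE_RATE = 48_000     # Hz

    def linear_array(n_mics, spacing):
        """Microphone x-positions (metres) for a uniform linear array."""
        return [i * spacing for i in range(n_mics)]

    def delay_table(mic_x, steering_deg):
        """Per-microphone delays (in samples) that align a plane wave arriving
        from steering_deg so the channels can simply be summed."""
        theta = math.radians(steering_deg)
        raw = [x * math.sin(theta) / SPEED_OF_SOUND for x in mic_x]
        offset = min(raw)                        # make all delays non-negative
        return [round((d - offset) * SAMPLE_RATE) for d in raw]

    mics = linear_array(n_mics=8, spacing=0.04)
    for angle in (-60, 0, 60):
        print(angle, delay_table(mics, angle))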

    Interactive digital art

    In this paper, we present DNArt in general and our work in DNArt’s lab, including a detailed presentation of the first artwork to come out of the lab, in September 2011, entitled “ENCOUNTERS #3”, and the use of DNArt for digital art conservation. Research into the use of DNArt for digital art conservation is currently being conducted by the Netherlands Institute for Media Art (Nederlands Instituut voor Mediakunst, NIMk). The paper describes this research and presents preliminary results. Finally, it offers the reader the possibility to participate in DNArt’s development.

    Affective Medicine: a review of Affective Computing efforts in Medical Informatics

    Background: Affective computing (AC) is concerned with emotional interactions performed with and through computers. It is defined as “computing that relates to, arises from, or deliberately influences emotions”. AC enables investigation and understanding of the relation between human emotions and health, as well as the application of assistive and useful technologies in the medical domain. Objectives: 1) To review the general state of the art in AC and its applications in medicine, and 2) to establish synergies between the research communities of AC and medical informatics. Methods: Aspects related to the human affective state as a determinant of human health are discussed, coupled with an illustration of significant AC research and related literature output. Moreover, affective communication channels are described and their range of application fields is explored through illustrative examples. Results: The presented conferences, European research projects and research publications illustrate the recent increase of interest in the AC area by the medical community. Tele-home healthcare, ambient intelligence (AmI), ubiquitous monitoring, e-learning and virtual communities with emotionally expressive characters for elderly or impaired people are a few areas where the potential of AC has been realized and applications have emerged. Conclusions: A number of gaps can potentially be overcome through the synergy of AC and medical informatics. The application of AC technologies parallels the advancement of the existing state of the art and the introduction of new methods. The body of work and projects reviewed in this paper points to an ambitious and optimistic synergetic future for the affective medicine field.