
    Live Coding, Live Notation, Live Performance

    This paper/demonstration explores relationships between code and notation, including their representation, visualisation and performance. Performative aspects of live coding activities are increasingly being investigated as the live coding movement continues to grow and develop. Although live instrumental performance is sometimes included as an accompaniment to live coding, it is often not a fully integrated part of the performance, relying on improvisation and/or basic indicative forms of notation with varying levels of sophistication and universality. Technologies are developing which enable the use of fully explicit music notations as well as more graphic ones, allowing more fully integrated systems of code in and as performance which can also include notations of arbitrary complexity. This in turn allows the full skills of instrumental musicians to be utilised and synchronised in the process. This presentation/demonstration presents work and performances already undertaken with these technologies, including technologies for body sensing and data acquisition that translate the movements of dancers and musicians into synchronously performable notation, integrated by live and prepared coding. The author, together with clarinetist Ian Mitchell, presents a short live performance utilising these techniques, discusses methods for the dissemination and interpretation of live-generated notations, and investigates how these take advantage of instrumental musicians' training-related neuroplasticity.
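
    As a purely illustrative sketch of the kind of mapping involved, the Python function below quantises a stream of body-sensor readings into timed note events. The function name, sampling rate and pitch mapping are assumptions for illustration, not the authors' system.

        # Hypothetical sketch: mapping body-sensor magnitudes to note events.
        # The mapping and constants below are invented for illustration.
        def sensor_to_notes(samples, base_pitch=60, rate_hz=50):
            """Quantise accelerometer magnitudes into (onset, pitch, duration) tuples."""
            notes = []
            for i, magnitude in enumerate(samples):
                # Stronger movement -> higher pitch, capped two octaves above base.
                pitch = base_pitch + min(int(magnitude * 12), 24)
                notes.append((i / rate_hz, pitch, 0.125))  # fixed 1/8-note grid
            return notes

        print(sensor_to_notes([0.1, 0.8, 0.3]))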

    CacophonyViz: Visualisation of Birdsong Derived Ecological Health Indicators

    The purpose of this work was to create an easy-to-interpret visualisation of a simple index that represents the quantity and quality of bird life in New Zealand. The index was calculated by an algorithm that assigned a weight to each species of bird. This work is important as it forms part of the ongoing work of the Cacophony Project, which aims to eradicate the pests that currently destroy New Zealand's native birds and their habitat. The map will be used to promote the Cacophony Project to a wide public audience and encourage participation by giving communities relevant feedback on the effects of interventions such as planting and trapping. The Design Science methodology guided this work through the creation of a series of prototypes whose evaluation built on lessons learnt at each stage, resulting in a final artifact that successfully displayed the index at various locations across a map of New Zealand. It is concluded that the artifact is suitable for deployment once real data from the automatic analysis of audio recordings at multiple locations becomes available.
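
    The index itself is a weighted sum over species. The following is a minimal illustrative sketch in Python; the species names and weights are invented for the example and are not the project's actual weighting.

        # Illustrative species-weighted birdsong index; weights are invented.
        SPECIES_WEIGHTS = {"tui": 3.0, "bellbird": 2.5, "fantail": 1.5, "sparrow": 0.5}

        def cacophony_index(detections):
            """Sum per-species detection counts, weighted by assumed ecological value."""
            return sum(SPECIES_WEIGHTS.get(species, 1.0) * count
                       for species, count in detections.items())

        print(cacophony_index({"tui": 4, "fantail": 7, "sparrow": 12}))  # 28.5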

    Visualising Music with Impromptu

    This paper discusses our experiments with a method of creating visual representations of music using a graphical library for Impromptu that emulates and builds on Logo's turtle graphics. We explore the potential and limitations of this library for visualising music, and demonstrate some ways in which this simple system can be utilised to assist the musician by revealing musical structure.
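
    Impromptu itself is Scheme-based; the sketch below re-expresses the core idea, a melody's pitch contour steering a drawing turtle, using Python's built-in turtle module. The pitch-to-angle mapping is an assumption for illustration, not the library's actual scheme.

        # Sketch of turtle-graphics music visualisation (pitch contour -> turning angle).
        import turtle

        def draw_melody(pitches, step=20):
            """Draw one segment per note; rising intervals turn left, falling turn right."""
            pen = turtle.Turtle()
            previous = pitches[0]
            for pitch in pitches:
                pen.left((pitch - previous) * 5)  # interval size sets the turn angle
                pen.forward(step)
                previous = pitch

        draw_melody([60, 62, 64, 62, 60, 67])
        turtle.done()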

    Simulation modelling and visualisation: toolkits for building artificial worlds

    Simulation users at all levels make heavy use of compute resources to drive computational simulations across widely varying application areas of research, using different simulation paradigms. Simulations are implemented in many software forms, ranging from highly standardised and general models that run in proprietary software packages to ad hoc hand-crafted simulation codes for very specific applications. Visualisation of the workings or results of a simulation is another highly valuable capability for simulation developers and practitioners. There are many different software libraries and methods available for creating a visualisation layer for simulations, and it is often a difficult and time-consuming process to assemble a toolkit of these libraries and other resources that best suits a particular simulation model. We present here a breakdown of the main simulation paradigms, and discuss the differing toolkits and approaches that researchers have taken to tackle coupled simulation and visualisation in each paradigm.
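
    One common coupling pattern in this space is a time-stepped simulation core that hands its state to a pluggable visualisation callback each step, keeping the two layers swappable. A minimal Python sketch, with a toy random-walk model standing in for a real simulation:

        # Minimal time-stepped simulation with a pluggable visualisation hook.
        import random

        def step(state):
            """Advance a toy 1-D random walk by one time step."""
            return [x + random.choice((-1, 1)) for x in state]

        def run(state, steps, visualise=None):
            for t in range(steps):
                state = step(state)
                if visualise:
                    visualise(t, state)  # visualisation layer stays decoupled
            return state

        run([0] * 5, 3, visualise=lambda t, s: print(f"t={t}: {s}"))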

    InfiniTAM v3: A Framework for Large-Scale 3D Reconstruction with Loop Closure

    Volumetric models have become a popular representation for 3D scenes in recent years. One breakthrough leading to their popularity was KinectFusion, which focuses on 3D reconstruction using RGB-D sensors. However, monocular SLAM has since also been tackled with very similar approaches. Representing the reconstruction volumetrically as a TSDF leads to most of the simplicity and efficiency that can be achieved with GPU implementations of these systems. However, this representation is memory-intensive and limits applicability to small-scale reconstructions. Several avenues have been explored to overcome this. With the aim of summarizing them and providing a fast, flexible 3D reconstruction pipeline, we propose a new, unifying framework called InfiniTAM. The idea is that steps like camera tracking, scene representation and integration of new data can easily be replaced and adapted to the user's needs. This report describes the technical implementation details of InfiniTAM v3, the third version of our InfiniTAM system. We have added various new features, as well as making numerous enhancements to the low-level code that significantly improve our camera tracking performance. The new features that we expect to be of most interest are (i) a robust camera tracking module; (ii) an implementation of Glocker et al.'s keyframe-based random ferns camera relocaliser; (iii) a novel approach to globally-consistent TSDF-based reconstruction, based on dividing the scene into rigid submaps and optimising the relative poses between them; and (iv) an implementation of Keller et al.'s surfel-based reconstruction approach.
    Comment: This article largely supersedes arXiv:1410.0925 (it describes version 3 of the InfiniTAM framework).
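
    For readers unfamiliar with TSDFs: each voxel stores a truncated signed distance to the nearest surface plus a confidence weight, and new depth observations are folded in as a weighted running average, the standard KinectFusion-style update. Below is a minimal Python sketch of that per-voxel update; the parameter values are illustrative, not InfiniTAM's defaults.

        # Standard KinectFusion-style per-voxel TSDF fusion (illustrative values).
        def fuse_voxel(tsdf, weight, sdf_new, w_new=1.0, max_weight=128.0, trunc=0.05):
            """Integrate one new signed-distance observation into a voxel."""
            sdf_new = max(-trunc, min(trunc, sdf_new))     # truncate the new SDF
            fused = (tsdf * weight + sdf_new * w_new) / (weight + w_new)
            return fused, min(weight + w_new, max_weight)  # cap weight so old data can fade

        print(fuse_voxel(0.04, 10.0, -0.02))  # -> (0.0345..., 11.0)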