Live Coding, Live Notation, Live Performance
This paper and demonstration explore the relationships between code and notation, including their representation, visualisation and performance. Performative aspects of live coding are increasingly being investigated as the live coding movement continues to grow and develop. Although live instrumental performance is sometimes included as an accompaniment to live coding, it is often not a fully integrated part of the performance, relying on improvisation and/or basic indicative forms of notation of varying sophistication and universality. Technologies are now emerging that enable the use of fully explicit music notations as well as more graphic ones, allowing more fully integrated systems of code in, and as, performance that can also include notations of arbitrary complexity. This in turn allows the full skills of instrumental musicians to be utilised and synchronised in the process.
The presentation/demonstration covers work and performances already undertaken with these technologies, including technologies for body sensing and data acquisition that translate the movements of dancers and musicians into synchronously performable notation, integrated by live and prepared coding. The author, together with clarinettist Ian Mitchell, presents a short live performance utilising these techniques, discusses methods for the dissemination and interpretation of live-generated notations, and investigates how these take advantage of instrumental musicians' training-related neuroplasticity.
CacophonyViz: Visualisation of Birdsong Derived Ecological Health Indicators
The purpose of this work was to create an easy to interpret visualisation of a simple index that represents the quantity and quality of bird life in New Zealand. The index was calculated from an algorithm that assigned various weights to each species of bird.
This work is important as it forms a part of the ongoing work by the Cacophony Project which aims to eradicate pests that currently destroy New Zealand native birds and their habitat. The map will be used to promote the Cacophony project to a wide public audience and encourage their participation by giving relevant feedback on the effects of intervention such as planting and trapping in their communities.
The Design Science methodology guided this work through the creation of a series of prototypes; the evaluation of each prototype built on the lessons learnt at the previous stage, resulting in a final artifact that successfully displayed the index at various locations across a map of New Zealand.
It is concluded that the artifact is suitable for deployment once real data from the automatic analysis of audio recordings at multiple locations becomes available.
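The abstract does not give the actual weighting algorithm or species weights used by the Cacophony Project; as a purely illustrative sketch, a weighted species index of this kind might be computed as follows (all weights and species names below are hypothetical):

```python
# Hypothetical sketch of a birdsong-derived health index. The real
# Cacophony Project algorithm and its per-species weights are not
# given in the abstract; the values here are illustrative only.

# Illustrative weights (higher = stronger indicator of ecological health).
SPECIES_WEIGHTS = {
    "tui": 3.0,
    "bellbird": 3.0,
    "fantail": 1.5,
    "blackbird": 0.5,  # introduced species, weighted low
}

def health_index(detections):
    """Combine per-species detection counts into a single weighted index.

    detections: mapping of species name -> number of calls detected
    at one recording location. Unknown species contribute nothing.
    """
    return sum(SPECIES_WEIGHTS.get(species, 0.0) * count
               for species, count in detections.items())

print(health_index({"tui": 2, "fantail": 4, "blackbird": 10}))  # 17.0
```

Each map location would then be coloured by its index value, giving the at-a-glance feedback on planting and trapping interventions that the abstract describes.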
Visualising Music with Impromptu
This paper discusses our experiments with a method of creating visual representations of music using a graphical library for Impromptu that emulates and builds on Logo's turtle graphics. We explore the potential and limitations of this library for visualising music, and demonstrate some ways in which this simple system can be used to assist the musician by revealing musical structure.
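Impromptu's library is Scheme-based and the abstract does not specify its mapping from music to graphics; as a language-neutral illustration of the general turtle-graphics idea (pitch intervals turn the turtle, note durations move it forward), here is a plain-Python sketch that computes the path without any drawing library. The mapping constants are arbitrary choices, not Impromptu's:

```python
import math

def melody_to_path(notes, angle_per_semitone=15.0):
    """Map a melody to a 2D turtle path: each note turns the turtle in
    proportion to its pitch interval from the previous note, then moves
    it forward in proportion to its duration. Returns the (x, y) points
    visited. The constants are illustrative, not taken from Impromptu.

    notes: list of (midi_pitch, duration) pairs.
    """
    x, y, heading = 0.0, 0.0, 0.0
    path = [(x, y)]
    prev_pitch = notes[0][0]
    for pitch, duration in notes:
        heading += (pitch - prev_pitch) * angle_per_semitone  # turn by interval
        x += duration * math.cos(math.radians(heading))       # step forward
        y += duration * math.sin(math.radians(heading))
        path.append((x, y))
        prev_pitch = pitch
    return path

# Repeated notes draw a straight line; leaps produce sharp turns, so
# phrase structure becomes visible in the shape of the path.
path = melody_to_path([(60, 1), (60, 1), (64, 1), (67, 1)])
```

Under such a mapping, a repetitive accompaniment figure traces a compact repeating shape while a wide-ranging melody sprawls across the plane, which is one way this kind of system can reveal musical structure.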
BioSAVE: display of scored annotation within a sequence context.
BACKGROUND: Visualization of sequence annotation is a common feature in many bioinformatics tools. For many applications it is desirable to restrict the display of such annotation according to a score cutoff, as biological interpretation can be difficult in the presence of the entire data. Unfortunately, many visualisation solutions are somewhat static in the way they handle such score cutoffs. RESULTS: We present BioSAVE, a sequence annotation viewer with on-the-fly selection of visualisation thresholds for each feature. BioSAVE is a versatile OS X program for visual display of scored features (annotation) within a sequence context. The program reads sequence and additional supplementary annotation data (e.g., position weight matrix matches, conservation scores, structural domains) from a variety of commonly used file formats and displays them graphically. Onscreen controls then allow for live customisation of these graphics, including on-the-fly selection of visualisation thresholds for each feature. CONCLUSION: Possible applications of the program include display of transcription factor binding sites in a genomic context, the visualisation of structural domain assignments in protein sequences and many more. The dynamic visualisation of these annotations is useful, e.g., for the determination of cutoff values of predicted features to match experimental data. Program, source code and exemplary files are freely available at the BioSAVE homepage.
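BioSAVE itself is a GUI program whose source is not reproduced here; as a minimal sketch of the underlying design idea (keep the full scored annotation set in memory and filter at display time, so threshold changes are cheap), consider the following. The `Feature` type and function names are hypothetical, not BioSAVE's API:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    """A scored sequence annotation, e.g. a PWM match or a conservation
    score over a region. Hypothetical type, not BioSAVE's own."""
    name: str
    start: int
    end: int
    score: float

def visible_features(features, cutoff):
    """Return the features that pass the current score cutoff.

    Filtering at display time rather than baking the cutoff into the
    data is what makes 'on-the-fly' threshold changes cheap: moving a
    slider only re-runs this filter, not the annotation pipeline.
    """
    return [f for f in features if f.score >= cutoff]

tracks = [Feature("TFBS", 10, 20, 0.9),
          Feature("TFBS", 35, 45, 0.4),
          Feature("domain", 5, 80, 0.7)]
print([f.score for f in visible_features(tracks, 0.5)])  # [0.9, 0.7]
```

Raising the cutoff simply shrinks the returned list before redraw, which is how a viewer can let the user tune a predicted-feature threshold against experimental data interactively.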
Simulation modelling and visualisation: toolkits for building artificial worlds
Simulation users at all levels make heavy use of compute resources to drive computational simulations for widely varying application areas of research, using different simulation paradigms. Simulations are implemented in many software forms, ranging from highly standardised and general models that run in proprietary software packages to ad hoc, hand-crafted simulation codes for very specific applications. Visualisation of the workings or results of a simulation is another highly valuable capability for simulation developers and practitioners. There are many different software libraries and methods available for creating a visualisation layer for simulations, and it is often a difficult and time-consuming process to assemble a toolkit of these libraries and other resources that best suits a particular simulation model. We present here a breakdown of the main simulation paradigms, and discuss the differing toolkits and approaches that researchers have taken to tackle coupled simulation and visualisation in each paradigm.
InfiniTAM v3: A Framework for Large-Scale 3D Reconstruction with Loop Closure
Volumetric models have become a popular representation for 3D scenes in recent years. One breakthrough leading to their popularity was KinectFusion, which focuses on 3D reconstruction using RGB-D sensors. However, monocular SLAM has since also been tackled with very similar approaches. Representing the reconstruction volumetrically as a TSDF leads to most of the simplicity and efficiency that can be achieved with GPU implementations of these systems. However, this representation is memory-intensive and limits applicability to small-scale reconstructions. Several avenues have been explored to overcome this. With the aim of summarizing them and providing a fast, flexible 3D reconstruction pipeline, we propose a new, unifying framework called InfiniTAM. The idea is that steps like camera tracking, scene representation and integration of new data can easily be replaced and adapted to the user's needs. This report describes the technical implementation details of InfiniTAM v3, the third version of our InfiniTAM system. We have added various new features, as well as making numerous enhancements to the low-level code that significantly improve our camera tracking performance. The new features that we expect to be of most interest are (i) a robust camera tracking module; (ii) an implementation of Glocker et al.'s keyframe-based random ferns camera relocaliser; (iii) a novel approach to globally-consistent TSDF-based reconstruction, based on dividing the scene into rigid submaps and optimising the relative poses between them; and (iv) an implementation of Keller et al.'s surfel-based reconstruction approach.
Comment: This article largely supersedes arXiv:1410.0925 (it describes version 3 of the InfiniTAM framework).
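For readers unfamiliar with TSDF fusion, the core per-voxel update that KinectFusion-style systems (including InfiniTAM) run on the GPU is a truncated signed distance combined by a weighted running average. The following is a simplified NumPy sketch of that standard update, not InfiniTAM's actual implementation, and the parameter values are illustrative:

```python
import numpy as np

def integrate_tsdf(tsdf, weights, depth_at_voxel, voxel_depth,
                   mu=0.05, max_weight=100.0):
    """One KinectFusion-style TSDF integration step for a batch of voxels.

    tsdf, weights  : current truncated signed distances and weights (arrays)
    depth_at_voxel : measured depth along the camera ray through each voxel
    voxel_depth    : depth of each voxel centre in the camera frame
    mu             : truncation band in metres (illustrative value)

    Simplified sketch of the standard weighted running average; the real
    systems do the projection and this update per voxel on the GPU.
    """
    sdf = depth_at_voxel - voxel_depth        # signed distance to observed surface
    valid = sdf > -mu                         # skip voxels far behind the surface
    tsdf_obs = np.clip(sdf / mu, -1.0, 1.0)   # truncate to [-1, 1]
    w_obs = np.where(valid, 1.0, 0.0)
    tsdf_new = np.where(
        valid,
        (tsdf * weights + tsdf_obs * w_obs) / np.maximum(weights + w_obs, 1e-9),
        tsdf)                                 # weighted running average
    weights_new = np.minimum(weights + w_obs, max_weight)  # cap the weight
    return tsdf_new, weights_new
```

Because every allocated voxel stores a distance and a weight, memory grows with the volume rather than the surface, which is the memory pressure the submap decomposition in feature (iii) above is designed to relieve.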