A general theory for preferential sampling in environmental networks
This paper presents a general model framework for detecting the preferential sampling of environmental monitors recording an environmental process across space and/or time. This is achieved by considering the joint distribution of an environmental process with a site-selection process that determines where and when sites are placed to measure the process. The environmental process may be spatial, temporal or spatio-temporal in nature. By sharing random effects between the two processes, the joint model is able to establish whether site placement was stochastically dependent on the environmental process under study. The embedding into a spatio-temporal framework also allows for the modelling of the dynamic site-selection process itself. Real-world factors affecting both the size and location of the network can be easily modelled and quantified. Depending upon the choice of population of locations considered for selection across space and time under the site-selection process, different insights about the precise nature of preferential sampling can be obtained. The general framework developed in the paper is designed to be easily and quickly fitted using the R-INLA package. We apply this framework to a case study involving particulate air pollution over the UK, where a major reduction in the size of a monitoring network occurred through time. It is demonstrated that a significant response-biased reduction in the air quality monitoring network occurred. We also show that the network was consistently unrepresentative of the levels of particulate matter seen across much of GB throughout the operating life of the network. Finally, we show that this may have led to a severe over-reporting of the population-average exposure levels experienced across GB. This could have substantial impacts on estimates of the health effects of black smoke levels.
Comment: 33 pages of main text, 48 including the supplementary material
Growing cybernetic ears: transduction and performativity in the analogue and digital what have you
At a time when digital technologies have become ubiquitous in music making, and when the majority of research into music technology happens at the computational ‘cutting edge’, this practice-based PhD explores analogue technologies deemed, in the main, obsolete, anachronistic, or quaint nostalgic throwbacks. It asks how a combination of technological, historical and practice-based research, focused through commitment to artistic outputs in the domain of music technology, might shed new light on the terms analogue and digital, and on the nature of the analogue-digital relationship.
Underlying much contemporary enthusiasm for ‘the digital’ are progress narratives that rely on both a succession logic (old analogue technology gets replaced by new digital technology) and an assumption of isomorphism (the digital technology does all the same things as the replaced technology, though often with ‘enhanced’ affordances). This thesis questions such assumptions along historical, philosophical and practice-based trajectories.
Key to these research trajectories is the trans-discipline cybernetics, in particular the second-order cybernetics of Gordon Pask, whose self-designation ‘philosophical mechanic’ indicates the importance he placed on a cyclical, mutually accommodating thinking-designing-making. Pask presented a powerful practical methodology for the examination and creation of dynamical systems in flux, systems that evolve as a result of participant interaction, systems that can be seen to manifest self-organisation. Second-order cybernetics puts the emphasis on processes in interaction rather than positing pre-existing objects (including concepts) in a world ‘out there’. Cybernetics helps us to explore systems whose complexity and interdependence preclude separation into constituent parts, systems where control is shared across multiple mutually interacting dimensions, and where the observer is a committed participant whose actions, interests and biases cannot be divorced from the interactions therein.
Two other key concepts are: (1) transduction, which relates energy, information, patterns of growth, or other dynamical processes across media or between domains; (2) performativity, an interventional act that brings forth a world. Transduction is essential to an understanding of recording studio processes and practices: the microphone, signal processing and recording itself all rely on transduction. When viewed from a performative perspective, actions such as recording are found to be carried out very differently when the final stage of transduction is discrete (the case with the now ubiquitous digital audio workstation) or continuous (such as recording to tape). This difference is primarily due to the hyper-plasticity of digital audio, a taking of sound ‘out of time’. Rather than seeing this as an evolution of ‘precursor’ analogue technologies, as most accounts have it, this thesis takes the perspective that this is a difference in kind, rather than one of degree, and explores that difference with a particular focus on emergent and intertwined cultural, embodied and technological systems, rather than on end products.
The second half of the thesis presents the compositional practice, ranging from experimental work on tape music composition and installation, through a series of modular synthesis live performances, to tape-based recording of pop music. The physical, gestural engagement with the resistant materiality of these technologies emphasises a very different cognitive engagement with processes of composition and production to that which happens with supposed ‘successor’ digital technologies; assumptions of isomorphism, buttressed by skeuomorphic emulation, tend to occlude this cognitive distinction.
This thesis is offered as an act of cybernetic musicking – resolutely practical in orientation, with a wide-ranging, trans-disciplinary theoretical framework, and with the emphasis not on things but on ongoing processes in complex interaction with a world in constant becoming.
The Thing Breathed
The Thing Breathed is a modular synthesis composition for live performance. It explores nested feedback networks instantiated in analogue synthesis, presenting a chaotic complexity that occludes attempts to fully understand the system. It is a ‘black box’ to its performer, who spends performance time searching for rare yet fruitful zones of sonic interest that have been discovered through rehearsal and experiment. As such the nature of the performance is one of risk and commitment, steering rather than commanding, performative rather than pre-programmed.
Inferring Smooth Control: Monte Carlo Posterior Policy Iteration with Gaussian Processes
Monte Carlo methods have become increasingly relevant for control of
non-differentiable systems, approximate dynamics models and learning from data.
These methods scale to high-dimensional spaces and are effective at the
non-convex optimizations often seen in robot learning. We look at sample-based
methods from the perspective of inference-based control, specifically posterior
policy iteration. From this perspective, we highlight how Gaussian noise priors
produce rough control actions that are unsuitable for physical robot
deployment. Considering smoother Gaussian process priors, as used in episodic
reinforcement learning and motion planning, we demonstrate how smoother model
predictive control can be achieved using online sequential inference. This
inference is realized through an efficient factorization of the action
distribution and a novel means of optimizing the likelihood temperature to
improve importance sampling accuracy. We evaluate this approach on several
high-dimensional robot control tasks, matching the sample efficiency of prior
heuristic methods while also ensuring smoothness. Simulation results can be
seen at https://monte-carlo-ppi.github.io/.
Comment: 43 pages, 37 figures. Conference on Robot Learning 202
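As a rough, self-contained illustration of smoothness coming from the prior (a sketch under assumed details, not the paper's method or code), the Python snippet below samples action sequences from a temporally correlated Gaussian-process prior instead of white noise, weights them by exponentiated negative cost, and selects the likelihood temperature so that the importance weights reach a target effective sample size. The toy double-integrator dynamics, kernel length-scale, and target value are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
horizon, n_samples = 30, 256

# Smooth GP prior over action sequences (squared-exponential kernel in time),
# in place of the independent Gaussian perturbations used by many
# sampling-based controllers.
t = np.arange(horizon)[:, None]
K = np.exp(-0.5 * ((t - t.T) / 5.0) ** 2) + 1e-6 * np.eye(horizon)
L = np.linalg.cholesky(K)
actions = (L @ rng.standard_normal((horizon, n_samples))).T  # (n_samples, horizon)

def rollout_cost(a):
    """Toy 1-D double-integrator rollout; cost penalises distance to a target."""
    pos, vel, cost = 0.0, 0.0, 0.0
    for u in a:
        vel += 0.1 * u
        pos += 0.1 * vel
        cost += (pos - 1.0) ** 2 + 1e-3 * u**2
    return cost

costs = np.array([rollout_cost(a) for a in actions])

def weights(temperature):
    logw = -costs / temperature
    logw -= logw.max()
    w = np.exp(logw)
    return w / w.sum()

def effective_sample_size(w):
    return 1.0 / np.sum(w**2)

# Choose the temperature whose importance weights hit a target effective
# sample size, rather than fixing it by hand (a simple stand-in for the
# likelihood-temperature optimisation described in the abstract).
target_ess = 0.1 * n_samples
temps = np.geomspace(1e-2, 1e2, 50)
temperature = min(temps, key=lambda b: abs(effective_sample_size(weights(b)) - target_ess))

w = weights(temperature)
smooth_plan = w @ actions  # weighted-mean action sequence, smooth by construction
print(f"chosen temperature: {temperature:.3f}, ESS: {effective_sample_size(w):.1f}")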
Coherent Soft Imitation Learning
Imitation learning methods seek to learn from an expert either through
behavioral cloning (BC) of the policy or inverse reinforcement learning (IRL)
of the reward. Such methods enable agents to learn complex tasks from humans
that are difficult to capture with hand-designed reward functions. Choosing BC
or IRL for imitation depends on the quality and state-action coverage of the
demonstrations, as well as additional access to the Markov decision process.
Hybrid strategies that combine BC and IRL are not common, as initial policy
optimization against inaccurate rewards diminishes the benefit of pretraining
the policy with BC. This work derives an imitation method that captures the
strengths of both BC and IRL. In the entropy-regularized ('soft') reinforcement
learning setting, we show that the behaviour-cloned policy can be used as both
a shaped reward and a critic hypothesis space by inverting the regularized
policy update. This coherency facilitates fine-tuning cloned policies using the
reward estimate and additional interactions with the environment. This approach
conveniently achieves imitation learning through initial behaviour cloning,
followed by refinement via RL with online or offline data sources. The
simplicity of the approach enables graceful scaling to high-dimensional and
vision-based tasks, with stable learning and minimal hyperparameter tuning, in
contrast to adversarial approaches.
Comment: 51 pages, 47 figures. DeepMind internship report
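To give a flavour of the reward-from-cloning idea (a minimal sketch under assumed forms, not the authors' implementation), the snippet below treats the temperature-scaled log-ratio of a behaviour-cloned policy to a fixed reference policy as a shaped reward estimate, the quantity obtained by inverting the entropy-regularised policy update; such an estimate could then seed RL fine-tuning. The Gaussian parameterisation and all names are illustrative assumptions.

import numpy as np
from scipy.stats import norm

alpha = 0.1  # entropy-regularisation temperature (assumed value)

def bc_policy_mean(state):
    """Stand-in for a behaviour-cloned policy: a simple state-dependent mean."""
    return np.tanh(2.0 * state)

def shaped_reward(state, action, bc_std=0.2, prior_std=1.0):
    """Reward estimate recovered from the cloned policy by inverting the soft
    policy update: alpha * (log pi_BC(a|s) - log pi_0(a|s)), with a fixed
    Gaussian prior pi_0 playing the role of the reference policy."""
    log_pi_bc = norm.logpdf(action, loc=bc_policy_mean(state), scale=bc_std)
    log_pi_0 = norm.logpdf(action, loc=0.0, scale=prior_std)
    return alpha * (log_pi_bc - log_pi_0)

# Actions close to what the cloned (expert-like) policy would take receive a
# higher shaped reward, giving subsequent RL fine-tuning a coherent signal.
state = 0.5
for action in (bc_policy_mean(state), 0.0, -1.0):
    print(f"a = {action:+.2f} -> r_hat = {shaped_reward(state, action):+.3f}")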