Utilizing Distance Metrics on Lineups to Examine What People Read From Data Plots
Graphics play a crucial role in statistical analysis and data mining. This
paper describes metrics developed to assist the use of lineups for making
inferential statements. Lineups embed the plot of the data among a set of null
plots, and engage a human observer to select the plot that is most different
from the rest. If the data plot is selected, this corresponds to rejecting
the null hypothesis. Metrics are calculated in association with lineups to
measure the quality of the lineup and to help understand what people see in
the data plots. The null plots represent a finite sample from a null
distribution, and the selected sample potentially affects the ease or
difficulty of a lineup. Distance metrics are designed to describe how close the
true data plot is to the null plots, and how close the null plots are to each
other. The distribution of the distance metrics is studied to learn how
well it matches what people detect in the plots, and to assess the effects
of the null-generating mechanism and of plot choices for particular tasks.
The analysis was conducted on data previously collected in Amazon
Mechanical Turk studies that used lineups to study an array of data
analysis tasks.
Comment: 28 pages, lots of figures
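To make the distance idea concrete, the following is a minimal sketch (Python; not the paper's code, and the statistic and distance are illustrative assumptions) of a lineup built from a permutation null, using the sample correlation as the plot statistic and the absolute difference of correlations as the distance between plots:

```python
import numpy as np

rng = np.random.default_rng(0)

# True data: x and y are genuinely associated.
n = 100
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)

# Null plots: permuting y breaks the association (one possible null mechanism).
m = 19
plots = [(x, rng.permutation(y)) for _ in range(m)] + [(x, y)]

# Plot statistic: sample correlation; distance between plots i and j: |r_i - r_j|.
r = np.array([np.corrcoef(px, py)[0, 1] for px, py in plots])
mean_dist = np.array([np.abs(ri - np.delete(r, i)).mean() for i, ri in enumerate(r)])

# The data plot (last position) should sit farthest, on average, from the rest,
# which is what makes the lineup easy for a human observer.
print(int(np.argmax(mean_dist)))
```

A lineup whose null plots happen to yield large statistics shrinks the data plot's gap to the rest, which is exactly the lineup-quality question these distance metrics are designed to measure.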
Coherence and measurement in quantum thermodynamics
Thermodynamics is a highly successful macroscopic theory widely used across
the natural sciences and for the construction of everyday devices, from car
engines and fridges to power plants and solar cells. With thermodynamics
predating quantum theory, research now aims to uncover the thermodynamic laws
that govern finite-size systems which may in addition host quantum effects.
Here we identify information processing tasks, the so-called "projections",
that can only be formulated within the framework of quantum mechanics. We show
that the physical realisation of such projections can come with a non-trivial
thermodynamic work only for quantum states with coherences. This contrasts with
information erasure, first investigated by Landauer, for which a thermodynamic
work cost applies for classical and quantum erasure alike. Implications are
far-reaching, adding a thermodynamic dimension to measurements performed in
quantum thermodynamics experiments, and providing key input for the
construction of a future quantum thermodynamic framework. Repercussions are
discussed for quantum work fluctuation relations and thermodynamic single-shot
approaches.
Comment: 6 pages + appendix, 4 figures, v2: changed presentation, critically
discuss interpretation as measurement, added new conclusions; previous title:
"Quantum measurement and its role in thermodynamics"
Analysis of the Copenhagen Accord pledges and its global climatic impacts – a snapshot of dissonant ambitions
This analysis of the Copenhagen Accord evaluates emission reduction pledges
by individual countries against the Accord's climate-related objectives.
Probabilistic estimates of the climatic consequences for a set of resulting
multi-gas scenarios over the 21st century are calculated with a reduced
complexity climate model, yielding global temperature increase and
atmospheric CO2 and CO2-equivalent concentrations. Provisions for banked
surplus emission allowances and credits from land use, land-use change and
forestry are assessed and are shown to have the potential to lead to
significant deterioration of the ambition levels implied by the pledges in
2020. This analysis demonstrates that the Copenhagen Accord and the pledges
made under it represent a set of dissonant ambitions. The ambition level of
the current pledges for 2020 and the lack of commonly agreed goals for 2050
place in peril the Accord's own ambition: to limit global warming to below
2 °C, and even more so for 1.5 °C, which is referenced in the Accord in
association with potentially strengthening the long-term temperature goal
in 2015. Due to the limited level of ambition by 2020, the ability to limit
emissions afterwards to pathways consistent with either the 2 or 1.5 °C
goal is likely to become less feasible.
A Sparsity-Aware Adaptive Algorithm for Distributed Learning
In this paper, a sparsity-aware adaptive algorithm for distributed learning
in diffusion networks is developed. The algorithm follows the set-theoretic
estimation rationale. At each time instance and at each node of the network, a
closed convex set, known as the property set, is constructed based on the received
measurements; this defines the region in which the solution is searched for. In
this paper, the property sets take the form of hyperslabs. The goal is to find
a point that belongs to the intersection of these hyperslabs. To this end,
sparsity encouraging variable metric projections onto the hyperslabs have been
adopted. Moreover, sparsity is also imposed by employing variable metric
projections onto weighted ℓ1 balls. A combine-adapt cooperation strategy
is adopted. Under some mild assumptions, the scheme enjoys monotonicity,
asymptotic optimality and strong convergence to a point that lies in the
consensus subspace. Finally, numerical examples verify the validity of the
proposed scheme, compared to other algorithms, which have been developed in the
context of sparse adaptive learning.
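As a concrete illustration of the property sets, the sketch below (Python) implements the Euclidean projection onto a single hyperslab {v : |xᵀv − y| ≤ ε}; it uses the identity metric, i.e. it omits the sparsity-promoting variable-metric weighting of the actual algorithm:

```python
import numpy as np

def project_hyperslab(w, x, y, eps):
    """Euclidean projection of w onto the hyperslab {v : |x.T v - y| <= eps}."""
    r = x @ w - y                      # signed residual of w
    if r > eps:                        # above the slab: move down onto its face
        return w - (r - eps) / (x @ x) * x
    if r < -eps:                       # below the slab: move up onto its face
        return w - (r + eps) / (x @ x) * x
    return w                           # already inside: projection is w itself

rng = np.random.default_rng(1)
x = rng.normal(size=5)                 # regressor defining the slab
w = rng.normal(size=5)                 # current estimate at some node
y, eps = 2.0, 0.1                      # measurement and slab half-width

p = project_hyperslab(w, x, y, eps)
print(abs(x @ p - y) <= eps + 1e-12)   # the projected point satisfies the bound
```

In the diffusion setting each node forms one such slab per received measurement; the combine-adapt strategy then mixes neighbouring estimates before the projection step.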
Semantic Embedding Space for Zero-Shot Action Recognition
The number of categories for action recognition is growing rapidly. It is
thus becoming increasingly hard to collect sufficient training data to learn
conventional models for each category. This issue may be ameliorated by the
increasingly popular 'zero-shot learning' (ZSL) paradigm. In this framework a
mapping is constructed between visual features and a human interpretable
semantic description of each category, allowing categories to be recognised in
the absence of any training data. Existing ZSL studies focus primarily on image
data, and attribute-based semantic representations. In this paper, we address
zero-shot recognition in contemporary video action recognition tasks, using
semantic word vector space as the common space to embed videos and category
labels. This is more challenging because the mapping between the semantic space
and space-time features of videos containing complex actions is more complex
and harder to learn. We demonstrate that a simple self-training and data
augmentation strategy can significantly improve the efficacy of this mapping.
Experiments on human action datasets including HMDB51 and UCF101 demonstrate
that our approach achieves the state-of-the-art zero-shot action recognition
performance.
Comment: 5 pages
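A minimal sketch of the shared-embedding idea (Python, entirely synthetic: the word vectors, the linear generative model for "video features", and the least-squares mapping are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy label embeddings (stand-ins for word vectors of action names).
seen = ["run", "jump", "walk", "throw", "kick", "wave"]
unseen = ["swim", "climb"]
word_vec = {c: rng.normal(size=4) for c in seen + unseen}

# Simulated visual features: an unknown linear view of the label embedding
# plus noise (a stand-in for real space-time video features).
A = rng.normal(size=(4, 4))
def feature(c):
    return word_vec[c] @ A.T + 0.01 * rng.normal(size=4)

# Train a visual -> semantic regression on *seen* classes only.
X = np.vstack([feature(c) for c in seen for _ in range(10)])
Y = np.vstack([word_vec[c] for c in seen for _ in range(10)])
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Zero-shot test: embed an unseen-class sample and pick the nearest
# unseen label embedding in the shared semantic space.
z = feature("swim") @ W
pred = min(unseen, key=lambda c: np.linalg.norm(z - word_vec[c]))
print(pred)
```

No "swim" example appears in training, yet the class is recovered because the learned mapping generalises across the shared semantic space; the self-training and data augmentation of the paper address the regime where such a naive mapping degrades.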