An Empirical Approach to Temporal Reference Resolution
This paper presents the results of an empirical investigation of temporal
reference resolution in scheduling dialogs. The algorithm adopted is primarily
a linear-recency based approach that does not include a model of global focus.
A fully automatic system has been developed and evaluated on unseen test data
with good results. This paper presents the results of an intercoder reliability
study, a model of temporal reference resolution that supports linear recency
and has very good coverage, the results of the system evaluated on unseen test
data, and a detailed analysis of the dialogs assessing the viability of the
approach.
Comment: 13 pages, latex using aclap.sty
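A linear-recency strategy of the kind the abstract describes can be sketched as follows. This is an illustrative reading, not the paper's actual algorithm: the day-of-month case and the function name are assumptions, and the real system handles a much wider range of temporal expressions.

```python
from datetime import date

def resolve_day_of_month(day, recent_dates):
    """Resolve a bare day-of-month mention (e.g. "the 14th") against the
    most recently mentioned compatible date, scanning newest-first
    (linear recency, no model of global focus)."""
    for anchor in reversed(recent_dates):   # most recent mention first
        try:
            # inherit month and year from the nearest compatible antecedent
            return anchor.replace(day=day)
        except ValueError:
            continue   # e.g. day 31 in a 30-day month; keep scanning back
    return None

# Dates mentioned so far in the scheduling dialog, oldest first:
mentioned = [date(1995, 1, 30), date(1995, 2, 3)]
print(resolve_day_of_month(14, mentioned))   # 1995-02-14
```

Note how recency drives the result: "the 31st" would skip the February anchor (no February 31) and fall back to the January mention.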
In pursuit of satisfaction and the prevention of embarrassment : affective state in group recommender systems
Peer reviewed. Postprint.
Quantifying the impact and relevance of scientific research
Qualitative and quantitative methods are being developed to measure the impacts of research on society, but they suffer
from serious drawbacks associated with linking a piece of research to its subsequent impacts. We have developed a method to derive impact scores for individual research publications according to their contribution to answering questions of quantified importance to end users of research. To demonstrate the approach, here we evaluate the impacts of research into means of conserving wild bee populations in the UK. For published papers, there is a weak positive correlation between our impact score and the impact factor of the journal. The process identifies publications that provide high quality evidence relating to issues of strong concern. It can also be used to set future research agendas.
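The abstract does not give the scoring formula; the snippet below is one hypothetical reading of "contribution to answering questions of quantified importance" as an importance-weighted sum, with made-up question names and weights.

```python
# Hypothetical illustration: importance weights come from end users of
# research, contribution scores from assessing one publication against
# each question. Both dictionaries here are invented example values.
importance = {"pesticide_effects": 0.9, "habitat_loss": 0.6}
contribution = {"pesticide_effects": 0.7, "habitat_loss": 0.2}

impact_score = sum(importance[q] * contribution[q] for q in importance)
print(round(impact_score, 2))   # 0.75
```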
Parametric kernels for structured data analysis
Structured representation of input physical patterns as a set of local features has been useful for a variety of robotics and human computer interaction (HCI) applications. It enables a stable understanding of variable inputs. However, this representation does not fit the conventional machine learning algorithms and distance metrics because they assume vector inputs. Learning from input patterns with variable structure is thus challenging. To address this problem, I propose a general and systematic method to design distance metrics between structured inputs that can be used in conventional learning algorithms. Based on the observation of the stability in the geometric distributions of local features over the physical patterns across similar inputs, this is done by combining the local similarities and the conformity of the geometric relationship between local features. The produced distance metrics, called "parametric kernels", are positive semi-definite and require almost linear time to compute. To demonstrate the general applicability and the efficacy of this approach, I designed and applied parametric kernels to handwritten character recognition, on-line face recognition, and object detection from laser range finder sensor data. Parametric kernels achieve recognition rates competitive with state-of-the-art approaches in these tasks.
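One standard way to combine local similarity with geometric conformity into a positive semi-definite kernel is to sum, over all feature pairs, the product of an appearance kernel and a position kernel (products and sums of PSD kernels are PSD). The sketch below illustrates that construction; it is an assumption about the general shape of the method, not the thesis's actual kernel, and its brute-force double loop is quadratic rather than the almost-linear time the abstract claims.

```python
import numpy as np

def parametric_kernel(X, Y, gamma_f=1.0, gamma_p=1.0):
    """Kernel between two structured inputs, each a list of
    (feature_vector, position) pairs.

    For every feature pair, multiply an appearance RBF (local
    similarity) by a geometric RBF (conformity of positions), then sum;
    the result is positive semi-definite by construction."""
    total = 0.0
    for f1, p1 in X:
        for f2, p2 in Y:
            k_feat = np.exp(-gamma_f * np.sum((f1 - f2) ** 2))
            k_geom = np.exp(-gamma_p * np.sum((p1 - p2) ** 2))
            total += k_feat * k_geom
    return total
```

Because it returns a PSD kernel value, it can be dropped directly into kernel-based learners (e.g. an SVM with a precomputed Gram matrix).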
How uncertainty affects career behaviour : a narrative approach
Despite increased uncertainty in the environment, the role of uncertainty in people's careers is poorly understood. Those few theories that account for uncertainty portray it as a negative influence on people's careers that should therefore be reduced or avoided. This article presents an empirical study that investigated the impact of uncertainty on people's career behaviour using a narrative approach. The findings reveal that people have different understandings of career uncertainty, which lead to distinct differences in subsequent career behaviour. Specifically, we identified four qualitatively different meanings of career uncertainty, which we have called Stabiliser, Glider, Energiser and Adventurer. The findings add to the existing literature by showing how each meaning of career uncertainty affects career decision making, the criteria used to gauge career success and meaning, and the negotiation of transitions. This significantly broadens the current conceptualisation of career uncertainty and its impact on career behaviour beyond the existing literature.
Framing as Path-Dependence
A "framing" effect occurs when an agent's choices are not invariant under changes in the way a choice problem is formulated, e.g. changes in the way the options are described (violation of description invariance) or in the way preferences are elicited (violation of procedure invariance). In this paper we examine precisely which classical conditions of rationality are those whose non-satisfaction may lead to framing effects. We show that (under certain conditions), if (and only if) an agent's initial dispositions on a set of propositions are "implicitly inconsistent", her decisions may be "path-dependent", i.e. dependent on the order in which the propositions are considered. We suggest that different ways of framing a choice problem may induce the order in which relevant propositions are considered and hence affect the decision made. This theoretical explanation suggests some observations about human psychology which are consistent with those made by psychologists, and provides a unified framework for explaining violations of description and procedure invariance.
Keywords: framing, preference reversal, path-dependence, rationality, deductive closure
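The path-dependence mechanism can be made concrete with a toy sequential-acceptance procedure: accept each disposition in the order considered, but only if it is consistent with everything accepted so far. The encoding below is illustrative (the paper's formal framework is richer); with the implicitly inconsistent dispositions {p, p→q, ¬q}, two orders of consideration yield two different accepted sets.

```python
from itertools import product

# Propositions over atoms p, q, encoded as truth functions.
P     = lambda p, q: p
IMPL  = lambda p, q: (not p) or q    # p -> q
NOT_Q = lambda p, q: not q

def consistent(props):
    """A set of propositions is consistent iff some truth assignment
    over (p, q) satisfies all of them."""
    return any(all(f(p, q) for f in props)
               for p, q in product([True, False], repeat=2))

def path_dependent_choice(dispositions):
    """Accept each proposition in the order considered, but only if it
    is jointly consistent with everything accepted so far."""
    accepted = []
    for f in dispositions:
        if consistent(accepted + [f]):
            accepted.append(f)
    return accepted

# Same implicitly inconsistent dispositions, two orders of consideration:
a = path_dependent_choice([P, IMPL, NOT_Q])   # keeps p and p->q, drops not-q
b = path_dependent_choice([NOT_Q, IMPL, P])   # keeps not-q and p->q, drops p
```

Since each framing of the choice problem can induce a different order of consideration, the two runs model how framing produces different decisions from the same underlying dispositions.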
Learning Rank Reduced Interpolation with Principal Component Analysis
In computer vision most iterative optimization algorithms, both sparse and
dense, rely on a coarse and reliable dense initialization to bootstrap their
optimization procedure. For example, dense optical flow algorithms profit
massively in speed and robustness if they are initialized well in the basin of
convergence of the used loss function. The same holds true for methods such as
sparse feature tracking when initial flow or depth information for new features
at arbitrary positions is needed. This makes it extremely important to have
techniques at hand that allow one to obtain, from only very few available
measurements, a dense but still approximate sketch of a desired 2D structure
(e.g. depth maps, optical flow, disparity maps, etc.). The 2D map is regarded
as a sample from a 2D random process. The method presented here exploits the
complete information given by the principal component analysis (PCA) of that
process, the principal basis and its prior distribution. The method is able to
determine a dense reconstruction from sparse measurements. When facing
situations with only very sparse measurements, the number of principal
components is typically reduced further, which results in a loss of
expressiveness of the basis. We overcome this problem by injecting prior
knowledge in a maximum a posteriori (MAP) approach. We test our approach on the
KITTI and the virtual KITTI datasets and focus on the interpolation of depth
maps for driving scenes. The evaluation of the results shows good agreement
with the ground truth; the results are clearly better than those of
interpolation by the nearest neighbor method, which disregards statistical
information.
Comment: Accepted at Intelligent Vehicles Symposium (IV), Los Angeles, USA, June 201
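A MAP reconstruction of this general shape, fitting PCA coefficients to sparse measurements under a Gaussian prior from the basis's variances, can be sketched as follows. This is a minimal illustration of the principle, not the paper's implementation; the function name, the noise model, and the toy data are assumptions.

```python
import numpy as np

def map_interpolate(U, mean, var, idx, y, noise=1e-2):
    """MAP estimate of PCA coefficients from sparse measurements.

    U: (d, k) principal basis; mean: (d,) mean map; var: (k,) prior
    variances of the coefficients; idx: indices of the measured pixels;
    y: measured values at those pixels.

    Minimises ||U[idx] @ c - (y - mean[idx])||^2 / noise + c @ (c / var),
    which has the ridge-regression-style closed form solved below; the
    coefficients are then mapped back to a dense map."""
    Us = U[idx]                                   # basis rows at measured pixels
    A = Us.T @ Us / noise + np.diag(1.0 / var)    # data term + Gaussian prior
    b = Us.T @ (y - mean[idx]) / noise
    c = np.linalg.solve(A, b)                     # MAP coefficients
    return mean + U @ c                           # dense reconstruction

# Toy check: a 1D "depth map" that lies exactly in a 2-component basis.
rng = np.random.default_rng(0)
d, k = 50, 2
U, _ = np.linalg.qr(rng.normal(size=(d, k)))      # orthonormal basis
mean = np.zeros(d)
truth = U @ np.array([3.0, -1.0])
idx = np.array([1, 5, 10, 18, 25, 33, 40, 47])    # eight sparse measurements
dense = map_interpolate(U, mean, np.array([10.0, 10.0]), idx, truth[idx])
print(float(np.max(np.abs(dense - truth))))       # small reconstruction error
```

The prior term is what keeps the solve well-posed even when there are fewer measurements than pixels; with a broad prior and clean measurements, the dense map is recovered almost exactly.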