
    An Empirical Approach to Temporal Reference Resolution

    This paper presents the results of an empirical investigation of temporal reference resolution in scheduling dialogs. The algorithm adopted is primarily a linear-recency based approach that does not include a model of global focus. A fully automatic system has been developed and evaluated on unseen test data with good results. Specifically, the paper presents the results of an intercoder reliability study, a model of temporal reference resolution that supports linear recency and has very good coverage, the results of the system's evaluation on unseen test data, and a detailed analysis of the dialogs assessing the viability of the approach.
    Comment: 13 pages, latex using aclap.st
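    The abstract does not spell out the mechanics of linear recency; purely as a hedged illustration (the data structures and dialog below are invented, not the paper's system), resolving an underspecified temporal expression by linear recency amounts to scanning backwards through the dialog and taking the most recent compatible antecedent:

```python
# Illustrative sketch of linear-recency resolution; NOT the paper's algorithm.
# Assumes each utterance carries a (hypothetical) list of fully resolved dates.
from datetime import date

def resolve_by_linear_recency(dialog_history, partial_expr):
    """Resolve an underspecified temporal expression (e.g. only a month given)
    against the most recently mentioned compatible date, scanning backwards."""
    for utterance in reversed(dialog_history):
        for referent in reversed(utterance.get("dates", [])):
            if all(getattr(referent, field) == value
                   for field, value in partial_expr.items()):
                return referent
    return None

history = [
    {"speaker": "A", "text": "How about March 14th?", "dates": [date(1995, 3, 14)]},
    {"speaker": "B", "text": "I'm busy then. The 16th?", "dates": [date(1995, 3, 16)]},
]
# An expression that only constrains the month resolves to the latest antecedent.
print(resolve_by_linear_recency(history, {"month": 3}))  # 1995-03-16
```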

    Quantifying the impact and relevance of scientific research

    Qualitative and quantitative methods are being developed to measure the impacts of research on society, but they suffer from serious drawbacks associated with linking a piece of research to its subsequent impacts. We have developed a method to derive impact scores for individual research publications according to their contribution to answering questions of quantified importance to end users of research. To demonstrate the approach, here we evaluate the impacts of research into means of conserving wild bee populations in the UK. For published papers, there is a weak positive correlation between our impact score and the impact factor of the journal. The process identifies publications that provide high-quality evidence relating to issues of strong concern. It can also be used to set future research agendas.
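    The scoring formula itself is defined in the paper; purely as a hedged sketch of the general idea (the weights, scales, and question names below are assumptions, not the authors' data or method), an importance-weighted contribution score might look like this:

```python
# Hypothetical illustration only: the impact score is assumed here to be the sum
# of a paper's contribution to each end-user question, weighted by importance.
def impact_score(contributions, importance):
    """contributions: {question_id: contribution in [0, 1]} (assumed scale)
    importance: {question_id: importance weight elicited from end users}"""
    return sum(importance[q] * c for q, c in contributions.items())

importance = {"pesticide_effects": 0.9, "habitat_strips": 0.6}     # made-up weights
contributions = {"pesticide_effects": 0.4, "habitat_strips": 0.2}  # made-up scores
print(impact_score(contributions, importance))  # 0.48
```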

    How uncertainty affects career behaviour : a narrative approach

    Despite increased uncertainty in the environment, the role of uncertainty in people’s careers is poorly understood. The few theories that do account for uncertainty portray it as a negative influence on people’s careers, one that should therefore be reduced or avoided. This article presents an empirical study that investigated the impact of uncertainty on people’s career behaviour using a narrative approach. The findings reveal that people have different understandings of career uncertainty, which lead to distinct differences in subsequent career behaviour. Specifically, we identified four qualitatively different meanings of career uncertainty, which we have called Stabiliser, Glider, Energiser and Adventurer. The findings add to the existing literature by showing how each meaning of career uncertainty affects career decision making, the criteria used to gauge career success and meaning, and the negotiation of transitions. This significantly broadens the current conceptualisation of career uncertainty and its impact on career behaviour beyond the existing literature.

    Framing as Path-Dependence

    A “framing” effect occurs when an agent’s choices are not invariant under changes in the way a choice problem is formulated, e.g. changes in the way the options are described (violation of description invariance) or in the way preferences are elicited (violation of procedure invariance). In this paper we examine precisely which classical conditions of rationality, when not satisfied, may lead to framing effects. We show that (under certain conditions), if (and only if) an agent's initial dispositions on a set of propositions are “implicitly inconsistent”, her decisions may be “path-dependent”, i.e. dependent on the order in which the propositions are considered. We suggest that different ways of framing a choice problem may induce the order in which relevant propositions are considered and hence affect the decision made. This theoretical explanation suggests some observations about human psychology which are consistent with those made by psychologists, and it provides a unified framework for explaining violations of description and procedure invariance.
    Keywords: framing, preference reversal, path-dependence, rationality, deductive closure
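    A small worked example (the propositions, dispositions, and orders below are illustrative, not taken from the paper) makes the claim concrete: with implicitly inconsistent dispositions such as “accept p”, “accept q”, “reject p∧q”, the judgments an agent ends up with depend on the order in which the propositions are considered, because earlier judgments constrain later ones via consistency:

```python
# Illustrative sketch of path-dependence; not the paper's formal framework.
# Dispositions: accept p, accept q, reject (p and q) -- implicitly inconsistent.
from itertools import product

DISPOSITIONS = {"p": True, "q": True, "p&q": False}

def consistent(assignment):
    """A full assignment is logically consistent iff p&q agrees with p and q."""
    return assignment["p&q"] == (assignment["p"] and assignment["q"])

def decide(order):
    """Consider propositions in the given order; follow the initial disposition
    unless it cannot be consistently combined with judgments already made."""
    judged = {}
    for prop in order:
        for value in (DISPOSITIONS[prop], not DISPOSITIONS[prop]):
            trial = dict(judged, **{prop: value})
            rest = [x for x in DISPOSITIONS if x not in trial]
            # Keep the value if the partial assignment extends to a consistent one.
            if any(consistent({**trial, **dict(zip(rest, vals))})
                   for vals in product([True, False], repeat=len(rest))):
                judged[prop] = value
                break
    return judged

print(decide(["p", "q", "p&q"]))   # p and q accepted first -> p&q forced to True
print(decide(["p&q", "p", "q"]))   # p&q rejected first -> q forced to False
```

    The two orders yield different overall judgment sets, which is exactly the kind of path-dependence that, on the paper's account, underlies framing effects.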

    Learning Rank Reduced Interpolation with Principal Component Analysis

    In computer vision most iterative optimization algorithms, both sparse and dense, rely on a coarse and reliable dense initialization to bootstrap their optimization procedure. For example, dense optical flow algorithms profit massively in speed and robustness if they are initialized well within the basin of convergence of the used loss function. The same holds true for methods such as sparse feature tracking, when initial flow or depth information for new features at arbitrary positions is needed. This makes it extremely important to have techniques at hand that allow a dense but still approximate sketch of a desired 2D structure (e.g. depth maps, optical flow, disparity maps, etc.) to be obtained from only very few available measurements. The 2D map is regarded as a sample from a 2D random process. The method presented here exploits the complete information given by the principal component analysis (PCA) of that process: the principal basis and its prior distribution. The method is able to determine a dense reconstruction from sparse measurements. When facing situations with only very sparse measurements, the number of principal components is typically further reduced, which results in a loss of expressiveness of the basis. We overcome this problem by injecting prior knowledge in a maximum a posteriori (MAP) approach. We test our approach on the KITTI and the virtual KITTI datasets and focus on the interpolation of depth maps for driving scenes. The evaluation shows good agreement with the ground truth, with results clearly better than interpolation by the nearest-neighbor method, which disregards statistical information.
    Comment: Accepted at Intelligent Vehicles Symposium (IV), Los Angeles, USA, June 201
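    As a hedged sketch of the general idea (variable names, the Gaussian noise model, and the toy data are assumptions, not the authors' implementation): given the principal basis, the mean map, and the prior variances of the PCA coefficients, a MAP estimate of the coefficients from sparse measurements has a closed form and directly yields the dense reconstruction:

```python
# Minimal sketch of MAP interpolation in a PCA basis; assumes Gaussian measurement
# noise and a Gaussian prior on the coefficients (eigenvalues as prior variances).
import numpy as np

def map_reconstruct(basis, mean, prior_var, idx, values, noise_var=1e-2):
    """basis: (D, k) principal components; mean: (D,) mean map; prior_var: (k,)
    eigenvalues; idx, values: locations and values of the sparse measurements."""
    B = basis[idx]                       # basis rows at the measured positions
    r = values - mean[idx]               # residual w.r.t. the mean map
    # MAP coefficients: argmin ||B c - r||^2 / noise_var + sum_i c_i^2 / prior_var_i
    A = B.T @ B / noise_var + np.diag(1.0 / prior_var)
    c = np.linalg.solve(A, B.T @ r / noise_var)
    return mean + basis @ c              # dense reconstruction

# Toy low-rank "2D random process" flattened to vectors, just to exercise the code.
rng = np.random.default_rng(0)
train = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 64)) \
        + 0.05 * rng.normal(size=(200, 64))
mean = train.mean(axis=0)
_, s, Vt = np.linalg.svd(train - mean, full_matrices=False)
k = 10
basis = Vt[:k].T                                   # (64, k) principal basis
prior_var = (s[:k] ** 2) / (len(train) - 1)        # sample eigenvalues
idx = rng.choice(64, size=8, replace=False)        # very sparse measurements
truth = train[0]
dense = map_reconstruct(basis, mean, prior_var, idx, truth[idx])
print(np.abs(dense - truth).mean())                # small reconstruction error
```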