
    Metaphor Aptness and Conventionality: A Processing Fluency Account

    Conventionality and aptness are two dimensions of metaphorical sentences thought to play an important role in determining how quickly and easily a metaphor is processed. Conventionality reflects the familiarity of a metaphor, whereas aptness reflects the degree to which a metaphor vehicle captures important features of a metaphor topic. In recent years it has become clear that operationalizing these two constructs is not as simple as asking naïve raters for subjective judgments. Ratings of aptness and conventionality have been found to be highly correlated, which has led some researchers to pursue alternative methods for measuring the constructs. Here, in four experiments, we explore the underlying reasons for the high correlation between ratings of aptness and conventionality, and question the construct validity of various methods for measuring the two dimensions. We find that manipulating the processing fluency of a metaphorical sentence by familiarizing raters with similar senses of the metaphor (“in vivo conventionalization”) influences ratings of the sentence's aptness. This misattribution may help explain why subjective ratings of aptness and conventionality are highly correlated. In addition, we find other reasons to question the construct validity of conventionality and aptness measures: for instance, conventionality is context dependent and thus not attributable to a metaphor vehicle alone, and ratings of aptness take more into account than they should.
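    The misattribution account lends itself to a small simulation. The sketch below is our illustration, not material from the paper; the variable names and effect sizes are assumptions. It shows how two independent constructs can produce highly correlated subjective ratings once both ratings also absorb a shared processing-fluency signal.

```python
# Illustrative sketch (not the paper's materials): if both aptness and
# conventionality ratings load on a shared processing-fluency component,
# their observed correlation is inflated even when the underlying
# constructs are independent. Effect sizes here are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_items = 200

fluency = rng.normal(0.0, 1.0, n_items)               # shared fluency signal
true_aptness = rng.normal(0.0, 1.0, n_items)           # independent constructs
true_conventionality = rng.normal(0.0, 1.0, n_items)

# Subjective ratings: each construct plus a misattributed fluency component.
rated_aptness = true_aptness + 1.5 * fluency + rng.normal(0.0, 0.5, n_items)
rated_conventionality = true_conventionality + 1.5 * fluency + rng.normal(0.0, 0.5, n_items)

print(np.corrcoef(true_aptness, true_conventionality)[0, 1])    # near zero
print(np.corrcoef(rated_aptness, rated_conventionality)[0, 1])  # substantially positive
```

    Under these assumptions the true constructs are uncorrelated, yet the observed ratings correlate substantially, purely through the shared fluency term.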

    Simultaneous inference for misaligned multivariate functional data

    We consider inference for misaligned multivariate functional data that represent the same underlying curve but where the functional samples have systematic differences in shape. In this paper we introduce a new class of generally applicable models in which warping effects are modeled through nonlinear transformations of latent Gaussian variables and systematic shape differences are modeled by Gaussian processes. To model cross-covariance between sample coordinates, we introduce a class of low-dimensional cross-covariance structures suitable for modeling multivariate functional data. We present a method for maximum-likelihood estimation in these models and apply it to three data sets. The first data set is from a motion tracking system in which the spatial positions of a large number of body markers are tracked in three dimensions over time. The second data set consists of height and weight measurements for Danish boys. The third data set consists of three-dimensional spatial hand paths from a controlled obstacle-avoidance experiment. We use the developed method to estimate the cross-covariance structure, and use a classification setup to demonstrate that the method outperforms state-of-the-art methods for handling misaligned curve data.
    Comment: 44 pages in total, including tables and figures, plus 9 pages of supplementary material and references.
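    To make the model class concrete, here is a hedged simulation sketch (our construction, not the authors' code; the warp form, kernel, and parameter values are assumptions): each observed curve is a common mean curve evaluated at a nonlinearly warped time driven by a latent Gaussian variable, plus a Gaussian-process amplitude effect and measurement noise.

```python
# Illustrative sketch only: simulate misaligned functional samples in the
# spirit of the model. The specific warp, kernel, and parameters are
# assumptions for demonstration, not the paper's specification.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)

def warp(t, w):
    # Nonlinear time warp driven by a latent Gaussian variable w;
    # monotone for small |w|, and fixes the endpoints 0 and 1.
    return t + 0.1 * w * np.sin(np.pi * t)

def gp_sample(t, length_scale=0.2, variance=0.05):
    # One smooth Gaussian-process path under a squared-exponential kernel.
    d = t[:, None] - t[None, :]
    K = variance * np.exp(-0.5 * (d / length_scale) ** 2)
    return rng.multivariate_normal(np.zeros(len(t)), K + 1e-10 * np.eye(len(t)))

def mean_curve(s):
    # Common underlying curve shared by all samples.
    return np.sin(2.0 * np.pi * s)

# Each sample: warped evaluation of the mean curve + GP amplitude effect + noise.
curves = [
    mean_curve(warp(t, rng.normal())) + gp_sample(t) + rng.normal(0.0, 0.02, len(t))
    for _ in range(5)
]
```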

    Trustworthy Experimentation Under Telemetry Loss

    Failure to accurately measure the outcomes of an experiment can lead to bias and incorrect conclusions. Online controlled experiments (also known as A/B tests) are increasingly used to make decisions that improve websites as well as mobile and desktop applications. We argue that loss of telemetry data (during upload or post-processing) can skew the results of experiments, leading to loss of statistical power and inaccurate or erroneous conclusions. By systematically investigating the causes of telemetry loss, we argue that it is not practical to eliminate it entirely; consequently, experimentation systems need to be robust to its effects. Furthermore, we note that it is nontrivial to measure the absolute level of telemetry loss in an experimentation system. In this paper, we take a top-down approach to solving this problem. We motivate the impact of loss qualitatively using experiments in real applications deployed at scale, and formalize the problem by presenting a theoretical breakdown of the bias introduced by loss. Based on this foundation, we present a general framework for quantitatively evaluating the impact of telemetry loss, and present two solutions for measuring absolute levels of loss. This framework is used by well-known applications at Microsoft, with millions of users and billions of sessions. These general principles can be adopted by any application to improve the overall trustworthiness of experimentation and data-driven decision making.
    Comment: Proceedings of the 27th ACM International Conference on Information and Knowledge Management, October 2018
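    The core hazard can be demonstrated in a few lines. The following sketch is illustrative only (it is not Microsoft's framework; the loss rates and threshold are assumptions): when telemetry loss is correlated with both the treatment and the outcome, the estimated effect is biased even though the true effect is zero.

```python
# Hedged sketch: differential, outcome-correlated telemetry loss biases the
# estimated treatment effect. All numbers here are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

control = rng.normal(10.0, 2.0, n)     # metric per session, control group
treatment = rng.normal(10.0, 2.0, n)   # same distribution: true effect is zero

# Suppose the treatment makes high-metric sessions more likely to fail
# telemetry upload: 15% loss above the threshold versus 5% uniform loss.
keep_c = rng.random(n) < 0.95
keep_t = rng.random(n) < np.where(treatment > 12.0, 0.85, 0.95)

observed = treatment[keep_t].mean() - control[keep_c].mean()
print(f"observed treatment effect: {observed:+.3f}")  # negative despite a zero true effect
```

    Because high-value treatment sessions are lost more often, the surviving treatment sample understates the metric, producing a spurious negative effect.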