
    Translation and human-computer interaction

    This paper seeks to characterise translation as a form of human-computer interaction. The evolution of translator-computer interaction is explored and its challenges and benefits are set out. The concept of cognitive ergonomics is drawn on to argue for a more caring and inclusive approach towards the translator by developers of translation technology. A case is also made for wider acceptance by the translation community of the benefits of the technology at their disposal, and for more humanistic research on the impact of technology on the translator, the translation profession and the translation process.

    Evaluating Cache Coherent Shared Virtual Memory for Heterogeneous Multicore Chips

    The trend in industry is towards heterogeneous multicore processors (HMCs), including chips with CPUs and massively-threaded throughput-oriented processors (MTTOPs) such as GPUs. Although current homogeneous chips tightly couple their cores with cache-coherent shared virtual memory (CCSVM), this is not the communication paradigm used by any current HMC. In this paper, we present a CCSVM design for a CPU/MTTOP chip, as well as an extension of the pthreads programming model, called xthreads, for programming this HMC. Our goal is to evaluate the potential performance benefits of tightly coupling heterogeneous cores with CCSVM.

    Reading in a foreign language: Strategic variation between readers of differing proficiency

    For university language students who are required to deal with literary texts for linguistic or literary purposes, there is hardly any transitional stage between the short adapted expository texts read in the early stages of language learning and the complex literary texts encountered at university in the literature class. Language readers must then make a substantial mental effort to understand texts intended for a native readership. In this challenging reading mode, the quality of reading depends on the efficiency of problem-solving operations, including evaluative and executive strategies, put into place in order to fill in the comprehension gaps present in complex texts. Although the reading strategies used by foreign language learners have been identified and categorised by research, the conditions of their use and their relationships are still unclear. Moreover, to my knowledge, no empirical investigation has focused specifically on comprehension monitoring in the context of foreign language literary texts. Literature instruction would benefit from such a study. Using verbal reports to elicit data, this study examines how proficient and less proficient university students of French, at an intermediate level of instruction, implement problem-solving strategies when reading literary texts. Strategies such as guessing at words, consulting a dictionary, and translating mentally are studied in relation to their contribution to the overall monitoring cycle. The results indicate that proficient and less proficient readers tend to use the same strategies, but with different purposes. The study demonstrates that the major difference between the two groups of respondents resides in the ability some readers have to integrate meaning and construct text in a cohesive and synthetic fashion.

    Move Forward and Tell: A Progressive Generator of Video Descriptions

    We present an efficient framework that can generate a coherent paragraph to describe a given video. Previous works on video captioning usually focus on video clips: they typically treat an entire video as a whole and generate the caption conditioned on a single embedding. In contrast, we consider videos with rich temporal structures and aim to generate paragraph descriptions that preserve the story flow while being coherent and concise. Towards this goal, we propose a new approach that produces a descriptive paragraph by assembling temporally localized descriptions. Given a video, it selects a sequence of distinctive clips and generates sentences for them in a coherent manner. In particular, the selection of clips and the production of sentences are done jointly and progressively, driven by a recurrent network: what to describe next depends on what has been said before. The recurrent network is learned via self-critical sequence training with both sentence-level and paragraph-level rewards. On the ActivityNet Captions dataset, our method demonstrates the capability of generating high-quality paragraph descriptions for videos. Compared to those by other methods, the descriptions produced by our method are often more relevant, more coherent, and more concise. Comment: Accepted by ECCV 2018.
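    The abstract above names the main components (clip selection, sentence generation, a recurrent state carrying what has been said) but not their interfaces. The following is a minimal sketch, in Python, of the progressive decoding loop as described; select_next_clip, generate_sentence, and update_state are hypothetical stand-ins for learned modules the paper would define, not an actual API from this work.

        # Sketch of a progressive paragraph generator: a recurrent state summarises
        # what has been said so far; each step picks the next distinctive clip and
        # emits a sentence conditioned on that state. The three callables are
        # hypothetical placeholders for learned modules.
        from typing import Callable, List, Optional

        def describe_video(clip_features: List,
                           select_next_clip: Callable[[List, int, object], Optional[int]],
                           generate_sentence: Callable[[object, object], str],
                           update_state: Callable[[object, object, str], object],
                           init_state: object,
                           max_sentences: int = 8) -> List[str]:
            """Assemble a paragraph from temporally localized sentence descriptions."""
            state = init_state                    # summary of the story told so far
            paragraph: List[str] = []
            cursor = 0                            # only move forward in time
            for _ in range(max_sentences):
                clip_idx = select_next_clip(clip_features, cursor, state)
                if clip_idx is None:              # no distinctive clip left to describe
                    break
                sentence = generate_sentence(clip_features[clip_idx], state)
                state = update_state(state, clip_features[clip_idx], sentence)
                paragraph.append(sentence)
                cursor = clip_idx + 1
            return paragraph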

    Recurrent Memory Networks for Language Modeling

    Recurrent Neural Networks (RNNs) have obtained excellent results in many natural language processing (NLP) tasks. However, understanding and interpreting the source of this success remains a challenge. In this paper, we propose the Recurrent Memory Network (RMN), a novel RNN architecture that not only amplifies the power of RNNs but also facilitates our understanding of their internal functioning and allows us to discover underlying patterns in data. We demonstrate the power of RMN on language modeling and sentence completion tasks. On language modeling, RMN outperforms Long Short-Term Memory (LSTM) networks on three large German, Italian, and English datasets. Additionally, we perform an in-depth analysis of various linguistic dimensions that RMN captures. On the Sentence Completion Challenge, for which it is essential to capture sentence coherence, our RMN obtains 69.2% accuracy, surpassing the previous state of the art by a large margin. Comment: 8 pages, 6 figures. Accepted at NAACL 2016.
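    The abstract describes the RMN only at a high level; below is a minimal sketch of the underlying idea, assuming a PyTorch setup: an LSTM language model whose hidden state attends over the embeddings of the most recent input words before predicting the next word. The class name, layer sizes, memory span, and exact attention form are illustrative assumptions, not the published architecture.

        # Sketch of a memory-augmented LSTM language model: at each position the
        # hidden state queries the embeddings of the last `memory_size` words, and
        # the resulting context is combined with the hidden state for prediction.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class MemoryBlockLM(nn.Module):
            def __init__(self, vocab_size, embed_dim=128, hidden_dim=128, memory_size=15):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, embed_dim)
                self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
                self.query = nn.Linear(hidden_dim, embed_dim)     # hidden state -> query
                self.out = nn.Linear(hidden_dim + embed_dim, vocab_size)
                self.memory_size = memory_size

            def forward(self, tokens):                            # tokens: (batch, seq_len)
                emb = self.embed(tokens)                          # (batch, seq_len, embed)
                hidden, _ = self.lstm(emb)                        # (batch, seq_len, hidden)
                logits = []
                for t in range(tokens.size(1)):
                    lo = max(0, t + 1 - self.memory_size)
                    memory = emb[:, lo:t + 1]                     # recent word embeddings
                    q = self.query(hidden[:, t]).unsqueeze(2)     # (batch, embed, 1)
                    attn = F.softmax(memory.bmm(q), dim=1)        # weights over memory slots
                    context = (attn * memory).sum(dim=1)          # weighted sum of embeddings
                    logits.append(self.out(torch.cat([hidden[:, t], context], dim=-1)))
                return torch.stack(logits, dim=1)                 # next-word logits per step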
