Scalable parallel computation of the translation operator in three dimensions
We propose a novel algorithm for the parallel, distributed-memory computation of the translation operator in the three-dimensional multilevel fast multipole algorithm (MLFMA). Sequential algorithms can compute the translation operator with L multipoles and O(L²) sampling points in O(L²) time. State-of-the-art hierarchical parallelization schemes of the MLFMA rely on the distribution of radiation patterns and associated translation operators among P = O(L²) parallel processes, necessitating the development of distributed-memory algorithms for the computation of the translation operator. Whereas a baseline parallel algorithm computes this translation operator in O(L) time, we propose an algorithm that achieves this in only O(log L) time. For large translation operators and a high number of parallel processes, our algorithm proves to be roughly ten times faster than the baseline algorithm.
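The abstract gives only complexity bounds, not the algorithm itself. An O(log) time bound over P processes is characteristic of tree-structured combining, so as a generic illustration of that primitive (not the authors' method; names and structure are invented for this sketch), here is a sequential toy simulation of a pairwise reduction that counts the communication rounds a distributed version would need:

```python
def tree_reduce_rounds(values):
    """Combine P partial results in O(log P) rounds by pairwise
    combination; each round halves the number of active 'processes'.
    Returns (combined_value, number_of_rounds)."""
    vals = list(values)
    rounds = 0
    while len(vals) > 1:
        # Pair up neighbours; in a real distributed-memory code each
        # pair would be one point-to-point communication step.
        vals = [vals[i] + (vals[i + 1] if i + 1 < len(vals) else 0)
                for i in range(0, len(vals), 2)]
        rounds += 1
    return vals[0], rounds
```

With 8 partial results the combination finishes in 3 rounds rather than 7 sequential steps, which is the flavour of the O(L) versus O(log L) gap the abstract describes.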
The Virtual Block Interface: A Flexible Alternative to the Conventional Virtual Memory Framework
Computers continue to diversify with respect to system designs, emerging
memory technologies, and application memory demands. Unfortunately, continually
adapting the conventional virtual memory framework to each possible system
configuration is challenging, and often results in performance loss or requires
non-trivial workarounds. To address these challenges, we propose a new virtual
memory framework, the Virtual Block Interface (VBI). We design VBI based on the
key idea that delegating memory management duties to hardware can reduce the
overheads and software complexity associated with virtual memory. VBI
introduces a set of variable-sized virtual blocks (VBs) to applications. Each
VB is a contiguous region of the globally-visible VBI address space, and an
application can allocate each semantically meaningful unit of information
(e.g., a data structure) in a separate VB. VBI decouples access protection from
memory allocation and address translation. While the OS controls which programs
have access to which VBs, dedicated hardware in the memory controller manages
the physical memory allocation and address translation of the VBs. This
approach enables several architectural optimizations to (1) efficiently and
flexibly cater to different and increasingly diverse system configurations, and
(2) eliminate key inefficiencies of conventional virtual memory. We demonstrate
the benefits of VBI with two important use cases: (1) reducing the overheads of
address translation (for both native execution and virtual machine
environments), as VBI reduces the number of translation requests and associated
memory accesses; and (2) two heterogeneous main memory architectures, where VBI
increases the effectiveness of managing fast memory regions. For both cases,
VBI significantly improves performance over conventional virtual memory.
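The core structural idea of the abstract, decoupling access protection (kept in the OS) from allocation and translation (moved to the memory controller), can be sketched as a toy model. All class and field names below are invented for illustration; this is not the paper's actual interface:

```python
class VirtualBlock:
    """Toy model of a variable-sized virtual block (VB)."""
    def __init__(self, vb_id, size):
        self.vb_id = vb_id
        self.size = size          # variable-sized, fixed at allocation
        self.base_phys = None     # physical base, assigned lazily below

class OSProtectionTable:
    """The OS tracks only which process may access which VB."""
    def __init__(self):
        self.perms = {}           # (pid, vb_id) -> access allowed
    def grant(self, pid, vb_id):
        self.perms[(pid, vb_id)] = True
    def check(self, pid, vb_id):
        return self.perms.get((pid, vb_id), False)

class MemoryController:
    """Hardware-managed allocation and translation, decoupled
    from the OS protection state above."""
    def __init__(self):
        self.next_frame = 0
    def translate(self, vb, offset):
        if vb.base_phys is None:          # allocate on first touch
            vb.base_phys = self.next_frame
            self.next_frame += vb.size
        assert offset < vb.size
        return vb.base_phys + offset
```

The point of the split is visible in the types: the OS object never sees physical addresses, and the controller never consults permissions.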
Literary translation and cultural memory
This article intends to investigate the relationship between literary translation and cultural memory, using a twentieth-century film version of one of Shakespeare’s plays as a case study in inter-semiotic translation. The common perception of translation is often confined to its use as a language-learning tool or as a means of information transfer between languages. The wider academic concept embraces not only inter-lingual translation, but also both intra-lingual activity, or rewording in the same language, and inter-semiotic translation, defined by Roman Jakobson as “the interpretation of verbal signs by means of signs of nonverbal sign systems” (Jakobson, 1959: 114).
Memory-augmented Neural Machine Translation
Neural machine translation (NMT) has achieved notable success in recent
times, however it is also widely recognized that this approach has limitations
with handling infrequent words and word pairs. This paper presents a novel
memory-augmented NMT (M-NMT) architecture, which stores knowledge about how
words (usually infrequently encountered ones) should be translated in a memory
and then utilizes them to assist the neural model. We use this memory mechanism
to combine the knowledge learned from a conventional statistical machine
translation system and the rules learned by an NMT system, and also propose a
solution for out-of-vocabulary (OOV) words based on this framework. Our
experiments on two Chinese-English translation tasks demonstrated that the
M-NMT architecture outperformed the NMT baseline by … and … BLEU points
on the two tasks, respectively. Additionally, we found that this architecture
resulted in a much more effective OOV treatment compared to competitive
methods.
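The abstract's memory mechanism stores translations for infrequent words and consults them when the neural model cannot help. As a rough, hedged sketch of that fallback idea (the dictionaries and function below are illustrative stand-ins, not the paper's actual architecture, which scores memory entries jointly with the neural model):

```python
def translate_with_memory(tokens, nmt_vocab, phrase_memory):
    """Sketch of memory-assisted translation: words covered by the
    (stubbed) NMT vocabulary go through the normal pathway; OOV words
    fall back to a stored translation memory; anything else is <unk>."""
    out = []
    for tok in tokens:
        if tok in nmt_vocab:
            out.append(nmt_vocab[tok])        # normal NMT pathway (stubbed)
        elif tok in phrase_memory:
            out.append(phrase_memory[tok])    # memory-assisted OOV handling
        else:
            out.append("<unk>")
    return out
```

In the real system the memory holds knowledge distilled from a statistical MT system; here a plain dictionary stands in for it to show only the control flow.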
Neural Semantic Encoders
We present a memory augmented neural network for natural language
understanding: Neural Semantic Encoders. NSE is equipped with a novel memory
update rule and has a variable sized encoding memory that evolves over time and
maintains the understanding of input sequences through read, compose, and write
operations. NSE can also access multiple and shared memories. In this paper, we
demonstrated the effectiveness and the flexibility of NSE on five different
natural language tasks: natural language inference, question answering,
sentence classification, document sentiment analysis and machine translation
where NSE achieved state-of-the-art performance when evaluated on publicly
available benchmarks. For example, our shared-memory model showed an
encouraging result on neural machine translation, improving an attention-based
baseline by approximately 1.0 BLEU. Comment: Accepted at EACL 2017; added comparison with NTM, qualitative
analysis, and memory visualization.
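The read, compose, and write operations the abstract names can be illustrated with a heavily simplified step: attention-based read over memory slots, a composition of the current encoding with the retrieved summary, and a weighted write-back. The actual NSE uses learned LSTM compose/write functions; a single tanh layer with invented weight shapes stands in for them here:

```python
import numpy as np

def nse_step(h, M, W_c):
    """One simplified read-compose-write step.
    h: (d,) current token encoding; M: (k, d) memory; W_c: (2d, d) weights."""
    scores = M @ h                        # read: score each memory slot
    z = np.exp(scores - scores.max())
    z /= z.sum()                          # attention over slots
    m = z @ M                             # retrieved memory summary
    c = np.tanh(np.concatenate([h, m]) @ W_c)    # compose h with m
    M = M * (1.0 - z[:, None]) + z[:, None] * c  # write back to attended slots
    return c, M
```

The write rule is what makes the memory "evolve over time": slots that were read most strongly are updated most strongly.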
TMX markup: a challenge when adapting SMT to the localisation environment
Translation memory (TM) plays an important role in localisation workflows and is used as an efficient and fundamental tool to carry out translation. In recent years, statistical machine translation (SMT) techniques have been rapidly developed, and translation quality and speed have been significantly improved as well. However, when applying SMT techniques to facilitate post-editing in the localisation industry, we need to adapt SMT to TM data that is formatted with special mark-up. In this paper, we explore some issues when adapting SMT to Symantec-formatted TM data.
Three different methods are proposed to handle the Translation Memory eXchange (TMX) markup, and a comparative study is carried out between them. Furthermore, we also compare the TMX-based SMT systems with a customised SYSTRAN system through human evaluation and automatic evaluation metrics. The experimental results, conducted on the French and English language pair, show that SMT can perform well using TMX as the input format either during training or at runtime.
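TMX is an XML dialect, so the basic preprocessing problem the paper studies is separating the markup from the translatable text. One minimal way to do that with the standard library is sketched below; the sample document and helper are illustrative only, not the paper's Symantec data or its actual handling methods:

```python
import xml.etree.ElementTree as ET

TMX = """<?xml version="1.0"?>
<tmx version="1.4">
  <header creationtool="demo" srclang="en" datatype="plaintext"
          adminlang="en" segtype="sentence" o-tmf="demo"/>
  <body>
    <tu>
      <tuv xml:lang="en"><seg>File not found.</seg></tuv>
      <tuv xml:lang="fr"><seg>Fichier introuvable.</seg></tuv>
    </tu>
  </body>
</tmx>"""

def extract_pairs(tmx_string, src="en", tgt="fr"):
    """Pull aligned (source, target) segments out of a TMX document,
    stripping the markup before the text reaches an SMT pipeline."""
    XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"
    root = ET.fromstring(tmx_string)
    pairs = []
    for tu in root.iter("tu"):            # one translation unit per <tu>
        segs = {tuv.get(XML_LANG): tuv.findtext("seg")
                for tuv in tu.findall("tuv")}
        if src in segs and tgt in segs:
            pairs.append((segs[src], segs[tgt]))
    return pairs
```

Stripping everything, as here, is only one of the options; the paper's comparison is precisely about whether to discard, keep, or specially tokenise such markup during training and at runtime.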
Faster Isn't Necessarily Better: The Role of Individual Differences on Processing Words with Multiple Translations
Words that can translate several ways into another language have only recently been examined in studies of bilingualism. The present study examined how individual differences in working memory span and interference affect the processing of such words during a translation task. Twenty English-Spanish bilinguals performed a Stroop task and an operation word span task to determine their interference abilities and working memory spans, respectively. They then translated, from English to Spanish and from Spanish to English, 239 words that varied in number of translations and concreteness. Bilinguals with lower interference and lower working memory spans were predicted to have the fastest response times for words with multiple translations, owing to a better ability to suppress irrelevant information and a limited capacity to hold several competing translations of a word in memory at once. Individuals with higher interference and higher working memory spans were predicted to be able to access and hold in memory all possible meanings of a word at once, yielding slower response times. The results demonstrated that interference and working memory span did predict response times in the translation task in accordance with the hypotheses, and can have a significant impact on several aspects of translation.
Not lost in translation: writing auditorily presented words at study increases correct recognition “at no cost”
© 2016 Taylor & Francis. Previous studies have reported a translation effect in memory, whereby encoding tasks that involve translating between processing domains produce a memory advantage relative to tasks that involve a single domain. We investigated the effects of translation on true and false memories using the Deese/Roediger-McDermott (DRM) procedure [Deese, J. (1959). On the prediction of occurrence of particular verbal intrusions in immediate recall. Journal of Experimental Psychology, 58, 17–22; Roediger, H. L., III, & McDermott, K. B. (1995). Creating false memories: Remembering words not presented in lists. Journal of Experimental Psychology: Learning, Memory, & Cognition, 21, 803–814]. Translation between modalities enhanced correct recognition but had no effect on false recognition. Results are consistent with previous research showing that correct memory can be enhanced “at no cost” in terms of accuracy. Findings are discussed in terms of understanding the relationship between true and false memories produced by the DRM procedure
