
    Two Words, One Meaning: Evidence of Automatic Co-Activation of Translation Equivalents

    Research on the processing of translations offers important insights into how bilinguals negotiate the representation of words from two languages in one mind and one brain. Evidence so far has shown that translation equivalents effectively activate each other, as well as their shared concept, even when the translations lack any formal overlap (i.e., non-cognates) and even when one of them is presented subliminally, namely under masked priming conditions. In lexical decision studies testing masked translation priming effects with unbalanced bilinguals, a remarkably stable pattern emerges: larger effects in the dominant (L1) to non-dominant (L2) translation direction than vice versa. Interestingly, this asymmetry vanishes when simultaneous and balanced bilinguals are tested, suggesting that the linguistic profile of the bilinguals may determine the pattern of cross-language lexico-semantic activation across the L2 learning trajectory. The present study aims to detect whether L2 proficiency is the critical variable that renders the otherwise asymmetric cross-language activation of translations in the lexical decision task symmetric. Non-cognate masked translation priming effects were examined in three groups of Greek (L1)–English (L2) unbalanced bilinguals differing exclusively in their level of L2 proficiency. Although increased L2 proficiency led to improved overall L2 performance, masked translation priming effects were virtually identical across the three groups, yielding in all cases significant but asymmetric effects (i.e., larger effects in the L1 → L2 than in the L2 → L1 translation direction). These findings show that proficiency does not modulate masked translation priming effects at intermediate levels and that a native-like level of L2 proficiency is needed for symmetric effects to emerge. They furthermore pose important constraints on the operation of the mechanisms underlying the development of cross-language lexico-semantic links.
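
    To make the dependent measure concrete, here is a minimal Python sketch of how such a priming effect is typically quantified (this is not the authors' analysis code; the function name and the toy reaction times are illustrative only):

        from statistics import mean

        def priming_effect(rt_unrelated, rt_translation):
            """Mean lexical-decision RT (ms) after an unrelated prime, minus
            mean RT after a translation prime; positive values indicate
            facilitation by the masked translation prime."""
            return mean(rt_unrelated) - mean(rt_translation)

        # Toy numbers, for illustration only.
        l1_to_l2 = priming_effect([612, 598, 630], [571, 565, 588])  # L1 prime -> L2 target
        l2_to_l1 = priming_effect([554, 549, 561], [544, 541, 552])  # L2 prime -> L1 target
        asymmetry = l1_to_l2 - l2_to_l1  # > 0: the reported L1 -> L2 advantage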

    Region graph partition function expansion and approximate free energy landscapes: Theory and some numerical results

    Graphical models for finite-dimensional spin glasses and for real-world combinatorial optimization and satisfaction problems usually have an abundance of short loops. The cluster variation method and its extension, the region graph method, are theoretical approaches for treating the complicated local correlations induced by these short loops. For graphical models represented by non-redundant or redundant region graphs, approximate free energy landscapes are constructed in this paper through the mathematical framework of region graph partition function expansion. Several free energy functionals are obtained, each of which uses a set of probability distribution functions or functionals as order parameters. These probability distribution functions/functionals are required to satisfy the region graph belief-propagation equation or the region graph survey-propagation equation to ensure vanishing correction contributions from region subgraphs with dangling edges. As a simple application of the general theory, we perform region graph belief-propagation simulations on the square-lattice ferromagnetic Ising model and the Edwards-Anderson model. Considerable improvements over the conventional Bethe-Peierls approximation are achieved. Collective domains of different sizes in the disordered and frustrated square lattice are identified by the message-passing procedure. Such collective domains and the frustrations among them are responsible for the low-temperature glass-like dynamical behavior of the system.
    Comment: 30 pages, 11 figures. More discussion on redundant region graphs. To be published in Journal of Statistical Physics.
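
    For context, here is a minimal Python sketch of the conventional Bethe-Peierls (belief-propagation) fixed point that the paper improves upon, written for the homogeneous ferromagnetic Ising model on a lattice of coordination number z (z = 4 for the square lattice); the function and parameter names are our own:

        import math

        def bethe_ising_magnetization(beta, J=1.0, h=0.0, z=4, iters=10000, tol=1e-12):
            """Iterate the homogeneous Bethe-Peierls cavity equation
            v = atanh(tanh(beta*J) * tanh(beta*h + (z-1)*v)) to a fixed point
            and return the magnetization m = tanh(beta*h + z*v)."""
            v = 0.01  # small bias to break the up/down symmetry at h = 0
            for _ in range(iters):
                v_new = math.atanh(math.tanh(beta * J) * math.tanh(beta * h + (z - 1) * v))
                if abs(v_new - v) < tol:
                    v = v_new
                    break
                v = v_new
            return math.tanh(beta * h + z * v)

        # On the square lattice this predicts ordering for beta > atanh(1/(z-1))
        # ~ 0.347, versus Onsager's exact ~0.441 -- the kind of discrepancy the
        # region graph method reduces.
        print(bethe_ising_magnetization(beta=0.5))  # nonzero spontaneous magnetization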

    An overview of the ciao multiparadigm language and program development environment and its design philosophy

    We describe some of the novel aspects and motivations behind the design and implementation of the Ciao multiparadigm programming system. An important aspect of Ciao is that it provides the programmer with a large number of useful features from different programming paradigms and styles, and that the use of each of these features can be turned on and off at will for each program module. Thus, a given module may use, e.g., higher-order functions and constraints, while another module may use objects, predicates, and concurrency. Furthermore, the language is designed to be extensible in a simple and modular way. Another important aspect of Ciao is its programming environment, which provides a powerful preprocessor (with an associated assertion language) capable of statically finding non-trivial bugs, verifying that programs comply with specifications, and performing many types of program optimizations. Such optimizations produce code that is highly competitive with that of other dynamic languages or, when the highest levels of optimization are used, even with that of static languages, all while retaining the interactive development environment of a dynamic language. The environment also includes a powerful auto-documenter. The paper provides an informal overview of the language and the program development environment; it aims at illustrating the design philosophy rather than at being exhaustive (which would be impossible in the format of a paper), pointing instead to the existing literature on the system.
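
    As a loose illustration only (Ciao's assertion language is Prolog-based; the sketch below is a Python analogue with invented names, and the check here is dynamic rather than static), the idea of attaching a checkable specification to a predicate might look like:

        from functools import wraps

        def assertion(pre, post):
            """Attach a precondition and a postcondition to a function. Ciao's
            preprocessor discharges such assertions statically where it can;
            this toy decorator only checks them at run time."""
            def decorate(f):
                @wraps(f)
                def wrapper(*args):
                    assert pre(*args), f"precondition of {f.__name__} violated"
                    result = f(*args)
                    assert post(*args, result), f"postcondition of {f.__name__} violated"
                    return result
                return wrapper
            return decorate

        @assertion(pre=lambda xs: all(isinstance(x, int) for x in xs),
                   post=lambda xs, r: r == sorted(xs))
        def int_sort(xs):
            return sorted(xs)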

    Highly-parallelized simulation of a pixelated LArTPC on a GPU

    The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented as an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version: the simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
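
    A minimal sketch of the Python-to-CUDA pattern the paper describes, using Numba's @cuda.jit; the physics shown here (exponential charge attenuation during drift) and all names are illustrative assumptions, not the simulator's actual API:

        import math
        import numpy as np
        from numba import cuda

        @cuda.jit
        def attenuate_charge(q_in, drift_time, lifetime, q_out):
            """One GPU thread per charge packet: apply electron-lifetime attenuation."""
            i = cuda.grid(1)  # global thread index
            if i < q_in.size:
                q_out[i] = q_in[i] * math.exp(-drift_time[i] / lifetime)

        n = 1 << 20
        q = np.random.rand(n).astype(np.float32)
        t = np.random.rand(n).astype(np.float32)  # drift times (ms), toy values
        out = np.zeros_like(q)
        threads = 256
        blocks = (n + threads - 1) // threads
        attenuate_charge[blocks, threads](q, t, np.float32(3.0), out)  # ~3 ms lifetime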

    On-line partial discharge diagnosis of power cables

    The purpose of this paper is to address, in a practical way, the topic of partial discharge (PD) testing of HV cables, with emphasis on on-line measurements and diagnostic interpretation. To this end, frequently asked questions are discussed and examples are reported of the advantages and limitations of on-line PD testing.

    Envisaging links between fundamental research in electrical insulation and electrical asset management

    The trend of industrial and even institutional funding for research in the electrical energy world is clearly going in the direction of short-term, applied projects. On the one hand, this may be unavoidable given globalization and the fast increase in the energy needs of developing countries; on the other hand, it may affect in the long term the capability of carrying out basic research in a sector that requires, more than many others, innovation and interdisciplinarity. Research funding institutions, such as the EC, EPRI, CRIEPI, etc., are moving in the direction of promoting applied work. This can be viewed favorably insofar as it corrects distortions of the past, when fundamental research was sometimes seen as a permanent exercise in speculation with no view toward an applied end; but the risk is losing the fundamentals of knowledge, which are the only way to promote real and stable innovation. Speaking at the International Symposium on Electrical Insulation in Vancouver, June 2008, we recognize that electrical insulation is the weakest part of most electrical apparatus, in both MV and HV systems. Transformers, rotating machines, switches, overhead lines, and of course cables present failure modes which in most cases involve electrical, thermal, mechanical, and environmental degradation of the electrical insulation, resulting in the loss of the capability to withstand operational stresses. Reliability, maintenance, and availability of electrical assets therefore require knowledge of the ageing processes of electrical insulation, of insulating material behavior as a function of various operational factors (e.g., from frequency to temperature or environmental conditions), of diagnostic properties and methodologies, and of new technologies and new materials (such as extra-clean manufacturing, nanostructured materials, or composites). These aspects have roots that must come from fundamental interdisciplinary research involving the physics and chemistry of insulating materials. The purpose of this paper is therefore to show how much fundamental research has contributed to practical topics such as the management of electrical assets, which is a key point for the reliability, availability, cost saving, and energy quality of any electrical system, highlighting how much the latter can benefit from the former. After a broad introduction, the focus of the paper is on how basic research has contributed to the development of the diagnostic tools that are fundamental for maintenance decisions involving electrical assets.
