13 research outputs found
<p>Dynamic word length distributions for each time period at different text scales.</p>
<p>Fitting <i>y</i> = <i>ax</i><sup>-b</sup> to the relation between word length and type-token ratio for each time period with <i>N</i> = 10000.</p>
How Does Word Length Evolve in Written Chinese?
<div><p>We present substantial evidence that word length can be an essential lexical-structural feature of word evolution in written Chinese. The data used in this study are diachronic Chinese short narrative texts spanning more than 2000 years. We show that the increase of word length is an essential regularity in word evolution. On the one hand, word frequency is found to depend on word length, and their relation follows the power-law function y = ax<sup>-b</sup>. On the other hand, our deeper analyses show that the increase of word length leads to a compensating simplification of characters in written Chinese. Moreover, the correspondence between written and spoken Chinese is discussed. We conclude that the disyllabic trend may account for the increase of word length, and that its impact can be explained by "the principle of least effort".</p></div>
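The power-law relation y = ax<sup>-b</sup> described above is typically estimated by a linear fit in log-log space, since log y = log a − b log x. A minimal sketch, using synthetic stand-in values for (word length, type-token ratio) pairs rather than the paper's actual data:

```python
import numpy as np

# Synthetic, hypothetical (word length, type-token ratio) pairs -- not the
# study's data. We fit log y = log a - b * log x by ordinary least squares.
x = np.array([1, 2, 3, 4, 5], dtype=float)    # word length (characters)
y = np.array([0.80, 0.42, 0.30, 0.22, 0.18])  # type-token ratio

slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
a, b = np.exp(intercept), -slope              # recover a and b from the line

print(f"a = {a:.3f}, b = {b:.3f}")
```

With decreasing y, the fitted slope is negative, so b comes out positive, matching the decaying form y = ax<sup>-b</sup>.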
<p>Static word length distributions for each time period at different text scales (<i>N</i> = 1000, <i>N</i> = 2000 and <i>N</i> = 3000 characters; statistics for the sample in each time period can be found in <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0138567#pone.0138567.s001" target="_blank">S1 File</a>).</p>
<p>Linear fits to the static word probability changes of each word length class for texts with <i>N</i> = 1000.</p>
<p>Dynamic and static mean word length evolution for different text scales.</p>
Coarse-to-Fine Construction for High-Resolution Representation in Visual Working Memory
<div><p>Background</p><p>This study explored whether the high-resolution representations created by visual working memory (VWM) are constructed in a coarse-to-fine or all-or-none manner. The coarse-to-fine hypothesis suggests that coarse information precedes detailed information in entering VWM and that its resolution increases along with the processing time of the memory array, whereas the all-or-none hypothesis claims that either both enter into VWM simultaneously, or neither does.</p><p>Methodology/Principal Findings</p><p>We tested the two hypotheses by asking participants to remember two or four complex objects. An ERP component, contralateral delay activity (CDA), was used as the neural marker. CDA is higher for four objects than for two objects when coarse information is primarily extracted; yet, this CDA difference vanishes when detailed information is encoded. <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0057913#s2" target="_blank">Experiment 1</a> manipulated the comparison difficulty of the task under a 500-ms exposure time to determine a condition in which the detailed information was maintained. No CDA difference was found between two and four objects, even in an easy-comparison condition. Thus, <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0057913#s3" target="_blank">Experiment 2</a> manipulated the memory array's exposure time under the easy-comparison condition and found a significant CDA difference at 100 ms while replicating <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0057913#s2" target="_blank">Experiment 1</a>'s results at 500 ms. In Experiment 3, the 500-ms memory array was blurred to block the detailed information; this manipulation reestablished a significant CDA difference.</p><p>Conclusions/Significance</p><p>These findings suggest that the creation of high-resolution representations in VWM is a coarse-to-fine process.</p></div>
The complex and simple shapes used in Experiment 1.
<p>The stimuli in Category 4 were from ref. <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0057913#pone.0057913-Alvarez1" target="_blank">[8]</a>, and the other complex stimuli were new.</p
Results of Experiment 2.
<p>The mean accuracy (A), CDA waveforms (B), and averaged CDA amplitudes of the tested time window (C) for the exposure times of 100 ms and 500 ms. Error bars in Fig. 4A and 4C denote standard error. The CDA is a difference wave, constructed by subtracting the ipsilateral from the contralateral activity according to the cued hemifield. * indicates that the difference between the two conditions was significant, whereas <i>n.s.</i> indicates that it was non-significant. Grey areas of the CDA waveforms denote the tested time window.</p>
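The caption's construction of the CDA difference wave can be sketched in code. A minimal illustration with simulated ERP data (the trial counts, sample counts, amplitudes, and test window below are hypothetical, not taken from the study):

```python
import numpy as np

# Simulated single-trial ERP averages, rows = trials, columns = time samples.
# The CDA is the contralateral-minus-ipsilateral difference wave, averaged
# across trials, per the figure caption.
rng = np.random.default_rng(0)
contra = rng.normal(-1.0, 0.5, size=(40, 300))  # contralateral activity (uV)
ipsi = rng.normal(-0.2, 0.5, size=(40, 300))    # ipsilateral activity (uV)

cda = (contra - ipsi).mean(axis=0)              # trial-averaged difference wave

# Mean CDA amplitude over an assumed tested time window (samples 100-250).
cda_amplitude = cda[100:250].mean()
print(f"mean CDA amplitude: {cda_amplitude:.2f} uV")
```

Because the CDA is a sustained negativity at posterior contralateral sites, the subtraction yields a negative-going wave; comparing this amplitude between set sizes (two vs. four objects) is what the significance markers in the figure refer to.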
Results of Experiment 3.
<p>The mean accuracy (A), CDA waveforms (B), and averaged CDA amplitudes of the tested time window (C) for remembering the blurred objects. Error bars in Fig. 7A and 7C denote standard error. The CDA is a difference wave, constructed by subtracting the ipsilateral from the contralateral activity according to the cued hemifield. * indicates that the difference between the two conditions was significant, whereas <i>n.s.</i> indicates that it was non-significant. Grey areas of the CDA waveforms denote the tested time window.</p>