On the efficiency of stochastic volume sources for the determination of light meson masses
We investigate the efficiency of single timeslice stochastic sources for the
calculation of light meson masses on the lattice as one varies the quark mass.
Simulations are carried out with Nf = 2 flavours of non-perturbatively O(a)
improved Wilson fermions for pion masses in the range 450–760 MeV. Results
for pseudoscalar and vector meson two-point correlation functions computed
using stochastic as well as point sources are presented and compared. At fixed
computational cost the stochastic approach reduces the variance considerably in
the pseudoscalar channel for all simulated quark masses. The vector channel is
more affected by the intrinsic stochastic noise. In order to obtain stable
estimates of the statistical errors and a more pronounced plateau for the
effective vector meson mass, a relatively large number of stochastic sources
must be used.
Comment: 18 pages, 6 figures
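The stochastic-source approach compared above rests on stochastic trace estimation: for noise vectors η with E[ηη†] = 1, the average of η†M⁻¹η is an unbiased estimator of tr(M⁻¹). A minimal sketch with Z₂ noise on a small dense matrix standing in for the Dirac operator (the matrix and all names are illustrative assumptions; real lattice codes solve Mx = η rather than inverting M):

```python
import numpy as np

def stochastic_trace_inverse(M, n_sources, rng):
    """Estimate tr(M^{-1}) with Z2 noise vectors.

    For noise eta with E[eta eta^T] = I, the average of eta^T M^{-1} eta
    is an unbiased estimator of tr(M^{-1}); the error shrinks like
    1/sqrt(n_sources).
    """
    n = M.shape[0]
    Minv = np.linalg.inv(M)  # lattice codes solve M x = eta instead
    estimates = []
    for _ in range(n_sources):
        eta = rng.choice([-1.0, 1.0], size=n)  # Z2 noise vector
        estimates.append(eta @ Minv @ eta)
    return np.mean(estimates), np.std(estimates) / np.sqrt(n_sources)

rng = np.random.default_rng(0)
# Diagonally dominant toy matrix standing in for the Dirac operator.
A = np.eye(8) * 4.0 + 0.1 * rng.standard_normal((8, 8))
est, err = stochastic_trace_inverse(A, 200, rng)
exact = np.trace(np.linalg.inv(A))
```

The variance of the estimator comes from the off-diagonal elements of M⁻¹, which is why channels dominated by such contributions (like the vector channel above) need more noise sources.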
How are Bayesian models really used? Reply to Frank (2013)
In response to the proposal that cognitive phenomena might be best understood in terms of cognitive theories (Endress, 2013), Frank (2013) outlined an important research program, suggesting that Bayesian models should be used as rigorous, mathematically attractive implementations of psychological theories. This research program is important and promising. However, I show that it is not followed in practice. I then turn to Frank's defense of the assumption that learners prefer more specific rules (the "size principle"), and show that the results allegedly supporting this assumption do not provide any support for it. Further, I demonstrate that, in contrast to Frank's criticisms, there is no circularity in an account of rule-learning based on "common-sense psychology", and that Frank's other criticisms of this account are unsupported. I conclude that the research program outlined by Frank is important and promising, but needs to be followed in practice. Be that as it may, the rule-learning experiments discussed by Frank are still better explained by simple psychological mechanisms.
Transitional probabilities count more than frequency, but might not be used for memorization
Learners often need to extract recurring items from continuous sequences, in both vision and audition. The best-known example is probably found in word-learning, where listeners have to determine where words start and end in fluent speech. This could be achieved through universal and experience-independent statistical mechanisms, for example by relying on Transitional Probabilities (TPs). Further, these mechanisms might allow learners to store items in memory. However, previous investigations have yielded conflicting evidence as to whether a sensitivity to TPs is diagnostic of the memorization of recurring items. Here, we address this issue in the visual modality. Participants were familiarized with a continuous sequence of visual items (i.e., arbitrary or everyday symbols), and then had to choose between (i) high-TP items that appeared in the sequence, (ii) high-TP items that did not appear in the sequence, and (iii) low-TP items that appeared in the sequence. Items matched in TPs but differing in (chunk) frequency were much harder to discriminate than items differing in TPs (with no significant sensitivity to chunk frequency), and learners preferred unattested high-TP items over attested low-TP items. Contrary to previous claims, these results cannot be explained on the basis of the similarity of the test items. Learners thus weigh within-item TPs higher than the frequency of the chunks, even when the TP differences are relatively subtle. We argue that these results are problematic for distributional clustering mechanisms that analyze continuous sequences, and provide supporting computational results. We suggest that the role of TPs might not be to memorize items per se, but rather to prepare learners to memorize recurring items once they are presented in subsequent learning situations with richer cues.
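The statistic the abstract relies on, the forward transitional probability P(y | x) = count(x, y) / count(x) over adjacent elements, can be sketched as follows (the stream and its two recurring "words" are invented for illustration, not the study's materials):

```python
from collections import Counter

def transitional_probabilities(sequence):
    """Forward TPs over adjacent pairs: P(y | x) = count(x, y) / count(x)."""
    pair_counts = Counter(zip(sequence, sequence[1:]))
    first_counts = Counter(sequence[:-1])  # x counted as a pair's first element
    return {(x, y): c / first_counts[x] for (x, y), c in pair_counts.items()}

# A stream built from two recurring "words", AB and CD: within-word TPs
# are high (B always follows A), between-word TPs are lower.
stream = list("ABCDABABCDCDAB")
tps = transitional_probabilities(stream)
```

High within-item TPs (here 1.0 for A→B and C→D) against lower between-item TPs are what lets a TP-based learner locate item boundaries without any marker in the input itself.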
Sustainable Growth with Environmental Spillovers: A Ramsey-Koopmans Approach
In this paper, we apply the canonical approach of Ramsey, Koopmans, and Diamond to the problem of optimal and intertemporally-equitable growth with a non-renewable resource constraint and show that the solution is sustainable. The model is extended to cases involving environmental amenities and disamenities and renewable resources. The solutions equivalently solve the problem of maximizing net national product adjusted for depreciation in natural capital and environmental effects, which turns out to be both sustainable and constant even without technical change.
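For readers unfamiliar with the canonical setup, the Ramsey–Koopmans problem with a non-renewable resource (in the Dasgupta–Heal tradition) can be sketched as follows; the notation is an assumed standard form, not the paper's own:

```latex
\max_{\{c(t),\, r(t)\}} \int_0^{\infty} U\bigl(c(t)\bigr)\, e^{-\rho t}\, dt
\quad \text{subject to} \quad
\dot{k}(t) = F\bigl(k(t), r(t)\bigr) - c(t), \qquad
\dot{s}(t) = -r(t), \qquad s(t) \ge 0,
```

where \(c\) is consumption, \(k\) reproducible capital, \(r\) the extraction rate of the non-renewable resource, and \(s\) the remaining resource stock; intertemporal equity corresponds to treating generations symmetrically, as in Ramsey's undiscounted (\(\rho = 0\)) criterion.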
Early conceptual and linguistic processes operate in independent channels
Language and concepts are intimately linked, but how do they interact? In the study reported here, we probed the relation between conceptual and linguistic processing at the earliest processing stages. We presented observers with sequences of visual scenes lasting 200 or 250 ms per picture. Results showed that observers understood and remembered the scenes' abstract gist and, therefore, their conceptual meaning. However, observers remembered the scenes at least as well when they simultaneously performed a linguistic secondary task (i.e., reading and retaining sentences); in contrast, a nonlinguistic secondary task (equated for difficulty with the linguistic task) impaired scene recognition. Further, encoding scenes interfered with performance on the nonlinguistic task and vice versa, but scene processing and performing the linguistic task did not affect each other. At the earliest stages of conceptual processing, the extraction of meaning from visually presented linguistic stimuli and the extraction of conceptual information from the world take place in remarkably independent channels.
Perceptual Constraints in Phonotactic Learning
Structural regularities in language have often been attributed to symbolic or statistical general purpose computations, whereas perceptual factors influencing such generalizations have received less interest. Here, we use phonotactic-like constraints as a case study to ask whether the structural properties of specific perceptual and memory mechanisms may facilitate the acquisition of grammatical-like regularities. Participants learned that the consonants C₁ and C₂ had to come from distinct sets in words of the form C₁VccVC₂ (where the critical consonants were in word edges) but not in words of the form cVC₁C₂Vc (where the critical consonants were in word middles). Control conditions ruled out attentional or psychophysical difficulties in word middles. Participants did, however, learn such regularities in word middles when natural consonant classes were used instead of arbitrary consonant sets. We conclude that positional generalizations may be learned preferentially using edge-based positional codes, but that participants can also use other mechanisms when other linguistic cues are given.
Large capacity temporary visual memory
Visual working memory (WM) capacity is thought to be limited to three or four items. However, many cognitive activities seem to require larger temporary memory stores. Here, we provide evidence for a temporary memory store with much larger capacity than past WM capacity estimates. Further, based on previous WM research, we show that a single factor, proactive interference, is sufficient to bring capacity estimates down to the range of previous WM capacity estimates. Participants saw a rapid serial visual presentation (RSVP) of 5 to 21 pictures of familiar objects or words presented at rates of 4/s or 8/s, respectively, and thus too fast for strategies such as rehearsal. Recognition memory was tested with a single probe item. When new items were used on all trials, no fixed memory capacities were observed, with estimates of up to 9.1 retained pictures for 21-item lists, and up to 30.0 retained pictures for 100-item lists, and no clear upper bound to how many items could be retained. Further, memory items were not stored in a temporally stable form of memory, but decayed almost completely after a few minutes. In contrast, when, as in most WM experiments, a small set of items was reused across all trials, thus creating proactive interference among items, capacity remained in the range reported in previous WM experiments. These results show that humans have a large-capacity temporary memory store in the absence of proactive interference, and raise the question of whether temporary memory in everyday cognitive processing is severely limited as in WM experiments, or has the much larger capacity found in the present experiments.
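Capacity figures like the "9.1 retained pictures" above are typically derived from single-probe recognition performance with a formula such as Cowan's K. A minimal sketch (the hit and false-alarm rates below are made-up numbers for illustration, not the paper's data):

```python
def cowan_k(hit_rate, false_alarm_rate, set_size):
    """Cowan's K for single-probe recognition:
    K = N * (hit rate - false-alarm rate), i.e. the number of list items
    that must be in memory to produce the observed accuracy."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical rates for a 21-item list.
k = cowan_k(hit_rate=0.80, false_alarm_rate=0.37, set_size=21)
```

On this measure, capacity only appears "fixed" if K stops growing as the list gets longer; the abstract's point is that with fresh items on every trial it keeps growing.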
Interference and memory capacity limitations
Working memory (WM) is thought to have a fixed and limited capacity. However, the origins of these capacity limitations are debated, and generally attributed to active, attentional processes. Here, we show that the existence of interference among items in memory mathematically guarantees fixed and limited capacity limits under very general conditions, irrespective of any processing assumptions. Assuming that interference (i) increases with the number of interfering items and (ii) brings memory performance to chance levels for large numbers of interfering items, capacity limits are a simple function of the relative influence of memorization and interference. In contrast, we show that time-based memory limitations do not lead to fixed memory capacity limitations that are independent of the timing properties of an experiment. We show that interference can mimic both slot-like and continuous resource-like memory limitations, suggesting that these types of memory performance might not be as different as commonly believed. We speculate that slot-like WM limitations might arise from crowding-like phenomena in memory when participants have to retrieve items. Further, based on earlier research on parallel attention and enumeration, we suggest that crowding-like phenomena might be a common reason for the three major cognitive capacity limitations. As suggested by Miller (1956) and Cowan (2001), these capacity limitations might thus arise due to a common reason, even though they likely rely on distinct processes.
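The two assumptions above (interference grows with the number of interfering items, and drives performance to chance in the limit) can be illustrated with a toy functional form; this is an assumed sketch, not the paper's equations:

```python
def items_retained(n, m, i):
    """Toy interference model (assumed form, not the paper's):
    each of n items is retained with probability m / (m + i*(n - 1)),
    where m is memorization strength and i is per-item interference.
    The expected number retained, n*m / (m + i*(n - 1)), therefore
    saturates at the fixed capacity m/i as n grows."""
    return n * m / (m + i * (n - 1))

# With m = 4 and i = 1, expected retention rises with list length but
# never exceeds the capacity limit m/i = 4 items.
curve = [items_retained(n, m=4.0, i=1.0) for n in (1, 2, 4, 8, 100)]
```

The capacity limit m/i depends only on the relative influence of memorization and interference, mirroring the abstract's claim, and unlike time-based decay it is independent of the experiment's timing parameters.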
Looking Under the Hood and Tinkering with Voter Cynicism: Ross Perot and “Perspective by Incongruity”
This essay examines Ross Perot's 1992 presidential bid as a comic catalyst for a reinvigorated view of civic responsibility. Despite the Texas maverick's political naïveté and penchant for miscalculation, his very presence in the campaign reanimated Americans' conception of grassroots democracy. By examining important and previously unexplored distinctions between planned and unplanned incongruity, we probe the means by which Perot invited consideration of alternative political perspectives and offered an appealing glimpse into a dormant, more deeply held democratic ideal.