Border Correlations of Partial Words
Partial words are finite sequences over a finite alphabet A that may contain a number of “do not know” symbols denoted by ◊’s. Setting A◊ = A ∪ {◊}, A◊* denotes the set of all partial words over A. In this paper, we investigate the border correlation function β : A◊* → {a, b}* that specifies which conjugates (cyclic shifts) of a given partial word w of length n are bordered, that is, β(w) = c_0 c_1 ... c_{n-1} where c_i = a or c_i = b according to whether the ith cyclic shift σ^i(w) of w is unbordered or bordered. A partial word w is bordered if a proper prefix x_1 of w is compatible with a proper suffix x_2 of w, in which case any partial word x containing both x_1 and x_2 is called a border of w. In addition to β, we investigate an extension β′ : A◊* → ℕ* that maps a partial word w of length n to m_0 m_1 ... m_{n-1}, where m_i is the length of a shortest border of σ^i(w). Our results extend those of Harju and Nowotka.
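To make the definitions concrete, here is a small Python sketch, not taken from the paper, that computes β(w) and the shortest-border lengths of the extension β′, writing '?' for the do-not-know symbol ◊:

```python
# Toy sketch (not from the paper): '?' stands for the do-not-know symbol.

def compatible(u, v):
    """Equal-length partial words are compatible if they agree wherever
    neither one has a hole."""
    return all(a == b or a == '?' or b == '?' for a, b in zip(u, v))

def is_bordered(w):
    """A partial word is bordered if some non-empty proper prefix is
    compatible with the proper suffix of the same length."""
    return any(compatible(w[:k], w[len(w) - k:]) for k in range(1, len(w)))

def border_correlation(w):
    """beta(w): letter i is 'b' if the i-th cyclic shift of w is bordered,
    'a' if it is unbordered."""
    shifts = (w[i:] + w[:i] for i in range(len(w)))
    return ''.join('b' if is_bordered(s) else 'a' for s in shifts)

def shortest_border_length(w):
    """m_i of the extension beta': length of a shortest border (0 if none)."""
    return next((k for k in range(1, len(w))
                 if compatible(w[:k], w[len(w) - k:])), 0)

print(border_correlation('aab?'))                                   # e.g. 'bbbb'
print([shortest_border_length('aab?'[i:] + 'aab?'[:i]) for i in range(4)])
```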
Distillation of Multi-Party Non-Locality With and Without Partial Communication
Non-local correlations are one of the most fascinating consequences of
quantum physics from the point of view of information: Such correlations,
although not allowing for signaling, are unexplainable by pre-shared
information. The correlations have applications in cryptography, communication
complexity, and sit at the very heart of many attempts to understand quantum
theory -- and its limits -- better in terms of classical information. In these
contexts, a crucial question is whether such correlations can be distilled,
i.e., whether weak correlations can be used for generating (a smaller amount
of) stronger ones. Whereas the question has been studied quite extensively for
bipartite correlations (yielding both pessimistic and optimistic results),
little is known in the multi-partite case. We show that a natural
generalization of the well-known Popescu-Rohrlich box can be distilled, by an
adaptive protocol, to the algebraic maximum. We use this result further to show
that a much bigger class of correlations, including all purely three-partite
correlations, can be distilled from arbitrarily weak to maximal strength with
partial communication, i.e., using only a subset of the channels required for
the creation of the same correlation from scratch. In other words, we show that
arbitrarily weak non-local correlations can have a "communication value" in the
context of the generation of maximal non-locality.
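For readers unfamiliar with the box being generalized, the following Python sketch simulates a noisy Popescu-Rohrlich box and estimates its CHSH value; it is a standard toy model, not the adaptive distillation protocol of the paper. A perfect box (p = 1) reaches the algebraic maximum of 4:

```python
# Noisy PR box sketch: classical correlations give CHSH <= 2, quantum ones
# at most 2*sqrt(2); the perfect PR box reaches the algebraic maximum of 4.
import random

def noisy_pr_box(x, y, p=0.9):
    """Return outputs (a, b) with a XOR b = x*y with probability p."""
    a = random.randint(0, 1)
    correct = random.random() < p
    b = a ^ (x & y) if correct else a ^ (x & y) ^ 1
    return a, b

def chsh_value(p=0.9, shots=50_000):
    total = 0.0
    for x in (0, 1):
        for y in (0, 1):
            e = 0
            for _ in range(shots):
                a, b = noisy_pr_box(x, y, p)
                e += 1 if a == b else -1
            e /= shots
            total += -e if (x, y) == (1, 1) else e   # CHSH = E00+E01+E10-E11
    return total

print(chsh_value(0.9))   # roughly 4 * (2*0.9 - 1) = 3.2
```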
Using Qualitative Hypotheses to Identify Inaccurate Data
Identifying inaccurate data has long been regarded as a significant and
difficult problem in AI. In this paper, we present a new method for identifying
inaccurate data on the basis of qualitative correlations among related data.
First, we introduce the definitions of related data and qualitative
correlations among related data. Then we put forward a new concept called
support coefficient function (SCF). SCF can be used to extract, represent, and
calculate qualitative correlations among related data within a dataset. We
propose an approach to determining dynamic shift intervals of inaccurate data,
and an approach to calculating the possibility of identifying inaccurate data,
respectively. Both approaches are based on SCF. Finally, we present an
algorithm for identifying inaccurate data by using qualitative correlations
among related data as confirmatory or disconfirmatory evidence. We have
developed a practical system for interpreting infrared spectra by applying the
method, and have fully tested the system against several hundred real spectra.
The experimental results show that the method is significantly better than the
conventional methods used in many similar systems.
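Purely as a toy illustration of the general idea of treating qualitative correlations among related data as confirmatory or disconfirmatory evidence, here is a Python sketch; the scoring rule and data are invented for illustration and do not reproduce the paper's support coefficient function:

```python
# Toy illustration only: not the paper's SCF.  Each datum gets a net support
# score from related data with which it is qualitatively correlated.
import statistics

def qualitative_sign(value, history):
    """Crude qualitative value: +1 above the historical median, -1 below."""
    med = statistics.median(history)
    return 0 if value == med else (1 if value > med else -1)

def support_score(name, data, correlations):
    """Net confirmatory minus disconfirmatory evidence for data[name].
    data: name -> (current value, history); correlations: (i, j) -> +1 or -1."""
    s_i = qualitative_sign(*data[name])
    if s_i == 0:
        return 0
    score = 0
    for other, (value, history) in data.items():
        rel = correlations.get((name, other))
        if other == name or rel is None:
            continue
        s_j = qualitative_sign(value, history)
        if s_j == 0:
            continue
        score += 1 if s_i * s_j == rel else -1
    return score   # strongly negative -> candidate inaccurate datum

data = {"peak_A": (3.2, [1.0, 1.1, 0.9]),   # unusually high reading
        "peak_B": (0.4, [0.5, 0.6, 0.4]),   # low, as usual
        "peak_C": (0.3, [0.5, 0.4, 0.6])}   # low, as usual
correlations = {("peak_A", "peak_B"): 1, ("peak_A", "peak_C"): 1}
print(support_score("peak_A", data, correlations))   # -2: peak_A looks suspect
```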
How strongly do word reading times and lexical decision times correlate? Combining data from eye movement corpora and megastudies
We assess the amount of shared variance between three measures of visual word recognition latencies: eye movement latencies, lexical decision times and naming times. After partialling out the effects of word frequency and word length, two well-documented predictors of word recognition latencies, we see that 7–44% of the variance is uniquely shared between lexical decision times and naming times, depending on the frequency range of the words used. A similar analysis of eye movement latencies shows that the percentage of variance they uniquely share either with lexical decision times or with naming times is much lower. It is 5–17% for gaze durations and lexical decision times in studies with target words presented in neutral sentences, but drops to 0.2% for corpus studies in which eye movements to all words are analysed. Correlations between gaze durations and naming latencies are lower still. These findings suggest that processing times in isolated word processing and continuous text reading are affected by specific task demands and presentation format, and that lexical decision times and naming times are not very informative in predicting eye movement latencies in text reading once the effects of word frequency and word length are taken into account. The difference between controlled experiments and natural reading suggests that reading strategies and stimulus materials may determine the degree to which the immediacy-of-processing assumption and the eye-mind assumption apply. Fixation times are more likely to exclusively reflect the lexical processing of the currently fixated word in controlled studies with unpredictable target words than in natural reading of sentences or texts.
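To make concrete what "uniquely shared variance after partialling out word frequency and word length" involves computationally, here is a minimal Python sketch on synthetic data; the variable names and effect sizes are illustrative assumptions, not the study's data:

```python
# Minimal sketch on synthetic data: variance uniquely shared by two latency
# measures after the effects of word frequency and word length are removed.
import numpy as np

def residualize(y, X):
    """Residuals of y after ordinary least-squares regression on X (+ intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

rng = np.random.default_rng(0)
n = 500
log_freq = rng.normal(size=n)
length = rng.integers(3, 12, size=n).astype(float)
lexical = rng.normal(size=n)   # word-specific difficulty not captured by the controls
lexical_decision = 600 - 30 * log_freq + 8 * length + 40 * lexical + rng.normal(scale=40, size=n)
naming = 450 - 20 * log_freq + 5 * length + 25 * lexical + rng.normal(scale=30, size=n)

controls = np.column_stack([log_freq, length])
r = np.corrcoef(residualize(lexical_decision, controls),
                residualize(naming, controls))[0, 1]
print(f"partial correlation = {r:.2f}; uniquely shared variance = {100 * r * r:.1f}%")
```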
Word Limited: An Empirical Analysis of the Relationship Between the Length, Resiliency, and Impact of Federal Regulations
Since the rise of the modern administrative state we have seen a demonstrable trend towards lengthier regulations. However, popular critiques of the administrative state that focus on the overall size of the Federal Register are misguided. They rest on the premise that more, and longer, regulations unduly burden industry and the economy in general. Yet movement towards lengthier and more detailed regulations could be rational and largely unproblematic. This study tests two potential rational explanations for the trend towards longer regulations, dubbed (1) “the insulation hypothesis” and (2) “the socially beneficial hypothesis.” Each of these explanations embodies a theoretically rational decision. First, the insulation hypothesis rests on the idea that it would make sense for policy-makers to include more detailed legal and scientific support in new regulations, and thereby increase their length relative to previous regulations, if the additional detail provided more insulation from judicial review. Second, the socially beneficial hypothesis rests on the idea that devoting relatively more time and resources to each new rule would be appropriate if longer, newer regulations produced more net social benefits than older, shorter ones. The empirical analysis set forth in this article combines data from a number of publicly available sources to test these hypotheses. The results, confirming “the socially beneficial hypothesis,” add to the canon of empirical analysis of administrative law, building on the work of Cass Sunstein, Cary Coglianese, and others. Recognizing an overly burdensome regulatory state, an undoubtedly worthwhile and vital check in a democratic society, requires more than simply counting the pages of regulations. The results of this study should put some minds at ease, at least with respect to EPA regulations; they should also help better direct our scrutiny in the future.
On Nonregularized Estimation of Psychological Networks.
An important goal for psychological science is developing methods to characterize relationships between variables. Customary approaches use structural equation models to connect latent factors to a number of observed measurements, or test causal hypotheses between observed variables. More recently, regularized partial correlation networks have been proposed as an alternative approach for characterizing relationships among variables through off-diagonal elements in the precision matrix. While the graphical Lasso (glasso) has emerged as the default network estimation method, it was optimized in fields outside of psychology with very different needs, such as high-dimensional data where the number of variables (p) exceeds the number of observations (n). In this article, we describe the glasso method in the context of the fields where it was developed, and then we demonstrate that the advantages of regularization diminish in settings where psychological networks are often fitted (p ≪ n). We first show that improved properties of the precision matrix, such as eigenvalue estimation, and predictive accuracy with cross-validation are not always appreciable. We then introduce nonregularized methods based on multiple regression and a nonparametric bootstrap strategy, after which we characterize performance with extensive simulations. Our results demonstrate that the nonregularized methods can be used to reduce the false-positive rate, compared to glasso, and they appear to provide consistent performance across sparsity levels, sample composition (p/n), and partial correlation size. We end by reviewing recent findings in the statistics literature that suggest alternative methods often have superior performance to glasso, as well as suggesting areas for future research in psychology. The nonregularized methods have been implemented in the R package GGMnonreg.
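To show concretely what a nonregularized partial-correlation network looks like in the p ≪ n setting, here is a minimal Python sketch of one standard nonregularized estimator (inverting the sample covariance matrix); it illustrates the quantity being estimated, not the procedure implemented in GGMnonreg:

```python
# Nonregularized partial-correlation network: invert the sample covariance
# (feasible when p << n) and rescale the precision matrix.
import numpy as np

def partial_correlations(data):
    """data: (n observations x p variables). Returns the p x p matrix of
    partial correlations, i.e. the off-diagonal structure of the network."""
    precision = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(precision))
    pcor = -precision / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

rng = np.random.default_rng(1)
n, p = 1000, 10          # p << n, the typical psychological-network setting
data = rng.normal(size=(n, p))
# With independent synthetic data the off-diagonal entries hover near zero;
# real data would reveal the network's edges.
print(np.round(partial_correlations(data), 2))
```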
Effects of ecstasy/polydrug use on memory for associative information
Rationale
Associative learning underpins behaviours that are fundamental to the everyday functioning of the individual. Evidence pointing to learning deficits in recreational drug users merits further examination.
Objectives
A word pair learning task was administered to examine associative learning processes in ecstasy/polydrug users.
Methods
After assignment to either single or divided attention conditions, 44 ecstasy/polydrug users and 48 non-users were presented with 80 word pairs at encoding. Following this, four types of stimuli were presented at the recognition phase: the words as originally paired (old pairs), previously presented words in different pairings (conjunction pairs), old words paired with new words, and pairs of new words (not presented previously). The task was to identify which of the stimuli were intact old pairs.
Results
Ecstasy/polydrug users produced significantly more false-positive responses overall compared to non-users. Increased long-term frequency of ecstasy use was positively associated with the propensity to produce false-positive responses. It was also associated with a more liberal signal detection theory decision criterion value. Measures of long-term and recent cannabis use were also associated with these same word pair learning outcome measures. Conjunction word pairs, irrespective of drug use, generated the highest level of false-positive responses and significantly more false-positive responses were made in the divided attention condition compared to the single attention condition.
Conclusions
Overall, the results suggest that long-term ecstasy exposure may induce a deficit in associative learning, and this may be in part a consequence of users adopting a more liberal decision criterion value.
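The signal detection theory quantities mentioned in the results can be illustrated with a short Python sketch using made-up hit and false-alarm rates (not the study's data); a more liberal criterion corresponds to a more negative c and more false-positive responses:

```python
# Standard signal detection measures: d' (sensitivity) and c (criterion).
from statistics import NormalDist

def sdt(hit_rate, fa_rate):
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

print(sdt(0.80, 0.20))   # neutral criterion (c = 0)
print(sdt(0.85, 0.40))   # more liberal criterion (c < 0), more false positives
```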
Partial traces in decoherence and in interpretation: What do reduced states refer to?
The interpretation of the concept of reduced state is a subtle issue that has
relevant consequences when the task is the interpretation of quantum mechanics
itself. The aim of this paper is to argue that reduced states are not the
quantum states of subsystems in the same sense as quantum states are states of
the whole composite system. After clearly stating the problem, we develop our
argument in three stages. First, we consider the phenomenon of
environment-induced decoherence as an example of the case in which the
subsystems interact with each other; we show that decoherence does not solve
the measurement problem precisely because the reduced state of the measuring
apparatus is not its quantum state. Second, the non-interacting case is
illustrated in the context of no-collapse interpretations, in which we show
that certain well-known experimental results cannot be accounted for because
the reduced states of the measured system and the measuring apparatus
are conceived as their quantum states. Finally, we prove that reduced states
are a kind of coarse-grained state, and for this reason they cancel the
correlations of the subsystem with other subsystems with which it interacts or
is entangled.
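Since the argument turns on reduced states obtained by taking a partial trace, a short numerical sketch of that textbook operation may be useful; it illustrates only the standard computation, not any claim specific to the paper:

```python
# Partial trace of a Bell pair: the reduced state of one qubit is the
# maximally mixed state, even though the composite state is pure.
import numpy as np

def partial_trace_B(rho, dim_A, dim_B):
    """Trace out subsystem B from a density matrix on H_A tensor H_B."""
    rho = rho.reshape(dim_A, dim_B, dim_A, dim_B)
    return np.trace(rho, axis1=1, axis2=3)

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)      # (|00> + |11>)/sqrt(2)
rho_AB = np.outer(bell, bell.conj())
print(partial_trace_B(rho_AB, 2, 2))            # 0.5 * identity: the correlations are gone
```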