The effect of negative polarity items on inference verification
The scalar approach to negative polarity item (NPI) licensing assumes that NPIs are allowable
in contexts in which the introduction of the NPI leads to proposition strengthening (e.g., Kadmon &
Landman 1993, Krifka 1995, Lahiri 1997, Chierchia 2006). A straightforward processing prediction
from such a theory is that NPIs facilitate inference verification from sets to subsets. Three
experiments are reported that test this proposal. In each experiment, participants evaluated whether
inferences from sets to subsets were valid. Crucially, we manipulated whether the premises
contained an NPI. In Experiment 1, participants completed a metalinguistic reasoning task, and
Experiments 2 and 3 tested reading times using a self-paced reading task. Contrary to expectations,
no facilitation was observed when the NPI was present in the premise compared to when it was
absent. In fact, the NPI significantly slowed down reading times in the inference region. Our results
therefore favor those scalar theories that predict that the NPI is costly to process (Chierchia 2006),
or other, nonscalar theories (Giannakidou 1998, Ladusaw 1992, Postal 2005, Szabolcsi 2004) that
likewise predict NPI processing cost but, unlike Chierchia (2006), expect the magnitude of the
processing cost to vary with the actual pragmatics of the NPI.
Statistical Discrimination in the Criminal Justice System: The Case for Fines Instead of Jail
We develop a model of statistical discrimination in criminal trials. Agents carry publicly observable labels of no economic significance (race, etc.) and choose to commit crimes if their privately observed utility from doing so is high enough. A crime generates noisy evidence, and defendants are convicted when the realized amount of evidence is sufficiently strong. Convicted offenders are penalized either by incarceration or by monetary fines. In the case of prison sentences, discriminatory equilibria can exist in which members of one group face a prior prejudice in trials and are convicted with less evidence than members of the other group. Such discriminatory equilibria cannot exist with monetary fines instead of prison sentences. Our findings have implications for potential reforms of the American criminal justice system.
Keywords: statistical discrimination, criminal justice, prejudice
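The ingredients of the model can be sketched numerically. The code below is a hypothetical illustration, not the paper's actual model: an agent commits a crime when privately observed utility exceeds the expected penalty, and a guilty defendant is convicted when noisy evidence clears a (possibly group-specific) threshold. All distributions and parameter values are invented.

```python
from statistics import NormalDist

def conviction_prob(threshold, evidence_mean=1.0, noise_sd=1.0):
    """P(evidence > threshold) for a guilty defendant (evidence is noisy)."""
    # Assumed Gaussian evidence; the paper's evidence distribution may differ.
    return 1.0 - NormalDist(mu=evidence_mean, sigma=noise_sd).cdf(threshold)

def crime_rate(threshold, penalty, utilities):
    """Fraction of agents whose private utility beats the expected penalty."""
    p = conviction_prob(threshold)
    return sum(u > p * penalty for u in utilities) / len(utilities)

# Convicting with less evidence (a lower threshold) raises the expected
# penalty and so deters more crime -- the feedback underlying the
# discriminatory equilibria the paper analyzes.
utilities = [i / 100 for i in range(1, 501)]   # invented utilities 0.01 .. 5.00
lenient = crime_rate(threshold=2.0, penalty=4.0, utilities=utilities)
strict = crime_rate(threshold=0.0, penalty=4.0, utilities=utilities)
```

Here `strict < lenient`: a group convicted on less evidence ends up committing fewer crimes, which is why a court's prior about a group can become self-confirming in equilibrium.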
The distribution of sentences in tax-related cases: evidence from Spanish courts of appeals
The distribution of sentences in tax-related cases in Spain shows that the government tends to lose more often in these cases than in any other type of administrative case; the distribution also varies widely across tax types and other variables. Our purpose is thus twofold: first, we identify the factors that explain the outcome of tax-related cases; then, we use those factors to build a model that forecasts the government's probability of success in such cases.
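The forecasting step can be illustrated with a minimal logistic-regression sketch. Everything below is invented for illustration (the features, the synthetic data, and the model form); the abstract does not specify which model the authors use.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Fit weights by plain stochastic gradient descent on the log-loss."""
    w = [0.0] * (len(X[0]) + 1)            # last entry is the intercept
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + w[-1])
            err = p - yi
            for j in range(len(xi)):
                w[j] -= lr * err * xi[j]
            w[-1] -= lr * err
    return w

def predict(w, xi):
    """Forecast P(government wins) for one case."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + w[-1])

# Invented case-level features: [tax_type_is_VAT, amount_in_dispute (scaled)]
random.seed(0)
X = [[random.randint(0, 1), random.random()] for _ in range(200)]
# Invented data-generating rule: the government loses VAT cases more often.
y = [1 if (0.3 + 0.4 * x[1] - 0.5 * x[0]) > random.random() else 0 for x in X]
w = fit_logistic(X, y)
```

With the synthetic rule above, the fitted model assigns a lower win probability to VAT cases than to otherwise identical non-VAT cases, mirroring the kind of factor-based forecast the abstract describes.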
Some new results on decidability for elementary algebra and geometry
We carry out a systematic study of decidability for theories of (a) real
vector spaces, inner product spaces, and Hilbert spaces and (b) normed spaces,
Banach spaces and metric spaces, all formalised using a 2-sorted first-order
language. The theories for list (a) turn out to be decidable while the theories
for list (b) are not even arithmetical: the theory of 2-dimensional Banach
spaces, for example, has the same many-one degree as the set of truths of
second-order arithmetic.
We find that the purely universal and purely existential fragments of the
theory of normed spaces are decidable, as is the AE fragment of the theory of
metric spaces. These results are sharp of their type: reductions of Hilbert's
10th problem show that the EA fragments for metric and normed spaces and the AE
fragment for normed spaces are all undecidable.
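To make the fragment terminology concrete, here is an illustrative AE (forall-exists) sentence in the language of metric spaces; the example is ours, not the paper's. The AE fragment consists of prenex sentences in which all universal quantifiers precede all existential ones.

```latex
% An AE sentence over metric spaces, valid in every metric space
% (take z = x, so that d(x,z) = 0 \le d(x,y)):
\forall x\, \forall y\, \exists z\; \bigl( d(x,z) \le d(x,y) \bigr)
```

Swapping the quantifier blocks gives an EA sentence; by the reductions of Hilbert's 10th problem mentioned above, decidability of the EA fragment fails for metric spaces even though the AE fragment is decidable.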
Probabilities on Sentences in an Expressive Logic
Automated reasoning about uncertain knowledge has many applications. One
difficulty when developing such systems is the lack of a completely
satisfactory integration of logic and probability. We address this problem
directly. Expressive languages like higher-order logic are ideally suited for
representing and reasoning about structured knowledge. Uncertain knowledge can
be modeled by using graded probabilities rather than binary truth-values. The
main technical problem studied in this paper is the following: Given a set of
sentences, each having some probability of being true, what probability should
be ascribed to other (query) sentences? A natural wish-list is that the probability distribution (i) is consistent with the knowledge base, (ii) supports a consistent inference procedure and in particular (iii) reduces to deductive logic in the limit of probabilities 0 and 1, (iv) allows (Bayesian) inductive reasoning, (v) allows learning in the limit, and in particular (vi) allows confirmation of universally quantified hypotheses/sentences. We translate this wish-list into technical requirements
for a prior probability and show that probabilities satisfying all our criteria
exist. We also give explicit constructions and several general
characterizations of probabilities that satisfy some or all of the criteria and
various (counter) examples. We also derive necessary and sufficient conditions
for extending beliefs about finitely many sentences to suitable probabilities
over all sentences, and in particular least dogmatic or least biased ones. We
conclude with a brief outlook on how the developed theory might be used and
approximated in autonomous reasoning agents. Our theory is a step towards a
globally consistent and empirically satisfactory unification of probability and
logic. (Presented at the Progic 2011 conference in New York.)
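The core question can be shown on a toy propositional instance; the example below is ours, not the paper's construction. Given a knowledge base assigning P(A) = 0.7 and P(A implies B) = 0.9, we ask which probabilities for the query sentence B are consistent, by searching a coarse grid of distributions over the four truth assignments to (A, B).

```python
from itertools import product

STEP = 0.05
vals = [round(i * STEP, 2) for i in range(int(round(1 / STEP)) + 1)]

def prob(world_probs, formula):
    """Probability mass of the worlds in which the formula holds."""
    return sum(p for (a, b), p in world_probs.items() if formula(a, b))

consistent_pb = []
for p00, p01, p10 in product(vals, repeat=3):
    p11 = round(1.0 - p00 - p01 - p10, 2)   # masses must sum to 1
    if p11 < 0:
        continue
    w = {(0, 0): p00, (0, 1): p01, (1, 0): p10, (1, 1): p11}
    if abs(prob(w, lambda a, b: a) - 0.7) > 1e-9:          # P(A) = 0.7
        continue
    if abs(prob(w, lambda a, b: (not a) or b) - 0.9) > 1e-9:  # P(A -> B) = 0.9
        continue
    consistent_pb.append(prob(w, lambda a, b: b))
```

The knowledge base pins P(B) down only to an interval, here roughly [0.6, 0.9]; singling out one distribution in such an interval (e.g. a least dogmatic one) is exactly the kind of choice the paper's prior-probability constructions address.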
The modifier effect and property mutability
The modifier effect is the reduction in perceived likelihood of a generic property sentence when the head noun is modified. We investigated the prediction that the modifier effect would be stronger for mutable than for central properties, but found no evidence for this predicted interaction over the course of five experiments. However, Experiment 6, which provided a brief context for the modified concepts to lend them greater credibility, did reveal the predicted interaction. It is argued that the modifier effect arises primarily from a general lack of confidence in generic statements about the typical properties of unfamiliar concepts. Neither prototype nor classical models of concept combination receive support from the phenomenon.
Advertising repetition and complexity of digital signage advertisements: simplicity rules!
Digital signage is arguably the fastest-growing advertising medium of the moment: LCD screens are almost impossible for consumers to avoid in everyday life, yet little academic research has explored the medium's potential. An experiment (3x2x4) was conducted to test the role of the intensity of complexity (simple/moderate/complex), the dimension of complexity (visual/lexical), and the level of repetition (one/four/seven/ten exposures) on the attitude toward digital signage advertisements (Aad). The results indicate a significant influence of advertising complexity on Aad: simple ads with a dominant visual component clearly work best.