79 research outputs found

    Doing right by the eyewitness evidence: a response to Berkowitz et al.

    Berkowitz et al. (Berkowitz, S. R., Garrett, B. L., Fenn, K. M., & Loftus, E. F. (2020). Convicting with confidence? Why we should not over-rely on eyewitness confidence. Memory. https://doi.org/10.1080/09658211.2020.1849308) attribute to us the claim that "confidence trumps all", and the few out-of-context quotations they selected can certainly be used to create that false impression. However, it is easily disproved, and we do so here. The notion that "confidence trumps all" is the mistake that the jurors made in the DNA exoneration cases, not a position that we have ever advocated.

    The impact of sleep on eyewitness identifications

    Published by the Royal Society under the terms of the Creative Commons Attribution License http://creativecommons.org/licenses/by/4.0/, which permits unrestricted use, provided the original author and source are credited. Peer reviewed.

    Useful scientific theories are useful: A reply to Rouder, Pratte, and Morey (2010)

    In a recognition memory experiment, Mickes, Wixted, and Wais (2007) reported that distributional statistics computed from ratings made using a 20-point confidence scale (which showed that the standard deviation of the ratings made to lures was approximately 0.80 times that of the targets) essentially matched the distributional statistics estimated indirectly by fitting a Gaussian signal-detection model to the receiver operating characteristic (ROC). We argued that the parallel results serve to increase confidence in the Gaussian unequal-variance model of recognition memory. Rouder, Pratte, and Morey (2010) argue that the results are instead uninformative. In their view, parametric models of latent memory strength are not empirically distinguishable. As such, they argue, our conclusions are arbitrary, and parametric ROC analysis should be abandoned. In an attempt to demonstrate the inherent untestability of parametric models, they describe a non-Gaussian equal-variance model that purportedly accounts for our findings just as well as the Gaussian unequal-variance model does. However, we show that their new model, despite being contrived after the fact and in full view of the to-be-explained data, does not account for the results as well as the unequal-variance Gaussian model does. This outcome manifestly demonstrates that parametric models are, in fact, testable. Moreover, the results differentially favor the Gaussian account over the probit model and over several other reasonable distributional forms (such as the Weibull and the lognormal).
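
    To make the comparison described above concrete, the sketch below simulates recognition data under an unequal-variance Gaussian signal-detection model and recovers the lure-to-target standard-deviation ratio in both ways: directly from the binned confidence ratings and from the slope of the z-transformed ROC. The distribution parameters, criterion placements, and sample size are illustrative assumptions, not values or code from Mickes, Wixted, and Wais (2007).

```python
# A minimal sketch (assumed parameters, not the original study's data or code):
# under an unequal-variance Gaussian signal-detection model, the lure-to-target
# ratio of rating standard deviations measured directly from a confidence scale
# should roughly match the ratio recovered from the ROC, i.e., the slope of the
# z-transformed ROC.
import numpy as np
from scipy.stats import norm, linregress

rng = np.random.default_rng(0)
n = 20000
mu_target, sd_target = 1.25, 1.25   # illustrative target-strength distribution
mu_lure, sd_lure = 0.0, 1.0         # lure strength ~ N(0, 1); true ratio = 0.80

targets = rng.normal(mu_target, sd_target, n)
lures = rng.normal(mu_lure, sd_lure, n)

# Map latent strength onto a 20-point confidence scale via evenly spaced criteria.
criteria = np.linspace(-2.5, 3.5, 19)
ratings_t = np.digitize(targets, criteria) + 1   # ratings 1..20
ratings_l = np.digitize(lures, criteria) + 1

# (a) Distributional statistic computed directly from the raw ratings.
direct_ratio = ratings_l.std() / ratings_t.std()

# (b) ROC-based estimate: cumulative hit and false-alarm rates at each rating
#     criterion, z-transformed and fit with a line; under the Gaussian model the
#     slope estimates sd_lure / sd_target.
hit = np.array([(ratings_t >= c).mean() for c in range(2, 21)])
fa = np.array([(ratings_l >= c).mean() for c in range(2, 21)])
ok = (hit > 0) & (hit < 1) & (fa > 0) & (fa < 1)
zroc_slope = linregress(norm.ppf(fa[ok]), norm.ppf(hit[ok])).slope

print(f"direct SD ratio (lures/targets): {direct_ratio:.2f}")
print(f"z-ROC slope (ROC-based ratio):   {zroc_slope:.2f}")   # both near 0.80
```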

    Major memory for microblogs

    Online social networking is vastly popular and permits its members to post their thoughts as microblogs, an opportunity that people exploit, on Facebook alone, over 30 million times an hour. Such trivial ephemera, one might think, should vanish quickly from memory; alternatively, they may comprise the sort of information that our memories are tuned to recognize, if that which we readily generate, we also readily store. In the first two experiments, participants' memory for Facebook posts was found to be strikingly stronger than their memory for human faces or sentences from books, by a margin comparable to the difference in memory strength between amnesics and healthy controls. The second experiment suggested that this difference is not due to Facebook posts spontaneously generating social elaboration, because memory for posts is enhanced as much by adding social elaboration as is memory for book sentences. Our final experiment, using headlines, sentences, and reader comments from articles, suggested that the remarkable memory for microblogs is also not due to their completeness or simply their topic, but may be a more general phenomenon of their being the largely spontaneous and natural emanations of the human mind.

    Theoretical vs. empirical discriminability: the application of ROC methods to eyewitness identification

    Receiver operating characteristic (ROC) analysis was introduced to the field of eyewitness identification 5 years ago. Since that time, it has been both influential and controversial, and the debate has raised an issue about measuring discriminability that is rarely considered. The issue concerns the distinction between empirical discriminability (measured by area under the ROC curve) vs. underlying/theoretical discriminability (measured by d′ or variants of it). Under most circumstances, the two measures will agree about a difference between two conditions in terms of discriminability. However, it is possible for them to disagree, and that fact can lead to confusion about which condition actually yields higher discriminability. For example, if the two conditions have implications for real-world practice (e.g., a comparison of competing lineup formats), should a policymaker rely on the area-under-the-curve measure or the theory-based measure? Here, we illustrate the fact that a given empirical ROC yields as many underlying discriminability measures as there are theories that one is willing to take seriously. No matter which theory is correct, for practical purposes, the singular area-under-the-curve measure best identifies the diagnostically superior procedure. For that reason, area under the ROC curve informs policy in a way that underlying theoretical discriminability never can. At the same time, theoretical measures of discriminability are equally important, but for a different reason. Without an adequate theoretical understanding of the relevant task, the field will be in no position to enhance empirical discriminability.
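
    As a concrete illustration of the distinction drawn above, the sketch below computes both measures from hypothetical lineup data: empirical discriminability as the area under a confidence-binned eyewitness ROC, and theoretical discriminability as an equal-variance Gaussian d′. The identification counts, confidence bins, and lineup numbers are invented for illustration and do not come from the paper.

```python
# A hedged sketch with hypothetical counts (not data from the paper), contrasting
# empirical discriminability (area under a confidence-binned eyewitness ROC) with
# a theory-based measure (an equal-variance Gaussian d').
import numpy as np
from scipy.stats import norm

# Hypothetical suspect identifications at high, medium, and low confidence,
# from 1000 target-present and 1000 target-absent lineups.
n_tp, n_ta = 1000, 1000
suspect_ids_tp = np.array([300, 150, 100])   # correct IDs of the guilty suspect
suspect_ids_ta = np.array([20, 40, 60])      # false IDs of the innocent suspect

# Empirical ROC: cumulative correct-ID rate vs. cumulative false-ID rate,
# sweeping from the most to the least conservative confidence criterion.
hit_rates = np.cumsum(suspect_ids_tp) / n_tp
fa_rates = np.cumsum(suspect_ids_ta) / n_ta

# Area under the (partial) empirical ROC via the trapezoidal rule, anchored at (0, 0).
x = np.concatenate(([0.0], fa_rates))
y = np.concatenate(([0.0], hit_rates))
pauc = np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2)

# Theory-based discriminability: equal-variance Gaussian d' from the overall rates.
d_prime = norm.ppf(hit_rates[-1]) - norm.ppf(fa_rates[-1])

print(f"empirical pAUC:  {pauc:.3f}")
print(f"theory-based d': {d_prime:.2f}")
```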

    Who's funny: Gender stereotypes, humor production, and memory bias

    It has often been asserted, by both men and women, that men are funnier. We explored two possible explanations for such a view, first testing whether men, when instructed to be as funny as possible, write funnier cartoon captions than do women, and second examining whether there is a tendency to falsely remember funny things as having been produced by men. A total of 32 participants, half from each gender, wrote captions for 20 cartoons. Raters then indicated the humor success of these captions. Raters of both genders found the captions written by males funnier, though this preference was significantly stronger among the male raters. In the second experiment, male and female participants were presented with the funniest and least funny captions from the first experiment, along with the caption author's gender. On a memory test, both females and males disproportionately misattributed the humorous captions to males and the nonhumorous captions to females. Men might think men are funnier because they actually find them so, but although women rated the captions written by males slightly higher, our data suggest that they regard men as funnier largely because they falsely attribute funny things to them.