
    Get the gist? The effects of processing depth on false recognition in short-term and long-term memory

    Gist-based processing has been proposed to account for robust false memories in the converging-associates task. The deep-encoding processes known to enhance verbatim memory also strengthen gist memory and increase distortions of long-term memory (LTM). Recent research has demonstrated that compelling false memory illusions are relatively delay-invariant, also occurring under canonical short-term memory (STM) conditions. To investigate the contributions of gist to false memory at short and long delays, processing depth was manipulated as participants encoded lists of four semantically related words and were probed either immediately following a filled 3- to 4-s retention interval or approximately 20 min later in a surprise recognition test. In two experiments, the encoding manipulation dissociated STM and LTM on the frequency, but not the phenomenology, of false memory. Deep encoding in STM increased false recognition rates in LTM, but confidence ratings and remember/know judgments were similar across delays and did not differ as a function of processing depth. These results suggest that some shared and some unique processes underlie false memory illusions at short and long delays.
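
    The abstract quantifies false memory through false recognition rates alongside confidence and remember/know judgments. For orientation only, here is a minimal sketch of the standard signal-detection measure often applied to such recognition data; the paper's own analyses are not reproduced here, and all counts below are invented:

```python
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity (d') for a recognition test.

    The +0.5 / +1 log-linear correction keeps rates away from 0 and 1,
    where the normal quantile function is undefined.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts: studied probes vs. semantically related lures
print(round(dprime(hits=42, misses=8, false_alarms=15, correct_rejections=35), 2))
```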

    How Daubert and its Progeny Have Failed Criminalistics Evidence and a Few Things the Judiciary Could Do About It.

    Part I documents how courts have failed to faithfully apply Daubert’s criteria for scientific validity to this type of evidence. It describes how ambiguities and flaws in the terminology adopted in Daubert, combined with the opaqueness of forensic-science publications and standards, have been exploited to shield some test methods from critical judicial analysis. Simply desisting from these avoidance strategies would be an improvement. Part II notes how part of the U.S. Supreme Court’s opinion in Kumho Tire Co. v. Carmichael has enabled courts to lower the bar for what is presented as scientific evidence by mistakenly maintaining that there is no difference between that evidence and other expert testimony that need not be scientifically validated. It suggests that a version of Rule 702 that explicitly insists on more rigorous validation of evidence that is promoted or understood as being “scientific” would be workable and more clearly compatible with the rule’s common law roots. Part III sketches various meanings of the terms “reliability” and “validity” in science and statistics, on the one hand, and in the rules and opinions on the admissibility of expert evidence, on the other. It discusses the two-part definition of “validity” in the PCAST report and the proposed criteria for demonstrating scientific validity of subjective pattern-matching testimony. It contends that if “validity” means that a procedure (even a highly subjective one) for making measurements and drawing inferences is fit for its intended use, then courts must still evaluate whether test results with higher error rates than the ones selected in the report might nevertheless assist fact finders who are appropriately informed of the evidence’s probative value. Finally, Part IV articulates two distinct approaches to informing judges or jurors of the import of similarities in features: the traditional one, in which examiners opine on the truth and falsity of source hypotheses, and a more finely grained one, in which criminalists report only on the strength of the evidence. It suggests that the rules for admitting scientific evidence need to be flexible enough to accommodate the latter, likelihood-based testimony when it has a satisfactory, empirically established basis.
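
    The “strength of the evidence” reporting sketched in Part IV is conventionally formalized as a likelihood ratio. The article’s own formulation is not quoted above, so the standard form is supplied here for orientation, with H_s and H_d denoting the same-source and different-source hypotheses:

```latex
% Likelihood ratio for observed features E under source-level hypotheses
\[
  \mathrm{LR} \;=\; \frac{P(E \mid H_s)}{P(E \mid H_d)},
  \qquad
  \underbrace{\frac{P(H_s \mid E)}{P(H_d \mid E)}}_{\text{posterior odds}}
  \;=\; \mathrm{LR} \,\times\,
  \underbrace{\frac{P(H_s)}{P(H_d)}}_{\text{prior odds}}
\]
```

    On this approach the examiner reports only the LR; combining it with the prior odds, and hence deciding the source question itself, is left to the fact finder.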

    The Critical Role of Statistics in Demonstrating the Reliability of Expert Evidence

    Federal Rule of Evidence 702, which covers testimony by expert witnesses, allows a witness to testify “in the form of an opinion or otherwise” if “the testimony is based on sufficient facts or data” and “is the product of reliable principles and methods” that have been “reliably applied.” The determination of “sufficient” (facts or data) and whether the “reliable principles and methods” relate to the scientific question at hand involve more discrimination than the current Rule 702 may suggest. Using examples from latent fingerprint matching and trace evidence (bullet lead and glass), I offer some criteria that scientists often consider in assessing the “trustworthiness” of evidence to enable courts to better distinguish between “trustworthy” and “questionable” evidence. The codification of such criteria may ultimately strengthen the current Rule 702 so courts can better distinguish between demonstrably scientific sufficiency and “opinion” based on inadequate (or inappurtenant) methods.
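
    One criterion scientists routinely apply when assessing trustworthiness of this kind is an error rate estimated from validation studies, reported with its statistical uncertainty rather than as a bare point value. A minimal sketch under that framing; the study counts below are hypothetical, not taken from the article:

```python
from scipy.stats import beta

def clopper_pearson_upper(errors, n, confidence=0.95):
    """One-sided upper confidence bound on an error rate.

    Exact (Clopper-Pearson) bound: the largest error rate still
    plausible after observing `errors` mistakes in `n` comparisons.
    """
    if errors == n:
        return 1.0
    return beta.ppf(confidence, errors + 1, n - errors)

# Hypothetical black-box study: 6 false positives in 2,000 comparisons
print(f"observed rate:   {6 / 2000:.4f}")
print(f"95% upper bound: {clopper_pearson_upper(6, 2000):.4f}")
```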

    Decision-making in blackjack: an electrophysiological analysis

    Previous studies have identified a negative potential in the event-related potential (ERP), the error-related negativity (ERN), which is claimed to be triggered by a deviation from a reward expectation. Furthermore, this negativity is related to shifts in risk taking, strategic behavioral adjustments, and inhibition. We used a computer Blackjack gambling task to further examine the process associated with the ERN. Our findings are in line with the view that the ERN process is related to the degree of reward expectation. Furthermore, increased ERN amplitude is associated with the negative evaluation of ongoing decisions, and the amplitude of the ERN is directly related to risk-taking and decision-making behavior. However, the findings suggest that an explanation exclusively based on the deviation from a reward expectation may be insufficient and that the intention of the participants and the importance of a negative event for learning and behavioral change are crucial to the understanding of ERN phenomena.
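
    ERN amplitude in paradigms like this is typically measured as the mean voltage in a short response-locked window at a fronto-central electrode, after baseline correction. A minimal sketch under that assumption; the electrode, windows, and data below are illustrative, not the paper's:

```python
import numpy as np

def ern_amplitude(epochs, times, window=(0.0, 0.1), baseline=(-0.2, 0.0)):
    """Per-trial ERN amplitude from response-locked EEG epochs.

    epochs : (n_trials, n_samples) voltages at one electrode (e.g. FCz)
    times  : (n_samples,) seconds relative to the response
    """
    base = epochs[:, (times >= baseline[0]) & (times < baseline[1])]
    corrected = epochs - base.mean(axis=1, keepdims=True)
    win = (times >= window[0]) & (times < window[1])
    return corrected[:, win].mean(axis=1)

# Synthetic epochs: 40 trials sampled at 256 Hz from -0.2 s to 0.5 s
rng = np.random.default_rng(0)
times = np.arange(-0.2, 0.5, 1 / 256)
epochs = rng.normal(0.0, 5.0, (40, times.size))
print(ern_amplitude(epochs, times).mean())
```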

    The effects of self-awareness on body movement indicators of the intention to deceive

    A study was conducted to investigate the body movements of participants waiting to be interviewed in one of two conditions: preparing to answer questions truthfully or preparing to lie. The effects of increased self-awareness were also investigated, with half of the participants facing a mirror and the other half facing a blank wall. Analysis of covertly obtained video footage showed a significant interaction between deception level and self-awareness for the duration of hand/arm movements. Without a mirror, participants expecting to lie spent less time moving their hands than those expecting to tell the truth; the opposite pattern was seen in the presence of a mirror. Participants expecting to lie also had higher levels of anxiety and thought that they were left waiting for less time than those expecting to tell the truth. These findings led to the identification of further research areas with the potential to support deception detection in security applications.
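
    The reported crossover (less hand/arm movement when expecting to lie without a mirror, more with one) is the signature of a deception-by-self-awareness interaction. A minimal sketch of how such an interaction is commonly tested; the data below are synthetic and the design is simplified to between-subjects:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Synthetic per-participant hand/arm movement durations (seconds)
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "deception": np.repeat(["truth", "lie"], 40),
    "mirror": np.tile(np.repeat(["absent", "present"], 20), 2),
    "duration": rng.normal(30.0, 8.0, 80),
})

# A crossover like the one reported would appear as a significant
# C(deception):C(mirror) interaction term
model = ols("duration ~ C(deception) * C(mirror)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```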

    History-based action selection bias in posterior parietal cortex.

    Making decisions based on choice-outcome history is a crucial, adaptive ability in life. However, the neural circuit mechanisms underlying history-dependent decision-making are poorly understood. In particular, history-related signals have been found in many brain areas during various decision-making tasks, but the causal involvement of these signals in guiding behavior is unclear. Here we addressed this issue using behavioral modeling, two-photon calcium imaging, and optogenetic inactivation in mice. We report that a subset of neurons in the posterior parietal cortex (PPC) closely reflect the choice-outcome history and history-dependent decision biases, and that PPC inactivation diminishes the history dependency of choice. Specifically, many PPC neurons show history- and bias-tuning during the inter-trial intervals (ITI), and the history dependency of choice is affected by PPC inactivation during the ITI but not during trials. These results indicate that PPC is a critical region mediating the subjective use of history in biasing action selection.
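
    The paper's behavioral model is not reproduced above; as a generic illustration, history-dependent choice bias is often quantified by regressing the current choice on the previous choice and its outcome, so that nonzero weights indicate win-stay/lose-switch-style dependence. A sketch on synthetic data, with all variable names assumed:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic two-alternative task: choices and rewards coded 0/1
rng = np.random.default_rng(2)
choices = rng.integers(0, 2, 500)
rewards = rng.integers(0, 2, 500)

# History regressors: previous choice (+/-1), and previous choice
# signed by its outcome (captures win-stay / lose-switch tendencies)
prev_choice = choices[:-1] * 2 - 1
prev_outcome = prev_choice * (rewards[:-1] * 2 - 1)
X = np.column_stack([prev_choice, prev_outcome])
y = choices[1:]

model = LogisticRegression().fit(X, y)
print(model.coef_)  # nonzero weights would indicate history-dependent bias
```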

    How Jurors Evaluate Fingerprint Evidence: The Relative Importance of Match Language, Method Information, and Error Acknowledgment

    Fingerprint examiners use a variety of terms and phrases to describe a finding of a match between a defendant's fingerprints and fingerprint impressions collected from a crime scene. Despite the importance and ubiquity of fingerprint evidence in criminal cases, no prior studies examine how jurors evaluate such evidence. We present two studies examining the impact of different match phrases, method descriptions, and statements about possible examiner error on the weight given to fingerprint identification evidence by laypersons. In both studies, the particular phrase chosen to describe the finding of a match, whether simple and imprecise or detailed and claiming near certainty, had little effect on participants' judgments about the guilt of a suspect. In contrast, the examiner admitting the possibility of error reduced the weight given to the fingerprint evidence, regardless of whether the admission was made during direct or cross-examination. In addition, the examiner providing information about the method used to make fingerprint comparisons reduced the impact of admitting the possibility of error. We found few individual differences in reactions to the fingerprint evidence across a wide range of participant variables, and we found widespread agreement regarding the uniqueness of fingerprints and the reliability of fingerprint identifications. Our results suggest that information about the reliability of fingerprint identifications will have a greater impact on lay interpretations of fingerprint evidence than the specific qualitative or quantitative terms chosen to describe a fingerprint match.