
    Test Structure and Administration

    In 1970, Dr. David Raskin, a psychologist at the University of Utah, began a study of the probable-lie comparison question polygraph technique. Raskin and his colleagues systematically studied and refined the elements of polygraphy by determining which aspects of the technique could be scientifically shown to increase validity and reliability (Raskin & Honts, 2002). Their efforts culminated in what is known today as the Utah approach to the Comparison Question Test (CQT). The Utah-CQT is an empirically consistent and unified approach to polygraphy. Traditionally employed as a single-issue Zone Comparison Test (ZCT), it is also amenable to use as a multi-facet or multiple-issue (mixed-issue) General Question Technique (GQT) and the related family of Modified General Question Technique (MGQT) examination formats. The Utah-CQT and the corresponding Utah Numerical Scoring System (Bell, Raskin, Honts & Kircher, 1999; Handler, 2006) resulted from over 30 years of scientific research and peer review. The resulting technique provides some of the highest rates of criterion accuracy and interrater reliability of any polygraph examination protocol (Senter, Dollins & Krapohl, 2004; Krapohl, 2006). The authors discuss the Utah-CQT using the Probable Lie Test (PLT) as well as the lesser-known Directed Lie Test (DLT), and review some of the possible benefits offered by each method.
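    A minimal sketch of how a Utah-style numerical score might be tallied, assuming the -3 to +3 per-spot scores and the ±6 grand-total cutoffs described in published accounts of the system (Bell, Raskin, Honts & Kircher, 1999); the data layout and function names here are illustrative, not the authors' implementation:

```python
# Illustrative sketch of Utah-style numerical scoring (not an official
# implementation). Assumption: each relevant question ("spot") receives
# a score from -3 to +3 per channel per chart by comparing its reaction
# to that of its comparison question; negative scores indicate stronger
# reactions to the relevant question. The +6/-6 cutoffs on the grand
# total follow published descriptions of single-issue Utah scoring.

TRUTHFUL_CUTOFF = 6      # grand total >= +6 -> no deception indicated
DECEPTIVE_CUTOFF = -6    # grand total <= -6 -> deception indicated

def score_chart(spot_scores):
    """Sum the -3..+3 spot scores assigned for one chart.

    spot_scores: dict mapping spot label -> list of per-channel scores,
    e.g. {"R1": [-1, 0, -2], "R2": [1, -1, 0]}.
    """
    return sum(sum(channels) for channels in spot_scores.values())

def classify(charts):
    """Combine chart totals into a grand total and apply the cutoffs."""
    grand_total = sum(score_chart(chart) for chart in charts)
    if grand_total >= TRUTHFUL_CUTOFF:
        return grand_total, "No deception indicated"
    if grand_total <= DECEPTIVE_CUTOFF:
        return grand_total, "Deception indicated"
    return grand_total, "Inconclusive"

# Example: three charts, two relevant questions, three channels each.
charts = [
    {"R1": [-1, -2, 0], "R2": [-1, 0, -1]},
    {"R1": [0, -1, -1], "R2": [-2, 0, 0]},
    {"R1": [-1, 0, 0], "R2": [0, -1, -1]},
]
print(classify(charts))  # -> (-12, 'Deception indicated')
```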

    My point of view...

    "The technology has advanced to computers that allow much more reliable processing and storage of the data."(...

    DETECTING ADVERSE DRUG REACTIONS IN THE NURSING HOME SETTING USING A CLINICAL EVENT MONITOR

    Adverse drug reactions (ADRs) are the most clinically significant and costly medication-related problems in nursing homes (NHs); they are associated with an estimated 93,000 deaths a year and as much as $4 billion in excess healthcare expenditures. Current ADR detection and management strategies that rely on retrospective chart reviews by pharmacists (i.e., usual care) are inadequate. Active medication monitoring systems, such as clinical event monitors, are recommended by many safety organizations as an alternative for detecting and managing ADRs. In the hospital setting, these systems have been shown to be less expensive and faster than usual care, and to identify ADRs not normally detected by clinicians. The main research goal of this dissertation is to review the rationale for the development, and the subsequent evaluation, of an active medication monitoring system to automate the detection of ADRs in the NH setting. The dissertation comprises three parts, each with its own emphasis and methodology centered on how best to detect ADRs in the NH setting.

    The first paper describes a systematic review of the pharmacy and laboratory signals used by clinical event monitors to detect ADRs in hospitalized adult patients. The second paper describes the development of a consensus list of laboratory, pharmacy, and Minimum Data Set signals that a clinical event monitor can use to detect potential ADRs. The third paper describes the implementation and pharmacist evaluation of a clinical event monitor using the signals developed by consensus.

    These findings will help us better understand, design, and evaluate active medication monitoring systems that automate the detection of ADRs in the NH setting. Future research is needed to determine whether NH patients managed by physicians who receive active medication monitoring alerts have more ADRs detected, have faster ADR management response times, and generate more cost savings from a societal perspective than patients receiving usual care.
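    As a hedged illustration of how a clinical event monitor can pair pharmacy and laboratory signals, the sketch below encodes a few drug-lab rules that fire alerts; the specific signals, thresholds, and record layout are invented for illustration and are not the consensus list the dissertation developed:

```python
# Minimal sketch of a rule-based clinical event monitor. Each signal
# pairs a drug exposure (pharmacy signal) with a laboratory finding
# (laboratory signal) that may indicate an adverse drug reaction (ADR).
# All rules and thresholds below are hypothetical examples.

from dataclasses import dataclass

@dataclass
class PatientRecord:
    patient_id: str
    active_drugs: set    # drug names currently dispensed
    labs: dict           # lab test name -> most recent value

SIGNALS = [
    ("warfarin", "INR", lambda v: v > 4.5,
     "Possible over-anticoagulation: INR > 4.5 on warfarin"),
    ("digoxin", "serum_potassium", lambda v: v < 3.0,
     "Hypokalemia on digoxin increases toxicity risk"),
    ("lisinopril", "serum_creatinine", lambda v: v > 2.0,
     "Rising creatinine on an ACE inhibitor"),
]

def evaluate(record):
    """Return an alert message for every signal the record triggers."""
    alerts = []
    for drug, lab, is_abnormal, message in SIGNALS:
        value = record.labs.get(lab)
        if drug in record.active_drugs and value is not None and is_abnormal(value):
            alerts.append(f"{record.patient_id}: {message} (value={value})")
    return alerts

record = PatientRecord("NH-0042", {"warfarin", "lisinopril"},
                       {"INR": 5.1, "serum_creatinine": 1.4})
for alert in evaluate(record):
    print(alert)  # flags only the INR signal for this record
```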

    Trying an Accused Serial Sexual Harasser for Libel in a US Civil Court

    The goal of this article is to provide a class of #MeToo victims of a high-profile serial sexual harasser with a non-invasive method for civil action when the accused publicly dismisses the victims' claims as lies. When such libelous dismissals occur, the victims can be assembled into a class-action libel/defamation case, which in most US states must be mounted within two years of the claim. Because under current civil methods the plaintiffs would be subject to intense cross-examination in a civil jury trial, class-action lawsuits with small numbers of plaintiffs (e.g., 5–8) have proven impossible to conduct. This article provides a blueprint for a collaboration among the victims, credibility-assessment (lie-detector) experts, statisticians, and #MeToo attorneys to litigate libel suits, which would likely produce out-of-court settlements. Once the first case is successfully completed, precedent will be set to bring other perpetrators to justice and to deter future exploitation. The evidentiary basis would be a test of the null hypothesis that all plaintiffs are lying: the inferred lying rates of plaintiffs answering "Yes" to "Did X sexually harass you?" would be compared with the rates of matched population controls who are known liars.
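    A minimal sketch of that statistical logic, assuming a binomial model: under the null hypothesis that all n plaintiffs are lying, each credibility test should show "deception indicated" at roughly the rate observed in known-liar controls. The rate p0 and the counts below are assumed for illustration, not figures from the article:

```python
# Under H0 ("all plaintiffs are lying"), the number of deception-
# indicated results among n plaintiffs is X ~ Binomial(n, p0), where p0
# is the deception-indicated rate among known-liar controls (assumed
# here). A small observed count k is evidence against H0.

from math import comb

def binomial_p_value(k, n, p0):
    """One-sided tail probability P(X <= k) for X ~ Binomial(n, p0)."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k + 1))

p0 = 0.85   # assumed deception-indicated rate in known-liar controls
n = 6       # plaintiffs tested (e.g., a 5-8 member class)
k = 1       # plaintiffs whose charts showed deception indicated

p = binomial_p_value(k, n, p0)
print(f"P(X <= {k} | all lying) = {p:.5f}")  # ~0.00040: reject the null
```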

    The empirical basis for the use of directed lie comparison questions in diagnostic and screening polygraphs.

    There has been some question as to when it is advantageous or "permissible" to use directed lie comparison (DLC) questions in polygraph testing. More specifically, the question is whether it is scientifically valid to use DLCs in diagnostic and/or screening test formats. Discussion of these questions extends quickly into the realm of professional ethics, which centers on ensuring that we, as professionals, make choices that benefit our profession, our agencies, our communities, our countries, and the individual being tested. Ethics is, after all, a discussion about right and wrong, with consideration for what good or bad things happen, and to whom, as a result of a particular choice of action. The polygraph profession sits at a crucial point in these ethical discussions, which pertain to theories of truth and deception and to the competition among rights, priorities, and potential impacts that may produce different benefits and consequences for individual persons and groups of people.

    It is a goal of science to provide evidence-based models for making decisions about individual cases, and for making policies that affect decisions pertaining to groups of cases. Evidence-based practices allow us to calculate expected results and the probability of error with mathematical precision, and therefore help us better manage the impact that decisions and actions have on individuals and groups. It is our position that answers to questions about scientific validity and ethics should be informed and determined by data and evidence, not by a declarative system of arbitrary rules without evidence.

    Compliance with policies and regulations is important, and this paper is not intended to supersede the existing policies or mandated field practices of any agency. Rather, this document is intended to orient the reader to the scientific evidence regarding DLCs and to anchor a more informed professional discussion of scientific validity and polygraph field practices. Administrators, policy makers, and field examiners place themselves in an untenable position when their decisions and policies are not grounded in science: that of having to explain or defend policies or field practices that are inconsistent with published scientific evidence available to opposing counsel during a legal contest. The same evidence that could be used to improve the effectiveness and validity of the polygraph could also be used to undermine the credibility and viability of the profession if we choose to ignore it. It is hoped that the information in this document will lead to further discussion and to improvements in policies and field practices that incorporate the current state of scientific evidence regarding the use of DLCs. The views and opinions expressed in this paper are those of the authors and do not necessarily represent the associations, agencies, and entities with which the authors are affiliated.
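    One worked example of the kind of error-probability calculation the authors have in mind: given a test's sensitivity and specificity, Bayes' rule yields the probability that a positive result is correct, which differs sharply between diagnostic base rates and screening base rates. The accuracy figures and base rates below are assumed round numbers, not values from the paper:

```python
# Bayes' rule applied to test outcomes: the positive predictive value
# (PPV) depends on the base rate of deception in the tested population,
# not just on the test's accuracy. All inputs here are illustrative.

def predictive_values(sensitivity, specificity, base_rate):
    tp = sensitivity * base_rate            # true positives
    fp = (1 - specificity) * (1 - base_rate)  # false positives
    fn = (1 - sensitivity) * base_rate        # false negatives
    tn = specificity * (1 - base_rate)        # true negatives
    ppv = tp / (tp + fp)   # P(deceptive | test says deceptive)
    npv = tn / (tn + fn)   # P(truthful  | test says truthful)
    return ppv, npv

# Diagnostic setting: roughly half the examinees are deceptive.
print(predictive_values(0.89, 0.83, 0.50))  # PPV ~0.84, NPV ~0.88

# Screening setting: few examinees are deceptive, so PPV drops sharply.
print(predictive_values(0.89, 0.83, 0.05))  # PPV ~0.22, NPV ~0.99
```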

    Non-publication of large randomized clinical trials: cross sectional analysis

    Objective To estimate the frequency with which results of large randomized clinical trials registered with ClinicalTrials.gov are not available to the public.

    Design Cross sectional analysis.

    Setting Trials with at least 500 participants that were prospectively registered with ClinicalTrials.gov and completed prior to January 2009.

    Data sources PubMed, Google Scholar, and Embase were searched to identify published manuscripts containing trial results. The final literature search occurred in November 2012. Registry entries for unpublished trials were reviewed to determine whether results for these studies were available in the ClinicalTrials.gov results database.

    Main outcome measures The frequency of non-publication of trial results and, among unpublished studies, the frequency with which results are unavailable in the ClinicalTrials.gov database.

    Results Of 585 registered trials, 171 (29%) remained unpublished. These 171 unpublished trials had an estimated total enrollment of 299,763 study participants. The median time between study completion and the final literature search was 60 months for unpublished trials. Non-publication was more common among trials that received industry funding (150/468, 32%) than among those that did not (21/117, 18%); P=0.003. Of the 171 unpublished trials, 133 (78%) had no results available in ClinicalTrials.gov.

    Conclusions Among this group of large clinical trials, non-publication of results was common, and the availability of results in the ClinicalTrials.gov database was limited. A substantial number of study participants were exposed to the risks of trial participation without the societal benefits that accompany the dissemination of trial results.
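    The reported P value for the industry-funding comparison can be checked against the counts given in the abstract with a two-proportion z-test (the paper itself may have used a different test, such as chi-squared; the counts are from the abstract, the test choice is ours):

```python
# Two-proportion z-test for H0: the non-publication rate is the same
# for industry-funded and non-industry-funded trials. Counts are taken
# from the abstract: 150/468 industry-funded vs 21/117 not.

from math import sqrt, erfc

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test comparing proportions x1/n1 and x2/n2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided normal tail probability
    return z, p_value

z, p = two_proportion_z(150, 468, 21, 117)
print(f"z = {z:.2f}, P = {p:.4f}")  # z ~ 3.00, P ~ 0.003, as reported
```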