
    Test Structure and Administration

    In 1970, Dr. David Raskin, a psychologist at the University of Utah, began a study of the probable-lie comparison question polygraph technique. Raskin and his colleagues systematically studied and refined the elements of polygraphy by determining which aspects of the technique could be scientifically shown to increase validity and reliability (Raskin & Honts, 2002). Their efforts culminated in the creation of what is known today as the Utah approach to the Comparison Question Test (CQT). The Utah-CQT is an empirically consistent and unified approach to polygraphy. Traditionally employed as a single-issue Zone Comparison Test (ZCT), it is also amenable to use as a multi-facet or multiple-issue (mixed-issue) General Question Technique (GQT) and the related family of Modified General Question Technique (MGQT) examination formats. The Utah-CQT and the corresponding Utah Numerical Scoring System (Bell, Raskin, Honts & Kircher, 1999; Handler, 2006) resulted from over 30 years of scientific research and scientific peer review. The resulting technique provides some of the highest rates of criterion accuracy and interrater reliability of any polygraph examination protocol (Senter, Dollins & Krapohl, 2004; Krapohl, 2006). The authors discuss the Utah-CQT using the Probable Lie Test (PLT) as well as the lesser-known Directed Lie Test (DLT) and review some of the possible benefits offered by each method.
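
    As a rough illustration of how a numerical scoring system of this kind reaches a decision, the sketch below sums per-question spot scores against fixed cutoffs. The three-point scale and the ±6 cutoffs follow common descriptions of Utah-style scoring, but the specific values and function names here are our illustrative assumptions, not the authors' published protocol.

        # Illustrative sketch of numerical-scoring decision logic (assumed
        # scale and cutoffs, not the published Utah protocol): each relevant
        # question receives an integer score in [-3, +3] per chart, and the
        # grand total is compared with decision cutoffs.

        def score_examination(chart_scores, truthful_cutoff=6, deceptive_cutoff=-6):
            """chart_scores: per-question integer scores across all charts."""
            total = sum(chart_scores)
            if total >= truthful_cutoff:
                return total, "no deception indicated"
            if total <= deceptive_cutoff:
                return total, "deception indicated"
            return total, "inconclusive"

        if __name__ == "__main__":
            # Example: three charts, three relevant questions each.
            scores = [+1, +2, 0, +1, +2, +1, 0, +1, +1]
            print(score_examination(scores))  # (9, 'no deception indicated')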

    The empirical basis for the use of directed lie comparison questions in diagnostic and screening polygraphs.

    There has been some question as to when it is advantageous or "permissible" to use directed lie comparison (DLC) questions in polygraph testing. More specifically, the question is whether it is scientifically valid to use DLCs in diagnostic and/or screening test formats. Discussion of this question extends quickly into the realm of professional ethics, which centers on ensuring that we, as professionals, make choices that benefit our profession, our agencies, our communities, our countries, and the individual being tested. Ethics is, after all, a discussion about right and wrong, with consideration for what good or bad things happen, and to whom, as a result of a particular choice of action. The polygraph profession sits at a crucial point of ethical discussion: these discussions pertain to theories of truth and deception, and also to the competition of rights, priorities, and potential impacts that may produce different benefits and consequences for individual persons and groups of people. It is a goal of science to provide evidence-based models for making decisions about individual cases, and for making policies that affect decisions pertaining to groups of cases. Evidence-based practices allow us to calculate the expected results and probability of error with mathematical precision, and therefore help us to better manage the impact that decisions and actions have on individuals and groups. It is our position that answers to questions about scientific validity and ethics should be informed and determined by data and evidence, not by a declarative system of arbitrary rules without evidence. Compliance with policies and regulations is important, and this paper is not intended to supersede the existing policies or mandated field practices of any agency. Rather, this document is intended to orient the reader to the scientific evidence regarding DLCs, and to anchor a more informed professional discussion regarding matters of scientific validity and polygraph field practices. Administrators, policy makers, and field examiners place themselves in an untenable position when their decisions and policies are not grounded in science: that position is one of having to explain or defend policies or field practices that are inconsistent with published scientific evidence available to opposing counsel during a legal contest. The same evidence that could be used to improve the effectiveness and validity of the polygraph could also be used to undermine the credibility and viability of the profession if we choose to ignore it. It is hoped that the information in this document will lead to further discussion and to improvements in policies and field practices that incorporate the current state of scientific evidence regarding the use of DLCs. The views and opinions expressed in this paper are those of the authors and do not necessarily represent the associations, agencies, and entities with whom the authors are affiliated.
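
    As a concrete example of the kind of evidence-based calculation the abstract describes, the sketch below applies Bayes' theorem to compute the probability that a positive ("deceptive") result is an error, given an assumed test accuracy and base rate. All numbers are hypothetical, chosen only to show the arithmetic; they are not drawn from the paper.

        # Hypothetical worked example: probability that a "deceptive" call is
        # a false positive, via Bayes' theorem. All inputs are assumed values.

        def false_positive_probability(sensitivity, specificity, base_rate):
            """P(examinee is truthful | positive result) for a binary test."""
            p_pos_given_deceptive = sensitivity
            p_pos_given_truthful = 1.0 - specificity
            p_positive = (p_pos_given_deceptive * base_rate
                          + p_pos_given_truthful * (1.0 - base_rate))
            return p_pos_given_truthful * (1.0 - base_rate) / p_positive

        if __name__ == "__main__":
            # Assumed: 90% sensitivity, 90% specificity, 10% base rate of
            # deception among examinees. Half of all positives are errors.
            print(round(false_positive_probability(0.90, 0.90, 0.10), 3))  # 0.5

    The example illustrates why screening contexts with low base rates deserve particular ethical attention: even a nominally accurate test can produce a high proportion of false positives among its positive calls.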

    Generative replay underlies compositional inference in the hippocampal-prefrontal circuit

    Human reasoning depends on reusing pieces of information by putting them together in new ways. However, very little is known about how compositional computation is implemented in the brain. Here, we ask participants to solve a series of problems that each require constructing a whole from a set of elements. With fMRI, we find that representations of novel constructed objects in the frontal cortex and hippocampus are relational and compositional. With MEG, we find that replay assembles elements into compounds, with each replay sequence constituting a hypothesis about a possible configuration of elements. The content of sequences evolves as participants solve each puzzle, progressing from predictable to uncertain elements and gradually converging on the correct configuration. Together, these results suggest a computational bridge between apparently distinct functions of hippocampal-prefrontal circuitry and a role for generative replay in compositional inference and hypothesis testing.
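
    To make the idea of replay sequences as hypotheses concrete, here is a toy sketch, our simplification rather than the authors' model: candidate configurations of elements are generated and pruned against observed constraints, converging on the consistent compositions. The elements and constraints are invented for illustration.

        # Toy illustration of hypothesis testing over compositions: candidate
        # orderings of elements are enumerated and pruned against constraints.
        # A simplification for exposition, not the hippocampal-prefrontal model.
        from itertools import permutations

        elements = ["A", "B", "C", "D"]
        # Assumed pairwise constraints: (x, y) means x must precede y.
        constraints = [("A", "B"), ("B", "C")]

        def consistent(config, constraints):
            return all(config.index(x) < config.index(y) for x, y in constraints)

        hypotheses = [p for p in permutations(elements) if consistent(p, constraints)]
        print(len(hypotheses), "configurations remain, e.g.", hypotheses[0])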

    Immortalized human keratinocytes: A model system to study the efficacy of therapeutic drugs in response to the chemical warfare agent sulfur mustard (HD)

    Cytokines have been established as biomarkers to detect exposure of cells to chemical warfare agents such as sulfur mustard (2,2'-dichlorodiethyl sulfide, HD). In this study, cultured normal and SV40-immortalized human epidermal keratinocyte (NHEK/IHEK) cells were compared as potential model systems to measure the efficacy of therapeutic drugs against HD. Immortalized human epidermal keratinocytes resemble their primary cell counterparts but have the advantage that they can be carried through long-term culture. Immortalized cells also provide consistency and durability and are less costly than primary keratinocytes. Immunoassay studies were performed to examine the response of these two cell lines to HD. We found that both the normal and immortalized keratinocytes secreted the pro-inflammatory mediator interleukin-8 (IL-8) when exposed to HD. However, a major difference was observed between NHEK cell line 6207 and IHEK cell line 425: IHEK cell line 425 produced higher levels of IL-8 than its normal counterpart, cell line 6207. This observation is significant because therapeutic drugs such as ibuprofen, which depress cytokine production, may not allow these biomarkers to be detected efficiently in experimental analysis of certain NHEK cell lines. The fact that IL-8 production is higher in cell line 425 makes this in vitro model a potential screening tool for studying the efficacy of drugs that suppress production of cytokine markers.
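
    A minimal sketch of how the reported difference in IL-8 secretion between the two cell lines might be tested statistically, using an independent-samples t-test; the replicate values below are invented for illustration and are not the study's data.

        # Hypothetical IL-8 immunoassay readings (pg/mL) for HD-exposed
        # cultures; values are invented for illustration only.
        from scipy import stats

        nhek_6207 = [210.0, 195.0, 225.0, 205.0]   # normal keratinocyte line
        ihek_425  = [480.0, 510.0, 465.0, 495.0]   # immortalized line

        t_stat, p_value = stats.ttest_ind(ihek_425, nhek_6207)
        print(f"t = {t_stat:.2f}, p = {p_value:.4f}")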

    Relations between lipoprotein(a) concentrations, LPA genetic variants, and the risk of mortality in patients with established coronary heart disease: a molecular and genetic association study

    Background: Lipoprotein(a) concentrations in plasma are associated with cardiovascular risk in the general population. Whether lipoprotein(a) concentrations or LPA genetic variants predict long-term mortality in patients with established coronary heart disease remains less clear. Methods: We obtained data from 3313 patients with established coronary heart disease in the Ludwigshafen Risk and Cardiovascular Health (LURIC) study. We tested associations of tertiles of lipoprotein(a) concentration in plasma and two LPA single-nucleotide polymorphisms ([SNPs] rs10455872 and rs3798220) with all-cause mortality and cardiovascular mortality by Cox regression analysis and with severity of disease by generalised linear modelling, with and without adjustment for age, sex, diabetes diagnosis, systolic blood pressure, BMI, smoking status, estimated glomerular filtration rate, LDL-cholesterol concentration, and use of lipid-lowering therapy. Results for plasma lipoprotein(a) concentrations were validated in five independent studies involving 10 195 patients with established coronary heart disease. Results for genetic associations were replicated through large-scale collaborative analysis in the GENIUS-CHD consortium, comprising 106 353 patients with established coronary heart disease and 19 332 deaths in 22 studies or cohorts. Findings: The median follow-up was 9·9 years. Increased severity of coronary heart disease was associated with lipoprotein(a) concentrations in plasma in the highest tertile (adjusted hazard ratio [HR] 1·44, 95% CI 1·14–1·83) and with the presence of either LPA SNP (1·88, 1·40–2·53). No associations were found in LURIC with all-cause mortality (highest tertile of lipoprotein(a) concentration in plasma 0·95, 0·81–1·11; either LPA SNP 1·10, 0·92–1·31) or cardiovascular mortality (0·99, 0·81–1·20 and 1·13, 0·90–1·40, respectively), nor in the validation studies. Interpretation: In patients with prevalent coronary heart disease, lipoprotein(a) concentrations and genetic variants showed no associations with mortality. We conclude that these variables are not useful risk factors to measure to predict progression to death after coronary heart disease is established. Funding: Seventh Framework Programme for Research and Technical Development (AtheroRemo and RiskyCAD), INTERREG IV Oberrhein Programme, Deutsche Nierenstiftung, Else-Kroener Fresenius Foundation, Deutsche Stiftung für Herzforschung, Deutsche Forschungsgemeinschaft, Saarland University, German Federal Ministry of Education and Research, Willy Robert Pitzer Foundation, and Waldburg-Zeil Clinics Isny.
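
    A minimal sketch of the kind of Cox regression described in the Methods, assuming the lifelines and pandas libraries and a synthetic data frame; the column names, covariates, and data below are placeholders, not the LURIC data set or the study's full adjustment model.

        # Sketch of a Cox proportional-hazards fit on synthetic data
        # (lifelines and pandas assumed available); columns are placeholders.
        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(0)
        n = 500
        df = pd.DataFrame({
            "lpa": rng.lognormal(3.0, 1.0, n),          # lipoprotein(a), mg/dL
            "age": rng.normal(62, 9, n),
            "followup_years": rng.exponential(9.9, n),  # time to event/censor
            "died": rng.integers(0, 2, n),              # event indicator
        })
        # Compare the top lipoprotein(a) tertile against the rest, as in the study.
        df["lpa_top_tertile"] = (df["lpa"] > df["lpa"].quantile(2 / 3)).astype(int)

        cph = CoxPHFitter()
        cph.fit(df[["lpa_top_tertile", "age", "followup_years", "died"]],
                duration_col="followup_years", event_col="died")
        cph.print_summary()  # hazard ratios with 95% CIs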

    100% RAG: Syracuse School of Architecture, Student Newspaper, 1989

    100% RAG: Syracuse School of Architecture, Student Newspaper, 1989. Student newsletter by student contributors at the Syracuse School of Architecture in 1989.

    Testing gravitational-wave searches with numerical relativity waveforms: Results from the first Numerical INJection Analysis (NINJA) project

    The Numerical INJection Analysis (NINJA) project is a collaborative effort between members of the numerical relativity and gravitational-wave data analysis communities. The purpose of NINJA is to study the sensitivity of existing gravitational-wave search algorithms using numerically generated waveforms and to foster closer collaboration between the numerical relativity and data analysis communities. We describe the results of the first NINJA analysis, which focused on gravitational waveforms from binary black hole coalescence. Ten numerical relativity groups contributed numerical data, which were used to generate a set of gravitational-wave signals. These signals were injected into a simulated data set designed to mimic the response of the Initial LIGO and Virgo gravitational-wave detectors. Nine groups analysed these data using search and parameter-estimation pipelines. Matched-filter algorithms, unmodelled-burst searches, and Bayesian parameter-estimation and model-selection algorithms were applied to the data. We report the efficiency of these search methods in detecting the numerical waveforms and measuring their parameters. We describe preliminary comparisons between the different search methods and suggest improvements for future NINJA analyses.
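
    As a sketch of the matched-filter idea underlying several of the searches, the toy example below correlates a known waveform template with noisy data and reports the peak signal-to-noise ratio. It assumes white unit-variance noise and ignores the detector response and noise spectrum, unlike the pipelines in the paper; the injected waveform is an invented stand-in for a numerical-relativity signal.

        # Toy time-domain matched filter in white noise (a simplification of
        # the pipelines described; real searches whiten by the detector noise
        # power spectral density).
        import numpy as np

        rng = np.random.default_rng(1)
        fs, dur = 1024, 4.0                   # sample rate (Hz), duration (s)
        t = np.arange(0, dur, 1 / fs)

        # Template: a short chirp-like sinusoid with a Gaussian envelope.
        template = np.sin(2 * np.pi * (50 + 20 * t) * t) * np.exp(-(t - 2) ** 2)

        data = rng.normal(0, 1, t.size)
        data += 0.5 * template                # inject a weak signal

        # Normalized cross-correlation; with a unit-norm template in unit
        # white noise, the peak approximates the matched-filter SNR.
        norm = np.sqrt(np.sum(template ** 2))
        snr = np.correlate(data, template / norm, mode="same")
        print(f"peak |SNR| ~ {np.abs(snr).max():.1f} "
              f"at t = {t[np.abs(snr).argmax()]:.2f} s")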

    Communications Biophysics

    Contains reports on eight research projects split into four sections. Supported by: National Institutes of Health (Grant 5 P01 NS13126); National Institutes of Health (Grant 5 K04 NS00113); National Institutes of Health (Training Grant 5 T32 NS07047); National Science Foundation (Grant BNS80-06369); National Institutes of Health (Grant 5 R01 NS11153); National Institutes of Health (Fellowship 1 F32 NS06544); National Science Foundation (Grant BNS77-16861); National Institutes of Health (Grant 5 R01 NS10916); National Institutes of Health (Grant 5 R01 NS12846); National Science Foundation (Grant BNS77-21751); National Institutes of Health (Grant 1 R01 NS14092); National Institutes of Health (Grant 2 R01 NS11680); National Institutes of Health (Grant 5 R01 NS11080); National Institutes of Health (Training Grant 5 T32 GM07301).

    Pan-Cancer Analysis of lncRNA Regulation Supports Their Targeting of Cancer Genes in Each Tumor Context

    Long noncoding RNAs (lncRNAs) are commonly dysregulated in tumors, but only a handful are known to play pathophysiological roles in cancer. We inferred lncRNAs that dysregulate cancer pathways, oncogenes, and tumor suppressors (cancer genes) by modeling their effects on the activity of transcription factors, RNA-binding proteins, and microRNAs in 5,185 TCGA tumors and 1,019 ENCODE assays. Our predictions included hundreds of candidate onco- and tumor-suppressor lncRNAs (cancer lncRNAs) whose somatic alterations account for the dysregulation of dozens of cancer genes and pathways in each of 14 tumor contexts. To demonstrate proof of concept, we showed that perturbations targeting OIP5-AS1 (an inferred tumor suppressor) and TUG1 and WT1-AS (inferred onco-lncRNAs) dysregulated cancer genes and altered proliferation of breast and gynecologic cancer cells. Our analysis indicates that, although most lncRNAs are dysregulated in a tumor-specific manner, some, including OIP5-AS1, TUG1, NEAT1, MEG3, and TSIX, synergistically dysregulate cancer pathways in multiple tumor contexts.
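
    A minimal sketch of the simplest form of association test underlying such inference: correlating a lncRNA's expression with a putative target cancer gene's expression across tumors. The data are synthetic and the negative effect size is assumed; the paper's actual modeling of regulator activity is considerably richer.

        # Sketch of a per-context association test between lncRNA expression
        # and a target cancer gene across tumors (synthetic data; the gene
        # pairing and effect size are illustrative assumptions).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        n_tumors = 300
        lncrna = rng.normal(0, 1, n_tumors)                  # e.g. OIP5-AS1 level
        target = -0.4 * lncrna + rng.normal(0, 1, n_tumors)  # putative target

        r, p = stats.pearsonr(lncrna, target)
        print(f"Pearson r = {r:.2f}, p = {p:.2e}")  # negative r: possible repression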