
    Cross-Validation of the Validity-10 Subscale of the Neurobehavioral Symptom Inventory

    The present study is a cross-validation of the Validity-10 embedded symptom validity indicator from the Neurobehavioral Symptom Inventory (NSI) for the detection of questionable response validity during evaluation for mild traumatic brain injury (TBI). The sample and data derived from a three-site Veterans Affairs (VA) parent study to validate the TBI Clinical Reminder, a routine set of questions asked of all recently returned veterans at VA facilities to screen for history of TBI. In the parent study, veterans recently returned from Iraq and Afghanistan underwent an evaluation for TBI with a physician and completed an assessment battery including neuropsychological tests of cognitive performance and indicators of symptom and performance validity, psychiatric assessment measures, a structured interview for post-traumatic stress disorder (PTSD), and various behavioral health questionnaires. The present study estimated the test operating characteristics of Validity-10, using NSI results gathered during the physician evaluation to compute Validity-10 scores and using results on several other measures of symptom and performance validity from the assessment battery as criteria for questionable response validity. Only individuals who had positive screen results for TBI on the TBI Clinical Reminder prior to full evaluation were included in the present sample. Sensitivity of Validity-10 to questionable validity ranged from moderately high (.60 - .70) to excellent (.90 - 1.00) at high levels of specificity (> .80). Effects of different base rates of questionable validity, and of different criteria for it, on the utility of Validity-10 were explored as well. Chi-square analyses of the effect of PTSD symptoms on the utility of Validity-10 demonstrated that overall classification accuracy in general, and the false positive rate in particular, were relatively poorer when the indicator was used with individuals who reported significant PTSD symptoms. Overall, these findings support the use of Validity-10 (at cut score Validity-10 ≥ 19) to identify those veterans being evaluated for mild TBI in the VA system who should be referred for comprehensive secondary evaluation by a clinical neuropsychologist using multiple forms of symptom and performance validity testing. Further studies of the effects of PTSD symptoms on the accuracy of Validity-10 for this purpose are recommended.
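    As a minimal illustration of how the reported cut score could be applied, the sketch below classifies hypothetical Validity-10 scores with the ≥ 19 cutoff and computes sensitivity and specificity against a binary criterion for questionable validity; the arrays are invented examples, not data from the study.

```python
# Minimal sketch: applying the Validity-10 cutoff (>= 19) and estimating
# sensitivity/specificity against an external validity criterion.
# The score and criterion arrays below are hypothetical illustrations.
import numpy as np

validity10 = np.array([4, 22, 7, 19, 31, 12, 25, 3])          # NSI Validity-10 scores
questionable = np.array([0, 1, 0, 0, 1, 1, 1, 0], dtype=bool)  # criterion: questionable validity

flagged = validity10 >= 19  # cut score reported in the abstract

tp = np.sum(flagged & questionable)    # flagged and truly questionable
fn = np.sum(~flagged & questionable)   # missed questionable cases
tn = np.sum(~flagged & ~questionable)  # correctly passed
fp = np.sum(flagged & ~questionable)   # false alarms

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```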

    Pb Neurotoxicity: Neuropsychological Effects of Lead Toxicity

    Neurotoxicity is a term used to describe neurophysiological changes caused by exposure to toxic agents. Such exposure can result in neurocognitive symptoms and/or psychiatric disturbances. Common toxic agents include heavy metals, drugs, organophosphates, and bacterial and animal neurotoxins. Among heavy metal exposures, lead exposure is one of the most common and can lead to significant neuropsychological and functional decline in humans. In this review, the pathophysiology, etiology, and epidemiology of neurotoxic lead exposure are explored. In addition, commonly associated neuropsychological difficulties in intelligence, memory, executive functioning, attention, processing speed, language, visuospatial skills, motor skills, and affect/mood are reviewed.

    Examining the Effects of Formal Education Level on the Montreal Cognitive Assessment

    Background: Brief global assessments such as the Montreal Cognitive Assessment (MoCA) are widely used in primary care for assessing cognition in older adults. As with other neuropsychological instruments, lower formal education can influence MoCA interpretation. Methods: Data from 2 large studies of cognitive aging were used: the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the National Alzheimer's Coordinating Center (NACC). Both use comprehensive examinations to determine cognitive status and have brain amyloid status for many participants. Mixed models were used to account for random variation due to data source. Results: Cognitively intact participants with lower education (≤12 years) were more likely than those with higher education (>12 years) to be classified as potentially impaired using the MoCA cutoff of <26 (P < .01). Backwards selection revealed 4 MoCA items significantly associated with education (cube copy, serial subtraction, phonemic fluency, abstraction). Subtracting these items' scores yielded an alternative MoCA score with a maximum of 24 and a cutoff of ≤19 for classifying participants with mild cognitive impairment. Using the alternative MoCA score and cutoff, cognitively intact participants in both education groups were similarly likely to be classified as potentially impaired (P > .67). Conclusions: The alternative MoCA score neutralized the effects of formal education. Although further research is needed, this alternative score offers a simple procedure for interpreting MoCAs administered to older adults with ≤12 years of education. These educational effects also highlight that the MoCA is part of the assessment process, not a singular diagnostic test, and a comprehensive workup is necessary to accurately diagnose cognitive impairments.
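    A minimal sketch of the alternative scoring procedure described above: subtract the four education-sensitive item scores from the MoCA total and compare the result to the ≤ 19 cutoff (24-point maximum). The function and example values are hypothetical; consult the paper for the exact item point allocations.

```python
# Hedged sketch of the alternative MoCA score: remove the four
# education-sensitive items from the total and apply the <= 19 cutoff
# reported in the abstract. Item values below are hypothetical.
def alternative_moca(total, cube_copy, serial_subtraction, phonemic_fluency, abstraction):
    return total - (cube_copy + serial_subtraction + phonemic_fluency + abstraction)

alt = alternative_moca(total=25, cube_copy=1, serial_subtraction=2,
                       phonemic_fluency=1, abstraction=1)  # hypothetical item scores
print(alt, "potentially impaired" if alt <= 19 else "within normal range")
```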

    Longitudinal Assessment of Dementia Measures in Down Syndrome

    Introduction: Early detection of dementia symptoms is critical in Down syndrome (DS) but complicated by barriers to clinical assessment. The current study aimed to characterize cognitive and behavioral impairment using longitudinal trajectories comparing several measures of cognitive and behavioral functioning. Methods: Measures included global cognitive status (Severe Impairment Battery [SIB]), motor praxis (Brief Praxis Test [BPT]), and clinical dementia informant ratings (Dementia Questionnaire for People with Learning Disabilities [DLD]). One-year reliability was assessed among non-demented participants using a two-way mixed-effect, consistency, single-measurement intraclass correlation. Longitudinal assessment of the SIB, BPT, and DLD was completed using linear mixed-effect models. Results: One-year reliability (n = 52; 21 male) was moderate for the DLD (0.69 to 0.75) and good for the SIB (0.87) and BPT (0.80). Longitudinal analysis (n = 72) revealed significant age-by-diagnosis interactions for the SIB (F(2, 115.02) = 6.06, P = .003), BPT (F(2, 85.59) = 4.56, P = .013), and DLD (F(2, 103.56) = 4.48, P = .014). The progression (PR) group showed a faster decline in SIB performance than the no-dementia (ND) group (t(159) = −2.87; P = .013), and the dementia group showed a faster decline in BPT performance than the ND group (t(112) = −2.46; P = .041). The PR group also showed more quickly progressing DLD scores than the ND group (t(128) = −2.86; P = .014). Discussion: The current measures demonstrated moderate to good reliability. Longitudinal analysis revealed that the SIB, BPT, and DLD changed with age depending on diagnostic progression, and change rates were not dependent on baseline cognition, indicating usefulness across a variety of severity levels in DS.
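    The longitudinal analysis described above uses linear mixed-effect models with an age-by-diagnosis interaction and repeated measures per participant. Below is a hedged sketch of that model structure in Python (statsmodels); the data file and column names (sib, age, diagnosis, participant_id) are hypothetical placeholders, not the authors' analysis code.

```python
# Hedged sketch: linear mixed-effects model with an age-by-diagnosis
# interaction and a random intercept per participant, mirroring the model
# structure described in the abstract. Data and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ds_longitudinal.csv")  # hypothetical long-format dataset

model = smf.mixedlm("sib ~ age * C(diagnosis)",    # fixed effects incl. interaction
                    data=df,
                    groups=df["participant_id"])    # random intercept per participant
result = model.fit()
print(result.summary())
```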

    Cognitive and Behavioral Domains That Reliably Differentiate Normal Aging and Dementia in Down Syndrome

    Primary care integration of Down syndrome (DS)-specific dementia screening is strongly advised. The current study employed principal components analysis (PCA) and classification and regression tree (CART) analyses to identify an abbreviated battery for dementia classification. Scale- and subscale-level scores from 141 participants (no dementia, n = 68; probable Alzheimer's disease, n = 73) on the Severe Impairment Battery (SIB), Dementia Scale for People with Learning Disabilities (DLD), and Vineland Adaptive Behavior Scales, Second Edition (Vineland-II) were analyzed. Two principal components (PC1, PC2) were identified, with the odds of a probable dementia diagnosis increasing 2.54 times per unit increase in PC1 and 3.73 times per unit increase in PC2. CART analysis identified that the DLD sum of cognitive scores (SCS < 35 raw) and the Vineland-II community subdomain (< 36 raw) scores best classified dementia. No significant difference between the PCA and CART areas under the curve (AUC) was noted (D(65.196) = −0.57683; p = 0.57; PCA AUC = 0.87; CART AUC = 0.91). PCA sensitivity was 80% and specificity was 70%; CART sensitivity was 100% and specificity was 81%. These results support an abbreviated dementia screening battery to identify at-risk individuals with DS in primary care settings and to guide specialized diagnostic referral.
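    A hedged sketch of the two analytic routes described above, run on placeholder data: PCA components feeding a logistic model (consistent with the per-component odds ratios reported) and a shallow CART decision tree, which yields simple cutoff rules of the kind quoted. Variable names and parameters are illustrative only, not the study's analysis.

```python
# Hedged sketch: PCA + logistic regression versus a CART decision tree for
# dementia classification, with AUC for each route. Data are random placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(141, 10))    # placeholder scale/subscale scores
y = rng.integers(0, 2, size=141)  # placeholder dementia status (0/1)

# PCA route: two components as predictors of probable dementia
pcs = PCA(n_components=2).fit_transform(X)
pca_model = LogisticRegression().fit(pcs, y)
print("PCA AUC:", roc_auc_score(y, pca_model.predict_proba(pcs)[:, 1]))

# CART route: a shallow tree produces interpretable cutoff rules
cart = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print("CART AUC:", roc_auc_score(y, cart.predict_proba(X)[:, 1]))
```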

    Great SCO2T! Rapid tool for carbon sequestration science, engineering, and economics

    CO2 capture and storage (CCS) technology is likely to be widely deployed in coming decades in response to major climate and economic drivers: CCS is part of every clean energy pathway that limits global warming to 2 °C or less, and it receives significant CO2 tax credits in the United States. These drivers are likely to stimulate the capture, transport, and storage of hundreds of millions or billions of tonnes of CO2 annually. A key part of the CCS puzzle will be identifying and characterizing suitable storage sites for vast amounts of CO2. We introduce a new software tool called SCO2T (Sequestration of CO2 Tool, pronounced "Scott") to rapidly characterize saline storage reservoirs. The tool is designed to rapidly screen hundreds of thousands of reservoirs, perform sensitivity and uncertainty analyses, and link sequestration engineering (injection rates, reservoir capacities, plume dimensions) to sequestration economics (costs constructed from around 70 separate economic inputs). We describe the novel science developments supporting SCO2T, including a new approach to estimating CO2 injection rates and CO2 plume dimensions, as well as key advances linking sequestration engineering with economics. Next, we perform a sensitivity and uncertainty analysis over geology combinations (including formation depth, thickness, permeability, porosity, and temperature) to understand their impact on carbon sequestration. Through the sensitivity analysis we show that increasing depth and permeability can both lead to increased CO2 injection rates, increased storage potential, and reduced costs, while increasing porosity reduces costs without impacting the injection rate (CO2 is injected at a constant pressure in all cases) by increasing the reservoir capacity.
    Comment: CO2 capture and storage; carbon sequestration; reduced-order modeling; climate change; economics
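    To make the structure of such a sensitivity analysis concrete, the sketch below sweeps a small grid of geology parameters and evaluates toy proxy relations for injection rate, capacity, and cost. The proxies only mimic the qualitative trends noted above (rate rises with depth and permeability; porosity raises capacity and lowers cost without changing rate) and are not SCO2T's reduced-order models.

```python
# Hedged sketch of a geology parameter sweep for sensitivity analysis.
# All relations below are toy placeholders, not SCO2T's science models.
import itertools

depths = [1000, 2000, 3000]        # formation depth, m (hypothetical grid)
permeabilities = [10, 100, 1000]   # permeability, mD
porosities = [0.1, 0.2, 0.3]       # porosity, fraction

for depth, perm, phi in itertools.product(depths, permeabilities, porosities):
    injection_rate = 1e-4 * perm * depth / 1000        # toy proxy, Mt CO2/yr
    capacity = 5.0 * phi * depth / 1000                 # toy proxy, Mt CO2
    cost_per_tonne = 10.0 / injection_rate + 2.0 / phi  # toy proxy, $/t
    print(f"depth={depth} m, k={perm} mD, phi={phi}: "
          f"rate={injection_rate:.2f} Mt/yr, capacity={capacity:.1f} Mt, "
          f"cost=${cost_per_tonne:.1f}/t")
```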

    The Allen Telescope Array Twenty-centimeter Survey - A 690-Square-Degree, 12-Epoch Radio Dataset - I: Catalog and Long-Duration Transient Statistics

    We present the Allen Telescope Array Twenty-centimeter Survey (ATATS), a multi-epoch (12 visits), 690-square-degree radio image and catalog at 1.4 GHz. The survey is designed to detect rare, very bright transients as well as to verify the capabilities of the ATA to form large mosaics. The combined image using data from all 12 ATATS epochs has RMS noise sigma = 3.94 mJy/beam and dynamic range 180, with a circular beam of 150 arcsec FWHM. It contains 4408 sources to a limiting sensitivity of S = 20 mJy/beam. We compare the catalog generated from this 12-epoch combined image to the NRAO VLA Sky Survey (NVSS), a legacy survey at the same frequency, and find that we can measure source positions to better than ~20 arcsec. For sources above the ATATS completeness limit, the median flux density is 97% of the median value for matched NVSS sources, indicative of an accurate overall flux calibration. We examine the effects of source confusion due to the differing resolution of ATATS and NVSS on our ability to compare flux densities. We detect no transients at flux densities greater than 40 mJy in comparison with NVSS, and place a 2-sigma upper limit of 0.004 per square degree on the transient rate for such sources. These results suggest that the > 1 Jy transients reported by Matsumura et al. (2009) may not be true transients, but rather variable sources at their flux density threshold.
    Comment: 41 pages, 19 figures, ApJ accepted; corrected minor typo in Table
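    The quoted transient-rate limit can be reproduced with a standard Poisson argument, assuming the 2-sigma limit corresponds to ~95% confidence: with zero detections, the upper limit on the expected number of events is about 3, and dividing by the 690 deg^2 survey area gives ~0.004 per square degree. The abstract does not state the exact statistic used, so this is a consistency check rather than the authors' calculation.

```python
# Consistency check of the transient-rate limit, assuming a one-sided
# 95% (~2-sigma) Poisson upper limit given zero detections.
import math

confidence = 0.95
# Smallest mean mu such that P(0 events | mu) <= 1 - confidence
mu_upper = -math.log(1 - confidence)   # ~3.0 events
area_deg2 = 690.0
rate_limit = mu_upper / area_deg2      # events per square degree
print(f"mu_upper ~ {mu_upper:.2f} events, rate < {rate_limit:.4f} per deg^2")
```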

    The Allen Telescope Array: The First Widefield, Panchromatic, Snapshot Radio Camera for Radio Astronomy and SETI

    The first 42 elements of the Allen Telescope Array (ATA-42) are beginning to deliver data at the Hat Creek Radio Observatory in Northern California. Scientists and engineers are actively exploiting all of the flexibility designed into this innovative instrument to simultaneously conduct surveys of the astrophysical sky and searches for distant technological civilizations. This paper summarizes the design elements of the ATA, the cost savings made possible by the use of COTS components, and the cost/performance trades that eventually enabled this first snapshot radio camera. The fundamental scientific program of this new telescope is varied and exciting; some of the first astronomical results will be discussed.
    Comment: Special Issue of Proceedings of the IEEE: "Advances in Radio Telescopes", Baars, J., Thompson, R., D'Addario, L., eds., 2009, in press

    The Allen Telescope Array Pi GHz Sky Survey I. Survey Description and Static Catalog Results for the Bootes Field

    The Pi GHz Sky Survey (PiGSS) is a key project of the Allen Telescope Array. PiGSS is a 3.1 GHz survey of radio continuum emission in the extragalactic sky with an emphasis on synoptic observations that measure the static and time-variable properties of the sky. During the 2.5-year campaign, PiGSS will twice observe ~250,000 radio sources in the 10,000 deg^2 region of the sky with b > 30 deg to an rms sensitivity of ~1 mJy. Additionally, sub-regions of the sky will be observed multiple times to characterize variability on time scales of days to years. We present here observations of a 10 deg^2 region in the Bootes constellation overlapping the NOAO Deep Wide Field Survey field. The PiGSS image was constructed from 75 daily observations distributed over a 4-month period and has an rms flux density between 200 and 250 microJy. This is deeper by a factor of 4 to 8 than the image we will achieve over the entire 10,000 deg^2. We provide flux densities, source sizes, and spectral indices for the 425 sources detected in the image. We identify ~100 new flat spectrum radio sources; we project that, when completed, PiGSS will identify 10^4 flat spectrum sources. We identify one source that is a possible transient radio source. This survey provides new limits on faint radio transients and variables with characteristic durations of months.
    Comment: Accepted for publication in ApJ; revision submitted with extraneous figure removed