
    Digital pen technology for conducting cognitive assessments: a cross-over study with older adults

    Many digitized cognitive assessments exist to increase reliability, standardization, and objectivity. In older adults in particular, however, digitized cognitive assessments can yield poorer test results if participants are unfamiliar with the computer, mouse, keyboard, or touch screen. In a cross-over design study, 40 older adults (age M = 74.4 ± 4.1 years) completed the Trail Making Test A and B with a digital pen (digital pen tests, DPT) and with a regular pencil (pencil tests, PT) to identify differences in performance. In addition, the tests conducted with the digital pen were scored both manually (manual results, MR) and electronically (electronic results, ER) by an automated algorithm to assess the feasibility of digital pen evaluation. ICC(2,k) showed a good level of agreement between PT and DPT for TMT A (ICC(2,k) = 0.668) and TMT B (ICC(2,k) = 0.734). When comparing MR and ER, ICC(2,k) showed an excellent level of agreement for TMT A (ICC(2,k) = 0.999) and TMT B (ICC(2,k) = 0.994). The frequency of pen lifting correlated significantly with execution time in TMT A (r = 0.372, p = 0.030) and TMT B (r = 0.567, p < 0.001). A digital pen can therefore be used to perform the Trail Making Test, as the results did not differ with the type of pen used. With a digital pen, the advantages of digitized testing can be obtained without having to accept the disadvantages.
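    The agreement figures above are two-way random-effects intraclass correlations for average measures, ICC(2,k). As a rough illustration of what that statistic computes, here is a minimal sketch built from the standard ANOVA mean squares; the function name `icc2k` and the example scores are invented for illustration and are not taken from the study:

```python
import numpy as np

def icc2k(X):
    """ICC(2,k): two-way random effects, average measures, absolute agreement.

    X: (n_subjects, k_raters) array of scores, e.g. each subject's TMT time
    under two scoring conditions.
    """
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    grand = X.mean()
    row_means = X.mean(axis=1)   # per-subject means
    col_means = X.mean(axis=0)   # per-condition means

    # ANOVA mean squares
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between conditions
    sse = (np.sum((X - grand) ** 2)
           - k * np.sum((row_means - grand) ** 2)
           - n * np.sum((col_means - grand) ** 2))
    mse = sse / ((n - 1) * (k - 1))                         # residual

    return (msr - mse) / (msr + (msc - mse) / n)

# Perfect agreement between two conditions yields ICC(2,k) = 1.0
print(icc2k([[20, 20], [35, 35], [50, 50]]))
```

    Values near 1 (as in the MR-vs-ER comparison) indicate near-identical scores per subject; values around 0.7 (as in the PT-vs-DPT comparison) indicate good but imperfect agreement.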

    Digitization of neuropsychological diagnostics: a pilot study to compare three paper-based and digitized cognitive assessments

    Background and objective: The number of people suffering from dementia is increasing worldwide, and so is the need for reliable and economical diagnostic instruments. The aim of this study was therefore to compare the processing times of the neuropsychological tests Trail Making Tests A and B (TMT-A/B) and the Color-Word Interference Test (CWIT), each performed in both a digital and a paper version. Methods: The pilot study was conducted with 50 healthy participants (age 65–83 years) using a randomized crossover design. The correlations and differences between the individual processing times of the two test versions were analyzed statistically. Further research questions concerned the influence of participants' individual technology use and technology commitment, as well as the influence of the assessed usability, on their performance. Results: Statistically significant correlations between the two versions (paper-based vs. digital) were found in all tests (e.g., TMT-A: r(48) = 0.63, p < 0.01; TMT-B: rs(48) = 0.77, p < 0.001). The mean value comparisons showed statistically significant differences (e.g., interference table of the CWIT: t(49) = 11.24, p < 0.01). Medium-sized correlations were found between the differences in processing times and individual computer use (rs(48) = −0.31) and smartphone use (rs(48) = −0.29), and between the processing times of the TMT-B and the assessed usability (rs(48) = 0.29). Conclusions: The high correlations between the test procedures appear promising. However, the differences found in the processing times of the two versions call for validation and standardization of digitized test procedures before they can be used in practice.
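    The analysis described here combines correlation statistics (Pearson r, Spearman rs) with paired mean comparisons (t-tests). A hedged sketch of how such paired paper-vs-digital processing times might be analyzed with SciPy; the data values below are invented for illustration and are not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical paired processing times (seconds) for the same participants
paper   = np.array([32.1, 45.0, 38.5, 51.2, 29.8, 41.3])
digital = np.array([35.4, 47.9, 40.1, 55.0, 33.2, 44.8])

# Association between versions: Pearson r and Spearman rank correlation
r, p_r = stats.pearsonr(paper, digital)
rho, p_rho = stats.spearmanr(paper, digital)

# Systematic difference between versions: paired-samples t-test
t, p_t = stats.ttest_rel(paper, digital)

print(f"r = {r:.2f}, rho = {rho:.2f}, t = {t:.2f}")
```

    A high correlation with a significant paired t-test, as reported in the abstract, would mean the two versions rank participants consistently while one version takes systematically longer, which is exactly why standardization of the digital norms is needed.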