
    Effect of Emotional Picture Viewing on Voluntary Eyeblinks


    Explicit behavioral detection of visual changes develops without their implicit neurophysiological detectability

    Change blindness is a failure to report major changes across consecutive images when they are separated, for example, by a brief blank interval. Successful change detection across such interruptions requires focal attention to the changes. However, findings of implicit detection of visual changes during change blindness have raised the question of whether the implicit mode is necessary for the development of the explicit mode. To address this question, we recorded the visual mismatch negativity (vMMN) of the event-related potentials (ERPs) of the brain, an index of implicit pre-attentive visual change detection, in adult humans performing an oddball variant of the change blindness flicker task. Images of 500 ms duration were presented repeatedly in continuous sequences, alternating with a blank interval (either 100 ms or 500 ms in duration throughout a stimulus sequence). Occasionally (P = 0.2), a change (a color change, or an omission or addition of an object or part of an object in the image) was present. The participants attempted to detect the occasional change explicitly (via a voluntary button press). With both interval durations, it took on average 10–15 change presentations before the participants eventually detected the changes explicitly in a sequence, the 500 ms interval requiring only slightly longer exposure to the series than the 100 ms one. Nevertheless, prior to this point of explicit detectability, implicit detection of the changes, indexed by the vMMN, was observed only with the 100 ms intervals. These findings of explicit change detection developing both with and without implicit change detection suggest that the two modes of change detection may recruit independent neural mechanisms.
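    The vMMN described above is typically obtained as a deviant-minus-standard difference wave averaged over posterior electrodes. The following is a minimal sketch of that computation in Python/NumPy; the epoch array layout, sampling rate, and simulated data are assumptions for illustration, not details taken from the study.

```python
import numpy as np

def vmmn_difference_wave(deviant_epochs, standard_epochs):
    """Compute a visual mismatch negativity (vMMN) difference wave.

    Both inputs are arrays of shape (n_trials, n_samples) containing
    baseline-corrected ERP epochs from a posterior electrode (assumed layout).
    The vMMN is the average deviant ERP minus the average standard ERP.
    """
    deviant_erp = deviant_epochs.mean(axis=0)
    standard_erp = standard_epochs.mean(axis=0)
    return deviant_erp - standard_erp

# Example with simulated epochs: 40 deviants, 160 standards (P = 0.2),
# 300 samples per 600 ms epoch at an assumed 500 Hz sampling rate.
rng = np.random.default_rng(0)
deviants = rng.normal(0.0, 1.0, size=(40, 300))
standards = rng.normal(0.0, 1.0, size=(160, 300))
diff_wave = vmmn_difference_wave(deviants, standards)
print(diff_wave.shape)  # (300,)
```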

    Cardiorespiratory Fitness Estimation Based on Heart Rate and Body Acceleration in Adults With Cardiovascular Risk Factors : Validation Study

    Background: Cardiorespiratory fitness (CRF) is an independent risk factor for cardiovascular morbidity and mortality. Adding CRF to conventional risk factors (eg, smoking, hypertension, impaired glucose metabolism, and dyslipidemia) improves the prediction of an individual's risk for adverse health outcomes such as those related to cardiovascular disease. Consequently, it is recommended to determine CRF as part of individualized risk prediction. However, CRF is not determined routinely in everyday clinical practice. Wearable technologies provide a potential strategy to estimate CRF on a daily basis, and technologies that estimate CRF from heart rate and body acceleration have been developed. However, the validity of such technologies in estimating individual CRF in clinically relevant populations is poorly known. Objective: The objective of this study was to evaluate the validity of a wearable technology, which provides estimated CRF based on heart rate and body acceleration, in working-aged adults with cardiovascular risk factors. Methods: In total, 74 adults (age range 35-64 years; n=56, 76% women; mean BMI 28.7, SD 4.6 kg/m²) with frequent cardiovascular risk factors (eg, n=64, 86% hypertension; n=18, 24% prediabetes; n=14, 19% type 2 diabetes; and n=51, 69% metabolic syndrome) performed a 30-minute self-paced walk on an indoor track and a cardiopulmonary exercise test on a treadmill. CRF, quantified as peak O2 uptake, was both estimated (self-paced walk: a wearable single-lead electrocardiogram device worn to record continuous beat-to-beat R-R intervals and triaxial body acceleration) and measured (cardiopulmonary exercise test: ventilatory gas analysis). The accuracy of the estimated CRF was evaluated against the measured CRF. Results: Measured CRF averaged 30.6 (SD 6.3; range 20.1-49.6) mL/kg/min. Across all participants (74/74, 100%), the mean difference between estimated and measured CRF was −0.1 mL/kg/min (P = .90), the mean absolute error was 3.1 mL/kg/min (95% CI 2.6-3.7), the mean absolute percentage error was 10.4% (95% CI 8.5-12.5), and the intraclass correlation coefficient was 0.88 (95% CI 0.80-0.92). Similar accuracy was observed in various subgroups (sexes, age groups, BMI categories, hypertension, prediabetes, and metabolic syndrome). However, the mean absolute error was 4.2 mL/kg/min (95% CI 2.6-6.1) and the mean absolute percentage error was 16.5% (95% CI 8.6-24.4) in the subgroup of patients with type 2 diabetes (14/74, 19%). Conclusions: The error of the CRF estimate provided by the wearable technology was likely below, or at least very close to, the clinically significant level of 3.5 mL/kg/min in working-aged adults with cardiovascular risk factors, but not in the relatively small subgroup of patients with type 2 diabetes. From a large-scale clinical perspective, the findings suggest that wearable technologies have the potential to estimate individual CRF with acceptable accuracy in clinically relevant populations.
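    As a rough illustration of the agreement statistics reported above (mean difference, mean absolute error, and mean absolute percentage error), a minimal NumPy sketch follows; the example CRF values are invented, and the intraclass correlation coefficient is omitted because it requires a dedicated variance-components computation.

```python
import numpy as np

def agreement_metrics(estimated, measured):
    """Bias, MAE, and MAPE between estimated and measured CRF (mL/kg/min)."""
    estimated = np.asarray(estimated, dtype=float)
    measured = np.asarray(measured, dtype=float)
    bias = np.mean(estimated - measured)               # mean difference
    mae = np.mean(np.abs(estimated - measured))        # mean absolute error
    mape = np.mean(np.abs(estimated - measured) / measured) * 100.0
    return bias, mae, mape

# Hypothetical CRF values (mL/kg/min), not data from the study.
measured = [28.4, 31.2, 24.9, 40.3, 22.7]
estimated = [27.1, 33.0, 26.2, 38.5, 23.9]
print(agreement_metrics(estimated, measured))
```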

    Challenges of identifying visual neglect in a cognitive assessment of driving ability: a case study

    The aim of this study was to clarify how reliably left visual neglect and driving ability can be evaluated with cognitive paper-and-pencil tests in individual clinical assessment. Performance on paper-and-pencil tests (the Rey figure, Rey immediate incidental recall, the conventional BIT subtests, and Vilkki's visual search for parallel lines) was compared with performance in a driving simulator. In addition, visual scanning during the simulated drive was recorded with an eye-tracking device. Three neglect patients and 17 healthy controls participated in the study. Ten controls were included in the final analyses; seven had to discontinue because of simulator sickness, and their data were excluded. Although the cognitive tests gave no indication of neglect symptoms in two of the three patients, and thus no reason to question their driving ability, the patients' perception of the left side was weaker than of the right side and their simulated driving was highly error-prone. In addition, the patients' awareness of their symptoms was markedly poor. This study suggests that traditional paper-and-pencil tests are not sensitive enough to detect left visual neglect in individual clinical assessment 6–7.5 months after cerebral hemorrhage and cannot reliably predict patients' fitness to drive. Keywords: neglect, hemi-inattention, driving, simulator, assessment

    The impact of lifestyle factors on the intensity of adverse effects in single and repeated session protocols of transcranial electrical stimulation : an exploratory pilot study

    Transcranial electrical stimulation (tES) has shown promise in the treatment of conditions such as depression and chronic pain, with mild-to-moderate adverse effects (AEs). Few previous studies have attempted to identify factors predicting tES-induced AEs. In particular, AEs resulting from repeated sessions of tES remain understudied. We conducted an exploratory retrospective analysis of two independent randomized controlled studies to investigate whether lifestyle factors (i.e. chronic alcohol use, smoking, exercise, and quality and length of sleep) modify the severity and frequency of tES-induced AEs, and evaluated the progression of AEs over repeated sessions. We utilized two double-blinded samples: 1) a male sample (n=82) randomized to receive transcranial direct current stimulation (tDCS) or sham for 5 days, and 2) a mixed-sex sample (n=60) who received both transcranial random noise stimulation (tRNS) and sham in a crossover setting. The severity of AEs was recorded on a scale of 0-100. The data were analysed using negative binomial models. In addition, to guide future research, we performed power calculations to estimate the sample sizes that would be needed for the non-significant observations to reach significance. By day 5, the tDCS group experienced more sensations under the electrodes than the sham group. Neither alcohol use, smoking, exercise, nor quality or duration of sleep appeared to be associated with the intensity of the AEs. The subsequent power analyses indicated that substantially larger samples would be needed to detect the observed associations as significant. Repetitive sessions do not appear to introduce additional AE burden to individuals receiving either tDCS or tRNS, at least with protocols lasting up to 5 days. Alcohol use, smoking, exercise, and quality and duration of sleep appear to have, at most, a negligible effect on AEs induced by tDCS or tRNS, and studies with sample sizes ranging from roughly 100 individuals to hundreds of thousands of individuals would be required to detect such effects as significant.
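    The abstract states that AE ratings were analysed with negative binomial models. Below is a hedged sketch of such a model fit with statsmodels; the variable names, simulated data, and dispersion parameter are illustrative assumptions, not the study's variables or results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated example data: AE intensity (0-100) by stimulation group and a
# lifestyle covariate. Names and values are hypothetical.
rng = np.random.default_rng(1)
n = 140
df = pd.DataFrame({
    "active_tdcs": rng.integers(0, 2, n),        # 1 = active, 0 = sham
    "sleep_quality": rng.normal(3.5, 1.0, n),    # e.g. a 1-5 rating
})
rate = np.exp(1.5 + 0.4 * df["active_tdcs"] - 0.1 * df["sleep_quality"])
df["ae_intensity"] = rng.negative_binomial(n=2, p=2 / (2 + rate))

# Negative binomial GLM: AE intensity ~ stimulation group + sleep quality.
X = sm.add_constant(df[["active_tdcs", "sleep_quality"]])
model = sm.GLM(df["ae_intensity"], X, family=sm.families.NegativeBinomial(alpha=0.5))
result = model.fit()
print(result.summary())
```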

    The effect of writing modality on recollection in children and adolescents

    We set out to assess the extent to which writing modality affects recollection in children and adolescents. We examined 10- to 11-year-old children's (N = 63) and 16-year-old adolescents' (N = 43) handwriting, keyboarding with a laptop computer, and keyboarding with a touchscreen tablet computer or mobile phone in a within-subjects experimental design. Participants were instructed to write down stories dictated to them in the three writing modalities. Recollection of the stories was assessed using free recall of details in the stories. The results indicate that writing modality affects recollection, with handwriting leading to better recollection. At the same time, digital writing tools are becoming ubiquitous in classrooms and workplaces around the globe, making their competent use a necessity in today's world. For example, in Finland, students are required to use a laptop in upper secondary education and in the national final examination. In light of the results, we highlight the importance of balancing the instruction and practice of different writing modalities. Given the limitations of this study, we suggest conducting a larger-scale study and further research on the educational and cognitive implications of using, and learning to write with, multiple writing modalities.

    Measuring psychosocial stress with heart rate variability-based methods in different health and age groups

    Objective. Autonomic nervous system function, and thereby bodily stress and recovery reactions, may be assessed with wearable devices measuring heart rate (HR) and its variability (HRV). So far, the validity of HRV-based stress assessments has been studied mainly in healthy populations. In this study, we determined how psychosocial stress affects physiological and psychological stress responses in young (18-30 years) and middle-aged (45-64 years) healthy individuals as well as in patients with arterial hypertension and/or prior evidence of prediabetes or type 2 diabetes. We also studied how an HRV-based stress index (Relax-Stress Intensity, RSI) relates to perceived stress (PS) and cortisol (CRT) responses during psychosocial stress. Approach. A total of 197 participants were divided into three groups: (1) healthy young (HY, N = 63), (2) healthy middle-aged (HM, N = 61), and (3) patients with cardiometabolic risk factors (Pts, N = 73, 32-65 years). The participants underwent a group version of the Trier Social Stress Test (TSST-G). HR, HRV (quantified as the root mean square of successive differences of R-R intervals, RMSSD), RSI, PS, and salivary CRT were measured regularly during the TSST-G and a subsequent recovery period. Main results. All groups showed significant stress reactions during the TSST-G, as indicated by significant responses of HR, RMSSD, RSI, PS, and salivary CRT. Between-group differences were also observed in all measures. Correlation and regression analyses implied that RSI was the strongest predictor of the CRT response, while HR was more closely associated with PS. Significance. The HRV-based stress index mirrors the CRT response, an independent marker of physiological stress, during and after the TSST-G. Thus, the HRV-based stress index may be used to quantify physiological responses to psychosocial stress across various health and age groups.
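    The HRV measure used above, RMSSD, is the root mean square of successive differences of the R-R intervals. A minimal sketch of that computation follows; the example interval series is invented for illustration.

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of R-R intervals (ms)."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    successive_diffs = np.diff(rr)
    return np.sqrt(np.mean(successive_diffs ** 2))

# Hypothetical R-R interval series in milliseconds.
rr_ms = [812, 790, 805, 778, 760, 795, 810, 802]
print(round(rmssd(rr_ms), 1))
```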

    Project DyAdd : Non-linguistic Theories of Dyslexia Predict Intelligence

    Two themes have puzzled research on developmental and learning disorders for decades. First, some of the risk and protective factors behind developmental challenges are suggested to be shared across conditions, while others are suggested to be specific to a given condition. Second, language-based learning difficulties such as dyslexia have been suggested to result from, or at least correlate with, non-linguistic aspects of information processing as well. In the current study, we investigated how adults with developmental dyslexia or ADHD, as well as healthy controls, cluster across various dimensions designed to tap the prominent non-linguistic theories of dyslexia. Participants were 18-55-year-old adults with dyslexia (n = 36), ADHD (n = 22), and controls (n = 35). The non-linguistic theories investigated with experimental designs included temporal processing impairment, abnormal cerebellar functioning, procedural learning difficulties, and visual processing and attention deficits. Latent profile analysis (LPA) was used to investigate the emerging groups and patterns of results across these experimental designs. LPA suggested three groups: (1) a large group with average performance in the experimental designs, (2) participants predominantly from the clinical groups but with enhanced conditioning learning, and (3) participants predominantly from the dyslexia group with temporal processing as well as visual processing and attention deficits. Despite the presence of these distinct patterns, participants did not cluster very well based on their original status, nor did the LPA groups differ in their dyslexia- or ADHD-related neuropsychological profiles. Remarkably, the LPA groups did differ in intelligence. These results highlight the continuous and overlapping nature of the observed difficulties and support the multiple deficit model of developmental disorders, which posits shared risk factors for developmental challenges. It also appears that some of the risk factors suggested by the prominent non-linguistic theories of dyslexia relate to the general level of functioning in tests of intelligence.
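    Latent profile analysis of continuous indicators, as used above, is closely related to fitting a Gaussian mixture model and selecting the number of profiles by an information criterion. The sketch below uses scikit-learn's GaussianMixture as a stand-in for dedicated LPA software; the indicator dimensions and simulated scores are assumptions for illustration only, not the study's data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulated standardized scores on four hypothetical task dimensions
# (e.g. temporal processing, cerebellar, procedural learning, visual attention).
rng = np.random.default_rng(2)
scores = rng.normal(0.0, 1.0, size=(93, 4))

# Fit mixtures with 1-5 profiles and keep the solution with the lowest BIC,
# a common criterion for choosing the number of latent profiles.
models = [
    GaussianMixture(n_components=k, covariance_type="diag", random_state=0).fit(scores)
    for k in range(1, 6)
]
best = min(models, key=lambda m: m.bic(scores))
profiles = best.predict(scores)
print(best.n_components, np.bincount(profiles))
```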