4 research outputs found

    Functional Synergy Between Postural and Visual Behaviors When Performing a Difficult Precise Visual Task in Upright Stance

    Previous works usually report greater postural stability during precise visual tasks (e.g., gaze-shift tasks) than during stationary-gaze tasks. However, existing cognitive models do not fully account for these results, as they assume that performing an attention-demanding task while standing should degrade postural stability because the tasks compete for attention. Contrary to these cognitive models, attentional resources may instead increase to create a synergy between visual and postural brain processes when performing precise oculomotor behaviors. To test this hypothesis, we compared a difficult search task with a control free-viewing task. The precise visual task required the 16 young participants to find a target in densely furnished images; the free-viewing task consisted of looking at similar images without searching for anything. As expected, participants exhibited significantly smaller body displacements (linear and angular) and a significantly higher cognitive workload in the precise visual task than in the free-viewing task. Most importantly, our analyses revealed functional synergies between visual and postural processes in the search task, that is, significant negative relationships in which smaller head and neck displacements accompanied more expanded zones of fixation. These functional synergies seemed to involve greater attentional demand, because they were no longer significant once cognitive workload was controlled for (partial correlations). In the free-viewing task, only significant positive relationships were found, and they did not involve any change in cognitive workload. An alternative cognitive model, and a neuroscientific circuit that may underlie it, are proposed to explain the supposedly cognitively grounded functional nature of vision–posture synergies in precise visual tasks.
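The workload-control step in this abstract relies on partial correlations. As a minimal sketch (the synthetic data and variable names below are illustrative, not the study's), a first-order partial correlation can be computed by residualizing both variables on the control variable and correlating the residuals:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after regressing out control variable z
    (residual method; equals the classic first-order partial correlation)."""
    Z = np.column_stack([np.ones_like(z), z])   # design matrix with intercept
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # x residuals on z
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # y residuals on z
    return np.corrcoef(rx, ry)[0, 1]

# Synthetic example: two measures both driven by a shared "workload" factor
rng = np.random.default_rng(0)
workload = rng.normal(size=200)                      # stand-in for cognitive workload
head_disp = -0.5 * workload + rng.normal(size=200)   # hypothetical head displacement
fix_spread = -0.6 * workload + rng.normal(size=200)  # hypothetical fixation spread

r_raw = np.corrcoef(head_disp, fix_spread)[0, 1]
r_partial = partial_corr(head_disp, fix_spread, workload)
print(f"raw r = {r_raw:.2f}, partial r = {r_partial:.2f}")
```

If the raw relationship shrinks toward zero once the control variable is partialled out, the association is plausibly carried by that variable, which is the logic the abstract applies to cognitive workload.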

    Balance Impairment in Radiation Induced Leukoencephalopathy Patients Is Coupled With Altered Visual Attention in Natural Tasks

    Background: Recent studies have shown that alterations in executive function and attention lead to balance control disturbances. One way of exploring the allocation of attention is to record eye movements. Most experimental data come from free viewing of static scenes, but additional information can be leveraged by recording eye movements during natural tasks. Here, we aimed to provide evidence of a correlation between altered visual attention in natural tasks and postural control in patients suffering from Radiation-Induced Leukoencephalopathy (RIL).

    Methods: The study subjects were nine healthy controls and 10 patients diagnosed with early-stage RIL, presenting an isolated dysexecutive syndrome without clinically detectable gait or posture impairment. We performed a balance evaluation and recorded eye movements during an ecological task (reading a recipe while cooking). We calculated a postural score and oculomotor parameters already proposed in the literature. We performed variable selection using out-of-bag random permutation and a random forest regression algorithm to determine (i) whether visual parameters can predict postural deficit and (ii) which of them matter most in this prediction. Results were validated using a leave-one-out cross-validation procedure.

    Results: Postural scores were significantly lower in patients with RIL than in healthy controls. Visual parameters were able to predict the postural score of RIL patients with a normalized root mean square error (RMSE) of 0.16. The analysis showed that horizontal and vertical eye movements, as well as the average durations of saccades and fixations, significantly influenced the prediction of the postural score in RIL patients. While the two patients with very low MATTIS-Attention subscores showed the lowest postural scores, no statistically significant relationship was found between the two outcomes.

    Conclusion: These results highlight the significant relationship between the severity of balance deficits and visual characteristics in RIL patients. It seems that increased balance impairment is coupled with reduced focusing capacity in ecological tasks. Balance and eye movement recordings during a natural task could be a useful component of a multidimensional scoring of the dysexecutive syndrome.
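The prediction pipeline described in the Methods (random forest regression validated with leave-one-out cross-validation, reported as a normalized RMSE) can be sketched with scikit-learn. Everything below is a synthetic stand-in, not the study's data or exact feature set:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Synthetic stand-ins: 10 "patients" x 4 hypothetical oculomotor features
rng = np.random.default_rng(42)
X = rng.normal(size=(10, 4))  # e.g. horiz./vert. eye movement, saccade/fixation duration
y = X @ np.array([0.8, -0.5, 0.3, 0.1]) + 0.1 * rng.normal(size=10)  # synthetic postural score

model = RandomForestRegressor(n_estimators=500, random_state=0)

# Leave-one-out: each patient's score is predicted by a model trained on the others
y_pred = cross_val_predict(model, X, y, cv=LeaveOneOut())

rmse = np.sqrt(np.mean((y - y_pred) ** 2))
nrmse = rmse / (y.max() - y.min())  # one common normalization of RMSE
print(f"normalized RMSE = {nrmse:.2f}")

# Impurity-based importances from a full fit; the study's permutation-based
# selection would instead use sklearn.inspection.permutation_importance
model.fit(X, y)
print(dict(zip(["horiz", "vert", "sacc_dur", "fix_dur"],
               model.feature_importances_.round(2))))
```

Leave-one-out is a natural choice here because the sample is small (10 patients): every model is trained on all but one subject, so no data are wasted on a held-out split.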

    Regulating distance to the screen while engaging in difficult tasks

    Regulation of distance to the screen (i.e., head-to-screen distance and its fluctuation) has been shown to reflect the cognitive engagement of the reader. However, it is still not clear (a) whether regulation of distance to the screen can serve as a parameter to infer high cognitive load and (b) whether it can predict upcoming answer accuracy. Configuring tablets or other learning devices so that distance to the screen can be analyzed by the learning software is within close reach, and the software might use the measure as a person-specific indicator of the need for extra scaffolding. To better gauge this potential, we analyzed eye-tracking data of children (N = 144, mean age = 13 years, SD = 3.2 years) engaging in multimedia learning, as distance to the screen is estimated as a by-product of eye tracking. Children were told to maintain a still seated posture while reading and answering questions at three difficulty levels (easy vs. medium vs. difficult). Results showed that task difficulty influences how well distance to the screen can be regulated, supporting regulation of distance to the screen as a promising measure. A closer head-to-screen distance and a larger fluctuation of head-to-screen distance can reflect that participants are engaged in a challenging task, whereas only a large fluctuation of head-to-screen distance predicted future incorrect answers. The link between distance to the screen and the processing of a cognitive task can unobtrusively reveal readers' cognitive states during system usage, which can support adaptive learning and testing.
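The two measures this abstract works with, head-to-screen distance and its fluctuation, reduce to simple summary statistics over a distance time series. A minimal sketch (function name, units, and the synthetic traces are illustrative assumptions, not the study's pipeline):

```python
import numpy as np

def screen_distance_features(dist_mm):
    """Summarize a head-to-screen distance time series (mm) into the two
    measures discussed above: mean distance and its fluctuation (sample SD).
    Windowing and filtering choices are omitted for brevity."""
    d = np.asarray(dist_mm, dtype=float)
    return {"mean_distance": d.mean(), "fluctuation": d.std(ddof=1)}

# Hypothetical 60 Hz traces over 10 s: leaning in with growing sway on a
# difficult item vs. a steadier posture on an easy item
t = np.linspace(0, 10, 600)
hard_item = 550 - 5 * t + 8 * np.sin(2 * np.pi * 0.3 * t)  # closer + more variable
easy_item = 600 + 2 * np.sin(2 * np.pi * 0.3 * t)          # farther + steadier
print(screen_distance_features(hard_item))
print(screen_distance_features(easy_item))
```

Under the abstract's findings, the difficult-item trace should show a smaller mean distance and a larger fluctuation, and only the latter would flag a likely incorrect answer.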

    Estimating Level of Engagement from Ocular Landmarks

    E-learning offers many advantages, such as being economical, flexible, and customizable, but it also has challenging aspects, such as the lack of social interaction, which results in contemplation and a sense of remoteness. To overcome these and sustain learners' motivation, various stimuli can be incorporated. Nevertheless, such adjustments first require an assessment of the engagement level. In this respect, we propose estimating the engagement level from facial landmarks, exploiting the facts that (i) perceptual decoupling is promoted by blinking during mentally demanding tasks; (ii) eye strain increases blinking rate, which also scales with task disengagement; (iii) the eye aspect ratio is closely connected with attentional state; and (iv) users' head position is correlated with their level of involvement. Building empirical models of these actions, we devise a probabilistic estimation framework. Our results indicate that high and low levels of engagement are identified with considerable accuracy, whereas medium levels are inherently more challenging, which is also confirmed by the inter-rater agreement of expert coders.
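The eye aspect ratio mentioned in point (iii) has a standard definition in the blink-detection literature: the summed vertical eyelid distances over twice the horizontal eye width, computed from six eye-contour landmarks. A minimal sketch with made-up landmark coordinates (the abstract does not specify its exact landmark set):

```python
import numpy as np

def eye_aspect_ratio(pts):
    """EAR from six eye landmarks ordered p1..p6: p1/p4 are the horizontal
    corners, (p2, p6) and (p3, p5) the upper/lower lid pairs. The ratio
    drops toward zero as the eye closes, so dips below a threshold mark blinks."""
    p1, p2, p3, p4, p5, p6 = pts
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

# Hypothetical landmark coordinates for an open and a nearly closed eye
open_eye = np.array([[0, 0], [1, 1], [2, 1], [3, 0], [2, -1], [1, -1]], dtype=float)
closed_eye = np.array([[0, 0], [1, 0.1], [2, 0.1], [3, 0], [2, -0.1], [1, -0.1]], dtype=float)
print(eye_aspect_ratio(open_eye), eye_aspect_ratio(closed_eye))
```

Tracking EAR over time yields both the blink-rate feature of points (i)-(ii) and the attentional-state feature of point (iii) from a single landmark stream.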