
    How the dominant reading direction changes parafoveal processing: A combined EEG/eye-tracking study

    Reading directions vary across writing systems, and through long-term experience readers adjust their visual systems to the dominant reading direction of their writing system. However, little is known about the neural correlates underlying these adjustments, because writing systems differ not only in reading direction but also in visual and linguistic properties. Here, we took advantage of the fact that Chinese is read to different degrees in left-right or top-down directions in different regions. We investigated visual word processing in participants from Taiwan (both top-down and left-right directions) and from mainland China (only left-right direction). Combined EEG/eye tracking with a saccade-contingent parafoveal preview manipulation was used to investigate the neural correlates while participants read 5-word lists. Fixation-related potentials (FRPs) showed a reduced late N1 effect (preview positivity), but this effect was modulated by prior experience with a specific reading direction. The results replicate previous findings that valid previews facilitate visual word processing, as indicated by reduced FRP activation. Critically, they indicate that this facilitation depends on experience with a given reading direction, suggesting a specific mechanism by which cultural experience shapes the way people process visual information.

    Auxilio: A Sensor-Based Wireless Head-Mounted Mouse for People with Upper Limb Disability

    Upper limb disability may be caused by accidents, neurological disorders, or birth defects, imposing limitations and restrictions on the concerned individuals' interaction with a computer through a generic optical mouse. Our work proposes the design and development of a working prototype of a sensor-based wireless head-mounted Assistive Mouse Controller (AMC), Auxilio, facilitating computer interaction for people with upper limb disability. Combining commercially available, low-cost motion and infrared sensors, Auxilio relies solely on head and cheek movements for mouse control. Its performance has been juxtaposed with that of a generic optical mouse in different pointing tasks as well as in typing tasks using a virtual keyboard. Furthermore, our work analyzes the usability of Auxilio using the System Usability Scale. The results of the different experiments reveal the practicality and effectiveness of Auxilio as a head-mounted AMC for empowering the upper-limb-disabled community.
    Comment: 28 pages, 9 figures, 5 tables

    Realization of Delayed Least Mean Square Adaptive Algorithm using Verilog HDL for EEG Signals

    An efficient architecture for the implementation of the delayed least mean square (DLMS) adaptive filter is presented in this paper. It is shown that the proposed architectures reduce register complexity and support faster convergence. Compared to the transpose form, the direct-form LMS adaptive filter converges faster, while both have similar critical paths. It is further shown that in most practical cases a very small adaptation delay is sufficient to implement a direct-form LMS adaptive filter, even where a very high sampling rate is required, and that no pipelining approach is necessary. Based on these estimates, three architectures of the LMS adaptive filter have been designed: the first with zero adaptation delay, the second with one adaptation delay, and the third with two adaptation delays. Among the three designs, the zero-adaptation-delay structure performs best, with the minimum energy per sample (EPS) and the minimum area. The aim of this thesis is to design efficient filter structures for a system-on-chip (SoC) solution, using optimized code to solve various adaptive filtering problems. The main focus is interference cancellation in electroencephalogram (EEG) applications using the proposed filter structures. Modern field-programmable gate arrays (FPGAs) have the resources required to implement effective adaptive filtering structures. The designs are evaluated in terms of design time, area, and delay.
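    The adaptation-delay idea at the core of the DLMS filter can be sketched compactly. The following Python sketch (illustrative only; names and parameters are not from the thesis) updates the coefficients with an error term that is `delay` samples old, modeling the pipeline latency of a hardware implementation; `delay=0` reduces to the standard direct-form LMS:

```python
import numpy as np

def dlms_filter(x, d, num_taps=8, mu=0.01, delay=1):
    """Delayed LMS: the coefficient update uses an error term that is
    `delay` samples old, modeling hardware pipeline latency.
    delay=0 is the standard direct-form LMS."""
    n_samples = len(x)
    w = np.zeros(num_taps)
    y = np.zeros(n_samples)
    e = np.zeros(n_samples)
    x_pad = np.concatenate([np.zeros(num_taps - 1), x])  # zero pre-history
    for n in range(n_samples):
        u = x_pad[n:n + num_taps][::-1]   # regressor [x[n], x[n-1], ...]
        y[n] = w @ u                      # filter output
        e[n] = d[n] - y[n]                # instantaneous error
        k = n - delay                     # index of the *delayed* error
        if k >= 0:
            u_del = x_pad[k:k + num_taps][::-1]
            w = w + mu * e[k] * u_del     # DLMS coefficient update
    return y, e, w
```

    With a small delay and a modest step size the filter still converges to the same solution, which is the basis of the claim that a very small adaptation delay suffices in practice.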

    Parafoveal and foveal N400 effects in natural reading: A timeline of semantic processing from fixation-related potentials

    The depth at which parafoveal words are processed during reading is an ongoing topic of debate. Recent studies using RSVP-with-flanker paradigms have shown that implausible words within sentences elicit N400 components while they are still in parafoveal vision, suggesting that the semantics of parafoveal words can be accessed to rapidly update the sentence representation. To study this effect in natural reading, we combined the co-registration of eye movements and EEG with deconvolution modeling of fixation-related potentials (FRPs) to test whether semantic plausibility is processed parafoveally during Chinese sentence reading. For one target word per sentence, both its parafoveal and foveal plausibility were orthogonally manipulated using the boundary paradigm. Consistent with previous eye movement studies, we observed a delayed effect of parafoveal plausibility on fixation durations that only emerged on the foveal word. Crucially, in FRPs aligned to the pre-target fixation, a clear N400 effect emerged already based on parafoveal plausibility, with more negative voltages for implausible previews. Once participants fixated the target, we again observed an N400 effect of foveal plausibility. Interestingly, this foveal N400 was absent whenever the preview had been implausible, indicating that when a word’s (im)plausibility has already been processed in parafoveal vision, this information is no longer revised upon direct fixation. Implausible words also elicited a late positive complex (LPC), but exclusively in foveal vision. Our results provide convergent neural and behavioral evidence for the parafoveal uptake of semantic information, but also indicate different contributions of parafoveal versus foveal information towards higher-level sentence processing.
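    The deconvolution modeling of FRPs can be illustrated with a minimal least-squares sketch (a simplified, hypothetical version of what dedicated toolboxes do; all names and window sizes here are illustrative): overlapping event-locked responses are separated by giving each (condition, latency) pair its own indicator column in a time-expanded design matrix and solving for all responses jointly:

```python
import numpy as np

def deconvolve_frps(eeg, onsets_by_condition, window=(-5, 30)):
    """Separate overlapping event-locked responses by time expansion:
    one indicator column per (condition, latency) pair, solved jointly
    by linear least squares."""
    lo, hi = window
    n_lags = hi - lo
    n_cond = len(onsets_by_condition)
    n = len(eeg)
    X = np.zeros((n, n_cond * n_lags))
    for c, onsets in enumerate(onsets_by_condition):
        for onset in onsets:
            for j in range(n_lags):        # mark each latency of this event
                t = onset + lo + j
                if 0 <= t < n:
                    X[t, c * n_lags + j] += 1.0
    beta, *_ = np.linalg.lstsq(X, eeg, rcond=None)
    return beta.reshape(n_cond, n_lags)    # per-condition response waveforms
```

    Because overlapping events are modeled simultaneously, the estimated waveforms are free of the overlap distortion that plain fixation-locked averaging would introduce.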

    What's in a name? Brain activity reveals categorization processes differ across languages

    The linguistic relativity hypothesis proposes that speakers of different languages perceive and conceptualize the world differently, but do their brains reflect these differences? In English, most nouns do not provide linguistic clues to their categories, whereas most Mandarin Chinese nouns provide explicit category information, either morphologically (e.g., the morpheme “vehicle” che1 in the noun “train” huo3che1 ) or orthographically (e.g., the radical “bug” chong2 in the character for the noun “butterfly” hu2die2 ). When asked to judge the membership of atypical (e.g., train) vs. typical (e.g., car) pictorial exemplars of a category (e.g., vehicle), English speakers ( N = 26) showed larger N300 and N400 event-related potential (ERP) component differences, whereas Mandarin speakers ( N = 27) showed no such differences. Further investigation with Mandarin speakers only ( N = 22) found that it was the morphologically transparent items that did not show a typicality effect, whereas orthographically transparent items elicited moderate N300 and N400 effects. In a follow-up study with English speakers only ( N = 25), morphologically transparent items also showed different patterns of N300 and N400 activation than nontransparent items even for English speakers. Together, these results demonstrate that even for pictorial stimuli, how and whether category information is embedded in object names affects the extent to which typicality is used in category judgments, as shown in N300 and N400 responses. Hum Brain Mapp, 2010. © 2010 Wiley-Liss, Inc.

    Eye-Tracking Signals Based Affective Classification Employing Deep Gradient Convolutional Neural Networks

    Utilizing biomedical signals as a basis for estimating human affective states is an essential issue in affective computing (AC). With in-depth research on affective signals, the combination of multi-modal cognition and physiological indicators, the establishment of dynamic and complete databases, and the addition of high-tech innovative products have become recent trends in AC. This research aims to develop a deep gradient convolutional neural network (DGCNN) for classifying affect from eye-tracking signals. General signal processing tools and pre-processing methods were applied first, such as Kalman filtering, windowing with a Hamming window, the short-time Fourier transform (STFT), and the fast Fourier transform (FFT). Second, the eye-movement and tracking signals were converted into images. A convolutional neural network-based training structure was subsequently applied; the experimental dataset was acquired with an eye-tracking device by presenting four affective stimuli (nervous, calm, happy, and sad) to 16 participants. Finally, the performance of the DGCNN was compared with a decision tree (DT), a Bayesian Gaussian model (BGM), and k-nearest neighbors (KNN), using the true positive rate (TPR) and false positive rate (FPR) as indices. Customized mini-batch size, loss, learning rate, and gradient definitions for the training structure of the deep neural network were also deployed. The predictive classification matrix showed the effectiveness of the proposed method for eye-movement and tracking signals, achieving more than 87.2% accuracy. This research provides a feasible way toward more natural human-computer interaction through eye-movement and tracking signals and has potential application in the affective product design process.
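    The windowing and STFT steps of such a pipeline, which turn a one-dimensional signal into a two-dimensional image a CNN can consume, might look as follows (a minimal sketch with assumed window and hop sizes, not the paper's exact parameters):

```python
import numpy as np

def stft_image(signal, win_len=64, hop=16):
    """Hamming-windowed short-time Fourier transform, returned as a
    log-magnitude spectrogram (freq bins x frames) that can be fed to
    a CNN as a single-channel image."""
    win = np.hamming(win_len)
    n_frames = 1 + (len(signal) - win_len) // hop
    frames = np.stack([signal[i * hop:i * hop + win_len] * win
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1))   # magnitude spectrum per frame
    return np.log1p(spec).T                      # log-compress; (bins, frames)
```

    For example, a pure 32 Hz component in a signal sampled at 256 Hz concentrates its energy in one frequency row of the resulting image, which is the kind of time-frequency structure a convolutional network can then learn from.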

    Emerging ExG-based NUI Inputs in Extended Realities : A Bottom-up Survey

    Incremental and quantitative improvements of two-way interactions with extended realities (XR) are contributing toward a qualitative leap into a state of XR ecosystems being efficient, user-friendly, and widely adopted. However, there are multiple barriers on the way toward the omnipresence of XR; among them are the following: computational and power limitations of portable hardware, social acceptance of novel interaction protocols, and usability and efficiency of interfaces. In this article, we overview and analyse novel natural user interfaces based on sensing electrical bio-signals that can be leveraged to tackle the challenges of XR input interactions. Electroencephalography-based brain-machine interfaces that enable thought-only hands-free interaction, myoelectric input methods that track body gestures employing electromyography, and gaze-tracking electrooculography input interfaces are examples of electrical bio-signal sensing technologies united under the collective concept of ExG. ExG signal acquisition modalities provide a way to interact with computing systems using natural, intuitive actions, enriching interactions with XR. This survey provides a bottom-up overview starting from (i) underlying biological aspects and signal acquisition techniques, (ii) ExG hardware solutions, (iii) ExG-enabled applications, (iv) a discussion of the social acceptance of such applications and technologies, as well as (v) research challenges, application directions, and open problems, evidencing the benefits that ExG-based Natural User Interface inputs can introduce to the area of XR.

    Language and Math: What If We Have Two Separate Naming Systems?

    The role of language in numerical processing has traditionally been restricted to counting and exact arithmetic. Nevertheless, the impact that each of a bilingual's languages may have on core numerical representations has not been questioned until recently. What if the language in which math was first acquired (LLmath) had a bigger impact on our math processing? Based on previous studies on language switching, we hypothesized that balanced bilinguals would behave like unbalanced bilinguals when switching between their two codes for math. To address this question, we measured brain activity with magnetoencephalography (MEG) and performed source estimation analyses on 12 balanced Basque-Spanish speakers carrying out a task in which participants were unaware of the switches between the two codes. The results show an asymmetric switch cost between the two codes for math, and that the brain areas responsible for these switches are similar to those thought to belong to a general task-switching mechanism. This implies that dominance for math and language could run separately from general language dominance.
    Funding: Departamento de Cultura y Política Lingüística del Gobierno Vasco (grant PRE_992); Junta de Castilla y León - FEDER (Project VA009P17).

    Toward Simulation-Based Training Validation Protocols: Exploring 3D Stereo with Incremental Rehearsal and Partial Occlusion to Instigate and Modulate Smooth Pursuit and Saccade Responses in Baseball Batting

    “Keeping your eye on the ball” is a long-standing tenet in baseball batting. And yet, there are no protocols for objectively conditioning, measuring, and/or evaluating eye-on-ball coordination performance relative to baseball-pitch trajectories. Although video games and other virtual simulation technologies offer alternatives for training and obtaining objective measures, baseball batting instruction has relied on traditional eye-pitch coordination exercises with qualitative “face validation”, statistics of whole-task batting performance, and/or subjective batter-interrogation methods, rather than on direct, quantitative eye-movement performance evaluations. Further, protocols for validating transfer-of-training (ToT) for video games and other simulation-based training have not been established in general, or for eye-movement training specifically. An exploratory research study was conducted to consider the ecological and ToT validity of a part-task, virtual-fastball simulator implemented in 3D stereo, along with a rotary pitching machine standing as proxy for the live-pitch referent. The virtual-fastball and live-pitch simulation couple was designed to facilitate objective eye-movement response measures to live and virtual stimuli. The objective measures 1) served to assess the ecological validity of virtual fastballs, 2) informed the characterization and comparison of eye-movement strategies employed by expert and novice batters, 3) enabled a treatment protocol relying on repurposed incremental-rehearsal and partial-occlusion methods intended to instigate and modulate strategic eye movements, and 4) revealed whether the simulation-based treatment resulted in positive (or negative) ToT in the real task. Results indicated that live fastballs consistently elicited different saccade onset time responses than virtual fastballs. Saccade onset times for live fastballs were consistent with catch-up saccades that follow the smooth-pursuit maximum velocity threshold of approximately 40-70˚/sec, while saccade onset times for virtual fastballs lagged on the order of 13%. More experienced batters employed more deliberate and timely combinations of smooth pursuit and catch-up saccades than less experienced batters, enabling them to position their eyes to meet the ball near the front edge of home plate. Smooth pursuit and saccade modulation from treatment was inconclusive in virtual-pitch pre- and post-treatment comparisons, but live-pitch pre- and post-treatment comparisons indicate ToT improvements. Lagging saccade onset times for virtual pitches suggest possible accommodative-vergence impairment due to the accommodation-vergence conflict inherent in 3D stereo displays.
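    The velocity criterion described above, with smooth pursuit saturating around 40-70˚/sec and catch-up saccades exceeding it, suggests a simple onset detector. A minimal Python sketch (function name and default threshold are assumptions, not from the study):

```python
import numpy as np

def saccade_onsets(gaze_deg, fs, vel_thresh=70.0):
    """Flag samples whose eye velocity exceeds vel_thresh deg/s
    (above the ~40-70 deg/s ceiling of smooth pursuit) and return
    the onset index of each supra-threshold run."""
    vel = np.abs(np.gradient(gaze_deg)) * fs              # deg/s
    fast = np.concatenate([[False], vel > vel_thresh])    # pad for edge case
    return np.flatnonzero(np.diff(fast.astype(int)) == 1) # rising edges
```

    Comparing the onset indices returned for live versus simulated trajectories is the kind of measurement the lagging saccade onset times above are based on.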