    Classification of brain states that predicts future performance in visual tasks based on co-integration analysis of EEG data

    The electroencephalogram (EEG) is a popular tool for studying brain activity. Numerous statistical techniques exist to enhance understanding of the complex dynamics underlying EEG recordings. Inferring the functional network connectivity between EEG channels is of particular interest, and non-parametric inference methods are typically applied. We propose a fully parametric, model-based approach via cointegration analysis. It not only estimates the network but also provides further insight through the cointegration vectors, which characterize equilibrium states, and the corresponding loadings, which describe how the EEG dynamics are drawn toward equilibrium. We outline the estimation procedure in the context of EEG data, which poses specific challenges compared with the econometric problems for which cointegration analysis was originally conceived: the dimension is higher, typically around 64; repeated trials are usually available; and the data are artificially linearly dependent through the normalization applied in EEG recordings. Finally, we illustrate the method on EEG data from a visual task experiment and show how brain states identified via cointegration analysis can be used in further investigations of the determinants of sensory identification.
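
    As a minimal illustration of the kind of analysis described (not the authors' code), the sketch below runs the Johansen procedure and fits a vector error correction model with statsmodels; the channel count, lag order, and simulated series are assumptions made for the example.

```python
# Minimal sketch: Johansen cointegration analysis of multichannel EEG-like
# data. Channel count, lag order, and the simulated series are illustrative
# assumptions, not values from the paper.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

rng = np.random.default_rng(0)

# Stand-in for one trial: T samples x p channels (the paper mentions ~64
# channels; 5 keeps the sketch small). Two shared random walks induce
# cointegration among the channels.
T, p = 1000, 5
common_trends = np.cumsum(rng.normal(size=(T, 2)), axis=0)
mixing = rng.normal(size=(2, p))
eeg = common_trends @ mixing + rng.normal(scale=0.5, size=(T, p))

# Johansen trace test: estimate the cointegration rank by comparing the
# trace statistics (lr1) against the 5% critical values (cvt column 1).
jres = coint_johansen(eeg, det_order=0, k_ar_diff=1)
rank = max(1, int(np.sum(jres.lr1 > jres.cvt[:, 1])))  # guard against rank 0
print("estimated cointegration rank:", rank)

# Fit the corresponding vector error correction model: beta holds the
# cointegration vectors (equilibrium states), alpha the loadings (how the
# dynamics are pulled back toward equilibrium).
res = VECM(eeg, k_ar_diff=1, coint_rank=rank).fit()
print("cointegration vectors (beta):\n", res.beta)
print("loadings (alpha):\n", res.alpha)
```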

    Multilevel Modeling of Gaze From Listeners With Hearing Loss Following a Realistic Conversation

    Purpose: There is a need for tools to study real-world communication abilities in people with hearing loss. We outline a potential gaze-based method for this and use it to answer the question of when and how much listeners with hearing loss look toward a new talker in a conversation. Method: Twenty-two older adults with hearing loss followed a prerecorded two-person audiovisual conversation in the presence of babble noise. We analyzed their eye-gaze direction relative to the conversation in two multilevel logistic regression (MLR) analyses. First, we split the conversation into events classified by the number of active talkers within a turn or a transition, and we tested whether these events predicted the listener’s gaze. Second, we mapped the odds that a listener gazed toward a new talker over time during a conversation transition. Results: We found no evidence that our conversation events predicted changes in the listener’s gaze, but the listener’s gaze toward the new talker during a silence transition was predicted by time: The odds of looking at the new talker increased in an S-shaped curve from at least 0.4 s before to 1 s after the onset of the new talker’s speech. A comparison of models with different random effects indicated that more variance was explained by differences between individual conversation events than by differences between individual listeners. Conclusions: MLR modeling of eye gaze during talker transitions is a promising approach to study a listener’s perception of realistic conversation. Our experience provides insight to guide future research with this method.
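
    A hedged sketch of the modeling idea: statsmodels has no frequentist mixed-effects logistic regression, so this example uses its Bayesian mixed GLM as a stand-in for the paper's MLR, with simulated gaze data and a random intercept per conversation event. All variable names and sizes are illustrative assumptions.

```python
# Sketch of a multilevel logistic regression of gaze on time around the new
# talker's speech onset, with a random intercept per conversation event.
# The data below are simulated for illustration only.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(1)

n_events, n_frames = 60, 30
rows = []
for event in range(n_events):
    event_intercept = rng.normal(scale=0.8)    # event-level variability
    t = np.linspace(-0.4, 1.0, n_frames)       # seconds relative to onset
    logit = -0.5 + 2.5 * t + event_intercept   # odds rise over the transition
    gaze = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    rows.append(pd.DataFrame({"event": event, "time": t, "gaze": gaze}))
df = pd.concat(rows, ignore_index=True)

# Fixed effect of time; variance component for individual conversation events
# (the grouping that explained the most variance in the study).
model = BinomialBayesMixedGLM.from_formula(
    "gaze ~ time", {"event": "0 + C(event)"}, df
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())
```

    A spline or piecewise term in time, rather than the linear term used here, would be needed to capture the S-shaped curve reported in the results.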

    Clustering Users Based on Hearing Aid Use: An Exploratory Analysis of Real-World Data

    While the assessment of hearing aid use has traditionally relied on subjective self-reported measures, smartphone-connected hearing aids enable objective data logging from a large number of users. Objective data logging allows us to overcome the inaccuracy of self-reported measures. Moreover, data logging enables assessing hearing aid use at a greater temporal resolution and longitudinally, making it possible to investigate hourly patterns of use and to account for day-to-day variability. This study aims to explore patterns of hearing aid use throughout the day and assess whether clusters of users with similar use patterns can be identified. We did so by analyzing objective hearing aid use data logged from 15,905 real-world users over a 4-month period. First, we investigated the daily amount of hearing aid use and its within-user and between-user variability. We found that users, on average, used the hearing aids for 10.01 h/day, exhibiting substantial between-user (SD = 2.76 h) and within-user (SD = 3.88 h) variability. Second, we examined hourly patterns by clustering 453,612 logged days into typical days of hearing aid use. We identified three typical days: full day (44% of days), afternoon (27%), and sporadic evening (26%) use. Third, we explored usage patterns by clustering the users based on the proportion of time spent in each of the typical days of hearing aid use. We found three distinct user groups, each characterized by a predominant (i.e., experienced ~60% of the time) typical day of hearing aid use. Notably, the largest user group (49% of users) predominantly had full days of hearing aid use. Finally, we validated the user clustering by training a supervised classification ensemble to predict the cluster to which each user belonged. The high accuracy achieved by the supervised classifier ensemble (~86%) indicated valid user clustering and showed that such a classifier can be used to group new hearing aid users in the future. This study provides a deeper insight into the adoption of hearing care treatments and paves the way for more personalized solutions.
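
    The pipeline described above (day-level clustering, user-level clustering on day-type proportions, and supervised validation) can be sketched with scikit-learn on synthetic data; the sample sizes, features, and k = 3 are assumptions for illustration, not the study's choices.

```python
# Sketch of the two-stage clustering and validation pipeline: cluster logged
# days by their 24-hour usage profile, describe each user by the share of
# days in each cluster, cluster the users, then check that a supervised
# classifier can recover the user clusters. All data here are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

n_users, days_per_user = 200, 60
# Each logged day: 24 values, the fraction of each hour the aids were worn.
day_profiles = rng.beta(2, 2, size=(n_users * days_per_user, 24))
user_ids = np.repeat(np.arange(n_users), days_per_user)

# Stage 1: typical days of hearing aid use (the paper found three:
# full day, afternoon, sporadic evening).
day_km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(day_profiles)

# Stage 2: per-user proportion of days in each typical day, then cluster
# the users on those proportions.
props = np.zeros((n_users, 3))
for u in range(n_users):
    labels = day_km.labels_[user_ids == u]
    props[u] = np.bincount(labels, minlength=3) / len(labels)
user_km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(props)

# Validation: a supervised ensemble predicting each user's cluster from the
# same proportions; high accuracy indicates a stable clustering (the paper
# reports ~86% on its data).
clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, props, user_km.labels_, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```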

    Investigating the Provision and Context of Use of Hearing Aid Listening Programs From Real-world Data: Observational Study

    Background: Listening programs enable hearing aid (HA) users to change device settings for specific listening situations and thereby personalize their listening experience. However, investigations into real-world use of such listening programs to support clinical decisions and evaluate the success of HA treatment are lacking. Objective: We aimed to investigate the provision of listening programs among a large group of in-market HA users and the context in which the programs are typically used. Methods: First, we analyzed how many and which programs were provided to 32,336 in-market HA users. Second, we explored 332,271 program selections from 1312 selected users to investigate the sound environments in which specific programs were used and whether such environments reflect the listening intent conveyed by the name of the used program. Our analysis was based on real-world longitudinal data logged by smartphone-connected HAs. Results: In our sample, 57.71% (18,663/32,336) of the HA users had programs for specific listening situations, which is a higher proportion than previously reported, most likely because of the inclusion criteria. On the basis of association rule mining, we identified a primary additional listening program, Speech in Noise, which is frequent among users and often provided when other additional programs are also provided. We also identified 2 secondary additional programs (Comfort and Music), which are frequent among users who receive ≥3 programs and are usually provided in combination with Speech in Noise. In addition, 2 programs (TV and Remote Mic) were related to the use of external accessories and were not found to be associated with other programs. On average, users selected Speech in Noise, Comfort, and Music in louder, noisier, and less-modulated (all P<.01) environments compared with the environment in which they selected the default program, General. The difference from the sound environment in which they selected General was significantly larger in the minutes following program selection than in the minutes preceding it. Conclusions: This study provides a deeper insight into the provision of listening programs on a large scale and demonstrates that additional listening programs are used as intended, according to the sound environment conveyed by the program name.
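
    As a rough sketch of the association-rule step (the paper does not name its implementation; mlxtend is an assumption), the example below mines program co-provision rules from a toy one-hot table of users' listening programs.

```python
# Minimal sketch of association rule mining over program provisioning:
# one-hot encode which listening programs each user has, mine frequent
# program combinations, and derive rules such as {Comfort} -> {Speech in
# Noise}. The toy provisioning data below is illustrative only.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Each row: the set of additional programs provided to one (toy) user.
provisions = [
    {"Speech in Noise"},
    {"Speech in Noise", "Comfort"},
    {"Speech in Noise", "Music"},
    {"Speech in Noise", "Comfort", "Music"},
    {"TV"},
    {"Speech in Noise"},
    {"Speech in Noise", "Comfort"},
    {"Remote Mic"},
]
programs = sorted(set().union(*provisions))
onehot = pd.DataFrame(
    [[p in row for p in programs] for row in provisions], columns=programs
)

# Frequent program sets, then rules ranked by confidence. In this toy table
# the secondary programs (Comfort, Music) imply the primary one (Speech in
# Noise), while the accessory programs (TV, Remote Mic) fall below the
# support threshold and appear in no rule.
frequent = apriori(onehot, min_support=0.2, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```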