4 research outputs found

    Model-Predictive Control with New NUV Priors

    Normals with unknown variance (NUV) can represent many useful priors, including L_p norms and other sparsifying priors, and they blend well with linear-Gaussian models and Gaussian message passing algorithms. In this paper, we elaborate on recently proposed discretizing NUV priors, and we propose new NUV representations of half-space constraints and box constraints. We then demonstrate the use of such NUV representations with exemplary applications in model predictive control, with a variety of constraints on the input, the output, or the internal state of the controlled system. In such applications, the computations boil down to iterations of Kalman-type forward-backward recursions, with a complexity (per iteration) that is linear in the planning horizon. In consequence, this approach can handle long planning horizons, which distinguishes it from the prior art. For nonconvex constraints, this approach has no claim to optimality, but it is empirically very effective.
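
    To make the computational structure concrete, here is a minimal sketch of the NUV fixed-point idea for sparse-input control: each iteration solves a linear-Gaussian subproblem and then re-estimates the unknown prior variances from the current input estimates. The paper implements the subproblem with Kalman-type forward-backward recursions (linear in the horizon) and with its own constraint representations; the dense-matrix solve, the double-integrator system, and the plain sparsifying variance update below are illustrative assumptions only.

```python
# Illustrative sketch only: a dense-matrix version of the NUV fixed-point idea for
# sparse-input control. The paper uses Kalman-type forward-backward recursions
# (linear in the horizon); here the quadratic subproblem is solved directly with
# numpy for clarity. System, horizon, and variance-update rule are assumed.
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # double integrator (assumed example system)
B = np.array([[0.5], [1.0]])
x0 = np.array([0.0, 0.0])
x_target = np.array([10.0, 0.0])         # reach position 10 with zero final velocity
N = 50                                   # planning horizon
r = 1e-4                                 # variance of the terminal pseudo-observation

# Map from the input sequence u (length N) to the terminal state x_N.
G = np.zeros((2, N))
for k in range(N):
    G[:, k] = (np.linalg.matrix_power(A, N - 1 - k) @ B).ravel()
c = np.linalg.matrix_power(A, N) @ x0    # contribution of the initial state

u = np.zeros(N)
s = np.ones(N)                           # unknown NUV variances, one per input
for _ in range(100):
    # Linear-Gaussian subproblem: prior u_k ~ N(0, s_k), observation x_target = G u + c + noise(r).
    H = G.T @ G / r + np.diag(1.0 / s)
    u = np.linalg.solve(H, G.T @ (x_target - c) / r)
    # Re-estimate the unknown variances (plain sparsifying NUV, MAP-style update).
    s = np.maximum(u ** 2, 1e-12)

print("terminal state:", G @ u + c)
print("nonzero inputs:", int(np.sum(np.abs(u) > 1e-3)), "of", N)
```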

    Characterising eye movement events with an unsupervised hidden markov model

    Eye-tracking allows researchers to infer cognitive processes from eye movements that are classified into distinct events. Parsing the events is typically done by algorithms. Here we aim at developing an unsupervised, generative model that can be fitted to eye-movement data using maximum likelihood estimation. This approach allows hypothesis testing about fitted models, in addition to serving as a classification method. We developed gazeHMM, an algorithm that uses a hidden Markov model as a generative model, has few critical parameters to be set by users, and does not require human-coded data as input. The algorithm classifies gaze data into fixations, saccades, and optionally postsaccadic oscillations and smooth pursuits. We evaluated gazeHMM’s performance in a simulation study, showing that it successfully recovered hidden Markov model parameters and hidden states. Parameters were recovered less well when we included a smooth pursuit state and/or added even small noise to the simulated data. We applied generative models with different numbers of events to benchmark data. Comparing them indicated that hidden Markov models with more events than expected had most likely generated the data. We also applied the full algorithm to benchmark data and assessed its similarity to human coding and other algorithms. For static stimuli, gazeHMM showed high similarity and outperformed other algorithms in this regard. For dynamic stimuli, gazeHMM tended to switch rapidly between fixations and smooth pursuits but still displayed higher similarity than most other algorithms. Concluding that gazeHMM can be used in practice, we recommend parsing smooth pursuits only for exploratory purposes. Future hidden Markov model algorithms could use covariates to better capture eye movement processes and explicitly model event durations to classify smooth pursuits more accurately.
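
    As a rough illustration of the general approach (a generative hidden Markov model fitted by maximum likelihood and decoded into per-sample event labels), the sketch below fits a two-state Gaussian HMM to a single log-speed feature with hmmlearn. gazeHMM's actual emission models, features, and preprocessing differ; the feature choice, number of states, and synthetic trace here are assumptions made for brevity.

```python
# Rough illustration (not the gazeHMM implementation): fit a generative hidden
# Markov model to a single gaze feature by maximum likelihood, then decode each
# sample into an event label. Emission model, feature, and data are assumptions.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
n = 3000                                           # samples at an assumed 1 kHz
vel = rng.normal(scale=0.02, size=(n, 2))          # fixation drift/noise, deg per sample
for start in range(150, n, 300):                   # inject a "saccade" every 300 ms
    vel[start:start + 20] += rng.uniform(0.2, 0.5)
pos = np.cumsum(vel, axis=0)                       # synthetic gaze position trace

speed = np.linalg.norm(np.diff(pos, axis=0), axis=1)
feature = np.log(speed + 1e-6).reshape(-1, 1)      # one-dimensional observation sequence

model = hmm.GaussianHMM(n_components=2, covariance_type="diag", n_iter=200, random_state=0)
model.fit(feature)                                 # Baum-Welch (maximum likelihood)
states = model.predict(feature)                    # Viterbi decoding into hidden states

saccade_state = int(np.argmax(model.means_.ravel()))  # the faster state is the saccade
labels = np.where(states == saccade_state, "saccade", "fixation")
print(labels[145:175])
```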

    AUTOMATIC DETECTION OF NYSTAGMUS IN BEDSIDE VOG RECORDINGS FROM PATIENTS WITH VERTIGO

    Benign Paroxysmal Positional Vertigo (BPPV) is the most common cause of vertigo. It can be diagnosed and treated using simple maneuvers performed by vestibular experts. However, patients with this condition who present to the emergency department (ED) have a high chance of being misdiagnosed. Such a high rate of misdiagnosis results in significant morbidity for the patient and also incurs large medical costs from unnecessary neuroimaging tests. Hence, automatic medical diagnosis is the next step to help ED practitioners reduce diagnostic errors. However, current software employed for this diagnosis has been found to have very low specificity. This can be attributed to factors such as the low sampling frequency of the recording device and the fact that bedside recordings from patients are susceptible to noise and artifacts. This study aims to improve methods for the automatic quantification of nystagmus, a key sign of BPPV. Testing the method developed here on eye movement data recorded from patients during the diagnostic maneuver yielded better results than the commercial software.
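
    For context, a common way to quantify nystagmus from a video-oculography (VOG) position trace is to remove the fast (resetting) phases by velocity thresholding and to estimate the slow-phase velocity from the remaining samples. The sketch below illustrates that generic idea on a synthetic trace; the threshold, sampling rate, and signal model are assumptions, and this is not necessarily the algorithm evaluated in the study.

```python
# Hedged illustration: a common way to quantify nystagmus from a VOG position trace
# is to discard fast (resetting) phases by velocity thresholding and estimate the
# slow-phase velocity (SPV) from the rest. Threshold, rate, and trace are assumed.
import numpy as np

fs = 60.0                                     # Hz, assumed bedside VOG sampling rate
t = np.arange(0, 5, 1 / fs)

# Synthetic nystagmus: 5 deg/s slow drift with a resetting fast phase every 0.5 s.
pos = 5.0 * t - 2.5 * np.floor(t / 0.5)
pos += np.random.default_rng(0).normal(scale=0.05, size=pos.shape)

vel = np.gradient(pos, 1 / fs)                # deg/s
slow = np.abs(vel) < 30.0                     # keep samples below a fast-phase threshold
spv = np.median(vel[slow])                    # robust slow-phase velocity estimate
print(f"estimated SPV: {spv:.1f} deg/s (true slow-phase drift: 5 deg/s)")
```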

    Model-based Separation, Detection, and Classification of Eye Movements

    Objective: We present a physiologically motivated eye movement analysis framework for model-based separation, detection, and classification (MBSDC) of eye movements. By estimating kinematic and neural controller signals for saccades, smooth pursuit, and fixational eye movements in a mechanistic model of the oculomotor system, we are able to separate and analyze these eye movements independently. Methods: We extended an established oculomotor model for horizontal eye movements with neural controller signals and a blink artifact model. To estimate kinematic signals (position, velocity, acceleration, forces) and neural controller signals from eye-position data, we employ Kalman smoothing and sparse input estimation techniques. The estimated signals are used for detecting saccade start and end points, and for classifying the recording into saccades, smooth pursuit, fixations, post-saccadic oscillations, and blinks. Results: On simulated data, the reconstruction error of the velocity profiles is about half the error value obtained by the commonly employed approach of filtering and numerical differentiation. In experiments with smooth pursuit data from human subjects, we observe an accurate signal separation. In addition, in neural recordings from non-human primates, the estimated neural controller signals match the real recordings strikingly well. Significance: The MBSDC framework enables the analysis of multi-type eye movement recordings, provides a physiologically motivated approach to studying motor commands, and might aid the discovery of new digital biomarkers. Conclusion: The proposed framework provides a model-based approach for a wide variety of eye movement analysis tasks.
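
    Below is a minimal sketch of the Kalman-smoothing part of such a pipeline, under simplifying assumptions: a two-state plant (eye position and velocity with a single viscous time constant) stands in for the full oculomotor model, and a standard Rauch-Tung-Striebel smoother estimates velocity from noisy position samples. The paper's sparse estimation of neural controller signals and its blink model are not reproduced here; all parameter values are illustrative.

```python
# Minimal sketch under simplifying assumptions: a two-state plant (position and
# velocity with one viscous time constant) replaces the full oculomotor model, and
# a plain Kalman (RTS) smoother estimates velocity from noisy position samples.
# Sparse input estimation and the blink model are not reproduced; values are assumed.
import numpy as np

dt, tau = 1e-3, 0.013                         # 1 kHz sampling; assumed plant time constant
A = np.array([[1.0, dt], [0.0, 1.0 - dt / tau]])
C = np.array([[1.0, 0.0]])
Q = np.diag([1e-9, 1e2])                      # process noise per step (drives velocity)
R = np.array([[0.1 ** 2]])                    # position measurement noise (deg^2)

rng = np.random.default_rng(1)
T = 600
x_true = np.zeros((T, 2))
for k in range(T - 1):                        # simulate a saccade-like velocity pulse
    u = 400.0 if 100 <= k < 135 else 0.0      # crude, purely illustrative drive (deg/s)
    x_true[k + 1] = A @ x_true[k] + np.array([0.0, dt * u / tau])
y = x_true[:, :1] + rng.normal(scale=0.1, size=(T, 1))

# Forward Kalman filter, storing predicted and filtered moments.
m_p, P_p, m_f, P_f = [], [], [], []
m, P = np.zeros(2), np.eye(2)
for k in range(T):
    mp, Pp = A @ m, A @ P @ A.T + Q                      # predict
    K = Pp @ C.T @ np.linalg.inv(C @ Pp @ C.T + R)       # Kalman gain
    m = mp + (K @ (y[k] - C @ mp)).ravel()               # measurement update
    P = Pp - K @ C @ Pp
    m_p.append(mp); P_p.append(Pp); m_f.append(m); P_f.append(P)

# Backward Rauch-Tung-Striebel smoothing pass.
m_s = list(m_f)
for k in range(T - 2, -1, -1):
    G = P_f[k] @ A.T @ np.linalg.inv(P_p[k + 1])
    m_s[k] = m_f[k] + G @ (m_s[k + 1] - m_p[k + 1])

vel_smooth = np.array(m_s)[:, 1]
vel_diff = np.gradient(y.ravel(), dt)                    # naive numerical differentiation
rmse = lambda v: np.sqrt(np.mean((v - x_true[:, 1]) ** 2))
print(f"velocity RMSE: smoothed {rmse(vel_smooth):.1f}, numerical diff {rmse(vel_diff):.1f} deg/s")
```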