
    EyeRIS: A General-Purpose System for Eye Movement Contingent Display Control

    In experimental studies of visual performance, the need often emerges to modify the stimulus according to the eye movements performed by the subject. The methodology of Eye Movement-Contingent Display (EMCD) enables accurate control of the position and motion of the stimulus on the retina. EMCD procedures have been used successfully in many areas of vision science, including studies of visual attention, eye movements, and physiological characterization of neuronal response properties. Unfortunately, the difficulty of real-time programming and the unavailability of flexible and economical systems that can be easily adapted to the diversity of experimental needs and laboratory setups have prevented the widespread use of EMCD control. This paper describes EyeRIS, a general-purpose system for performing EMCD experiments on a Windows computer. Based on a digital signal processor with analog and digital interfaces, this integrated hardware and software system is responsible for sampling and processing oculomotor signals and subject responses and modifying the stimulus displayed on a CRT according to the gaze-contingent procedure specified by the experimenter. EyeRIS is designed to update the stimulus within a delay of 10 ms. To thoroughly evaluate EyeRIS' performance, this study (a) examines the response of the system in a number of EMCD procedures and computational benchmarking tests, (b) compares the accuracy of implementation of one particular EMCD procedure, retinal stabilization, to that produced by a standard tool used for this task, and (c) examines EyeRIS' performance in one of the many EMCD procedures that cannot be executed by means of any other currently available device. Funding: National Institutes of Health (EY15732-01).
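The retinal-stabilization procedure benchmarked in (b) keeps the stimulus fixed on the retina by shifting it with every gaze displacement. A minimal sketch of the core update, assuming gaze and stimulus positions share one screen coordinate frame (the function name and gain parameter are illustrative, not EyeRIS code):

```python
def stabilize(stimulus_xy, gaze_offset_xy, gain=1.0):
    """Shift the stimulus by the eye's displacement from fixation so its
    retinal position stays constant. gain=1.0 gives full stabilization;
    gain=0.0 leaves the stimulus world-fixed on the screen."""
    x, y = stimulus_xy
    dx, dy = gaze_offset_xy
    return (x + gain * dx, y + gain * dy)
```

In a real EMCD loop this update, plus the redraw, must complete within the system's latency budget (10 ms for EyeRIS).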

    GraFIX: a semiautomatic approach for parsing low- and high-quality eye-tracking data

    Fixation durations (FD) have been used widely as a measurement of information processing and attention. However, issues like data quality can seriously influence the accuracy of fixation detection methods and, thus, affect the validity of our results (Holmqvist, Nyström, & Mulvey, 2012). This is crucial when studying special populations such as infants, where common issues with testing (e.g., a high degree of movement, unreliable eye detection, low spatial precision) result in highly variable data quality and render existing FD detection approaches highly time consuming (hand-coding) or imprecise (automatic detection). To address this problem, we present GraFIX, a novel semiautomatic method consisting of a two-step process in which eye-tracking data is initially parsed using velocity-based algorithms whose input parameters are adapted by the user, and then manipulated through a graphical interface, allowing accurate and rapid adjustments of the algorithms' outcome. The present algorithms (1) smooth the raw data, (2) interpolate missing data points, and (3) apply a number of criteria to automatically evaluate and remove artifactual fixations. The input parameters (e.g., velocity threshold, interpolation latency) can easily be adapted manually to fit each participant. Furthermore, the application includes visualization tools that facilitate the manual coding of fixations. We assessed this method by performing an intercoder reliability analysis in two groups of infants presenting low- and high-quality data and compared it with previous methods. Results revealed that our two-step approach with adaptable FD detection criteria gives rise to more reliable and stable measures in low- and high-quality data.
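The first, automatic step described above (velocity-based parsing with user-adjustable parameters) can be sketched as a plain velocity-threshold fixation detector. The default threshold and minimum duration below are common choices in the literature, not GraFIX's actual parameters, and the smoothing and interpolation steps are omitted for brevity:

```python
def detect_fixations(samples, hz, vel_thresh_deg_s=30.0, min_dur_ms=100.0):
    """samples: list of (x_deg, y_deg) gaze positions, one per frame.
    Returns (start_index, end_index, duration_ms) for each run of
    consecutive samples whose point-to-point velocity stays below the
    threshold and which lasts at least min_dur_ms."""
    dt = 1.0 / hz
    fixations, start = [], None
    for i in range(1, len(samples)):
        dx = samples[i][0] - samples[i - 1][0]
        dy = samples[i][1] - samples[i - 1][1]
        vel = (dx * dx + dy * dy) ** 0.5 / dt  # deg/s between frames
        if vel < vel_thresh_deg_s:
            if start is None:
                start = i - 1          # a slow run begins here
        else:
            if start is not None:      # saccade ends the current run
                dur = (i - start) * dt * 1000.0
                if dur >= min_dur_ms:
                    fixations.append((start, i - 1, dur))
                start = None
    if start is not None:              # close a run at end of data
        dur = (len(samples) - 1 - start) * dt * 1000.0
        if dur >= min_dur_ms:
            fixations.append((start, len(samples) - 1, dur))
    return fixations
```

In a GraFIX-style workflow, the user would tune `vel_thresh_deg_s` and `min_dur_ms` per participant and then correct the output by hand in the GUI.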

    Steady-State movement related potentials for brain–computer interfacing

    An approach to brain-computer interfacing (BCI) based on the analysis of steady-state movement-related potentials (ssMRPs) produced during rhythmic finger movements is proposed in this paper. The neurological background of ssMRPs is briefly reviewed. Averaged ssMRPs represent the development of a lateralized rhythmic potential, and the energy of the EEG signals at the finger-tapping frequency can be used for single-trial ssMRP classification. The proposed ssMRP-based BCI approach is tested using the classic Fisher linear discriminant classifier. Moreover, the influence of the current source density transform on the performance of the BCI system is investigated. The averaged correct classification rates (CCRs) as well as averaged information transfer rates (ITRs) for different sliding time windows are reported. Reliable single-trial classification rates of 88%-100% accuracy are achievable at relatively high ITRs. Furthermore, we have been able to achieve CCRs of up to 93% in classification of the ssMRPs recorded during imagined rhythmic finger movements. The merit of this approach lies in the application of rhythmic cues for BCI, the relatively simple recording setup, and the straightforward computations that make real-time implementations plausible.

    Entering PIN codes by smooth pursuit eye movements

    Despite its potential, gaze interaction is still not a widely used interaction concept. Major drawbacks such as calibration, eye strain, and a high number of false alarms are associated with gaze-based interaction and limit its practicability for everyday human-computer interaction. In this paper, two experiments are described which use smooth pursuit eye movements on moving display buttons. The first experiment was conducted to extract an easy and fast interaction concept and, at the same time, to collect data for developing a specific but robust algorithm. In a follow-up experiment, twelve conventionally calibrated participants interacted successfully with the system. For another group of twelve people, the eye tracker was not calibrated individually but on a third person. Results show that interaction was possible without false alarms for both groups. Both groups rated the user experience of the system as positive.
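Smooth-pursuit interaction of this kind is typically implemented by correlating the raw (even uncalibrated) gaze trace with the known on-screen trajectory of each moving button and selecting the best match; because Pearson correlation is invariant to offset and scale, per-user calibration can be skipped. A sketch under that assumption (a common correlation-based matcher, not necessarily the paper's exact algorithm):

```python
def pearson(a, b):
    """Pearson correlation of two equal-length sequences (0.0 if flat)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0

def select_button(gaze_x, gaze_y, buttons):
    """buttons: dict name -> (traj_x, traj_y), trajectories sampled at the
    same instants as the gaze trace. Returns the button whose movement
    correlates best with the gaze trace."""
    scores = {name: (pearson(gaze_x, tx) + pearson(gaze_y, ty)) / 2
              for name, (tx, ty) in buttons.items()}
    return max(scores, key=scores.get)
```

A real system would also require the winning score to exceed a threshold over a minimum dwell window before accepting a PIN digit, to keep false alarms out.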

    Molecular propensity as a driver for explorative reactivity studies

    Quantum chemical studies of reactivity involve calculations on a large number of molecular structures and comparison of their energies. The very set-up of these calculations already limits the scope of the results one will obtain, because several system-specific variables, such as the charge and spin, need to be set prior to the calculation. For a reliable exploration of reaction mechanisms, a considerable number of calculations with varying global parameters must be taken into account, or important facts about the reactivity of the system under consideration can go undetected. For example, one could miss crossings of potential energy surfaces for different spin states, or might not note that a molecule is prone to oxidation. Here, we introduce the concept of molecular propensity to account for the predisposition of a molecular system to react across different electronic states in certain nuclear configurations. Within our real-time quantum chemistry framework, we developed an algorithm that allows us to be alerted to such a propensity of a system under consideration.
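The propensity check can be illustrated as a scan over a global parameter such as the spin multiplicity: if another electronic state lies within a small energy window of the lowest state at the current nuclear configuration, the system is flagged as liable to react across states. The `energy_fn` interface and the 0.02 Hartree window below are hypothetical placeholders, not the authors' algorithm:

```python
def propensity_alerts(energy_fn, structure, spins, threshold=0.02):
    """Flag spin states whose single-point energy lies within `threshold`
    (Hartree) of the lowest state at this nuclear configuration -- a sign
    of possible surface crossings or reactivity across states.
    energy_fn(structure, spin) is a hypothetical electronic-structure call."""
    energies = {s: energy_fn(structure, s) for s in spins}
    e0 = min(energies.values())
    return [s for s, e in energies.items() if 0.0 < e - e0 <= threshold]
```

In an exploration run, the same scan would be repeated over charge states and along the trajectory, raising an alert whenever the list is non-empty.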

    Handheld image acquisition with real-time vision for human-computer interaction on mobile applications

    Tese de mestrado integrado, Engenharia Biomédica e Biofísica (Engenharia Clínica e Instrumentação Médica), Universidade de Lisboa, Faculdade de Ciências, 2019.
    Many important diseases manifest themselves in the retina, both primary retinal conditions and systemic disorders. Diabetic retinopathy, glaucoma, and age-related macular degeneration are some of the most frequent ocular disorders and the leading causes of blindness in developed countries. Since these disorders are becoming increasingly prevalent, there is a growing need for high-coverage screening among the most susceptible population. Because its function requires the retina to see the outside world, the optical components in front of it must be transparent for image formation. This makes the retinal tissue, and thereby brain tissue, accessible for imaging in a non-invasive manner. There are several approaches to visualizing the retina, including fluorescein angiography, optical coherence tomography, and fundus photography. Fraunhofer's EyeFundusScope (EFS) prototype is a handheld, smartphone-based fundus camera that does not require pupil dilation. It employs machine learning algorithms to process the image in search of lesions that are often associated with diabetic retinopathy, making it a pre-diagnostic tool. The robustness of this computer vision algorithm, as well as the diagnostic performance of ophthalmologists and neurologists, is strongly related to the quality of the images acquired, and the consistency of handheld capture deeply depends on proper human interaction.
    In order to improve the user's contribution to the retinal acquisition procedure, a new graphical user interface was designed and implemented in the EFS Acquisition App. The intended approach is to make the EFS easier to use by non-ophthalmic trained personnel, in both clinical and non-clinical environments. Composed of several interaction elements created to suit the needs of the acquisition procedure, the graphical user interface should help the user position and align the EFS illumination beam with the patient's pupil, as well as keep track of the time between acquisitions on the same eye. Initially, several versions of the rotational interaction elements were designed and later implemented in the EFS Acquisition App. These use data from the smartphone's inertial sensors to give real-time feedback to the user while moving the EFS. Besides the rotational interaction elements, a timer and an eye indicator were also designed and implemented. Usability tests took place after three configurations had been implemented and corrected with the help of a model-eye ophthalmoscope trainer. A protocol for the different use-case scenarios was elaborated, and a tutorial was created. Results from the usability tests show that the new graphical user interface had a very positive outcome: the majority of users adapted quickly to the new interface, and for many it contributed to a successful acquisition task. In the future, combining inertial sensor data with image recognition may prove to be the foundation of a more efficient interaction technique in clinical practice. Furthermore, the new graphical user interface could provide the EFS with an application for educational purposes.
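The rotational interaction elements described above map inertial-sensor readings to on-screen alignment guidance. A minimal sketch of that idea, assuming the device orientation is already available as pitch/roll angles (the function name, angle convention, and 3° tolerance are illustrative assumptions, not the EFS implementation):

```python
import math

def alignment_state(pitch_deg, roll_deg, target_pitch=0.0,
                    target_roll=0.0, tol_deg=3.0):
    """Map a device orientation (e.g. from a phone's rotation-vector
    sensor) to UI feedback: 'aligned' when the angular error to the
    target pose is within tolerance, else 'adjust'. Returns the error
    magnitude too, so the UI can scale its rotational indicator."""
    err = math.hypot(pitch_deg - target_pitch, roll_deg - target_roll)
    return ("aligned", err) if err <= tol_deg else ("adjust", err)
```

On each sensor event the indicator would be redrawn with the returned error, turning into an "aligned" cue once the handheld camera is steady on the pupil.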