
    Otolith function in human subjects: Perception of motion, reflex eye movements and vision during linear interaural acceleration

    The thesis investigates how the otolith organs of the vestibular system, specifically the utricles, assist motion perception and visual stabilization during translational lateral whole-body acceleration. It was found that high gradients of acceleration facilitate the detection of motion and that, for low acceleration gradients, motion perception in normal subjects relies on a 'velocity' threshold detection process. Experiments in patients without vestibular function indicated that, for the stimuli employed, the somatosensory system could be as sensitive to linear motion as the vestibular system. The interaction between the horizontal linear vestibulo-ocular reflex (LVOR) and visual context was characterized in the following experiments. Subjects were accelerated transiently in darkness, or while viewing earth-fixed or head-fixed targets. From onset, the eye velocity response to head translation was enhanced with acceleration level and target proximity, but was only slightly reduced by fixation of head-fixed targets. This suggested that the gain of the LVOR pathway was adjusted before or immediately after motion onset by a parameter depending mainly on viewing distance and less on knowledge of probable relative target motion. For high relative target velocities, LVORs improved ocular fixation over what would be attained by pursuit alone, although fully compensatory eye movements were not always produced. The LVORs of patients who underwent unilateral vestibular deafferentation suggested that the utricular area generating transaural LVORs is the macular region lateral to the striola. Psychophysical experiments based on a reading task established the functional role of the LVOR for stabilising vision during high-frequency sinusoidal whole-body acceleration. Unlike in normal subjects, visual acuity in patients without vestibular function was no better during self-motion than during display oscillation. Finally, the LVOR interaction with canal-ocular reflexes was studied using isolated and combined translational/rotational stimuli. The results showed that, shortly after motion onset, canal stimulation enhances the LVOR evoked by head translation.
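    The 'velocity' threshold detection process described above can be illustrated with a minimal sketch (all numbers here are illustrative and are not taken from the thesis): the acceleration signal is integrated to a velocity estimate, and motion is reported once that estimate crosses a fixed threshold, so stronger accelerations yield shorter detection latencies.

    ```python
    # Minimal sketch of a velocity-threshold model for detecting linear motion.
    # Threshold, time step and acceleration values are illustrative only.

    def detection_latency(acceleration, threshold=0.02, dt=0.001):
        """Integrate a constant acceleration (m/s^2) to velocity and return
        the time (s) at which velocity first reaches the threshold (m/s)."""
        velocity, t = 0.0, 0.0
        while velocity < threshold:
            velocity += acceleration * dt  # Euler integration of acceleration
            t += dt
        return t

    # A stronger acceleration reaches the velocity threshold sooner, mirroring
    # the finding that higher accelerations facilitate motion detection.
    fast = detection_latency(0.5)   # strong acceleration
    slow = detection_latency(0.05)  # weak acceleration
    print(fast < slow)  # True
    ```

    In this toy model, latency scales inversely with acceleration, which is the qualitative signature of a velocity (rather than acceleration) threshold.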

    Deep Learning Optimizers Comparison in Facial Expression Recognition

    Artificial Intelligence is everywhere we go, whether it is programming an interactive cleaning robot or detecting bank fraud. Its rise is inevitable. In the last few decades, many new architectures and approaches have been proposed, so it has become hard to know which approach or architecture is best for a given area. One such area is the detection of emotion in the human face, most commonly known as Facial Expression Recognition (FER). In this work we began with an extensive survey of the theories that explain the existence of emotions, how they are distinguished from one another, and how they are recognized in a human face. We then developed deep learning models with different architectures to compare their performance when used for Facial Expression Recognition. After developing the models, we took one of them and tested it with different deep learning optimizer algorithms to verify the differences among them, thus determining the best optimization algorithm for this particular case.
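    The optimizer comparison described above can be sketched in miniature. This is not the thesis's actual experiment: it is a minimal numpy illustration of two common deep learning optimizers, plain SGD and Adam, minimizing a toy quadratic loss, to show how the choice of update rule changes convergence behavior. All hyperparameters are illustrative.

    ```python
    import numpy as np

    # Toy comparison of two deep learning optimizers on L(w) = 0.5 * ||w||^2.
    # Hyperparameters are illustrative, not tuned for FER.

    def grad(w):
        """Gradient of the toy loss L(w) = 0.5 * ||w||^2."""
        return w

    def run_sgd(w, lr=0.1, steps=100):
        w = w.copy()
        for _ in range(steps):
            w -= lr * grad(w)  # plain gradient step
        return w

    def run_adam(w, lr=0.1, steps=100, b1=0.9, b2=0.999, eps=1e-8):
        w = w.copy()
        m = np.zeros_like(w)  # first-moment (mean) estimate
        v = np.zeros_like(w)  # second-moment (uncentered variance) estimate
        for t in range(1, steps + 1):
            g = grad(w)
            m = b1 * m + (1 - b1) * g
            v = b2 * v + (1 - b2) * g * g
            m_hat = m / (1 - b1 ** t)  # bias-corrected moments
            v_hat = v / (1 - b2 ** t)
            w -= lr * m_hat / (np.sqrt(v_hat) + eps)
        return w

    w0 = np.array([5.0, -3.0])
    print(np.linalg.norm(run_sgd(w0)))   # very small: SGD contracts geometrically here
    print(np.linalg.norm(run_adam(w0)))  # small: Adam takes near-constant-size steps
    ```

    On a real FER model the same comparison would be run by swapping the optimizer passed to the training loop while keeping the architecture and data fixed, which is the experimental design the abstract describes.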