
    Event-based Vision meets Deep Learning on Steering Prediction for Self-driving Cars

    Event cameras are bio-inspired vision sensors that naturally capture the dynamics of a scene, filtering out redundant information. This paper presents a deep neural network approach that unlocks the potential of event cameras on a challenging motion-estimation task: prediction of a vehicle's steering angle. To make the most of this sensor-algorithm combination, we adapt state-of-the-art convolutional architectures to the output of event sensors and extensively evaluate the performance of our approach on a publicly available large-scale event-camera dataset (~1000 km). We present qualitative and quantitative explanations of why event cameras allow robust steering prediction even in cases where traditional cameras fail, e.g. challenging illumination conditions and fast motion. Finally, we demonstrate the advantages of leveraging transfer learning from traditional to event-based vision, and show that our approach outperforms state-of-the-art algorithms based on standard cameras. Comment: 9 pages, 8 figures, 6 tables. Video: https://youtu.be/_r_bsjkJTH
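    The abstract above describes adapting standard convolutional architectures to event-camera output for steering regression. As a hedged illustration only, the sketch below accumulates events into a two-channel polarity histogram and regresses a single steering angle with a ResNet-18 backbone; the event-frame representation, helper names, and network sizes are assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch: turn an event stream into a 2-channel polarity
# histogram and regress a steering angle with a standard CNN backbone.
# Shapes and helper names are assumptions, not the paper's pipeline.
import torch
import torch.nn as nn
from torchvision.models import resnet18

def events_to_frame(events, height, width):
    """Accumulate events (x, y, t, polarity) into a 2-channel count image."""
    frame = torch.zeros(2, height, width)
    for x, y, _t, p in events:
        channel = 0 if p > 0 else 1
        frame[channel, int(y), int(x)] += 1.0
    return frame

class SteeringRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)
        # Adapt the first conv to the 2-channel event frame instead of RGB.
        backbone.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2, padding=3, bias=False)
        # Replace the classifier head with a single-output regression head.
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)
        self.backbone = backbone

    def forward(self, x):
        return self.backbone(x)

if __name__ == "__main__":
    events = [(10, 20, 0.001, 1), (11, 20, 0.002, -1)]
    frame = events_to_frame(events, height=180, width=240).unsqueeze(0)
    model = SteeringRegressor()
    print(model(frame).shape)  # torch.Size([1, 1]) -> predicted steering angle
```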

    Recurrent Attention Models for Depth-Based Person Identification

    We present an attention-based model that reasons about human body shape and motion dynamics to identify individuals in the absence of RGB information, hence in the dark. Our approach leverages unique 4D spatio-temporal signatures to address the identification problem across days. Formulated as a reinforcement learning task, our model is based on a combination of convolutional and recurrent neural networks with the goal of identifying small, discriminative regions indicative of human identity. We demonstrate that our model produces state-of-the-art results on several published datasets given only depth images. We further study the robustness of our model to viewpoint, appearance, and volumetric changes. Finally, we share insights gleaned from interpretable 2D, 3D, and 4D visualizations of our model's spatio-temporal attention. Comment: Computer Vision and Pattern Recognition (CVPR) 201
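    To make the glimpse-based idea concrete, here is a minimal, hedged sketch of a recurrent attention model over single-channel depth maps: a small CNN encodes a crop around the current attention location, a GRU cell integrates glimpses over time, and a location head proposes where to look next. The sizes, names, and the omitted REINFORCE-style training loop are assumptions rather than the authors' exact architecture.

```python
# Hedged sketch of a recurrent attention ("glimpse") model over depth maps.
import torch
import torch.nn as nn

def extract_glimpse(depth, loc, size=32):
    """Crop a size x size patch centred at loc, with loc normalised to [-1, 1]."""
    b, _, h, w = depth.shape
    cx = ((loc[:, 0] + 1) / 2 * (w - size)).long()
    cy = ((loc[:, 1] + 1) / 2 * (h - size)).long()
    patches = [depth[i, :, cy[i]:cy[i] + size, cx[i]:cx[i] + size] for i in range(b)]
    return torch.stack(patches)

class GlimpseAgent(nn.Module):
    def __init__(self, num_ids, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * 8 * 8, hidden), nn.ReLU(),
        )
        self.rnn = nn.GRUCell(hidden, hidden)
        self.locator = nn.Linear(hidden, 2)            # where to look next
        self.classifier = nn.Linear(hidden, num_ids)   # identity prediction

    def forward(self, depth, n_glimpses=6):
        b = depth.size(0)
        h = torch.zeros(b, self.rnn.hidden_size, device=depth.device)
        loc = torch.zeros(b, 2, device=depth.device)   # start at the centre
        for _ in range(n_glimpses):
            g = self.encoder(extract_glimpse(depth, loc))
            h = self.rnn(g, h)
            loc = torch.tanh(self.locator(h))          # propose the next glimpse
        return self.classifier(h)

if __name__ == "__main__":
    depth = torch.randn(4, 1, 128, 128)                # batch of depth maps
    print(GlimpseAgent(num_ids=50)(depth).shape)       # torch.Size([4, 50])
```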

    Parallel computing for brain simulation

    [Abstract] Background: The human brain is the most complex system in the known universe and therefore one of its greatest mysteries. It provides human beings with extraordinary abilities, yet how and why most of these abilities are produced is still not understood. Aims: For decades, researchers have been trying to make computers reproduce these abilities, focusing both on understanding the nervous system and on processing data more efficiently than before. Their aim is to make computers process information similarly to the brain. Important technological developments and vast multidisciplinary projects have allowed the creation of the first simulation with a number of neurons similar to that of a human brain. Conclusion: This paper presents an up-to-date review of the main research projects that are trying to simulate and/or emulate the human brain. They employ different types of computational models using parallel computing: digital, analog and hybrid models. This review covers the current applications of these works as well as future trends. It focuses on works that seek advanced progress in neuroscience and on others that seek new discoveries in computer science (neuromorphic hardware, machine learning techniques). Their most outstanding characteristics are summarized, and the latest advances and future plans are presented. In addition, this review points out the importance of considering not only neurons: computational models of the brain should also include glial cells, given the proven importance of astrocytes in information processing. Funding: Galicia, Consellería de Cultura, Educación e Ordenación Universitaria; GRC2014/049. Galicia, Consellería de Cultura, Educación e Ordenación Universitaria; R2014/039. Instituto de Salud Carlos III; PI13/0028
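    As a toy illustration of the kind of model these projects parallelise, the sketch below updates a leaky integrate-and-fire population in lock-step, vectorised with NumPy as a stand-in for the massively parallel hardware the review surveys. All constants are arbitrary assumptions, and glial dynamics are omitted.

```python
# Toy, vectorised leaky integrate-and-fire population (illustrative only).
import numpy as np

def simulate_lif(n_neurons=10_000, steps=1_000, dt=1e-3,
                 tau=0.02, v_thresh=1.0, v_reset=0.0, seed=0):
    rng = np.random.default_rng(seed)
    v = np.zeros(n_neurons)                              # membrane potentials
    spikes = np.zeros((steps, n_neurons), dtype=bool)
    for t in range(steps):
        input_current = rng.normal(1.2, 0.5, n_neurons)  # noisy external drive
        v += dt / tau * (-v + input_current)             # leaky integration
        fired = v >= v_thresh
        v[fired] = v_reset                               # reset after a spike
        spikes[t] = fired
    return spikes

if __name__ == "__main__":
    spikes = simulate_lif()
    print("mean firing rate (Hz):", spikes.mean() / 1e-3)  # divide by dt
```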

    A historical perspective of algorithmic lateral inhibition and accumulative computation in computer vision

    Certainly, one of the prominent ideas of Professor José Mira was that it is absolutely mandatory to specify the mechanisms and/or processes underlying each task and inference mentioned in an architecture in order to make that architecture operational. The conjecture of the last fifteen years of joint research has been that any bottom-up organization may be made operational using two biologically inspired methods called "algorithmic lateral inhibition", a generalization of lateral inhibition anatomical circuits, and "accumulative computation", a working memory related to the temporal evolution of the membrane potential. This paper is dedicated to the computational formulation of both methods. Finally, all of the works of our group related to this methodological approach are mentioned and summarized, showing that all of them support the validity of the approach.
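    A hedged sketch of the two mechanisms on a 2-D activity map may help: local inhibition of a cell by its neighbours, and a "permanency" memory that charges while its input persists and discharges when it disappears. The exact update rules used by the group differ; all thresholds and rates below are illustrative assumptions.

```python
# Illustrative lateral inhibition and accumulative computation on a 2-D map.
import numpy as np

def lateral_inhibition(activity, strength=0.125):
    """Each cell is suppressed by the mean activity of its 8 neighbours."""
    padded = np.pad(activity, 1, mode="edge")
    neigh_sum = sum(
        padded[1 + dy:1 + dy + activity.shape[0], 1 + dx:1 + dx + activity.shape[1]]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    )
    return np.clip(activity - strength * neigh_sum / 8.0, 0.0, None)

def accumulative_computation(memory, stimulus, charge=0.25, discharge=0.5,
                             v_min=0.0, v_max=1.0):
    """Charge the working memory where the stimulus is present, else discharge."""
    memory = np.where(stimulus > 0, memory + charge, memory - discharge)
    return np.clip(memory, v_min, v_max)

if __name__ == "__main__":
    activity = np.random.rand(8, 8)
    memory = np.zeros((8, 8))
    for _ in range(4):                       # a short temporal sequence
        stimulus = (lateral_inhibition(activity) > 0.3).astype(float)
        memory = accumulative_computation(memory, stimulus)
    print(memory.round(2))
```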

    An Investigation of Color Hierarchy Learning in Convolutional Neural Networks

    Master's thesis (M.A.), Seoul National University Graduate School, College of Humanities, Interdisciplinary Program in Cognitive Science, August 2020. Advisor: Byoung-Tak Zhang. Empirical evidence suggests that color categories emerge in a universal, recurrent, hierarchical pattern across different cultures in the following order: white, black < red < green, yellow < blue < brown < pink, gray, orange, and purple. This pattern is referred to as the "Color Hierarchy". Over two experiments, the present study examines whether there is evidence for such hierarchical color category learning patterns in Convolutional Neural Networks (CNNs). Experiment A investigates whether color categories are learned randomly, or in a fixed, hierarchical fashion. Results show that colors higher up the Color Hierarchy (e.g. red) are generally learned before colors lower down the hierarchy (e.g. brown, orange, gray). Experiment B examines whether object color affects recall in object detection. Similar to Experiment A, results show that object recall is noticeably affected by color, with colors higher up the Color Hierarchy generally showing better recall. Additionally, objects whose color can be described by adjectives that emphasise colorfulness (e.g. vivid, brilliant, deep) show better recall than those described by adjectives that de-emphasise colorfulness (e.g. dark, pale, light). The effect of both color hue and adjective on object recall remains observable even when controlling for contrast through grayscale images. These results highlight similarities between humans and CNNs in color perception, and provide insight into factors that influence object detection. They also show the value of deep learning techniques as a means of investigating cognitive universals in an efficient, unbiased, cost-effective way.
    Table of contents: 1 Introduction (1.1 Is Color Categorization Random?; 1.2 Modelling the Color Hierarchy; 1.3 Convolutional Neural Networks and Color Learning; 1.4 Hypotheses); 2 Datasets (2.1 Basic Color Dataset; 2.2 Modanet; 2.2.1 Color Annotating Process); 3 Color Space (3.1 Opponent Color Space; 3.2 Luminance Color Spaces); 4 Experiment A: CNN Color Classification Recall Experiment (4.1 Model; 4.2 Method; 4.3 Results); 5 Experiment B: Faster R-CNN Colored Clothing Recall Experiment (5.1 Model; 5.2 Method; 5.3 Results); 6 Discussion and Conclusion; References; Korean Abstract; Acknowledgements.
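    In the spirit of Experiment A, the hedged sketch below trains a tiny CNN on noisy solid-colour patches and tracks per-colour recall after each epoch, which is one simple way to observe which categories are learned first. The colour prototypes, network, and noise level are assumptions for illustration, not the thesis setup.

```python
# Illustrative per-colour recall tracking for a toy colour classifier.
import torch
import torch.nn as nn

COLORS = {"white": (1.0, 1.0, 1.0), "black": (0.0, 0.0, 0.0), "red": (1.0, 0.0, 0.0),
          "green": (0.0, 1.0, 0.0), "yellow": (1.0, 1.0, 0.0), "blue": (0.0, 0.0, 1.0),
          "brown": (0.6, 0.3, 0.1), "gray": (0.5, 0.5, 0.5)}

def make_batch(n_per_class=32, size=16, noise=0.25):
    """Generate noisy solid-colour patches and their class labels."""
    xs, ys = [], []
    for label, rgb in enumerate(COLORS.values()):
        patch = torch.tensor(rgb, dtype=torch.float32).view(3, 1, 1).expand(3, size, size)
        xs.append((patch + noise * torch.randn(n_per_class, 3, size, size)).clamp(0, 1))
        ys.append(torch.full((n_per_class,), label))
    return torch.cat(xs), torch.cat(ys)

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                      nn.Linear(8, len(COLORS)))
optim = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(5):
    x, y = make_batch()
    loss = nn.functional.cross_entropy(model(x), y)
    optim.zero_grad()
    loss.backward()
    optim.step()
    with torch.no_grad():                                 # per-colour recall
        xe, ye = make_batch()
        pred = model(xe).argmax(dim=1)
    recall = {name: (pred[ye == i] == i).float().mean().item()
              for i, name in enumerate(COLORS)}
    print(epoch, {k: round(v, 2) for k, v in recall.items()})
```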

    Revisiting algorithmic lateral inhibition and accumulative computation

    Certainly, one of the prominent ideas of Professor Mira was that it is absolutely mandatory to specify the mechanisms and/or processes underlying each task and inference mentioned in an architecture in order to make that architecture operational. The conjecture of the last fifteen years of joint research between Professor Mira and our team at the University of Castilla-La Mancha has been that any bottom-up organization may be made operational using two biologically inspired methods called "algorithmic lateral inhibition", a generalization of lateral inhibition anatomical circuits, and "accumulative computation", a working memory related to the temporal evolution of the membrane potential. This paper is dedicated to the computational formulations of both methods, which have led to quite efficient solutions of problems related to motion-based computer vision.
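    As a hedged sketch of the motion-based application mentioned above, the snippet below drives a permanency memory with frame differences, so pixels where motion persists keep a high value while static regions decay; thresholds and rates are illustrative assumptions only.

```python
# Illustrative permanency memory over frame differences (motion persistence).
import numpy as np

def motion_permanency(frames, diff_thresh=0.1, charge=0.3, discharge=0.1):
    memory = np.zeros_like(frames[0], dtype=float)
    prev = frames[0]
    for frame in frames[1:]:
        moving = np.abs(frame - prev) > diff_thresh       # crude motion mask
        memory = np.clip(np.where(moving, memory + charge, memory - discharge),
                         0.0, 1.0)
        prev = frame
    return memory

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.random((4, 4)) for _ in range(6)]       # stand-in video frames
    print(motion_permanency(frames).round(2))
```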