    A Double-Deep Spatio-Angular Learning Framework for Light Field based Face Recognition

    Face recognition has attracted increasing attention due to its wide range of applications, but it remains challenging under large variations in the biometric data characteristics. Lenslet light field cameras have recently come into prominence for capturing rich spatio-angular information, thus offering new possibilities for advanced biometric recognition systems. This paper proposes a double-deep spatio-angular learning framework for light field based face recognition, which learns both texture and angular dynamics in sequence using convolutional representations; to the authors' knowledge, this recognition framework has not previously been proposed for face recognition or any other visual recognition task. The proposed double-deep learning framework includes a long short-term memory (LSTM) recurrent network whose inputs are VGG-Face descriptions computed using a VGG-Very-Deep-16 convolutional neural network (CNN). The VGG-16 network takes different face viewpoints rendered from a full light field image, organised as a pseudo-video sequence. A comprehensive set of experiments has been conducted on the IST-EURECOM light field face database, for varied and challenging recognition tasks. Results show that the proposed framework achieves superior face recognition performance compared to the state-of-the-art.
    Comment: Submitted to IEEE Transactions on Circuits and Systems for Video Technology
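
    As a rough illustration of the pipeline described above, the sketch below (PyTorch, with an off-the-shelf VGG-16 standing in for the VGG-Face descriptor) extracts a per-viewpoint CNN description and feeds the resulting pseudo-video sequence to an LSTM. The viewpoint count, hidden size, and class count are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the double-deep pipeline: per-viewpoint CNN descriptions
# followed by an LSTM over the pseudo-video sequence of light field viewpoints.
# Hyperparameters are assumptions; the paper uses VGG-Face weights, which are
# not bundled with torchvision.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class SpatioAngularNet(nn.Module):
    def __init__(self, num_classes=100, hidden=512):
        super().__init__()
        backbone = vgg16()  # stand-in for the VGG-Face/VGG-16 descriptor
        self.cnn = nn.Sequential(backbone.features, nn.AdaptiveAvgPool2d(7),
                                 nn.Flatten(), *list(backbone.classifier[:4]))
        self.lstm = nn.LSTM(input_size=4096, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, views):                  # views: (B, T, 3, 224, 224)
        b, t = views.shape[:2]
        feats = self.cnn(views.flatten(0, 1))  # per-viewpoint 4096-d description
        feats = feats.view(b, t, -1)           # reassemble pseudo-video sequence
        _, (h, _) = self.lstm(feats)           # angular dynamics across viewpoints
        return self.head(h[-1])

logits = SpatioAngularNet()(torch.randn(2, 9, 3, 224, 224))  # 9 rendered viewpoints
```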

    Irish Machine Vision and Image Processing Conference Proceedings 2017


    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those requiring low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optical flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
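
    The event tuple described above (time, location, sign of the brightness change) is easy to sketch concretely. The snippet below (NumPy) builds a synthetic stream of such events and accumulates a time window into a frame-like count image; the sensor resolution and array layout are assumptions for illustration, not any specific camera's API.

```python
# Illustrative sketch: a (t, x, y, polarity) event stream accumulated into a
# simple event-count frame. Resolution and field names are assumptions.
import numpy as np

H, W = 180, 240                       # e.g. a DAVIS240-class sensor
rng = np.random.default_rng(0)
n = 10_000
events = np.zeros(n, dtype=[("t", "f8"), ("x", "i4"), ("y", "i4"), ("p", "i1")])
events["t"] = np.sort(rng.uniform(0.0, 0.05, n))   # asynchronous timestamps (s)
events["x"] = rng.integers(0, W, n)
events["y"] = rng.integers(0, H, n)
events["p"] = rng.choice([-1, 1], n)               # sign of the brightness change

def accumulate(ev, t0, t1):
    """Sum event polarities per pixel over a time window -> 2D frame."""
    sel = ev[(ev["t"] >= t0) & (ev["t"] < t1)]
    frame = np.zeros((H, W), dtype=np.int32)
    np.add.at(frame, (sel["y"], sel["x"]), sel["p"])
    return frame

frame = accumulate(events, 0.0, 0.01)  # 10 ms window of events
```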

    Light Field Saliency Detection with Deep Convolutional Networks

    Light field imaging presents an attractive alternative to RGB imaging because it records the direction of the incoming light. The detection of salient regions in a light field image benefits from this additional modeling of angular patterns. For RGB imaging, methods using CNNs have achieved excellent results on a range of tasks, including saliency detection. However, it is not trivial to use CNN-based methods for saliency detection on light field images because these methods are not specifically designed for processing light field inputs. In addition, current light field datasets are not sufficiently large to train CNNs. To overcome these issues, we present a new Lytro Illum dataset, which contains 640 light fields and their corresponding ground-truth saliency maps. Compared to current light field saliency datasets [1], [2], our new dataset is larger, of higher quality, contains more variation, and includes more types of light field inputs. This makes our dataset suitable for training deeper networks and for benchmarking. Furthermore, we propose a novel end-to-end CNN-based framework for light field saliency detection. Specifically, we propose three novel MAC (Model Angular Changes) blocks to process light field micro-lens images. We systematically study the impact of different architecture variants and compare light field saliency with regular 2D saliency. Our extensive comparisons indicate that our novel network significantly outperforms state-of-the-art methods on the proposed dataset and exhibits the desired generalization ability on other existing datasets.
    Comment: 14 pages, 14 figures
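
    To make the micro-lens processing concrete, the sketch below (PyTorch) shows one plausible reading of a block that models angular changes: a convolution whose kernel and stride equal the angular resolution, so each micro-lens patch collapses to a single spatial response. The channel counts and resolutions are assumptions, not the paper's exact MAC design.

```python
# Hedged sketch of the idea behind a "Model Angular Changes" block: a conv
# matched to the micro-lens (angular) resolution, reducing each angular
# patch to one spatial response. Shapes are illustrative assumptions.
import torch
import torch.nn as nn

ang = 9                                    # 9x9 angular samples per micro-lens
mac = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=ang, stride=ang)

# A micro-lens image of 32x32 lenslets, each an ang x ang angular patch:
micro_lens_image = torch.randn(1, 3, 32 * ang, 32 * ang)
features = mac(micro_lens_image)           # -> (1, 64, 32, 32) spatial map
```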

    Colour constancy beyond the classical receptive field

    The problem of removing illuminant variations to preserve the colours of objects (colour constancy) has already been solved by the human brain using mechanisms that rely largely on centre-surround computations of local contrast. In this paper we adopt some of these biological solutions, described by long-known physiological findings, into a simple, fully automatic functional model (termed Adaptive Surround Modulation, or ASM). In ASM, the size of a visual neuron's receptive field (RF), as well as its relationship with the surround, varies according to the local contrast within the stimulus, which in turn determines the nature of the centre-surround normalisation of cortical neurons higher up in the processing chain. We modelled colour constancy by means of two overlapping asymmetric Gaussian kernels whose sizes adapt to the contrast of the surround pixels, resembling the change of RF size. We simulated contrast-dependent surround modulation by weighting the contribution of each Gaussian according to the centre-surround contrast. Finally, we obtained an estimation of the illuminant from the outputs of the most activated RFs. Our results on three single-illuminant and one multi-illuminant benchmark datasets show that ASM is highly competitive against the state-of-the-art, and in one case it even outperforms learning-based algorithms. Moreover, the robustness of our model is all the more evident considering that our results were obtained using the same parameters for all datasets, that is, mimicking how the human visual system operates. These results suggest that dynamic adaptation mechanisms contribute to achieving higher accuracy in computational colour constancy.
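
    A minimal sketch of the centre-surround idea, assuming NumPy/SciPy: two Gaussians of different scale blended by a local-contrast weight, with the illuminant estimated from the most activated responses per channel. The kernel scales, the contrast measure, and the top-percentile threshold are illustrative assumptions, not the paper's tuned parameters (which use asymmetric kernels).

```python
# Rough sketch of contrast-adaptive centre-surround illuminant estimation.
# Symmetric Gaussians stand in for the paper's asymmetric kernels; all
# parameters below are assumptions for illustration.
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_illuminant(img, sigma_c=1.0, sigma_s=5.0, top=0.01):
    """img: float RGB array in [0, 1], shape (H, W, 3)."""
    illum = np.zeros(3)
    for ch in range(3):
        x = img[..., ch]
        centre = gaussian_filter(x, sigma_c)     # small "RF centre" kernel
        surround = gaussian_filter(x, sigma_s)   # larger "surround" kernel
        contrast = np.abs(centre - surround)     # proxy for local contrast
        w = contrast / (contrast.max() + 1e-8)   # contrast-dependent weighting
        response = w * centre + (1.0 - w) * surround
        k = max(1, int(top * response.size))     # most activated RF outputs
        idx = np.argpartition(response.ravel(), -k)[-k:]
        illum[ch] = x.ravel()[idx].mean()
    return illum / (np.linalg.norm(illum) + 1e-8)

ill = estimate_illuminant(np.random.rand(120, 160, 3))
```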

    Roadmap on digital holography [Invited]

    This Roadmap article provides an overview of the vast array of research activities in the field of digital holography. The paper consists of a series of 25 sections from prominent experts in digital holography, presenting various aspects of the field: sensing, 3D imaging and displays, virtual and augmented reality, microscopy, cell identification, tomography, label-free live cell imaging, and other applications. Each section represents its author's vision of the significant progress, potential impact, important developments, and challenging issues in the field of digital holography.