83 research outputs found

    Minimizing hallucination in Histogram of Oriented Gradients

    Histogram of Oriented Gradients (HOG) is one of the most extensively used image descriptors in computer vision. It has been applied successfully to various vision tasks such as localization, classification, and recognition. As it mainly captures gradient strengths in an image, it is sensitive to local variations in illumination and contrast. As a result, normalization of this descriptor turns out to be essential for good performance [3, 4]. Although different normalization schemes have been investigated, all of them usually employ the L1 or L2 norm. In this paper we show that an incautious application of such norms to the HOG descriptor can produce a hallucination effect. To overcome this issue, we propose a new normalization scheme that effectively minimizes hallucinations. This scheme is built upon a detailed analysis of the gradient distribution, which results in adding an extra bin with a specific value that increases HOG distinctiveness. We validated our approach on person re-identification and action recognition, demonstrating a significant boost in performance.
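    The abstract leaves the exact bin value to the paper's analysis, but the mechanism can be sketched. Below is a minimal Python/NumPy illustration, assuming the extra bin is appended before L2 normalization; the constant extra_bin is a hypothetical stand-in for the paper's distribution-derived value, not a published number.

        import numpy as np

        def normalize_hog_block(block_hist, extra_bin=0.1, eps=1e-6):
            """L2-normalize a HOG block histogram with an appended extra bin.

            extra_bin is illustrative only: a fixed mass appended before
            normalization so that near-zero blocks (e.g. flat image regions)
            are not inflated to full magnitude, which is the "hallucination"
            effect that plain L2 normalization can produce.
            """
            h = np.append(block_hist, extra_bin)
            return h / np.sqrt(np.sum(h ** 2) + eps ** 2)

    With a plain L2 norm, a block of tiny, noise-level gradients is rescaled to unit length and becomes indistinguishable from a block with strong structure; with the extra bin, the constant mass dominates the norm in that case and keeps the remaining entries small.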

    Improving Surveillance-Camera-Based Human Monitoring through Image Resolution Enhancement and Face Recognition

    Given the importance of security in society, monitoring activity and recognizing specific people through surveillance video cameras plays an important role. One of the main issues in this setting arises from the fact that cameras do not meet the resolution requirements of many face recognition algorithms. To address this issue, in this work we propose a new system that super-resolves the image. First, we use sparse representation with a dictionary built from many natural and facial images to super-resolve images. Second, we use a deep convolutional network. Image super-resolution is followed by face recognition based on Hidden Markov Models and Singular Value Decomposition. The proposed system has been tested on several well-known face databases, such as the FERET, HeadPose, and Essex University databases, as well as our recently introduced iCV Face Recognition database (iCV-F). The experimental results show that the recognition rate increases considerably after applying super-resolution with the facial and natural image dictionary. In addition, we propose a system for analysing people's movement in surveillance video. People, including faces, are detected using Histogram of Oriented Gradients features and the Viola-Jones algorithm. A multi-target tracker based on discrete-continuous energy minimization is then used to track people. The tracking data is in turn used to obtain information about visited and passed locations, together with face recognition results for the tracked people.
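    Both detectors named in the abstract are available in OpenCV, so the detection step can be sketched in a few lines; the file name is hypothetical and the parameters are common defaults rather than values from the paper.

        import cv2

        frame = cv2.imread("frame.png")  # hypothetical input frame
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Pedestrian detection with HOG features and OpenCV's default
        # pre-trained people detector (a linear SVM).
        hog = cv2.HOGDescriptor()
        hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
        people, weights = hog.detectMultiScale(gray, winStride=(8, 8))

        # Face detection with the Viola-Jones cascade shipped with OpenCV.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    The detected boxes would then be handed to the multi-target tracker and the face recognition stage described in the abstract.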

    Image Restoration by Matching Gradient Distributions

    The restoration of a blurry or noisy image is commonly performed with a MAP estimator, which maximizes a posterior probability to reconstruct a clean image from a degraded one. When used with a sparse gradient image prior, a MAP estimator reconstructs piecewise smooth images and typically removes textures that are important for visual realism. We present an alternative deconvolution method, called iterative distribution reweighting (IDR), which imposes a global constraint on gradients so that a reconstructed image has a gradient distribution similar to a reference distribution. In natural images, the reference distribution not only varies from one image to another but also within an image, depending on texture. We therefore estimate a reference distribution directly from the input image for each texture segment. Our algorithm is able to restore rich mid-frequency textures. A large-scale user study supports the conclusion that our algorithm improves the visual realism of reconstructed images compared to those of MAP estimators.
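    The global constraint can be made concrete with a small sketch: compare the gradient-magnitude histogram of a candidate reconstruction against a reference histogram. This Python/NumPy snippet illustrates the distribution-matching idea, not the authors' IDR algorithm; the bin count, the value range, and the use of KL divergence are assumptions.

        import numpy as np

        def grad_hist(img, bins=64, vmax=1.0):
            # Gradient magnitudes from horizontal/vertical first differences;
            # assumes pixel intensities scaled to [0, 1].
            gx = np.diff(img, axis=1)
            gy = np.diff(img, axis=0)
            mags = np.concatenate([np.abs(gx).ravel(), np.abs(gy).ravel()])
            hist, _ = np.histogram(mags, bins=bins, range=(0.0, vmax))
            return hist / hist.sum()

        def distribution_penalty(img, ref_hist, eps=1e-8):
            """KL divergence between the reconstruction's gradient histogram
            and a reference histogram; it grows as the restored image's
            gradient statistics drift from the reference, e.g. toward the
            overly smooth output a sparse-gradient MAP estimator produces."""
            p = grad_hist(img, bins=len(ref_hist)) + eps
            q = np.asarray(ref_hist, dtype=float) + eps
            return float(np.sum(p * np.log(p / q)))

    In IDR, a constraint of this kind is applied per texture segment, with the reference histogram estimated from the input image itself.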

    Learning with Privileged Information using Multimodal Data

    Computer vision is the science of teaching machines to see and understand digital images or videos. During the last decade, computer vision has seen tremendous progress on perception tasks such as object detection, semantic segmentation, and video action recognition, which has led to the development and improvement of important industrial applications such as self-driving cars and medical image analysis. These advances are mainly due to the fast computation offered by GPUs, the development of high-capacity models such as deep neural networks, and the availability of large datasets, often composed of a variety of modalities. In this thesis, we explore how multimodal data can be used to train deep convolutional neural networks. Humans perceive the world through multiple senses and reason over the multimodal space of stimuli to act in and understand the environment. One way to improve the perception capabilities of deep learning methods is to use different modalities as input, as they offer different and complementary information about the scene. Recent multimodal datasets for computer vision tasks include modalities such as depth maps, infrared, and skeleton coordinates, among others, besides the traditional RGB. This thesis investigates deep learning systems that learn from multiple visual modalities. In particular, we are interested in a very practical scenario in which an input modality is missing at test time. The question we address is the following: how can we take advantage of multimodal datasets for training our model, knowing that, at test time, a modality might be missing or too noisy? Having access to more information at training time than at test time is referred to as learning using privileged information. In this work, we develop methods to address this challenge, with a special focus on the tasks of action and object recognition, and on the modalities of depth, optical flow, and RGB, which we use for inference at test time. This thesis advances multimodal learning in three ways. First, we develop a deep learning method for video classification that is trained on RGB and depth data and is able to hallucinate depth features and predictions at test time. Second, we build on this method and propose a more generic mechanism, based on adversarial learning, that learns to mimic the predictions produced by the depth modality and can automatically switch from true depth features to generated depth features when the sensor is noisy. Third, we develop a method that trains a single network on RGB data, enriched with additional supervision from other modalities such as depth and optical flow at training time, and that outperforms an ensemble of networks trained independently on these modalities.
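    The hallucination idea in the first contribution can be sketched as a training-time loss in which a frozen depth network acts as teacher for an RGB "hallucination" stream. This PyTorch sketch is an illustration under assumed names; the feature pairing, the temperature T, and the equal weighting of the two terms are assumptions, not the thesis's exact formulation.

        import torch.nn.functional as F

        def hallucination_loss(rgb_feats, depth_feats,
                               rgb_logits, depth_logits, T=2.0):
            # Match mid-level features of the RGB stream to the (frozen)
            # depth stream, so depth-like features can be produced from
            # RGB alone at test time.
            feat_loss = F.mse_loss(rgb_feats, depth_feats.detach())
            # Distillation-style term: mimic the depth network's softened
            # predictions.
            pred_loss = F.kl_div(
                F.log_softmax(rgb_logits / T, dim=1),
                F.softmax(depth_logits.detach() / T, dim=1),
                reduction="batchmean",
            ) * (T * T)
            return feat_loss + pred_loss

    At test time only the RGB stream runs, so a missing or broken depth sensor costs nothing at inference.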

    Deep Learning Based Face Detection and Recognition in MWIR and Visible Bands

    Under conditions unfavorable to visible imaging, such as extreme illumination or nighttime, there is a need to collect images in other spectra, specifically the infrared. Mid-wave infrared (3–5 μm) images can be collected without giving away the location of the sensor under varying illumination conditions. Many algorithms for face detection, face alignment, face recognition, and related tasks have been proposed for the visible band to date, while research using MWIR images is highly limited. Face detection is an important pre-processing step for face recognition, which in turn is an important biometric modality. This thesis works towards bridging the gap between the MWIR and visible spectra through three contributions. First, a dual-band deep face detection model that works well in the visible and MWIR spectra is proposed using transfer learning. Different models are trained and tested extensively on visible and MWIR images, and the model that works best for this data is determined. For this model, experiments are conducted to characterize the speed/accuracy trade-off. Following this, the available MWIR dataset is extended through augmentation using traditional methods and generative adversarial networks (GANs). The traditional methods used to augment the data are brightness adjustment, contrast enhancement, and adding noise to and de-noising the images. A deep learning based GAN architecture is developed and used to generate new face identities. The generated images are added to the original dataset, and the face detection model developed earlier is once again trained and tested. The third contribution is another GAN that converts given thermal face images into their visible counterparts. A pre-trained model is used as the discriminator and trained to classify images as real or fake, while an identity network provides further feedback to the generator. The generated visible images are used as probe images, and the original visible images as gallery images, to perform face recognition experiments with a state-of-the-art visible-to-visible face recognition algorithm.
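    The generator objective of the third contribution can be sketched as three terms: an adversarial term from the discriminator, an identity term from the identity network, and (assuming paired thermal/visible data) a pixel term. This PyTorch sketch uses assumed names and weights; it illustrates the loss structure, not the thesis's exact configuration.

        import torch
        import torch.nn.functional as F

        def generator_loss(disc_fake, id_fake, id_real, fake_vis, real_vis,
                           lambda_id=1.0, lambda_pix=10.0):
            # Adversarial term: push the discriminator to score the
            # generated visible image as real.
            adv = F.binary_cross_entropy_with_logits(
                disc_fake, torch.ones_like(disc_fake))
            # Identity feedback: the generated face should keep the same
            # identity embedding as the subject's real visible image.
            ident = F.mse_loss(id_fake, id_real.detach())
            # Pixel term: stay close to the paired visible ground truth.
            pix = F.l1_loss(fake_vis, real_vis)
            return adv + lambda_id * ident + lambda_pix * pix

    The generated visible images can then serve as probe images against a visible gallery, as in the recognition experiments described above.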

    Data-Driven Facial Image Synthesis from Poor Quality Low Resolution Image

    Ph.D. (Doctor of Philosophy)