
    Advanced Techniques for Face Recognition under Challenging Environments

    Automatically recognizing faces captured in uncontrolled environments has been a challenging topic for decades. In this work, we investigate cohort score normalization, which has been widely used in biometric verification, as a means to improve the robustness of face recognition under challenging environments. In particular, we introduce cohort score normalization into the undersampled face recognition problem. Further, we develop an effective cohort normalization method specifically for the unconstrained face pair matching problem. Extensive experiments conducted on several well-known face databases demonstrate the effectiveness of cohort normalization in these challenging scenarios. In addition, to give a proper understanding of cohort behavior, we study the impact of the number and quality of cohort samples on normalization performance. The experimental results show that a larger cohort set gives more stable, and often better, results up to a point at which performance saturates, and that cohort samples of different quality indeed produce different cohort normalization performance.

    Recognizing faces that have undergone alterations is another challenging problem for current face recognition algorithms. Face image alterations can be roughly classified into two categories: unintentional (e.g., geometric transformations introduced by the acquisition device) and intentional (e.g., plastic surgery). We study the impact of these alterations on face recognition accuracy. Our results show that state-of-the-art algorithms are able to overcome limited digital alterations but are sensitive to more substantial modifications. Further, we develop two useful descriptors for detecting those alterations which can significantly affect recognition performance. Finally, we propose to use the Structural Similarity (SSIM) quality map to detect and model variations due to plastic surgery. Extensive experiments conducted on a plastic surgery face database demonstrate the potential of the SSIM map for matching face images after surgery.
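
    For readers unfamiliar with the technique, the following is a minimal sketch of cohort score normalization in its standard T-norm form, not the specific method developed in this thesis; the match score and the randomly generated cohort scores are invented for illustration.

        import numpy as np

        def t_norm(raw_score, cohort_scores):
            # Centre and scale the raw match score by the statistics of the
            # scores the same probe obtains against a fixed cohort of
            # non-matching samples, making scores from different probes
            # directly comparable.
            cohort_scores = np.asarray(cohort_scores, dtype=float)
            mu = cohort_scores.mean()
            sigma = cohort_scores.std()
            return (raw_score - mu) / (sigma + 1e-12)  # guard against zero spread

        # Invented example: one probe-gallery score against 50 cohort scores
        rng = np.random.default_rng(0)
        cohort = rng.normal(loc=0.4, scale=0.1, size=50)
        print(t_norm(0.72, cohort))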

    Free-hand Sketch Understanding and Analysis

    With the proliferation of touch screens, sketching input has become popular in many software products. This phenomenon has stimulated a new wave of free-hand sketch research, covering topics such as sketch recognition, sketch-based image retrieval, sketch synthesis and sketch segmentation. Compared to earlier sketch work, the newly proposed works generally employ more complex sketches in much larger quantities, thanks to advances in hardware. This thesis demonstrates several new works on free-hand sketches, presenting novel ideas on the aforementioned topics.

    On sketch recognition, Eitz et al. [32] were the first explorers, proposing the large-scale TU-Berlin sketch dataset [32] that made sketch recognition possible. Following their work, we continue to analyze the dataset and find that visual cue sparsity and internal structural complexity are the two biggest challenges for sketch recognition. Accordingly, we propose multiple kernel learning [45] to fuse multiple visual cues and a star graph representation [12] to encode the structure of the sketches; a fusion sketch follows this abstract. With the new schemes, we achieve a significant improvement in recognition accuracy (from 56% to 65.81%). An experimental study of sketch attributes is performed to further boost sketch recognition performance and to enable novel retrieval-by-attribute applications.

    For sketch-based image retrieval, we start by carefully examining existing work. After surveying the field, we argue that studying the sketch's ability to distinguish intra-category object variations is the most promising direction, and we define this as the fine-grained sketch-based image retrieval problem. A deformable part-based model, which captures object part details and object deformations, is introduced to tackle this new problem, and graph matching is employed to compute the similarity between deformable part-based models by matching the parts of different models. To evaluate this new problem, we combine the TU-Berlin sketch dataset and the PASCAL VOC photo dataset [36] to form a new, challenging cross-domain dataset with pairwise sketch-photo similarity ratings, and our proposed method shows promising results on this new dataset.

    Regarding sketch synthesis, we focus on generating sketches with a genuine free-hand style for general categories, as the closest previous work [8] only demonstrated efficacy on a single category: human faces. The difficulties that prevent sketch synthesis from reaching other categories include cluttered edges and diverse object variations due to deformation. To address these difficulties, we propose a deformable stroke model that casts sketch synthesis as a detection process, directly targeting the cluttered background and the object variations. To ease the training of such a model, a perceptual grouping algorithm is further proposed that exploits the relationship between stroke length and stroke semantics, stroke temporal order, and Gestalt principles [58] to perform part-level sketch segmentation. The perceptual grouping automatically provides semantic part-level supervision for training the deformable stroke model, and an iterative learning scheme is introduced to gradually refine the supervision and the model. With the learned deformable stroke models, sketches with a distinct free-hand style can be generated for many categories.
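
    As a rough illustration of fusing visual cues via multiple kernel learning, the sketch below combines two precomputed kernel matrices with hand-fixed weights and feeds the result to scikit-learn's SVC. A real MKL solver would learn the weights jointly with the classifier; the descriptors, weights and labels here are invented for illustration.

        import numpy as np
        from sklearn.svm import SVC

        def combine_kernels(kernels, weights):
            # Fuse several precomputed kernel matrices as a convex combination,
            # the core operation behind multiple kernel learning (weights are
            # fixed by hand here rather than learned).
            weights = np.asarray(weights, dtype=float)
            weights = weights / weights.sum()
            return sum(w * K for w, K in zip(weights, kernels))

        # Toy data: two hypothetical visual cues for 6 sketches, 2 classes
        rng = np.random.default_rng(1)
        cue_a = rng.normal(size=(6, 8))   # e.g. a dense gradient descriptor
        cue_b = rng.normal(size=(6, 5))   # e.g. a structural/graph descriptor
        labels = np.array([0, 0, 0, 1, 1, 1])

        K = combine_kernels([cue_a @ cue_a.T, cue_b @ cue_b.T], weights=[0.6, 0.4])
        clf = SVC(kernel="precomputed").fit(K, labels)
        print(clf.predict(K))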

    Connected Attribute Filtering Based on Contour Smoothness


    Exploiting Spatio-Temporal Coherence for Video Object Detection in Robotics

    This paper proposes a method to enhance video object detection for indoor environments in robotics. Concretely, it exploits knowledge of the camera motion between frames to propagate previously detected objects to successive frames. The proposal is rooted in the concepts of planar homography, used to propose regions of interest in which to find objects, and recursive Bayesian filtering, used to integrate observations over time. The proposal is evaluated on six virtual indoor environments, covering the detection of nine object classes over a total of ∼7k frames. Results show that our proposal improves recall and F1-score by factors of 1.41 and 1.27, respectively, and achieves a significant reduction of the object categorization entropy (58.8%) compared to a two-stage video object detection method used as a baseline, at the cost of a small time overhead (120 ms) and a small loss of precision (0.92).
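
    The following sketch illustrates the two ingredients named above: propagating a detection box through an inter-frame planar homography (via OpenCV's cv2.perspectiveTransform) and a discrete recursive Bayesian update of per-class beliefs. It is not the paper's pipeline; the homography matrix, box and probabilities are invented for illustration.

        import numpy as np
        import cv2

        def propagate_box(box, H):
            # Warp a detection box from frame t to frame t+1 with the 3x3
            # inter-frame homography H, then take the axis-aligned bounding
            # box of the warped corners as the propagated region of interest.
            x1, y1, x2, y2 = box
            corners = np.float32([[x1, y1], [x2, y1],
                                  [x2, y2], [x1, y2]]).reshape(-1, 1, 2)
            warped = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
            return (*warped.min(axis=0), *warped.max(axis=0))

        def bayes_update(prior, likelihood):
            # Recursive Bayesian fusion of the per-class belief with a new
            # detector observation (both discrete distributions over classes).
            posterior = prior * likelihood
            return posterior / posterior.sum()

        # Invented numbers: near-identity homography, 3-class belief
        H = np.array([[1.0, 0.0, 4.0], [0.0, 1.0, -2.0], [0.0, 0.0, 1.0]])
        print(propagate_box((10, 10, 50, 40), H))
        print(bayes_update(np.array([0.5, 0.3, 0.2]), np.array([0.7, 0.2, 0.1])))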

    Image Quality Evaluation in Lossy Compressed Images

    This research focuses on the quantification of image quality in lossy compressed images, exploring the impact of digital artefacts and scene characteristics on image quality evaluation. A subjective paired-comparison test was implemented to assess the perceived quality of JPEG 2000 against baseline JPEG over a range of scene types. Interval scales were generated for both algorithms, which indicated a subjective preference for JPEG 2000, particularly at low bit rates, and these were confirmed by an objective distortion measure. The subjective results did not follow this trend for some scenes, however, and both algorithms were found to be scene dependent as a result of the artefacts produced at high compression rates. The scene dependencies were explored from the interval scale results, which allowed scenes to be grouped according to their susceptibility to each of the algorithms. The groupings were correlated with scene measures applied in a linked study.

    A pilot study was undertaken to explore perceptibility thresholds of JPEG 2000 on the same set of images. This work was developed into a further experiment to investigate the thresholds of perceptibility and acceptability of higher-resolution JPEG 2000 compressed images. A set of images was captured using a professional-level full-frame digital single-lens reflex camera, with a raw workflow and a carefully controlled image-processing pipeline. The scenes were quantified using a set of simple scene metrics to classify them as average, higher than average, or lower than average for a number of scene properties known to affect image compression and perceived image quality; these were used to make a final selection of test images. Image fidelity was investigated using the method of constant stimuli to quantify perceptibility thresholds and just noticeable differences (JNDs) of perceptibility. Thresholds and JNDs of acceptability were also quantified to explore suprathreshold quality evaluation. The relationships between the two thresholds were examined and correlated with the results from the scene measures to identify more and less susceptible scenes. It was found that the level of, and difference between, the two thresholds was an indicator of scene dependency and could be predicted by certain types of scene characteristics.

    A third study implemented the soft-copy quality ruler as an alternative psychophysical method, matching the quality of compressed images to a set of images varying in a single attribute, separated by known JND increments of quality. The imaging chain and image-processing workflow were evaluated using objective measures of tone reproduction and spatial frequency response. An alternative approach to the creation of ruler images was implemented and tested, and the resulting quality rulers were used to evaluate a subset of the images from the previous study. The quality ruler was found to be successful in identifying scene susceptibilities and observer sensitivity.

    The fourth investigation explored the implementation of four image quality metrics: the Modular Image Difference Metric, the Structural Similarity Metric, the Multi-scale Structural Similarity Metric and the Weighted Structural Similarity Metric. The metrics were tested against the subjective results, and all were found to correlate linearly with perceived image quality.
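
    As an illustration of how the method of constant stimuli yields perceptibility thresholds and JNDs, the sketch below fits a cumulative-normal psychometric function to invented detection proportions and reads off the 50% threshold and the 50%-to-75% JND. This mirrors the general procedure only, not the study's exact analysis; the compression levels and proportions are made up.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        def psychometric(x, mu, sigma):
            # Probability that distortion at compression level x is visible
            return norm.cdf(x, loc=mu, scale=sigma)

        # Invented data: compression level vs. proportion reporting a difference
        levels = np.array([1, 2, 3, 4, 5, 6], dtype=float)
        p_seen = np.array([0.05, 0.15, 0.40, 0.70, 0.90, 0.98])

        (mu, sigma), _ = curve_fit(psychometric, levels, p_seen, p0=[3.5, 1.0])
        threshold = mu                          # 50% perceptibility threshold
        jnd = norm.ppf(0.75, mu, sigma) - mu    # one JND above threshold
        print("threshold ~ %.2f, 1 JND ~ %.2f levels" % (threshold, jnd))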

    Real-Time Assistance in Photography Using Social Media

    Ph.D. thesis

    Robust density modelling using the student's t-distribution for human action recognition

    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as the base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly affect recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM that uses mixtures of t-distributions as observation probabilities and show, through experiments on two well-known datasets (Weizmann, MuHAVi), that it achieves a remarkable improvement in classification accuracy. © 2011 IEEE
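
    The robustness argument can be seen in a few lines: fitting both distributions to data contaminated with gross outliers shows the Gaussian estimates being pulled away, while the Student's t absorbs the outliers in its heavy tails. This is a toy demonstration with invented data, not the paper's HMM.

        import numpy as np
        from scipy import stats

        # Clean feature samples plus a few gross outliers, mimicking a noisy
        # human-feature extractor (numbers invented for illustration)
        rng = np.random.default_rng(2)
        data = np.concatenate([rng.normal(0.0, 1.0, 200), [15.0, -12.0, 18.0]])

        mu_g, sd_g = stats.norm.fit(data)    # Gaussian: pulled by outliers
        df, mu_t, sc_t = stats.t.fit(data)   # Student's t: heavy tails absorb them

        print("Gaussian  mean=%.2f std=%.2f" % (mu_g, sd_g))
        print("Student-t mean=%.2f scale=%.2f df=%.1f" % (mu_t, sc_t, df))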