
    Reference face graph for face recognition

    Face recognition has been studied extensively; however, real-world face recognition remains a challenging task. The demand for unconstrained, practical face recognition is rising with the explosion of online multimedia, such as social networks and video surveillance footage, where face analysis is of significant importance. In this paper, we approach face recognition in the context of graph theory. We recognize an unknown face using an external reference face graph (RFG). An RFG is generated, and recognition of a given face is achieved by comparing it to the faces in the constructed RFG. Centrality measures are utilized to identify distinctive faces in the reference face graph. The proposed RFG-based face recognition algorithm is robust to changes in pose and is also alignment-free. The RFG recognition is used in conjunction with DCT locality-sensitive hashing for efficient retrieval to ensure scalability. Experiments are conducted on several publicly available databases, and the results show that the proposed approach outperforms state-of-the-art methods without any preprocessing requirements such as face alignment. Due to the richness of the reference set construction, the proposed method can also handle illumination and expression variations.
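
    A minimal sketch of the kind of pipeline the abstract describes, assuming faces are already encoded as fixed-length feature vectors, that edges of the reference face graph link sufficiently similar reference faces, and that degree centrality stands in for whichever centrality measure the paper actually uses. The encoding, similarity threshold, and scoring rule below are illustrative assumptions, not the authors' exact method.

```python
# Illustrative sketch of a reference face graph (RFG), not the authors' exact method.
# Assumptions: faces are already fixed-length feature vectors, edges connect
# sufficiently similar reference faces, and degree centrality stands in for
# whichever centrality measure the paper uses.
import numpy as np
import networkx as nx

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def build_rfg(ref_feats, sim_threshold=0.5):
    """Build a graph whose nodes are reference faces and whose weighted edges
    connect reference faces with similarity above a threshold."""
    g = nx.Graph()
    g.add_nodes_from(range(len(ref_feats)))
    for i in range(len(ref_feats)):
        for j in range(i + 1, len(ref_feats)):
            s = cosine(ref_feats[i], ref_feats[j])
            if s > sim_threshold:
                g.add_edge(i, j, weight=s)
    return g

def recognize(probe_feat, ref_feats, ref_labels, rfg):
    """Score each reference face by its similarity to the probe, weighted by how
    distinctive (central) that reference face is within the RFG."""
    centrality = nx.degree_centrality(rfg)
    scores = [cosine(probe_feat, f) * (1.0 + centrality[i])
              for i, f in enumerate(ref_feats)]
    return ref_labels[int(np.argmax(scores))]

# Toy usage with random descriptors standing in for real face features.
rng = np.random.default_rng(0)
refs = [rng.normal(size=128) for _ in range(10)]
labels = [f"person_{i}" for i in range(10)]
graph = build_rfg(refs)
print(recognize(refs[3] + 0.05 * rng.normal(size=128), refs, labels, graph))
```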

    Using the discrete Hadamard transform to detect moving objects in surveillance video

    In this paper we present an approach to object detection in surveillance video based on detecting moving edges using the Hadamard transform. The proposed method is characterized by robustness to illumination changes and ghosting effects and provides high-speed detection, making it particularly suitable for surveillance applications. In addition to presenting an approach to moving edge detection using the Hadamard transform, we introduce two measures to track edge history, the Pixel Bit Mask Difference (PBMD) and the History Update Value (HUV), which help reduce the false detections commonly experienced by approaches based on moving edges. Experimental results show that the proposed algorithm overcomes the traditional drawbacks of frame differencing and outperforms existing edge-based approaches in terms of both detection results and computational complexity.
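
    A rough sketch of block-wise moving-edge detection with the Hadamard transform, under the assumption of 8x8 blocks and a simple thresholded difference of Hadamard coefficients between consecutive frames; the paper's PBMD and HUV history measures are not reproduced here.

```python
# Rough sketch of block-wise moving-edge detection with the Hadamard transform.
# Assumptions: 8x8 blocks, grayscale frames, and a simple thresholded difference of
# Hadamard coefficients; the paper's PBMD and HUV history measures are not shown.
import numpy as np
from scipy.linalg import hadamard

H8 = hadamard(8).astype(np.float64)

def block_hadamard(block):
    """2D Hadamard transform of an 8x8 block (separable: H * B * H^T)."""
    return H8 @ block @ H8.T

def moving_edge_mask(prev, curr, thresh=200.0):
    """Mark 8x8 blocks whose Hadamard coefficients changed between frames."""
    h, w = curr.shape
    mask = np.zeros((h // 8, w // 8), dtype=bool)
    for by in range(h // 8):
        for bx in range(w // 8):
            sl = np.s_[by * 8:(by + 1) * 8, bx * 8:(bx + 1) * 8]
            d = block_hadamard(curr[sl].astype(np.float64)) - \
                block_hadamard(prev[sl].astype(np.float64))
            d[0, 0] = 0.0  # drop the DC term so global illumination shifts are ignored
            mask[by, bx] = np.abs(d).sum() > thresh
    return mask

# Toy usage: a bright square moves a few pixels between two synthetic frames.
f0 = np.zeros((64, 64)); f0[16:32, 16:32] = 255
f1 = np.zeros((64, 64)); f1[20:36, 20:36] = 255
print(moving_edge_mask(f0, f1).astype(int))
```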

    Siamese Instance Search for Tracking

    In this paper we present a tracker which is radically different from state-of-the-art trackers: we apply no model updating, no occlusion detection, no combination of trackers, no geometric matching, and still deliver state-of-the-art tracking performance, as demonstrated on the popular online tracking benchmark (OTB) and six very challenging YouTube videos. The presented tracker simply matches the initial patch of the target in the first frame with candidates in a new frame and returns the most similar patch by a learned matching function. The strength of the matching function comes from being extensively trained generically, i.e., without any data of the target, using a Siamese deep neural network, which we design for tracking. Once learned, the matching function is used as is, without any adapting, to track previously unseen targets. It turns out that the learned matching function is so powerful that a simple tracker built upon it, coined the Siamese INstance search Tracker (SINT), which only uses the original observation of the target from the first frame, suffices to reach state-of-the-art performance. Further, we show the proposed tracker even allows for target re-identification after the target was absent for a complete video shot. Comment: This paper is accepted to the IEEE Conference on Computer Vision and Pattern Recognition, 201
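
    A minimal sketch of the matching step the abstract describes: compare the target patch from the first frame with candidate patches in a new frame and keep the most similar one. A hand-rolled histogram embedding stands in for the learned Siamese network, so this only illustrates the tracking-by-matching structure, not SINT's trained matching function.

```python
# Minimal sketch of Siamese-style tracking by matching: compare the initial target
# patch with candidate patches in a new frame and keep the most similar one.
# A simple intensity-histogram embedding is a stand-in for the learned network.
import numpy as np

def embed(patch, bins=32):
    """Hypothetical embedding: a normalised grayscale histogram of the patch."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    hist = hist.astype(np.float64)
    return hist / (np.linalg.norm(hist) + 1e-12)

def best_match(initial_patch, candidate_patches):
    """Return the index of the candidate whose embedding is closest (cosine) to the
    embedding of the target patch from the first frame."""
    target = embed(initial_patch)
    scores = [float(np.dot(target, embed(c))) for c in candidate_patches]
    return int(np.argmax(scores)), scores

# Toy usage: the second candidate is a noisy copy of the target patch.
rng = np.random.default_rng(1)
target = rng.integers(0, 256, size=(32, 32))
candidates = [rng.integers(0, 256, size=(32, 32)),
              np.clip(target + rng.integers(-5, 6, size=(32, 32)), 0, 255),
              rng.integers(0, 256, size=(32, 32))]
idx, _ = best_match(target, candidates)
print("most similar candidate:", idx)
```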

    Interaction between high-level and low-level image analysis for semantic video object extraction


    Real-time architecture for robust motion estimation under varying illumination conditions

    Motion estimation from image sequences is a complex problem that requires high computing resources and, in most existing approaches, is highly affected by changes in illumination conditions. In this contribution we present a high-performance system that addresses this limitation. Robustness to varying illumination conditions is achieved by a novel technique that combines a gradient-based optical flow method with a non-parametric image transformation based on the Rank transform. The paper describes this method and quantitatively evaluates its robustness to different illumination-change patterns. The technique has been successfully implemented in a real-time system using reconfigurable hardware. Our contribution presents the computing architecture, including its resource consumption and the achieved performance. The final system is a real-time device capable of computing motion on image sequences even under significant illumination changes. The robustness of the proposed system facilitates its use in multiple potential application fields. This work has been supported by the grants DEPROVI (DPI2004-07032), DRIVSCO (IST-016276-2) and TIC2007: "Plataforma Sw-Hw para sistemas de visión 3D en tiempo real".
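
    A short sketch of the Rank transform that the abstract combines with gradient-based optical flow: each pixel is replaced by the number of neighbours in a small window that are darker than it, which is invariant to monotonic illumination changes. The 3x3 window and border handling below are illustrative choices, not the paper's hardware implementation.

```python
# Short sketch of the Rank transform used to gain illumination robustness before
# gradient-based optical flow. Each pixel is replaced by the count of neighbours in
# a small window that are darker than it, which is invariant to monotonic brightness
# changes. The 3x3 window and untouched borders are illustrative choices.
import numpy as np

def rank_transform(img, radius=1):
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            window = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
            out[y, x] = np.sum(window < img[y, x])  # neighbours darker than the centre
    return out

# Toy usage: a global gain/offset brightness change leaves the Rank transform unchanged.
rng = np.random.default_rng(2)
frame = rng.integers(0, 256, size=(16, 16))
brighter = frame * 1.5 + 20
print(np.array_equal(rank_transform(frame), rank_transform(brighter)))
```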

    A study on local photometric models and their application to robust tracking

    Since modeling reflections in image processing is a difficult task, most computer vision algorithms assume that objects are Lambertian and that no lighting change occurs. Some photometric models can partly address this issue by assuming that the lighting changes are the same at each point of a small window of interest. Through a study based on specular reflection models, we make explicit the assumptions on which these models are implicitly based and the situations in which they could fail. This paper proposes two photometric models, which compensate for specular highlights and lighting variations. They assume that photometric changes vary smoothly over the window of interest. Contrary to classical models, the characteristics of the object surface and the lighting changes can vary in the observed area. First, we study the validity of these models with respect to the acquisition setup: the relative locations of the light source, the sensor and the object, as well as the roughness of the surface. Then, these models are used to improve feature point tracking by simultaneously estimating the photometric and geometric changes. The proposed methods are compared to well-known tracking methods robust to affine photometric changes. Experimental results on specular objects demonstrate the robustness of our approaches to specular highlights and lighting changes.
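
    For context, a compact sketch of the classical affine photometric model that the proposed local models are compared against: within a window of interest, the tracked patch is assumed to satisfy I2(x) = lambda * I1(x) + delta, with lambda and delta fitted by least squares before computing a compensated residual. The paper's own models additionally let the photometric change vary smoothly across the window, which is not shown here.

```python
# Compact sketch of the classical affine photometric model used as a baseline:
# within a window of interest, assume I2(x) = lam * I1(x) + delta, fit lam and delta
# by least squares, and compute a photometrically compensated residual. The paper's
# own models let the photometric change vary across the window, not shown here.
import numpy as np

def affine_photometric_residual(patch1, patch2):
    """Fit patch2 = lam * patch1 + delta and return (lam, delta, residual SSD)."""
    a = np.column_stack([patch1.ravel().astype(np.float64),
                         np.ones(patch1.size)])
    b = patch2.ravel().astype(np.float64)
    (lam, delta), *_ = np.linalg.lstsq(a, b, rcond=None)
    residual = b - (lam * a[:, 0] + delta)
    return lam, delta, float(np.sum(residual ** 2))

# Toy usage: the second patch is a gain/offset version of the first plus a little
# noise, so the compensated residual stays small even though the raw SSD is large.
rng = np.random.default_rng(3)
p1 = rng.integers(0, 256, size=(15, 15)).astype(np.float64)
p2 = 1.3 * p1 + 25 + rng.normal(scale=1.0, size=p1.shape)
print(affine_photometric_residual(p1, p2))
```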