
    Fusing the facial temporal information in videos for face recognition

    Face recognition is a challenging and active research topic in the modern world of visual technology. In most existing approaches, face recognition from still images is degraded by intra-personal variations such as pose, illumination and expression. This study proposes a novel approach for video-based face recognition, which exploits the large intra-personal variations available across video frames. A feature vector based on normalised semi-local binary patterns is extracted for the face region. Each frame is matched against the signatures of the faces in the database to form a ranked list. Each ranked list is clustered and its reliability is analysed for re-ranking. To characterise an individual in a video, the multiple re-ranked lists across the video frames are fused into a video signature. This signature embeds diverse intra-personal and temporal variations, which facilitates matching two videos with large variations. To match two videos, their video signatures are compared using the Kendall-Tau distance. The developed methods are evaluated on the YouTube and ChokePoint video datasets, where they show significant performance improvement over existing techniques.
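    The Kendall-Tau comparison mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each video signature is simply an ordered list of gallery identities (best match first), and counts the pairwise disagreements between two such lists.

    ```python
    from itertools import combinations

    def kendall_tau_distance(rank_a, rank_b):
        """Count pairwise disagreements between two rankings of the same items.

        rank_a, rank_b: lists of the same item identifiers, ordered best-first.
        Returns the number of item pairs whose relative order differs.
        """
        pos_a = {item: i for i, item in enumerate(rank_a)}
        pos_b = {item: i for i, item in enumerate(rank_b)}
        discordant = 0
        for x, y in combinations(rank_a, 2):
            # A pair is discordant if the two rankings disagree on its order.
            if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0:
                discordant += 1
        return discordant

    # Hypothetical video signatures over three gallery identities:
    sig_video1 = ["id3", "id1", "id2"]
    sig_video2 = ["id1", "id3", "id2"]
    print(kendall_tau_distance(sig_video1, sig_video2))  # → 1 (only id1/id3 swap)
    ```

    Identical rankings give distance 0 and fully reversed rankings give n(n-1)/2, so a small distance between two video signatures indicates the two videos rank the gallery identities similarly.
    
    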