3 research outputs found

    An Edge-Adapting Laplacian Kernel For Nonlinear Diffusion Filters

    In this paper, a new Laplacian kernel is first developed that incorporates anisotropic behavior to control the forward diffusion process in the horizontal and vertical directions. It is shown that, although the new kernel reduces edge distortion, it nonetheless produces artifacts in the processed image. After examining the source of this problem, an analytical scheme is devised to obtain a spatially varying kernel that adapts itself to the diffusivity function. The proposed spatially varying Laplacian kernel is then used in various nonlinear diffusion filters, from the classical Perona-Malik filter to more recent ones. The effectiveness of the new kernel in terms of quantitative and qualitative measures is demonstrated by applying it to noisy images.
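    To illustrate how a diffusivity-weighted (edge-adapting) stencil enters a nonlinear diffusion filter, the sketch below implements an explicit Perona-Malik-style update in Python/NumPy. The exponential diffusivity, the contrast parameter k, the step size, and the 4-neighbour stencil with wrap-around boundaries are illustrative assumptions for this example, not the specific kernel derived in the paper.

```python
import numpy as np

def diffusivity(grad_mag, k=10.0):
    # Perona-Malik exponential diffusivity: near 0 at strong edges, near 1 in flat regions
    return np.exp(-(grad_mag / k) ** 2)

def nonlinear_diffusion(img, n_iter=20, dt=0.2, k=10.0):
    u = img.astype(np.float64)
    for _ in range(n_iter):
        # one-sided differences toward each 4-neighbour (periodic boundaries via np.roll)
        dn = np.roll(u, 1, axis=0) - u   # north
        ds = np.roll(u, -1, axis=0) - u  # south
        dw = np.roll(u, 1, axis=1) - u   # west
        de = np.roll(u, -1, axis=1) - u  # east
        # diffusivity evaluated per direction -> spatially varying weights
        cn, cs = diffusivity(np.abs(dn), k), diffusivity(np.abs(ds), k)
        cw, ce = diffusivity(np.abs(dw), k), diffusivity(np.abs(de), k)
        # weighted Laplacian: edge-adapting because each term is scaled by g(|gradient|)
        u += dt * (cn * dn + cs * ds + cw * dw + ce * de)
    return u
```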

    Image classification using multiscale information fusion based on saliency driven nonlinear diffusion filtering

    In this paper, we propose saliency-driven multiscale nonlinear diffusion filtering for images. The resulting scale space in general preserves or even enhances semantically important structures such as edges, lines, or flow-like structures in the foreground, while inhibiting and smoothing clutter in the background. The image is classified using multiscale information fusion based on the original image, the image at the final scale at which the diffusion process converges, and the image at a mid-scale. Our algorithm emphasizes the foreground features, which are important for image classification. The background image regions, whether considered as context for the foreground or as noise, can be handled globally by fusing information from the different scales. Experimental tests of the effectiveness of the multiscale space for image classification are conducted on the following publicly available datasets: 1) the PASCAL 2005 dataset; 2) the Oxford 102 flowers dataset; and 3) the Oxford 17 flowers dataset, with high classification rates.
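    The abstract does not specify the per-scale descriptors or the classifier, so the sketch below only illustrates the fusion idea: features are extracted from the original image, a mid-scale image, and the converged final-scale image, concatenated, and passed to a classifier. The histogram descriptor and the linear SVM (via scikit-learn) are stand-in assumptions, not the method evaluated in the paper.

```python
import numpy as np
from sklearn.svm import SVC

def features(img, bins=32):
    # placeholder descriptor: a normalised intensity histogram for one scale
    h, _ = np.histogram(img, bins=bins, range=(0.0, 255.0), density=True)
    return h

def fused_descriptor(original, midscale, final):
    # multiscale information fusion: concatenate descriptors computed on the
    # original image, a mid-scale image, and the converged final-scale image
    return np.concatenate([features(original), features(midscale), features(final)])

def train_classifier(triples, labels):
    # triples: list of (original, mid-scale, final-scale) image arrays
    X = np.stack([fused_descriptor(*t) for t in triples])
    clf = SVC(kernel="linear")
    clf.fit(X, labels)
    return clf
```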

    Enhanced Deep Learning Architectures for Face Liveness Detection for Static and Video Sequences

    The major contribution of this research is the development of deep architectures for face liveness detection on static images as well as video sequences, which use a combination of texture analysis and a deep Convolutional Neural Network (CNN) to classify the captured image or video as real or fake. Face recognition is a popular and efficient form of biometric authentication used in many software applications. One drawback of this technique is that it is prone to face spoofing attacks, where an impostor can gain access to the system by presenting a photograph or recorded video of a valid user to the sensor. Thus, face liveness detection is a critical preprocessing step in face recognition authentication systems. The first part of our research was face liveness detection on a static image, where we applied nonlinear diffusion based on an additive operator splitting (AOS) scheme and a tridiagonal matrix block-solver algorithm to the image, which enhances the edges and surface texture in the real image. The diffused image was then fed to a deep CNN to identify the complex and deep features for classification. We obtained high accuracy on the NUAA Photograph Impostor dataset using one of our enhanced architectures. In the second part of our research, we developed an end-to-end real-time solution for face liveness detection on static images, where instead of using a separate preprocessing step for diffusing the images, we used a combined architecture in which the diffusion process and the CNN were implemented in a single step. This integrated approach gave promising results with two different architectures on the Replay-Attack and Replay-Mobile datasets. We also developed a novel deep architecture for face liveness detection on video frames that uses diffusion of the images followed by a deep CNN and Long Short-Term Memory (LSTM) to classify the video sequence as real or fake. Performance evaluation of our architecture on the Replay-Attack and Replay-Mobile datasets gave very competitive results. Finally, we performed liveness detection on video sequences using diffusion and the Two-Stream Inflated 3D ConvNet (I3D) architecture, and our experiments on the Replay-Attack and Replay-Mobile datasets gave very good results.
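    The sketch below shows one plausible shape for the video branch described above: a small frame-level CNN followed by an LSTM that classifies a clip of already-diffused frames as real or fake, written in PyTorch. All layer sizes, channel counts, and the two-class head are illustrative assumptions made for the example, not the architecture evaluated in the thesis.

```python
import torch
import torch.nn as nn

class LivenessCNNLSTM(nn.Module):
    """Sketch: frame-level CNN features fed to an LSTM for real/fake clip classification."""
    def __init__(self, hidden=128):
        super().__init__()
        # small CNN backbone applied to each (diffused, grayscale) frame independently
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (batch * time, 32)
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)             # logits for real vs. fake

    def forward(self, frames):                       # frames: (batch, time, 1, H, W)
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (h_n, _) = self.lstm(feats)               # final hidden state summarises the clip
        return self.head(h_n[-1])                    # (batch, 2) logits
```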