    An efficient framework for visible-infrared cross modality person re-identification

    Visible-infrared cross-modality person re-identification (VI-ReId) is an essential task for video surveillance in poorly illuminated or dark environments. Despite many recent studies on person re-identification in the visible domain (ReId), few studies deal specifically with VI-ReId. Besides challenges common to both ReId and VI-ReId, such as pose/illumination variations, background clutter, and occlusion, VI-ReId faces the additional challenge that color information is not available in infrared images. As a result, the performance of VI-ReId systems is typically lower than that of ReId systems. In this work, we propose a four-stream framework to improve VI-ReId performance. We train a separate deep convolutional neural network in each stream using different representations of the input images, expecting that different and complementary features can be learned from each stream. In our framework, grayscale and infrared input images are used to train the ResNet in the first stream. In the second stream, RGB and three-channel infrared images (created by repeating the infrared channel) are used. In the remaining two streams, we use local pattern maps as input images; these maps are generated using a local Zernike moments transformation. Local pattern maps are obtained from grayscale and infrared images in the third stream, and from RGB and three-channel infrared images in the last stream. We further improve the performance of the proposed framework by employing a re-ranking algorithm for post-processing. Our results indicate that the proposed framework outperforms the current state of the art by a large margin, improving Rank-1/mAP by 29.79%/30.91% on the SYSU-MM01 dataset and by 9.73%/16.36% on the RegDB dataset.
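
    The four-stream layout described above can be summarized in a short sketch. The following is a minimal, hedged PyTorch illustration: the four independent ResNet backbones and the fusion of their embeddings by concatenation follow the abstract, but the input preprocessing (including the local Zernike moments transform), the training objective, and the re-ranking step are simplified away and the embedding size is an assumption.

```python
# Hedged sketch of a four-stream feature extractor in the spirit of the
# framework above; not the paper's implementation.
import torch
import torch.nn as nn
from torchvision import models


def backbone(feat_dim=256):
    """One ResNet-50 stream with its classifier replaced by an embedding layer."""
    net = models.resnet50(weights=None)
    net.fc = nn.Linear(net.fc.in_features, feat_dim)
    return net


class FourStreamReId(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # One independent backbone per input representation.
        self.streams = nn.ModuleList([backbone(feat_dim) for _ in range(4)])

    def forward(self, inputs):
        # `inputs`: list of four (B, 3, H, W) tensors, e.g.
        # [gray/IR, RGB/3-ch IR, pattern map of gray/IR, pattern map of RGB/3-ch IR]
        feats = [net(x) for net, x in zip(self.streams, inputs)]
        # Fuse complementary features by concatenation for matching.
        return torch.cat(feats, dim=1)


if __name__ == "__main__":
    model = FourStreamReId()
    dummy = [torch.randn(2, 3, 256, 128) for _ in range(4)]
    print(model(dummy).shape)  # torch.Size([2, 1024])
```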

    Toward a flexible facial analysis framework in OpenISS for visual effects

    Facial analysis, including tasks such as face detection, facial landmark detection, and facial expression recognition, is a significant research domain in computer vision for visual effects. It can be used in various domains such as facial feature mapping for movie animation, biometrics/face recognition for security systems, and driver fatigue monitoring for transportation safety assistance. Most applications involve basic face and landmark detection as a preliminary analysis step before proceeding to further specialized processing. As technology develops, there are plenty of implementations and resources for each task available to researchers, but the key properties missing from all of them are flexibility and usability. Integrating functionality components involves complex configuration at each connection point, which is typically problematic and leads to poor reusability and adjustability. The lack of support for integrating different functionality components greatly increases the research effort and cost for individual researchers, which leads us to the idea of providing a framework solution that addresses the issue once and for all. To address this problem, we propose a user-friendly and highly extensible facial analysis framework. It contains a core that provides fundamental services for the framework and a facial analysis module composed of implementations of facial analysis tasks. We evaluate our framework solution and achieve our goal of instantiating the specialized facial analysis framework, which performs face detection, facial landmark detection, and facial expression recognition. As a whole, this framework solution addresses the industry's lack of an execution platform for integrated facial analysis implementations and fills a gap in the visual effects industry.
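
    The core/module split described above suggests a plug-in style of integration. The sketch below illustrates that general idea with a common task interface and a composable pipeline; all class and method names are illustrative assumptions and do not reflect the actual OpenISS API.

```python
# Minimal sketch of a pluggable facial analysis pipeline: each stage shares one
# interface, so detection, landmarks, and expression components can be swapped
# without reconfiguring the surrounding code. Names are hypothetical.
from abc import ABC, abstractmethod


class FacialTask(ABC):
    @abstractmethod
    def process(self, frame, context):
        """Consume a frame and the shared context dict, return the updated context."""


class FaceDetector(FacialTask):
    def process(self, frame, context):
        context["faces"] = [(0, 0, 100, 100)]  # placeholder bounding boxes
        return context


class LandmarkDetector(FacialTask):
    def process(self, frame, context):
        context["landmarks"] = [[(10, 20), (30, 20)] for _ in context.get("faces", [])]
        return context


class Pipeline:
    def __init__(self, stages):
        self.stages = list(stages)

    def run(self, frame):
        context = {}
        for stage in self.stages:
            context = stage.process(frame, context)
        return context


if __name__ == "__main__":
    pipeline = Pipeline([FaceDetector(), LandmarkDetector()])
    print(pipeline.run(frame=None))
```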

    A Review of Deep Convolutional Neural Networks in Mobile Face Recognition

    With the emergence of deep learning, Convolutional Neural Network (CNN) models have been proposed to advance the progress of various applications, including face recognition, object detection, pattern recognition, and number plate recognition. The utilization of CNNs in these areas has considerably improved security and surveillance capabilities by providing automated recognition solutions such as traffic surveillance, access control devices, biometric security systems, and attendance systems. However, there is still room for improvement in this field. This paper discusses several classic CNN models, such as LeNet-5, AlexNet, VGGNet, GoogLeNet, and ResNet, as well as lightweight models for mobile-based applications, such as MobileNet, ShuffleNet, and EfficientNet. Additionally, deep CNN-based face recognition models, such as DeepFace, DeepID, FaceNet, and SphereFace, are explored, along with their architectural characteristics, advantages, disadvantages, and recognition accuracy. The results indicate that many scholars are researching lightweight face recognition, but applying it to mobile devices remains impractical due to high computational costs. Furthermore, noisy-label learning is not robust in real-world scenarios, and learning from unlabeled face data is costly in terms of manual labeling. Finally, this paper concludes with a discussion of the current problems faced by face recognition technology and its potential future directions for development.
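
    As a concrete aside on why models such as MobileNet are considered lightweight, the sketch below compares a standard convolution with a depthwise separable convolution, the building block commonly associated with MobileNet-style networks. The channel sizes are arbitrary assumptions and the figures are parameter counts of these two blocks only, not of any model discussed in the review.

```python
# Depthwise separable convolution vs. a standard convolution: same receptive
# field, far fewer parameters. Illustrative only.
import torch.nn as nn


def standard_conv(c_in, c_out, k=3):
    return nn.Conv2d(c_in, c_out, k, padding=k // 2, bias=False)


def depthwise_separable_conv(c_in, c_out, k=3):
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, k, padding=k // 2, groups=c_in, bias=False),  # depthwise
        nn.Conv2d(c_in, c_out, 1, bias=False),                              # pointwise
    )


def n_params(module):
    return sum(p.numel() for p in module.parameters())


if __name__ == "__main__":
    print(n_params(standard_conv(128, 256)))             # 294912
    print(n_params(depthwise_separable_conv(128, 256)))  # 33920
```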

    Enhancing Face Recognition with Deep Learning Architectures: A Comprehensive Review

    Facial identification and the frameworks built around it have advanced remarkably in recent years, particularly for verifying individual identities, a practice prominently used by law enforcement agencies to advance forensic science. A large body of work has applied deep learning techniques within machine learning models to extract distinctive features and perform classification, thereby raising the precision of individual recognition. This study focuses on deep learning methods for facial recognition and the subsequent matching process, and on improving accuracy by training models with large datasets. Within this paper, a comprehensive survey is conducted of the diverse strategies used in facial recognition, and the survey in turn examines the intricacies and challenges that underlie facial recognition in image analysis.

    Object Detection and Classification in the Visible and Infrared Spectrums

    The over-arching theme of this dissertation is the development of automated detection and/or classification systems for challenging infrared scenarios. The six works presented herein can be categorized into four problem scenarios. In the first scenario, long-distance detection and classification of vehicles in thermal imagery, a custom convolutional network architecture is proposed for small thermal target detection. For the second scenario, thermal face landmark detection and thermal cross-spectral face verification, a publicly available visible and thermal face dataset is introduced, along with benchmark results for several landmark detection and face verification algorithms; furthermore, a novel visible-to-thermal transfer learning algorithm for face landmark detection is presented. The third scenario addresses near-infrared cross-spectral periocular recognition with a coupled conditional generative adversarial network guided by auxiliary synthetic loss functions. Finally, a deep sparse feature selection and fusion method is proposed to detect the presence of textured contact lenses prior to near-infrared iris recognition.
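
    The visible-to-thermal transfer learning mentioned in the second scenario can be illustrated, at a very general level, by fine-tuning a visible-spectrum-pretrained backbone on thermal data. The sketch below is only that generic pattern: the freezing policy, the landmark count, and the use of ImageNet weights as a stand-in for visible-domain pretraining are all assumptions, not the dissertation's algorithm.

```python
# Generic visible-to-thermal fine-tuning sketch for landmark regression.
import torch.nn as nn
from torchvision import models

NUM_LANDMARKS = 68  # assumed landmark count


def visible_to_thermal_model():
    # ImageNet weights stand in for visible-domain pretraining (downloads on first use).
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    # Freeze early, low-level filters assumed to transfer across spectra.
    for name, param in net.named_parameters():
        if not name.startswith(("layer4", "fc")):
            param.requires_grad = False
    # New head regresses (x, y) coordinates for each landmark from thermal inputs.
    net.fc = nn.Linear(net.fc.in_features, NUM_LANDMARKS * 2)
    return net
```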

    On Symbiosis of Attribute Prediction and Semantic Segmentation

    In this paper, we propose to employ semantic segmentation to improve person-related attribute prediction. The core idea lies in the fact that the probability of an attribute appearing in an image is far from uniform in the spatial domain. We build our attribute prediction model jointly with a deep semantic segmentation network. This harnesses the localization cues learned by the semantic segmentation to guide the attention of the attribute prediction to the regions where different attributes naturally show up. Therefore, in addition to prediction, we are able to localize the attributes despite merely having access to image-level labels (weak supervision) during training. We first propose semantic segmentation-based pooling and gating, respectively denoted as SSP and SSG. In the former, the estimated segmentation masks are used to pool the final activations of the attribute prediction network from multiple semantically homogeneous regions. In SSG, the same idea is applied to the intermediate layers of the network. SSP and SSG, while effective, impose heavy memory utilization since each channel of the activations is pooled/gated with all the semantic segmentation masks. To circumvent this, we propose Symbiotic Augmentation (SA), where we learn only one mask per activation channel. SA allows the model to either pick one, or combine (weighted superposition), multiple semantic maps in order to generate the proper mask for each channel. SA simultaneously applies the same mechanism to the reverse problem by leveraging the output logits of attribute prediction to guide the semantic segmentation task. We evaluate our proposed methods for facial attributes on the CelebA and LFWA datasets, while benchmarking WIDER Attribute and Berkeley Attributes of People for whole-body attributes. Our proposed methods achieve superior results compared to previous works. (Comment: Accepted for publication in PAMI. arXiv admin note: substantial text overlap with arXiv:1704.0874.)
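
    The segmentation-guided pooling (SSP) idea can be sketched compactly: each estimated segmentation mask acts as spatial attention that pools the final activation map into one vector per semantic region, and the pooled vectors are concatenated for attribute prediction. The tensor shapes and the area normalization below are illustrative assumptions, not the paper's exact layer.

```python
# Minimal SSP-style masked pooling sketch.
import torch


def segmentation_pooling(activations, masks, eps=1e-6):
    """
    activations: (B, C, H, W) final feature maps of the attribute network.
    masks:       (B, S, H, W) soft segmentation masks for S semantic regions.
    returns:     (B, S * C) one masked-average-pooled vector per region.
    """
    b, c, h, w = activations.shape
    s = masks.shape[1]
    # Weighted sum of activations under each mask ...
    pooled = torch.einsum("bchw,bshw->bsc", activations, masks)
    # ... normalized by mask area so regions of different sizes are comparable.
    area = masks.sum(dim=(2, 3)).clamp_min(eps).unsqueeze(-1)  # (B, S, 1)
    return (pooled / area).reshape(b, s * c)


if __name__ == "__main__":
    feats = torch.randn(2, 512, 7, 7)
    masks = torch.softmax(torch.randn(2, 5, 7, 7), dim=1)
    print(segmentation_pooling(feats, masks).shape)  # torch.Size([2, 2560])
```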

    Real-Time Facial Emotion Recognition Using Fast R-CNN

    In computer vision and image processing, object detection algorithms are used to detect semantic objects of certain classes in images and videos. Object detection algorithms use deep learning networks to classify detected regions. Unprecedented advancements in Convolutional Neural Networks (CNNs) have led to new possibilities and implementations for object detectors. A deep learning-based object detector detects objects through proposed regions and then classifies each region using a CNN. Object detectors are computationally efficient, unlike a typical CNN, which is computationally complex and expensive. Object detectors are widely used for face detection, recognition, and object tracking. In this thesis, deep learning-based object detection algorithms are implemented to classify facially expressed emotions in real time, captured through a webcam. A typical CNN classifies images without specifying regions within an image, which can be considered a limitation for understanding network performance under different training options. It is also more difficult to verify whether such a network has converged and is able to generalize, that is, to classify unseen data that was not part of the training set. Fast Region-based Convolutional Neural Network (Fast R-CNN), an object detection algorithm, is used to detect facially expressed emotions in real time by classifying proposed regions. The Fast R-CNN is trained using a high-quality video database consisting of 24 actors facially expressing eight different emotions, obtained from images processed from 60 videos per actor. An object detector's performance is measured using various metrics. Regardless of how an object detector performs with respect to average precision or miss rate, doing well on such metrics does not necessarily mean that the network is correctly classifying regions; this may result from the network model having been over-trained. In our work, we show that an object detection algorithm such as Fast R-CNN performs surprisingly well at classifying facially expressed emotions in real time, performing better than a plain CNN.
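
    The real-time loop described above can be sketched as follows: frames are captured from a webcam, passed to a trained region-based detector, and the predicted emotion labels are drawn on the proposed regions. The `detect_emotions` function below is a placeholder standing in for the thesis's trained Fast R-CNN model, which is not reproduced here.

```python
# Hedged webcam-loop sketch with a placeholder detector.
import cv2


def detect_emotions(frame):
    """Placeholder for the trained detector: returns [(x, y, w, h, label), ...]."""
    return [(50, 50, 120, 120, "happy")]


def run_webcam(camera_index=0):
    cap = cv2.VideoCapture(camera_index)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for x, y, w, h, label in detect_emotions(frame):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, label, (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
        cv2.imshow("emotion", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    run_webcam()
```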

    Deep Learning Model Based on ResNet-50 for Beef Quality Classification

    Food quality measurement is one of the most essential topics in agriculture and industry. To classify healthy food using computer vision inspection, a new architecture is proposed to classify beef images as rancid or healthy. With traditional measurements, specialists are not able to classify such images reliably, and building a deep learning model requires a huge number of beef images. In the present study, images of healthy and rancid beef were collected according to the analysis done by the Laboratory of Food Technology, Faculty of Agriculture, Kafrelsheikh University, in January 2020. The texture of the beef surface in the enrolled images makes it difficult to distinguish between rancid and healthy samples. A deep learning approach based on ResNet-50 is presented as a promising classifier to grade and classify the beef images. In this work, a limited number of images was used to reflect the research problem of scarce image resources: eight healthy images and ten rancid beef images. This number of images is not sufficient for retraining deep learning models, so a Generative Adversarial Network (GAN) was used to augment the enrolled images, producing one hundred eighty images. The ResNet-50 classifier achieves accuracies of 96.03%, 91.67%, and 88.89% in the training, testing, and validation phases, respectively. Furthermore, the current model (ResNet-50) is compared with classical and deep learning architectures to demonstrate the efficiency of ResNet-50 in image classification.
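
    The transfer-learning setup the abstract describes can be sketched as a ResNet-50 backbone with its final layer replaced by a two-class head (healthy vs. rancid). The GAN-based augmentation and the study's exact training configuration are not reproduced; the optimizer choice and learning rate below are illustrative assumptions.

```python
# Minimal ResNet-50 fine-tuning sketch for two-class beef quality grading.
import torch.nn as nn
from torch.optim import Adam
from torchvision import models


def build_beef_classifier(num_classes=2):
    net = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)  # ImageNet features
    net.fc = nn.Linear(net.fc.in_features, num_classes)             # healthy / rancid
    return net


if __name__ == "__main__":
    model = build_beef_classifier()
    optimizer = Adam(model.fc.parameters(), lr=1e-4)  # fine-tune the new head first
    criterion = nn.CrossEntropyLoss()
```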