
    Recent Advances in Deep Learning Techniques for Face Recognition

    In recent years, researchers have proposed many deep learning (DL) methods for various tasks, and face recognition (FR) in particular has made an enormous leap using these techniques. Deep FR systems benefit from the hierarchical architecture of DL methods to learn discriminative face representations. DL techniques have therefore significantly improved state-of-the-art performance of FR systems and enabled diverse and efficient real-world applications. In this paper, we present a comprehensive analysis of FR systems that leverage different types of DL techniques, summarizing 168 recent contributions from this area. We discuss papers on different algorithms, architectures, loss functions, activation functions, datasets, challenges, improvement ideas, and current and future trends of DL-based FR systems. We provide a detailed discussion of various DL methods to understand the current state of the art, followed by a discussion of various activation and loss functions for these methods. Additionally, we summarize datasets widely used for FR tasks and discuss challenges related to illumination, expression, pose variation, and occlusion. Finally, we discuss improvement ideas and current and future trends of FR tasks.
    Comment: 32 pages; citation: M. T. H. Fuad et al., "Recent Advances in Deep Learning Techniques for Face Recognition," in IEEE Access, vol. 9, pp. 99112-99142, 2021, doi: 10.1109/ACCESS.2021.309613
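Most of the surveyed systems ultimately compare faces through learned embedding vectors. As a rough, hypothetical illustration (random vectors stand in for the outputs of a trained network, and the threshold is invented, not taken from the survey), cosine-similarity verification can be sketched as:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(emb1: np.ndarray, emb2: np.ndarray, threshold: float = 0.5) -> bool:
    """Declare a match when the similarity exceeds a tuned threshold."""
    return cosine_similarity(emb1, emb2) >= threshold

# Random stand-ins for embeddings a trained network would produce.
rng = np.random.default_rng(0)
anchor = rng.normal(size=128)
same = anchor + 0.1 * rng.normal(size=128)   # small perturbation: same identity
other = rng.normal(size=128)                 # independent: different identity

matched = verify(anchor, same)        # similar embeddings should match
rejected = not verify(anchor, other)  # unrelated embeddings should not
```

Margin-based losses of the kind the survey discusses are trained precisely so that same-identity embeddings end up with high cosine similarity and different-identity embeddings with low similarity.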

    Co-segmentation assisted cross-modality person re-identification

    We present a deep learning-based method for Visible-Infrared person Re-Identification (VI-ReID). The major contribution is the incorporation of co-segmentation into a multi-task learning framework for VI-ReID, where co-segmentation helps make the feature distributions of RGB and IR images the same for the same identity but distinct for different identities. Accordingly, a novel multi-task learning based model, co-segmentation assisted VI-ReID (CSVI), is proposed in this paper. Specifically, the co-segmentation network first takes as input the modality-shared features extracted from a set of RGB and IR images by the VI-ReID model. It then exploits their semantic similarities to predict the person masks of the common identities within the input RGB and IR images, using a cross-modality center based weight generation module and a segmentation decoder. This enables the VI-ReID model to extract additional modality-shared shape features that boost performance. Meanwhile, the co-segmentation network implicitly establishes interactions among the set of RGB and IR images, further bridging the large modality discrepancies. Our model's effectiveness and superiority are verified through experimental comparisons with state-of-the-art algorithms on several benchmark datasets.
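The core idea of a cross-modality center used to score pixels can be sketched very loosely (this is an invented toy, not the paper's actual architecture; all shapes and names are hypothetical):

```python
import numpy as np

def modality_shared_center(rgb_feats: np.ndarray, ir_feats: np.ndarray) -> np.ndarray:
    """Average RGB and IR feature vectors of one identity into a shared
    center, later used as a dynamic filter for that identity's pixels."""
    return (rgb_feats.mean(axis=0) + ir_feats.mean(axis=0)) / 2.0

def predict_mask(feature_map: np.ndarray, center: np.ndarray) -> np.ndarray:
    """Correlate a (C, H, W) feature map with the identity center and squash
    to a soft person mask; a real segmentation decoder would refine this."""
    scores = np.tensordot(center, feature_map, axes=([0], [0]))  # (H, W)
    return 1.0 / (1.0 + np.exp(-scores))  # per-pixel probability

rng = np.random.default_rng(1)
center = modality_shared_center(rng.normal(size=(4, 16)),   # 4 RGB feature vectors
                                rng.normal(size=(4, 16)))   # 4 IR feature vectors
mask = predict_mask(rng.normal(size=(16, 8, 8)), center)    # soft (8, 8) mask
```

Because the center pools both modalities, pixels that score highly against it are by construction modality-shared, which is the property the abstract credits with bridging the RGB/IR gap.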

    Exploring deep learning powered person re-identification

    With increased security demands, more and more video surveillance systems are installed in public places such as schools, stations, and shopping malls. Such large-scale monitoring requires 24/7 video analytics, which cannot be achieved purely by manual operation. Thanks to recent advances in artificial intelligence (AI), deep learning algorithms enable automatic video analytics on smart devices, interpreting people's and vehicles' behaviour in real time to detect anomalies effectively. Among the various video analytics tasks, people search is one of the most critical use cases due to its wide range of application scenarios, such as searching for missing people, detecting intruders, and tracking suspects. However, current AI-powered people search is generally built upon facial recognition, which is effective yet potentially privacy-invasive. To address this problem, person re-identification (ReID), which aims to identify a person of interest without facial information, has become an effective alternative. Despite considerable achievements in recent years, person ReID still faces tough challenges, such as 1) the strong reliance on identity labels during feature learning, 2) the tradeoff between search speed and identification accuracy, and 3) the huge modality discrepancy between data from different sources, e.g., RGB images and infrared (IR) images. This thesis therefore focuses on these challenges in person ReID, analyses the advantages and limitations of existing solutions, and proposes improved solutions for each challenge. Specifically, to alleviate the reliance on identity labels during feature learning, an improved unsupervised person ReID framework is proposed in Chapter 3, which refines not only imperfect clustering results but also the optimisation directions of samples. Building on this unsupervised setting, we further address the tradeoff between search speed and identification accuracy.
To this end, an improved unsupervised binary feature learning scheme for person ReID is proposed in Chapter 4, which derives binary identity representations that are both robust to transformations and have low bit correlations. Beyond person ReID conducted within a single modality, where both query and gallery are RGB images, cross-modality retrieval is more challenging yet more common in real-world scenarios. To handle this problem, a two-stream framework facilitating person ReID with on-the-fly keypoint-aware features is proposed in Chapter 5. Finally, the thesis identifies several promising research topics in Chapter 6, which are instructive for future work in person ReID.
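The speed side of that tradeoff comes from matching with bits instead of floats. A generic sketch (not Chapter 4's actual scheme; the features here are random stand-ins for a trained encoder's output) of sign-binarised codes searched by Hamming distance:

```python
import numpy as np

def binarize(features: np.ndarray) -> np.ndarray:
    """Sign-threshold real-valued features into binary identity codes."""
    return (features > 0).astype(np.uint8)

def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    """Bit-level distance: XOR-and-count is far cheaper than float matching."""
    return int(np.count_nonzero(a != b))

# Random features stand in for what a trained encoder would emit.
rng = np.random.default_rng(0)
query = binarize(rng.normal(size=64))              # 64-bit query code
gallery = binarize(rng.normal(size=(1000, 64)))    # 1000 gallery codes
dists = np.array([hamming_distance(query, g) for g in gallery])
best = int(dists.argmin())                         # index of the closest code
```

Low bit correlation matters because correlated bits waste code capacity: a 64-bit code whose bits vary independently can distinguish far more identities than one whose bits co-vary.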

    Camera Based Localization for Indoor Optical Wireless Networks

    The main focus of this work is device localization in an indoor communication network that employs short-range Optical Wireless Communication (OWC) using pencil beams. OWC is becoming increasingly important as a solution to the shortage of available radio spectrum. To counter this problem, a radically new approach is proposed: performing wireless communication using optical rather than radio techniques, deploying optical pencil-beam technologies to give users access to an indoor optical fiber infrastructure. An architecture based on free-space optics has been adopted. The narrow infrared beam is considered a good solution because of its ability to carry, in an energy-efficient way, all the information the optical fiber can transport. Beam-Steered Infrared Light Communication (BS-ILC) brings the light only where it is needed. Multiple beams may independently serve user devices within a room, so each device gets non-shared capacity without conflicts with other devices. Additionally, infrared light beams may be operated at a higher power than visible light beams, due to the higher eye-safety threshold for infrared light. Together with the directivity of a beam, this implies that the received signal-to-noise ratio with BS-ILC can be substantially higher than with Visible Light Communication (VLC), enabling a higher data rate and longer reach at better power efficiency. Current BS-ILC prototypes support multiple beams with over 100 Gbit/s per beam. This high performance can only be achieved with small beam footprints, hence the system needs to know the exact location of user devices. In this thesis, an accurate and fast localization/tracking technique using a low-cost camera and simple image processing is presented.
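A minimal form of such camera-based spot localization (a synthetic frame and made-up numbers; the thesis' actual pipeline will differ) is relative thresholding followed by an intensity-weighted centroid:

```python
import numpy as np

def locate_spot(frame: np.ndarray, rel_threshold: float = 0.8):
    """Return the intensity-weighted centroid (x, y) of the brightest blob
    in a grayscale frame -- a stand-in for localizing a device's IR spot."""
    mask = frame >= rel_threshold * frame.max()   # keep only the brightest pixels
    ys, xs = np.nonzero(mask)
    weights = frame[ys, xs]
    cx = float((xs * weights).sum() / weights.sum())
    cy = float((ys * weights).sum() / weights.sum())
    return cx, cy

frame = np.zeros((480, 640))        # empty 640x480 camera frame
frame[200:205, 300:305] = 1.0       # synthetic 5x5 bright spot
x, y = locate_spot(frame)           # centroid at (302.0, 202.0)
```

Sub-pixel accuracy falls out of the weighted average, which is why even a low-cost camera can steer a narrow beam precisely.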

    Person Re-Identification Using Machine Learning Methods for Public Deployment Environments

    Appearance-based person re-identification in public environments is one of the most challenging, still unsolved computer vision tasks. Many of its sub-tasks can only be solved by combining machine learning with computer vision methods. In this thesis, we use machine learning approaches to improve all processing steps of appearance-based person re-identification: We apply convolutional neural networks to learn appearance-based features capable of performing re-identification at human level. To generate a template describing the person of interest, we apply machine learning approaches that automatically select person-specific, discriminative features. A learned metric helps to compensate for scenario-specific perturbations when matching features. Fusing complementary features at score level clearly improves re-identification performance; this is achieved above all through a learned weighting of the features. We deploy our approach in two applications, namely surveillance and robotics. In the surveillance application, person re-identification enables multi-camera tracking, which helps human operators quickly determine the current location of the person of interest. By applying appearance-based re-identification, a mobile service robot is able to keep track of its current user when following or guiding them. In this thesis, we characterize the quality of appearance-based person re-identification by twelve criteria, which enable a comparison with biometric approaches. Owing to the application of machine learning techniques, the appearance-based person re-identification achieves, in the considered unsupervised, public fields of application, a recognition performance on par with biometric approaches.
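Score-level fusion with learned weights can be sketched generically (illustrative scores and weights, not values from the thesis):

```python
import numpy as np

def fuse_scores(scores: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Weighted sum of per-feature matching scores (score-level fusion)."""
    w = weights / weights.sum()   # normalize the learned weights
    return scores @ w

# Matching scores of 3 gallery candidates from two complementary features
# (e.g. one appearance cue, one shape cue -- invented numbers).
scores = np.array([[0.9, 0.4],
                   [0.5, 0.8],
                   [0.3, 0.2]])
weights = np.array([0.7, 0.3])    # in practice learned from validation data
fused = fuse_scores(scores, weights)
best = int(fused.argmax())        # candidate with the highest fused score
```

The gain over a fixed average comes entirely from the weights: a feature that is unreliable in a given deployment scenario is simply down-weighted rather than discarded.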

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model deals with face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals in addition to the spoken messages. Social signals have to be interpreted by a proper recognition phase that considers visual and audio information. The Brunswick model allows quantitative evaluation of the quality of the interaction, using statistical tools that measure how effective the recognition phase is. In this paper we cast this theory in the setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source, low-cost robotic head platform, where gaze is the social signal under consideration.

    Computer Vision Techniques for Ambient Intelligence Applications

    Ambient Intelligence (AmI) is a multidisciplinary area concerned with environments that are sensitive and responsive to the presence of people and objects. The rapid progress of technology and the simultaneous reduction of hardware costs in recent years have enlarged the number of possible AmI applications, raising new research challenges at the same time. In particular, one important requirement in AmI is providing proactive support to people in their everyday working and free-time activities. To this end, Computer Vision represents a core research track, since only through suitable vision devices and techniques is it possible to detect elements of interest and understand the occurring events. The goal of this thesis is to present, and demonstrate the efficacy of, novel machine vision research contributions for different AmI scenarios: object keypoint analysis for Augmented Reality purposes, segmentation of natural images for plant species recognition, and heterogeneous people identification in unconstrained environments.

    Object Tracking

    Object tracking consists in estimating the trajectories of moving objects in a sequence of images. Automating computer object tracking is a difficult task: the dynamics of the many changing parameters that represent the features and motion of the objects, as well as temporary partial or full occlusion of the tracked objects, have to be considered. This monograph presents the development of object tracking algorithms, methods, and systems. Both the state of the art of object tracking methods and new research trends are described in this book. Fourteen chapters are split into two sections: Section 1 presents new theoretical ideas, whereas Section 2 presents real-life applications. Despite the variety of topics contained in this monograph, it constitutes a consistent body of knowledge in the field of computer object tracking. The editor's intention was to keep pace with the very rapid progress in the development of methods as well as the extension of their applications.
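The simplest trajectory estimator of the kind such methods build on handles brief occlusion by coasting on a constant-velocity prediction (a generic sketch, not a method from this book; all values are illustrative):

```python
import numpy as np

class ConstantVelocityTracker:
    """Minimal tracker: predicts with constant velocity, and coasts on the
    prediction when the object is occluded (no measurement available)."""

    def __init__(self, pos, alpha: float = 0.5):
        self.pos = np.asarray(pos, dtype=float)
        self.vel = np.zeros_like(self.pos)
        self.alpha = alpha  # blend factor between prediction and measurement

    def update(self, measurement=None) -> np.ndarray:
        predicted = self.pos + self.vel
        if measurement is None:            # occluded: trust the motion model
            new_pos = predicted
        else:                              # blend prediction and measurement
            m = np.asarray(measurement, dtype=float)
            new_pos = (1 - self.alpha) * predicted + self.alpha * m
        self.vel = new_pos - self.pos      # re-estimate velocity
        self.pos = new_pos
        return self.pos

tracker = ConstantVelocityTracker([0.0, 0.0])
for t in range(1, 4):
    tracker.update([float(t), 0.0])        # object moves 1 px/frame along x
coasted = tracker.update(None)             # occluded frame: prediction carries on
```

A Kalman filter refines this idea by weighting prediction against measurement according to their estimated uncertainties instead of a fixed blend factor.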