
    Deep Image Retrieval: A Survey

    In recent years, a vast amount of visual content has been generated and shared across various fields, such as social media platforms, medical imaging, and robotics. This abundance of content creation and sharing has introduced new challenges. In particular, searching databases for similar content, i.e., content-based image retrieval (CBIR), is a long-established research area, and more efficient and accurate methods are needed for real-time retrieval. Artificial intelligence has made progress in CBIR and has significantly facilitated the process of intelligent search. In this survey we organize and review recent CBIR works developed on deep learning algorithms and techniques, including insights and techniques from recent papers. We identify and present the commonly used benchmarks and evaluation methods in the field, collect common challenges, and propose promising future directions. More specifically, we focus on image retrieval with deep learning and organize the state-of-the-art methods according to the type of deep network structure, deep features, feature enhancement methods, and network fine-tuning strategies. Our survey considers a wide variety of recent methods, aiming to promote a global view of the field of instance-based CBIR. Comment: 20 pages, 11 figures
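    As a minimal sketch of the kind of deep-feature pipeline the survey covers, the snippet below embeds images with an off-the-shelf pretrained CNN, L2-normalises the descriptors, and ranks database images by cosine similarity. The backbone choice, preprocessing, and torchvision >= 0.13 weights API are illustrative assumptions, not the pipeline of any specific surveyed method.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Pretrained backbone used as a fixed feature extractor (assumed choice).
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # keep the 2048-d global descriptor
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(images):
    """images: list of PIL.Image -> (n, 2048) unit-norm descriptors."""
    batch = torch.stack([preprocess(im) for im in images])
    feats = backbone(batch)
    return torch.nn.functional.normalize(feats, dim=1)

def retrieve(query_feat, db_feats, k=5):
    # On unit-norm vectors, cosine similarity is a plain dot product.
    sims = db_feats @ query_feat
    return torch.topk(sims, k).indices     # indices of the k nearest images
```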

    Fusion features ensembling models using Siamese convolutional neural network for kinship verification

    Family is one of the most important entities in the community. Mining genetic information from facial images is increasingly utilized in a wide range of real-world applications, making the tracing of family members and kinship analysis remarkably easy, inexpensive, and fast compared to deoxyribonucleic acid (DNA) profiling. However, building reliable models for kinship recognition still suffers from insufficient determination of familial features, unstable reference cues of kinship, and the genetic factors that influence family features. This research proposes enhanced methods for extracting and selecting the effective familial features that provide evidence of kinship, leading to improved kinship verification accuracy from visual facial images. First, the Convolutional Neural Network based on Optimized Local Raw Pixels Similarity Representation (OLRPSR) method is developed to improve accuracy by generating a new matrix representation that removes irrelevant information. Second, the Siamese Convolutional Neural Network and Fusion of the Best Overlapping Blocks (SCNN-FBOB) is proposed to track and identify the most informative kinship clues in order to achieve higher accuracy. Third, the Siamese Convolutional Neural Network and Ensembling Models Based on Selecting Best Combination (SCNN-EMSBC) is introduced to overcome the weak performance of individual images and classifiers. To evaluate the proposed methods, a series of experiments is conducted on two popular benchmark kinship databases, KinFaceW-I and KinFaceW-II, and the results are compared against state-of-the-art algorithms from the literature. The SCNN-EMSBC method achieves promising results, with average accuracies of 92.42% and 94.80% on KinFaceW-I and KinFaceW-II, respectively. These results significantly improve kinship verification performance and outperform state-of-the-art algorithms for visual image-based kinship verification.
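    The sketch below shows the generic Siamese idea underlying this line of work: one shared CNN encodes both face images, and a small head classifies the pair as kin or not-kin. The layer sizes and the absolute-difference fusion are illustrative assumptions, not the exact OLRPSR, SCNN-FBOB, or SCNN-EMSBC architectures.

```python
import torch
import torch.nn as nn

class SiameseKinship(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(            # shared weights for both faces
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        self.head = nn.Sequential(               # decides kin vs. not-kin
            nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, parent, child):
        p, c = self.encoder(parent), self.encoder(child)
        return self.head(torch.abs(p - c))       # logit for "is kin"

# Toy usage with random face crops (hypothetical data).
model = SiameseKinship()
parent, child = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)
logits = model(parent, child)
loss = nn.BCEWithLogitsLoss()(logits.squeeze(1), torch.ones(4))
```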

    Deep Multi-View Learning for Visual Understanding

    PhD Thesis. Multi-view data is the result of an entity being perceived or represented from multiple perspectives. Many applications in visual understanding involve multi-view data; for example, the face images used to train a recognition system are usually captured by different devices from multiple angles. This thesis focuses on cross-view visual recognition problems, e.g., identifying face images of the same person across different cameras. Several representative multi-view settings, from supervised multi-view learning to the more challenging unsupervised domain adaptive (UDA) multi-view learning, are investigated, and novel multi-view learning algorithms are proposed for each. The proposed methods are based on advanced deep neural network (DNN) architectures to better handle visual data. However, directly combining multi-view learning objectives with DNNs can raise issues, e.g., with scalability, and limit the application scenarios and model performance, so corresponding novelties in the DNN methods are required. The thesis is organised into three parts, each chapter focusing on one multi-view learning setting with novel solutions, as follows.

    Chapter 3: A supervised multi-view learning setting with two different views is studied. To recognise data samples across views, one strategy is to align them in a common feature space via correlation maximisation, also known as canonical correlation analysis (CCA). Deep CCA has been proposed to improve performance through non-linear projection with deep neural networks. Existing deep CCA models typically decorrelate the deep feature dimensions of each view before their Euclidean distances are minimised in the common space. This feature decorrelation is achieved by enforcing an exact decorrelation constraint, which is computationally expensive due to matrix inversion or SVD operations; existing deep CCA models are therefore inefficient and have scalability issues. Furthermore, exact decorrelation is incompatible with gradient-based deep model training and results in sub-optimal solutions. To overcome these issues, a novel deep CCA model, Soft CCA, is introduced in this thesis. Specifically, exact decorrelation is replaced by soft decorrelation via a mini-batch based Stochastic Decorrelation Loss (SDL), which can be jointly optimised with the other training objectives. In addition, the SDL loss can be applied to other deep models beyond multi-view learning.
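    A minimal sketch of a mini-batch soft decorrelation penalty in the spirit of the SDL described above: instead of enforcing exact whitening, it penalises the off-diagonal entries of the feature covariance estimated on the current batch, so it stays differentiable and cheap. The exact formulation in the thesis (e.g. running covariance estimates and scaling) may differ.

```python
import torch

def soft_decorrelation_loss(features: torch.Tensor) -> torch.Tensor:
    """features: (batch, dim) deep features of one view."""
    b, d = features.shape
    centered = features - features.mean(dim=0, keepdim=True)
    cov = centered.t() @ centered / (b - 1)           # (dim, dim) batch covariance
    off_diag = cov - torch.diag(torch.diagonal(cov))  # zero out the diagonal
    return off_diag.abs().sum() / d                   # penalise cross-correlations

# The loss is differentiable and can simply be added to the other objectives.
feats = torch.randn(32, 128, requires_grad=True)
loss = soft_decorrelation_loss(feats)
loss.backward()
```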
    Chapter 4: A supervised multi-view learning setting with more than two views is studied in this chapter. Recently developed deep multi-view learning algorithms either learn a latent visual representation at a single semantic level and/or require laborious human annotation of such factors as attributes. A novel deep neural network architecture, called Multi-Level Factorisation Net (MLFN), is proposed to automatically factorise the visual appearance into latent discriminative factors at multiple semantic levels without manual annotation. The key idea is to force different views to share the same latent factors so that they can be aligned at all layers. Specifically, MLFN is composed of multiple stacked blocks. Each block contains multiple factor modules to model latent factors at a specific level, and factor selection modules that dynamically select the factor modules to interpret the content of each input image. The outputs of the factor selection modules also provide a compact latent factor descriptor that is complementary to the conventional deeply learned feature, and the two can be fused efficiently. The effectiveness of MLFN is demonstrated not only on large-scale cross-view recognition problems but also on general object categorisation tasks.

    Chapter 5: The last problem is a special unsupervised domain adaptation setting called unsupervised domain adaptive (UDA) multi-view learning. It involves a fully annotated dataset as the source domain and another, unlabelled dataset with relevant tasks as the target domain. The aim is to improve performance on the unlabelled dataset using the annotated data from the other dataset. Importantly, this setting further requires that both the source and target domains be multi-view datasets with relevant tasks, so the assumption of an aligned label space across domains is inappropriate in UDA multi-view learning. For example, person re-identification (Re-ID) datasets built on different surveillance scenarios contain images of different people and should be given disjoint person identity labels. Existing methods for UDA multi-view learning align the domains either in the raw image space or in a feature embedding space. In this thesis, a different framework, multi-task learning, is adopted with domain-specific objectives for learning a common space that enables knowledge transfer. Conventional supervised losses can be used for the labelled source data, while unsupervised objectives for the target domain play the key role in domain adaptation. Two novel unsupervised objectives are introduced for UDA multi-view learning, resulting in the two models below. The first model, termed the common factorised space model (CFSM), is built on the assumption that semantic latent attributes are shared between the source and target domains, since they are relevant multi-view learning tasks. Unlike existing methods based on domain alignment, CFSM emphasises transferring information across domains by discovering discriminative latent factors in the proposed common space. However, the multi-view data from the target domain are unlabelled, so an unsupervised factorisation loss is derived and applied to the common space for latent factor discovery across domains. The second model also learns a shared embedding space with multi-view data from both domains, but under a different assumption: it attempts to discover the latent correspondence of multi-view data in the unsupervised target data. The target data's contribution comes from a clustering process, where each cluster reveals the underlying cross-view correspondences across the multiple views in the target domain. To this end, a novel Stochastic Inference for Deep Clustering (SIDC) method is proposed. It reduces the self-reinforcing errors that lead to premature convergence to a sub-optimal solution by changing the conventional deterministic cluster assignment to a stochastic one.
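    The snippet below sketches one plausible reading of the stochastic-assignment idea behind SIDC: rather than deterministically assigning each target-domain feature to its nearest centroid (which can reinforce early mistakes), the assignment is sampled from a softmax over negative distances. The temperature and distance choice are illustrative assumptions, not the published formulation.

```python
import torch

def stochastic_assign(features, centroids, temperature=0.1):
    """features: (n, d), centroids: (k, d) -> sampled cluster ids of shape (n,)."""
    dists = torch.cdist(features, centroids)           # (n, k) Euclidean distances
    probs = torch.softmax(-dists / temperature, dim=1)  # soft assignment probabilities
    return torch.multinomial(probs, num_samples=1).squeeze(1)

# Toy usage with random target-domain features and centroids (hypothetical data).
feats = torch.randn(100, 64)
centroids = torch.randn(10, 64)
cluster_ids = stochastic_assign(feats, centroids)       # noisy, hence self-correcting
```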

    Moving-Camera Video Content Analysis via Action Recognition and Homography Transformation

    Moving-camera video content analysis aims at interpreting useful information in videos taken by moving cameras, including wearable cameras and handheld cameras. It is an essential problem in computer vision and plays an important role in many real-life applications, from understanding social difficulties to enhancing public security. In this work, we study three sub-problems of moving-camera video content analysis. The first two concern wearable-camera videos, a special type of moving-camera video: recognizing general actions and recognizing microactions. The third sub-problem is estimating homographies along moving-camera videos. Recognizing general actions in wearable-camera videos is challenging because the motion features extracted from videos of the same action may show very large variation and inconsistency, as they are mixed with the complex, non-stop motion of the camera. It is very difficult to collect sufficient videos to cover all such variations and use them to train action classifiers with good generalization ability. To address this, we develop a new approach that trains action classifiers on a relatively smaller set of fixed-camera videos with different views and then applies them to recognize actions in wearable-camera videos. We conduct experiments by training on a set of fixed-camera videos and testing on a set of wearable-camera videos, with very promising results. Microactions, such as small hand or head movements, can be difficult to recognize in practice, especially in wearable-camera videos, because only subtle body motion is present. To address this, we propose a new deep-learning based method to effectively learn mid-layer CNN features that enhance microaction recognition. More specifically, we develop a new dual-branch network for microaction recognition: one branch uses high-layer CNN features for classification, and the second branch, with a novel subtle motion detector, further exploits mid-layer CNN features for classification. In the experiments, we build a new microaction video dataset in which the micromotions of interest are mixed with larger general motions such as walking. Comprehensive experimental results verify that the proposed method yields new state-of-the-art performance on two microaction video datasets, while its performance on two general-action video datasets is also very promising. A homography is the invertible mapping between two images of the same planar surface. When estimating homographies along moving-camera videos, estimation between non-adjacent frames can be very challenging when their camera view angles differ greatly. To handle this, we propose a new deep-learning based method for homography estimation along videos that exploits temporal dynamics across frames. More specifically, we develop a recurrent convolutional regression network consisting of a convolutional neural network and a recurrent neural network with long short-term memory cells, followed by a regression layer that estimates the homography parameters. In the experiments, we introduce a new approach to synthesize videos with known ground-truth homographies and evaluate the proposed method on both synthesized and real-world videos with good results.
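    A minimal sketch of a recurrent convolutional regression network for homography estimation along a video, in the spirit of the method above: a small CNN encodes each channel-stacked frame pair, an LSTM carries temporal context, and a linear layer regresses eight homography parameters per time step. The backbone, input encoding, and parameterisation are illustrative assumptions rather than the exact published architecture.

```python
import torch
import torch.nn as nn

class RecurrentHomographyNet(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(                     # per frame-pair encoder
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),    # -> (batch, 64)
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.regress = nn.Linear(hidden, 8)           # 8 free parameters of H

    def forward(self, frame_pairs):                   # (batch, time, 6, H, W)
        b, t = frame_pairs.shape[:2]
        feats = self.cnn(frame_pairs.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)                     # temporal context across frames
        return self.regress(out)                      # (batch, time, 8)

# Toy usage: 2 clips, each with 5 consecutive frame pairs (hypothetical data).
video = torch.randn(2, 5, 6, 128, 128)
params = RecurrentHomographyNet()(video)              # per-step homography parameters
```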