Exploiting Multiple Detections for Person Re-Identification
Re-identification systems aim at recognizing the same individuals across multiple cameras, and one of the most relevant problems is that the appearance of the same individual varies across cameras due to illumination and viewpoint changes. This paper proposes the use of cumulative weighted brightness transfer functions (CWBTFs) to model these appearance variations. Different from recently proposed methods which only consider pairs of images to learn a brightness transfer function, we adopt a multiple-frame-based learning approach that leverages consecutive detections of each individual to transfer the appearance. We first present a CWBTF framework for the task of transforming appearance from one camera to another. We then present a re-identification framework where we segment the pedestrian images into meaningful parts and extract features from those parts as well as from the whole body. Together, these two frameworks model the appearance variations more robustly. We tested our approach on standard multi-camera surveillance datasets, showing consistent and significant improvements over existing methods on three different datasets without any additional cost. Our approach is general and can be applied to any appearance-based method.
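The core of a brightness transfer function is cumulative histogram matching between two cameras: each intensity level in camera A is mapped to the level in camera B with the same cumulative frequency. Below is a minimal single-channel numpy sketch of this idea; the function name and parameters are illustrative, and the paper's CWBTF additionally weights and accumulates such functions over multiple consecutive detections.

```python
import numpy as np

def brightness_transfer_function(src_values, dst_values, levels=256):
    """Estimate a brightness transfer function mapping intensity levels
    observed in a source camera to a destination camera by matching
    cumulative histograms (a simplified, single-channel sketch)."""
    src_hist, _ = np.histogram(src_values, bins=levels, range=(0, levels))
    dst_hist, _ = np.histogram(dst_values, bins=levels, range=(0, levels))
    src_cdf = np.cumsum(src_hist) / max(src_hist.sum(), 1)
    dst_cdf = np.cumsum(dst_hist) / max(dst_hist.sum(), 1)
    # For each source level, find the destination level whose CDF first
    # reaches the same cumulative frequency.
    return np.searchsorted(dst_cdf, src_cdf, side="left").clip(0, levels - 1)

# Toy example: camera B sees the same pixels as camera A, but 60 levels brighter.
rng = np.random.default_rng(0)
cam_a = rng.integers(40, 120, size=5000)
cam_b = cam_a + 60
btf = brightness_transfer_function(cam_a, cam_b)
```

Applying `btf` to an image from camera A approximates how the same surface would appear in camera B, which is what allows appearance descriptors to be compared across cameras.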
re-OBJ: Jointly Learning the Foreground and Background for Object Instance Re-identification
Conventional approaches to object instance re-identification rely on matching
appearances of the target objects across a set of frames. However, learning
the appearances of the objects alone can fail when there are multiple objects
with similar appearance, or multiple instances of the same object class,
present in the scene. This paper proposes that partial observations of the
background can be utilized to aid the object re-identification task in a rigid
scene, especially a rigid environment with many recurring, identical object
models. Using an extension to the Mask R-CNN architecture, we learn to encode
the important and distinctive information in the background jointly with the
foreground, relevant to rigid real-world scenarios such as an indoor
environment where objects are static and the camera moves around the scene. We
demonstrate the effectiveness of our joint visual feature in the
re-identification of objects in the ScanNet dataset and show a relative
improvement of around 28.25% in rank-1 accuracy over the DeepSORT method.
Comment: Accepted to ICIAP 2019 and awarded the Best Student Paper award.
Unsupervised Adaptive Re-identification in Open World Dynamic Camera Networks
Person re-identification is an open and challenging problem in computer
vision. Existing approaches have concentrated on either designing the best
feature representation or learning optimal matching metrics in a static setting
where the number of cameras is fixed in a network. Most approaches have
neglected the dynamic and open world nature of the re-identification problem,
where a new camera may be temporarily inserted into an existing system to get
additional information. To address this novel and practical problem, we
propose an unsupervised adaptation scheme for re-identification models in a
dynamic camera network. First, we formulate a domain perceptive
re-identification method based on geodesic flow kernel that can effectively
find the best source camera (already installed) to adapt with a newly
introduced target camera, without requiring a very expensive training phase.
Second, we introduce a transitive inference algorithm for re-identification
that can exploit the information from the best source camera to improve the
accuracy across other camera pairs in a network of multiple cameras. Extensive
experiments on four benchmark datasets demonstrate that the proposed approach
significantly outperforms the state-of-the-art unsupervised learning based
alternatives whilst being extremely efficient to compute.
Comment: CVPR 2017 Spotlight.
Machine Learning Advances for Practical Problems in Computer Vision
Convolutional neural networks (CNNs) have become the de facto standard for computer vision tasks, due to their unparalleled performance and versatility. Although deep learning removes the need for extensive hand-engineered features for every task, real-world applications of CNNs still often require considerable engineering effort to produce usable results. In this thesis, we explore solutions to problems that arise in practical applications of CNNs.
We address a rarely acknowledged weakness of CNN object detectors: the tendency to emit many excess detection boxes per object, which must be pruned by non-maximum suppression (NMS). This practice relies on the assumption that highly overlapping boxes are excess, which is problematic when objects occlude one another and overlapping detections are actually required. We therefore propose a novel loss function that incentivises a CNN to emit exactly one detection per object, making NMS unnecessary.
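For context, the post-processing step the proposed loss aims to eliminate is greedy non-maximum suppression: keep the highest-scoring box, discard every remaining box whose IoU with it exceeds a threshold, and repeat. A minimal numpy sketch (not the thesis's code; boxes are assumed to be `(x1, y1, x2, y2)`):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over axis-aligned boxes."""
    order = np.argsort(scores)[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # Intersection of box i with every remaining box.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]  # drop heavy overlaps with box i
    return keep

# Two near-duplicate boxes on one object, plus one distinct box.
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7])
```

The failure mode described above is visible here: if the two overlapping boxes had belonged to two genuinely occluding objects, NMS would still have discarded one of them.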
Another common problem when deploying a CNN in the real world is domain shift: CNNs can be surprisingly vulnerable to sometimes quite subtle differences between the images they encounter at deployment and those they were trained on. We investigate the role that texture plays in domain shift, and propose a novel data augmentation technique using style transfer to train CNNs that are more robust against shifts in texture. We demonstrate that this technique yields better domain transfer on several datasets, without requiring any domain-specific knowledge.
In collaboration with AstraZeneca, we develop an embedding space for cellular images collected in a high-throughput imaging screen as part of a drug discovery project. This uses a combination of techniques to embed the images in a 2D space such that similar images lie nearby, for the purpose of visualization and data exploration. The images are also clustered automatically, splitting the large dataset into a smaller number of clusters that display a common phenotype. This allows biologists to quickly triage the high-throughput screen, selecting a small subset of promising phenotypes for further investigation.
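The two-stage structure described above (embed high-dimensional image features into 2D, then cluster) can be sketched in a few lines of numpy. This is only an illustrative stand-in using PCA and plain k-means with a deterministic farthest-point initialization; the thesis pipeline combines more elaborate techniques.

```python
import numpy as np

def embed_and_cluster(features, k=3, iters=20):
    """Project features to 2D via PCA (SVD), then run Lloyd's k-means."""
    x = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    coords = x @ vt[:2].T  # 2D embedding for visualization
    # Deterministic farthest-point initialization of the k centers.
    centers = [coords[0]]
    for _ in range(k - 1):
        dists = np.min([np.linalg.norm(coords - c, axis=1) for c in centers], axis=0)
        centers.append(coords[int(dists.argmax())])
    centers = np.array(centers)
    for _ in range(iters):  # Lloyd's iterations: assign, then recompute means
        d = np.linalg.norm(coords[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = coords[labels == j].mean(axis=0)
    return coords, labels

# Toy "phenotypes": three well-separated Gaussian blobs in an 8D feature space.
rng = np.random.default_rng(1)
blobs = np.concatenate([rng.normal(c, 0.2, size=(30, 8)) for c in (0.0, 5.0, 10.0)])
coords, labels = embed_and_cluster(blobs, k=3)
```

Plotting `coords` colored by `labels` gives the kind of triage view described above: each cluster is a candidate phenotype a biologist can inspect as a group.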
Finally, we investigate an unusual form of domain bias that manifested in a real-world visual binary classification project for counterfeit detection. We confirm that CNNs are able to ``cheat'' the task by exploiting a strong correlation between the class label and the specific camera that acquired the image, and show that this reliably occurs whenever the correlation is present. We also investigate how exactly the CNN is able to infer the camera type from image pixels, given that this distinction is invisible to the human eye.
The contributions in this thesis are of practical value to deep learning practitioners working on a variety of problems in the field of computer vision.
Applications of a Graph Theoretic Based Clustering Framework in Computer Vision and Pattern Recognition
Recently, several clustering algorithms have been used to solve a variety of
problems from different disciplines. This dissertation addresses several
challenging tasks in computer vision and pattern recognition by casting them
as clustering problems. We propose novel approaches to multi-target tracking,
visual geo-localization and outlier detection using a unified underlying
clustering framework, dominant set clustering and its extensions, and present
superior results over several state-of-the-art approaches.
Comment: doctoral dissertation.