
    Tracking, Detection and Registration in Microscopy Material Images

    Fast and accurate characterization of fiber micro-structures plays a central role in analyzing the physical properties of continuous fiber reinforced composite materials. In materials science, this is usually achieved by continuously cross-sectioning a 3D material sample into a sequence of 2D microscopic images, followed by a fiber detection/tracking algorithm over the obtained image sequence. To speed up this process and handle larger material samples, we propose sparse sampling with a larger inter-slice distance in cross-sectioning and develop a new algorithm that can robustly track large-scale fibers from such a sparsely sampled image sequence. In particular, the problem is formulated as multi-target tracking, and Kalman filters are applied to track each fiber along the image sequence. One main challenge in this tracking process is to correctly associate each fiber with its observation, given that 1) fiber observations are large in number, crowded, and very similar in appearance within a 2D slice, and 2) there may be a large gap between the predicted location of a fiber and its observation under sparse sampling. To address this challenge, a novel group-wise association algorithm is developed by leveraging the fact that fibers are implanted in bundles and that fibers in the same bundle are highly correlated through the image sequence.

    Tracking-by-detection algorithms rely heavily on detection accuracy, especially recall. State-of-the-art fiber detection algorithms perform well under ideal conditions, but are inaccurate where image quality degrades locally, due to contaminants on the material surface and/or defocus blur. Convolutional Neural Networks (CNNs) could be used for this problem, but would require a large number of manually annotated fibers, which are not available.
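    The per-fiber prediction step in the multi-target tracking formulation above can be sketched as a constant-velocity Kalman filter over the slice sequence. The state vector, noise matrices, and parameter values below are illustrative assumptions for a single fiber's centroid, not the dissertation's actual configuration.

```python
import numpy as np

# State = [x, y, vx, vy]: a fiber's centroid in the slice plane plus its
# drift per slice. All matrices and noise values here are assumed.
F = np.array([[1, 0, 1, 0],   # state transition: position += velocity
              [0, 1, 0, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],   # we only observe the centroid (x, y)
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01          # process noise (assumed)
R = np.eye(2) * 0.25          # measurement noise (assumed)

def predict(x, P):
    """Predict the fiber's centroid in the next slice."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with the associated detection z = (x, y)."""
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Track one fiber drifting +1 px/slice in x across three slices.
x = np.array([10.0, 20.0, 0.0, 0.0])   # initial state: unknown velocity
P = np.eye(4)
for z in [np.array([11.0, 20.0]), np.array([12.0, 20.0]), np.array([13.0, 20.0])]:
    x, P = predict(x, P)
    x, P = update(x, P, z)
```

    With sparse sampling, the predicted location can be far from the true detection, which is where the group-wise (bundle-level) association described above takes over from naive nearest-neighbor matching.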
We propose an unsupervised learning method to accurately detect fibers at large scale that is robust against local degradations of image quality. The proposed method requires no manual annotations; instead, it uses fiber shape/size priors and spatio-temporal consistency in tracking to simulate the supervision needed to train the CNN. Due to significant microscope movement during data acquisition, the sampled microscopy images may not be well aligned, which further complicates large-scale fiber tracking. In this dissertation, we design an object tracking system that accurately tracks large-scale fibers and simultaneously performs satisfactory image registration. The large-scale fiber tracking task is accomplished by Kalman-filter-based tracking methods, and with the assistance of fiber tracking, image registration is performed in a coarse-to-fine manner. To evaluate the proposed methods, a dataset was collected by the Air Force Research Laboratory (AFRL). Material scientists at AFRL used a serial sectioning instrument to cross-section the 3D material samples; during sample preparation, the samples are ground, cleaned, and then imaged. Experimental results on this dataset demonstrate that the proposed methods yield significant improvements in large-scale fiber tracking and detection, together with satisfactory image registration.
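    The coarse step of such fiber-assisted registration can be sketched as a least-squares fit over matched fiber centroids. The translation-only motion model and all names below are assumptions for illustration; the dissertation's coarse-to-fine pipeline is more elaborate.

```python
import numpy as np

def estimate_translation(pts_a, pts_b):
    """Least-squares 2D translation mapping pts_b onto pts_a.

    pts_a, pts_b: (N, 2) arrays of centroids of the same tracked
    fibers in two adjacent slices (correspondence from tracking).
    """
    return (pts_a - pts_b).mean(axis=0)

# Matched centroids of three tracked fibers; slice B is shifted by (3, -2).
a = np.array([[10.0, 20.0], [30.0, 40.0], [50.0, 15.0]])
b = a - np.array([3.0, -2.0])

t = estimate_translation(a, b)
aligned = b + t   # slice B centroids after coarse alignment
```

    A finer, local alignment would then refine this coarse estimate, but the key point is that tracked fiber correspondences supply the control points for free.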

    Person Identification With Convolutional Neural Networks

    Person identification aims at matching persons across images or videos captured by different cameras, without requiring the presence of persons' faces. It is an important problem in the computer vision community and has many important real-world applications, such as person search, security surveillance, and no-checkout stores. However, this problem is very challenging due to various factors, such as illumination variation, view changes, human pose deformation, and occlusion. Traditional approaches generally focus on hand-crafting features and/or learning distance metrics for matching to tackle these challenges. With Convolutional Neural Networks (CNNs), feature extraction and metric learning can be combined in a unified framework.

    In this work, we study two important sub-problems of person identification: cross-view person identification and visible-thermal person re-identification. Cross-view person identification aims to match persons from temporally synchronized videos taken by wearable cameras. Visible-thermal person re-identification aims to match persons between images taken by visible cameras under normal illumination conditions and by thermal cameras under poor illumination conditions, such as at night.

    For cross-view person identification, we focus on addressing the challenge of view changes between cameras. Since the videos are taken by wearable cameras, the underlying 3D motion pattern of the same person should be consistent and can thus be used for effective matching. In light of this, we propose to extract view-invariant motion features to match persons. Specifically, we propose a CNN-based triplet network to learn view-invariant features by establishing correspondences between 3D human MoCap data and the projected 2D optical flow data. After training, the triplet network is used to extract view-invariant features from the 2D optical flows of videos for matching persons. We collect three datasets for evaluation.
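    The triplet objective used to train such a network can be sketched as follows: pull an anchor feature toward the matching feature from another view and push it away from a non-matching one. The margin value and the tiny example features below are assumptions, not the dissertation's actual settings.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.5):
    """max(0, d(a, p) - d(a, n) + margin) with squared Euclidean distance."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([1.0, 0.0])   # anchor (e.g. feature from 3D MoCap data)
p = np.array([0.9, 0.1])   # same person, projected 2D optical flow
n = np.array([0.5, 0.5])   # different person

loss = triplet_loss(a, p, n)
```

    The loss is zero once the negative is farther than the positive by at least the margin, so training only spends effort on triplets that still violate the view-invariance constraint.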
The experimental results demonstrate the effectiveness of this method. For visible-thermal person re-identification, we focus on the challenge of domain discrepancy between visible and thermal images. We propose to address this issue at the class level with a CNN-based two-stream network. Specifically, our idea is to learn a center for the features of each person in each domain (visible and thermal), using a new relaxed center loss. Instead of imposing constraints between pairs of samples, we enforce the centers of the same person in the visible and thermal domains to be close, and the centers of different persons to be distant. We also enforce the vector from the center of one person to that of another in the visible feature space to be similar to the corresponding vector in the thermal feature space. Using this network, we can learn domain-independent features for visible-thermal person re-identification. Experiments on two public datasets demonstrate the effectiveness of this method.
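    The class-level idea behind the relaxed center loss can be sketched as below: compare per-person feature centers rather than sample pairs. The margin, the equal weighting of the two terms, and the omission of the cross-center direction term are simplifying assumptions for illustration.

```python
import numpy as np

def center_loss(vis_feats, th_feats, labels, margin=1.0):
    """Pull each person's visible/thermal centers together and push
    different persons' centers apart (hinge with an assumed margin)."""
    persons = np.unique(labels)
    c_vis = {p: vis_feats[labels == p].mean(axis=0) for p in persons}
    c_th = {p: th_feats[labels == p].mean(axis=0) for p in persons}
    # Cross-domain pull: same person's centers should coincide.
    pull = sum(np.sum((c_vis[p] - c_th[p]) ** 2) for p in persons)
    # Intra-domain push: different persons' centers should stay apart.
    push = 0.0
    for p in persons:
        for q in persons:
            if p < q:
                d = np.sum((c_vis[p] - c_vis[q]) ** 2)
                push += max(0.0, margin - d)
    return pull + push

labels = np.array([0, 0, 1, 1])
vis = np.array([[0.0, 0.0], [0.0, 0.0], [2.0, 0.0], [2.0, 0.0]])
th = vis + np.array([1.0, 1.0])   # thermal features shifted from visible

loss = center_loss(vis, th, labels)
```

    Because only centers are constrained, individual samples are free to vary, which is the "relaxed" part compared with the standard pairwise center loss.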