
    Beyond KernelBoost

    In this Technical Report we propose a set of improvements to the KernelBoost classifier presented in [Becker et al., MICCAI 2013]. We start with a scheme inspired by Auto-Context that remains suitable when the lack of large training sets poses a risk of overfitting. The aim is to capture the interactions between neighboring image pixels to better regularize the boundaries of segmented regions. As in Auto-Context [Tu et al., PAMI 2009], the segmentation process is iterative and, at each iteration, the segmentation results from the previous iterations are taken into account in conjunction with the image itself. However, unlike in [Tu et al., PAMI 2009], we organize our recursion so that the classifiers can progressively focus on difficult-to-classify locations. This lets us exploit the power of the decision-tree paradigm while avoiding overfitting. In the context of this architecture, KernelBoost represents a powerful building block thanks to its ability to learn from the score maps coming from previous iterations. We first introduce two important mechanisms to strengthen the KernelBoost classifier, namely pooling and the clustering of positive samples based on the appearance of the corresponding ground truth. These operations significantly increase the effectiveness of the system on biomedical images, where texture plays a major role in the recognition of the different image components. We then present further techniques that can be easily integrated into the KernelBoost framework to improve the accuracy of the final segmentation. We show extensive results on different medical image datasets, including some multi-label tasks, on which our method outperforms state-of-the-art approaches. The resulting segmentations display high accuracy, neat contours, and reduced noise.
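
    To make the iterative scheme concrete, the following is a minimal sketch, assuming scikit-learn's GradientBoostingClassifier as a stand-in for KernelBoost and pixels already flattened into a feature matrix; it is not the authors' implementation, but it shows the Auto-Context-style loop in which each round consumes the previous score map and up-weights difficult-to-classify pixels.

        # Illustrative sketch, not the authors' KernelBoost code: GradientBoostingClassifier
        # stands in for KernelBoost, and pixels are assumed flattened into a feature matrix.
        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier

        def autocontext_train(features, labels, n_rounds=3, hard_weight=3.0):
            """features: (n_pixels, n_feats); labels: (n_pixels,) binary ground truth."""
            scores = np.zeros(len(labels))                 # score map from the previous round
            classifiers = []
            for _ in range(n_rounds):
                x = np.column_stack([features, scores])    # image features + context scores
                # Focus on difficult locations: pixels whose score disagrees with the label.
                margin = np.where(labels == 1, scores, 1.0 - scores)
                weights = 1.0 + hard_weight * (1.0 - margin)
                clf = GradientBoostingClassifier(n_estimators=50)
                clf.fit(x, labels, sample_weight=weights)
                scores = clf.predict_proba(x)[:, 1]        # context for the next round
                classifiers.append(clf)
            return classifiers, scores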

    Multigranularity Representations for Human Inter-Actions: Pose, Motion and Intention

    Tracking people and their body pose in videos is a central problem in computer vision. Standard tracking representations reason about temporal coherence of detected people and body parts. They have difficulty tracking targets under partial occlusions or rare body poses, where detectors often fail, since the number of training examples is often too small to cover the exponential variability of such configurations. We propose tracking representations that track and segment people and their body pose in videos by exploiting information at multiple detection and segmentation granularities when available: whole body, parts, or point trajectories. Detections and motion estimates provide contradictory information in the case of false-alarm detections or leaking motion affinities. We consolidate contradictory information via graph steering, an algorithm for simultaneous detection and co-clustering in a two-granularity graph of motion trajectories and detections, which corrects motion leakage between correctly detected objects while being robust to false alarms or spatially inaccurate detections. We first present a motion segmentation framework that exploits long-range motion of point trajectories and the large spatial support of image regions. We show that the resulting video segments adapt to targets under partial occlusions and deformations. Second, we augment motion-based representations with object detection to deal with motion leakage. We demonstrate how to combine dense optical flow trajectory affinities with repulsions from confident detections to reach a global consensus of detection and tracking in crowded scenes. Third, we study human motion and pose estimation. We segment hard-to-detect, fast-moving body limbs from their surrounding clutter and match them against pose exemplars to detect body pose under fast motion. We employ on-the-fly human body kinematics to improve tracking of body joints under wide deformations. We use the motion segmentability of body parts to re-rank a set of body joint candidate trajectories and jointly infer multi-frame body pose and video segmentation. We show empirically that such a multi-granularity tracking representation is worthwhile, obtaining significantly more accurate multi-object tracking and detailed body pose estimation on popular datasets.
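
    As an illustration of how motion affinities and detection repulsions might be consolidated, here is a hedged sketch; the function names, the mean-velocity trajectory descriptor, and the use of scikit-learn's SpectralClustering are assumptions of mine, not the thesis's graph steering algorithm.

        # Illustrative sketch with assumed names (trajectory_affinities, det_ids, det_conf);
        # SpectralClustering replaces the thesis's steering optimization.
        import numpy as np
        from sklearn.cluster import SpectralClustering

        def trajectory_affinities(velocities, det_ids, det_conf,
                                  sigma=1.0, repulsion=1.0, conf_thresh=0.8):
            """velocities: (n_traj, 2) mean flow per trajectory; det_ids: (n_traj,) index of
            the detection box containing each trajectory (-1 if none); det_conf: (n_dets,)."""
            velocities = np.asarray(velocities, dtype=float)
            det_ids = np.asarray(det_ids)
            det_conf = np.asarray(det_conf, dtype=float)
            d = np.linalg.norm(velocities[:, None, :] - velocities[None, :, :], axis=-1)
            aff = np.exp(-(d ** 2) / (2 * sigma ** 2))      # motion similarity attracts
            confident = (det_ids >= 0) & (det_conf[np.clip(det_ids, 0, None)] > conf_thresh)
            different = det_ids[:, None] != det_ids[None, :]
            # Trajectories inside different confident detections repel each other.
            aff[different & confident[:, None] & confident[None, :]] -= repulsion
            return np.clip(aff, 0.0, None)

        def cluster_trajectories(velocities, det_ids, det_conf, n_objects=2):
            aff = trajectory_affinities(velocities, det_ids, det_conf)
            return SpectralClustering(n_clusters=n_objects,
                                      affinity="precomputed").fit_predict(aff)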

    Harmony Potentials: Fusing Global and Local Scale for Semantic Image Segmentation

    The Hierarchical Conditional Random Field (HCRF) model has been successfully applied to a number of image labeling problems, including image segmentation. However, existing HCRF models of image segmentation do not allow multiple classes to be assigned to a single region, which limits their ability to incorporate contextual information across multiple scales. At higher scales in the image, this representation yields an oversimplified model, since multiple classes can reasonably be expected to appear within large regions. This simplified model particularly limits the impact of information at higher scales. Since class-label information at these scales is usually more reliable than at lower, noisier scales, neglecting this information is undesirable. To address these issues, we propose a new consistency potential for image labeling problems, which we call the harmony potential. It can encode any possible combination of labels, penalizing only unlikely combinations of classes. We also propose an effective sampling strategy over this expanded label set that renders the underlying optimization problem tractable. Our approach obtains state-of-the-art results on two challenging, standard benchmark datasets for semantic image segmentation: PASCAL VOC 2010 and MSRC-2.
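
    A minimal sketch of a harmony-style consistency term is given below; the helpers harmony_penalty and sample_global_sets, the gamma parameter, and the toy class probabilities are illustrative assumptions rather than the paper's formulation, but they convey how any label combination stays expressible while unlikely ones are penalized, and how sampling a few probable label sets keeps the search tractable.

        # Minimal sketch with made-up helper names and a toy class_probs dictionary;
        # the paper's potential and sampling strategy are richer than this.
        from itertools import combinations

        def harmony_penalty(region_labels, global_set, gamma=1.0):
            """Pay gamma for every region whose label is absent from the global label set."""
            return gamma * sum(1 for lab in region_labels if lab not in global_set)

        def sample_global_sets(class_probs, max_size=3):
            """Enumerate small subsets of the most probable classes to keep search tractable."""
            ranked = sorted(class_probs, key=class_probs.get, reverse=True)[:max_size + 2]
            for k in range(1, max_size + 1):
                for subset in combinations(ranked, k):
                    yield frozenset(subset)

        # Usage: choose the global label set that best trades off its own likelihood
        # against the penalty it imposes on the current region labeling.
        class_probs = {"sky": 0.5, "water": 0.3, "grass": 0.15, "car": 0.05}
        region_labels = ["sky", "sky", "water", "car"]
        best_set = min(sample_global_sets(class_probs),
                       key=lambda s: harmony_penalty(region_labels, s)
                       - sum(class_probs[c] for c in s))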

    3D hand tracking.

    The hand is often considered one of the most natural and intuitive interaction modalities for human-to-human interaction. In human-computer interaction (HCI), proper 3D hand tracking is the first step in developing a more intuitive HCI system which can be used in applications such as gesture recognition, virtual object manipulation and gaming. However, accurate 3D hand tracking remains a challenging problem due to the hand’s deformation, appearance similarity, high inter-finger occlusion and complex articulated motion. Further, 3D hand tracking is also interesting from a theoretical point of view, as it deals with three major areas of computer vision: segmentation (of the hand), detection (of hand parts), and tracking (of the hand). This thesis proposes a region-based skin color detection technique, and model-based and appearance-based 3D hand tracking techniques, to bring human-computer interaction applications one step closer. All techniques are briefly described below. Skin color provides a powerful cue for complex computer vision applications. Although skin color detection has been an active research area for decades, the mainstream technology is based on individual pixels. This thesis presents a new region-based technique for skin color detection which outperforms the current state-of-the-art pixel-based skin color detection technique on the popular Compaq dataset (Jones & Rehg 2002). The proposed technique achieves a 91.17% true positive rate with a 13.12% false positive rate on the Compaq dataset, tested over approximately 14,000 web images. Hand tracking is not a trivial task, as it requires tracking the 27 degrees of freedom of the hand. Hand deformation, self-occlusion, appearance similarity and irregular motion are major problems that make 3D hand tracking very challenging. This thesis proposes a model-based 3D hand tracking technique, which is improved by using the proposed depth-foreground-background feature, palm deformation module and context cue. However, the major problem of model-based techniques is that they are computationally expensive. This can be overcome by discriminative techniques as described below. Discriminative techniques (for example, random forests) are good for hand part detection; however, they fail under sensor noise and high inter-finger occlusion. Additionally, these techniques have difficulties in modelling kinematic or temporal constraints. Although model-based descriptive (for example, Markov Random Field) or generative (for example, Hidden Markov Model) techniques utilize kinematic and temporal constraints well, they are computationally expensive and struggle to recover from tracking failure. This thesis presents a unified framework for 3D hand tracking, using the best of both methodologies, which outperforms the current state-of-the-art 3D hand tracking techniques. The proposed 3D hand tracking techniques in this thesis can be used to extract accurate hand movement features and enable complex human-machine interaction such as gaming and virtual object manipulation.
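
    To illustrate the region-based idea (not the thesis's pipeline), the sketch below pools a crude per-pixel color score over scikit-image SLIC superpixels and decides skin versus non-skin per region; the color score and the thresholds are assumptions for illustration only.

        # Rough sketch: a hand-written redness score replaces the thesis's learned skin
        # model, and SLIC superpixels serve as the regions.
        import numpy as np
        from skimage.segmentation import slic

        def pixel_skin_prob(image_rgb):
            """Crude stand-in for a learned per-pixel skin color model (values in [0, 1])."""
            r = image_rgb[..., 0].astype(float)
            g = image_rgb[..., 1].astype(float)
            b = image_rgb[..., 2].astype(float)
            score = np.clip((r - g) / 255.0, 0, 1) * np.clip((r - b) / 255.0, 0, 1)
            return score / (score.max() + 1e-8)

        def region_skin_mask(image_rgb, n_segments=300, threshold=0.3):
            prob = pixel_skin_prob(image_rgb)
            regions = slic(image_rgb, n_segments=n_segments, compactness=10)
            mask = np.zeros(prob.shape, dtype=bool)
            for rid in np.unique(regions):
                sel = regions == rid
                mask[sel] = prob[sel].mean() > threshold   # decide per region, not per pixel
            return mask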

    Brain MR Image Segmentation: From Multi-Atlas Method To Deep Learning Models

    Quantitative analysis of the brain structures on magnetic resonance (MR) images plays a crucial role in examining brain development and abnormality, as well as in aiding treatment planning. Although manual delineation is commonly considered the gold standard, it suffers from shortcomings in terms of low efficiency and inter-rater variability. Therefore, developing automatic anatomical segmentation of the human brain is important for providing a tool for quantitative analysis (e.g., volume measurement, shape analysis, cortical surface mapping). Despite a large number of existing techniques, the automatic segmentation of brain MR images remains a challenging task due to the complexity of the brain anatomical structures and the great inter- and intra-individual variability among these anatomical structures. To address the existing challenges, four methods are proposed in this thesis. The first work proposes a novel label fusion scheme for multi-atlas segmentation: a two-stage majority voting scheme is developed to address the over-segmentation problem in the hippocampus segmentation of brain MR images. The second work develops a supervoxel graphical model for whole-brain segmentation, in order to relieve the dependency on complicated pairwise registration in multi-atlas segmentation methods. Based on the assumption that pixels within a supervoxel should have the same label, the proposed method converts the voxel labeling problem into a supervoxel labeling problem, which is solved by maximum-a-posteriori (MAP) inference in a Markov random field (MRF) defined on supervoxels. The third work incorporates an attention mechanism into convolutional neural networks (CNNs), aiming to learn the spatial dependencies between the shallow layers and the deep layers in a CNN and to produce an aggregation of the attended local features and high-level features for more precise segmentation results. The fourth method takes advantage of the success of CNNs in computer vision, combines the strength of graphical models with CNNs, and integrates them into an end-to-end trainable network. The proposed methods are evaluated on public MR image datasets, such as MICCAI2012, LPBA40, and IBSR. Extensive experiments demonstrate the effectiveness and superior performance of the proposed methods compared with other state-of-the-art methods.
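
    The two-stage voting idea can be sketched roughly as follows, under assumptions of my own (the ambiguity margin, the intensity-based atlas ranking, and the function two_stage_vote are not the thesis's exact scheme): a plain majority vote is refined at low-consensus voxels by re-voting among the atlases that locally match the target intensity best.

        # Rough sketch with assumed thresholds and function name; not the thesis's scheme.
        import numpy as np

        def two_stage_vote(atlas_labels, atlas_images, target_image, margin=0.1, top_k=3):
            """atlas_labels, atlas_images: (n_atlases, ...) binary labels and intensities
            already warped to the target space; target_image: target intensities."""
            votes = atlas_labels.mean(axis=0)                 # fraction voting foreground
            fused = (votes >= 0.5).astype(np.uint8)
            ambiguous = np.abs(votes - 0.5) < margin          # weak-consensus voxels
            err = np.abs(atlas_images - target_image[None])   # local intensity disagreement
            best = np.argsort(err, axis=0)[:top_k]            # top_k matching atlases per voxel
            refined = np.take_along_axis(atlas_labels, best, axis=0).mean(axis=0) >= 0.5
            fused[ambiguous] = refined[ambiguous]
            return fused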

    A learning framework for higher-order consistency models in multi-class pixel labeling problems

    Recently, higher-order Markov random field (MRF) models have been successfully applied to problems in computer vision, especially scene understanding problems. One successful higher-order MRF model for scene understanding is the consistency model [Kohli and Kumar, 2010; Kohli et al., 2009] and the earlier work by Ladicky et al. [2009, 2013], which contain higher-order potentials composed of lower linear envelope functions. In semantic image segmentation problems, which seek to identify the pixels of images with pre-defined labels of objects and backgrounds, this model encourages consistent label assignments over segmented regions of images. However, solving this MRF problem exactly is generally NP-hard; instead, efficient approximate inference algorithms are used. Furthermore, the lower linear envelope functions involve a number of parameters to learn, and the cross-validation typically used for pairwise MRF models is not a practical method for estimating such a large number of parameters. Nevertheless, few works have proposed efficient learning methods to deal with the large number of parameters in these consistency models. In this thesis, we propose a unified inference and learning framework for the consistency model. We investigate various issues and present solutions for inference and learning with this higher-order MRF model as follows. First, we derive two variants of the consistency model for multi-class pixel labeling tasks. Our model defines an energy function scoring any given label assignment over an image. In order to perform maximum a posteriori (MAP) inference in this model, we minimize the energy function using move-making algorithms in which the higher-order problems are transformed into tractable pairwise problems. We then employ a max-margin framework for learning optimal parameters. This learning framework provides a generalized approach for searching the large parameter space. Second, we propose a novel use of the Gaussian mixture model (GMM) for encoding consistency constraints over a large set of pixels. Here, we use various over-segmentation methods to define coherent regions for the consistency potentials. In general, mean shift (MS) produces locally coherent regions, while the GMM provides globally coherent regions, which need not be contiguous. Our model exploits both local and global information together and improves labeling accuracy on real data sets. Accordingly, we use multiple higher-order terms, one associated with each over-segmentation method, and our learning framework allows us to deal with the large number of parameters they introduce. Next, we explore a dual decomposition (DD) method for our multi-class consistency model. The dual decomposition MRF (DD-MRF) is an alternative method for optimizing the energy function. In dual decomposition, a complex MRF problem is decomposed into many easy subproblems, and we optimize the relaxed dual problem using a projected subgradient method. At convergence, we expect a global optimum in the dual space because it is a concave maximization problem. To optimize our higher-order DD-MRF exactly, we propose an exact minimization algorithm for solving the higher-order subproblems, and this minimization algorithm is much more efficient than graph cuts. The dual decomposition approach also solves the max-margin learning problem by minimizing the dual losses derived from DD-MRF. Here, our minimization algorithm allows us to optimize the DD learning exactly and efficiently, and in most cases it finds better parameters than the previous learning approach. Last, we focus on improving the labeling accuracy of our higher-order model by combining mid-level features, which we call region features. The region features help customize the general envelope functions for individual segmented regions. By assigning specific weights to the envelope functions, we can choose subsets of highly likely labels for each segmented region. We train multiple classifiers with region features and aggregate them to increase the prediction performance of possible labels for each region. Importantly, introducing these region features does not change the previous inference and learning algorithms.
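
    A compact sketch of a lower linear envelope consistency potential, in my own notation rather than the thesis code, is shown below; it evaluates the minimum of a set of linear functions of the fraction of pixels that disagree with a region's dominant label, the concave piecewise-linear form that makes the move-making and dual decomposition machinery above applicable.

        # Small sketch in my own notation: the envelope is a minimum of linear functions
        # of the fraction of pixels disagreeing with the region's dominant label, with the
        # (slope, intercept) pairs supplied by the caller.
        import numpy as np

        def envelope_potential(region_labels, lines):
            """region_labels: 1-D integer array of pixel labels in one region;
            lines: iterable of (slope, intercept) pairs defining the lower linear envelope."""
            counts = np.bincount(region_labels)
            disagreement = 1.0 - counts.max() / len(region_labels)  # fraction off the dominant label
            return min(a * disagreement + b for a, b in lines)

        # Usage: a two-piece envelope that rewards consistent regions and saturates otherwise.
        labels = np.array([2, 2, 2, 2, 1, 2, 2, 3])
        cost = envelope_potential(labels, lines=[(4.0, 0.0), (0.0, 1.0)])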

    Incorporating Boltzmann Machine Priors for Semantic Labeling in Images and Videos

    Semantic labeling is the task of assigning category labels to regions in an image. For example, a scene may consist of regions corresponding to categories such as sky, water, and ground, or parts of a face such as eyes, nose, and mouth. Semantic labeling is an important mid-level vision task for grouping and organizing image regions into coherent parts. Labeling these regions allows us to better understand the scene itself as well as properties of the objects in the scene, such as their parts, location, and interaction within the scene. Typical approaches for this task include the conditional random field (CRF), which is well suited to modeling local interactions among adjacent image regions. However, the CRF is limited in dealing with complex, global (long-range) interactions between regions in an image and between frames in a video. This thesis presents approaches to modeling long-range interactions within images and videos for use in semantic labeling. In order to model these long-range interactions, we incorporate priors based on the restricted Boltzmann machine (RBM). The RBM is a generative model which has demonstrated the ability to learn the shape of an object, and the conditional RBM (CRBM) is a temporal extension which can learn the motion of an object. Although the CRF is a good baseline labeler, we show how the RBM and CRBM can be added to the architecture to model both the global object shape within an image and the temporal dependencies of the object from previous frames in a video. We demonstrate the labeling performance of our models on the parts of complex face images from the Labeled Faces in the Wild database (for images) and the YouTube Faces Database (for videos). Our hybrid models produce results that are both quantitatively and qualitatively better than the baseline CRF alone for both images and videos.
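
    A hedged sketch of how an RBM shape prior can be combined with CRF-style local terms follows; the weighting lam, the Potts-style smoothness term, and the function names are illustrative assumptions rather than the authors' model, showing only that a candidate part mask can be scored jointly by local evidence and by the RBM free energy encoding global shape.

        # Hedged sketch: lam, the Potts smoothness term, and the function names are
        # illustrative assumptions, not the authors' model.
        import numpy as np

        def rbm_free_energy(v, W, b_vis, b_hid):
            """Standard RBM free energy: F(v) = -b_vis.v - sum_j softplus(b_hid_j + W[:, j].v)."""
            pre = b_hid + v @ W
            return -v @ b_vis - np.logaddexp(0.0, pre).sum()

        def total_energy(mask, unary, W, b_vis, b_hid, lam=0.1, pairwise=1.0):
            """mask: (H, W) array of 0/1 labels; unary: (H, W) cost of assigning label 1."""
            m = mask.astype(np.int8)
            u = (unary * m).sum()
            # Potts-style smoothness on the 4-connected pixel grid.
            p = pairwise * (np.abs(np.diff(m, axis=0)).sum() + np.abs(np.diff(m, axis=1)).sum())
            shape = lam * rbm_free_energy(m.ravel().astype(float), W, b_vis, b_hid)
            return u + p + shape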