35,296 research outputs found

    Automated Visual Fin Identification of Individual Great White Sharks

    This paper discusses the automated visual identification of individual great white sharks from dorsal fin imagery. We propose a computer vision photo ID system and report recognition results over a database of thousands of unconstrained fin images. To the best of our knowledge, this line of work establishes the first fully automated contour-based visual ID system in the field of animal biometrics. The approach put forward appreciates shark fins as textureless, flexible and partially occluded objects with an individually characteristic shape. In order to recover animal identities from an image, we first introduce an open contour stroke model, which extends multi-scale region segmentation to achieve robust fin detection. Secondly, we show that combinatorial, scale-space selective fingerprinting can successfully encode fin individuality. We then measure the species-specific distribution of visual individuality along the fin contour via an embedding into a global `fin space'. Exploiting this domain, we finally propose a non-linear model for individual animal recognition and combine all approaches into a fine-grained multi-instance framework. We provide a system evaluation, compare results to prior work, and report performance and properties in detail. Comment: 17 pages, 16 figures. To be published in IJCV. Article replaced to update first author contact details and to correct a Figure reference on page
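    The contour-based identification idea above can be illustrated with a far simpler sketch than the paper's open contour stroke model and `fin space' embedding: extract the fin contour from a binary mask, describe it with curvature profiles at several smoothing scales, and match a query against a gallery by nearest neighbour. All function names, scales and the mask input are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of contour-based individual matching (illustrative assumptions only).
import numpy as np
from scipy.ndimage import gaussian_filter1d
from skimage import measure

def contour_from_mask(mask):
    """Return the longest contour of a binary fin mask as an (N, 2) float array."""
    contours = measure.find_contours(mask.astype(float), 0.5)
    return max(contours, key=len)

def curvature_fingerprint(contour, scales=(2, 4, 8, 16), n_samples=256):
    """Resample the contour and stack curvature profiles at several smoothing scales."""
    # Arc-length resampling to a fixed number of points.
    d = np.cumsum(np.r_[0, np.linalg.norm(np.diff(contour, axis=0), axis=1)])
    t = np.linspace(0, d[-1], n_samples)
    x = np.interp(t, d, contour[:, 1])
    y = np.interp(t, d, contour[:, 0])
    feats = []
    for s in scales:
        xs, ys = gaussian_filter1d(x, s), gaussian_filter1d(y, s)
        dx, dy = np.gradient(xs), np.gradient(ys)
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        # Signed curvature of the smoothed contour at this scale.
        k = (dx * ddy - dy * ddx) / np.maximum((dx**2 + dy**2) ** 1.5, 1e-8)
        feats.append(k)
    return np.concatenate(feats)

def identify(query_mask, gallery):
    """gallery: list of (individual_id, fingerprint). Returns the closest identity."""
    q = curvature_fingerprint(contour_from_mask(query_mask))
    return min(gallery, key=lambda item: np.linalg.norm(q - item[1]))[0]
```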

    A supervised texton based approach for automatic segmentation and measurement of the fetal head and femur in 2D ultrasound images

    This paper presents a supervised texton based approach for the accurate segmentation and measurement of the fetal head (BPD, OFD, HC) and femur (FL) in 2D ultrasound images. The method consists of several steps. First, a non-linear diffusion technique is utilized to reduce the speckle noise. Then, based on the assumption that cross-sectional intensity profiles of the skull and femur can be approximated by Gaussian-like curves, a multi-scale and multi-orientation filter bank is designed to extract texton features specific to fetal anatomic structures in ultrasound. The extracted texton cues, together with multi-scale local brightness, are then built into a unified framework for boundary detection of the fetal head and femur. Finally, for the fetal head, a direct least-squares ellipse fitting method is used to construct a closed head contour, whilst for the fetal femur a closed contour is produced by connecting the detected femur boundaries. The presented method is demonstrated to be promising for clinical applications. Overall, the fetal head segmentation and measurement results of our method are comparable with the inter-observer difference of experts, with a best average precision of 96.85%, a maximum symmetric contour distance (MSD) of 1.46 mm and an average symmetric contour distance (ASD) of 0.53 mm; for the fetal femur, the overall performance of our method is better than the inter-observer difference of experts, with an average precision of 84.37%, MSD of 2.72 mm and ASD of 0.31 mm.
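    As a concrete illustration of the final measurement step only, the sketch below fits an ellipse to already-detected skull boundary points with scikit-image's direct least-squares EllipseModel and derives BPD, OFD and an approximate HC. The helper name, the pixel-spacing handling and the use of Ramanujan's perimeter approximation are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of head measurement from detected boundary points (illustrative).
import numpy as np
from skimage.measure import EllipseModel

def head_measurements(boundary_xy, mm_per_pixel=1.0):
    """boundary_xy: (N, 2) array of skull boundary points in pixel (x, y) coordinates."""
    model = EllipseModel()
    if not model.estimate(boundary_xy):
        raise ValueError("ellipse fit failed")
    xc, yc, a, b, theta = model.params            # a, b are the semi-axis lengths
    a_mm, b_mm = a * mm_per_pixel, b * mm_per_pixel
    ofd = 2 * max(a_mm, b_mm)                     # occipito-frontal diameter ~ major axis
    bpd = 2 * min(a_mm, b_mm)                     # biparietal diameter ~ minor axis
    # Ramanujan's approximation of the ellipse perimeter, used here for HC.
    h = ((a_mm - b_mm) / (a_mm + b_mm)) ** 2
    hc = np.pi * (a_mm + b_mm) * (1 + 3 * h / (10 + np.sqrt(4 - 3 * h)))
    return {"BPD": bpd, "OFD": ofd, "HC": hc}
```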

    Unsupervised level set parameterization using multi-scale filtering

    This paper presents a novel framework for unsupervised level set parameterization using multi-scale filtering. A standard multi-scale, directional filtering algorithm is used to capture the orientation coherence in edge regions. The latter is encoded in entropy-based image `heatmaps', which weight the forces guiding level set evolution. Experiments are conducted on two large benchmark databases as well as on real proteomics images. The experimental results demonstrate that the proposed framework accelerates contour convergence while obtaining segmentation quality comparable to that achieved with empirically optimized parameterization.
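    A rough sketch of the heatmap idea, under loose assumptions: use a small Gabor filter bank in place of the paper's directional filters, turn the per-pixel distribution of filter responses into an entropy map, and use it to modulate the speed of the evolving contour. Frequencies, orientation count and the weighting rule below are illustrative choices, not the paper's exact scheme.

```python
# Entropy heatmap from a Gabor orientation bank (illustrative sketch).
import numpy as np
from skimage.filters import gabor

def entropy_heatmap(image, frequencies=(0.1, 0.2), n_orientations=8, eps=1e-8):
    """Return an entropy map in [0, 1]; low values mark orientation-coherent (edge) regions."""
    responses = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(image, frequency=f, theta=theta)
            responses.append(np.hypot(real, imag))   # magnitude response per filter
    r = np.stack(responses, axis=0) + eps
    p = r / r.sum(axis=0, keepdims=True)             # per-pixel distribution over filters
    entropy = -(p * np.log(p)).sum(axis=0)
    return (entropy - entropy.min()) / (np.ptp(entropy) + eps)

# Example (illustrative): scale a constant propagation force F by the heatmap so the
# contour slows where orientations are coherent (low entropy), i.e. near edges:
# weighted_force = F * entropy_heatmap(image)
```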

    Instance-Level Salient Object Segmentation

    Image saliency detection has recently witnessed rapid progress due to deep convolutional neural networks. However, none of the existing methods is able to identify object instances in the detected salient regions. In this paper, we present a salient instance segmentation method that produces a saliency mask with distinct object instance labels for an input image. Our method consists of three steps: estimating a saliency map, detecting salient object contours, and identifying salient object instances. For the first two steps, we propose a multiscale saliency refinement network, which generates high-quality salient region masks and salient object contours. Once integrated with multiscale combinatorial grouping and a MAP-based subset optimization framework, our method can generate very promising salient object instance segmentation results. To promote further research and evaluation of salient instance segmentation, we also construct a new database of 1000 images with pixelwise salient instance annotations. Experimental results demonstrate that our proposed method achieves state-of-the-art performance on all public benchmarks for salient region detection as well as on our new dataset for salient instance segmentation. Comment: To appear in CVPR201
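    The last step, identifying salient object instances, can be approximated by a simple greedy subset selection over candidate object masks, sketched below. This stands in for the MAP-based subset optimization described above; the scoring rule, the thresholds and the assumption that proposals arrive as binary masks are all illustrative.

```python
# Greedy selection of salient instances from mask proposals (illustrative sketch).
import numpy as np

def iou(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def select_salient_instances(saliency, proposals, sal_thresh=0.5, iou_thresh=0.5):
    """saliency: (H, W) map in [0, 1]; proposals: list of (H, W) boolean masks."""
    salient = saliency > sal_thresh
    # Rank proposals by the fraction of each mask that lies inside the salient region.
    scored = sorted(
        proposals,
        key=lambda m: np.logical_and(m, salient).sum() / max(m.sum(), 1),
        reverse=True,
    )
    selected = []
    for mask in scored:
        inside = np.logical_and(mask, salient).sum() / max(mask.sum(), 1)
        if inside < 0.5:
            continue                      # mostly outside the salient region
        if all(iou(mask, s) < iou_thresh for s in selected):
            selected.append(mask)         # keep as a distinct salient instance
    return selected                       # one boolean mask per salient instance
```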

    DeepEdge: A Multi-Scale Bifurcated Deep Network for Top-Down Contour Detection

    Contour detection has been a fundamental component in many image segmentation and object detection systems. Most previous work utilizes low-level features such as texture or saliency to detect contours and then uses them as cues for a higher-level task such as object detection. However, we claim that recognizing objects and predicting contours are two mutually related tasks. Contrary to traditional approaches, we show that we can invert the commonly established pipeline: instead of detecting contours with low-level cues for a higher-level recognition task, we exploit object-related features as high-level cues for contour detection. We achieve this goal by means of a multi-scale deep network that consists of five convolutional layers and a bifurcated fully-connected sub-network. The section from the input layer to the fifth convolutional layer is fixed and directly lifted from a pre-trained network optimized over a large-scale object classification task. This section of the network is applied to four different scales of the image input. These four parallel and identical streams are then attached to a bifurcated sub-network consisting of two independently trained branches. One branch learns to predict the contour likelihood (with a classification objective), whereas the other branch is trained to learn the fraction of human labelers agreeing about the contour presence at a given point (with a regression criterion). We show that, without any feature engineering, our multi-scale deep learning approach achieves state-of-the-art results in contour detection. Comment: Accepted to CVPR 201
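    A rough PyTorch sketch of the architecture described above, not the authors' code: a frozen pre-trained convolutional trunk applied to four scaled crops around a candidate point, followed by a bifurcated head with a classification branch and a regression branch. The choice of AlexNet as the trunk and all layer sizes are assumptions.

```python
# Bifurcated multi-scale contour network (illustrative sketch, assumed layer sizes).
import torch
import torch.nn as nn
from torchvision import models

class BifurcatedContourNet(nn.Module):
    def __init__(self, feat_dim=256 * 6 * 6, hidden=1024, n_scales=4):
        super().__init__()
        trunk = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
        self.trunk = trunk.features          # five conv layers, kept fixed
        for p in self.trunk.parameters():
            p.requires_grad = False
        self.pool = nn.AdaptiveAvgPool2d((6, 6))
        in_dim = n_scales * feat_dim
        self.cls_branch = nn.Sequential(     # contour / no-contour likelihood
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1), nn.Sigmoid())
        self.reg_branch = nn.Sequential(     # fraction of labelers marking a contour
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, crops):
        # crops: list of four tensors (B, 3, H_s, W_s), one per input scale.
        feats = [self.pool(self.trunk(c)).flatten(1) for c in crops]
        x = torch.cat(feats, dim=1)
        return self.cls_branch(x), self.reg_branch(x)
```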

    Statistical Model of Shape Moments with Active Contour Evolution for Shape Detection and Segmentation

    This paper describes a novel method for shape representation and robust image segmentation. The proposed method combines two well-known methodologies, namely statistical shape models and active contours implemented in a level set framework. Shape detection is achieved by maximizing a posterior function that consists of a prior shape probability model and an image likelihood function conditioned on shape. The statistical shape model is built through a learning process based on nonparametric probability estimation in a PCA-reduced feature space formed by the Legendre moments of training silhouette images. A greedy strategy optimizes the proposed cost function by iteratively evolving an implicit active contour in the image space and subsequently performing a constrained optimization of the evolved shape in the reduced shape feature space. Experimental results presented in the paper demonstrate that the proposed method, contrary to many other active contour segmentation methods, is highly resilient to severe random and structural noise that may be present in the data.
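    The shape prior can be sketched as follows, under stated assumptions: compute low-order Legendre moments of binary training silhouettes, project them into a PCA-reduced space, and fit a nonparametric (kernel) density that would play the role of the prior term in the posterior being maximized. The moment order, number of components and KDE choice are illustrative, not the paper's settings.

```python
# Legendre-moment shape features, PCA reduction and a KDE prior (illustrative sketch).
import numpy as np
from numpy.polynomial.legendre import legval
from sklearn.decomposition import PCA
from scipy.stats import gaussian_kde

def legendre_moments(silhouette, order=10):
    """Legendre moments lambda_pq, p+q <= order, of a binary image mapped onto [-1, 1]^2."""
    f = silhouette.astype(float)
    h, w = f.shape
    x = np.linspace(-1, 1, w)
    y = np.linspace(-1, 1, h)
    # Rows of Px/Py are the Legendre polynomials P_0..P_order sampled on the grid.
    Px = np.stack([legval(x, np.eye(order + 1)[p]) for p in range(order + 1)])
    Py = np.stack([legval(y, np.eye(order + 1)[p]) for p in range(order + 1)])
    moments = []
    for p in range(order + 1):
        for q in range(order + 1 - p):
            norm = (2 * p + 1) * (2 * q + 1) / (w * h)   # discrete normalization factor
            moments.append(norm * (Py[q] @ f @ Px[p]))
    return np.array(moments)

def build_shape_prior(silhouettes, n_components=5):
    """Train a nonparametric shape prior from aligned binary silhouettes."""
    feats = np.stack([legendre_moments(s) for s in silhouettes])
    pca = PCA(n_components=n_components).fit(feats)
    kde = gaussian_kde(pca.transform(feats).T)       # density over the reduced space
    return pca, kde                                  # log-prior of a shape: kde.logpdf(...)
```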