14 research outputs found

    Analysis of density based and fuzzy c-means clustering methods on lesion border extraction in dermoscopy images

    Abstract. Background: Computer-aided segmentation and border detection in dermoscopic images is one of the core components of diagnostic procedures and therapeutic interventions for skin cancer. Automated assessment tools for dermoscopy images have become an important research field, mainly because of inter- and intra-observer variation in human interpretation. In this study, we compare two approaches for automatic border detection in dermoscopy images: density-based clustering (DBSCAN) and Fuzzy C-Means (FCM) clustering. In the first approach, if there is sufficient density (more than a given number of points) around a point, either a new cluster is formed around that point or an existing cluster grows by absorbing the point and its neighbours. The second approach uses FCM clustering, which can assign one data point to more than one cluster. Results: Each approach is evaluated on a set of 100 dermoscopy images whose borders, manually drawn by a dermatologist, serve as the ground truth. False positives, false negatives, true positives, and true negatives are quantified by comparing the results against these manually determined borders. Both methods are then analysed quantitatively using three accuracy measures: border error, precision, and recall. Conclusion: In addition to low border error and high precision and recall, visual inspection showed that DBSCAN effectively delineated the targeted lesions and is a promising approach, whereas FCM performed poorly, especially on the border error metric.
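
    A minimal sketch of the two clustering strategies being compared, assuming scikit-learn's DBSCAN and a hand-rolled fuzzy c-means update on RGB pixel values; the file name, eps, min_samples, and cluster count are illustrative choices, not values from the study.

```python
# Sketch: compare DBSCAN and fuzzy c-means on dermoscopy pixel colours.
# "lesion.png", eps, min_samples and the cluster count are illustrative.
import numpy as np
from skimage import io
from sklearn.cluster import DBSCAN

img = io.imread("lesion.png")[:, :, :3].astype(float) / 255.0   # hypothetical image
pixels = img.reshape(-1, 3)[::4]          # subsample pixels to keep DBSCAN tractable

# Density-based clustering: dense colour regions grow clusters, sparse points
# are marked as noise (label -1).
db_labels = DBSCAN(eps=0.05, min_samples=50).fit_predict(pixels)

# Fuzzy c-means: every pixel receives a membership degree in each cluster.
def fcm(X, c=2, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                 # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted cluster centres
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))                   # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

centers, U = fcm(pixels, c=2)
fcm_labels = U.argmax(axis=1)        # defuzzify to a hard lesion/background split
```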

    Using Multi-view Recognition and Meta-data Annotation to Guide a Robot's Attention

    In the transition from industrial to service robotics, robots will have to deal with increasingly unpredictable and variable environments. We present a system that is able to recognize objects of a certain class in an image and to identify their parts for potential interactions. The method can recognize objects from arbitrary viewpoints and generalizes to instances that have never been observed during training, even if they are partially occluded and appear against cluttered backgrounds. Our approach builds on the implicit shape model of Leibe et al. We extend it to couple recognition to the provision of meta-data.
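
    A toy sketch of the Hough-voting idea behind the implicit shape model this system builds on: matched local features cast votes for the object centre, and the densest accumulator cell becomes the hypothesis. The feature positions and learned offsets below are synthetic stand-ins, not the paper's codebook.

```python
# Toy Hough-voting sketch in the spirit of the implicit shape model:
# matched features cast votes for the object centre; the densest
# accumulator cell becomes the detection hypothesis. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
true_centre = np.array([120.0, 80.0])              # unknown to the "detector"

feature_xy = rng.uniform(0, 200, size=(40, 2))     # matched feature positions
# Offsets a learned codebook would have stored for these features (noisy).
offsets = true_centre - feature_xy + rng.normal(0, 3, size=(40, 2))

votes = feature_xy + offsets                       # predicted object centres

# Accumulate votes on a coarse grid and keep the strongest cell.
bin_size = 10
acc = {}
for vx, vy in votes:
    cell = (int(vx // bin_size), int(vy // bin_size))
    acc[cell] = acc.get(cell, 0) + 1
best = max(acc, key=acc.get)
centre_estimate = (np.array(best) + 0.5) * bin_size
print("centre hypothesis:", centre_estimate, "from", acc[best], "votes")
```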

    Pillar-Based Object Detection for Autonomous Driving

    We present a simple and flexible object detection framework optimized for autonomous driving. Building on the observation that point clouds in this application are extremely sparse, we propose a practical pillar-based approach to fix the imbalance issue caused by anchors. In particular, our algorithm incorporates a cylindrical projection into multi-view feature learning, predicts bounding box parameters per pillar rather than per point or per anchor, and includes an aligned pillar-to-point projection module to improve the final prediction. Our anchor-free approach avoids the hyperparameter search associated with past methods, simplifying 3D object detection while significantly improving upon the state of the art.
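
    A small numpy sketch of the pillarization step implied above: LiDAR points are grouped into vertical columns on a bird's-eye-view grid and pooled into one feature vector per pillar, which a 2D detection head could then consume. The grid extents, cell size, and max/count pooling are assumptions for illustration, not the paper's exact pipeline.

```python
# Sketch: pillarize a point cloud on a bird's-eye-view grid and pool one
# simple feature vector per pillar. Extents, cell size and the pooling
# choice are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform([-40, -40, -2, 0], [40, 40, 2, 1], size=(5000, 4))  # x, y, z, intensity

x0, y0, cell = -40.0, -40.0, 0.5
nx = ny = int(80.0 / cell)

ix = np.clip(((points[:, 0] - x0) / cell).astype(int), 0, nx - 1)
iy = np.clip(((points[:, 1] - y0) / cell).astype(int), 0, ny - 1)
pillar_id = ix * ny + iy

# Per-pillar features: max height, max intensity, point count.
feat = np.full((nx * ny, 3), -np.inf)
feat[:, 2] = 0.0
np.maximum.at(feat[:, 0], pillar_id, points[:, 2])
np.maximum.at(feat[:, 1], pillar_id, points[:, 3])
np.add.at(feat[:, 2], pillar_id, 1.0)
feat[np.isinf(feat)] = 0.0                    # empty pillars get zero features

bev = feat.reshape(nx, ny, 3)                 # pseudo-image for a 2D detection head
```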

    Learning Graph Laplacian for Image Segmentation


    Active Visual Control by Stereo Active Vision Interface SAVI

    Abstract. A real-time vision system called SAVI is presented which detects faces in cluttered environments and performs particular active control tasks based on changes in the visual field. It is designed as a Perception-Action-Cycle (PAC), processing sensory data of different kinds and qualities in real time. Hence, the system is able to react instantaneously to changing conditions in the visual scene. Firstly, connected skin-colour regions are detected while the visual scene is actively observed by the binocular vision system. The detected skin-colour regions are merged if necessary and ranked by their order of saliency. Secondly, facial features are searched for in the most salient skin-colour region while the skin-colour blob is actively kept in the centre of the camera system's visual field. After a successful evaluation of the facial features, the associated person is able to give control commands to the system. These control commands can affect either the observing system itself or any other active or robotic system wired to the principal observing system via TCP/IP sockets.
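
    A rough OpenCV sketch of the first stage described above: threshold skin colour, extract connected regions, and rank them by area so the largest blob can be centred by the active camera control. The YCrCb thresholds and the camera index are illustrative assumptions, not SAVI's actual parameters.

```python
# Sketch: detect connected skin-colour regions in a frame and rank them by
# area, roughly mirroring SAVI's first stage. The YCrCb thresholds and the
# camera index are illustrative assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # any frame source would do
ok, frame = cap.read()
if ok:
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # rough skin range
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # Rank skin blobs by area (skip label 0, the background) -> saliency order.
    order = sorted(range(1, n), key=lambda i: stats[i, cv2.CC_STAT_AREA], reverse=True)
    if order:
        cx, cy = centroids[order[0]]
        # Offset from the image centre: what the pan/tilt control would try to null.
        dx, dy = cx - frame.shape[1] / 2, cy - frame.shape[0] / 2
        print("most salient skin blob at", (cx, cy), "offset", (dx, dy))
cap.release()
```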

    Instance Segmentation of Indoor Scenes Using a Coverage Loss

    Abstract. A major limitation of existing models for semantic segmentation is the inability to identify individual instances of the same class: when labeling pixels with only semantic classes, a set of pixels with the same label could represent a single object or ten. In this work, we introduce a model to perform both semantic and instance segmentation simultaneously. We introduce a new higher-order loss function that directly minimizes the coverage metric and evaluate a variety of region features, including those from a convolutional network. We apply our model to the NYU Depth V2 dataset, obtaining state-of-the-art results.
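
    A short numpy sketch of the coverage score that the proposed higher-order loss targets: each ground-truth instance is matched to its best-overlapping predicted instance by IoU, weighted by instance size. This follows the metric's usual definition rather than the paper's code.

```python
# Sketch: coverage score between ground-truth and predicted instance label
# maps. Each GT instance is matched to its best-overlapping prediction by
# IoU, weighted by instance size.
import numpy as np

def coverage(gt, pred):
    """gt, pred: integer instance-label maps of equal shape (0 = background)."""
    total, score = 0.0, 0.0
    for g in np.unique(gt):
        if g == 0:
            continue
        g_mask = gt == g
        best_iou = 0.0
        for p in np.unique(pred[g_mask]):      # only predictions touching this GT
            if p == 0:
                continue
            p_mask = pred == p
            inter = np.logical_and(g_mask, p_mask).sum()
            union = np.logical_or(g_mask, p_mask).sum()
            best_iou = max(best_iou, inter / union)
        score += g_mask.sum() * best_iou
        total += g_mask.sum()
    return score / total if total else 0.0

gt = np.array([[1, 1, 0], [2, 2, 0]])
pred = np.array([[1, 1, 1], [1, 2, 0]])
print(coverage(gt, pred))   # size-weighted best-IoU over the two GT instances
```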

    Superpixels and supervoxels in an energy optimization framework

    Abstract. Many methods for object recognition, segmentation, etc., rely on tessellation of an image into “superpixels”. A superpixel is an image patch which is better aligned with intensity edges than a rectangular patch. Superpixels can be extracted with any segmentation algorithm; however, most of them produce highly irregular superpixels, with widely varying sizes and shapes. A more regular space tessellation may be desired. We formulate the superpixel partitioning problem in an energy minimization framework, and optimize with graph cuts. Our energy function explicitly encourages regular superpixels. We explore variations of the basic energy, which allow a trade-off between a less regular tessellation but more accurate boundaries or better efficiency. Our advantage over previous work is computational efficiency, principled optimization, and applicability to 3D “supervoxel” segmentation. We achieve high boundary recall on 2D images and spatial coherence on video. We also show that compact superpixels improve accuracy on a simple application of salient object segmentation. Key words: Superpixels, supervoxels, graph cuts
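
    A brief sketch of the boundary-recall evaluation mentioned above, using skimage's SLIC purely as a stand-in superpixel generator (the paper's own method is graph-cut based); the "ground-truth" boundaries and the pixel tolerance here are placeholders for illustration.

```python
# Sketch: boundary recall of a superpixel tessellation, i.e. the fraction of
# ground-truth boundary pixels lying within a small tolerance of a superpixel
# boundary. SLIC is only a convenient superpixel source for the example.
import numpy as np
from skimage.data import astronaut
from skimage.segmentation import slic, find_boundaries
from scipy.ndimage import binary_dilation

img = astronaut()
labels = slic(img, n_segments=400, compactness=10)   # example superpixels
sp_boundary = find_boundaries(labels)

# Placeholder "ground-truth" boundary map (here: just a coarser segmentation).
gt_boundary = find_boundaries(slic(img, n_segments=50, compactness=10))

tol = 2  # pixels; boundary-recall tolerance (an assumption)
hit_zone = binary_dilation(sp_boundary, iterations=tol)
recall = np.logical_and(gt_boundary, hit_zone).sum() / max(gt_boundary.sum(), 1)
print(f"boundary recall: {recall:.3f}")
```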