Analysis of density based and fuzzy c-means clustering methods on lesion border extraction in dermoscopy images
Abstract
Background: Computer-aided segmentation and border detection in dermoscopic images is one of the core components of diagnostic procedures and therapeutic interventions for skin cancer. Automated assessment tools for dermoscopy images have become an important research field, mainly because of inter- and intra-observer variation in human interpretation. In this study, we compare two approaches to automatic border detection in dermoscopy images: density-based clustering (DBSCAN) and fuzzy c-means (FCM) clustering. In the first approach, if there is sufficient density (more than a certain number of points) around a point, either a new cluster is formed around that point or an existing cluster grows to include the point and its neighbors. The second approach uses FCM clustering, which can assign a data point to more than one cluster.
Results: Each approach is evaluated on a set of 100 dermoscopy images whose borders, manually drawn by a dermatologist, serve as the ground truth. False positives, false negatives, true positives, and true negatives are quantified by comparing the results with the manually determined borders. The output of both methods is analyzed quantitatively over three accuracy measures: border error, precision, and recall.
Conclusion: In addition to low border error and high precision and recall, the visual results showed that DBSCAN effectively delineated the targeted lesions and is a promising approach; FCM, by contrast, performed poorly, especially on the border error metric.
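The study's own implementation is not reproduced in the abstract; the sketch below illustrates the two algorithm families on per-pixel color features, assuming an RGB image scaled to [0, 1]. All parameter values (eps, min_samples, c, m) and the random stand-in image are illustrative, not the study's settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means: each sample holds a soft membership in every
    cluster, updated alternately with the cluster centers."""
    rng = np.random.default_rng(seed)
    u = rng.random((X.shape[0], c))
    u /= u.sum(axis=1, keepdims=True)                    # memberships sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]   # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))                    # standard FCM update
        u_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(u_new - u).max() < tol:
            return centers, u_new
        u = u_new
    return centers, u

# Hypothetical usage on per-pixel RGB features of a small image:
img = np.random.default_rng(1).random((64, 64, 3))       # stand-in for a dermoscopy image
X = img.reshape(-1, 3)
db_labels = DBSCAN(eps=0.05, min_samples=10).fit_predict(X)  # -1 marks noise points
centers, u = fuzzy_c_means(X, c=2)                       # e.g. lesion vs. background
fcm_labels = u.argmax(axis=1)                            # harden the soft memberships
```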
Robots for Humanity: Using Assistive Robots to Empower People with Disabilities
Assistive mobile manipulators have the potential to one day serve as surrogates and helpers for people with disabilities, giving them the freedom to perform tasks such as scratching an itch, picking up a cup, or socializing with their families. This article introduces a collaborative project with the goal of putting assistive mobile manipulators into real homes to work with people with disabilities. Through a participatory design process in which users have been actively involved from day one, we are identifying and developing assistive capabilities for the PR2 robot. Our approach is to develop a diverse suite of open source software tools that blend the capabilities of the user and the robot. Within this article, we introduce the project, describe our progress, and discuss lessons we have learned.
This is an author's peer-reviewed final manuscript, as accepted by the publisher. The published article is copyrighted by IEEE and can be found at: http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=100
Keywords: Medical robotics, Biomedical equipment, Robots, Control systems, Handicapped aids, Software development, Exoskeletons, Sensory aids, Human factors, Prosthetic
Using Multi-view Recognition and Meta-data Annotation to Guide a Robot's Attention
In the transition from industrial to service robotics, robots will have to deal with increasingly unpredictable and variable environments. We present a system that is able to recognize objects of a certain class in an image and to identify their parts for potential interactions. The method can recognize objects from arbitrary viewpoints and generalizes to instances that have never been observed during training, even if they are partially occluded and appear against cluttered backgrounds. Our approach builds on the implicit shape model of Leibe et al. We extend it to couple recognition to the provision of meta-data.
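The abstract only names the implicit shape model; as a rough illustration of the underlying idea (generalized Hough voting from a visual-word codebook), a sketch might look like the following. Here match_word and offsets_by_word are hypothetical stand-ins for a learned codebook, not the authors' implementation.

```python
import numpy as np

def ism_vote(keypoints, descriptors, match_word, offsets_by_word, grid_shape):
    """Generalized-Hough voting in the spirit of an implicit shape model:
    each local feature is matched to a codebook word, and every offset
    stored for that word casts a weighted vote for the object centre."""
    acc = np.zeros(grid_shape)
    for (x, y), desc in zip(keypoints, descriptors):
        word = match_word(desc)                        # nearest codebook entry
        for dx, dy, weight in offsets_by_word.get(word, []):
            cx, cy = int(round(x + dx)), int(round(y + dy))
            if 0 <= cx < grid_shape[0] and 0 <= cy < grid_shape[1]:
                acc[cx, cy] += weight
    return acc    # local maxima of acc are object-centre hypotheses
```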
Pillar-Based Object Detection for Autonomous Driving
We present a simple and flexible object detection framework optimized for autonomous driving. Building on the observation that point clouds in this application are extremely sparse, we propose a practical pillar-based approach that fixes the imbalance issue caused by anchors. In particular, our algorithm incorporates a cylindrical projection into multi-view feature learning, predicts bounding box parameters per pillar rather than per point or per anchor, and includes an aligned pillar-to-point projection module to improve the final prediction. Our anchor-free approach avoids the hyperparameter search associated with past methods, simplifying 3D object detection while significantly improving on the state of the art.
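As a rough illustration of the pillar representation (not the paper's implementation), the sketch below bins a LiDAR point cloud into vertical birds-eye-view pillars; the ranges and cell size are made-up values.

```python
import numpy as np

def pillarize(points, x_range=(0.0, 80.0), y_range=(-40.0, 40.0), cell=0.25):
    """Group a point cloud of shape (N, 3+) into vertical 'pillars' on a
    birds-eye-view grid; z is ignored when binning, so each pillar collects
    all points above one ground cell."""
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)  # drop out-of-range points
    pillars = {}
    for i, j, p in zip(ix[keep], iy[keep], points[keep]):
        pillars.setdefault((int(i), int(j)), []).append(p)
    return pillars  # an anchor-free head would then predict one box per occupied pillar
```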
Active Visual Control by Stereo Active Vision Interface SAVI
Abstract. A real-time vision system called SAVI is presented which detects faces in cluttered environments and performs particular active control tasks based on changes in the visual field. It is designed as a Perception-Action Cycle (PAC), processing sensory data of different kinds and qualities in real time. Hence, the system is able to react instantaneously to changing conditions in the visual scene. First, connected skin-colour regions are detected while the visual scene is actively observed by a binocular vision system. The detected skin-colour regions are merged if necessary and ranked by their order of saliency. Second, facial features are searched for in the most salient skin-colour region while the skin-colour blob is actively kept in the centre of the camera system's visual field. After a successful evaluation of the facial features, the associated person is able to give control commands to the system. These commands can affect either the observing system itself or any other active or robotic system connected to the principal observing system via TCP/IP sockets.
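The paper's skin-colour model is not reproduced in the abstract; the sketch below uses a common YCrCb threshold heuristic and ranks connected regions by area as a crude stand-in for SAVI's saliency ranking. The threshold values are conventional defaults, not taken from the paper.

```python
import cv2
import numpy as np

def skin_regions(bgr):
    """Detect connected skin-colour regions in a BGR image and rank them
    by area. The YCrCb bounds are a widely used heuristic, not SAVI's model."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # skin-tone band
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    order = np.argsort(stats[1:, cv2.CC_STAT_AREA])[::-1] + 1  # skip label 0 (background)
    return [(tuple(centroids[i]), int(stats[i, cv2.CC_STAT_AREA])) for i in order]
```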
Instance Segmentation of Indoor Scenes Using a Coverage Loss
Abstract. A major limitation of existing models for semantic segmentation is the inability to identify individual instances of the same class: when labeling pixels with only semantic classes, a set of pixels with the same label could represent a single object or ten. In this work, we introduce a model to perform both semantic and instance segmentation simultaneously. We introduce a new higher-order loss function that directly minimizes the coverage metric, and we evaluate a variety of region features, including those from a convolutional network. We apply our model to the NYU Depth V2 dataset, obtaining state-of-the-art results.
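The loss itself is not given in the abstract; as a reference point, one standard definition of the weighted coverage metric, which the loss is said to minimize directly, can be computed as below. Here gt and pred are assumed to be integer instance-label maps with 0 as background.

```python
import numpy as np

def coverage(gt, pred):
    """Weighted coverage: for each ground-truth instance, the best IoU
    achieved by any predicted region, weighted by the instance's size."""
    total, score = 0, 0.0
    for g in np.unique(gt):
        if g == 0:
            continue                      # skip background
        gmask = gt == g
        best = 0.0
        for p in np.unique(pred[gmask]):  # only regions that overlap can score
            if p == 0:
                continue
            pmask = pred == p
            iou = np.logical_and(gmask, pmask).sum() / np.logical_or(gmask, pmask).sum()
            best = max(best, iou)
        score += gmask.sum() * best
        total += gmask.sum()
    return score / max(total, 1)
```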
Superpixels and supervoxels in an energy optimization framework
Abstract. Many methods for object recognition, segmentation, etc., rely on tessellation of an image into "superpixels". A superpixel is an image patch which is better aligned with intensity edges than a rectangular patch. Superpixels can be extracted with any segmentation algorithm; however, most algorithms produce highly irregular superpixels, with widely varying sizes and shapes. A more regular space tessellation may be desired. We formulate the superpixel partitioning problem in an energy minimization framework and optimize with graph cuts. Our energy function explicitly encourages regular superpixels. We explore variations of the basic energy that trade a less regular tessellation for more accurate boundaries or better efficiency. Our advantages over previous work are computational efficiency, principled optimization, and applicability to 3D "supervoxel" segmentation. We achieve high boundary recall on 2D images and spatial coherence on video. We also show that compact superpixels improve accuracy in a simple application of salient object segmentation. Key words: Superpixels, supervoxels, graph cuts
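The abstract does not state the energy itself; a generic form consistent with the description (a compactness data term plus a contrast-sensitive boundary term, minimized with graph cuts) would be, as a sketch rather than the paper's exact formulation:

```latex
% Illustrative superpixel energy: f_p is the superpixel label of pixel p,
% c_{f_p} the centre of that superpixel's seed patch, and \mathcal{N} the
% set of neighbouring pixel pairs. The first term encourages compact,
% regular superpixels; the Potts boundary term, weighted by local image
% contrast w_{pq}, encourages boundaries aligned with intensity edges.
\[
E(f) \;=\; \sum_{p} \bigl\lVert p - c_{f_p} \bigr\rVert
\;+\; \lambda \sum_{(p,q) \in \mathcal{N}} w_{pq}\,[\,f_p \neq f_q\,]
\]
```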