Skin Segmentation and Skull Segmentation for Medical Imaging
In this paper, we present tools for medical imaging applications that perform skin and skull segmentation in a short time. The desired output of skin segmentation is a 3D visualization of the facial skin without any cavities or holes inside the head, while skull segmentation aims to create a 3D visualization of the skull bones. Skin segmentation proceeds by thresholding the image, extracting the largest connected component, and hole-filling to close the unwanted holes. Skull segmentation is done by removing the spine, which is connected to the skull, and then extracting the largest connected component. Afterwards, mesh generation produces the 3D objects from the processed images using the marching cubes algorithm. The test results show that skin and skull segmentation work well when no other objects are connected to the skin or the skull. The skin segmentation process takes a significant amount of time, primarily due to the hole-filling step.
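The three-step skin pipeline described above (threshold, keep the largest connected component, fill internal holes) can be sketched with SciPy. This is a minimal illustration, not the paper's implementation; the threshold value and function name are assumptions:

```python
import numpy as np
from scipy import ndimage

def segment_skin(volume, threshold=-300):
    """Threshold -> largest connected component -> hole-filling.

    `volume` is a 3D intensity array (e.g. CT values); the default
    threshold here is an illustrative assumption, not the paper's value.
    """
    mask = volume > threshold                       # 1. threshold the image
    labels, n = ndimage.label(mask)                 # 2. connected components
    if n == 0:
        return np.zeros_like(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    largest = labels == (np.argmax(sizes) + 1)      #    keep the largest one
    return ndimage.binary_fill_holes(largest)       # 3. fill internal cavities
```

Hole-filling on a full 3D volume visits every voxel, which is consistent with the authors' observation that this step dominates the runtime.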
Skin Segmentation, Skull Segmentation, and Mesh Generation Tool of Medical Image
The development of information and communication technology (ICT) has reached many fields, one of which is medicine. New methods of treatment and diagnosis are now based on ICT, such as MRI and CT scanners. Both MRI and CT produce volume images containing scans of internal organs, stored as DICOM (Digital Imaging and Communications in Medicine) images. For various needs in the medical world, DICOM image processing is necessary. This thesis aims to build a tool that loads a volume image, performs skin and skull segmentation in a short time, and generates meshes from the processed DICOM images. Skin segmentation is done by thresholding the image, extracting the largest connected component, and hole-filling to close the unwanted holes. Skull segmentation is done by removing the spine, which is connected to the skull, and then extracting the largest connected component. Afterwards, mesh generation produces the 3D objects from the processed images using the marching cubes algorithm. The test results show that skin and skull segmentation work well when no other objects are connected to the skin or the skull. The skin segmentation process takes a significant amount of time, primarily due to the hole-filling step. The time required for mesh generation depends on the complexity of the image, and the quality of the resulting mesh is affected by the resolution reduction ratio, the relaxation factor, and the number of smoothing iterations.
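The marching cubes step can be sketched with scikit-image's implementation (the thesis does not name a library, so this choice is an assumption; the sphere volume below is a stand-in for a segmented binary mask):

```python
import numpy as np
from skimage import measure

# Synthetic stand-in for a segmented volume: a filled sphere of radius 10
# (the grid size and radius are illustrative assumptions).
grid = np.mgrid[-16:16, -16:16, -16:16]
volume = (grid[0]**2 + grid[1]**2 + grid[2]**2 < 10**2).astype(np.float32)

# Extract a triangle mesh of the isosurface between inside (1) and outside (0).
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
```

`verts` holds the 3D vertex coordinates and `faces` indexes triangles into it, which is exactly the representation a 3D viewer consumes; smoothing and resolution reduction would then be applied to this mesh.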
Automatic skin segmentation for gesture recognition combining region and support vector machine active learning
Skin segmentation is the cornerstone of many applications, such as gesture recognition, face detection, and objectionable-image filtering. In this paper, we address the skin segmentation problem for gesture recognition. First, given a gesture video sequence, a generic skin model is applied to the first few frames to automatically collect training data. Then, an SVM classifier based on active learning identifies the skin pixels. Finally, the results are refined by incorporating region segmentation. The proposed algorithm is fully automatic and adapts to different signers. We tested our approach on the ECHO database; compared with existing algorithms, our method achieves better performance.
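The core step, training a per-pixel SVM on colour features gathered by a generic skin model, can be sketched with scikit-learn. The synthetic colour clusters below are illustrative assumptions, not the paper's data, and the active-learning loop is omitted:

```python
import numpy as np
from sklearn.svm import SVC

# Pretend the generic skin model labelled these pixels in the first frames
# (the colour means and spreads are made up for illustration).
rng = np.random.default_rng(0)
skin = rng.normal([200, 140, 120], 15, size=(300, 3))       # skin-like RGB
non_skin = rng.normal([60, 90, 60], 25, size=(300, 3))      # background RGB
X = np.vstack([skin, non_skin])
y = np.array([1] * 300 + [0] * 300)

# Train the per-pixel classifier, then label pixels of a later frame.
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
pred = clf.predict([[205, 138, 118], [55, 95, 65]])
```

In the paper's active-learning setting, the classifier would additionally query labels for the pixels it is least certain about instead of training on all collected samples at once.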
Driver Distraction Identification with an Ensemble of Convolutional Neural Networks
The World Health Organization (WHO) reported 1.25 million deaths yearly due to road traffic accidents worldwide, and the number has been rising continuously over the last few years. Nearly a fifth of these accidents are caused by distracted drivers. Existing work on distracted-driver detection is concerned with a small set of distractions (mostly cell phone usage), and unreliable ad-hoc methods are often used. In this paper, we present the first publicly available dataset for driver distraction identification with more distraction postures than existing alternatives. In addition, we propose a reliable deep-learning-based solution that achieves 90% accuracy. The system consists of a genetically weighted ensemble of convolutional neural networks; we show that weighting an ensemble of classifiers with a genetic algorithm yields better classification confidence. We also study the effect of different visual elements on distraction detection by means of face and hand localization and skin segmentation. Finally, we present a thinned version of our ensemble that achieves 84.64% classification accuracy and can operate in real time. Comment: arXiv admin note: substantial text overlap with arXiv:1706.0949
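A genetically weighted ensemble, as described above, searches for per-network weights that maximize the accuracy of the fused predictions. The tiny genetic algorithm below is an illustrative sketch under assumed settings (population size, crossover, and mutation scale are not the paper's configuration):

```python
import numpy as np

def ga_ensemble_weights(probs, labels, pop=30, gens=40, seed=0):
    """Search classifier weights with a small genetic algorithm.

    `probs` has shape (n_classifiers, n_samples, n_classes): the softmax
    outputs of each network on a validation set. All GA hyperparameters
    are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    n = probs.shape[0]

    def fitness(w):
        fused = np.tensordot(w, probs, axes=1)           # weighted sum of outputs
        return np.mean(fused.argmax(axis=1) == labels)   # ensemble accuracy

    population = rng.random((pop, n))
    for _ in range(gens):
        scores = np.array([fitness(w) for w in population])
        elite = population[np.argsort(scores)[-(pop // 2):]]      # keep best half
        parents = elite[rng.integers(0, len(elite), size=(pop, 2))]
        children = parents.mean(axis=1)                  # crossover: average parents
        children += rng.normal(0.0, 0.1, children.shape) # mutation
        population = np.clip(children, 0.0, None)        # weights stay non-negative
    best = max(population, key=fitness)
    return best / best.sum()                             # normalize to sum to 1
```

Because the fitness is evaluated on held-out validation accuracy, the GA naturally down-weights networks whose errors overlap with the rest of the ensemble.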
A hybrid technique for face detection in color images
In this paper, a hybrid technique for face detection in color images is presented. The proposed technique combines three analysis models: skin detection, automatic eye localization, and appearance-based face/non-face classification. Using a robust histogram-based skin detection model, skin-like pixels are first identified in the RGB color space, and face bounding boxes are extracted from the image. When a face bounding box is detected, approximate positions of candidate mouth feature points are identified using the redness property of image pixels. A region-based eye localization step, based on the detected mouth feature points, is then applied to the face bounding boxes to locate possible eye feature points. Based on the distance between the detected eye feature points, face/non-face classification is performed over a normalized search area using the Bayesian discriminating feature (BDF) analysis method. Subjective evaluation results are presented on images taken with digital cameras and a webcam, representing both indoor and outdoor scenes.
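A histogram-based skin model like the first stage above can be sketched in a few lines: build a normalized RGB colour histogram from labelled skin pixels, then classify a pixel by the probability of its bin. Bin count and threshold below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def fit_skin_histogram(pixels, bins=32):
    """Normalized RGB histogram from labelled skin pixels (shape (N, 3))."""
    hist, _ = np.histogramdd(pixels, bins=bins, range=[(0, 256)] * 3)
    return hist / hist.sum()

def is_skin(hist, pixel, bins=32, thresh=1e-4):
    """Classify one RGB pixel by looking up its histogram bin."""
    idx = tuple((np.asarray(pixel) * bins // 256).astype(int))
    return hist[idx] > thresh
```

In practice a second, non-skin histogram is often fitted as well, and the ratio of the two probabilities is thresholded instead of the skin probability alone.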
Fair comparison of skin detection approaches on publicly available datasets
Skin detection is the process of discriminating skin and non-skin regions in a digital image, and it is widely used in several applications, ranging from hand gesture analysis and body-part tracking to face detection. Skin detection is a challenging problem that has drawn extensive attention from the research community; nevertheless, a fair comparison among approaches is very difficult due to the lack of a common benchmark and a unified testing protocol. In this work, we investigate the most recent research in this field and propose a fair comparison among approaches using several different datasets. The major contributions of this work are an exhaustive literature review of skin color detection approaches; a framework for evaluating and combining different skin detectors, whose source code is made freely available for future research; and an extensive experimental comparison among several recent methods, which have also been used to define an ensemble that works well in many different problems. Experiments are carried out on 10 different datasets including more than 10,000 labelled images; the experimental results confirm that the best method proposed here obtains very good performance with respect to other stand-alone approaches, without requiring ad hoc parameter tuning. A MATLAB version of the framework for testing and of the methods proposed in this paper will be freely available from https://github.com/LorisNann