
    Skin-color modeling and adaptation

    Get PDF

    Tracking Skin-Colored Objects in Real-Time

    Get PDF
    We present a methodology for tracking multiple skin-colored objects in a monocular image sequence. The proposed approach encompasses a collection of techniques that allow the modeling, detection, and temporal association of skin-colored objects across image sequences. A non-parametric model of skin color is employed. Skin-colored objects are detected with a Bayesian classifier that is bootstrapped with a small set of training data and refined through an off-line iterative training procedure. By using on-line adaptation of skin-color probabilities, the classifier is able to cope with considerable illumination changes. Tracking over time is achieved by a novel technique that can handle multiple objects simultaneously. Tracked objects may move in complex trajectories, occlude each other in the field of view of a possibly moving camera, and vary in number over time. A prototype implementation of the developed system operates on 320x240 live video in real time (28 Hz), running on a conventional Pentium IV processor. Representative experimental results from the application of this prototype to image sequences are also presented.
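
    The abstract above describes a Bayesian classifier built on a non-parametric (histogram) skin-color model with on-line adaptation. The sketch below is a minimal illustration of that general idea, not the authors' implementation; the chrominance space, bin count, prior, and adaptation rate are assumptions.

```python
import numpy as np

class HistogramSkinClassifier:
    """Minimal Bayes-rule skin classifier over a 2D chrominance histogram.

    Illustrative only: bin count, chrominance space, prior, and adaptation
    rate are assumptions, not values from the paper.
    """

    def __init__(self, bins=32, prior_skin=0.4, adapt_rate=0.05):
        self.bins = bins
        self.prior_skin = prior_skin
        self.adapt_rate = adapt_rate
        # Laplace-smoothed histograms of skin and non-skin training pixels.
        self.hist_skin = np.ones((bins, bins))
        self.hist_nonskin = np.ones((bins, bins))

    def _bin(self, uv):
        # uv: (N, 2) chrominance values assumed to lie in [0, 256)
        return np.clip(uv * self.bins // 256, 0, self.bins - 1).astype(int)

    def train(self, uv, is_skin):
        """Accumulate labelled training pixels into the appropriate histogram."""
        b = self._bin(uv)
        hist = self.hist_skin if is_skin else self.hist_nonskin
        np.add.at(hist, (b[:, 0], b[:, 1]), 1)

    def posterior(self, uv):
        """P(skin | color) by Bayes' rule on the two normalized histograms."""
        b = self._bin(uv)
        p_c_skin = self.hist_skin[b[:, 0], b[:, 1]] / self.hist_skin.sum()
        p_c_nonskin = self.hist_nonskin[b[:, 0], b[:, 1]] / self.hist_nonskin.sum()
        num = p_c_skin * self.prior_skin
        return num / (num + p_c_nonskin * (1.0 - self.prior_skin) + 1e-12)

    def adapt(self, uv, post, threshold=0.8):
        """On-line adaptation: fold confidently classified skin pixels back in."""
        confident = uv[post > threshold]
        if len(confident):
            b = self._bin(confident)
            np.add.at(self.hist_skin, (b[:, 0], b[:, 1]), self.adapt_rate)
```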

    Light scattering and color adaptation that originate from a natural nanomaterial

    Get PDF
    Color is ubiquitous in nature; however, the ability to rapidly change color in response to environmental cues is unique to a few biological systems. Cephalopods, including squid, octopus, and cuttlefish, are one such system; they use sophisticated optical organs that assist in color adaptation in different environments. While several attempts have been made to explore, understand, and exploit the adaptive coloration of cephalopods for materials applications, much of the progress to date has relied on modeling that assumes all light not reflected or transmitted is absorbed, which ignores the contribution of light scattering in the skin. We believe that scattering plays a significant role in color perception and should be included in discussions of new colors and color-changing materials. We argue that both forward and backward scattering must be accounted for in the optical analysis of a sample; otherwise, an incorrect absorption spectrum, and a correspondingly incorrect color analysis, may be deduced from the experimental data. To test these hypotheses, we fabricated films of multiple thicknesses comprising a distribution of bio-derived pigmented nanoparticles. To achieve these different thicknesses, we cast suspensions (0.16-2.45 mg/ml) of nanoparticles, first isolated and purified from the skin of the squid Doryteuthis pealeii, onto functionalized surfaces. We chose squid particles for our model system due to their unique refractive index (n = 1.92) and their ability to potentiate color change via translocation in the skin. The color quality and consistency of the films were measured using the International Commission on Illumination (CIE) tristimulus values. We observed that both color and brightness in the mimetic films could be controlled by varying the particle layer thickness and by combining a back-reflector with a specific band pass, illustrating new materials applications for these biological nanostructures. Diffuse and specular scattering of the granules was also measured using experimental and theoretical approaches. We observed that the squid-derived pigments not only provide rich color but can also scatter attenuated light. Combined, these characteristics make such bio-derived materials interesting candidates for future topical materials such as cosmetics and coatings designed to provide color or color matching to a specific environment.
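
    The film colors above are characterized with CIE tristimulus values. As a point of reference, the sketch below shows the standard colorimetric calculation that turns a measured spectrum into (X, Y, Z) and chromaticity coordinates; the color-matching functions and illuminant data (e.g., the CIE 1931 2° observer with D65) must be supplied by the caller, and this is not the authors' measurement pipeline.

```python
import numpy as np

def cie_tristimulus(wavelengths, spectrum, illuminant, xbar, ybar, zbar):
    """Integrate a measured reflectance/transmittance spectrum against the
    CIE color-matching functions (all arrays sampled on the same wavelength grid).

    Returns the (X, Y, Z) tristimulus values, normalized so that a perfect
    reflector has Y = 100, plus the (x, y) chromaticity coordinates.
    """
    # Normalization constant from the illuminant and the y-bar function.
    k = 100.0 / np.trapz(illuminant * ybar, wavelengths)
    X = k * np.trapz(spectrum * illuminant * xbar, wavelengths)
    Y = k * np.trapz(spectrum * illuminant * ybar, wavelengths)
    Z = k * np.trapz(spectrum * illuminant * zbar, wavelengths)
    s = X + Y + Z
    return (X, Y, Z), (X / s, Y / s)
```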

    Fair comparison of skin detection approaches on publicly available datasets

    Full text link
    Skin detection is the process of discriminating skin from non-skin regions in a digital image, and it is widely used in applications ranging from hand-gesture analysis and body-part tracking to face detection. Skin detection is a challenging problem that has drawn extensive attention from the research community; nevertheless, a fair comparison among approaches is difficult due to the lack of a common benchmark and a unified testing protocol. In this work, we survey the most recent research in this field and propose a fair comparison among approaches using several different datasets. The major contributions of this work are an exhaustive literature review of skin color detection approaches, a framework to evaluate and combine different skin detection approaches, whose source code is made freely available for future research, and an extensive experimental comparison among several recent methods, which have also been used to define an ensemble that works well in many different problems. Experiments are carried out on 10 different datasets including more than 10,000 labelled images; the experimental results confirm that the best method proposed here performs very well with respect to the other stand-alone approaches, without requiring ad hoc parameter tuning. A MATLAB version of the framework for testing and of the methods proposed in this paper will be freely available from https://github.com/LorisNann
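
    The framework described above evaluates and combines different skin detectors. As a rough illustration of one way such an ensemble can be formed, the sketch below fuses per-pixel probability maps with a weighted sum rule; this fusion rule is an assumption for illustration, not necessarily the one used in the paper.

```python
import numpy as np

def fuse_skin_detectors(prob_maps, weights=None, threshold=0.5):
    """Weighted sum-rule fusion of per-pixel skin-probability maps.

    prob_maps: list of H x W float arrays in [0, 1], one per detector.
    Returns the fused probability map and a binary skin mask.
    """
    stack = np.stack(prob_maps).astype(float)          # shape (D, H, W)
    if weights is None:
        weights = np.full(len(prob_maps), 1.0 / len(prob_maps))
    fused = np.tensordot(weights, stack, axes=1)       # weighted average over detectors
    return fused, fused >= threshold
```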

    Aerospace Medicine and Biology: A continuing bibliography with indexes, supplement 199

    Get PDF
    This bibliography lists 82 reports, articles, and other documents introduced into the NASA scientific and technical information system in October 1979.

    Multi-Resolution Texture Coding for Multi-Resolution 3D Meshes

    Full text link
    We present an innovative system to encode and transmit textured multi-resolution 3D meshes progressively, with no need to send several texture images, one for each mesh LOD (Level Of Detail). All texture LODs are created from the finest one (associated with the finest mesh), but can be reconstructed progressively from the coarsest thanks to refinement images calculated in the encoding process and transmitted only if needed. This allows us to adjust the LOD/quality of both the 3D mesh and the texture according to the rendering power of the device that will display them and to the network capacity. Additionally, we achieve significant savings in data transmission by avoiding texture coordinates altogether, as they are generated automatically by an unwrapping system agreed upon by both encoder and decoder.
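
    To make the refinement-image idea concrete, the toy sketch below encodes a grayscale texture as a coarse base plus per-level difference images and reconstructs any LOD from them; the naive downsampling scheme and the absence of entropy coding are simplifying assumptions, not the paper's codec.

```python
import numpy as np

def encode_texture_lods(finest, n_levels=3):
    """Split a grayscale texture into a coarse base plus per-level refinement
    images, each being the difference between an LOD and the upsampled previous LOD."""
    lods = [np.asarray(finest, dtype=float)]
    for _ in range(n_levels - 1):
        lods.append(lods[-1][::2, ::2])                # naive 2x downsampling (assumption)
    lods = lods[::-1]                                  # coarsest first
    base, refinements = lods[0], []
    for prev, cur in zip(lods[:-1], lods[1:]):
        up = np.kron(prev, np.ones((2, 2)))[:cur.shape[0], :cur.shape[1]]
        refinements.append(cur - up)                   # sent only if the client needs this LOD
    return base, refinements

def decode_texture(base, refinements, level):
    """Reconstruct the texture up to the requested LOD from the base and the
    refinement images received so far."""
    tex = base
    for ref in refinements[:level]:
        tex = np.kron(tex, np.ones((2, 2)))[:ref.shape[0], :ref.shape[1]] + ref
    return tex
```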

    Automatic skin segmentation for gesture recognition combining region and support vector machine active learning

    Get PDF
    Skin segmentation is the cornerstone of many applications such as gesture recognition, face detection, and objectionable-image filtering. In this paper, we address the skin segmentation problem for gesture recognition. Initially, given a gesture video sequence, a generic skin model is applied to the first few frames to automatically collect training data. Then, an SVM classifier based on active learning is used to identify skin pixels. Finally, the results are improved by incorporating region segmentation. The proposed algorithm is fully automatic and adapts to different signers. We have tested our approach on the ECHO database; compared with other existing algorithms, our method achieves better performance.
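
    The pipeline above bootstraps an SVM from pixels labelled by a generic skin model. The sketch below illustrates that bootstrapping step only; the Cr/Cb bounds standing in for the generic model are a common rule of thumb rather than the paper's model, and the active-learning loop and region-based refinement described in the abstract are omitted.

```python
import numpy as np
from sklearn.svm import SVC

def collect_training_pixels(frame_ycrcb, n_samples=2000, seed=0):
    """Label pixels of an early frame with a generic Cr/Cb skin range.

    The Cr/Cb bounds are a common rule of thumb, used here only as a stand-in
    for the paper's generic skin model.
    """
    cr, cb = frame_ycrcb[..., 1], frame_ycrcb[..., 2]
    skin_mask = (cr > 133) & (cr < 173) & (cb > 77) & (cb < 127)
    pixels = frame_ycrcb.reshape(-1, 3).astype(float)
    labels = skin_mask.reshape(-1).astype(int)
    idx = np.random.default_rng(seed).choice(
        len(pixels), size=min(n_samples, len(pixels)), replace=False)
    return pixels[idx], labels[idx]

def train_skin_svm(frame_ycrcb):
    """Fit an RBF-kernel SVM on the automatically collected pixel labels."""
    X, y = collect_training_pixels(frame_ycrcb)
    clf = SVC(kernel="rbf", gamma="scale", probability=True)
    clf.fit(X, y)          # assumes both skin and non-skin pixels were sampled
    return clf

def segment_skin(clf, frame_ycrcb, threshold=0.5):
    """Per-pixel skin mask for a new frame from the trained SVM."""
    h, w, _ = frame_ycrcb.shape
    probs = clf.predict_proba(frame_ycrcb.reshape(-1, 3).astype(float))[:, 1]
    return probs.reshape(h, w) >= threshold
```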

    Cancer diagnosis using deep learning: A bibliographic review

    Get PDF
    In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis and the classification methods typically used by doctors, to give readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient in terms of performance. Moreover, to accommodate a broad audience, the basic evaluation criteria are also discussed. These criteria include the receiver operating characteristic (ROC) curve, the area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Because the traditional methods are considered inefficient, better and smarter methods for cancer diagnosis are needed. Artificial intelligence applied to cancer diagnosis is gaining attention as a way to build better diagnostic tools; in particular, deep neural networks can be successfully used for intelligent image analysis. The basic framework of how such machine learning operates on medical imaging, i.e., pre-processing, image segmentation, and post-processing, is provided in this study. The second part of this manuscript describes different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNN), and multi-instance learning convolutional neural networks (MIL-CNN). For each technique, we provide Python code to allow interested readers to experiment with the cited algorithms on their own diagnostic problems. The third part of this manuscript compiles deep learning models successfully applied to different types of cancer. Considering the length of the manuscript, we restrict ourselves to breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to give researchers who intend to implement deep learning and artificial neural networks for cancer diagnosis a from-scratch overview of the state-of-the-art achievements.
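
    As a companion to the evaluation criteria listed above, the sketch below computes the label-based metrics (accuracy, sensitivity, specificity, precision, F1/Dice, Jaccard) from confusion-matrix counts; ROC and AUC are omitted because they require continuous scores rather than hard labels.

```python
import numpy as np

def diagnostic_metrics(y_true, y_pred):
    """Binary evaluation criteria from the review, computed from raw counts.

    y_true, y_pred: 1D arrays of 0/1 labels (1 = malignant). For binary labels
    the Dice coefficient equals F1; the Jaccard index is intersection over union.
    """
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sensitivity = tp / (tp + fn)                       # recall / true positive rate
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)   # equals Dice
    jaccard = tp / (tp + fp + fn)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision,
            "f1": f1, "dice": f1, "jaccard": jaccard}
```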