4,895 research outputs found

    Bag of Color Features For Color Constancy

    In this paper, we propose a novel color constancy approach, called Bag of Color Features (BoCF), building upon Bag-of-Features pooling. The proposed method substantially reduces the number of parameters needed for illumination estimation. At the same time, it is consistent with the color constancy assumption that global spatial information is not relevant for illumination estimation and local information (edges, etc.) is sufficient. Furthermore, BoCF is consistent with statistical color constancy approaches and can be interpreted as a learning-based generalization of many of them. To further improve the illumination estimation accuracy, we propose a novel attention mechanism for the BoCF model, with two variants based on self-attention. The BoCF approach and its variants achieve results competitive with the state of the art, while requiring far fewer parameters, on three benchmark datasets: ColorChecker RECommended, INTEL-TUT version 2, and NUS8.
    Comment: 12 pages, 5 figures, 6 tables
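    Below is a minimal PyTorch sketch of the Bag-of-Features pooling idea described in this abstract: local features are soft-assigned to a learned codebook and pooled into an order-less histogram, from which the illuminant is regressed. The backbone width, codebook size, and names are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal Bag-of-Features pooling sketch for illuminant estimation.
# All layer sizes are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BoCFSketch(nn.Module):
    def __init__(self, num_codewords=150):
        super().__init__()
        # Shallow backbone: only local features, per the assumption that
        # global spatial information is not needed for this task.
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Learned codebook implemented as a 1x1 convolution.
        self.codebook = nn.Conv2d(64, num_codewords, kernel_size=1)
        self.regressor = nn.Linear(num_codewords, 3)  # RGB illuminant

    def forward(self, x):
        f = self.features(x)                    # B x 64 x H x W
        # Soft-assign every location to the codewords, then pool into an
        # order-less histogram (spatial mean), discarding spatial layout.
        assign = F.softmax(self.codebook(f), dim=1)
        hist = assign.mean(dim=(2, 3))          # B x num_codewords
        return F.normalize(self.regressor(hist), dim=1)

print(BoCFSketch()(torch.rand(2, 3, 128, 128)).shape)  # torch.Size([2, 3])
```

    The spatial mean over codeword assignments is what makes the representation order-less: any permutation of image patches yields the same histogram, matching the stated assumption.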

    Convolutional Color Constancy

    Color constancy is the problem of inferring the color of the light that illuminated a scene, usually so that the illumination color can be removed. Because this problem is underconstrained, it is often solved by modeling the statistical regularities of the colors of natural objects and illumination. In contrast, in this paper we reformulate the problem of color constancy as a 2D spatial localization task in a log-chrominance space, thereby allowing us to apply techniques from object detection and structured prediction to the color constancy problem. By directly learning how to discriminate between correctly white-balanced images and poorly white-balanced images, our model is able to improve performance on standard benchmarks by nearly 40%.
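    The sketch below illustrates the log-chrominance reformulation: each pixel maps to a 2D chroma coordinate, and the whole image becomes a 2D histogram in which illuminant estimation is a localization problem. The (u, v) = (log(g/r), log(g/b)) convention and the histogram-mode estimate are assumptions standing in for the paper's exact parameterization and learned convolutional scoring filter.

```python
# Log-chrominance histogram sketch; the argmax is a toy stand-in for
# the learned localization step described in the abstract.
import numpy as np

def log_chroma_histogram(rgb, bins=64, lim=2.0):
    """rgb: float array (H, W, 3) with positive linear values."""
    eps = 1e-6
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    u = np.log((g + eps) / (r + eps)).ravel()
    v = np.log((g + eps) / (b + eps)).ravel()
    hist, u_edges, v_edges = np.histogram2d(
        u, v, bins=bins, range=[[-lim, lim], [-lim, lim]])
    return hist / max(hist.sum(), 1), u_edges, v_edges

rgb = np.random.rand(64, 64, 3) + 0.1
hist, u_edges, v_edges = log_chroma_histogram(rgb)
# A global illuminant shift translates this histogram, which is why
# localization techniques apply; here the mode is a crude estimate.
iu, iv = np.unravel_index(hist.argmax(), hist.shape)
print("histogram mode at (u, v) =", u_edges[iu], v_edges[iv])
```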

    Evaluating color texture descriptors under large variations of controlled lighting conditions

    The recognition of color texture under varying lighting conditions is still an open issue. Several features have been proposed for this purpose, ranging from traditional statistical descriptors to features extracted with neural networks. Still, it is not completely clear under what circumstances one feature performs better than the others. In this paper we report an extensive comparison of old and new texture features, with and without a color normalization step, with a particular focus on how they are affected by small and large variations in the lighting conditions. The evaluation is performed on a new texture database including 68 samples of raw food acquired under 46 conditions that present single and combined variations of light color, direction and intensity. The database makes it possible to systematically investigate the robustness of texture descriptors across a large range of variations in imaging conditions.
    Comment: Submitted to the Journal of the Optical Society of America
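    As a concrete illustration of the kind of color normalization step this comparison evaluates, here is a minimal sketch using the classic Gray-World assumption before descriptor extraction. The channel-histogram descriptor is a placeholder, not one of the features compared in the paper.

```python
# Gray-World normalization before a (placeholder) texture descriptor.
import numpy as np

def gray_world_normalize(rgb):
    """Scale each channel so its mean matches the global mean."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    return rgb * (means.mean() / (means + 1e-6))

def simple_descriptor(rgb, bins=16):
    # Placeholder feature: concatenated per-channel histograms.
    return np.concatenate([
        np.histogram(rgb[..., c], bins=bins, range=(0, 1), density=True)[0]
        for c in range(3)
    ])

# Toy image with a reddish cast standing in for a light-color variation.
img = np.clip(np.random.rand(128, 128, 3) * [1.2, 1.0, 0.7], 0, 1)
feat = simple_descriptor(np.clip(gray_world_normalize(img), 0, 1))
print(feat.shape)  # (48,)
```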

    Monte Carlo Dropout Ensembles for Robust Illumination Estimation

    Computational color constancy is a preprocessing step used in many camera systems. The main aim is to discount the effect of the illumination on the colors in the scene and restore the original colors of the objects. Recently, several deep learning-based approaches have been proposed to solve this problem, and they often lead to state-of-the-art performance in terms of average error. However, these methods fail on extreme samples, producing high errors. In this paper, we address this limitation by proposing to aggregate different deep learning methods according to their output uncertainty. We estimate the relative uncertainty of each approach using Monte Carlo dropout, and the final illumination estimate is obtained as the sum of the different model estimates weighted by the log-inverse of their corresponding uncertainties. The proposed framework leads to state-of-the-art performance on the INTEL-TAU dataset.
    Comment: 7 pages, 6 figures
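    A hedged sketch of the aggregation rule this abstract describes: sample each model several times with dropout kept active (Monte Carlo dropout), take the spread of its predictions as its uncertainty, and weight the per-model mean estimates by the log-inverse of those uncertainties. The models below are random stubs, not the actual networks from the paper.

```python
# Monte Carlo dropout aggregation sketch with stub models.
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_samples(model, image, n_samples=20):
    """Stack of illuminant estimates with dropout active (n_samples x 3)."""
    return np.stack([model(image) for _ in range(n_samples)])

def aggregate(models, image):
    means, weights = [], []
    for model in models:
        samples = mc_dropout_samples(model, image)
        means.append(samples.mean(axis=0))
        uncertainty = samples.var(axis=0).mean() + 1e-12
        weights.append(np.log(1.0 / uncertainty))   # log-inverse weighting
    weights = np.maximum(np.array(weights), 0.0)    # clamp: sketch-level guard
    weights /= weights.sum() + 1e-12
    estimate = sum(w * m for w, m in zip(weights, means))
    return estimate / np.linalg.norm(estimate)

# Stub "models": noisy predictors around a common illuminant, with
# different noise levels standing in for dropout variability.
models = [lambda img, s=s: np.array([0.62, 0.58, 0.53]) + s * rng.normal(size=3)
          for s in (0.01, 0.05, 0.02)]
print(aggregate(models, image=None))
```

    Low-variance models dominate the weighted sum, so an approach that is unreliable on a given (e.g. extreme) sample is automatically down-weighted for that sample.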

    Performance analysis of machine learning and deep learning architectures for malaria detection on cell images

    Plasmodium is a parasitic protozoan that causes malaria in humans. Computer-aided detection of Plasmodium is a research area attracting great interest. In this paper, we study the performance of various machine learning and deep learning approaches for the detection of Plasmodium in cell images from digital microscopy. We make use of a publicly available dataset composed of 27,558 cell images with equal instances of parasitized (contains Plasmodium) and uninfected (no Plasmodium) cells. We randomly split the dataset into groups of 80% and 20% for training and testing purposes, respectively. We apply color constancy and spatially resample all images to a particular size depending on the classification architecture implemented. We propose a fast Convolutional Neural Network (CNN) architecture for the classification of cell images. We also study and compare the performance of transfer learning algorithms developed based on well-established network architectures such as AlexNet, ResNet, VGG-16 and DenseNet. In addition, we study the performance of the bag-of-features model with a Support Vector Machine for classification. The overall probability of a cell image containing Plasmodium is determined based on the average of the probabilities provided by all the CNN architectures implemented in this paper. Our proposed algorithm provides an overall accuracy of 96.7% on the testing dataset and an area under the Receiver Operating Characteristic (ROC) curve of 0.994 for 2,756 parasitized cell images. This type of automated classification of cell images would enhance the workflow of microscopists and provide a valuable second opinion.
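    The final decision rule in this abstract reduces to averaging per-model class probabilities, as in the minimal sketch below. The trained networks are assumed to exist already, so model outputs are stubbed with hypothetical values.

```python
# Probability averaging across several classifiers for one cell image.
import numpy as np

def ensemble_probability(per_model_probs):
    """per_model_probs: P(parasitized) from each trained model."""
    return float(np.mean(per_model_probs))

# Hypothetical outputs from the custom CNN and the transfer-learning
# models (e.g. AlexNet, ResNet, VGG-16, DenseNet) for a single image.
per_model = [0.91, 0.88, 0.97, 0.93, 0.95]
p = ensemble_probability(per_model)
label = "parasitized" if p >= 0.5 else "uninfected"
print(f"P(parasitized) = {p:.3f} -> {label}")
```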

    Color Constancy Beyond Bags of Pixels

    Estimating the color of a scene illuminant often plays a central role in computational color constancy. While this problem has received significant attention, existing methods do not maximally leverage spatial dependencies between pixels. Indeed, most methods treat the observed color (or its spatial derivative) at each pixel independently of its neighbors. We propose an alternative approach to illuminant estimation: one that employs an explicit statistical model to capture the spatial dependencies between pixels induced by the surfaces they observe. The parameters of this model are estimated from a training set of natural images captured under canonical illumination, and for a new image, an appropriate transform is found such that the corrected image best fits our model.
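    The following toy sketch follows the same recipe as the abstract, fit a model to canonically lit images and then find the correction under which a new image matches it, but with a deliberately simple stand-in model: per-channel variances of local spatial derivatives. A diagonal illuminant cast scales those variances quadratically, so inverting that relation recovers the cast; the actual paper uses a much richer spatial model.

```python
# Toy "fit the corrected image to a learned model" sketch.
import numpy as np

rng = np.random.default_rng(1)

def derivative_var(img):
    """Per-channel variance of horizontal derivatives, shape (3,)."""
    return np.diff(img, axis=1).reshape(-1, 3).var(axis=0)

# "Training": derivative statistics under canonical illumination.
train = rng.random((16, 32, 32, 3))
canonical_var = np.mean([derivative_var(im) for im in train], axis=0)

def estimate_illuminant(img):
    # A diagonal cast scales derivatives per channel, hence variances
    # quadratically; invert that to recover the cast (up to scale).
    ell = np.sqrt(derivative_var(img) / (canonical_var + 1e-12))
    return ell / np.linalg.norm(ell)

scene = rng.random((32, 32, 3))
cast = np.array([1.4, 1.0, 0.6])
print(estimate_illuminant(scene * cast))  # approx. proportional to cast
```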