2,179 research outputs found

    Multiscale Discriminant Saliency for Visual Attention

    Bottom-up saliency, an early stage of human visual attention, can be considered as a binary classification problem between center and surround classes. The discriminant power of features for this classification is measured as the mutual information between the features and the distribution of the two classes. Since the estimated discrepancy between the two feature classes depends strongly on the considered scale level, multi-scale structure and discriminant power are integrated by employing discrete wavelet features and a Hidden Markov Tree (HMT). With wavelet coefficients and Hidden Markov Tree parameters, quad-tree-like label structures are constructed and used to compute the maximum a posteriori (MAP) estimates of the hidden class variables at the corresponding dyadic sub-squares. A saliency value for each dyadic square at each scale level is then computed from the discriminant power principle and the MAP estimates. Finally, the final saliency map is integrated across multiple scales by an information maximization rule. Both standard quantitative tools such as NSS, LCC and AUC and qualitative assessments are used to evaluate the proposed multiscale discriminant saliency method (MDIS) against the well-known information-based saliency method AIM on its Bruce database with eye-tracking data. Simulation results are presented and analyzed to verify the validity of MDIS as well as to point out its disadvantages for further research directions. Comment: 16 pages, ICCSA 2013 - BIOCA session
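
    A minimal sketch of the standard saliency evaluation metrics named above (NSS, LCC, AUC), assuming a predicted saliency map and a binary fixation map of the same shape; the function names and thresholding scheme are illustrative choices and follow the commonly used definitions, not necessarily the exact evaluation code used for MDIS.

    # Common definitions of NSS, LCC and AUC for saliency evaluation.
    import numpy as np

    def nss(saliency, fixations):
        """Normalized Scanpath Saliency: mean z-scored saliency at fixated pixels."""
        s = (saliency - saliency.mean()) / (saliency.std() + 1e-12)
        return s[fixations > 0].mean()

    def lcc(saliency, density):
        """Linear Correlation Coefficient between saliency map and fixation density."""
        s = saliency.ravel() - saliency.mean()
        d = density.ravel() - density.mean()
        return (s @ d) / (np.linalg.norm(s) * np.linalg.norm(d) + 1e-12)

    def auc(saliency, fixations, n_thresholds=100):
        """Area under the ROC curve, treating fixated pixels as positives."""
        pos = saliency[fixations > 0]
        neg = saliency[fixations == 0]
        thresholds = np.linspace(saliency.min(), saliency.max(), n_thresholds)
        tpr = [(pos >= t).mean() for t in thresholds]
        fpr = [(neg >= t).mean() for t in thresholds]
        # thresholds increase, so tpr/fpr decrease; reverse before integrating
        return np.trapz(tpr[::-1], fpr[::-1])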

    Multi-scale Discriminant Saliency with Wavelet-based Hidden Markov Tree Modelling

    Bottom-up saliency, an early stage of human visual attention, can be considered as a binary classification problem between centre and surround classes. The discriminant power of features for this classification is measured as the mutual information between the distributions of image features and the corresponding classes. As the estimated discrepancy depends strongly on the considered scale level, multi-scale structure and discriminant power are integrated by employing discrete wavelet features and a Hidden Markov Tree (HMT). With wavelet coefficients and Hidden Markov Tree parameters, quad-tree-like label structures are constructed and used to compute the maximum a posteriori (MAP) estimates of the hidden class variables at the corresponding dyadic sub-squares. A saliency value for each square block at each scale level is then computed from the discriminant power principle. Finally, the final saliency map is integrated across multiple scales by an information maximization rule. Both standard quantitative tools such as NSS, LCC and AUC and qualitative assessments are used to evaluate the proposed multi-scale discriminant saliency (MDIS) method against the well-known information-based approach AIM on its released image collection with eye-tracking data. Simulation results are presented and analysed to verify the validity of MDIS as well as to point out its limitations for further research directions. Comment: arXiv admin note: substantial text overlap with arXiv:1301.396
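
    As an illustration of the discriminant power measure described in both abstracts above, written in standard notation (the symbols are chosen here and are not taken from the papers): for feature responses X at a dyadic square and a class label C over centre and surround, the mutual information is

        I(X; C) = \sum_{c \in \{\text{centre},\,\text{surround}\}} P(c) \int p(x \mid c) \, \log \frac{p(x \mid c)}{p(x)} \, dx ,

    and the hidden class label of each square is taken as the MAP estimate under the HMT model of its wavelet coefficients w,

        \hat{c} = \arg\max_{c} P(c \mid w) .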

    S4Net: Single Stage Salient-Instance Segmentation

    We consider an interesting problem in this paper: salient instance segmentation. Beyond producing bounding boxes, our network also outputs high-quality instance-level segments. Taking into account the category-independent property of each target, we design a single-stage salient instance segmentation framework with a novel segmentation branch. Our new branch considers not only the local context inside each detection window but also its surrounding context, enabling us to distinguish instances within the same scope even under obstruction. Our network is end-to-end trainable and runs at a fast speed (40 fps when processing an image with resolution 320x320). We evaluate our approach on a publicly available benchmark and show that it outperforms other alternative solutions. We also provide a thorough analysis of the design choices to help readers better understand the function of each part of our network. The source code can be found at \url{https://github.com/RuochenFan/S4Net}
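
    A minimal sketch of the idea that the segmentation branch looks at surrounding context as well as the detection window: the box is enlarged before cropping the feature map. This is not the S4Net implementation; the function name and the expansion factor are illustrative assumptions.

    # Illustrative only: crop a feature map with an enlarged window so a
    # segmentation branch sees context around the detection box as well.
    import numpy as np

    def crop_with_context(features, box, expand=1.5):
        """features: (H, W, C) array; box: (x0, y0, x1, y1) in pixel coordinates."""
        h, w = features.shape[:2]
        x0, y0, x1, y1 = box
        cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        bw, bh = (x1 - x0) * expand, (y1 - y0) * expand
        nx0, nx1 = int(max(0, cx - bw / 2)), int(min(w, cx + bw / 2))
        ny0, ny1 = int(max(0, cy - bh / 2)), int(min(h, cy + bh / 2))
        return features[ny0:ny1, nx0:nx1, :]

    # Example: a 320x320 feature map with 64 channels and one detection box.
    feats = np.random.rand(320, 320, 64)
    patch = crop_with_context(feats, (100, 120, 180, 200))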

    Improvements in human skin segmentation in digital images

    Advisor: Hélio Pedrini. Master's dissertation (Computer Science) - Universidade Estadual de Campinas, Instituto de Computação. Abstract: Human skin segmentation has several applications in the computer vision and pattern recognition fields, its main purpose being to distinguish skin and non-skin regions in images. Despite the large number of methods available in the literature, accurate skin segmentation is still a challenging task. Many methods rely only on color information, which does not completely discriminate the image regions due to variations in lighting conditions and the ambiguity between skin and background color. Therefore, there is still demand to improve the segmentation process. Three main contributions toward this need are presented in this work. The first is a self-contained method for adaptive skin segmentation that makes use of spatial analysis to produce regions from which the overall skin color can be estimated, so that the color model is adjusted to a particular image. The second is the combination of saliency detection with color-based skin segmentation, which performs background removal to eliminate non-skin regions. The third is a texture-based improvement using superpixels to capture the energy of regions in the filtered image, employed to characterize non-skin regions and thus eliminate color ambiguity by adding a second vote. Experimental results on public data sets demonstrate a significant improvement of the proposed methods for human skin segmentation over state-of-the-art approaches.
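
    A minimal sketch of the second contribution described above, combining a colour-based skin detector with saliency-based background removal; the thresholds and the fusion rule are illustrative assumptions, not the dissertation's exact procedure.

    # Illustrative only: suppress skin votes in low-saliency (background) pixels.
    import numpy as np

    def skin_mask(skin_prob, saliency, skin_thresh=0.5, saliency_thresh=0.3):
        """skin_prob, saliency: float arrays in [0, 1] with the same shape."""
        foreground = saliency >= saliency_thresh   # saliency-based background removal
        skin = skin_prob >= skin_thresh            # colour-based skin vote
        return np.logical_and(skin, foreground)    # keep skin only where salient

    # Example with random maps standing in for real detector outputs.
    prob = np.random.rand(240, 320)
    sal = np.random.rand(240, 320)
    mask = skin_mask(prob, sal)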