
    BCN20000: dermoscopic lesions in the wild

    This article summarizes the BCN20000 dataset, composed of 19424 dermoscopic images of skin lesions captured from 2010 to 2016 in the facilities of the Hospital Clínic in Barcelona. With this dataset, we aim to study the problem of unconstrained classification of dermoscopic images of skin cancer, including lesions found in hard-to-diagnose locations (nails and mucosa), large lesions which do not fit in the aperture of the dermoscopy device, and hypo-pigmented lesions. The BCN20000 dataset will be provided to the participants of the ISIC Challenge 2019 [8], where they will be asked to train algorithms to classify dermoscopic images of skin cancer automatically.

    Use of convolutional neural networks for remote fruit detection with RGB-D cameras

    Remote fruit detection will be an indispensable tool for the optimized and sustainable agronomic management of the fruit plantations of the future, with applications in yield forecasting, harvest robotization, and production mapping. This work proposes the use of RGB-D depth cameras for the detection and subsequent 3D localization of fruits. The data-acquisition equipment consists of a self-propelled ground platform fitted with two Microsoft Kinect v2 sensors and an RTK-GNSS positioning system, both connected to a field computer that communicates with the sensors through software developed ad hoc. With this equipment, three rows of Fuji apple trees in a commercial orchard were scanned. The acquired dataset comprises 110 captures containing a total of 12,838 Fuji apples. Fruit detection was performed on the RGB data (the color images provided by the sensor) by implementing and training a Faster R-CNN object-detection convolutional neural network. The depth data (the depth image provided by the sensor) were used to generate 3D point clouds, while the position data allowed each capture to be georeferenced. Test results show a detection rate of 91.4% of the fruits with 15.9% false positives (F1-score = 0.876). Qualitative evaluation of the detections shows that the false positives correspond to image regions whose pattern is very similar to an apple, where even the human eye finds it difficult to determine whether an apple is present. The undetected apples, in turn, correspond to fruits almost entirely occluded by other vegetative organs (leaves or branches), to apples cut off at the image margins, or to human errors in the dataset labelling process.
The average processing rate was 17.3 images per second, which allows real-time application. From the experimental results it is concluded that the Kinect v2 sensor has great potential for the detection and 3D localization of fruits. The main limitation of the system is that the performance of the depth sensor is affected under high-illumination conditions. Keywords: depth cameras, RGB-D, fruit detection, convolutional neural networks, agricultural robotics
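    The reported F1-score can be checked directly from the detection figures quoted in the abstract (91.4% detection, 15.9% false positives). A minimal sketch, assuming the usual interpretation that the detection rate is the recall and the false-positive percentage is one minus the precision:

    ```python
    # Sketch: recovering the reported F1-score from the figures above.
    # Assumption (not stated explicitly in the abstract): detection
    # rate -> recall, false-positive percentage -> 1 - precision.

    def f1_score(precision: float, recall: float) -> float:
        """Harmonic mean of precision and recall."""
        return 2 * precision * recall / (precision + recall)

    recall = 0.914          # fraction of apples detected
    precision = 1 - 0.159   # fraction of detections that are true apples

    print(round(f1_score(precision, recall), 3))  # → 0.876
    ```

    The result matches the F1-score of 0.876 reported for the test set.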

    Standardized Assessment of Automatic Segmentation of White Matter Hyperintensities and Results of the WMH Segmentation Challenge

    Quantification of cerebral white matter hyperintensities (WMH) of presumed vascular origin is of key importance in many neurological research studies. Currently, measurements are often still obtained from manual segmentations on brain MR images, which is a laborious procedure. Automatic WMH segmentation methods exist, but a standardized comparison of their performance is lacking. We organized a scientific challenge, the WMH Segmentation Challenge, in which developers could evaluate their methods on a standardized multi-center/-scanner image dataset, enabling an objective comparison. Sixty T1 + FLAIR images from three MR scanners were released with manual WMH segmentations for training. A test set of 110 images from five MR scanners was used for evaluation. The segmentation methods had to be containerized and submitted to the challenge organizers. Five evaluation metrics were used to rank the methods: 1) Dice similarity coefficient; 2) modified Hausdorff distance (95th percentile); 3) absolute log-transformed volume difference; 4) sensitivity for detecting individual lesions; and 5) F1-score for individual lesions. In addition, the methods were ranked on their inter-scanner robustness. Twenty participants submitted their methods for evaluation. This paper provides a detailed analysis of the results. In brief, there is a cluster of four methods that rank significantly better than the others, with one clear winner. The inter-scanner robustness ranking shows that not all methods generalize to unseen scanners. The challenge remains open for future submissions and provides a public platform for method evaluation.
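    Two of the five ranking metrics listed above can be sketched in a few lines on binary segmentation masks. This is an illustrative sketch with NumPy arrays standing in for the MR label maps, not the challenge's own evaluation code; the function names and toy masks are assumptions.

    ```python
    # Sketch of two of the five challenge metrics on boolean masks:
    # the Dice similarity coefficient and the absolute log-transformed
    # volume difference. Toy 8x8 masks are illustrative only.
    import numpy as np

    def dice(pred: np.ndarray, truth: np.ndarray) -> float:
        """Dice = 2|A ∩ B| / (|A| + |B|) for boolean masks."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        intersection = np.logical_and(pred, truth).sum()
        return 2.0 * intersection / (pred.sum() + truth.sum())

    def abs_log_volume_diff(pred: np.ndarray, truth: np.ndarray) -> float:
        """|log(predicted volume / true volume)|, volumes in voxel counts."""
        return abs(np.log(pred.sum() / truth.sum()))

    truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True  # 16 voxels
    pred = np.zeros((8, 8), dtype=bool);  pred[3:6, 2:6] = True   # 12 voxels

    print(round(dice(pred, truth), 3))  # 2*12 / (12+16) = 0.857
    ```

    The remaining lesion-wise metrics (sensitivity and F1-score per lesion) additionally require connected-component labelling to match individual lesions between the two masks.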

    On Building a Hierarchical Region-Based Representation for Generic Image Analysis

    No full text
    This paper studies the procedure to create a hierarchical region-based image representation aiming at generic image analysis. This study is carried out in the context of bottom-up segmentation algorithms and, specifically, using the Binary Partition Tree implementation. The different steps necessary to create a hierarchical region-based representation are analyzed; namely, (i) the creation of the initial partition in the hierarchy, which is split into the definition of the initial merging criterion and the proposal of a stopping criterion, and (ii) the merging criteria used to produce the different regions in the final hierarchical representation. For both steps, the proposed approach is assessed and compared with previously existing ones over a large data set using well-established partition-based metrics.

    Do you have a Pop face? Here is a Pop song. Using profile pictures to mitigate the cold-start problem in Music Recommender Systems

    When a new user registers with a recommender system, the system does not know her taste and cannot propose meaningful suggestions (the cold-start problem). This preliminary work attempts to mitigate the cold-start problem using the profile picture of the user as the sole source of information, following the intuition that a correspondence may exist between the pictures people use to represent themselves and their taste. We showed that, at least in the small music community used for our experiments, our method can improve the precision of both a classifier and a Top-N music recommender system in a cold-start condition.
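    The precision of a Top-N recommender, the metric this abstract reports improving, has a short standard definition. A minimal sketch under that assumption; the item IDs below are made up for illustration:

    ```python
    # Sketch: precision@N for a Top-N recommender.
    # "recs" and "liked" are hypothetical example data, not from the paper.

    def precision_at_n(recommended: list, relevant: set, n: int) -> float:
        """Fraction of the top-n recommended items the user actually likes."""
        top_n = recommended[:n]
        hits = sum(1 for item in top_n if item in relevant)
        return hits / n

    recs = ["song_a", "song_b", "song_c", "song_d", "song_e"]
    liked = {"song_a", "song_c", "song_e"}

    print(precision_at_n(recs, liked, 5))  # 3 hits out of 5 → 0.6
    ```

    In a cold-start evaluation, `liked` would come from the held-out listening history of a new user whose only observed feature is the profile picture.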

    Caption text extraction for indexing purposes using a hierarchical region-based image model

    This paper presents a technique for detecting caption text for indexing purposes. This technique is to be included in a generic indexing system dealing with other semantic concepts. The various object detection algorithms are required to share a common image description which, in our case, is a hierarchical region-based image model. Caption text objects are detected by combining texture and geometric features, which are estimated using wavelet analysis and by taking advantage of the region-based image model, respectively. Analysis of the region hierarchy provides the final caption text objects. Index Terms: Image segmentation, feature extraction, object recognition, text recognition
