11 research outputs found

    Evaluation of Retinal Image Quality Assessment Networks in Different Color-spaces

    Full text link
    Retinal image quality assessment (RIQA) is essential for controlling the quality of retinal imaging and guaranteeing the reliability of diagnoses by ophthalmologists or automated analysis systems. Existing RIQA methods focus on the RGB color-space and are developed on small datasets with binary quality labels (i.e., 'Accept' and 'Reject'). In this paper, we first re-annotate an Eye-Quality (EyeQ) dataset with 28,792 retinal images from the EyePACS dataset, based on a three-level quality grading system (i.e., 'Good', 'Usable' and 'Reject') for evaluating RIQA methods. Our RIQA dataset is characterized by its large-scale size, multi-level grading, and multi-modality. Then, we analyze the influence of different color-spaces on RIQA and propose a simple yet efficient deep network, named Multiple Color-space Fusion Network (MCF-Net), which integrates the different color-space representations at both the feature level and the prediction level to predict image quality grades. Experiments on our EyeQ dataset show that MCF-Net achieves state-of-the-art performance, outperforming other deep learning methods. Furthermore, we also evaluate diabetic retinopathy (DR) detection methods on images of different quality, and demonstrate that the performance of automated diagnostic systems is highly dependent on image quality. Comment: Accepted by MICCAI 2019. Corrected two typos in Table 1: (1) in the training set, the number for "Usable + All" should be '1,876'; (2) in the testing set, the number for "Total + DR-0" should be '11,362'. Project page: https://github.com/hzfu/Eye
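The prediction-level half of such a color-space fusion can be sketched with placeholder numbers. The `fuse_predictions` helper and the per-branch probability vectors below are illustrative assumptions, not MCF-Net's actual architecture:

```python
import numpy as np

# Three-level grading system used by the EyeQ dataset
GRADES = ["Good", "Usable", "Reject"]

def fuse_predictions(branch_probs, weights=None):
    """Average per-branch class probabilities (optionally weighted),
    returning the fused grade label and the fused distribution."""
    probs = np.asarray(branch_probs, dtype=float)  # (n_branches, n_classes)
    if weights is None:
        weights = np.ones(len(probs)) / len(probs)
    fused = np.average(probs, axis=0, weights=weights)
    fused = fused / fused.sum()  # renormalize for safety
    return GRADES[int(np.argmax(fused))], fused

# Toy softmax outputs from three hypothetical branches (RGB, HSV, LAB)
grade, dist = fuse_predictions([[0.70, 0.20, 0.10],
                                [0.55, 0.35, 0.10],
                                [0.60, 0.25, 0.15]])
print(grade)  # Good
```

A feature-level fusion would instead concatenate the branch feature maps before the final classifier; the paper combines both levels.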

    DEVELOPMENT OF A MOBILE BASED DIABETES RETINOPATHY DETECTION SYSTEM

    Get PDF
    Diabetic retinopathy is a common retinal complication associated with diabetes. It is a major cause of blindness worldwide, especially in developing countries such as Nigeria, which accounts for the largest share of cases in Africa. Early detection is therefore highly beneficial for effectively controlling the progress of the disease. This paper addresses two problems: the inadequate number of specialists available to handle the growing number of people afflicted with the disease, and the lack of a mobile tool that can aid early detection of diabetic retinopathy. Hence, a mobile-based diabetic retinopathy detection system was developed to make early detection of the disease available to the masses.

    Retinal area detector from Scanning Laser Ophthalmoscope (SLO) images for diagnosing retinal diseases

    Get PDF
    © 2014 IEEE. Scanning laser ophthalmoscopes (SLOs) can be used for early detection of retinal diseases. With the advent of the latest screening technology, the advantage of SLO is its wide field of view, which can image a large part of the retina for better diagnosis of retinal diseases. On the other hand, during the imaging process, artefacts such as eyelashes and eyelids are imaged along with the retinal area, which poses the challenge of excluding them. In this paper, we propose a novel approach to automatically extract the true retinal area from an SLO image based on image processing and machine learning. To reduce the complexity of the image processing tasks and provide a convenient primitive image pattern, we group pixels into regions, called superpixels, based on regional size and compactness. The framework then calculates image-based features reflecting textural and structural information and classifies between retinal area and artefacts. Experimental evaluation shows good performance, with an overall accuracy of 92%.
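The compactness criterion used when grouping pixels can be illustrated with the standard isoperimetric measure. The `region_compactness` function below is a generic sketch of that idea, not the paper's exact feature:

```python
import numpy as np

def region_compactness(mask):
    """Compactness = 4*pi*area / perimeter^2, where the perimeter is
    counted as the number of exposed unit edges of the region.
    Equals pi/4 for a square and approaches 1 for a disc."""
    mask = np.asarray(mask, dtype=bool)
    area = int(mask.sum())
    padded = np.pad(mask, 1, constant_values=False)
    perim = 0
    for axis in (0, 1):
        # XOR with a one-pixel shift counts region/background transitions
        perim += int(np.logical_xor(padded, np.roll(padded, 1, axis)).sum())
    return 4 * np.pi * area / perim ** 2

mask = np.zeros((20, 20), dtype=bool)
mask[5:15, 5:15] = True                       # 10x10 square "superpixel"
print(round(region_compactness(mask), 4))     # 0.7854 (= pi/4)
```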

    Variational mode decomposition based retinal area detection and merging of superpixels in SLO image

    Get PDF
    Background: Scanning Laser Ophthalmoscope (SLO) images can be used to detect retinal diseases, but detecting the true retinal area is a major task because artefacts such as eyelashes and eyelids are also captured. With advances in SLO technology, a large part of the retina can be imaged, and treating retinal disease early can prevent vision loss. Traditionally, retinal diseases were recognized manually: optometrists and ophthalmologists adjusted zoom and contrast to interpret images and diagnose based on experience and domain knowledge, an invariably time-consuming process. Automated examination of retinal images reduces execution time, allows more patients to be screened, and yields more consistent diagnoses. SLO imaging produces 2-D retinal scans that contain artefacts such as eyelids and eyelashes along with the true retinal area, so the main challenge is to eliminate these artefacts from the captured image. Objective: To detect the true retinal area in SLO images using image processing techniques. Methods: Two-dimensional Variational Mode Decomposition (VMD) is applied to the SLO image, producing different modes. Mode 1 is chosen because it has the highest frequency, and is pre-processed using median filtering. 
After this, the pre-processed mode-1 image is grouped into superpixels, pixel groups formed according to regional size and compactness; superpixels are generated to reduce complexity. Superpixel merging follows superpixel generation, further reducing difficulty and increasing speed. From the merged superpixels, regional, gradient and textural features are generated to eliminate artefacts and detect the retinal area; feature selection further reduces processing time. A classifier is constructed using an Adaptive Network Fuzzy Inference System (ANFIS), and its performance is compared with an Artificial Neural Network (ANN). Results: This approach achieves a classification accuracy of 98.5%. Conclusion: 2D-VMD yields six different modes; mode 1 is chosen for its high frequency, which simplifies the process and helps achieve higher accuracy. ANFIS achieves higher accuracy than ANN, reaching 98.5%.
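The median-filtering step applied to mode 1 can be sketched in a few lines. `median_filter2d` below is a minimal illustrative implementation, not the authors' code:

```python
import numpy as np

def median_filter2d(img, k=3):
    """Minimal k x k median filter with edge-replicated borders,
    shown here only to illustrate the pre-processing step."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

noisy = np.array([[10, 10, 10],
                  [10, 255, 10],   # impulse-noise spike
                  [10, 10, 10]], dtype=float)
print(median_filter2d(noisy))      # spike suppressed to 10 everywhere
```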

    Image analysis techniques for diabetic retinopathy detection

    Get PDF
    Advisors: Anderson de Rezende Rocha and Jacques Wainer. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação. Abstract: Diabetic Retinopathy (DR) is a long-term complication of diabetes and the leading cause of blindness among working-age adults. A regular eye examination is necessary to diagnose DR at an early stage, when it can be treated with the best prognosis and the visual loss delayed or deferred. 
    Leveraged by the growing prevalence of diabetes and by the increased risk that diabetics have of developing eye diseases, several works with well-established and promising approaches have been proposed for automatic screening. However, most existing art focuses on lesion detection using visual characteristics specific to each type of lesion. Additionally, handcrafted solutions for referable diabetic retinopathy detection and DR stage identification still depend too much on the lesions, whose repetitive detection is complex and cumbersome to implement, even when adopting a unified detection scheme. Current art for automated referral assessment relies on highly abstract data-driven approaches. Usually, those approaches receive an image and produce a response, which might come from a single model or an ensemble, and are not easily explainable. Hence, this work aims at enhancing lesion detection and reinforcing referral decisions with advanced handcrafted two-tiered image representations. We also intended to compose sophisticated data-driven models for referable DR detection and incorporate supervised learning of features with saliency-oriented mid-level image representations, arriving at a robust yet accountable automated screening approach. Ultimately, we aimed at integrating our software solutions with simple retinal imaging devices. In the lesion detection task, we proposed advanced handcrafted image characterization approaches to detect different lesions effectively. Our leading advances are centered on designing a novel coding technique for retinal images and on preserving information in the pooling process. Automatically deciding whether or not the patient should be referred to an ophthalmic specialist is a more difficult, and still hotly debated, research aim. We designed a simple and robust method for referral decisions that does not rely upon lesion detection stages. 
We also proposed a novel and effective data-driven model that significantly improves performance for DR screening. Our accountable data-driven model produces a reliable response, based on local and global responses, along with a heatmap/saliency map that enables pixel-level importance comprehension. We explored this methodology to create a local descriptor that is encoded into a rich mid-level representation. Data-driven methods are the state of the art for diabetic retinopathy screening. However, saliency maps are essential not only to interpret the learning in terms of pixel importance but also to reinforce small discriminative characteristics that have the potential to enhance the diagnosis.
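The remark about preserving information during pooling can be made concrete by contrasting two standard pooling rules over a toy codeword-activation matrix. The numbers below are invented for illustration and do not come from the thesis:

```python
import numpy as np

# Rows = local descriptors of one image, cols = visual-dictionary
# codewords; values are hypothetical soft-assignment scores.
activations = np.array([[0.9, 0.1, 0.0],
                        [0.2, 0.8, 0.0],
                        [0.1, 0.1, 0.7]])

sum_pool = activations.sum(axis=0)  # accumulates evidence per codeword
max_pool = activations.max(axis=0)  # keeps only the strongest response

print(sum_pool)  # [1.2 1.0 0.7]
print(max_pool)  # [0.9 0.8 0.7]
```

Sum pooling retains how often a codeword fired, while max pooling retains only its peak response; which information survives pooling is exactly the design choice the abstract refers to.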

    Retinal Image Quality Analysis For Automatic Diabetic Retinopathy Detection

    No full text
    Sufficient image quality is a necessary prerequisite for reliable automatic detection systems in several healthcare environments. For Diabetic Retinopathy (DR) detection specifically, poor-quality fundus images make it harder to analyze the discontinuities that characterize lesions, and can generate evidence that leads to an incorrect diagnosis of anomalies. Several methods have been applied to classify image quality and have recently shown satisfactory results. However, most authors have focused only on the visibility of blood vessels through detection of blurring. Furthermore, these studies frequently used fundus images from specific cameras and were not validated on datasets obtained from different retinal cameras. In this paper, we propose an approach to verify two essential requirements of retinal image quality for DR screening: field definition and blur detection. The methods were developed and validated on two large, representative datasets collected with different cameras; the first comprises 5,776 images and the second 920 images. For field definition, the method yields near-optimal performance, with an area under the Receiver Operating Characteristic (ROC) curve of 96.0%. For blur detection, the method achieves an area under the ROC curve of 95.5%. © 2012 IEEE.
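A common blur proxy of the kind this line of work builds on is the variance of a Laplacian response. The sketch below is a generic illustration under that assumption, not the detector proposed in the paper:

```python
import numpy as np

def laplacian_variance(img):
    """Blur score: variance of a 4-neighbour Laplacian response.
    Sharp images score high; blurred ones score low."""
    img = np.asarray(img, dtype=float)
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return float(lap.var())

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))                       # synthetic "sharp" image
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, -1, 0)
           + np.roll(sharp, 1, 1) + np.roll(sharp, -1, 1)) / 5.0

print(laplacian_variance(sharp) > laplacian_variance(blurred))  # True
```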


    Visual Words Dictionaries And Fusion Techniques For Searching People Through Textual And Visual Attributes

    No full text
    Using personal traits to search for people is paramount in several application areas and has attracted ever-growing attention from the scientific community over the past years. Practical applications in digital forensics and surveillance include locating a suspect or finding missing people in a public space. In this paper, we aim at assigning describable visual attributes (e.g., white chubby male wearing glasses and with bangs) as labels to images to describe their appearance, and at performing visual searches without relying on image annotations during testing. For that, we create mid-level image representations for face images based on visual dictionaries linking visual properties in the images to describable attributes. In addition, we take advantage of machine learning techniques for combining different attributes when performing a query. First, we propose three methods for building the visual dictionaries. Method #1 uses sparse sampling to obtain low-level features and a clustering algorithm to build the dictionaries. Method #2 uses dense sampling to obtain low-level features and random selection to build the dictionaries, while Method #3 uses dense sampling followed by a clustering algorithm. Thereafter, we train 2-class classifiers for the describable visual attributes of interest, which assign to each image a decision score used to obtain its ranking. For more complex queries (two or more attributes), we use three state-of-the-art approaches for combining the rankings: (1) product of probabilities, (2) rank aggregation and (3) rank position. To date, we have considered fifteen attribute classifiers and, consequently, their direct counterparts, theoretically allowing 2^15 = 32,768 different combined queries (the actual number is smaller since some attributes are contradictory or mutually exclusive). Notwithstanding, the method is easily extensible to include new attributes. Experimental results show that Method #3 greatly improves retrieval precision for some attributes in comparison with other methods in the literature. Finally, for combined attributes, product of probabilities, rank aggregation and rank position yield complementary results for rank fusion and final decision making, suggesting interesting combinations for further work. © 2013 Elsevier B.V. All rights reserved.
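Two of the fusion rules named in the abstract, product of probabilities and rank aggregation, can be sketched over invented attribute scores. The `probs` matrix below is a placeholder, not real classifier output:

```python
import numpy as np

# Rows = candidate face images, cols = queried attributes
# (say, "male" and "glasses"); values are hypothetical classifier scores.
probs = np.array([[0.9, 0.8],
                  [0.6, 0.9],
                  [0.3, 0.2]])

# (1) Product of probabilities: joint relevance of each image.
product = probs.prod(axis=1)              # [0.72, 0.54, 0.06]
best_by_product = int(np.argmax(product))

# (2) Borda-style rank aggregation: sum per-attribute ranks (0 = best);
# a lower total rank means a better combined match.
ranks = (-probs).argsort(axis=0).argsort(axis=0)
borda = ranks.sum(axis=1)

print(best_by_product)  # 0
```

The two rules can disagree: the product rewards images that score well on every attribute jointly, while rank aggregation is insensitive to the score magnitudes.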