
    Multiscale approaches for texture description (Abordagens multiescala para descrição de textura)

    Advisors: Hélio Pedrini, William Robson Schwartz. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação. Abstract: Computer vision and image processing techniques play an important role in several fields, including object detection and image classification, which are very important tasks with applications in medical imaging, remote sensing, forensic analysis, and skin detection, among others. These tasks strongly depend on visual information extracted from images that can be used to describe them efficiently. Texture is one of the main characteristics used to describe information such as spatial distribution, brightness, and structural arrangements of surfaces. For image recognition and classification, a large set of texture descriptors was investigated in this work, of which only a small fraction is actually multi-scale. Gray-level co-occurrence matrices (GLCM) have been widely used in the literature and are known to be an effective texture descriptor. However, the descriptor only discriminates information at a single scale, that is, the original image. Scales can offer important information in image analysis, since texture can be perceived as different patterns at distinct scales. Accordingly, two strategies for extending the GLCM to multiple scales are presented: (i) a Gaussian scale-space representation, constructed by smoothing the image with a low-pass filter, and (ii) an image pyramid, defined by sampling the image in both space and scale. The texture descriptor is evaluated against others on different data sets. The proposed descriptor is then applied in a skin detection context as a means of improving the accuracy of the detection process. Experimental results show that the multi-scale GLCM extension yields considerable improvements on the tested data sets, outperforming many other feature descriptors, including the original single-scale GLCM. Master's degree in Computer Science.
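
    The dissertation abstract names two multi-scale strategies but gives no implementation; the snippet below is a minimal sketch of the image-pyramid variant using scikit-image (version 0.19 or later is assumed for graycomatrix). The distances, angles, pyramid depth, and Haralick-style properties are illustrative choices, not settings taken from the dissertation.

```python
# Minimal sketch: multi-scale GLCM descriptor built over a Gaussian image
# pyramid. Parameter choices are illustrative, not the dissertation's.
import numpy as np
from skimage import img_as_ubyte
from skimage.transform import pyramid_gaussian
from skimage.feature import graycomatrix, graycoprops

def multiscale_glcm_features(image, max_layer=2, distances=(1,),
                             angles=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Concatenate GLCM properties computed at each pyramid level."""
    features = []
    for layer in pyramid_gaussian(image, max_layer=max_layer, downscale=2):
        layer_u8 = img_as_ubyte(np.clip(layer, 0, 1))  # back to 8-bit levels
        glcm = graycomatrix(layer_u8, distances=distances, angles=angles,
                            levels=256, symmetric=True, normed=True)
        for prop in ("contrast", "homogeneity", "energy", "correlation"):
            features.extend(graycoprops(glcm, prop).ravel())
    return np.asarray(features)
```

    The Gaussian scale-space variant described in the abstract would smooth the image at increasing scales instead of downsampling it; the feature concatenation step stays the same.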

    Evaluating color texture descriptors under large variations of controlled lighting conditions

    The recognition of color texture under varying lighting conditions is still an open issue. Several features have been proposed for this purpose, ranging from traditional statistical descriptors to features extracted with neural networks. Still, it is not completely clear under which circumstances one feature performs better than the others. In this paper we report an extensive comparison of old and new texture features, with and without a color normalization step, with a particular focus on how they are affected by small and large variations in the lighting conditions. The evaluation is performed on a new texture database including 68 samples of raw food acquired under 46 conditions that present single and combined variations of light color, direction, and intensity. The database allows a systematic investigation of the robustness of texture descriptors across a large range of imaging conditions. (Comment: Submitted to the Journal of the Optical Society of America.)
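
    The comparison includes descriptors evaluated with and without a color normalization step; the abstract does not specify which normalization is used, so the sketch below shows gray-world normalization only as one plausible example of such a preprocessing step.

```python
# Illustrative gray-world color normalization: one possible normalization
# step of the kind compared in the paper, not necessarily the one used.
import numpy as np

def gray_world_normalize(rgb):
    """Scale each RGB channel so its mean matches the global mean intensity."""
    rgb = rgb.astype(np.float64)
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / np.maximum(channel_means, 1e-8)
    return np.clip(rgb * gains, 0, 255).astype(np.uint8)
```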

    Pattern Classification of Human Epithelial Images

    This project supports the diagnosis of autoimmune disorders through a comparative analysis of clustering techniques for segmentation and the development of an algorithm for positivity classification. Four stages are used to analyze pattern classification in human epithelial (HEp-2) images. First, image enhancement improves the efficiency of the algorithm by applying adjustment and filtering techniques that increase image visibility. Second, the images are segmented with the most appropriate clustering technique, chosen through a comparative analysis of adaptive fuzzy c-means and adaptive fuzzy moving k-means. Third, features are extracted by computing the mean of properties such as area, perimeter, major axis length, and minor axis length for each image, and the images are grouped based on the resulting property dataset. Finally, in the classification stage, each image is assigned to a pattern according to the range of mean property values established for each pattern.
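
    The feature-extraction stage lists area, perimeter, major axis length, and minor axis length; the sketch below computes the mean of those properties from a binary segmentation mask with scikit-image, assuming the fuzzy-clustering segmentation has already produced the mask.

```python
# Sketch of the feature-extraction stage: mean region properties of a
# binary segmentation mask (the clustering-based segmentation is assumed done).
import numpy as np
from skimage.measure import label, regionprops

def mean_shape_features(mask):
    """Mean area, perimeter, major and minor axis length over all regions."""
    regions = regionprops(label(mask.astype(np.uint8)))
    if not regions:
        return np.zeros(4)
    props = np.array([[r.area, r.perimeter,
                       r.major_axis_length, r.minor_axis_length]
                      for r in regions])
    return props.mean(axis=0)
```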

    The eNanoMapper database for nanomaterial safety information

    Background: The NanoSafety Cluster, a cluster of projects funded by the European Commission, identified the need for a computational infrastructure for toxicological data management of engineered nanomaterials (ENMs). Ontologies, open standards, and interoperable designs were envisioned to empower a harmonized approach to European research in nanotechnology. This setting provides a number of opportunities and challenges in the representation of nanomaterials data and the integration of ENM information originating from diverse systems. Within this cluster, eNanoMapper works towards supporting the collaborative safety assessment for ENMs by creating a modular and extensible infrastructure for data sharing, data analysis, and building computational toxicology models for ENMs. Results: The eNanoMapper database solution builds on the previous experience of the consortium partners in supporting diverse data through flexible data storage, open-source components, and web services. We have recently described the design of the eNanoMapper prototype database along with a summary of challenges in the representation of ENM data and an extensive review of existing nano-related data models, databases, and nanomaterials-related entries in chemical and toxicogenomic databases. This paper continues with a focus on the database functionality exposed through its application programming interface (API), and its use in visualisation and modelling. Considering the preferred community practice of using spreadsheet templates, we developed a configurable spreadsheet parser facilitating user-friendly data preparation and data upload. We further present a web application able to retrieve the experimental data via the API and analyze it with multiple data preprocessing and machine learning algorithms. Conclusion: We demonstrate how the eNanoMapper database is used to import and publish online ENM and assay data from several data sources, how the "representational state transfer" (REST) API enables building user-friendly interfaces and graphical summaries of the data, and how these resources facilitate the modelling of reproducible quantitative structure–activity relationships for nanomaterials (NanoQSAR).
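
    The abstract highlights a REST API used to retrieve experimental data and build interfaces; the snippet below is only a generic sketch of consuming such a JSON API with the requests library, and the base URL and "/substance" path are placeholders rather than documented eNanoMapper routes.

```python
# Generic sketch of querying a JSON REST API of the kind described for the
# eNanoMapper database. BASE_URL and the "/substance" path are placeholders,
# not verified eNanoMapper endpoints.
import requests

BASE_URL = "https://example-enanomapper-instance.org"  # placeholder host

def fetch_substances(query, page=0, pagesize=10):
    """Retrieve one page of substance records matching a free-text query."""
    response = requests.get(
        f"{BASE_URL}/substance",
        params={"search": query, "page": page, "pagesize": pagesize},
        headers={"Accept": "application/json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```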

    SHIRAZ: an automated histology image annotation system for zebrafish phenomics

    Histological characterization is used in clinical and research contexts as a highly sensitive method for detecting the morphological features of disease and abnormal gene function. Histology has recently been accepted as a phenotyping method for the forthcoming Zebrafish Phenome Project, a large-scale community effort to characterize the morphological, physiological, and behavioral phenotypes resulting from mutations in all known genes in the zebrafish genome. In support of this project, we present a novel content-based image retrieval system for the automated annotation of images containing histological abnormalities in the developing eye of the larval zebrafish.
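
    The system is described as content-based image retrieval; no implementation details are given in the abstract, so the following is a generic retrieval skeleton (precomputed feature vectors plus nearest-neighbour lookup) in which the feature representation and annotation scheme are assumptions, not details from the SHIRAZ paper.

```python
# Generic content-based image retrieval skeleton: index precomputed feature
# vectors and return the annotations of the closest matches.
import numpy as np
from sklearn.neighbors import NearestNeighbors

class SimpleCBIR:
    def __init__(self, features, annotations, n_neighbors=5):
        # features: (n_images, n_features) array; annotations: list of labels.
        self.index = NearestNeighbors(n_neighbors=n_neighbors).fit(features)
        self.annotations = annotations

    def annotate(self, query_feature):
        """Return the annotations of the most similar indexed images."""
        _, idx = self.index.kneighbors(np.atleast_2d(query_feature))
        return [self.annotations[i] for i in idx[0]]
```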

    Texture analysis and Its applications in biomedical imaging: a survey

    Texture analysis describes a variety of image analysis techniques that quantify the variation in intensity and pattern. This paper provides an overview of several texture analysis approaches, addressing the rationale supporting them, their advantages, drawbacks, and applications. The survey's emphasis is on collecting and categorising over five decades of active research on texture analysis. Brief descriptions of different approaches are presented along with application examples. From a broad range of texture analysis applications, the survey's final focus is on biomedical image analysis. An up-to-date list of biological tissues and organs in which disorders produce texture changes that may be used to spot disease onset and progression is provided. Finally, the role of texture analysis methods as biomarkers of disease is summarised. This work was supported in part by the Portuguese Foundation for Science and Technology (FCT) under Grants PTDC/EMD-EMD/28039/2017, UIDB/04950/2020, PestUID/NEU/04539/2019, and CENTRO-01-0145-FEDER-000016, and by FEDER-COMPETE under Grant POCI-01-0145-FEDER-028039. (Corresponding author: Rui Bernardes.)
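
    As one concrete instance of the kind of technique the survey covers, the snippet below computes a local binary pattern (LBP) histogram with scikit-image; LBP is used here only as a representative texture descriptor, not as a method singled out by the survey.

```python
# Representative classical texture descriptor: histogram of uniform local
# binary patterns (LBP) over a grayscale image.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image, points=8, radius=1):
    """Normalized histogram of uniform LBP codes."""
    lbp = local_binary_pattern(gray_image, points, radius, method="uniform")
    n_bins = points + 2  # uniform codes 0..points, plus one non-uniform bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist
```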

    A textural deep neural network architecture for mechanical failure analysis

    Nowadays, many classification problems are approached with deep learning architectures, and the results are outstanding compared to those obtained with traditional computer vision approaches. However, when it comes to texture, deep learning analysis has not had the same success as in other tasks. Texture is an inherent characteristic of objects and the main descriptor for many applications in the computer vision field; due to its stochastic appearance, however, it is difficult to obtain a mathematical model for it. According to the state of the art, deep learning techniques have limitations when it comes to learning textural features; to classify texture with deep neural networks, it is essential to integrate them with handcrafted features or to develop an architecture that resembles such features. Solving this problem would make it possible to contribute to different applications, such as fractographic analysis. To achieve the best performance in any industry, it is important that companies carry out failure analysis, which reveals the causes of flaws, offers applications and solutions, and generates alternatives that allow customers to obtain more efficient components and production processes. The failure of an industrial element has consequences such as significant economic losses and, in some cases, even human losses. With this analysis it is possible to examine the history of the damaged piece in order to find out how and why it failed, and to help prevent future failures by implementing safer conditions. Visual inspection is the basis of every fractographic process in failure analysis and the main tool for fracture classification. This process is usually carried out by personnel who are not experts in the topic and who normally lack the knowledge or experience required for the job, which undoubtedly increases the chances of a wrong classification and negative results in the whole process. This research focuses on the development of a computer vision system that implements a textural deep learning architecture. Several approaches were considered, including combining deep learning techniques with traditional handcrafted features and developing a new architecture based on the wavelet transform and multiresolution analysis. The algorithm was tested on textural benchmark data sets and on the classification of mechanical fractures with characteristic textures and marks on surfaces of crystalline materials. Doctoral thesis (Fundación CEIBA).
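
    The proposed architecture is said to build on the wavelet transform and multiresolution analysis combined with handcrafted features; the abstract gives no architectural detail, so the snippet below only sketches the multiresolution side, extracting wavelet sub-band energies with PyWavelets as one plausible form of handcrafted textural input (the db2 wavelet and three-level depth are assumptions).

```python
# Sketch of multiresolution handcrafted texture features: mean energy of each
# wavelet sub-band. Wavelet family and depth are illustrative choices only.
import numpy as np
import pywt

def wavelet_energy_features(gray_image, wavelet="db2", level=3):
    """Mean energy of the approximation and every detail sub-band."""
    coeffs = pywt.wavedec2(gray_image.astype(np.float64), wavelet, level=level)
    features = [np.mean(np.square(coeffs[0]))]      # approximation energy
    for detail_bands in coeffs[1:]:                 # (cH, cV, cD) per level
        features.extend(np.mean(np.square(band)) for band in detail_bands)
    return np.asarray(features)
```

    Features of this kind could then be concatenated with CNN activations, which is one way to read the abstract's combination of deep learning with traditional handcrafted features; the thesis's actual fusion strategy is not described here.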