
    Perceptual texture similarity estimation

    This thesis evaluates the ability of computational features to estimate perceptual texture similarity. In the first part of this thesis, we conducted two evaluation experiments on the ability of 51 computational feature sets to estimate perceptual texture similarity, using two different evaluation methods: pair-of-pairs based and retrieval based evaluations. These experiments compared the computational features to two sets of human-derived ground-truth data, both of which are of higher resolution than those commonly used. The first was obtained by free-grouping and the second by pair-of-pairs experiments. Using these higher resolution data, we found that the feature sets do not perform well when compared to human judgements. Our analysis shows that these computational feature sets either (1) only exploit power spectrum information or (2) only compute higher order statistics (HoS) on, at most, small local neighbourhoods. In other words, they cannot capture aperiodic, long-range spatial relationships. As we hypothesise that these long-range interactions are important for the human perception of texture similarity, we carried out two more pair-of-pairs experiments, the results of which indicate that long-range interactions do provide humans with important cues for the perception of texture similarity. In the second part of this thesis we develop new texture features that can encode such data. We first examine the importance of three different types of visual information for human perception of texture. Our results show that contours are the most critical type of information for human discrimination of textures. Finally, we report the development of a new set of contour-based features which performed well on the free-grouping data and outperformed the 51 feature sets and another contour-type feature set on the pair-of-pairs data.
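The thesis's actual feature sets are not listed here, but the pair-of-pairs evaluation it describes has a simple core: observers are shown two pairs of textures and asked which pair is more similar, and a computational feature set is scored by how often a distance in its feature space agrees with that choice. A minimal sketch, with hypothetical feature vectors and judgement format (not the thesis's data or code):

```python
import numpy as np

def pair_of_pairs_agreement(features, judgements):
    """Fraction of pair-of-pairs trials on which a feature-space
    distance agrees with the human choice.

    features:   dict mapping texture id -> feature vector (np.ndarray)
    judgements: list of ((a, b), (c, d), winner), winner being 0 if
                humans judged (a, b) the more similar pair, else 1.
    """
    def dist(x, y):
        return np.linalg.norm(features[x] - features[y])

    hits = 0
    for (a, b), (c, d), winner in judgements:
        model_choice = 0 if dist(a, b) < dist(c, d) else 1
        hits += (model_choice == winner)
    return hits / len(judgements)
```

A feature set that captured the perceptual cues well would score close to 1.0; chance level for this two-alternative task is 0.5.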

    Classification of hyperspectral satellite images of rural and peri-urban areas

    Satellite observation of rural and peri-urban areas provides hyperspectral images that can be used for mapping or landscape analysis. We applied maximum-likelihood classification to images of agricultural areas. To regularise the classification, we model the image as a Markov random field, whose equivalence with Gibbs fields lets us use several iterative optimisation algorithms: ICM and simulated annealing, which converge respectively to a sub-optimal or an optimal classification for a given energy. An energy model is proposed, the Potts model, which we improve to make it adaptive to the classes present in the image. A texture analysis of the initial image introduces artificial criteria that supplement the image radiometry to improve the classification. This allows peri-urban areas, forest, and countryside to be segmented well within a land-use mapping framework. Three hyperspectral images and a ground truth were used for testing, in order to identify the methods and parameter settings that give the most satisfactory results.
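The ICM regularisation described above can be sketched compactly: starting from the maximum-likelihood labelling, each pixel is greedily reassigned to the class minimising its data cost plus a Potts penalty on disagreeing neighbours. This is a minimal, non-adaptive version with a hypothetical cost array, not the paper's implementation:

```python
import numpy as np

def icm_potts(unary, beta=1.0, n_iter=5):
    """ICM relaxation of a pixel-wise classification under a Potts prior.

    unary: (H, W, K) array of class costs (e.g. negative log-likelihoods
           from a maximum-likelihood classifier).
    beta:  strength of the Potts smoothing term.
    Returns the (H, W) label map.
    """
    H, W, K = unary.shape
    labels = unary.argmin(axis=2)          # maximum-likelihood initialisation
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                # collect the labels of the 4-neighbours
                neigh = []
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        neigh.append(labels[ni, nj])
                # data cost plus Potts penalty per disagreeing neighbour
                costs = unary[i, j].copy()
                for k in range(K):
                    costs[k] += beta * sum(1 for n in neigh if n != k)
                labels[i, j] = costs.argmin()
    return labels
```

The paper's adaptive variant would modulate the Potts term per class; simulated annealing would instead accept occasional cost-increasing moves to approach the global optimum.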

    Unsupervised segmentation of road images. A multicriteria approach

    This paper presents a region-based segmentation algorithm which can be applied to various problems since it does not require a priori knowledge concerning the kind of processed images. This algorithm, based on a split-and-merge method, gives reliable results both on homogeneous grey-level images and on textured images. First, images are divided into rectangular sectors. The splitting algorithm works independently on each sector and uses a homogeneity criterion based only on grey levels. The merging is then achieved by assigning labels to each region obtained in the splitting step, using extracted feature measurements. We model the exploited fields (data field and label field) by Markov Random Fields (MRF); the segmentation is then optimally determined using the Iterated Conditional Modes (ICM) algorithm. The input data of the merging step are the regions obtained by the splitting step and their corresponding feature vectors, each combining luminance and texture parameters. The originality of this algorithm is that texture coefficients are computed directly from these regions, which serve as elementary sites for the Markov relaxation process. A region-based segmentation algorithm using both texture and grey level is thus obtained. Results on various image types are presented.
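The splitting step of such a split-and-merge scheme is easy to illustrate: a block is recursively divided into quadrants until its grey-level variance falls below a homogeneity threshold. A minimal sketch (quadtree variant with a hypothetical threshold, not the paper's code):

```python
import numpy as np

def quadtree_split(img, max_var=25.0, min_size=2):
    """Splitting step of a split-and-merge segmentation.

    A block is split into four quadrants while its grey-level variance
    exceeds max_var; returns the homogeneous blocks as
    (row, col, height, width) tuples.
    """
    blocks = []

    def split(r, c, h, w):
        block = img[r:r + h, c:c + w]
        if block.var() <= max_var or h <= min_size or w <= min_size:
            blocks.append((r, c, h, w))
            return
        h2, w2 = h // 2, w // 2
        split(r, c, h2, w2)                      # top-left
        split(r, c + w2, h2, w - w2)             # top-right
        split(r + h2, c, h - h2, w2)             # bottom-left
        split(r + h2, c + w2, h - h2, w - w2)    # bottom-right

    split(0, 0, img.shape[0], img.shape[1])
    return blocks
```

The resulting blocks would then serve as the elementary sites for the Markov-relaxation merging step described in the abstract.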

    Texture analysis and its applications in biomedical imaging: a survey

    Texture analysis describes a variety of image analysis techniques that quantify the variation in intensity and pattern. This paper provides an overview of several texture analysis approaches, addressing the rationale supporting them, their advantages, drawbacks, and applications. This survey's emphasis is on collecting and categorising over five decades of active research on texture analysis. Brief descriptions of different approaches are presented along with application examples. From a broad range of texture analysis applications, this survey's final focus is on biomedical image analysis. An up-to-date list of biological tissues and organs in which disorders produce texture changes that may be used to spot disease onset and progression is provided. Finally, the role of texture analysis methods as biomarkers of disease is summarised. (Manuscript received February 3, 2021; revised June 23, 2021; accepted September 21, 2021; date of publication September 27, 2021; date of current version January 24, 2022. This work was supported in part by the Portuguese Foundation for Science and Technology (FCT) under Grants PTDC/EMD-EMD/28039/2017, UIDB/04950/2020, PestUID/NEU/04539/2019, and CENTRO-01-0145-FEDER-000016, and by FEDER-COMPETE under Grant POCI-01-0145-FEDER-028039. Corresponding author: Rui Bernardes.)
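As one concrete example of the statistics such surveys cover, a grey-level co-occurrence matrix (GLCM) with two classic Haralick-style features can be computed as follows (a minimal NumPy sketch, not code from the paper):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for offset (dy, dx), normalised
    to a joint probability, plus two classic Haralick-style statistics.

    img: integer array with values in [0, levels).
    """
    m = np.zeros((levels, levels))
    H, W = img.shape
    for i in range(H - dy):
        for j in range(W - dx):
            m[img[i, j], img[i + dy, j + dx]] += 1
    m /= m.sum()
    i, j = np.indices(m.shape)
    contrast = float(((i - j) ** 2 * m).sum())   # local intensity variation
    energy = float((m ** 2).sum())               # uniformity of the pattern
    return m, contrast, energy
```

A perfectly flat region gives zero contrast and maximal energy; a fine checkerboard maximises contrast for a one-pixel offset.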

    A textural deep neural network architecture for mechanical failure analysis

    Nowadays, many classification problems are approached with deep learning architectures, and the results are outstanding compared with those obtained by traditional computer vision approaches. However, when it comes to texture, deep learning has not had the same success as on other tasks. Texture is an inherent characteristic of objects and the main descriptor for many applications in computer vision; due to its stochastic appearance, it is difficult to obtain a mathematical model for it. According to the state of the art, deep learning techniques have limitations when it comes to learning textural features; to classify texture with deep neural networks, it is essential to integrate them with handcrafted features or to develop an architecture that resembles those features. Solving this problem would contribute to different applications, such as fractographic analysis. To achieve the best performance in any industry, it is important that companies have a failure analysis able to show the causes of flaws, offer applications and solutions, and generate alternatives that allow customers to obtain more efficient components and production. The failure of an industrial element has consequences such as significant economic losses and, in some cases, even human losses. With this analysis it is possible to examine the history of the damaged piece in order to find how and why it failed, and to help prevent future failures and implement safer conditions. Visual inspection is the basis of every fractographic process in failure analysis and the main tool for fracture classification. This process is usually done by personnel who are not experts on the topic and normally lack the knowledge or experience required for the job, which without question increases the chances of a wrong classification and negative results for the whole process.
    This research focuses on the development of a computer vision system that implements a textural deep learning architecture. Several approaches were considered, including combining deep learning techniques with traditional handcrafted features and developing a new architecture based on the wavelet transform and multiresolution analysis. The algorithm was tested on textural benchmark datasets and on the classification of mechanical fractures with characteristic textures and marks on the surfaces of crystalline materials.
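The wavelet-based multiresolution features mentioned above can be illustrated with a plain Haar decomposition: detail sub-band energies at successive scales form a simple texture descriptor. This is a hedged sketch of the general idea, not the thesis's architecture:

```python
import numpy as np

def haar_level(img):
    """One level of a 2-D Haar wavelet transform on an even-sized image;
    returns the approximation (LL) and the three detail sub-bands."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    LL = (a + b + c + d) / 4
    LH = (a + b - c - d) / 4   # horizontal detail
    HL = (a - b + c - d) / 4   # vertical detail
    HH = (a - b - c + d) / 4   # diagonal detail
    return LL, LH, HL, HH

def wavelet_texture_features(img, n_levels=2):
    """Multiresolution texture descriptor: detail sub-band energies
    (mean squared coefficients) at each decomposition level."""
    feats = []
    ll = img
    for _ in range(n_levels):
        ll, lh, hl, hh = haar_level(ll)
        feats += [float((s ** 2).mean()) for s in (lh, hl, hh)]
    return feats
```

In the thesis, such multiresolution analysis is built into a deep architecture rather than used as a standalone descriptor.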

    Characterization and classification of textures on natural images

    Most existing texture classification methods are based on a parameter extraction stage followed by a classifier stage. Using this kind of method in an operational application requires taking into account the risk of class confusion in the parameter space. We propose to take advantage of the Gagalowicz conjecture in order to minimise this risk: the conjecture provides a set of parameters that completely describes the texture. We show that a connectionist classifier is able to exploit these parameters efficiently.
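A "connectionist classifier" here means a neural network fed with the parameter vectors; the smallest possible illustration is a single perceptron trained on hypothetical texture parameters (not the paper's network or data):

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=50):
    """Minimal connectionist classifier: a single perceptron trained on
    parameter vectors X with binary labels y in {0, 1}."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0
            w += lr * (yi - pred) * xi          # classic perceptron update
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)
```

With parameters that fully characterise the texture, as the conjecture suggests, the classes should be separable in this parameter space and such a classifier converges.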

    Texture classification by pattern knowledge discovery

    Texture analysis has received a considerable amount of attention over the last few decades, as it forms the basis of most object recognition methods. Texture analysis mainly comprises texture classification and texture segmentation, and both require an important step: texture feature extraction. Many approaches have been proposed, either as spatial-domain or frequency-domain methods. Many texture features based on spatial-domain methods have been proposed, as these methods have proven superior. Texture can also be considered as a collection of patterns. Distances, directions and pixel gray-level values can determine the relationship among pixels within each pattern. Therefore, patterns are considered the basis of textures, and textures are considered different if they contain distinguishable patterns. A pattern knowledge discovery procedure is introduced to find the distinctive texture patterns with gray-level and distance deviations. An apriori algorithm with joining, cleaning and pruning steps is introduced to find frequent patterns, from which higher-order patterns that can be used to categorise textures are generated. A large number of textures from the Brodatz benchmark album were used to test the proposed method and validate the system; the overall high accuracy of the testing procedure is encouraging.
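The apriori scheme described (join candidates, prune by subsets, count support) can be sketched on generic itemsets; the paper's texture-specific patterns would play the role of the items. A minimal illustration, not the paper's implementation:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Apriori frequent-pattern mining: size-(k+1) candidates are joined
    from frequent size-k sets, and any candidate with an infrequent
    subset is pruned before its support is counted."""
    transactions = [frozenset(t) for t in transactions]

    def support(s):
        return sum(s <= t for t in transactions) / len(transactions)

    items = {frozenset([i]) for t in transactions for i in t}
    frequent = {}
    level = {s for s in items if support(s) >= min_support}
    while level:
        frequent.update({s: support(s) for s in level})
        # join step: union pairs of frequent k-sets into (k+1)-candidates
        candidates = {a | b for a in level for b in level
                      if len(a | b) == len(a) + 1}
        # prune step: every k-subset of a candidate must itself be frequent
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent
                             for s in combinations(c, len(c) - 1))}
        level = {c for c in candidates if support(c) >= min_support}
    return frequent
```

In the paper's setting, higher-order frequent patterns discovered this way would serve as the discriminative signatures for categorising textures.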

    Modeling small objects under uncertainties: novel algorithms and applications

    Active Shape Models (ASM), Active Appearance Models (AAM) and Active Tensor Models (ATM) are common approaches to modeling elastic (deformable) objects. These models require an ensemble of shapes and textures, annotated by human experts, in order to identify the model order and parameters. A candidate object may then be represented by a weighted sum of basis functions generated by an optimization process. These methods have been very effective for modeling deformable objects in biomedical imaging, biometrics, computer vision and graphics. They have mainly been tried on objects with known features that are amenable to manual (expert) annotation; they have not been examined on objects too ambiguous to be uniquely characterized by experts. This dissertation presents a unified approach for modeling, detecting, segmenting and categorizing small objects under uncertainty, with a focus on lung nodules that may appear in low-dose CT (LDCT) scans of the human chest. The AAM, ASM and ATM approaches are used for the first time on this application. A new formulation of object detection by template matching, as an energy optimization, is introduced. Nine similarity measures of matching have been quantitatively evaluated for detecting nodules less than 1 cm in diameter. Statistical methods that combine intensity, shape and spatial interaction are examined for the segmentation of small objects. Extensions of the intensity model using the linear combination of Gaussians (LCG) approach are introduced, in order to estimate the number of modes in the LCG equation. The classical maximum a posteriori (MAP) segmentation approach has been adapted to handle the segmentation of small lung nodules that are randomly located in the lung tissue. A novel empirical approach has been devised to simultaneously detect and segment the lung nodules in LDCT scans. The level set method was also applied to lung nodule segmentation.
    A new formulation of the energy function controlling the level set propagation has been introduced, taking into account the specific properties of the nodules. Finally, a novel approach for classifying the segmented nodules into categories has been introduced. Geometric object descriptors such as SIFT, ASIFT, SURF and LBP have been used for feature extraction and matching of small lung nodules; LBP has been found to be the most robust. Categorization implies classification of detected and segmented objects into classes or types. The object descriptors have been deployed in the detection step for false-positive reduction, and in the categorization stage to assign a class and type to the nodules. The AAM/ASM/ATM models have been used for the categorization stage. The front-end processes of lung nodule modeling, detection, segmentation and classification/categorization are model-based and data-driven. This dissertation is the first attempt in the literature at creating an entirely model-based approach for lung nodule analysis.
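The abstract does not name the nine similarity measures it evaluates; zero-mean normalised cross-correlation is one classic choice and illustrates the "detection as energy optimisation" idea, here as maximising a matching score over all window positions. A hedged sketch, not the dissertation's code:

```python
import numpy as np

def ncc_detect(image, template):
    """Template matching: slide the template over the image and return
    the position maximising the zero-mean normalised cross-correlation
    (one of many possible similarity measures)."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    H, W = image.shape
    best, best_pos = -np.inf, (0, 0)
    for i in range(H - th + 1):
        for j in range(W - tw + 1):
            win = image[i:i + th, j:j + tw]
            wz = win - win.mean()
            denom = np.sqrt((wz ** 2).sum()) * tn
            if denom == 0:
                continue                      # skip flat windows
            score = (wz * t).sum() / denom    # NCC in [-1, 1]
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best
```

For real nodule detection the template would be a learned appearance model rather than a fixed patch, and the score map would feed the false-positive reduction step described above.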