
    Mode Shape Description and Model Updating of Axisymmetric Structures Using Radial Tchebichef Moment Descriptors

    A novel approach to mode shape feature extraction and model updating of axisymmetric structures based on radial Tchebichef moment (RTM) descriptors is proposed in this study. The mode shape features extracted by RTM descriptors effectively compress the full-field modal vibration data while retaining the most important information. Mode shapes reconstructed from RTM descriptors are described accurately, and simulations show that the RTM function is superior to the Zernike moment function in terms of its mathematical properties and its shape reconstruction ability. In addition, the proposed modal correlation coefficient of the RTM amplitude overcomes the main disadvantage of the modal assurance criterion (MAC), which has difficulty identifying double or closely spaced modes of symmetric structures. Furthermore, model updating of axisymmetric structures based on RTM descriptors appears to be more efficient and effective than conventional model updating that directly uses modal vibration data: it avoids manipulating large amounts of mode shape data and speeds up the convergence of the updating parameters. The use of RTM descriptors in correlation analysis and model updating is demonstrated on a cover of an aeroengine rig. The frequency deviation between the test and the FE model was reduced from 17.13% to 1.23% for the first 13 modes through the model updating process, which demonstrates the potential of the proposed method for industrial application.
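
    For reference, the modal assurance criterion (MAC) mentioned above has a standard closed form, MAC(phi_a, phi_b) = |phi_a' phi_b|^2 / ((phi_a' phi_a)(phi_b' phi_b)); a minimal NumPy sketch is given below. It illustrates only the baseline criterion that the paper argues struggles with close modes; the RTM amplitude correlation coefficient itself is not reproduced here.

```python
import numpy as np

def mac(phi_a: np.ndarray, phi_b: np.ndarray) -> float:
    """Modal assurance criterion between two mode-shape vectors."""
    num = np.abs(np.vdot(phi_a, phi_b)) ** 2
    den = np.real(np.vdot(phi_a, phi_a)) * np.real(np.vdot(phi_b, phi_b))
    return float(num / den)

# Two nearly parallel mode shapes give a MAC close to 1; orthogonal shapes give ~0.
phi_1 = np.array([0.1, 0.5, 0.9, 0.5, 0.1])
phi_2 = phi_1 + 0.01 * np.random.default_rng(0).standard_normal(5)
print(mac(phi_1, phi_2))
```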

    On The Potential of Image Moments for Medical Diagnosis

    Medical imaging is widely used for diagnosis and for postoperative or post-therapy monitoring. The ever-increasing number of images produced has encouraged the introduction of automated methods to assist doctors or pathologists. In recent years, especially after the advent of convolutional neural networks, many researchers have focused on this approach, considering it to be the sole method needed for diagnosis since it can classify images directly. However, many diagnostic systems still rely on handcrafted features to improve interpretability and limit resource consumption. In this work, we focused our efforts on orthogonal moments, first by providing an overview and taxonomy of their macrocategories and then by analysing their classification performance on very different medical tasks represented by four public benchmark data sets. The results confirmed that convolutional neural networks achieved excellent performance on all tasks. Despite being composed of far fewer features than those extracted by the networks, orthogonal moments proved to be competitive with them, showing comparable and, in some cases, better performance. In addition, the Cartesian and harmonic categories provided a very low standard deviation, demonstrating their robustness in medical diagnostic tasks. We strongly believe that the integration of the studied orthogonal moments can lead to more robust and reliable diagnostic systems, considering the performance obtained and the low variation of the results. Finally, since they have been shown to be effective on both magnetic resonance and computed tomography images, they can easily be extended to other imaging techniques.
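
    As an illustration of the kind of handcrafted orthogonal-moment features discussed above (not the exact feature sets or data sets benchmarked in the paper), the following sketch extracts Zernike moment magnitudes from an image patch; it assumes the mahotas library is available.

```python
import numpy as np
import mahotas  # assumed available; provides Zernike moment features

def zernike_features(image: np.ndarray, radius: int = 64, degree: int = 8) -> np.ndarray:
    """Rotation-invariant Zernike moment magnitudes of a 2-D grayscale patch."""
    return mahotas.features.zernike_moments(image, radius, degree=degree)

# Toy usage: a synthetic disc yields a compact descriptor (25 values for degree 8),
# far smaller than a CNN embedding, which is the trade-off discussed in the paper.
yy, xx = np.mgrid[:128, :128]
disc = (((yy - 64) ** 2 + (xx - 64) ** 2) < 40 ** 2).astype(float)
print(zernike_features(disc).shape)
```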

    Assessment of the Local Tchebichef Moments Method for Texture Classification by Fine Tuning Extraction Parameters

    In this paper we use machine learning to study the application of Local Tchebichef Moments (LTM) to the problem of texture classification. The original LTM method was proposed by Mukundan (2014). The LTM method can be used for texture analysis in many different ways: either by using the moment values directly or, more simply, by relating the moment values of different orders to produce a histogram similar to those of Local Binary Pattern (LBP) based methods. The original method was not fully tested on large datasets, and several of its parameters need to be characterised for performance, among them the kernel size, the moment orders, and the weights for each moment. We implemented the LTM method in a flexible way that allows modification of the parameters affecting its performance. Using four subsets from the Outex dataset (a popular benchmark for texture analysis), we trained Random Forest models to classify texture images, recording the standard metrics for each classifier, and repeated the process with several variations of the LBP method for comparison. This allowed us to find the best combination of orders and weights for the LTM method for texture classification.
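
    The classification stage described above can be sketched as follows with scikit-learn; the feature matrix here is a random stand-in for the LTM histograms, and the hyperparameter values are illustrative rather than those tuned in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in data: one LTM histogram per texture image plus its class label.
rng = np.random.default_rng(0)
X = rng.random((200, 59))          # 59-bin histograms are an assumption, not the paper's setting
y = rng.integers(0, 4, size=200)   # four texture classes

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(scores.mean(), scores.std())
```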

    Multi-Technique Fusion for Shape-Based Image Retrieval

    Content-based image retrieval (CBIR) is still in its early stages, although several attempts have been made to solve or minimize the challenges associated with it. CBIR techniques use visual contents such as color, texture, and shape to represent and index images. Of these, shapes contain richer information than color or texture. However, retrieval based on shape contents remains more difficult than that based on color or texture due to the diversity of shapes and the natural occurrence of shape transformations such as deformation, scaling, and orientation. This thesis presents an approach for fusing several shape-based image retrieval techniques for the purpose of achieving reliable and accurate retrieval performance. An extensive investigation of notable existing shape descriptors is reported. Two new shape descriptors are proposed as a means to overcome the limitations of current shape descriptors. The first descriptor is based on a novel shape signature that includes corner information in order to enhance the performance of shape retrieval techniques that use Fourier descriptors. The second descriptor is based on the curvature of the shape contour. This invariant descriptor takes an unconventional view of the curvature-scale-space map of a contour by treating it as a 2-D binary image. The descriptor is then derived from the 2-D Fourier transform of that binary image. This technique allows the descriptor to capture the detailed dynamics of the curvature of the shape and enhances the efficiency of the shape-matching process. Several experiments have been conducted in order to compare the proposed descriptors with several notable descriptors. The new descriptors not only speed up the online matching process but also lead to improved retrieval accuracy. The complexity and variety of the content of real images make it impossible for a particular choice of descriptor to be effective for all types of images. Therefore, a data-fusion formulation based on a team consensus approach is proposed as a means of achieving high-accuracy performance. In this approach a select set of retrieval techniques form a team. Members of the team exchange information so as to complement each other's assessment of a database image candidate as a match to the query image. Several experiments have been conducted based on the MPEG-7 contour-shape databases; the results demonstrate that the performance of the proposed fusion scheme is superior to that achieved by any technique individually.
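
    As context for the first proposed descriptor, a classical centroid-distance Fourier descriptor (without the corner information added in the thesis) can be sketched as follows; the invariances come from dropping the phase and normalising by the DC term.

```python
import numpy as np

def fourier_descriptor(contour_xy: np.ndarray, n_coeffs: int = 16) -> np.ndarray:
    """Scale- and rotation-invariant Fourier descriptor of a closed contour.

    contour_xy: (N, 2) array of boundary points, assumed ordered along the contour.
    """
    # Centroid-distance shape signature.
    centroid = contour_xy.mean(axis=0)
    signature = np.linalg.norm(contour_xy - centroid, axis=1)
    spectrum = np.abs(np.fft.fft(signature))
    # Dropping the phase gives rotation/starting-point invariance;
    # dividing by the DC term gives scale invariance.
    return spectrum[1:n_coeffs + 1] / spectrum[0]

# Toy usage: a circle sampled at 256 boundary points has near-zero harmonics.
t = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
print(fourier_descriptor(circle)[:4])
```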

    Texture classification using discrete Tchebichef moments

    In this paper, a method to characterize texture images based on discrete Tchebichef moments is presented. A global signature vector is derived from the moment matrix by taking into account both the magnitudes of the moments and their order. The performance of our method in several texture classification problems was compared with that achieved through other standard approaches. These include Haralick's gray-level co-occurrence matrices, Gabor filters, and local binary patterns. An extensive texture classification study was carried out by selecting images with different contents from the Brodatz, Outex, and VisTex databases. The results show that the proposed method is able to capture the essential information about texture, showing comparable or even higher performance than conventional procedures. Thus, it can be considered an effective and competitive technique for texture characterization. © 2013 Optical Society of America. J. Víctor Marcos is a Juan de la Cierva research fellow funded by the Spanish Ministry of Economy and Competitiveness.
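
    A minimal sketch of the underlying moment computation is given below: the orthonormal discrete Tchebichef (Gram) polynomials are built numerically by orthonormalising the monomial basis, and the moment matrix of a square image follows as T = P f P'. The magnitude-and-order weighting used to form the paper's signature vector is not reproduced; flattening the magnitudes is shown only as a placeholder.

```python
import numpy as np

def tchebichef_basis(N: int, max_order: int) -> np.ndarray:
    """Orthonormal discrete Tchebichef (Gram) polynomials on {0, ..., N-1}.

    Built numerically by QR-orthonormalising the monomial basis; this matches the
    analytic polynomials up to sign, which does not affect magnitude-based features.
    """
    x = np.arange(N, dtype=float)
    vander = np.vander(x, max_order + 1, increasing=True)  # columns 1, x, x^2, ...
    q, _ = np.linalg.qr(vander)
    return q.T  # row p holds the order-p polynomial sampled on the grid

def tchebichef_moment_matrix(image: np.ndarray, max_order: int = 8) -> np.ndarray:
    """Moment matrix T[p, q] of a square grayscale image."""
    P = tchebichef_basis(image.shape[0], max_order)
    return P @ image @ P.T

# Toy usage on a random 64x64 patch; flattening the magnitudes stands in for a signature.
rng = np.random.default_rng(0)
patch = rng.random((64, 64))
T = tchebichef_moment_matrix(patch)
print(T.shape, np.abs(T).ravel().shape)
```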

    Construction of a complete set of orthogonal Fourier-Mellin moment invariants for pattern recognition applications

    The completeness property of a set of invariant descriptors is of fundamental importance from both the theoretical and the practical points of view. In this paper, we propose a general approach to constructing a complete set of orthogonal Fourier-Mellin moment (OFMM) invariants. By establishing a relationship between the OFMMs of the original image and those of an image having the same shape but distinct orientation and scale, a complete set of scale and rotation invariants is derived. The efficiency and the robustness to noise of the method for recognition tasks are shown by comparing it with some existing methods on several data sets.
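
    The rotation part of such invariants rests on the fact that rotating an image only changes the phase of circular-harmonic moments, so their magnitudes are unchanged. The sketch below checks this numerically with radial polynomials orthogonalised on the unit disc; it is an illustration of the principle under stated assumptions, not the complete OFMM invariant set constructed in the paper.

```python
import numpy as np
from scipy.ndimage import rotate  # assumed available for the rotation check

def radial_basis(r: np.ndarray, max_n: int) -> np.ndarray:
    """Radial polynomials orthogonalised over the unit-disc samples.

    Gram-Schmidt on the monomials 1, r, r^2, ...; because the samples are area
    pixels, plain sums already carry the r dr weight of the polar area element.
    Up to normalisation this mirrors the Fourier-Mellin radial functions.
    """
    basis = []
    for n in range(max_n + 1):
        q = r ** n
        for prev in basis:
            q = q - (np.sum(q * prev) / np.sum(prev * prev)) * prev
        basis.append(q)
    return np.stack(basis)

def circular_moment_magnitudes(image: np.ndarray, max_n: int = 3, max_m: int = 3) -> np.ndarray:
    """|M_nm| of circular-harmonic moments computed inside the unit disc."""
    h, w = image.shape
    y, x = np.mgrid[:h, :w]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    r = np.hypot((x - cx) / cx, (y - cy) / cy)
    theta = np.arctan2(y - cy, x - cx)
    mask = r <= 1.0
    rad = radial_basis(r[mask], max_n)
    mags = np.empty((max_n + 1, max_m + 1))
    for n in range(max_n + 1):
        for m in range(max_m + 1):
            kernel = rad[n] * np.exp(-1j * m * theta[mask])
            mags[n, m] = np.abs(np.sum(image[mask] * kernel))
    return mags

# Rotation check on a smooth off-centre blob: the magnitudes should barely change.
yy, xx = np.mgrid[:129, :129]
blob = np.exp(-(((yy - 64) ** 2 + (xx - 90) ** 2) / (2.0 * 10.0 ** 2)))
diff = np.abs(circular_moment_magnitudes(blob)
              - circular_moment_magnitudes(rotate(blob, 30.0, reshape=False)))
print(diff.max())  # small relative to the moment magnitudes themselves
```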

    Feature Extraction Methods for Character Recognition

    Abstract not included for this record.

    Automatic Segmentation and Classification of Red and White Blood cells in Thin Blood Smear Slides

    Get PDF
    In this work we develop a system for the automatic detection and classification of cytological images, which plays an increasingly important role in medical diagnosis. A primary aim of this work is the accurate segmentation of cytological images of blood smears and subsequent feature extraction, along with studying related classification problems such as the identification and counting of peripheral blood smear particles and the classification of white blood cells into five types. Our proposed approach benefits from powerful image processing techniques to perform a complete blood count (CBC) without human intervention. The general framework of this blood smear analysis research is as follows. First, a digital blood smear image is de-noised using an optimized Bayesian non-local means filter so that the cell counting system remains dependable under different image capture conditions. An edge preservation technique based on the Kuwahara filter is then used to recover degraded and blurred white blood cell boundaries while reducing the residual effect of noise in the images. After denoising and edge enhancement, the next step is binarization using a combination of Otsu and Niblack thresholding to separate the cells from the stained background. Cell separation and counting are achieved by granulometry, advanced active contours without edges, and morphological operators with the watershed algorithm. This is followed by the recognition of different types of white blood cells (WBCs) and the segmentation of red blood cells (RBCs). Three main types of features, namely shape, intensity, and texture invariant features, are then used in combination with a variety of classifiers. The following features are used in this work: intensity histogram features, invariant moments, the relative area, co-occurrence and run-length matrices, dual tree complex wavelet transform features, and Haralick and Tamura features. Next, different statistical approaches involving correlation, distribution, and redundancy are used to measure the dependency between features and to select feature variables for white blood cell classification. A global sensitivity analysis with random sampling-high dimensional model representation (RS-HDMR), which can deal with independent and dependent input feature variables, is used to assess the dominant discriminatory power and reliability of each feature, leading to an efficient feature selection. These feature selection results are compared in experiments with the branch and bound method and with sequential forward selection (SFS). This work examines support vector machines (SVM) and convolutional neural networks (LeNet5) for white blood cell classification. Finally, the white blood cell classification system is validated in experiments conducted on cytological images of normal, poor-quality blood smears. These experimental results are also assessed against ground truth obtained manually from medical experts.
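
    A much-simplified sketch of the binarisation-plus-watershed idea for separating touching cells is given below, using Otsu thresholding and a distance-transform watershed from scikit-image and SciPy; the Bayesian non-local means denoising, Kuwahara filtering, Niblack thresholding, granulometry, and active contours of the full pipeline are omitted, and the marker rule is an assumption made for the example.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed

def count_cells(gray: np.ndarray) -> int:
    """Rough cell count: Otsu binarisation followed by a distance-transform watershed."""
    binary = gray < threshold_otsu(gray)  # stained cells assumed darker than the background
    distance = ndi.distance_transform_edt(binary)
    # Markers from strong distance peaks; a crude stand-in for the granulometry step.
    markers, _ = ndi.label(distance > 0.6 * distance.max())
    labels = watershed(-distance, markers, mask=binary)
    return int(labels.max())

# Toy usage: two touching synthetic "cells" on a bright background are split into two labels.
yy, xx = np.mgrid[:200, :200]
cells = (((yy - 100) ** 2 + (xx - 80) ** 2) < 30 ** 2) | (((yy - 100) ** 2 + (xx - 135) ** 2) < 30 ** 2)
gray = 1.0 - 0.8 * cells.astype(float)
print(count_cells(gray))
```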