
    On The Potential of Image Moments for Medical Diagnosis

    Medical imaging is widely used for diagnosis and for postoperative or post-therapy monitoring. The ever-increasing number of images produced has encouraged the introduction of automated methods to assist doctors and pathologists. In recent years, especially after the advent of convolutional neural networks, many researchers have focused on this approach, considering it to be the only method for diagnosis since it can perform a direct classification of images. However, many diagnostic systems still rely on handcrafted features to improve interpretability and limit resource consumption. In this work, we focused our efforts on orthogonal moments, first by providing an overview and taxonomy of their macrocategories and then by analysing their classification performance on very different medical tasks represented by four public benchmark data sets. The results confirmed that convolutional neural networks achieved excellent performance on all tasks. Despite being composed of far fewer features than those extracted by the networks, orthogonal moments proved to be competitive with them, showing comparable and, in some cases, better performance. In addition, the Cartesian and harmonic categories showed very low standard deviations, demonstrating their robustness in medical diagnostic tasks. We strongly believe that the integration of the studied orthogonal moments can lead to more robust and reliable diagnostic systems, given the performance obtained and the low variation of the results. Finally, since they have been shown to be effective on both magnetic resonance and computed tomography images, they can be easily extended to other imaging techniques.
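    As a concrete illustration of the kind of orthogonal-moment features discussed in this abstract, the following minimal sketch computes 2D Legendre moments of a grayscale image with NumPy. The function name, the maximum order and the rectangular integration rule are illustrative assumptions, not details taken from the paper.

import numpy as np
from numpy.polynomial import legendre as L

def legendre_moments(image, order=4):
    # Legendre moments of a 2D grayscale image up to `order`.
    # Pixel coordinates are mapped to [-1, 1] on both axes and
    # lambda_pq = (2p+1)(2q+1)/4 * sum_y sum_x P_p(x) P_q(y) f(x, y) dx dy
    # is accumulated with a simple rectangular approximation.
    img = np.asarray(image, dtype=float)
    ny, nx = img.shape
    xs = np.linspace(-1.0, 1.0, nx)
    ys = np.linspace(-1.0, 1.0, ny)
    dx, dy = 2.0 / nx, 2.0 / ny
    # Evaluate Legendre polynomials P_0..P_order on each axis.
    Px = np.stack([L.legval(xs, [0] * p + [1]) for p in range(order + 1)])
    Py = np.stack([L.legval(ys, [0] * q + [1]) for q in range(order + 1)])
    moments = np.empty((order + 1, order + 1))
    for p in range(order + 1):
        for q in range(order + 1):
            norm = (2 * p + 1) * (2 * q + 1) / 4.0
            moments[p, q] = norm * dx * dy * np.sum(np.outer(Py[q], Px[p]) * img)
    return moments

The flattened moment matrix can then be fed to any conventional classifier, which is the general handcrafted-feature pipeline the paper compares against convolutional networks.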

    A neuro-genetic hybrid approach to automatic identification of plant leaves

    Plants are essential for the existence of most living things on this planet: they provide food, shelter, and medicine. The ability to identify plants is very important for several applications, including conservation of endangered plant species, rehabilitation of land after mining activities and differentiation of crop plants from weeds. In recent times, many researchers have attempted to develop automated plant species recognition systems. However, current computer-based plant recognition systems have limitations, as some plants are naturally complex and it is therefore difficult to extract and represent their features. Further, natural variation of features within the same plant and similarities between plants of different species cause problems in classification. This thesis developed a novel hybrid intelligent system based on a neuro-genetic model for automatic recognition of plants from leaf image analysis, using a novel approach that combines several image descriptors with Cellular Neural Networks (CNN), a Genetic Algorithm (GA), and Probabilistic Neural Networks (PNN) to address the classification challenges of computer-based plant species identification from images of plant leaves. A GA-based feature selection module was developed to select the best of these leaf features. Particle Swarm Optimization (PSO) and Principal Component Analysis (PCA) were also used side by side for comparison and to provide rigorous feature selection and analysis. Statistical analysis using ANOVA and correlation techniques confirmed the effectiveness of the GA-based and PSO-based techniques, as there were no redundant features and the subsets of features selected by both techniques correlated well. In the past, the number of principal components (PCs) was selected by the conventional method associated with PCA; in this study, however, the GA was used to select a minimum number of PCs from the original PC space. This reduced the computational cost in terms of time and increased the accuracy of the classifier used. The algebraic nature of the GA's fitness function ensures good performance of the GA. Furthermore, the GA was also used to optimize the parameters of a CNN (used for image segmentation), which was then uniquely combined with the PNN to improve and stabilize the performance of the classification system. The CNN, being an ordinary differential equation (ODE), was solved using the Runge-Kutta 4th-order algorithm in order to minimize discretisation errors associated with edge detection. This study involved the extraction of 112 features from the images of plant species found in the publicly available Flavia dataset, using the MATLAB programming environment. These features include Zernike Moments (20 ZMs), Fourier Descriptors (21 FDs), Legendre Moments (20 LMs), Hu 7 Moments (7 Hu7Ms), Texture Properties (22 TP), Geometrical Properties (10 GP), and Colour Features (12 CF). With the use of the GA, only 14 features were finally selected for optimal accuracy. The PNN was genetically optimized to ensure optimal accuracy, since it is not best practice to fix the tuning parameters of the PNN arbitrarily. Two separate GA algorithms were implemented to optimize the PNN: the GA provided by the MATLAB Optimization Toolbox (GA1) and a separately implemented GA (GA2). The best chromosome (PNN spread) for GA1 was 0.035 with an associated classification accuracy of 91.3740%, while a spread value of 0.06 was obtained from GA2, giving rise to an improved classification accuracy of 92.62%.
The PNN-based classifier used in this study was benchmarked against other classifiers such as the Multi-Layer Perceptron (MLP), k-Nearest Neighbour (kNN), Naive Bayes Classifier (NBC), Radial Basis Function (RBF) network, and ensemble classifiers (AdaBoost). The best candidate among these classifiers was the genetically optimized PNN. Some computational theoretic properties of the PNN are also presented.
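For readers unfamiliar with the PNN and its spread parameter, below is a minimal sketch of a probabilistic neural network viewed as a Gaussian Parzen-window classifier, written in Python. The class name is hypothetical, and the default spread (0.06, the GA2 value quoted above) is used only to tie the example to the abstract; it does not reproduce the thesis implementation.

import numpy as np

class PNN:
    # Probabilistic neural network: a Gaussian Parzen-window classifier.
    # `spread` is the kernel smoothing parameter that the thesis tunes
    # with a genetic algorithm.
    def __init__(self, spread=0.06):
        self.spread = spread

    def fit(self, X, y):
        self.X_ = np.asarray(X, dtype=float)
        self.y_ = np.asarray(y)
        self.classes_ = np.unique(self.y_)
        return self

    def predict(self, X):
        preds = []
        for x in np.asarray(X, dtype=float):
            # Pattern layer: Gaussian kernel response of every training sample to x.
            d2 = np.sum((self.X_ - x) ** 2, axis=1)
            k = np.exp(-d2 / (2.0 * self.spread ** 2))
            # Summation layer: average response per class approximates the
            # class-conditional density; the largest one wins.
            scores = [k[self.y_ == c].mean() for c in self.classes_]
            preds.append(self.classes_[int(np.argmax(scores))])
        return np.array(preds)

A GA-based tuner would then evaluate candidate spread values (the chromosomes) by the cross-validated accuracy of such a classifier and keep the fittest.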

    A very simple framework for 3D human poses estimation using a single 2D image: Comparison of geometric moments descriptors.

    In this paper, we propose a framework to automatically extract the 3D pose of an individual from a single silhouette image obtained with a classical low-cost camera, without any depth information. By pose, we mean the configuration of the human bones needed to reconstruct a 3D skeleton representing the 3D posture of the detected human. Our approach combines previously learned correspondences between silhouettes and skeletons extracted from simulated 3D human models publicly available on the internet. The main advantages of such an approach are that silhouettes can be extracted very easily from video, and that 3D human models can be animated using motion capture data in order to quickly build training data for any movement. To match detected silhouettes with simulated silhouettes, we compared geometric invariant moments. Our results show that the proposed method provides very promising results with a very low processing time.
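    A minimal sketch of the silhouette-matching step is given below, using OpenCV's Hu moment invariants as the geometric invariant moments and a nearest-neighbour search over the training silhouettes. The helper names (hu_signature, nearest_pose) and the log-scaled Euclidean comparison are illustrative assumptions, not the authors' exact pipeline.

import cv2
import numpy as np

def hu_signature(silhouette):
    # Seven Hu moment invariants of a binary silhouette image,
    # log-scaled so the components have comparable magnitudes.
    m = cv2.moments(silhouette, binaryImage=True)
    hu = cv2.HuMoments(m).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def nearest_pose(query_silhouette, train_silhouettes, train_skeletons):
    # Return the training skeleton whose simulated silhouette is closest
    # to the query silhouette in Hu-moment space.
    q = hu_signature(query_silhouette)
    signatures = np.array([hu_signature(s) for s in train_silhouettes])
    idx = int(np.argmin(np.linalg.norm(signatures - q, axis=1)))
    return train_skeletons[idx]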

    Advances in Manipulation and Recognition of Digital Ink

    Handwriting is one of the most natural ways for a human to record knowledge. Recently, this type of human-computer interaction has received increasing attention due to the rapid evolution of touch-based hardware and software. While hardware support for digital ink has reached maturity, algorithms for recognition of handwriting in certain domains, including mathematics, still lack robustness. At the same time, users may own several pen-based devices, and sharing training data in an adaptive recognition setting can be challenging. In addition, the resolution of pen-based devices keeps improving, making the ink cumbersome to process and store. This thesis develops several advances for efficient processing, storage and recognition of handwriting, which are applicable to classification methods based on functional approximation. In particular, we propose improvements to the classification of isolated characters and groups of rotated characters, as well as symbols of substantially different size. We then develop an algorithm for adaptive classification of a user's handwritten mathematical characters. The adaptive algorithm can be especially useful in the cloud-based recognition framework described further in the thesis. We investigate whether the training data available in the cloud can be useful to a new writer during the training phase, by extracting the styles of individuals with similar handwriting and recommending styles to the writer. We also perform a factorial analysis of the algorithm for recognition of n-grams of rotated characters. Finally, we present a fast method for compression of linear pieces of handwritten strokes and compare it with an enhanced version of the algorithm based on functional approximation of strokes. Experimental results demonstrate the validity of the theoretical contributions, which form a solid foundation for the next generation of handwriting recognition systems.
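    To make the notion of functional approximation of strokes concrete, the sketch below turns one pen stroke into a fixed-length vector of truncated Chebyshev coefficients for its x(t) and y(t) curves, which is the kind of descriptor such classifiers consume. The function name, the arc-length parameterization and the series degree are assumptions made for illustration, not the thesis's exact formulation.

import numpy as np
from numpy.polynomial import chebyshev as C

def stroke_coefficients(points, degree=10):
    # `points` is an (n, 2) array of pen samples for one stroke, with n > degree
    # and nonzero total length. Arc length is used as the curve parameter and
    # rescaled to [-1, 1] before fitting truncated Chebyshev series to x(t), y(t).
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t = 2.0 * t / t[-1] - 1.0
    cx = C.chebfit(t, pts[:, 0], degree)
    cy = C.chebfit(t, pts[:, 1], degree)
    # The concatenated coefficients form a fixed-size, resolution-independent
    # descriptor of the stroke.
    return np.concatenate([cx, cy])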

    Feature Extraction Methods for Character Recognition

    Abstract not included.

    Efficient Local Comparison Of Images Using Krawtchouk Descriptors

    Image comparison can be cumbersome in both computational complexity and runtime, due to factors such as rotation, scaling, and translation of the object in question. Due to the locality of Krawtchouk polynomials, relatively few descriptors are necessary to describe a given image, and this can be achieved with minimal memory usage. Using this method, not only can images be described efficiently as a whole, but specific regions of images can be described as well, without cropping. Thanks to this property, queries can be found within a single large image, or within a collection of large images, which serves as a database for the search. Krawtchouk descriptors can also describe collections of patches of 3D objects, which is explored in this paper, as well as a theoretical methodology for describing nD hyperobjects. Test results for an implementation of 3D Krawtchouk descriptors in GNU Octave, together with statistics on effectiveness and runtime, are included, and the code used for testing will be published as open source in the near future.
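    As an illustration of how such descriptors can be computed, the sketch below builds an orthonormal weighted Krawtchouk basis numerically (by orthonormalizing monomials under the binomial weight, which reproduces the standard polynomials up to sign) and contracts it with an image to obtain a block of Krawtchouk moments. Shifting the parameters px and py moves the region of the image that the low-order moments emphasize, which is the locality property mentioned above. Function names and defaults are illustrative assumptions, not the paper's implementation.

import numpy as np
from scipy.special import comb

def weighted_krawtchouk_basis(N, p=0.5, order=10):
    # Columns are the weighted Krawtchouk polynomials sqrt(w(x)) * K_n(x; p, N-1)
    # on x = 0..N-1, obtained by QR (Gram-Schmidt) on the weighted monomials;
    # this matches the recurrence-based basis up to column signs.
    x = np.arange(N)
    w = comb(N - 1, x) * p ** x * (1.0 - p) ** (N - 1 - x)   # binomial weight
    V = np.sqrt(w)[:, None] * np.vander(x, order + 1, increasing=True)
    Q, _ = np.linalg.qr(V)
    return Q

def krawtchouk_moments(image, order=10, px=0.5, py=0.5):
    # Block of Krawtchouk moments up to `order`; moving px, py away from 0.5
    # shifts the emphasis of the low-order basis functions across the image.
    img = np.asarray(image, dtype=float)
    Ky = weighted_krawtchouk_basis(img.shape[0], py, order)
    Kx = weighted_krawtchouk_basis(img.shape[1], px, order)
    return Ky.T @ img @ Kx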

    A parallel implementation of 3D Zernike moment analysis

    Zernike polynomials are a well-known set of functions that find many applications in image and pattern characterization because they allow the construction of shape descriptors that are invariant under translations, rotations and scale changes. The concepts behind them can be extended to higher-dimensional spaces, making them also suitable for describing volumetric data. They have been used less than their properties might suggest because of their high computational cost. We present a parallel implementation of 3D Zernike moment analysis, written in C with CUDA extensions, which makes it practical to employ Zernike descriptors in interactive applications, yielding a performance of several frames per second on voxel datasets about 200³ in size. In our contribution, we describe the challenges of implementing 3D Zernike analysis on a general-purpose GPU. These include how to deal with numerical inaccuracies, due to the high precision demands of the algorithm, and how to handle the high volume of input data so that it does not become a bottleneck for the system.
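    3D Zernike moments are commonly assembled as fixed linear combinations of raw 3D geometric moments of the voxel grid, and it is this accumulation stage that dominates the cost a parallel implementation targets. The sketch below is a minimal NumPy version of that building block only; the function name and the mapping of voxel centres into the unit cube are illustrative assumptions, not the paper's CUDA code.

import numpy as np

def geometric_moments_3d(voxels, order=8):
    # Raw 3D geometric moments m_pqr = sum x^p y^q z^r f(x, y, z) up to `order`,
    # with voxel centres mapped into [-1, 1]^3 so the object can later be placed
    # inside the unit ball required by the Zernike radial polynomials.
    v = np.asarray(voxels, dtype=np.float64)
    nz, ny, nx = v.shape
    zs = (np.arange(nz) + 0.5) / nz * 2.0 - 1.0
    ys = (np.arange(ny) + 0.5) / ny * 2.0 - 1.0
    xs = (np.arange(nx) + 0.5) / nx * 2.0 - 1.0
    Pz = np.stack([zs ** r for r in range(order + 1)])
    Py = np.stack([ys ** q for q in range(order + 1)])
    Px = np.stack([xs ** p for p in range(order + 1)])
    # Three tensor contractions replace the naive quadruple loop.
    return np.einsum('rz,qy,px,zyx->pqr', Pz, Py, Px, v)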