346 research outputs found

    Spectrum skeletonization: a new method for acoustic signal extraction

    Vibration Analysis Tests (VAT) and Acoustic Emission (AE) tests are used in several industrial applications, many of which perform analysis in the frequency domain. Peaks in the power spectral density hold relevant information about acoustic events. In this paper we propose a novel method for feature extraction from vibration samples by analyzing the shape of their auto power spectral density function. The approach uses skeletonization techniques to find the hierarchical structure of the spectral peaks. The proposed method can be applied as a preprocessing step for spectrum analysis of vibration signals.
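    As a rough illustration of spectral-peak preprocessing (this is a generic prominence-based ranking, not the paper's skeletonization method; the synthetic signal, sampling rate and 5% prominence threshold are invented for the example):

```python
import numpy as np
from scipy.signal import welch, find_peaks

# Synthetic vibration record: two tones (50 Hz and 120 Hz) plus noise.
fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
x += 0.1 * rng.standard_normal(t.size)

# Auto power spectral density via Welch's method.
f, pxx = welch(x, fs=fs, nperseg=512)

# Rank spectral peaks by prominence -- a crude stand-in for the
# hierarchical peak structure that skeletonization would extract.
peaks, props = find_peaks(pxx, prominence=0.05 * pxx.max())
order = np.argsort(props["prominences"])[::-1]
dominant = f[peaks][order]
```

    Sorting by prominence already yields a coarse ordering of major versus minor peaks; a true spectral skeleton would additionally record how minor peaks attach to major ones.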

    Classification of Plants Using Images of their Leaves

    Plant recognition is a matter of interest for scientists as well as laymen, and computer-aided technologies can make the process much easier. Botanists use morphological features of plants to recognize them; these features can also serve as the basis for an automated classification tool. For example, images of leaves of different plants can be studied to determine effective algorithms for classifying different plants. In this thesis, salient features of plant leaves are studied that may be used as a basis for plant classification and recognition. These features are independent of leaf maturity and of image translation, rotation and scaling, and are studied to develop an approach that produces the best classification algorithm. First, the developed algorithms are used to classify a training set of images; then, a testing set of images is used to verify the classification algorithms.
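    One classical example of a feature with exactly these invariances is a normalized image moment; the sketch below is only an illustration (the thesis's actual feature set is not reproduced here, and the disc test shape stands in for a leaf silhouette):

```python
import numpy as np

def hu_first_invariant(mask):
    """First Hu moment, eta20 + eta02, of a binary shape mask:
    invariant to translation, rotation and uniform scaling."""
    ys, xs = np.nonzero(mask)
    m00 = float(ys.size)
    mu20 = ((xs - xs.mean()) ** 2).sum()
    mu02 = ((ys - ys.mean()) ** 2).sum()
    return (mu20 + mu02) / m00 ** 2  # normalization order (p+q)/2 + 1 = 2

def disc(n, r):
    """Filled disc test shape (a stand-in for a leaf silhouette)."""
    yy, xx = np.mgrid[:n, :n]
    return (xx - n / 2) ** 2 + (yy - n / 2) ** 2 <= r * r

# The value barely changes when the shape is scaled by a factor of 2.
small = hu_first_invariant(disc(100, 20))
big = hu_first_invariant(disc(200, 40))
```

    Because the same value is obtained regardless of where the leaf sits in the frame, how it is rotated, or how far away the camera is, such features make suitable inputs for the classification stage.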

    A novel procedure for medial axis reconstruction of vessels from Medical Imaging segmentation

    A procedure for reconstructing the central axis from diagnostic image processing is presented here. It addresses the widespread stepped-shape effect that characterizes the most common algorithmic tools for computing the central axis in diagnostic imaging applications, by correcting the spatial coordinates of each axis point obtained from a common discrete image-skeleton algorithm. The procedure is applied to the central axis traversing the vascular branch of the cerebral system, reconstructed from diagnostic images, using the local intensity values identified in adjacent voxels. The percentage intensity of the degree of adherence to a specific anatomical tissue acts as an attraction pole when identifying the spatial center on which to place each skeleton point crossing the investigated anatomical structure. Results are reported as the number of vessels identified overall compared to the original reference model. The procedure demonstrates high accuracy in correcting the local coordinates of the central points, permitting precise dimensional measurement of the anatomy under examination. A central axis effectively centered in the region under examination is a fundamental starting point for deducing, with a high margin of accuracy, key geometric and dimensional information that favours the recognition of shape alterations ascribable to the presence of clinical pathologies.
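    The "attraction pole" idea can be caricatured as an intensity-weighted re-centering of each skeleton point. The sketch below is a deliberately simplified stand-in: the window size, toy volume and function name are all invented, not the paper's procedure.

```python
import numpy as np

def recenter(point, volume, radius=2):
    """Shift a skeleton point to the intensity-weighted centroid of its
    local (2*radius+1)^3 neighborhood, so that voxels adhering strongly
    to the target tissue pull the point toward the vessel center."""
    sl = tuple(slice(max(c - radius, 0), c + radius + 1) for c in point)
    patch = volume[sl].astype(float)
    if patch.sum() == 0:
        return point  # no tissue signal nearby; leave the point alone
    idx = np.indices(patch.shape)
    offsets = [(idx[d] * patch).sum() / patch.sum() for d in range(3)]
    return tuple(s.start + o for s, o in zip(sl, offsets))

# Toy volume: one bright voxel offset from a naively placed skeleton point.
vol = np.zeros((9, 9, 9))
vol[4, 5, 5] = 1.0
corrected = recenter((4, 4, 4), vol)  # pulled toward the bright voxel
```

    Repeating this correction along the whole axis removes the stepped, voxel-aligned placement that discrete skeleton algorithms produce.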

    Document preprocessing and fuzzy unsupervised character classification

    This dissertation presents document preprocessing and fuzzy unsupervised character classification for automatically reading daily-received office documents that have complex layout structures, such as multiple columns and mixed-mode content of text, graphics and half-tone pictures. First, block segmentation is performed using a simple two-step run-length smoothing to decompose a document into single-mode blocks. Next, block classification is performed using clustering rules to assign each block to one of several types, such as text, horizontal or vertical lines, graphics, and pictures. The mean white-to-black transition is shown to be an invariant for textual blocks and is useful for block discrimination. A fuzzy model for unsupervised character classification is designed to improve the robustness, correctness, and speed of the character recognition system. The classification procedure is divided into two stages. The first stage separates the characters into seven typographical categories based on the word structures of a text line. The second stage uses pattern matching to classify the characters in each category into a set of fuzzy prototypes based on a nonlinear weighted similarity function. A fuzzy model of unsupervised character classification, which is more natural in the representation of prototypes for character matching, is defined, and the weighted fuzzy similarity measure is explored. The characteristics of the fuzzy model are discussed and used to speed up the classification process. After classification, the character recognition procedure is applied only to the limited set of fuzzy prototypes. To avoid information loss and extra distortion, a topography-based approach is proposed that operates directly on the fuzzy prototypes to extract the skeletons. First, a convolution with a bell-shaped function is performed to obtain a smooth surface.
Second, the ridge points are extracted by rule-based topographic analysis of the structure. Third, a membership function is assigned to ridge points, with values indicating the degree of membership with respect to the skeleton of an object. Finally, the significant ridge points are linked to form skeleton strokes, and cues from eigenvalue variation are used to deal with degradation and preserve connectivity. Experimental results show that our algorithm can reduce the deformation of junction points and correctly extract the whole skeleton even when a character is broken into pieces. For characters that are merged together, the breaking candidates can easily be located by searching for saddle points. A pruning algorithm is then applied at each breaking position. Finally, multiple-context confirmation can be applied to increase the reliability of the breaking hypotheses.
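    The two-step run-length smoothing mentioned above is a classical block-segmentation primitive (RLSA); one direction of it can be sketched as follows (the threshold value and the toy row are illustrative):

```python
import numpy as np

def rls_rows(binary, threshold):
    """Horizontal run-length smoothing: fill white (0) runs no longer
    than `threshold` that lie between black (1) pixels. Applying this
    horizontally and vertically, then combining the results, merges
    characters into candidate single-mode blocks."""
    out = binary.copy()
    for row in out:
        black = np.flatnonzero(row)
        for a, b in zip(black[:-1], black[1:]):
            if b - a - 1 <= threshold:
                row[a:b] = 1
    return out

row = np.array([[0, 1, 0, 0, 1, 0, 0, 0, 0, 1]])
smoothed = rls_rows(row, 2)  # the short gap is bridged, the long one kept
```

    Small inter-character gaps are closed while wide inter-column gaps survive, which is what lets the subsequent clustering rules separate text blocks from other regions.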

    High-throughput phenotyping of above and below ground elements of plants using feature detection, extraction and image analysis techniques

    Plant phenotyping is now being widely used to study and increase the yield of row-crop plants. Phenotyping is defined as the set of observable characteristics of an individual that result from the interaction of its genome with the environment. Therefore, the collection of physical and observable traits is the primary task of any phenotyping study. While current phenotyping methods are painstakingly slow and tedious, advances in digital imagery and computer technology have unlocked new avenues for this arduous task. High-resolution images can now easily be obtained with practically any camera, while improvements in computer technology mean that images can be processed in a shorter time. Phenotyping can generally be classified into two categories: below-ground phenotyping and above-ground phenotyping. Below-ground phenotyping typically pertains to roots or parasites in the soil; such studies examine the root system architecture of a plant or the cause and effect of below-ground parasites. Above-ground phenotyping encompasses a wider variety of traits, including flowers, fruits, leaves and more. This thesis discusses a computational platform for rapid phenotyping of two problems: root phenotyping and maize flower phenotyping. Both phenotyping studies involved collaborative work with a plant science group. The first phenotyping platform was intended for a study of seedling root traits, which offer an opportunity to study the root system architecture of a plant without having to wait for the plant to be fully grown. A framework was developed that takes root images and outputs traits of the plants using image segmentation and graph-based algorithms. The framework can also be extended easily to other kinds of roots.
The input to the framework is just a picture of a root with high contrast to the background, and the program outputs the traits in a simple and easily understandable manner. The ease of use means not only that phenotyping can be done in a time-, cost- and labor-efficient manner, but also that just about anyone can use the program. The next phenotyping platform was intended to extract phenotypic traits of maize tassels. On-field time-series images from two different plantings, comprising nearly four thousand different genotypes, were provided by the Plant Science Institute for the development of the framework. The developed framework identifies the objects of interest (the tassels) and analyzes them using image analysis techniques, deployed on the ISU supercomputer, CyEnce. Utilizing feature detection and extraction along with segmentation methods, the tassel locations can be identified and separated from the background. Then, graph-based techniques and morphological operations are used to extract the various traits of the tassels. By plotting the extracted traits, the growth and development of the maize tassel over time can be seen and further studied. This framework is also easily extendable to other types of above-ground phenotyping. However, because it relies on feature detection, a significantly larger dataset is needed for training the detection algorithm. This thesis illustrates how the combination of high-performance computers, image analysis, and machine learning is ushering in a revolution in the field of agriculture. The fact that computer processing speeds are almost doubling every 18 months provides access to new methods that were not possible before. Just as the landscape of technology is constantly being innovated, phenotyping studies will ensure that the field of agronomy is not left behind.

    A Relaxation Scheme for Mesh Locality in Computer Vision.

    Parallel processing has been considered the key to building the computer systems of the future and has become a mainstream subject in Computer Science. Computer Vision applications are computationally intensive and require parallel approaches to exploit their intrinsic parallelism. This research addresses this problem for low-level and intermediate-level vision problems. The contributions of this dissertation are a unified scheme, based on probabilistic relaxation labeling, that captures the localities of image data, and the use of this scheme to develop efficient parallel algorithms for Computer Vision problems. We begin by investigating the problem of skeletonization. The technique of pattern matching, which exhausts all possible interaction patterns between a pixel and its neighboring pixels, captures the locality of this problem and leads to an efficient One-pass Parallel Asymmetric Thinning Algorithm (OPATA8). The use of 8-distance, or chessboard distance, in this algorithm not only improves the quality of the resulting skeletons but also improves the efficiency of the computation. This new algorithm plays an important role in a hierarchical route-planning system that extracts high-level topological information from cross-country mobility maps, which greatly speeds up route searching over large areas. We generalize the neighborhood interaction description method to include more complicated applications such as edge detection and image restoration. The proposed probabilistic relaxation labeling scheme exploits parallelism by discovering local interactions in neighboring areas and by describing them effectively. The proposed scheme consists of a transformation function and a dictionary construction method. The non-linear transformation function is derived from Markov Random Field theory and efficiently combines evidence from neighborhood interactions. The dictionary construction method provides an efficient way to encode these localities.
A case study applies the scheme to the problem of edge detection. The relaxation step of this edge-detection algorithm greatly reduces noise effects, achieves better edge localization at features such as line ends and corners, and plays a crucial role in refining edge outputs. Experiments on both synthetic and natural images show that our algorithm converges quickly and is robust in noisy environments.
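    A generic probabilistic relaxation labeling update, in the Rosenfeld–Hummel–Zucker style rather than the dissertation's MRF-derived transformation function (the compatibility matrix, 1-D chain neighborhood and initial probabilities are all illustrative):

```python
import numpy as np

def relax_step(p, r):
    """One relaxation update: support q_i(l) = sum over neighbors j of
    sum_l' r[l, l'] * p[j, l'], then p <- p * (1 + q / |q|_max),
    renormalized so each site's label probabilities sum to 1."""
    n = p.shape[0]
    q = np.zeros_like(p)
    for i in range(n):
        for j in (i - 1, i + 1):          # 1-D chain neighborhood
            if 0 <= j < n:
                q[i] += r @ p[j]
    p_new = p * (1 + q / np.abs(q).max())
    return p_new / p_new.sum(axis=1, keepdims=True)

# Two labels; compatibility rewards neighbors that share a label.
r = np.array([[1.0, -1.0], [-1.0, 1.0]])
p = np.array([[0.90, 0.10], [0.60, 0.40], [0.55, 0.45]])
for _ in range(5):
    p = relax_step(p, r)   # ambiguous sites are pulled toward label 0
```

    The update is embarrassingly parallel across sites, since each site reads only its neighbors' current probabilities, which is the locality the dissertation's scheme exploits.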

    Keeping track of worm trackers

    C. elegans is used extensively as a model system in the neurosciences due to its well-defined nervous system. However, the seeming simplicity of this nervous system in anatomical structure and neuronal connectivity, at least compared to higher animals, belies a rich diversity of behaviors. The usefulness of the worm in genome-wide mutagenesis or RNAi screens, where thousands of strains are assessed for phenotype, emphasizes the need for computational methods for automated parameterization of the generated behaviors. In addition, behaviors can be modulated by external cues such as temperature, O2 and CO2 concentrations, and mechanosensory and chemosensory inputs. Different machine vision tools have been developed to aid researchers in their efforts to inventory and characterize defined behavioral “outputs”. Here we aim to provide an overview of different worm-tracking packages and video analysis tools designed to quantify different aspects of locomotion, such as the occurrence of directional changes (turns, omega bends), the curvature of the sinusoidal shape (amplitude, body bend angles), and velocity (speed, backward or forward movement).
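    As an example of the kind of quantification these packages perform, body bend angles can be read off a tracked midline (the skeleton points below are synthetic; real trackers add smoothing, resampling and head/tail ordering):

```python
import numpy as np

def bend_angles(midline):
    """Signed turning angles (degrees) between successive segments of a
    worm midline given as an (N, 2) array of (x, y) skeleton points."""
    seg = np.diff(midline, axis=0)
    ang = np.unwrap(np.arctan2(seg[:, 1], seg[:, 0]))
    return np.degrees(np.diff(ang))

# A synthetic midline with a single 90-degree bend at its midpoint.
worm = np.array([[0, 0], [1, 0], [2, 0], [2, 1], [2, 2]], float)
angles = bend_angles(worm)
```

    Tracking these angles over frames yields the amplitude and frequency of the sinusoidal wave, and an abrupt large total turning indicates an omega bend.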

    The Filament Sensor for Near Real-Time Detection of Cytoskeletal Fiber Structures

    A reliable extraction of filament data from microscopic images is of high interest in the analysis of acto-myosin structures as early morphological markers in mechanically guided differentiation of human mesenchymal stem cells, and for understanding the underlying fiber arrangement processes. In this paper, we propose the Filament Sensor (FS), a fast and robust processing sequence which detects and records location, orientation, length and width for each single filament of an image, and thus allows for the analysis described above. The extraction of these features has previously not been possible with existing methods. We evaluate the performance of the proposed FS in terms of accuracy and speed in comparison to three existing methods, with respect to their limited output. Further, we provide a benchmark dataset of real cell images, along with filaments manually marked by a human expert, as well as simulated benchmark images. The FS clearly outperforms existing methods in terms of computational runtime and filament extraction accuracy. The implementation of the FS and the benchmark database are available as open source.
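    For contrast with the FS's explicit per-filament output, the cruder global alternative it improves on can be sketched with an image structure tensor (a generic technique, not the FS algorithm; the stripe test images are synthetic):

```python
import numpy as np

def fiber_orientation(img):
    """Dominant fiber orientation (degrees in [0, 180)) from the summed
    structure tensor; the fiber axis is perpendicular to the dominant
    image-gradient direction."""
    gy, gx = np.gradient(img.astype(float))
    jxx, jyy, jxy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    theta = 0.5 * np.arctan2(2 * jxy, jxx - jyy)  # gradient orientation
    return (np.degrees(theta) + 90.0) % 180.0

horiz = np.zeros((21, 21)); horiz[10, :] = 1.0   # horizontal stripe
vert = np.zeros((21, 21)); vert[:, 10] = 1.0     # vertical stripe
```

    Such tensor-based estimates give only an aggregate orientation field, which is exactly the "limited output" the FS goes beyond by recording location, length and width per filament.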

    Flaw Simulation in Product Radiographs
