34 research outputs found

    Quantitative characterization of pore structure of several biochars with 3D imaging

    Full text link
    Pore space characteristics of biochars may vary depending on the raw material used and the processing technology. Pore structure has significant effects on the water retention properties of biochar-amended soils. In this work, several biochars were characterized with three-dimensional imaging and image analysis. X-ray computed microtomography was used to image biochars at a resolution of 1.14 μm, and the obtained images were analysed for porosity, pore-size distribution, specific surface area and structural anisotropy. In addition, random walk simulations were used to relate structural anisotropy to diffusive transport. Image analysis showed that a considerable part of the biochar volume consists of pores in the size range relevant to hydrological processes and the storage of plant-available water. Porosity and pore-size distribution were found to depend on the biochar type, and the structural anisotropy analysis showed that the raw material used considerably affects the pore characteristics at the micrometre scale. Therefore, attention should be paid to raw material selection and quality in applications requiring an optimized pore structure. Comment: 16 pages, 4 figures. The final publication is available at Springer via http://dx.doi.org/10.1007/s11356-017-8823-
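
    As a rough illustration of the random-walk step described above (not the authors' implementation), the following sketch estimates direction-resolved diffusive transport from a segmented 3D pore image; the image array, walker count and step rules are assumptions for illustration only.

```python
import numpy as np

def random_walk_msd(pore, n_walkers=1000, n_steps=5000, seed=0):
    """Direction-resolved mean squared displacement of random walkers
    confined to the pore space of a binary 3D image.

    pore : 3D boolean array, True where a voxel belongs to the pore space.
    Returns the per-axis MSD after n_steps; structural anisotropy shows
    up as unequal MSD components.
    """
    rng = np.random.default_rng(seed)
    moves = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                      [0, -1, 0], [0, 0, 1], [0, 0, -1]])
    # start walkers on randomly chosen pore voxels
    pore_idx = np.argwhere(pore)
    start = pore_idx[rng.integers(len(pore_idx), size=n_walkers)]
    pos = start.copy()
    for _ in range(n_steps):
        step = moves[rng.integers(6, size=n_walkers)]
        trial = pos + step
        # reject steps that leave the image or enter solid voxels
        inside = np.all((trial >= 0) & (trial < pore.shape), axis=1)
        ok = inside.copy()
        ok[inside] = pore[tuple(trial[inside].T)]
        pos[ok] = trial[ok]
    return ((pos - start) ** 2).mean(axis=0).astype(float)
```

    The ratio between the per-axis mean squared displacements gives a simple indicator of how the pore structure's anisotropy affects diffusion.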

    Methodology for automatic classification of atypical lymphoid cells from peripheral blood cell images

    Get PDF
    Morphological analysis is the starting point of the diagnostic approach for more than 80% of hematological diseases. However, morphological differentiation among the different types of abnormal lymphoid cells in peripheral blood is a difficult task that requires considerable experience and skill. There are no objective values for defining cytological variables, which sometimes leads to doubts about the correct cell classification in the daily hospital routine. Automated systems exist that can perform an automatic preclassification of normal blood cells, but they fail in the automatic recognition of abnormal lymphoid cells. The general objective of this thesis is to develop a complete methodology to automatically recognize images of normal and reactive lymphocytes, and of several types of neoplastic lymphoid cells circulating in peripheral blood in some mature B-cell neoplasms, using digital image processing methods. This objective follows two directions: (1) with an engineering and mathematical background, transversal methodologies and software tools are developed; and (2) with a view towards clinical laboratory diagnosis, a system prototype is built and validated, whose input is a set of pathological cell images from individual patients and whose output is the automatic classification into one of the groups of pathologies included in the system. This thesis is the evolution of several works, starting with the discrimination between normal lymphocytes and two types of neoplastic lymphoid cells, and ending with the design of a system for the automatic recognition of normal and reactive lymphocytes and five types of neoplastic lymphoid cells. All this work involves the development of a robust segmentation methodology using color clustering, which is able to separate three regions of interest: the cell, the nucleus and the peripheral zone around the cell. A complete lymphoid cell description is developed by extracting features related to size, shape, texture and color. To reduce the complexity of the process, feature selection is performed using information theory. Several classifiers are then implemented to automatically recognize the different types of lymphoid cells. The best classification results are achieved using support vector machines with a radial basis function kernel. The methodology developed, which combines medical, engineering and mathematical backgrounds, is the first step towards the design of a practical hematological diagnosis support tool in the near future.
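
    A minimal sketch of the final classification stage described above (information-theoretic feature selection followed by an RBF-kernel support vector machine), using scikit-learn; the feature matrix, class count and hyperparameters are placeholders, not the descriptors or settings used in the thesis.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# X: one row per cell image with size/shape/texture/colour descriptors,
# y: lymphoid cell class labels (random placeholder data for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))
y = rng.integers(0, 6, size=300)

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=20),    # information-theoretic feature selection
    SVC(kernel="rbf", C=10.0, gamma="scale"),  # RBF-kernel support vector machine
)
print(cross_val_score(clf, X, y, cv=5).mean())
```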

    Document preprocessing and fuzzy unsupervised character classification

    Get PDF
    This dissertation presents document preprocessing and fuzzy unsupervised character classification for automatically reading daily-received office documents that have complex layout structures, such as multiple columns and mixed-mode contents of text, graphics and half-tone pictures. First, block segmentation is performed using a simple two-step run-length smoothing to decompose a document into single-mode blocks. Next, block classification is performed using clustering rules to classify each block as text, horizontal or vertical lines, graphics, or pictures. The mean white-to-black transition is shown to be an invariant for textual blocks and is useful for block discrimination. A fuzzy model for unsupervised character classification is designed to improve the robustness, correctness, and speed of the character recognition system. The classification procedure is divided into two stages. The first stage separates the characters into seven typographical categories based on the word structures of a text line. The second stage uses pattern matching to classify the characters in each category into a set of fuzzy prototypes based on a nonlinear weighted similarity function. A fuzzy model of unsupervised character classification, which is more natural in the representation of prototypes for character matching, is defined and the weighted fuzzy similarity measure is explored. The characteristics of the fuzzy model are discussed and used to speed up the classification process. After classification, the character recognition procedure is applied only to the limited set of fuzzy prototypes. To avoid information loss and extra distortion, a topography-based approach is proposed that operates directly on the fuzzy prototypes to extract their skeletons. First, a convolution with a bell-shaped function is performed to obtain a smooth surface. Second, ridge points are extracted by rule-based topographic analysis of the structure. Third, a membership function is assigned to the ridge points, with values indicating their degree of membership with respect to the skeleton of an object. Finally, the significant ridge points are linked to form the strokes of the skeleton, and cues from eigenvalue variation are used to deal with degradation and preserve connectivity. Experimental results show that our algorithm can reduce the deformation of junction points and correctly extract the whole skeleton even when a character is broken into pieces. For characters that are merged together, the breaking candidates can easily be located by searching for saddle points. A pruning algorithm is then applied at each breaking position. Finally, multiple context confirmation can be applied to increase the reliability of the breaking hypotheses.
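
    A minimal sketch of the run-length smoothing idea behind the block segmentation described above, assuming a binary page image where True marks ink; the gap thresholds and the horizontal-then-vertical combination are illustrative choices, not the dissertation's exact procedure.

```python
import numpy as np

def smooth_runs_1d(row, max_gap):
    """Fill white gaps shorter than max_gap between black pixels in one row."""
    out = row.copy()
    black = np.flatnonzero(row)
    if black.size < 2:
        return out
    for start, gap in zip(black[:-1], np.diff(black)):
        if 1 < gap <= max_gap:
            out[start + 1:start + gap] = True
    return out

def rlsa(img, h_gap=30, v_gap=20):
    """Two-step run-length smoothing: horizontal pass, vertical pass,
    then the AND of both, merging characters into candidate blocks."""
    horiz = np.array([smooth_runs_1d(r, h_gap) for r in img])
    vert = np.array([smooth_runs_1d(c, v_gap) for c in img.T]).T
    return horiz & vert
```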

    A survey of computer uses in music

    Full text link
    This thesis covers research into the mathematical basis inherent in music, including a review of projects related to optical character recognition (OCR) of musical symbols. Research on fractals is described, in which new pieces are created by assigning pitches to numbers. Existing musical pieces can be taken apart and reassembled, creating new ideas for composers. Musical notation understanding is covered, and its role in enabling a computer to recognize a music sheet for editing and reproduction purposes is explained. The first phase of a musical OCR system was created in this thesis: the recognition of staff lines in a good-quality image. Modifications will need to be made to handle noise and tilted images that may result from scanning.
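
    A minimal sketch of a staff-line recognition step for a clean, untilted binary scan: rows whose black-pixel count exceeds a threshold are grouped into staff lines. The threshold and the grouping rule are assumptions for illustration, not the thesis's exact method.

```python
import numpy as np

def find_staff_lines(binary_img, frac=0.5):
    """Return row indices of likely staff lines in a clean binary score.

    binary_img : 2D boolean array, True where the pixel is black.
    frac       : a row is a staff-line candidate if its black-pixel count
                 exceeds frac * image width (illustrative threshold).
    """
    row_counts = binary_img.sum(axis=1)
    candidates = np.flatnonzero(row_counts > frac * binary_img.shape[1])
    lines, group = [], []
    for r in candidates:
        if group and r != group[-1] + 1:
            lines.append(int(np.mean(group)))  # one line per run of adjacent rows
            group = []
        group.append(r)
    if group:
        lines.append(int(np.mean(group)))
    return lines
```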

    Visualization and analysis of diffusion tensor fields

    Get PDF
    Technical report. The power of medical imaging modalities to measure and characterize biological tissue is amplified by visualization and analysis methods that help researchers to see and understand the structures within their data. Diffusion tensor magnetic resonance imaging can measure microstructural properties of biological tissue, such as the coherent linear organization of the white matter of the central nervous system or the fibrous texture of muscle tissue. This dissertation describes new methods for visualizing and analyzing the salient structure of diffusion tensor datasets. Glyphs from superquadric surfaces and textures from reaction-diffusion systems facilitate inspection of data properties and trends. Fiber tractography based on vector-tensor multiplication allows major white matter pathways to be visualized. The generalization of direct volume rendering to tensor data allows large-scale structures to be shaded and rendered. Finally, a mathematical framework for analyzing the derivatives of tensor values, in terms of shape and orientation change, enables analytical shading in volume renderings and a method of feature detection important for feature-preserving filtering of tensor fields. Together, this combination of methods enhances the ability of diffusion tensor imaging to provide insight into the local and global structure of biological tissue.
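
    A minimal sketch of a fiber-tracking loop, shown as a simplified stand-in for the vector-tensor multiplication approach mentioned above: it steps along the principal eigenvector of the local diffusion tensor and stops at low anisotropy or the volume boundary. Nearest-neighbour tensor lookup, the step size and the FA threshold are simplifying assumptions, not the dissertation's method.

```python
import numpy as np

def track_fiber(tensor_field, seed, step=0.5, max_steps=1000, fa_min=0.15):
    """Trace one streamline through a diffusion tensor volume.

    tensor_field : array (X, Y, Z, 3, 3) of diffusion tensors.
    seed         : starting position in voxel coordinates.
    """
    pos = np.asarray(seed, dtype=float)
    path = [pos.copy()]
    prev_dir = None
    for _ in range(max_steps):
        idx = tuple(np.round(pos).astype(int))
        if any(i < 0 or i >= s for i, s in zip(idx, tensor_field.shape[:3])):
            break                                    # left the volume
        evals, evecs = np.linalg.eigh(tensor_field[idx])
        md = evals.mean()
        fa = np.sqrt(1.5 * ((evals - md) ** 2).sum() / (evals ** 2).sum())
        if fa < fa_min:
            break                                    # anisotropy too low to trust
        direction = evecs[:, -1]                     # principal eigenvector
        if prev_dir is not None and direction @ prev_dir < 0:
            direction = -direction                   # keep a consistent heading
        pos = pos + step * direction
        prev_dir = direction
        path.append(pos.copy())
    return np.array(path)
```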

    Genetic programming applied to morphological image processing

    Get PDF
    This thesis presents three approaches to the automatic design of algorithms for the processing of binary images based on the Genetic Programming (GP) paradigm. In the first approach, the algorithms are designed using the basic Mathematical Morphology (MM) operators, i.e. erosion and dilation, with a variety of Structuring Elements (SEs). GP is used to design algorithms that convert a binary image into another containing just a particular characteristic of interest. In the study, we tested two similarity fitness functions, training sets with different numbers of elements, and different sizes of the training images over three different objectives. The results of the first approach showed some success in the evolution of MM algorithms but also identified problems with the amount of computational resources the method required. The second approach uses Sub-Machine-Code GP (SMCGP) and bitwise operators in an attempt to speed up the evolution of the algorithms and to make them both feasible and effective. The SMCGP approach succeeded in speeding up the computation but did not improve the quality of the obtained algorithms. The third approach combines logical and morphological operators in an attempt to improve the quality of the automatically designed algorithms. The results obtained provide empirical evidence that the evolution of high-quality MM algorithms using GP is possible and that this technique has a broad potential that should be explored further. The thesis also includes an analysis of the potential of GP and other Machine Learning techniques for solving the general problem of Signal Understanding by means of exploring Mathematical Morphology.
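
    A minimal sketch of one building block of the setup described above: applying a candidate sequence of erosions and dilations with chosen structuring elements and scoring the result against a target image with a pixel-wise similarity fitness. The flat program representation and the fitness are illustrative simplifications of a GP tree and of the similarity functions tested in the thesis.

```python
import numpy as np
from scipy import ndimage

# A candidate "program" is a list of (operator, structuring element) pairs;
# this is a simplified stand-in for an evolved GP tree.
SE_CROSS = ndimage.generate_binary_structure(2, 1)   # 3x3 cross
SE_SQUARE = np.ones((3, 3), dtype=bool)              # 3x3 square

def apply_program(program, image):
    out = image
    for op, se in program:
        if op == "erode":
            out = ndimage.binary_erosion(out, structure=se)
        elif op == "dilate":
            out = ndimage.binary_dilation(out, structure=se)
    return out

def fitness(program, image, target):
    """Pixel-wise similarity between the program output and the target image."""
    out = apply_program(program, image)
    return (out == target).mean()
```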

    Visual Analysis of Variability and Features of Climate Simulation Ensembles

    Get PDF
    This PhD thesis is concerned with the visual analysis of time-dependent scalar field ensembles as they occur in climate simulations. Modern climate projections consist of multiple simulation runs (ensemble members) that vary in parameter settings and/or initial values, which leads to variations in the resulting simulation data. The goal of ensemble simulations is to sample the space of possible futures under the given climate model and to provide quantitative information about uncertainty in the results. The analysis of such data is challenging because, apart from the spatiotemporal data, variability also has to be analyzed and communicated. This thesis presents novel techniques for analyzing climate simulation ensembles visually. A central question is how the data can be aggregated with minimal information loss. To address this question, a key technique applied in several places in this work is clustering. The first part of the thesis addresses the challenge of finding clusters in the ensemble simulation data. Various distance metrics lend themselves to the comparison of scalar fields; these are explored theoretically and practically. A visual analytics interface allows the user to interactively explore and compare multiple parameter settings for the clustering and to investigate the resulting clusters, i.e. prototypical climate phenomena. A central contribution here is the development of design principles for analyzing variability in decadal climate simulations, which has led to a visualization system centered around the new Clustering Timeline. This is a variant of a Sankey diagram that uses clustering results to communicate climatic states over time, coupled with ensemble member agreement. It can reveal several interesting properties of the dataset, such as how many inherently similar groups the ensemble can be divided into at any given time, whether the ensemble diverges overall, whether there are different phases over time, possible periodicity, or outliers. The Clustering Timeline is also used to compare multiple climate simulation models and assess their performance. The Hierarchical Clustering Timeline is an advanced version of the above. It introduces the concept of a cluster hierarchy that groups the dataset, from the whole ensemble down to the individual static scalar fields, into clusters of various sizes and densities, recording the nesting relationships between them. A further contribution of this work to visualization research is an investigation of how a hierarchical clustering of time-dependent scalar fields can be used in practice to analyze the data. To this end, a system of views linked through various interaction possibilities is proposed. The main advantage of the system is that a dataset can be inspected at an arbitrary level of detail without having to recompute a clustering with different parameters. Interesting branches of the simulation can be expanded to reveal smaller differences in critical clusters, or folded to show only a coarse representation of the less interesting parts of the dataset. The last building block of the suite of visual analysis methods developed for this thesis aims at a robust, largely automatic detection and tracking of certain features in a scalar field ensemble. Techniques are presented that I found can identify and track super- and sub-level sets, and I derive "centers of action" from these sets, which mark the locations of extremal climate phenomena that govern the weather (e.g. the Icelandic Low and the Azores High).
The thesis also presents visual and quantitative techniques for evaluating the temporal change of the positions of these centers; such a displacement would likely manifest in changes in the weather. In a preliminary analysis with my collaborators, we indeed observed changes in the loci of the centers of action in a simulation with increased greenhouse gas concentration compared to pre-industrial concentration levels.
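
    A minimal sketch of the aggregation idea behind the Clustering Timeline: at each time step, every ensemble member's scalar field is flattened to a vector and the members are clustered; tracking how members move between clusters over time is what the timeline visualizes. K-means with Euclidean distance is an illustrative stand-in for the distance metrics and clustering choices explored in the thesis.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_timeline(ensemble, n_clusters=4, seed=0):
    """Cluster ensemble members independently at every time step.

    ensemble : array (n_members, n_timesteps, ny, nx) of scalar fields.
    Returns labels of shape (n_timesteps, n_members); how members move
    between clusters over time is what a Sankey-style timeline shows.
    """
    n_members, n_steps = ensemble.shape[:2]
    labels = np.empty((n_steps, n_members), dtype=int)
    for t in range(n_steps):
        fields = ensemble[:, t].reshape(n_members, -1)   # one vector per member
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
        labels[t] = km.fit_predict(fields)
    return labels
```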

    Advances in Image Processing, Analysis and Recognition Technology

    Get PDF
    For many decades, researchers have been trying to make computer analysis of images as effective as human vision. For this purpose, many algorithms and systems have been created. The whole process covers various stages, including image processing, representation and recognition. The results of this work can be applied to many computer-assisted areas of everyday life. They improve particular activities and provide handy tools, which are sometimes only for entertainment but quite often significantly increase our safety. In fact, the range of practical applications of image processing algorithms is particularly wide. Moreover, the rapid growth of computational power and efficiency has allowed for the development of more sophisticated and effective algorithms and tools. Although significant progress has been made so far, many issues still remain, resulting in the need for the development of novel approaches.

    Some Implications of Constraints in Phasefield Models

    Get PDF
    This work investigates several aspects that arise from constraints in phase-field modelling. On the one hand, within the framework of a pure phase-field model, the influence of the frequently used obstacle potential is considered with respect to the discretization, along with algorithmic aspects of using projection-based algorithms in non-weighted and weighted mobility formulations. On the other hand, "Grandchem"-type models are discussed in chemical, mechanical and chemo-mechanical contexts, in which a given phase-independent quantity within multiphase regions is interpreted as a weighted mean of the corresponding quantities within the individual phases. The additional degrees of freedom introduced in this way allow, through a judicious choice of the phase-specific values as functions of the remaining parameters, an improved modelling by which the influence of the width of the transition regions on the results can be reduced considerably. In many cases, the usually directly physically motivated choice of the phase-specific quantities can also be interpreted as the solution of a parametrized minimization or maximization problem under the constraint of the prescribed mean value. It is investigated here which consequences follow from this interpretation and why the interplay of this local extremal problem with the global variational approach of the phase-field model is of decisive importance.
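
    The constraint described above can be written compactly; the notation below is an illustrative formalization, not necessarily that of the thesis: a phase-independent quantity c is split into phase-specific values c_α that are tied together by the weighted mean and fixed by a constrained minimization of the interpolated free energy.

```latex
% Illustrative formalization: the phase-specific values c_alpha are extra
% degrees of freedom, fixed by minimizing the interpolated free energy
% subject to the prescribed weighted mean (assumed notation).
\begin{align}
  c &= \sum_{\alpha} \phi_{\alpha}\, c_{\alpha},
  \qquad \sum_{\alpha} \phi_{\alpha} = 1, \\
  (c_{1},\dots,c_{N}) &= \operatorname*{arg\,min}_{\{c_{\alpha}\}}
  \sum_{\alpha} \phi_{\alpha}\, f_{\alpha}(c_{\alpha})
  \quad \text{subject to} \quad \sum_{\alpha} \phi_{\alpha}\, c_{\alpha} = c .
\end{align}
```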