
    Diszkrét módszerek a digitális képfeldolgozásban = Discrete methods in digital image processing

    We conducted successful research in several areas during the reporting period. Neighborhood sequences. Generalizing earlier results, we characterized the metrical ultimately periodic sequences. We determined the octagonal metric that best approximates the Euclidean metric from above, answering an old problem of Rosenfeld and Pfaltz. We described several properties of octagonal metrical and general neighborhood sequences, and gave an algorithm to construct a shortest path determined by a sequence. We elaborated applications of neighborhood sequences in various image processing procedures, and carried out similar research on the triangular, hexagonal, BCC, and FCC grids. Discrete tomography. We proved that the subsets of any digital set are uniquely determined by their line sums taken in four appropriately chosen directions. Skeletonization. Building on one of our earlier results, we developed a sequence-based medial axis transformation. Medical image processing and virtual surgery planning. We took part in building a software system that supports clinical diagnostics and surgery planning, and implemented many of our results efficiently. Multi-modal human-machine interaction. We developed an SVM-based method that classifies facial emotions into the five basic classes of the literature with an error of at most 25%. Our goal is to build a virtual chess player that can also recognize facial gestures.
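The octagonal metric mentioned above arises from alternating 4- and 8-neighborhood steps on the square grid. As a rough illustration (not the project's algorithm; the function name and the layered-BFS approach are this sketch's own), the length of a shortest B-path under a periodic neighborhood sequence can be computed by expanding a frontier set one neighborhood at a time:

```python
# Illustrative sketch: distance between two pixels on the square grid
# under a periodic neighborhood sequence. sequence[i] = 1 means a
# 4-neighbor (city-block) step, 2 an 8-neighbor (chessboard) step;
# the sequence is repeated cyclically.

STEPS = {
    1: [(1, 0), (-1, 0), (0, 1), (0, -1)],
    2: [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)],
}

def ns_distance(p, q, sequence):
    """Length of a shortest B-path from p to q, by layered BFS."""
    frontier = {p}
    dist = 0
    while q not in frontier:
        step = sequence[dist % len(sequence)]  # neighborhood used at this step
        frontier = {(x + dx, y + dy)
                    for (x, y) in frontier
                    for (dx, dy) in STEPS[step]}
        dist += 1
    return dist
```

For example, `ns_distance((0, 0), (2, 2), [1, 2])` evaluates to 3, the octagonal distance induced by the alternating sequence {1, 2}; the frontier-set expansion is only practical for small distances.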

    Significantly improved precision of cell migration analysis in time-lapse video microscopy through use of a fully automated tracking system

    Background: Cell motility is a critical parameter in many physiological as well as pathophysiological processes. In time-lapse video microscopy, manual cell tracking remains the most common method of analyzing the migratory behavior of cell populations. In addition to being labor-intensive, this method is susceptible to user-dependent errors regarding the selection of "representative" subsets of cells and the manual determination of precise cell positions. Results: We have quantitatively analyzed these error sources, demonstrating that manual tracking of pancreatic cancer cells led to migration rates being miscalculated by up to 410%. To provide objective measurements of cell migration rates, we employed multi-target tracking technologies commonly used in radar applications to develop a fully automated cell identification and tracking system suitable for high-throughput screening of video sequences of unstained living cells. Conclusion: We demonstrate that our automatic multi-target tracking system identifies cell objects, follows individual cells, and computes migration rates with high precision, clearly outperforming manual procedures.
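The core of any such tracker is frame-to-frame data association. The abstract does not detail the method, so the following is only a loose, simplified sketch (function name and distance threshold are made up): a greedy nearest-neighbor matcher between existing tracks and new detections, where a real multi-target system would instead use globally optimal assignment plus motion prediction:

```python
import math

def greedy_associate(tracks, detections, max_dist=20.0):
    """Greedily match each existing track to its nearest detection.

    tracks, detections: lists of (x, y) positions. Returns a dict
    mapping track index -> detection index, or None when no detection
    lies within max_dist of the track.
    """
    # All candidate pairings, cheapest first.
    pairs = sorted(
        (math.dist(t, d), ti, di)
        for ti, t in enumerate(tracks)
        for di, d in enumerate(detections)
    )
    assigned_t, assigned_d = set(), set()
    matches = {ti: None for ti in range(len(tracks))}
    for cost, ti, di in pairs:
        if cost > max_dist:
            break  # remaining pairs are even farther apart
        if ti not in assigned_t and di not in assigned_d:
            matches[ti] = di
            assigned_t.add(ti)
            assigned_d.add(di)
    return matches
```

Greedy matching can make globally suboptimal choices when cells pass close to each other, which is precisely where assignment-based multi-target trackers pay off.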

    Distance Transforms: Academics Versus Industry


    Document preprocessing and fuzzy unsupervised character classification

    This dissertation presents document preprocessing and fuzzy unsupervised character classification for automatically reading daily-received office documents that have complex layout structures, such as multiple columns and mixed-mode contents of text, graphics, and half-tone pictures. First, the block segmentation algorithm is performed, based on a simple two-step run-length smoothing, to decompose a document into single-mode blocks. Next, block classification is performed based on clustering rules to classify each block into one of several types, such as text, horizontal or vertical lines, graphics, and pictures. The mean white-to-black transition is shown to be an invariant for textual blocks, and is useful for block discrimination. A fuzzy model for unsupervised character classification is designed to improve the robustness, correctness, and speed of the character recognition system. The classification procedure is divided into two stages. The first stage separates the characters into seven typographical categories based on the word structures of a text line. The second stage uses pattern matching to classify the characters in each category into a set of fuzzy prototypes based on a nonlinear weighted similarity function. A fuzzy model of unsupervised character classification, which is more natural in the representation of prototypes for character matching, is defined, and the weighted fuzzy similarity measure is explored. The characteristics of the fuzzy model are discussed and used to speed up the classification process. After classification, the character recognition procedure is applied only to the limited set of fuzzy prototypes.

    To avoid information loss and extra distortion, a topography-based approach is proposed that operates directly on the fuzzy prototypes to extract the skeletons. First, a convolution with a bell-shaped function is performed to obtain a smooth surface. Second, the ridge points are extracted by rule-based topographic analysis of the structure. Third, a membership function is assigned to the ridge points, with values indicating the degree of membership with respect to the skeleton of an object. Finally, the significant ridge points are linked to form the strokes of the skeleton, and clues from eigenvalue variation are used to deal with degradation and preserve connectivity. Experimental results show that our algorithm reduces the deformation of junction points and correctly extracts the whole skeleton even when a character is broken into pieces. For characters that are merged together, the breaking candidates can be easily located by searching for the saddle points. A pruning algorithm is then applied at each breaking position. Finally, multiple context confirmation can be applied to increase the reliability of the breaking hypotheses.
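The two-step run-length smoothing used for block segmentation can be sketched as follows. This is the common RLSA scheme (smooth rows, smooth columns, AND the results); the thresholds and function names are illustrative, not the dissertation's:

```python
def rls_smooth(row, threshold):
    """1-D run-length smoothing: flip runs of 0s (white) of length
    <= threshold to 1s (black), linking nearby black pixels."""
    out = list(row)
    run_start = None
    for i, v in enumerate(row + [1]):  # sentinel closes a trailing run
        if v == 0 and run_start is None:
            run_start = i
        elif v == 1 and run_start is not None:
            if i - run_start <= threshold:
                for j in range(run_start, i):
                    out[j] = 1
            run_start = None
    return out[:len(row)]

def rlsa(image, h_thresh, v_thresh):
    """Two-step run-length smoothing: smooth rows and columns
    separately, then combine the two results with a logical AND."""
    horiz = [rls_smooth(r, h_thresh) for r in image]
    cols = list(map(list, zip(*image)))
    vert = list(map(list, zip(*[rls_smooth(c, v_thresh) for c in cols])))
    return [[a & b for a, b in zip(hr, vr)] for hr, vr in zip(horiz, vert)]
```

In practice the horizontal threshold is chosen much larger than the vertical one, so that words within a line merge into a single block while separate lines stay apart.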

    Extraction of Unfoliaged Trees from Terrestrial Image Sequences

    This thesis presents a generative statistical approach for the fully automatic three-dimensional (3D) extraction and reconstruction of unfoliaged deciduous trees from wide-baseline image sequences. Tree models improve the realism of 3D geoinformation systems (GIS) by adding a natural touch. Unfoliaged trees are, however, difficult to reconstruct from images due to partially weak contrast, background clutter, occlusions, and particularly the possibly varying order of branches in images from different viewpoints. The proposed approach combines generative modeling by L-systems and statistical maximum a posteriori (MAP) estimation for the extraction of the 3D branching structure of trees. Background estimation is conducted by means of mathematical (gray-scale) morphology as the basis for generative modeling. A Gaussian likelihood function based on intensity differences is employed to evaluate the hypotheses. A mechanism has been devised to control the sampling sequence of multiple parameters in the Markov chain, considering their characteristics and their performance in the previous step. After the extraction of the first level of branches, a tree is classified into one of three typical branching types, and more specific L-system production rules are used accordingly. Generic prior distributions for the parameters are refined based on already extracted branches in a Bayesian framework and integrated into the MAP estimation. In this way, most of the branching structure apart from tiny twigs can be reconstructed. Results are presented in the form of VRML (Virtual Reality Modeling Language) models, demonstrating the potential of the approach as well as its current shortcomings.
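A minimal illustration of the L-system rewriting that underlies the generative model (the production rules in the examples below are invented for illustration and are not the thesis' branching rules):

```python
def expand(axiom, rules, depth):
    """Apply L-system production rules `depth` times in parallel.

    rules maps a symbol to its replacement string; symbols without a
    rule are copied unchanged. In turtle-graphics convention, F draws
    a branch segment and the brackets [ ] push/pop a branching point.
    """
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s
```

For instance, `expand("F", {"F": "F[+F]F"}, 1)` yields `"F[+F]F"`; in the thesis' setting, the choice of rules and their parameters is what the MAP estimation decides from image evidence.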

    Matching hierarchical structures for shape recognition

    In this thesis we aim to develop a framework for clustering trees and representing and learning a generative model of graph structures from a set of training samples. The approach is applied to the problem of the recognition and classification of shape abstracted in terms of its morphological skeleton. We make five contributions. The first is an algorithm to approximate tree edit-distance using relaxation labeling. The second is the introduction of the tree union, a representation capable of representing the modes of structural variation present in a set of trees. The third is an information theoretic approach to learning a generative model of tree structures from a training set. While the skeletal abstraction of shape was chosen mainly as an experimental vehicle, we nonetheless make some contributions to the fields of skeleton extraction and its graph representation. In particular, our fourth contribution is the development of a skeletonization method that corrects curvature effects in the Hamilton-Jacobi framework, improving its localization and noise sensitivity. Finally, we propose a shape-measure capable of characterizing shapes abstracted in terms of their skeleton. This measure has a number of interesting properties. In particular, it varies smoothly as the shape is deformed and can be easily computed using the presented skeleton extraction algorithm. Each chapter presents an experimental analysis of the proposed approaches applied to shape recognition problems.
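For orientation, the quantity that the relaxation-labeling algorithm approximates is the ordered tree edit distance. A direct memoized recursion over rightmost-root decompositions, with unit costs, looks as follows; it is adequate only for small trees and is not the thesis' method (Zhang-Shasha-style dynamic programming is the usual efficient route):

```python
from functools import lru_cache

# A tree is a nested tuple (label, (child, child, ...)); a forest is a
# tuple of trees. Unit costs for insert, delete, and relabel.

def size(forest):
    """Number of nodes in a forest."""
    return sum(1 + size(t[1]) for t in forest)

@lru_cache(maxsize=None)
def fdist(f1, f2):
    """Edit distance between two ordered forests."""
    if not f1 and not f2:
        return 0
    if not f1:
        return size(f2)          # insert everything in f2
    if not f2:
        return size(f1)          # delete everything in f1
    (l1, c1), (l2, c2) = f1[-1], f2[-1]
    return min(
        fdist(f1[:-1] + c1, f2) + 1,   # delete rightmost root of f1
        fdist(f1, f2[:-1] + c2) + 1,   # insert rightmost root of f2
        # match the two rightmost roots (relabel if labels differ)
        fdist(c1, c2) + fdist(f1[:-1], f2[:-1]) + (l1 != l2),
    )

def tree_edit_distance(t1, t2):
    return fdist((t1,), (t2,))
```

For example, the distance between `("a", (("b", ()), ("c", ())))` and `("a", (("b", ()),))` is 1: a single deletion of the `c` leaf.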

    Image Analysis of the Carotid Artery: A (Semi-)Automatic Approach

    In this thesis we presented several (semi-)automatic image processing techniques for analyzing the carotid artery wall and carotid artery plaque in MRI and ultrasound. The presented methods include image segmentation, registration, centerline extraction, and quantification.

    Washington University Senior Undergraduate Research Digest (WUURD), Spring 2018

    From the Washington University Office of Undergraduate Research Digest (WUURD), Vol. 13, 05-01-2018. Published by the Office of Undergraduate Research. Joy Zalis Kiefer, Director of Undergraduate Research and Associate Dean in the College of Arts & Sciences.

    Computational Geometric and Algebraic Topology

    Computational topology is a young, emerging field of mathematics that seeks out practical algorithmic methods for solving complex and fundamental problems in geometry and topology. It draws on a wide variety of techniques from across pure mathematics (including topology, differential geometry, combinatorics, algebra, and discrete geometry), as well as applied mathematics and theoretical computer science. In turn, solutions to these problems have a wide-ranging impact: already they have enabled significant progress in the core area of geometric topology, introduced new methods in applied mathematics, and yielded new insights into the role that topology has to play in fundamental problems surrounding computational complexity. At least three significant branches have emerged in computational topology: algorithmic 3-manifold and knot theory, persistent homology, and surfaces and graph embeddings. These branches have emerged largely independently, but it is clear that they have much to offer each other. The goal of this workshop was to take the first significant step in bringing these three areas together, sharing ideas in depth, and pooling our expertise in approaching some of the major open problems in the field.