335 research outputs found

    Morphology for matrix data: ordering versus PDE-based approach

    Matrix fields are becoming increasingly important in digital imaging. In order to perform shape analysis, enhancement or segmentation of such matrix fields, appropriate image processing tools must be developed. This paper extends fundamental morphological operations to the setting of matrices, in the literature sometimes referred to as tensors even though matrices are only rank-two tensors. The goal of this paper is to introduce and explore two approaches to mathematical morphology for matrix-valued data: one is based on a partial ordering, the other utilises nonlinear partial differential equations (PDEs). We start by presenting definitions for the maximum and minimum of a set of symmetric matrices, since these notions are the cornerstones of the morphological operations. Our first approach is based on the Loewner ordering for symmetric matrices, in contrast to unsatisfactory component-wise techniques. The notions of maximum and minimum deduced from the Loewner ordering satisfy desirable properties such as rotation invariance, preservation of positive semidefiniteness, and continuous dependence on the input data. These properties are also shared by the dilation and erosion processes governed by a novel nonlinear system of PDEs that we propose for our second approach to morphology on matrix data. These PDEs are a suitable counterpart of the nonlinear equations known from scalar continuous-scale morphology. Both approaches incorporate information simultaneously from all matrix channels rather than treating them independently. In experiments on artificial and real medical positive-semidefinite matrix-valued images we contrast the resulting notions of erosion, dilation, opening, closing, top hats, morphological derivatives, and shock filters stemming from these two alternatives. Using a ball-shaped structuring element we illustrate the properties and performance of our ordering- and PDE-driven morphological operators for matrix-valued data
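The Loewner ordering A ⪰ B (meaning A − B is positive semidefinite) that underpins these operators, and the rotation-invariance failure of the naive component-wise alternative, can be illustrated with a short numpy sketch (the matrices and the rotation angle are chosen purely for illustration):

```python
import numpy as np

def loewner_geq(a, b, tol=1e-12):
    """Test the Loewner partial order a >= b, i.e. whether a - b is
    positive semidefinite (all eigenvalues non-negative)."""
    return np.all(np.linalg.eigvalsh(a - b) >= -tol)

# Two symmetric positive semidefinite 2x2 matrices.
a = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([[1.0, 0.5], [0.5, 1.0]])

# The Loewner order is only partial: neither matrix need dominate.
print(loewner_geq(a, b), loewner_geq(b, a))  # False False (incomparable)

# The component-wise maximum is not rotation invariant: rotating the
# inputs, taking the maximum, and rotating back generally differs from
# the direct component-wise maximum.
theta = np.pi / 6
r = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
direct = np.maximum(a, b)
rotated = r.T @ np.maximum(r @ a @ r.T, r @ b @ r.T) @ r
print(np.allclose(direct, rotated))  # False
```

This is only the ordering test; the paper's actual maximum of a set of matrices is a dedicated construction that stays consistent with this order.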

    Mathematical morphology for tensor data induced by the Loewner ordering in higher dimensions

    Positive semidefinite matrix fields are becoming increasingly important in digital imaging. One reason for this tendency is the introduction of diffusion tensor magnetic resonance imaging (DT-MRI). In order to perform shape analysis, enhancement or segmentation of such tensor fields, appropriate image processing tools must be developed. This paper extends fundamental morphological operations to the matrix-valued setting. We start by presenting novel definitions for the maximum and minimum of a set of matrices, since these notions lie at the heart of the morphological operations. In contrast to naive approaches like the component-wise maximum or minimum of the matrix channels, our approach is based on the Loewner ordering for symmetric matrices. The notions of maximum and minimum deduced from this partial ordering satisfy desirable properties such as rotation invariance, preservation of positive semidefiniteness, and continuous dependence on the input data. We introduce erosion, dilation, opening, closing, top hats, morphological derivatives, shock filters, and mid-range filters for positive semidefinite matrix-valued images. These morphological operations incorporate information simultaneously from all matrix channels rather than treating them independently. Experiments on DT-MRI images with ball- and rod-shaped structuring elements illustrate the properties and performance of our morphological operators for matrix-valued data
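For reference, in the scalar case dilation is the supremum of grey values over the structuring element and erosion its infimum; the matrix-valued operators above replace that supremum/infimum by the Loewner-based maximum/minimum. A toy scalar sketch (naive loops, border pixels simply use the in-image part of the structuring element; purely illustrative):

```python
import numpy as np

def dilate(img, se):
    """Flat grey-scale dilation: at each pixel, the supremum of the
    image over the structuring element 'se' (a list of (dy, dx) offsets)."""
    h, w = img.shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = max(img[y + dy, x + dx] for dy, dx in se
                            if 0 <= y + dy < h and 0 <= x + dx < w)
    return out

cross = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]  # a tiny "ball"
img = np.zeros((5, 5))
img[2, 2] = 1.0
d = dilate(img, cross)      # the single bright pixel grows into a cross
e = -dilate(-img, cross)    # erosion by duality: the pixel vanishes
print(int(d.sum()), int(e.sum()))  # 5 0
```

In the matrix-valued setting, `max` over scalars is replaced by the paper's rotation-invariant maximum of a set of symmetric matrices.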

    Graph-based Data Modeling and Analysis for Data Fusion in Remote Sensing

    Hyperspectral imaging provides increased sensitivity and discrimination over traditional imaging methods by combining standard digital imaging with spectroscopic methods. For each pixel in a hyperspectral image (HSI), a continuous spectrum is sampled as the spectral reflectance/radiance signature to facilitate identification of ground cover and surface material. This abundant spectral knowledge allows all available information in the data to be mined. These qualities make hyperspectral imaging suitable for wide-ranging applications such as mineral exploration, agricultural monitoring, and ecological surveillance. Processing massive high-dimensional HSI datasets is a challenge, since many data processing techniques have a computational complexity that grows exponentially with the dimension. Moreover, an HSI dataset may contain a limited number of degrees of freedom due to the high correlations between data points and among the spectra. On the other hand, relying solely on the sampled spectrum of an individual HSI data point may produce inaccurate results due to the mixed nature of raw HSI data, such as mixed pixels and optical interference. Fusion strategies are widely adopted in data processing to achieve better performance, especially in classification and clustering. There are three main types of fusion strategies: low-level data fusion, intermediate-level feature fusion, and high-level decision fusion. Low-level data fusion combines multi-source data that is expected to be complementary or cooperative. Intermediate-level feature fusion aims at the selection and combination of features to remove redundant information. Decision-level fusion exploits a set of classifiers to provide more accurate results. These fusion strategies have wide applications, including HSI data processing. With the fast development of multiple remote sensing modalities, e.g. Very High Resolution (VHR) optical sensors and LiDAR, fusion of multi-source data can in principle produce more detailed information than each single source. On the other hand, besides the abundant spectral information contained in HSI data, features such as texture and shape may be employed to represent data points from a spatial perspective. Furthermore, feature fusion also includes the strategy of removing redundant and noisy features from the dataset. One of the major problems in machine learning and pattern recognition is developing appropriate representations for complex nonlinear data. In HSI processing, a data point is usually described as a vector whose coordinates correspond to the intensities measured in the spectral bands. This vector representation permits the application of linear and nonlinear transformations from linear algebra to find alternative representations of the data. More generally, HSI is multi-dimensional in nature, and the vector representation may lose contextual correlations. Tensor representation provides a more sophisticated modeling technique and a higher-order generalization of linear subspace analysis. In graph theory, data points can be generalized as nodes whose connectivities are measured from the proximity of a local neighborhood. The graph-based framework efficiently characterizes the relationships among the data and allows convenient mathematical manipulation in many applications, such as data clustering, feature extraction, feature selection and data alignment. In this thesis, graph-based approaches to multi-source feature and data fusion in remote sensing are explored. We mainly investigate the fusion of spatial, spectral and LiDAR information with linear and multilinear algebra under a graph-based framework for data clustering and classification problems
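The graph construction described above — nodes for data points, edge weights from the proximity of a local neighborhood — can be sketched for spectra-like vectors as follows (the random data, neighborhood size and Gaussian bandwidth heuristic are illustrative assumptions, not the thesis' actual settings):

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative stand-in for HSI pixels: 100 "pixels", 50 spectral bands.
spectra = rng.normal(size=(100, 50))

def knn_graph(x, k=5):
    """Symmetric k-nearest-neighbour affinity matrix with Gaussian
    weights; the median squared distance is a common bandwidth heuristic."""
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    sigma2 = np.median(d2)
    n = len(x)
    w = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]   # skip self (distance 0)
        w[i, nbrs] = np.exp(-d2[i, nbrs] / sigma2)
    return np.maximum(w, w.T)               # symmetrise

w = knn_graph(spectra)
# Unnormalised graph Laplacian: the usual starting point for spectral
# clustering; its smallest eigenvalue is 0 (constant eigenvector).
lap = np.diag(w.sum(1)) - w
```

From here, clustering or feature extraction proceeds on the eigenvectors of `lap` or a normalised variant.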

    G-CSC Report 2010

    The present report gives a short summary of the research of the Goethe Center for Scientific Computing (G-CSC) of the Goethe University Frankfurt. The G-CSC aims at developing and applying methods and tools for modelling and numerical simulation of problems from empirical science and technology. In particular, fast solvers for partial differential equations (PDEs), such as robust, parallel, and adaptive multigrid methods, and numerical methods for stochastic differential equations are developed. These methods are highly advanced and allow complex problems to be solved. The G-CSC is organised in departments and interdisciplinary research groups. Departments are located directly at the G-CSC, while the task of the interdisciplinary research groups is to bridge disciplines and to bring together scientists from different departments. Currently, the G-CSC consists of the department Simulation and Modelling and the interdisciplinary research group Computational Finance

    Computer assisted surgery for fracture reduction and deformity correction of the pelvis and long bones

    Many orthopaedic operations, for example osteotomies, are not planned preoperatively; the result of the operation then depends on the experience of the surgeon. In industry, new developments are no longer carried out without CAD planning or computer simulations, yet in medicine the operative technique of corrective osteotomies has remained largely unchanged over the last 30 years. Two-dimensional analysis is not sufficiently accurate, which leads to errors in the operating room. The surgeon usually obtains preoperative information about the current state of the bone from radiographs. For complex operations (including the insertion of implants), planning is required. Planning based on radiographs has system-dependent disadvantages: limited accuracy, the time required for corrections (distortions due to projection), and restrictions when complex corrections are necessary. Today computed tomography is used as a solution; it is the only modality that achieves the accuracy and resolution required for good 3D planning. However, its high radiation dose to the patient is a serious disadvantage, so in the dilemma between a low dose and adequate planning, the former is often preferred. In the future, however, good operation results are expected to be guaranteed only with 3D planning. MR systems also provide image information from which bones can be extracted indirectly, but owing to their large distortions (susceptibility, non-homogeneity of the magnetic field), low spatial resolution and high cost, MRI is not expected to become an alternative in the near future. The solution is the use of other imaging modalities; ultrasound is a good compromise between cost and accuracy. In this work I developed an algorithm that produces 3D bone models from ultrasound data. These models have good resolution and accuracy compared with CT and can therefore be used for 3D planning.
    In this work an improved procedure for segmenting bone surfaces is realised, in combination with methods for fusion into a three-dimensional model. The novelty of the presented work lies in new approaches to realising an operation planning system based on 3D computations, and in implementing intraoperative control by a guided ultrasound system for bone tracking. To realise these ideas the following tasks are solved: bone modelling from CT data; real-time extraction of bone surfaces from ultrasound images; tracking the bone with respect to the CT bone model; and integrating the above results in the development of an operation planning system for osteotomy corrections that supports on-line measurements, different types of deformity correction, bone geometry design and a high level of automation. The developed osteotomy planning system allows the pathology to be investigated and analysed, finds an optimal way to perform the surgery, and provides visual and quantitative information about the results of the virtual operation. The implementation of the proposed system can therefore be considered an additional significant tool for diagnosis and orthopaedic surgery. The major parts of the planning system are: bone modelling from 3D data derived from CT, MRI or other modalities; real-time visualisation of the elements of the 3D scene; and the geometric design of bone elements. A high level of automation allows the surgeon to reduce significantly the time needed to develop the operation plan
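At its core, tracking the bone with respect to the CT model is a rigid point-set registration problem. A minimal sketch of the least-squares rigid alignment step (the Kabsch solution), assuming known point correspondences; a full ICP-style tracker would alternate this step with correspondence search, and the data here are synthetic stand-ins:

```python
import numpy as np

def rigid_align(p, q):
    """Least-squares rigid transform (Kabsch): rotation r and
    translation t such that r @ p_i + t best approximates q_i."""
    pc, qc = p.mean(0), q.mean(0)
    h = (p - pc).T @ (q - qc)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # avoid reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = qc - r @ pc
    return r, t

rng = np.random.default_rng(1)
model = rng.normal(size=(200, 3))            # stand-in for CT bone surface points
angle = 0.3
rot = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                [np.sin(angle),  np.cos(angle), 0.0],
                [0.0, 0.0, 1.0]])
scan = model @ rot.T + np.array([5.0, -2.0, 1.0])  # "ultrasound" point set
r, t = rigid_align(model, scan)
print(np.allclose(r, rot), np.allclose(t, [5.0, -2.0, 1.0]))  # True True
```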

    Machine learning-based automated segmentation with a feedback loop for 3D synchrotron micro-CT

    The development of third-generation synchrotron light sources laid the foundation for studying the 3D structure of opaque samples at micrometre resolution and beyond. This led to the development of X-ray synchrotron micro-computed tomography, which fostered the creation of imaging facilities for studying samples of many kinds, e.g. model organisms, in order to better understand the physiology of complex living systems. The development of modern control systems and robotics enabled the full automation of X-ray imaging experiments and the calibration of the parameters of the experimental setup during operation. Advances in digital detector systems led to improvements in resolution, dynamic range, sensitivity and other essential properties. These improvements considerably increased the throughput of the imaging process, but on the other hand the experiments began to generate substantially larger amounts of data, up to tens of terabytes, which were subsequently processed manually. These technical advances thus paved the way for more efficient high-throughput experiments that study large numbers of samples and produce datasets of better quality. There is therefore a strong demand in the scientific community for an efficient, automated workflow for X-ray data analysis that can handle such a data load and deliver valuable insights to domain experts. Existing solutions for such a workflow are not directly applicable to high-throughput experiments, as they were developed for ad hoc scenarios in medical imaging; they are therefore neither optimised for high-throughput data streams nor able to exploit the hierarchical nature of samples.
    The main contribution of this thesis is a new automated analysis workflow suited to the efficient processing of heterogeneous X-ray datasets of hierarchical nature. The developed workflow is based on improved methods for data pre-processing, registration, localisation and segmentation. Every stage of the workflow that involves a training phase can be fine-tuned automatically to find the best hyperparameters for the specific dataset. For the analysis of fibrous structures in samples, a new, highly parallelisable 3D orientation analysis method was developed, based on a novel concept of emitted rays, which enables a more precise morphological analysis. All developed methods were thoroughly validated on synthetic datasets in order to quantitatively assess their applicability under different imaging conditions. The workflow was shown to be capable of processing a series of datasets of similar kind. Furthermore, efficient CPU/GPU implementations of the developed workflow and methods are presented and made available to the community as modules for the Python language. The developed automated analysis workflow was successfully applied to micro-CT datasets acquired in high-throughput X-ray experiments in developmental biology and materials science. In particular, it was applied to the analysis of medaka fish datasets, enabling automated segmentation and subsequent morphological analysis of the brain, liver, head nephrons and heart. Moreover, the developed 3D orientation analysis method was employed in the morphological analysis of polymer scaffold datasets in order to steer a fabrication process towards desirable properties
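The automatic fine-tuning of each trainable workflow stage can be caricatured as a feedback loop over a single hyperparameter: sweep candidate values, score the resulting segmentation against an annotated reference, and keep the best. A toy sketch with a plain threshold and the Dice overlap (synthetic data; the actual workflow tunes far richer hyperparameters than a threshold):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

rng = np.random.default_rng(2)
truth = np.zeros((64, 64), dtype=bool)
truth[16:48, 16:48] = True                         # synthetic reference mask
image = truth + rng.normal(0.0, 0.2, truth.shape)  # noisy "reconstruction"

# Feedback loop: sweep the hyperparameter, score each candidate against
# the annotated reference, keep the best-scoring value.
best_score, best_t = max((dice(image > t, truth), t)
                         for t in np.linspace(0.1, 0.9, 33))
```

The same loop structure generalises to any stage whose quality can be scored against labelled data.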

    Pattern Recognition

    A wealth of advanced pattern recognition algorithms is emerging at the intersection of effective visual-feature technologies and the study of the human-brain cognition process. Effective visual features are made possible through rapid developments in appropriate sensor equipment, novel filter designs, and viable information processing architectures, while a better understanding of the human-brain cognition process broadens the ways in which computers can perform pattern recognition tasks. The present book collects representative research from around the globe focusing on low-level vision, filter design, features and image descriptors, data mining and analysis, and biologically inspired algorithms. The 27 chapters covered in this book disclose recent advances and new ideas in promoting the techniques, technology and applications of pattern recognition

    Improvement of path openings for the analysis of N-dimensional images, and optimised implementations

    The detection of thin and oriented features in an image leads to a large field of applications, specifically in medical imaging, materials science and remote sensing. Path openings and closings are efficient morphological operators that use flexible oriented paths as structuring elements. They are employed in a similar way to operators with rotated line segments as structuring elements, but are more effective as they can detect linear structures that are not necessarily locally perfectly straight. While their theory has always allowed paths in arbitrary dimensions, de facto implementations were only proposed in 2D. Recently, a new implementation was proposed, enabling the computation of efficient d-dimensional path operators. However, this implementation is limited in the sense that it is not robust to noise affecting thin structures. Indeed, in practical applications, for path operators to be effective, structuring elements must be long enough to match the length of the features to be detected, yet path operators become increasingly sensitive to noise as their length parameter L increases.
    The first part of this work is dedicated to overcoming this limitation. We propose an efficient d-dimensional algorithm, the robust path operators, which uses a larger family of flexible structuring elements: given a length L and a robustness parameter G, path propagation is allowed across disconnections of size less than or equal to G, making G independent of L. This simple assumption leads to constant memory bookkeeping and a lower computational complexity than the state of the art. The developed operators have been compared qualitatively and quantitatively with other efficient methods for the detection of line-like features. As an application, robust path openings have been integrated into a complete image processing and modelling chain for the characterisation of glass-fibre-reinforced polymers. Our study then led us to focus on recent morphological connected filters based on geodesic measurements. These filters are a good alternative to path operators, as they are efficient at detecting the so-called "tortuous" shapes in an image, which is precisely the main limitation of path operators. Combining the local robustness of the robust path operators with the ability of geodesic attribute-based filters to recover tortuous shapes has enabled us to propose another original algorithm, the selective and robust path operators
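The key idea — a robustness parameter G that lets a path bridge disconnections of size at most G, independently of the length L — can be caricatured in one dimension: measure the longest run of foreground pixels along a row while bridging gaps no longer than G. The real operators propagate flexible paths in d dimensions; this is only a toy:

```python
def runs_of_ones(row):
    """Half-open (start, end) ranges of consecutive foreground pixels."""
    runs, i, n = [], 0, len(row)
    while i < n:
        if row[i]:
            j = i
            while j < n and row[j]:
                j += 1
            runs.append((i, j))
            i = j
        else:
            i += 1
    return runs

def robust_length(row, g):
    """Longest 'path' along a binary row when gaps of at most g
    background pixels may be bridged (bridged pixels count too)."""
    runs = runs_of_ones(row)
    if not runs:
        return 0
    best, (s, e) = 0, runs[0]
    for ns, ne in runs[1:]:
        if ns - e <= g:            # disconnection small enough: bridge it
            e = ne
        else:
            best, (s, e) = max(best, e - s), (ns, ne)
    return max(best, e - s)

row = [1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1]
print(robust_length(row, 0), robust_length(row, 1), robust_length(row, 3))  # 4 8 13
```

With G = 0 the single-pixel gap splits the feature; with G = 1 it is bridged regardless of how long the feature is, which is exactly the decoupling of G from L described above.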

    2D and 3D digital shape modelling strategies

    Image segmentation of organs in medical images using model-based approaches requires a priori information, which is often given by manually tagging landmarks on a training set of shapes. This is a tedious, time-consuming, and error-prone task. To overcome some of these drawbacks, several automatic methods were devised. Identification of the same homologous set of points in a training set of object shapes is the most crucial step in Active Shape Modelling, and it faces several challenges. The most important among these are: (C1) defining and characterizing landmarks; (C2) obtaining landmarks at the desired level of detail; (C3) ensuring homology; (C4) generalizing to n>2 dimensions; (C5) achieving practical computations. This thesis proposes several novel modelling techniques attempting to meet C1-C5. In this process, this thesis makes the following key contributions: the concept of local scale for shapes; the idea of allowing level of detail for selecting landmarks; the concept of equalization of shape variance for selecting landmarks; the idea of recursively subdividing shapes and letting the sub-shapes guide landmark selection, which is a very general n-dimensional strategy; the idea of virtual landmarks, which may be situated anywhere relative to, not necessarily on, the shape boundary; and a new compactness measure that considers both the number of landmarks and the number of modes selected as independent variables. The first of three methods uses the c-scale shape descriptor, based on the new concept of curvature-scale, to automatically locate mathematical landmarks on the mean of the training shapes. The landmarks are then propagated to the training shapes to establish correspondence among shapes.
    Since the shapes of a family do not all present exactly the same shape features, another novel method was devised that takes into account the real shape variability in the training set and is guided by the strategy of equalizing the variance observed in the training set when selecting landmarks. By incorporating the above basic concepts into modelling, a third family of methods with numerous possibilities was developed, taking into account shape features and the variability among shapes, while being easily generalized to 3D space. Its output is multi-resolutional, allowing landmark selection at any lower resolution trivially as a subset of those found at a higher resolution. The best strategy to use within the family will have to be determined according to the clinical application at hand. All methods were evaluated in terms of compactness on two data sets: 40 CT images of the liver and 40 MR images of the talus bone of the foot. Further, numerous artificial shapes with known salient points were also used for testing the accuracy of the proposed methods. The results show that, for the same number of landmarks, the proposed methods are more compact than manual and equally spaced annotations. Moreover, the accuracy (in terms of false positives and negatives and the location of landmarks) of the proposed shape descriptor on artificial shapes is considerably superior to a state-of-the-art scale-space approach to finding salient points on shapes
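A toy version of curvature-based landmarking (the c-scale descriptor itself is more elaborate): compute a discrete curvature along a synthetic closed contour and keep prominent local maxima as candidate mathematical landmarks. The ellipse, sampling density and peak rule here are illustrative assumptions:

```python
import numpy as np

# Synthetic closed contour: an ellipse sampled at 200 points.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
contour = np.c_[2.0 * np.cos(t), np.sin(t)]

def curvature(c):
    """Discrete curvature of a closed contour via central differences."""
    d1 = (np.roll(c, -1, axis=0) - np.roll(c, 1, axis=0)) / 2.0
    d2 = np.roll(c, -1, axis=0) - 2 * c + np.roll(c, 1, axis=0)
    num = d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]
    return num / ((d1 ** 2).sum(axis=1) ** 1.5)

k = np.abs(curvature(contour))
# Candidate mathematical landmarks: prominent local curvature maxima.
n = len(k)
peaks = [i for i in range(n)
         if k[i] >= k[i - 1] and k[i] >= k[(i + 1) % n] and k[i] > k.mean()]
print(peaks)  # the two high-curvature ends of the ellipse: [0, 100]
```

Landmarks found this way on a mean shape would then be propagated to the training shapes to establish correspondence, as the abstract describes.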