
    Shape parametrization of bio-mechanical finite element models based on medical images

    The main objective of this study is to combine statistical shape analysis with a morphing procedure in order to generate shape-parametric finite element models of tissues and organs, and to explore the reliability and the limitations of this approach when applied to databases of real medical images. As classical statistical shape models are not always adapted to the morphing procedure, a new registration method was developed in order to maximize the morphing efficiency. The method was compared to the traditional iterative thin plate spline (iTPS) approach. Two data sets of 33 proximal femur shapes and 385 liver shapes were used for the comparison. Principal component analysis was used to extract the principal morphing modes. In terms of anatomical shape reconstruction (evaluated through the criteria of generalization, compactness and specificity), our approach compared fairly well to the iTPS method, while performing remarkably better in terms of mesh quality, since it was less prone to generating invalid meshes in the interior. This was particularly true in the liver case. Such a methodology has a potential application in the automated generation of finite element (FE) models from medical images. Parametrized anatomical models can also be used to assess the influence of inter-patient variability on the biomechanical response of the tissues. Indeed, thanks to the shape parametrization, the user has easy access to a valid FE model for any shape belonging to the parameter subspace.
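    The core of such a shape parametrization, PCA on a database of registered shapes, can be sketched as follows. This is a minimal illustration, not the registration or morphing method of the paper itself; the array layout and the file name in the usage comment are assumptions.

        # Minimal sketch of a PCA shape model on registered shapes, assuming each
        # shape is a flattened vector of corresponding node coordinates.
        import numpy as np

        def build_shape_model(shapes):
            """shapes: (n_shapes, 3 * n_nodes) array of registered, flattened node coordinates."""
            mean_shape = shapes.mean(axis=0)
            centered = shapes - mean_shape
            # Principal morphing modes = right singular vectors of the centered data.
            _, singular_values, modes = np.linalg.svd(centered, full_matrices=False)
            variances = singular_values ** 2 / (shapes.shape[0] - 1)
            return mean_shape, modes, variances

        def synthesize_shape(mean_shape, modes, weights):
            """Generate a new shape in the parameter subspace from mode weights."""
            return mean_shape + weights @ modes[: len(weights)]

        # Hypothetical usage: build the model, then sample the first two modes.
        # shapes = np.load("registered_femur_nodes.npy")
        # mean, modes, var = build_shape_model(shapes)
        # nodes = synthesize_shape(mean, modes, np.array([1.5, -0.5]) * np.sqrt(var[:2]))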

    Novel Approaches to the Representation and Analysis of 3D Segmented Anatomical Districts

    Nowadays, image processing and 3D shape analysis are an integral part of clinical practice and have the potential to support clinicians with advanced analysis and visualization techniques. Both approaches provide visual and quantitative information to medical practitioners, even if from different points of view. Indeed, shape analysis is aimed at studying the morphology of anatomical structures, while image processing is focused more on the tissue or functional information provided by pixel/voxel intensity levels. Despite the progress achieved by research in both fields, a junction between these two complementary worlds is missing. When working with 3D models and analyzing shape features, the information of the volume surrounding the structure is lost, since a segmentation process is needed to obtain the 3D shape model; however, the 3D nature of the anatomical structure is represented explicitly. With volume images, instead, the tissue information related to the imaged volume is the core of the analysis, while the shape and morphology of the structure are only implicitly represented and thus harder to appreciate. The aim of this Thesis work is the integration of these two approaches in order to increase the amount of information available for physicians, allowing a more accurate analysis of each patient. An augmented visualization tool able to provide information on both the anatomical structure shape and the surrounding volume through a hybrid representation could reduce the gap between the two approaches and provide a more complete anatomical rendering of the subject. To this end, given a segmented anatomical district, we propose a novel mapping of volumetric data onto the segmented surface. The grey-levels of the image voxels are mapped through a volume-surface correspondence map, which defines a grey-level texture on the segmented surface. The resulting texture mapping is coherent with the local morphology of the segmented anatomical structure and provides an enhanced visual representation of the anatomical district. The integration of volume-based and surface-based information in a unique 3D representation also supports the identification and characterization of morphological landmarks and pathology evaluations. The main research contributions of the Ph.D. activities and Thesis are:
    • the development of a novel integration algorithm that combines surface-based (segmented 3D anatomical structure meshes) and volume-based (MRI volumes) information; the integration supports different criteria for mapping the grey-levels onto the segmented surface;
    • the development of methodological approaches for using the grey-level mapping together with morphological analysis, with the final goal of solving problems in real clinical tasks, such as the identification of (patient-specific) ligament insertion sites on bones from segmented MR images, the characterization of the local morphology of bones/tissues, and the early diagnosis, classification, and monitoring of musculoskeletal pathologies;
    • the analysis of segmentation procedures, with a focus on the tissue classification process, in order to reduce operator dependency and to overcome the absence of a real gold standard for the evaluation of automatic segmentations;
    • the evaluation and comparison of (unsupervised) segmentation methods, aimed at defining a novel segmentation method for low-field MR images and at the local correction/improvement of a given segmentation.
The proposed method is simple but effectively integrates information derived from medical image analysis and 3D shape analysis. Moreover, the algorithm is general enough to be applied to different anatomical districts, independently of the segmentation method, imaging technique (such as CT), or image resolution. The volume information can easily be integrated into different shape analysis applications, taking into consideration not only the morphology of the input shape but also the real context in which it is embedded, in order to solve clinical tasks. The results obtained by this combined analysis have been evaluated through statistical analysis.
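    The central idea, sampling image grey-levels through a volume-surface correspondence and attaching them to the segmented mesh, can be illustrated with a deliberately simple nearest-voxel lookup. The thesis's correspondence map supports richer criteria; the function and parameter names below are illustrative assumptions.

        # Illustrative grey-level texturing of a segmented surface by nearest-voxel
        # lookup; the thesis supports other mapping criteria (e.g. sampling along
        # vertex normals), which are not reproduced here.
        import numpy as np

        def greylevel_texture(volume, vertices, spacing=(1.0, 1.0, 1.0), origin=(0.0, 0.0, 0.0)):
            """volume:   3D array of voxel intensities (e.g. an MRI volume).
            vertices: (n, 3) mesh vertex positions in world coordinates.
            Returns one grey-level per vertex, usable as a per-vertex texture."""
            idx = np.round((vertices - np.asarray(origin)) / np.asarray(spacing)).astype(int)
            idx = np.clip(idx, 0, np.array(volume.shape) - 1)  # stay inside the volume
            return volume[idx[:, 0], idx[:, 1], idx[:, 2]]

    Per-vertex grey-levels obtained this way can be rendered directly as a surface texture or fed into the morphological analyses described above.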

    Towards a Brain-inspired Information Processing System: Modelling and Analysis of Synaptic Dynamics

    Biological neural systems (BNS) in general, and the central nervous system (CNS) specifically, exhibit strikingly efficient computational power along with an extremely flexible and adaptive basis for acquiring and integrating new knowledge. Acquiring more insight into the actual mechanisms of information processing within the BNS and their computational capabilities is a core objective of modern computer science, computational sciences and neuroscience. One of the main reasons for this drive to understand the brain is to help improve the quality of life of people suffering from partial or complete loss of brain or spinal cord functions. Brain-computer interfaces (BCI), neural prostheses and other similar approaches are potential solutions, either to help these patients through therapy or to push progress in rehabilitation. There is, however, a significant lack of knowledge regarding the basic information processing within the CNS. Without a better understanding of the fundamental operations or sequences leading to cognitive abilities, applications like BCI or neural prostheses will keep struggling to find a proper and systematic way to help patients in this regard. In order to gain more insight into these basic information processing methods, this thesis presents an approach that makes a formal distinction between the essence of being intelligent (as for the brain) and the classical class of artificial intelligence, e.g. expert systems. This approach investigates the underlying mechanisms allowing the CNS to perform a massive amount of computational tasks with sustainable efficiency and flexibility. This is the essence of being intelligent, i.e. being able to learn, adapt and invent. The approach used in the thesis at hand is based on the hypothesis that the brain, or specifically a biological neural circuit in the CNS, is a dynamic system (network) that features emergent capabilities. These capabilities can be imported into spiking neural networks (SNN) by emulating the dynamic neural system. Emulating the dynamic system requires simulating both the inner workings of the system and the framework in which the information processing tasks are performed. Thus, this work comprises two main parts. The first part is concerned with introducing a proper and novel dynamic synaptic model as a vital constituent of the inner workings of the dynamic neural system. This model represents a balanced compromise between the needed biophysical detail and computational cost. Biophysical detail is important so that the abilities of the target dynamic system can be inherited, while simplicity is needed to allow large-scale simulations and future hardware implementations. In addition, the energy-related aspects of synaptic dynamics are studied and linked to the behaviour of networks seeking stable states of activity. The second part of the thesis is consequently concerned with importing the processing framework of the dynamic system into the environment of SNN. This part of the study investigates the well-established concept of binding by synchrony to solve the information binding problem and proposes the concept of synchrony states within SNN. The concept of computing with states is extended to investigate a computational model based on finite-state machines and reservoir computing. Biologically plausible validations of the introduced model and frameworks are performed.
Results and discussions of these validations indicate that this study constitutes a significant advance in understanding the mechanisms underpinning the computational power of the CNS. Furthermore, it outlines a roadmap for adopting these biological computational capabilities in computational science in general and in biologically inspired spiking neural networks in particular. Large-scale simulations and the development of neuromorphic hardware are work in progress and future work. Among the applications of the introduced work are neural prostheses and bionic automation systems.
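    To make the notion of a "dynamic synaptic model" concrete, the sketch below implements the classic Tsodyks-Markram short-term plasticity synapse. This well-known phenomenological model is used only as an illustration; it is not the novel model introduced in the thesis, and all parameter values are assumptions.

        # Classic Tsodyks-Markram short-term plasticity synapse (illustrative only;
        # not the thesis's model). x tracks available resources, u their utilisation.
        import numpy as np

        def tsodyks_markram(spike_times, dt=1e-4, T=1.0, U=0.5,
                            tau_rec=0.8, tau_facil=0.05, A=1.0):
            """Return the per-step released synaptic efficacy driven by presynaptic spikes."""
            steps = int(T / dt)
            spikes = np.zeros(steps)
            spikes[(np.asarray(spike_times) / dt).astype(int)] = 1.0
            x, u = 1.0, U
            released = np.zeros(steps)
            for k in range(steps):
                if spikes[k]:
                    u = u + U * (1.0 - u)      # facilitation on each spike
                    released[k] = A * u * x    # amount of transmitter released
                    x = x - u * x              # depression: resources consumed
                x += dt * (1.0 - x) / tau_rec      # resources recover between spikes
                u += dt * (U - u) / tau_facil      # utilisation relaxes to baseline U
            return released

        # Example: drive the synapse with a regular 20 Hz spike train.
        # efficacy = tsodyks_markram(spike_times=np.arange(0.1, 0.35, 0.05))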

    Large-Eddy Simulation of Arctic Stratocumulus: Process Representation and Surface Heterogeneity

    Small-scale processes are crucial for the evolution of Stratocumulus and act on scales reaching down to less than one meter. Most large-eddy simulation studies still apply a horizontal resolution of tens of meters, limiting the ability to resolve cloud-driving processes. I investigate such small-scale processes in a reference case that is defined within this thesis, based on the recent field campaigns ACLOUD and PASCAL, to represent a mixed-phase Stratocumulus during Arctic spring. I apply large-eddy simulations with horizontal resolutions of 35 m, 10 m, 3.5 m, and 3 m and a vertical resolution of about 3 m. My analysis focuses on the resolution sensitivity of cloud-top entrainment processes and on the effects of surface heterogeneity structure on the atmospheric boundary layer. First, I find that for a horizontal grid spacing larger than 10 m, the effects of small-scale microphysical cooling and turbulent engulfment on cloud-top entrainment are represented sufficiently only for the atmospheric boundary layer bulk profiles, but not on a process level. The stratification-limited size of energy-containing eddies violates the assumptions underlying many sub-grid scale models of turbulent mixing. Second, I observe a decrease in cloud-top entrainment for a horizontal resolution coarser than 10 m, which results in 15% more cloud water after six hours of simulation and a corresponding optical thickening of the Stratocumulus. Third, I find that structuring the surface heterogeneity does not affect zero- and first-order bulk quantities outside the surface layer. A notable sensitivity at higher altitudes is only observed for higher-order quantities, which show increased values over structured surface heterogeneity. Fourth, I observe that structured surface heterogeneity forms a streamwise elongated, roll-like secondary circulation perpendicular to the mean wind. Its formation is captured neither by traditional Arctic lead theory nor by the theory of surface heterogeneity effects on cloud-free atmospheric boundary layers. It turns out that the streamwise elongated structure evolves due to streamwise "smudging" of the surface signals at the lower cloud boundary. This "smudging" is a consequence of weak vertical motion and cloud-induced turbulence, a unique feature compared to other studies investigating the effects of surface heterogeneity structure.

    Digital Twins of production systems - Automated validation and update of material flow simulation models with real data

    To achieve good economic efficiency and sustainability, production systems must be operated at high productivity over long periods of time. This poses major challenges for manufacturing companies, especially in times of increased volatility triggered, for example, by technological upheavals in mobility as well as political and societal change, because the requirements placed on the production system change constantly. The frequency of necessary adaptation decisions and subsequent optimization measures increases, so the need for ways to evaluate scenarios and possible system configurations grows. A powerful tool for this is material flow simulation, whose use is currently limited by its labour-intensive manual creation and its temporally restricted, project-based application. Longer-term use across the system's life cycle is currently hindered by the labour-intensive maintenance of the simulation model, i.e. the manual adaptation of the model whenever the real system changes. The goal of this work is to develop and implement a concept, including the required methods, to automate the maintenance of the simulation model and its adaptation to reality. To this end, the available real data are used, which are increasingly accessible thanks to trends such as Industrie 4.0 and digitalization in general. The vision pursued in this work is a digital twin of the production system that, through this data input, represents a faithful image of the system at every point in time and can be used for the realistic evaluation of scenarios. For this purpose, the required overall concept was designed and the mechanisms for automatic validation and updating of the model were developed. The focus was, among other things, on the development of algorithms for detecting changes in the structure and the processes of the production system, as well as on investigating the influence of the available data. The developed components were successfully applied to a real use case at Robert Bosch GmbH and increased the fidelity of the digital twin, which was then successfully used for production planning and optimization. The potential of localization data for creating digital twins of production systems was demonstrated in the test environment of the learning factory of the wbk Institut für Produktionstechnik.
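    One way to picture the automated validation step described above is a statistical comparison between event data logged by the real production system and the output of the material-flow simulation; when the distributions diverge, a model update is triggered. The sketch below uses a two-sample Kolmogorov-Smirnov test on cycle times and is only an illustration under assumed data sources, not the change-detection algorithms developed in the thesis.

        # Minimal sketch of one validation idea behind a digital twin of a production
        # system: compare cycle times logged by the real system against those produced
        # by the material-flow simulation and flag the model as outdated when the
        # distributions diverge.
        from scipy import stats

        def model_still_valid(real_cycle_times, simulated_cycle_times, alpha=0.05):
            """Two-sample Kolmogorov-Smirnov test on cycle-time distributions."""
            result = stats.ks_2samp(real_cycle_times, simulated_cycle_times)
            return result.pvalue >= alpha      # False -> trigger a model update

        # Hypothetical usage (data sources are assumptions):
        # real = load_mes_cycle_times("station_42")          # from shop-floor data
        # sim  = run_simulation_and_collect("station_42")    # from the flow simulation
        # if not model_still_valid(real, sim):
        #     update_simulation_parameters(real)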

    LiDAR-Based Object Tracking and Shape Estimation

    Environment perception is a fundamental prerequisite for the safe and comfortable operation of automated vehicles. Moving road users in the immediate vicinity of the vehicle in particular have a major influence on the choice of an appropriate driving strategy. This calls for an object perception system that provides a robust and precise estimate of the motion and geometry of surrounding vehicles. In the context of automated driving, the box geometry model has become a de facto standard over time. However, given the steadily increasing demands on perception systems, the box is now often an undesirably coarse approximation of the actual geometry of other road users. This motivates a transition to more accurate shape representations. This thesis therefore presents a probabilistic method for the simultaneous estimation of rigid object shape and motion from the measurements of a LiDAR sensor. A comparison of three free-form geometry models with different levels of detail (polyline, triangle mesh, and surfel map) against the simple box model shows that reducing modelling errors in the object geometry enables a more robust and precise estimation of object states. In addition, automated driving functions such as parking or evasion assistants can benefit from more accurate knowledge of the shape of other objects. Two factors should primarily govern the choice of an appropriate shape representation: observability (what level of detail does the sensor specification theoretically allow?) and model adequacy (how well does the given model explain the actual observations?). Based on these factors, this thesis presents a model selection strategy that adaptively determines the most suitable shape model at runtime. While most LiDAR-based object tracking algorithms rely exclusively on point measurements, this thesis proposes two additional types of measurements: information about the measured free space is used to reason about regions that cannot be occupied by object geometry, and LiDAR intensities are incorporated to detect and track salient features such as licence plates and retroreflectors over time. An extensive evaluation on more than 1.5 hours of recorded trajectories of other vehicles in urban and highway scenarios shows that precise modelling of the object surface can improve the motion estimation by up to 30-40%. Furthermore, it is shown that the presented methods can generate consistent and highly precise reconstructions of object geometries that avoid the often significant over-approximation of the simple box model.
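    For orientation, the baseline box geometry model discussed above can be fitted to a single LiDAR point cluster with a simple minimum-area rectangle search in the bird's-eye view. This is a generic illustration of the box model only, not the probabilistic shape-and-motion estimator or the free-form models of the thesis; function and parameter names are assumptions.

        # Illustrative fit of the simple box geometry model to a 2D LiDAR point cluster
        # by searching headings for the minimum-area rectangle in the rotated frame.
        import numpy as np

        def fit_oriented_box(points, n_angles=90):
            """points: (n, 2) LiDAR returns of one object in bird's-eye view.
            Returns (heading, length, width, center) of the minimum-area box."""
            best = None
            for theta in np.linspace(0.0, np.pi / 2, n_angles, endpoint=False):
                c, s = np.cos(theta), np.sin(theta)
                R = np.array([[c, -s], [s, c]])
                rotated = points @ R
                lo, hi = rotated.min(axis=0), rotated.max(axis=0)
                extent = hi - lo
                area = extent[0] * extent[1]
                if best is None or area < best[0]:
                    center = ((lo + hi) / 2.0) @ R.T   # back to the original frame
                    best = (area, theta, extent[0], extent[1], center)
            _, heading, length, width, center = best
            return heading, length, width, center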

    Approximation methods in geometry and topology: learning, coarsening, and sampling

    Data materialize in many different forms and formats. These can be continuous or discrete, from algebraic expressions to unstructured point clouds and highly structured graphs and simplicial complexes. Their sheer volume and the plethora of different modalities used to manipulate and understand them highlight the need for expressive abstractions and approximations, enabling novel insights and efficiency. Geometry and topology provide powerful and intuitive frameworks for modelling structure, form, and connectivity. Acting as a multi-focal lens, they enable inspection and manipulation at different levels of detail, from global discriminant features to local intricate characteristics. However, these fundamentally algebraic theories do not scale well in the digital world. Adjusting topology and geometry to the computational setting is a non-trivial task, adhering to the “no free lunch” adage. The necessary discretizations can be inaccurate, the underlying combinatorial structures can grow unmanageably in size, and computing salient topological and geometric features can become computationally taxing. Approximations are a necessity when theory cannot accommodate efficient algorithms. This thesis explores different approaches to simplifying computations pertaining to geometry and topology via approximations. Our methods contribute to the approximation of topological features on discrete domains, and employ geometry and topology to efficiently guide discretizations and approximations. This line of work fits under the umbrella of Topological Data Analysis (TDA) and Discrete Geometry, which aim to bridge the continuous algebraic mindset with the discrete. We construct topological and geometric approximation methods operating on three different levels: we approximate topological features on discrete combinatorial spaces; we approximate the combinatorial spaces themselves; and we guide processes that allow us to discretize domains via sampling. With our Dist2Cycle model we learn geometric manifestations of topological features, the “optimal” homology generating cycles. This is achieved by a novel simplicial complex neural network that exploits the kernel of Hodge Laplacian operators to localize concise homology generators. Compression of meshes and arbitrary simplicial complexes is made possible by our general spectral coarsening strategy. Functional and structural properties are preserved by optimizing for important eigenspaces of general differential operators, the Hodge Laplacians, at multiple dimensions. Finally, we offer a geometry-driven sampling strategy for data accumulation and stochastic integration. By employing the kd-tree geometric partitioning algorithm we construct a sample set with provable equidistribution guarantees. Our findings are contextualized within prior and recent work, and our methods are thoroughly discussed and evaluated in diverse settings. Ultimately, we argue for the usefulness of examining the ever-present topological and geometric properties of data, not only in terms of feature discovery, but also as informed generation, manipulation, and simplification tools.
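    The Hodge Laplacians that recur in this work are standard objects: for a simplicial complex with boundary matrices B_k, the k-th Hodge Laplacian is L_k = B_k^T B_k + B_{k+1} B_{k+1}^T, and its kernel consists of harmonic k-chains, one per k-dimensional homology class. The sketch below builds L_1 for a hollow triangle and recovers its single 1-dimensional hole; it illustrates the standard machinery only, not Dist2Cycle or the spectral coarsening optimization.

        # Combinatorial Hodge Laplacian L_k = B_k^T B_k + B_{k+1} B_{k+1}^T, whose
        # kernel dimension equals the k-th Betti number of the complex.
        import numpy as np

        def hodge_laplacian(B_k, B_k1):
            """B_k:  boundary matrix from k-simplices to (k-1)-simplices.
            B_k1: boundary matrix from (k+1)-simplices to k-simplices."""
            return B_k.T @ B_k + B_k1 @ B_k1.T

        # Hollow triangle (3 vertices, 3 oriented edges, no filled 2-simplex): one 1D hole.
        B1 = np.array([[-1,  0, -1],
                       [ 1, -1,  0],
                       [ 0,  1,  1]], dtype=float)   # vertex-edge incidences
        B2 = np.zeros((3, 0))                          # no triangles
        L1 = hodge_laplacian(B1, B2)
        hole_count = int(np.sum(np.isclose(np.linalg.eigvalsh(L1), 0.0)))  # -> 1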

    Spatiotemporal Estuarine Water Quality Parameterization Using Remote Sensing and in-situ Characteristics

    This dissertation develops a new paradigm in water quality monitoring to parameterize spatiotemporal estuarine water quality with sustainable reliability, less cost and less time. A key underpinning of this paradigm is the interrelationship of various water quality parameters with ambient water temperature as a common factor, their time-dependent characteristics, and the spatiotemporal characteristics of remote sensing. The paradigm relies on two core models to provide input data to the water quality parameterization model: transfer function models of the physical system and an analytical temperature time series model. The objective of this dissertation is to provide an alternative tool for monitoring water quality and for decision-making in estuaries over time and space, to identify system components contributing to physical water quality, and to demonstrate the feasibility, reproducibility and applicability of the proposed model. The spatiotemporal estuarine water quality parameterization model monitors chlorophyll concentration using remote sensing, transfer function models of dissolved oxygen (DO) and orthophosphate (PO4), and ambient water temperature in spring and fall in the James River Estuary Mesohaline segment in Virginia. The proposed model is applicable in the temperature range between 6°C and 23°C in spring and between 21°C and 32°C in fall. The optimal operational temperature range of the proposed model is between 19°C and 25°C, based on the relative sensitivity analysis of the DO transfer function model. The proposed models for the two seasons are compared, using various criteria, with models that take different approaches, such as a conventional approach and a previously proposed approach. The results show that the proposed models represent the variability of chlorophyll concentration over time and temperature better than the other approaches. The results also support that the transfer function models can be successfully applied to estimate chlorophyll instead of using monitored water quality data directly. The proposed models have difficulty estimating extremely high chlorophyll concentrations; however, they produce estimates comparable to the observed chlorophyll concentrations below the extreme outliers in each season. The mean chlorophyll concentration produced by the best proposed model is 7.937 μg/L, and the 95% confidence limits of the mean are 7.897 μg/L and 7.977 μg/L, after eliminating the extreme outliers (371 μg/L) in spring. The mean of 7.937 μg/L is compatible with the mean of the observed concentrations below the extreme outliers, 7.572 μg/L. The mean chlorophyll concentration produced by the best proposed model is 5.520 μg/L, and the 95% confidence limits of the mean are 5.502 μg/L and 5.538 μg/L, after eliminating the extreme outliers (22 μg/L) in fall. The mean of 5.520 μg/L is compatible with the mean of the observed concentrations below the extreme outliers, 6.117 μg/L. This dissertation demonstrates the feasibility, reproducibility and applicability of the paradigm of spatiotemporal estuarine water quality parameterization using remote sensing data and field-measured water quality data in estuaries.
The spatiotemporal estuarine water quality parameterization model can enhance existing water quality monitoring and assessment programs in estuaries managed by municipal agencies and local water quality decision makers. It can also be employed as a tool to guide management, since systematically estimating water quality targets is difficult in a complex estuary. Over time, the model provides appropriate, up-to-date guidance. Careful consideration is necessary when applying the transfer function models and the seasonal spatiotemporal estuarine water quality parameterization models directly to different estuaries. Although the models appear feasible with significant potential, direct implementation requires a site-specific quality assurance/quality control (QA/QC) check.
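    As a toy illustration of the analytical temperature time series component mentioned above, a water temperature record can be modelled as an annual harmonic fitted by linear least squares. This is only a generic sketch under assumed inputs; the dissertation's actual temperature model and the DO/PO4 transfer functions are site-specific and not reproduced here.

        # Generic annual-harmonic water temperature model,
        # T(t) = a + b*sin(2*pi*t/365) + c*cos(2*pi*t/365), fitted by least squares.
        import numpy as np

        def fit_annual_temperature(day_of_year, temp_obs):
            """Return coefficients (a, b, c) of the annual harmonic."""
            t = np.asarray(day_of_year, dtype=float)
            X = np.column_stack([np.ones_like(t),
                                 np.sin(2 * np.pi * t / 365.0),
                                 np.cos(2 * np.pi * t / 365.0)])
            coeffs, *_ = np.linalg.lstsq(X, np.asarray(temp_obs, dtype=float), rcond=None)
            return coeffs

        def predict_temperature(day_of_year, coeffs):
            t = np.asarray(day_of_year, dtype=float)
            a, b, c = coeffs
            return a + b * np.sin(2 * np.pi * t / 365.0) + c * np.cos(2 * np.pi * t / 365.0)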