
    Experimental and Data-driven Workflows for Microstructure-based Damage Prediction

    Material fatigue is the most common cause of mechanical failure. The degradation mechanisms that determine the lifetime of components under comparatively pronounced cyclic loads are well understood. Under loads in the macroscopically elastic regime, however, i.e. in (very) high cycle fatigue, the internal structure of a material and the interaction of crystallographic defects determine the lifetime. Under these circumstances, the internal degradation phenomena at the microscopic scale are largely reversible and do not lead to the formation of critical damage that can grow continuously. However, depending on the local microstructural conditions, some grain ensembles in polycrystalline metals are prone to damage initiation, crack formation and crack growth and therefore act as weak points. Components subjected to such loads consequently often exhibit a pronounced scatter in lifetime. Because a comprehensive mechanistic understanding of these degradation processes in different materials is lacking, current modeling efforts typically predict the mean lifetime and its variance only with unsatisfactory accuracy. This, in turn, complicates component design and makes safety factors necessary during the dimensioning process. A remedy is to collect extensive data on influencing factors and their effect on the formation of initial fatigue damage. Data scarcity still hampers data scientists and modeling experts who try to derive microstructural dependencies, train data-driven predictive models or parameterize physical, rule-based models despite small sample sizes and incomplete feature spaces. The fact that only a few critical damage sites occur relative to the entire specimen volume, and that high cycle fatigue exhibits a multitude of different dependencies, imposes several requirements on data acquisition and processing. Most importantly, the measurement techniques must be sensitive enough to capture subtle variations in the specimen state, the entire routine must be efficient, and correlative microscopy must link spatial information from different measurements. The main objective of this work is to establish a workflow that remedies this lack of data so that the future virtual design of components can become more efficient, reliable and sustainable. To this end, this work proposes a combined experimental and data-processing workflow for generating multimodal fatigue damage datasets. The focus lies on the occurrence of localized slip bands, crack initiation and the growth of microstructurally short cracks. The workflow combines fatigue testing of mesoscale specimens to increase the sensitivity of damage detection, complementary characterization, multimodal registration and data fusion of the heterogeneous data, and image-based damage localization and assessment. Mesoscale bending resonance testing makes it possible to reach the high cycle fatigue regime in comparatively short periods of time while improving the resolution of damage evolution. Depending on the complexity of the individual image-processing tasks and on data availability, either rule-based image processing or representation learning is applied in a targeted manner. Semantic segmentation of damage sites, for example, ensures that important fatigue features can be extracted from microscopy images. A high degree of automation is emphasized throughout the workflow, and the generalizability of individual workflow elements was investigated wherever possible. The workflow is applied to a ferritic steel (EN 1.4003). Among other things, the resulting dataset links large, distortion-corrected microstructure data with damage locations and their cyclic evolution. In the course of this work, the dataset is examined with regard to its information content by conducting detailed analytical studies of individual damage formation. In this way, novel quantitative insights into microstructure-induced plastic deformation and crack arrest mechanisms were obtained. Furthermore, grain-wise feature vectors and binary damage categories derived from the dataset are used to train a random forest classifier and to evaluate its predictive performance. The proposed workflow has the potential to lay the foundation for future data mining and data-driven modeling of microstructure-sensitive fatigue. It enables the efficient acquisition of statistically representative datasets with high information content and can be extended to a wide variety of materials.
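    To make the final modeling step concrete, the sketch below shows how such a grain-wise random forest classification could be set up with scikit-learn; the file name, feature columns and split parameters are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch (not the thesis code): train a random forest on grain-wise
# feature vectors with binary damage labels, as described in the abstract.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical dataset: one row per grain, microstructural descriptors plus a
# binary label marking whether damage was observed in that grain.
grains = pd.read_csv("grain_features.csv")  # assumed file name
X = grains[["grain_size", "schmid_factor", "neighbour_misorientation"]]  # assumed features
y = grains["damage_observed"]  # assumed binary label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```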

    Natural ventilation design attributes application effect on indoor natural ventilation performance of a double storey, single unit residential building

    In establishing a good indoor thermal condition, air movement is one of the important parameters to be considered in providing fresh indoor air for occupants. Owing to public awareness of environmental impact, people have become increasingly attentive to passive design as a means of achieving good indoor building ventilation. Through case studies, significant building attributes were found to affect indoor natural ventilation performance. The studies were categorized into vernacular houses, contemporary houses with vernacular elements, and contemporary houses. The indoor air movement in each space of the houses was compared with the outdoor air movement surrounding the houses to indicate the space's indoor natural ventilation performance. The analysis found the wind catcher element to be the most significant attribute contributing to indoor natural ventilation. Wide openings were also found to be significant, especially those with louvers. It is also interesting that the indoor layout design significantly affects the performance. The findings indicate that good indoor natural ventilation is dictated not only by having proper openings at proper locations in a building, but also by how the incoming air movement is managed throughout the interior spaces by a proper layout. Understanding the air pressure distribution between indoor windward and leeward sides is important for directing the airflow to desired spaces and producing an overall good indoor natural ventilation performance.
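    As a rough illustration of the indoor/outdoor comparison described above, the following sketch ranks spaces by the ratio of indoor to outdoor air speed; the space names and velocity values are invented for the example and are not taken from the study.

```python
# Illustrative sketch only: rank spaces by an indoor/outdoor air-speed ratio,
# one simple way to express the ventilation-performance comparison above.
indoor_air_speed = {"living": 0.32, "kitchen": 0.18, "bedroom_1": 0.25}  # m/s, assumed
outdoor_air_speed = 0.80                                                  # m/s, assumed

performance = {
    space: round(v / outdoor_air_speed, 2) for space, v in indoor_air_speed.items()
}
for space, ratio in sorted(performance.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{space}: indoor/outdoor velocity ratio = {ratio}")
```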

    Image restoration in anatomical MRI for the preclinical study of markers of brain aging

    Age-related neurovascular and neurodegenerative diseases are increasing significantly. While such pathological changes show effects on the brain before clinical symptoms appear, a better understanding of the normal aging brain process will help distinguish known pathologies' impact on regional brain structure. Furthermore, knowledge of the patterns of brain shrinkage in normal aging could lead to a better understanding of its causes and perhaps to interventions reducing the loss of brain functions. Therefore, this thesis project aims to detect normal and pathological brain aging biomarkers in a non-human primate model, the marmoset monkey (Callithrix Jacchus), which possesses anatomical characteristics more similar to those of humans than to those of rodents. However, structural changes (e.g., volumes, cortical thickness) that may occur during their adult life may be minimal with respect to the scale of observation. In this context, it is essential to have observation techniques that offer sufficiently high contrast and spatial resolution and allow detailed assessments of the morphometric brain changes associated with aging. However, imaging small brains in a 3T MRI platform dedicated to humans is a challenging task because the spatial resolution and the contrast obtained are insufficient compared to the size of the anatomical structures observed and the scale of the expected changes with age. This thesis aims to develop image restoration methods for preclinical MR images that will improve the robustness of the segmentation algorithms. Improving the resolution of the images at a constant signal-to-noise ratio will limit the effects of partial volume in voxels located at the border between two structures and allow a better segmentation while increasing the results' reproducibility. This computational imaging step is crucial for a reliable longitudinal voxel-based morphometric analysis and for the identification of anatomical markers of brain aging by following the volume changes in gray matter, white matter and cerebrospinal fluid.
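    As a minimal illustration of the kind of preprocessing targeted here, the sketch below upsamples an anatomical volume with plain spline interpolation; it stands in for, and is much simpler than, the restoration methods developed in the thesis, and the file names and upsampling factor are assumptions.

```python
# Baseline sketch, not the thesis method: naive resolution improvement of an
# anatomical MR volume by cubic spline upsampling prior to segmentation.
import numpy as np
import nibabel as nib                 # common neuroimaging I/O library
from scipy.ndimage import zoom

vol = nib.load("marmoset_t2.nii.gz")          # assumed input volume
data = vol.get_fdata()
upsampled = zoom(data, zoom=2.0, order=3)     # cubic spline interpolation, 2x per axis

# Keep voxel sizes consistent in the output header (halved spacing).
new_affine = vol.affine.copy()
new_affine[:3, :3] /= 2.0
nib.save(nib.Nifti1Image(upsampled.astype(np.float32), new_affine),
         "marmoset_t2_2x.nii.gz")
```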

    Hierarchical processing, editing and rendering of acquired geometry

    Digital representations of real-world surfaces can now be obtained automatically using various acquisition devices such as 3D scanners and stereo camera systems. These new fast and accurate data sources increase 3D surface resolution by several orders of magnitude, bringing higher precision to applications which require digital surfaces. All major computer graphics applications can benefit from this automatic modeling process, including: computer-aided design, physical simulation, virtual reality, medical imaging, architecture, archaeological study, special effects, computer animation and video games. Unfortunately, the richness of the geometry produced by these methods comes at the price of a large, possibly gigantic, amount of data, which requires new efficient data structures and algorithms offering scalability for processing such objects. This thesis proposes time- and space-efficient solutions for modeling, editing and rendering such complex surfaces, solving these problems with new algorithms sharing four fundamental elements: a systematic hierarchical approach, a local dimension reduction, a sampling-reconstruction paradigm and a point-based approach. In practice, this manuscript proposes several contributions, including: a new hierarchical space subdivision structure, the Volume-Surface Tree, for geometry processing such as simplification and reconstruction; a streaming system featuring new algorithms for interactive editing of large objects; an appearance-preserving multiresolution structure for efficient rendering of large point-based surfaces; and a generic kernel for real-time geometry synthesis by refinement. These elements form a pipeline able to process acquired geometry, whether represented by point clouds or (possibly non-manifold) meshes. Effective results have been successfully obtained with data coming from the various application domains mentioned above.
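    The sketch below shows a bare-bones hierarchical space subdivision over a point cloud (a plain octree, not the Volume-Surface Tree itself), only to illustrate the kind of structure on which such pipelines rest; all thresholds are arbitrary.

```python
# Much-simplified sketch of hierarchical space subdivision over a point cloud.
# This is a plain octree, not the Volume-Surface Tree proposed in the thesis.
import numpy as np

class OctreeNode:
    def __init__(self, points, center, half_size, depth=0, max_points=64, max_depth=8):
        self.center, self.half_size, self.children = center, half_size, []
        if len(points) <= max_points or depth >= max_depth:
            self.points = points                      # leaf stores its samples
            return
        self.points = None
        for octant in range(8):                       # split into 8 child cells
            offset = np.array([(octant >> a) & 1 for a in range(3)]) - 0.5
            child_center = center + offset * half_size
            # Points on a cell boundary may be duplicated across children in
            # this simplified version; a real implementation would break ties.
            mask = np.all(np.abs(points - child_center) <= half_size / 2, axis=1)
            if mask.any():
                self.children.append(
                    OctreeNode(points[mask], child_center, half_size / 2,
                               depth + 1, max_points, max_depth)
                )

pts = np.random.rand(10_000, 3)                       # stand-in for scanned samples
root = OctreeNode(pts, center=np.array([0.5, 0.5, 0.5]), half_size=0.5)
```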

    Interactive freeform editing techniques for large-scale, multiresolution level set models

    Level set methods provide a volumetric implicit surface representation with automatic smooth blending properties and no self-intersections. They can handle arbitrary topology changes easily, and the volumetric implicit representation does not require the surface to be re-adjusted after extreme deformations. Even though they have found some use in movie productions and some medical applications, level set models are not highly utilized in either the special effects industry or medical science. A lack of interactive modeling tools makes working with level set models difficult for people in these application areas. This dissertation describes techniques and algorithms for interactive freeform editing of large-scale, multiresolution level set models. Algorithms are developed to map intuitive user interactions into level set speed functions producing specific, desired surface movements. Data structures for efficient representation of very high resolution volume datasets and associated algorithms for rapid access and processing of the information within the data structures are explained. A hierarchical, multiresolution representation of level set models that allows for rapid decomposition and reconstruction of the complete full-resolution model is created for an editing framework that allows level-of-detail editing. We have developed a framework that identifies surface details prior to editing and introduces them back afterwards. Combining these two features provides a detail-preserving level set editing capability that may be used for multiresolution modeling and texture transfer. Given the complex data structures that are required to represent large-scale, multiresolution level set models and the compute-intensive numerical methods to evaluate them, optimization techniques and algorithms have been developed to evaluate and display the dynamic isosurface embedded in the volumetric data.
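    A minimal sketch of the underlying mechanism, assuming a dense grid and an explicit scheme (not the dissertation's multiresolution, detail-preserving machinery): user edits are mapped to a speed function F that evolves the level set via phi_t + F |grad phi| = 0.

```python
# Naive explicit level-set update on a dense grid: a speed function F moves the
# zero isosurface according to phi_t + F * |grad(phi)| = 0.
import numpy as np

def evolve(phi, speed, dt=0.1, steps=10):
    for _ in range(steps):
        gx, gy, gz = np.gradient(phi)                # central-difference gradient
        grad_mag = np.sqrt(gx**2 + gy**2 + gz**2) + 1e-12
        phi = phi - dt * speed * grad_mag            # advance the implicit surface
    return phi

# Signed distance to a sphere of radius 20 inside a 64^3 grid (stand-in model).
x, y, z = np.mgrid[0:64, 0:64, 0:64]
phi0 = np.sqrt((x - 32.0)**2 + (y - 32.0)**2 + (z - 32.0)**2) - 20.0
inflate = np.ones_like(phi0)    # positive speed moves the surface outward here
phi1 = evolve(phi0, inflate)
```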

    Real-time Visual Flow Algorithms for Robotic Applications

    Vision offers important sensor cues to modern robotic platforms. Applications such as control of aerial vehicles, visual servoing, simultaneous localization and mapping, navigation and, more recently, learning are examples where visual information is fundamental to accomplishing tasks. However, the use of computer vision algorithms carries the computational cost of extracting useful information from the stream of raw pixel data. The most sophisticated algorithms use complex mathematical formulations, leading typically to computationally expensive and, consequently, slow implementations. Even with modern computing resources, high-speed and high-resolution video feeds can only be used for basic image processing operations. For a vision algorithm to be integrated on a robotic system, the output of the algorithm should be provided in real time, that is, at least at the same frequency as the control logic of the robot. With robotic vehicles becoming more dynamic and ubiquitous, this places higher requirements on the vision processing pipeline. This thesis addresses the problem of estimating dense visual flow information in real time. The contributions of this work are threefold. First, it introduces a new filtering algorithm for the estimation of dense optical flow at frame rates as fast as 800 Hz for 640x480 image resolution. The algorithm follows an update-prediction architecture to estimate dense optical flow fields incrementally over time. A fundamental component of the algorithm is the modeling of the spatio-temporal evolution of the optical flow field by means of partial differential equations. Numerical predictors can implement such PDEs to propagate the current estimate of the flow forward in time. Experimental validation of the algorithm is provided using a high-speed ground-truth image dataset as well as real-life video data at 300 Hz. The second contribution is a new type of visual flow named structure flow. Mathematically, structure flow is the three-dimensional scene flow scaled by the inverse depth at each pixel in the image. Intuitively, it is the complete velocity field associated with image motion, including both optical flow and scale change, or apparent divergence, of the image. Analogously to optical flow, structure flow provides a robotic vehicle with perception of the motion of the environment as seen by the camera. However, structure flow encodes the full 3D image motion of the scene, whereas optical flow only encodes the component on the image plane. An algorithm to estimate structure flow from image and depth measurements is proposed based on the same filtering idea used to estimate optical flow. The final contribution is the spherepix data structure for processing spherical images. This data structure is the numerical back-end used for the real-time implementation of the structure flow filter. It consists of a set of overlapping patches covering the surface of the sphere. Each individual patch approximately preserves properties such as orthogonality and equidistance of points, thus allowing efficient implementations of low-level classical 2D convolution-based image processing routines such as Gaussian filters and numerical derivatives. These algorithms are implemented on GPU hardware and can be integrated into future robotic embedded vision systems to provide fast visual information to robotic vehicles.
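    The following sketch only illustrates the structure-flow definition quoted above (3D scene flow scaled by inverse depth); the array names are assumptions, and this is not the thesis' filtering algorithm.

```python
# Illustrative sketch of the structure-flow quantity: per-pixel 3D scene flow
# divided by the per-pixel depth.
import numpy as np

def structure_flow(scene_flow, depth, eps=1e-6):
    """scene_flow: (H, W, 3) metric velocities [m/s]; depth: (H, W) metric depth [m]."""
    return scene_flow / (depth[..., None] + eps)     # scaled velocity field [1/s]

# Toy example: a camera moving forward sees uniform scene flow of (0, 0, -1) m/s;
# nearer pixels get a larger structure-flow magnitude than distant ones.
H, W = 4, 4
scene = np.tile(np.array([0.0, 0.0, -1.0]), (H, W, 1))
depth = np.linspace(1.0, 4.0, H * W).reshape(H, W)
print(structure_flow(scene, depth)[..., 2])
```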

    Generative Model based Training of Deep Neural Networks for Event Detection in Microscopy Data

    Several imaging techniques employed in the life sciences heavily rely on machine learning methods to make sense of the data that they produce. These include calcium imaging and multi-electrode recordings of neural activity, single molecule localization microscopy, spatially-resolved transcriptomics and particle tracking, among others. All of them only produce indirect readouts of the spatiotemporal events they aim to record. The objective when analysing data from these methods is the identification of patterns that indicate the location of the sought-after events, e.g. spikes in neural recordings or fluorescent particles in microscopy data. Existing approaches for this task invert a forward model, i.e. a mathematical description of the process that generates the observed patterns for a given set of underlying events, using established methods like MCMC or variational inference. Perhaps surprisingly, for a long time deep learning saw little use in this domain, even though it became the dominant approach in the field of pattern recognition over the previous decade. The principal reason is that, in the absence of the labeled data needed for supervised optimization, it remains unclear how neural networks can be trained to solve these tasks. To unlock the potential of deep learning, this thesis proposes different methods for training neural networks using forward models and without relying on labeled data. The thesis rests on two publications: In the first publication we introduce an algorithm for spike extraction from calcium imaging time traces. Building on the variational autoencoder framework, we simultaneously train a neural network that performs spike inference and optimize the parameters of the forward model. This approach combines several advantages that were previously incongruous: it is fast at test time, can be applied to different non-linear forward models and produces samples from the posterior distribution over spike trains. The second publication deals with the localization of fluorescent particles in single molecule localization microscopy. We show that an accurate forward model can be used to generate simulations that act as a surrogate for labeled training data. Careful design of the output representation and loss function results in a method with outstanding precision across experimental designs and imaging conditions. Overall, this thesis highlights how neural networks can be applied for precise, fast and flexible model inversion on this class of problems and how this opens up new avenues to achieve performance beyond what was previously possible.
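    As a conceptual sketch of training on simulations generated by a forward model, the code below builds toy image/label pairs from a Gaussian-PSF emitter model and fits a small CNN; the architecture, noise model and loss are assumptions, and this is not the published method.

```python
# Conceptual sketch of "simulation as surrogate for labels": a hand-written
# forward model (Gaussian PSF + additive noise) generates images from random
# emitter positions; a small CNN learns to predict a per-pixel emitter map.
import torch
import torch.nn as nn

def simulate(batch=16, size=32, sigma=1.5, n_emitters=3):
    xs = torch.rand(batch, n_emitters) * size
    ys = torch.rand(batch, n_emitters) * size
    yy, xx = torch.meshgrid(torch.arange(size, dtype=torch.float32),
                            torch.arange(size, dtype=torch.float32), indexing="ij")
    img = torch.zeros(batch, 1, size, size)
    target = torch.zeros(batch, 1, size, size)
    for b in range(batch):
        for x, y in zip(xs[b], ys[b]):
            img[b, 0] += torch.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma ** 2))
            target[b, 0, int(y), int(x)] = 1.0          # pixelated ground-truth map
    img = img + 0.05 * torch.randn_like(img)            # simple noise model
    return img, target

net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):                                  # tiny training loop
    img, target = simulate()
    opt.zero_grad()
    loss = loss_fn(net(img), target)
    loss.backward()
    opt.step()
```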

    X-ray Bragg Projection Ptychography for nano-materials

    Progress in nanotechnology critically relies on high-resolution probing tools, and X-ray coherent diffraction imaging (CDI) is certainly an attractive method for exploring science at such small scales. Thus, the aim of this PhD is to study structural properties of nano-materials using X-ray CDI, with a special motivation to combine Bragg CDI with ptychography. The former has the ability to retrieve the complex density and strain maps of nano-meso crystalline objects, while the latter uses translational diversity to produce quantitative maps of the complex transmission function of non-crystalline objects. As both techniques offer highly sensitive phase contrast, the thesis exploits their combination to reveal the morphology of domain structures in metallic thin films. Additionally, it is demonstrated that Bragg ptychography is an evolutionary improvement for probing the structure of ’highly’ strained crystals with respect to its Bragg CDI counterpart. However, the adaptation of ptychography to the Bragg geometry is not without difficulties and comes at a higher experimental cost. Therefore, the effects of experimental uncertainties, e.g., uncertainty in scan positions, partial coherence, and time-varying probes, are assessed throughout the thesis and corrected for by implementing suitable refinement methods. Furthermore, it is shown how the set-up at beamline 34-ID-C of the Advanced Photon Source, used for the experimental measurements, can be optimized for better ptychographic reconstructions.
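    For orientation, the sketch below performs a single ePIE-style object update for conventional transmission ptychography; it is a textbook-style illustration of translational-diversity phase retrieval, not the Bragg-geometry algorithms or refinement methods developed in the thesis, and all quantities are simulated.

```python
# Didactic ePIE-style object update: at each scan position the measured far-field
# modulus replaces the modelled one, and the object is updated from the corrected
# exit wave.
import numpy as np

def epie_object_update(obj, probe, pos, measured_modulus, alpha=1.0):
    y, x = pos
    h, w = probe.shape
    patch = obj[y:y + h, x:x + w]
    exit_wave = probe * patch
    far = np.fft.fft2(exit_wave)
    far = measured_modulus * np.exp(1j * np.angle(far))   # enforce measured amplitudes
    corrected = np.fft.ifft2(far)
    # Standard ePIE object step: scaled conjugate-probe projection of the difference.
    patch += alpha * np.conj(probe) / (np.abs(probe).max() ** 2) * (corrected - exit_wave)
    obj[y:y + h, x:x + w] = patch
    return obj

rng = np.random.default_rng(0)
obj = np.ones((64, 64), dtype=complex)                      # initial object guess
probe = np.exp(-((np.indices((16, 16)) - 8) ** 2).sum(0) / 20.0).astype(complex)
true_patch = np.exp(1j * 0.3 * rng.random((16, 16)))        # toy phase object
data = np.abs(np.fft.fft2(probe * true_patch))              # "measured" modulus
obj = epie_object_update(obj, probe, (10, 20), data)
```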

    Flexible Automation and Intelligent Manufacturing: The Human-Data-Technology Nexus

    This is an open access book. It gathers the first volume of the proceedings of the 31st edition of the International Conference on Flexible Automation and Intelligent Manufacturing, FAIM 2022, held on June 19 – 23, 2022, in Detroit, Michigan, USA. Covering four thematic areas including Manufacturing Processes, Machine Tools, Manufacturing Systems, and Enabling Technologies, it reports on advanced manufacturing processes and innovative materials for 3D printing, applications of machine learning, artificial intelligence and mixed reality in various production sectors, as well as important issues in human-robot collaboration, including methods for improving safety. Contributions also cover strategies to improve quality control, supply chain management and training in the manufacturing industry, and methods supporting circular supply chains and sustainable manufacturing. All in all, this book provides academicians, engineers and professionals with extensive information on both scientific and industrial advances in the converging fields of manufacturing, production, and automation.