11 research outputs found

    Cine cardiac MRI reconstruction using a convolutional recurrent network with refinement

    Cine Magnetic Resonance Imaging (MRI) allows for understanding of the heart's function and condition in a non-invasive manner. Undersampling of the k-space is employed to reduce the scan duration, thus increasing patient comfort and reducing the risk of motion artefacts, at the cost of reduced image quality. In this challenge paper, we investigate the use of a convolutional recurrent neural network (CRNN) architecture to exploit temporal correlations in supervised cine cardiac MRI reconstruction. This is combined with a single-image super-resolution refinement module to improve single-coil reconstruction by 4.4% in structural similarity and 3.9% in normalised mean square error compared to a plain CRNN implementation. We apply a high-pass filter to our ℓ1 loss to allow greater emphasis on high-frequency details which are missing in the original data. The proposed model demonstrates considerable enhancements compared to the baseline case and holds promising potential for further improving cardiac MRI reconstruction. Comment: MICCAI STACOM workshop 202
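
    As a rough illustration of the loss described above, the following is a minimal sketch of an ℓ1 loss with an added high-pass-filtered term, assuming PyTorch, a Laplacian kernel as the high-pass filter, and a hypothetical weighting factor alpha; the paper's exact filter and weighting are not reproduced here.

```python
import torch
import torch.nn.functional as F

def highpass_l1_loss(pred, target, alpha=0.5):
    """L1 loss plus an extra L1 term on high-pass-filtered images.

    pred, target: (B, 1, H, W) reconstructed and reference magnitude images.
    alpha: weight of the high-frequency term (hypothetical value).
    """
    # Laplacian kernel used here as a generic high-pass filter (assumption).
    laplacian = torch.tensor([[0., 1., 0.],
                              [1., -4., 1.],
                              [0., 1., 0.]], device=pred.device).view(1, 1, 3, 3)
    hp_pred = F.conv2d(pred, laplacian, padding=1)
    hp_target = F.conv2d(target, laplacian, padding=1)
    return F.l1_loss(pred, target) + alpha * F.l1_loss(hp_pred, hp_target)

# Example call on random tensors standing in for a reconstruction and its reference.
loss = highpass_l1_loss(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))
```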

    3D CNN methods in biomedical image segmentation

    A definite trend in Biomedical Imaging is the integration of increasingly complex interpretative layers on top of the pure data acquisition process. One of the most interesting and eagerly awaited goals in the field is the automatic segmentation of objects of interest in extensive acquisition data, a target that would allow Biomedical Imaging to move beyond its use as a purely assistive tool and become a cornerstone of ambitious large-scale challenges such as the extensive quantitative study of the Human Brain. In 2019, Convolutional Neural Networks represent the state of the art in Biomedical Image segmentation, and scientific interest from a variety of fields, spanning from automotive to natural resource exploration, converges on their development. While most applications of CNNs focus on single-image segmentation, biomedical image data, be it MRI, CT scans, microscopy, etc., often benefits from a three-dimensional volumetric representation. This work explores a reformulation of the CNN segmentation problem that is native to the 3D nature of the data, with particular interest in applications to Fluorescence Microscopy volumetric data produced at the European Laboratories for Nonlinear Spectroscopy in the context of two large international human brain study projects: the Human Brain Project and the White House BRAIN Initiative
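
    As a sketch of what a volumetric (3D-native) reformulation looks like in practice, the toy network below swaps 2D for 3D convolutions, assuming PyTorch; the layer sizes and the class name Tiny3DSegNet are illustrative and not the architecture used in the thesis.

```python
import torch
import torch.nn as nn

class Tiny3DSegNet(nn.Module):
    """Toy encoder-decoder operating on (B, 1, D, H, W) volumes."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                       # halve depth, height, width
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False),
            nn.Conv3d(16, n_classes, 1),           # per-voxel class scores
        )

    def forward(self, x):
        return self.dec(self.enc(x))

# Example: per-voxel logits for a 32^3 fluorescence volume patch.
logits = Tiny3DSegNet()(torch.randn(1, 1, 32, 32, 32))   # -> (1, 2, 32, 32, 32)
```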

    A deep learning approach for complex microstructure inference

    Automated, reliable, and objective microstructure inference from micrographs is essential for a comprehensive understanding of process-microstructure-property relations and tailored materials development. However, with the increasing complexity of microstructures, such inference requires advanced segmentation methodologies. While deep learning offers new opportunities, an intuition about the required data quality/quantity and a methodological guideline for microstructure quantification are still missing. This, along with deep learning's seemingly opaque decision-making process, hampers its breakthrough in this field. We apply a multidisciplinary deep learning approach, devoting equal attention to specimen preparation and imaging, and train distinct U-Net architectures with 30–50 micrographs of different imaging modalities and electron backscatter diffraction-informed annotations. On the challenging task of lath-bainite segmentation in complex-phase steel, we achieve accuracies of 90%, rivaling expert segmentations. Further, we discuss the impact of image context, pre-training with domain-extrinsic data, and data augmentation. Network visualization techniques demonstrate plausible model decisions based on grain boundary morphology
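
    A minimal sketch of the kind of joint image/mask augmentation that helps when training on only 30–50 micrographs, assuming NumPy arrays; the flips and rotations below are generic stand-ins, not the paper's actual augmentation pipeline.

```python
import numpy as np

def augment(image, mask, rng=np.random.default_rng()):
    """Apply the same random flip/rotation to a micrograph and its annotation mask."""
    k = int(rng.integers(0, 4))                    # number of 90-degree rotations
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:                         # random horizontal flip
        image, mask = np.fliplr(image), np.fliplr(mask)
    if rng.random() < 0.5:                         # random vertical flip
        image, mask = np.flipud(image), np.flipud(mask)
    return image.copy(), mask.copy()               # copies give contiguous arrays

# Example: augment one synthetic micrograph/annotation pair.
img, msk = augment(np.random.rand(256, 256), np.zeros((256, 256), dtype=np.uint8))
```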

    Machine Learning/Deep Learning in Medical Image Processing

    Many recent studies on medical image processing have involved the use of machine learning (ML) and deep learning (DL). This special issue, “Machine Learning/Deep Learning in Medical Image Processing”, has been launched to provide an opportunity for researchers in the area of medical image processing to highlight recent developments made in their fields with ML/DL. Seven excellent papers that cover a wide variety of medical/clinical aspects have been selected for this special issue

    CAD-Based Porous Scaffold Design of Intervertebral Discs in Tissue Engineering

    With the development and maturity of three-dimensional (3D) printing technology over the past decade, 3D printing has been widely investigated and applied in the field of tissue engineering to repair damaged tissues or organs, such as muscles, skin, and bones. Although a number of automated fabrication methods have been developed to create superior bio-scaffolds with specific surface properties and porosity, the major challenges still focus on how to fabricate 3D natural biodegradable scaffolds with tailored properties such as intricate architecture, porosity, and interconnectivity in order to provide the needed structural integrity, strength, transport, and ideal microenvironment for cell and tissue growth. In this dissertation, a robust pipeline for fabricating bio-functional porous scaffolds of intervertebral discs based on different innovative porous design methodologies is illustrated. Firstly, a triply periodic minimal surface (TPMS) based parameterization method, which overcomes the integrity problem of the traditional TPMS method, is presented in Chapter 3. Then, an implicit surface modeling (ISM) approach using tetrahedral implicit surfaces (TIS) is demonstrated and compared with the TPMS method in Chapter 4. In Chapter 5, we present an advanced porous design method with higher flexibility using anisotropic radial basis functions (ARBF) and volumetric meshes. Based on these advanced porous design methods, the 3D model of a bio-functional porous intervertebral disc scaffold can be easily designed, and its physical model can also be manufactured through 3D printing. However, due to the unique shape of each intervertebral disc and the intricate topological relationship between the intervertebral discs and the spine, the accurate localization and segmentation of dysfunctional discs are regarded as another obstacle to fabricating porous 3D disc models. To that end, we discuss in Chapter 6 a technique for segmenting intervertebral discs from CT-scanned medical images using deep convolutional neural networks. Additionally, some examples of applying the different porous designs to the segmented intervertebral disc models are demonstrated in Chapter 6
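
    For a flavor of the TPMS-based design mentioned above, here is a minimal sketch that samples the well-known gyroid implicit function on a periodic grid, assuming NumPy; the level-set offset t and the grid size are illustrative, and the dissertation's actual parameterization is considerably more involved.

```python
import numpy as np

def gyroid(x, y, z, t=0.0):
    """Gyroid TPMS implicit function; points with value < 0 lie in the solid phase."""
    return (np.sin(x) * np.cos(y)
            + np.sin(y) * np.cos(z)
            + np.sin(z) * np.cos(x)) - t

# Sample one periodic unit cell; the zero level set is the scaffold surface.
grid = np.linspace(0.0, 2.0 * np.pi, 64)
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
solid = gyroid(X, Y, Z, t=0.2) < 0
print(f"solid volume fraction ~ {solid.mean():.2f}")   # offset t loosely controls porosity
```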

    Experimental and Data-driven Workflows for Microstructure-based Damage Prediction

    Material fatigue is the most common cause of mechanical failure. The degradation mechanisms that govern component lifetime under comparatively pronounced cyclic loads are well understood. Under loads in the macroscopically elastic regime, however, i.e. (very) high cycle fatigue, the internal structure of a material and the interaction of crystallographic defects determine the lifetime. Under these circumstances, the internal degradation phenomena at the microscopic scale are largely reversible and do not lead to the formation of critical damage that can grow continuously. However, depending on the local microstructural conditions, some grain ensembles in polycrystalline metals are prone to damage initiation, crack formation, and crack growth, and therefore act as weak spots. As a consequence, components subjected to such loads often exhibit pronounced scatter in lifetime. Since a comprehensive mechanistic understanding of these degradation processes in different materials is not available, current modelling efforts typically predict the mean lifetime and its variance only with unsatisfactory accuracy. This, in turn, complicates component design and necessitates the use of safety factors during dimensioning. A remedy is to collect extensive data on influencing factors and their effect on the formation of initial fatigue damage. Data scarcity continues to hamper data scientists and modelling experts who attempt to derive microstructural dependencies, train data-driven predictive models, or parameterize physical, rule-based models despite small sample sizes and incomplete feature spaces. The fact that only few critical damage sites occur relative to the total specimen volume, and that high cycle fatigue exhibits a multitude of different dependencies, imposes several requirements on data acquisition and processing. Most importantly, the measurement techniques must be sensitive enough to capture nuanced variations in the specimen state, the entire routine must be efficient, and correlative microscopy must link spatial information from the different measurements. The main objective of this work is to establish a workflow that remedies the lack of data, so that future virtual component design can be made more efficient, reliable, and sustainable. To this end, this work proposes a combined experimental and data-processing workflow for generating multimodal fatigue damage datasets. The focus lies on the occurrence of local slip bands, crack initiation, and the growth of microstructurally short cracks. The workflow unites fatigue testing of mesoscale specimens to increase the sensitivity of damage detection, complementary characterization, multimodal registration and data fusion of the heterogeneous data, and image-processing-based damage localization and assessment. Mesoscale bending resonance testing enables reaching the high cycle fatigue state within comparatively short time spans while improving the resolution of damage evolution. Depending on the complexity of the individual image processing tasks and on data availability, either rule-based image processing methods or representation learning is applied in a targeted manner. For example, semantic segmentation of damage sites ensures that important fatigue features can be extracted from microscopic images. A high degree of automation is emphasized throughout the workflow. Wherever possible, the generalizability of individual workflow elements was examined. The workflow is applied to a ferritic steel (EN 1.4003). The resulting dataset links, among other things, large distortion-corrected microstructure data with damage localization and its cyclic evolution. In the course of the work, the dataset is examined with regard to its information content by conducting detailed, analytical studies of individual damage formation. In this way, novel, quantitative insights into microstructure-induced plastic deformation and crack arrest mechanisms were obtained, among other findings. Furthermore, grain-wise feature vectors and binary damage categories derived from the dataset are used to train a random forest classifier and to evaluate its predictive performance. The proposed workflow has the potential to lay the foundation for future data mining and data-driven modelling of microstructure-sensitive fatigue. It enables the efficient collection of statistically representative datasets with a high information content and can be extended to a wide variety of materials
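
    A minimal sketch of the grain-wise classification step described above, assuming scikit-learn and a synthetic feature matrix; the feature values, labels, and hyperparameters are placeholders, not the dataset or settings used in the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                 # grain-wise feature vectors (placeholder)
y = rng.integers(0, 2, size=500)              # binary damage label per grain (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)
print("balanced accuracy:", balanced_accuracy_score(y_te, clf.predict(X_te)))
```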

    Developing an Efficient Real-Time Terrestrial Infrastructure Inspection System Using Autonomous Drones and Deep Learning

    Unmanned aerial vehicles (UAV), commonly referred to as drones (Dynamic Remotely Operated Navigation Equipment), show promise for deploying regular, automated structural inspections remotely. Deep learning has shown great potential for robustly detecting structural faults from collected images through convolutional neural networks (CNN). However, running computationally demanding tasks (such as deep learning algorithms) on board drones is difficult due to on-board memory and processing constraints. Moreover, the potential for fully automating drone navigation for structural data collection, while optimizing deep learning models deployed to computationally constrained on-board processing units, has yet to be realized for infrastructure inspection. Thus, an efficient, fully autonomous drone infrastructure inspection system is introduced. Using inertial sensors together with mounted time-of-flight (ToF) and optical sensors to calculate distance readings for obstacle avoidance, a drone can autonomously track around structures. The drone can localize and extract faults in real time on low-power processing units through pixel-wise segmentation of faults from structural images collected by an on-board digital camera. Furthermore, proposed modifications to a CNN-based U-Net architecture show notable improvements over the baseline U-Net in terms of pixel-wise segmentation accuracy and efficiency on computationally constrained on-board devices. After fault segmentation, the fault points corresponding to the predicted fault pixels are passed into a custom fault tracking algorithm based on a robust line estimation technique, with proposed modifications using a quadtree data structure and a smart sampling approach. Using this approach, the drone is capable of following along faults robustly and efficiently during inspection to better gauge the extent of the spread of the faults
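
    As a generic stand-in for the custom fault tracking algorithm, the sketch below fits a robust line through predicted fault pixels with scikit-learn's RANSACRegressor, assuming a binary fault mask; the quadtree and smart sampling modifications proposed in the thesis are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import RANSACRegressor

def fit_fault_line(fault_mask):
    """Fit y = a*x + b through the pixels flagged as fault in a binary mask."""
    ys, xs = np.nonzero(fault_mask)
    ransac = RANSACRegressor(residual_threshold=2.0)   # default linear base estimator
    ransac.fit(xs.reshape(-1, 1), ys)
    a = float(ransac.estimator_.coef_[0])
    b = float(ransac.estimator_.intercept_)
    return a, b                                        # slope/intercept to steer along the fault

# Toy example: a diagonal crack plus a couple of spurious detections.
mask = np.zeros((64, 64), dtype=bool)
mask[np.arange(64), np.arange(64)] = True
mask[5, 50] = mask[50, 5] = True
print(fit_fault_line(mask))                            # roughly (1.0, 0.0)
```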

    Multiple Tools for Automated Nanofiber Characterization by Image Processing

    Nanofibers have been widely used in many engineering applications, including air filtration, energy storage, and biomedical engineering. Their performance largely depends on the morphology of the nanofibers. The key morphological parameters include fiber diameter, pore size, porosity, and thickness homogeneity, which are currently often determined manually. There is a need for automated tools for the fast determination of nanofiber diameters, pore size, porosity, and surface/thickness homogeneity. Researchers have developed automated tools to determine nanofiber diameters, primarily using MATLAB. However, none of the tools reported earlier can automatically process multiple images, which is essential to the accuracy of results. Regarding porosity, the most accurate approach to pore size determination is Brunauer-Emmett-Teller (BET) surface area analysis. This experimental approach is precise but time-consuming, costly, and destructive. Alternatively, an image processing method may offer a quick estimation of the porosity of the nanofiber mat. In addition, many researchers regard a surface with even fiber diameters as having good homogeneity. However, the diameters shown in an SEM image only indicate the local homogeneity within the small region bounded by the SEM image. Alternatively, the thickness of the entire nanofiber sample is a more reliable criterion. However, experimental determination of the thickness throughout the entire nanofiber mat is challenging because of its fragility and thinness. If the thickness is measured at only a few places, local unevenness may be overlooked during sampling. The main objective of this research is therefore to develop a set of automated tools for the characterization of diameter, inter-fiber and intra-fiber pores, porosity, and thickness homogeneity of nanofiber mats. Among them, three different approaches are used to determine the nanofiber diameter. Specifically, the following tools are developed to achieve the preceding goals. First, multi-image processing tools are developed to determine the fiber diameters of nanofiber mats using MATLAB and machine learning based on U-Net and residual neural networks. Second, several image processing tools using different image segmentation methods are proposed to determine the area of inter-fiber pores, intra-fiber pores, and the porosity; the most accurate one is identified by comparing their performance with experimental data. Finally, a characterization tool is proposed to quantitatively compare the nanofiber homogeneity by analyzing the light transmittance
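
    A minimal sketch of estimating mat porosity from a single segmented SEM image, assuming scikit-image and that fibers appear brighter than pores; Otsu thresholding here is just one of the segmentation methods the thesis compares, and the synthetic image is purely illustrative.

```python
import numpy as np
from skimage.filters import threshold_otsu

def porosity_from_sem(gray_image):
    """Return the pore-area fraction of a grayscale SEM image (fibers assumed bright)."""
    t = threshold_otsu(gray_image)
    fiber_mask = gray_image > t            # bright pixels -> fibers
    return 1.0 - fiber_mask.mean()         # pore fraction = 1 - fiber area fraction

# Example with a synthetic image: dark background with a few bright "fibers".
img = np.zeros((128, 128))
img[::8, :] = 1.0                          # fake horizontal fibers
print(f"estimated porosity: {porosity_from_sem(img):.2f}")
```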