
    Nondestructive Testing Methods and New Applications

    Nondestructive testing enables scientists and engineers to evaluate the integrity of structures and the properties of materials or components non-intrusively, and in some instances in real time. Applying nondestructive techniques and modalities offers valuable savings and guarantees the quality of engineered systems and products. This technology can be employed through different modalities, including contact methods such as ultrasonic, eddy-current, magnetic-particle, and liquid-penetrant testing, as well as contact-less methods such as thermography, radiography, and shearography. This book introduces some of the nondestructive testing methods, from their theoretical fundamentals to their specific applications. Additionally, the text contains several novel implementations of such techniques in different fields, ranging from the assessment of civil structures (concrete) to applications in medicine.

    MFA-DVR: Direct Volume Rendering of MFA Models

    3D volume rendering is widely used to reveal insightful intrinsic patterns of volumetric datasets across many domains. However, the complex structures and varying scales of volumetric data can make efficiently generating high-quality volume renderings a challenging task. Multivariate functional approximation (MFA) is a new data model that addresses some of the critical challenges: high-order evaluation of both value and derivative anywhere in the spatial domain, compact representation of large-scale volumetric data, and uniform representation of both structured and unstructured data. In this paper, we present MFA-DVR, the first direct volume rendering pipeline utilizing the MFA model, for both structured and unstructured volumetric datasets. We demonstrate improved rendering quality using MFA-DVR on both synthetic and real datasets through a comparative study. We show that MFA-DVR not only generates more faithful volume renderings than local filters but is also faster for high-order interpolation on structured and unstructured datasets. MFA-DVR is implemented in the existing volume rendering pipeline of the Visualization Toolkit (VTK), making it accessible to the scientific visualization community.
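    The pipeline's core idea (evaluating a functional model at arbitrary sample positions along each viewing ray) can be illustrated with a minimal front-to-back ray-marching sketch. The analytic `field` below is a stand-in for an MFA evaluator, and the transfer function and step sizes are illustrative assumptions, not details from the paper.

```python
import math

# Stand-in for an MFA model: any functional model that can be evaluated
# at arbitrary points in the spatial domain. Here: a smooth radial field.
def field(x, y, z):
    return math.exp(-(x * x + y * y + z * z))

def ray_march(origin, direction, n_steps=64, dt=0.05,
              transfer=lambda v: (v, v)):
    """Front-to-back emission-absorption compositing along one ray.

    transfer maps a sampled value to (color, extinction); a real pipeline
    would use a user-defined transfer function and the MFA evaluator."""
    color, transparency = 0.0, 1.0
    px, py, pz = origin
    dx, dy, dz = direction
    for i in range(n_steps):
        v = field(px + i * dt * dx, py + i * dt * dy, pz + i * dt * dz)
        c, a = transfer(v)
        alpha = 1.0 - math.exp(-a * dt)   # opacity of this ray segment
        color += transparency * c * alpha
        transparency *= (1.0 - alpha)
        if transparency < 1e-3:           # early ray termination
            break
    return color

pixel = ray_march(origin=(-2.0, 0.0, 0.0), direction=(1.0, 0.0, 0.0))
```

    Sampling the model directly at each ray position, instead of filtering stored voxels, is what lets such a pipeline interpolate at arbitrary order anywhere in the domain.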

    Three-dimensional subsurface defect shape reconstruction and visualisation by pulsed thermography

    Defects detected by most thermographic inspections are represented in the form of 2D images, which might limit the understanding of where the defects initiate and how they grow over time. This paper introduces a novel technique to rapidly estimate defect depth and thickness simultaneously from a single-side inspection. For the first time, defects are reconstructed and visualised in the form of a 3D image using cost-effective and rapid pulsed thermography. The feasibility and effectiveness of the proposed solution are demonstrated by inspecting a composite specimen and a steel specimen with semi-closed air gaps. For the composite specimen, the technique delivers a comparatively low average percentage error of less than 10% in the estimated total 3D defect volume.
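    As background on how depth can fall out of a one-sided transient measurement: in pulsed thermography, depth estimates commonly scale with the square root of the product of thermal diffusivity and a characteristic contrast time. The sketch below uses a generic d = C·sqrt(α·t) relation; the calibration constant and the material value are illustrative assumptions, and the paper's actual depth/thickness estimator is not reproduced here.

```python
import math

def defect_depth(alpha, t_char, C=1.0):
    """Depth from a characteristic contrast time via d = C * sqrt(alpha * t).

    alpha  : through-thickness thermal diffusivity [m^2/s]
    t_char : characteristic time of the surface-temperature contrast [s]
    C      : method-dependent calibration constant (illustrative here)."""
    return C * math.sqrt(alpha * t_char)

# Diffusivity of order 4e-7 m^2/s is a typical magnitude for a polymer
# composite (illustrative value, not taken from the paper).
d = defect_depth(alpha=4e-7, t_char=2.0)   # depth in metres
```

    The same timing information, read per pixel, is what lets a single-side inspection be lifted into a 3D defect map.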

    Understanding and improving ultrasonic inspection of jet-engine titanium alloy

    Commercial titanium alloys are widely used in the rotating components of aircraft engines. To ensure the safety and longer lifetime of these critical parts, the demand to detect ever smaller defects becomes more and more important. However, the ultrasonic detection of small defects in such materials is made difficult by complicated ultrasound-microstructure interactions, such as high backscattered grain noise levels and serious signal fluctuations. The objective of this research is to develop a more complete understanding of these phenomena to guide solutions to those problems.

    In Chapter 1, the relationships between ultrasonic properties and microstructure are investigated for a series of Ti-6Al-4V forging specimens. Close correlation between the ultrasonic properties and the forging deformation parameters is observed. A model was developed to correlate backscattered grain noise levels with microstructural variations (grain orientation, elongation, and texture) due to the inhomogeneous plastic deformation during forging. The model predictions and experiments agree reasonably well.

    In Chapter 2, an existing backscattered grain noise theory is extended, leading to a formal theory predicting the spatial correlation of the backscattered grain noise. A special form of the theory for a Gaussian beam is also presented to demonstrate that the material microstructure and the overlap of the incident beam are the important physical parameters controlling the grain noise spatial correlation. The developed theory is validated by the excellent agreement between predictions and experiments. Physical insights into the results for different setups are discussed.

    Ultrasonic signal fluctuations are studied in Chapter 3. The microstructure-induced beam distortions are first explicitly demonstrated. An analytical relationship is then derived to correlate the back-wall P/E spectrum at one transducer location with the through-transmitted field. Based on this analytical relationship and statistical descriptions of various beam-distortion effects, a quantitative Monte-Carlo model is developed to predict the back-wall amplitude fluctuations seen in ultrasonic P/E inspections. The predictions are shown to be in good agreement with experiments. The same modeling approach is used to simulate the flaw (small reflector) signal fluctuation, and the results are compared with an independent modeling study. Qualitative agreement is observed.
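    The Monte-Carlo idea in Chapter 3 (treating the measured echo as a coherent signal perturbed by random grain noise, then studying the amplitude statistics) can be sketched in miniature. Here the back-wall echo is modeled as a fixed phasor plus complex Gaussian noise, which gives Rician-distributed envelope amplitudes; the signal and noise levels are illustrative, and the dissertation's model additionally includes beam-distortion effects.

```python
import math, random

def simulate_backwall_amplitudes(n_trials=10000, signal=1.0,
                                 noise_rms=0.2, seed=1):
    """Monte-Carlo sketch of back-wall echo amplitude fluctuation.

    Each trial draws complex Gaussian grain noise around a coherent
    signal phasor; the envelope |signal + noise| is Rician-distributed."""
    rng = random.Random(seed)
    s = noise_rms / math.sqrt(2.0)   # std per quadrature component
    amps = []
    for _ in range(n_trials):
        re = signal + rng.gauss(0.0, s)
        im = rng.gauss(0.0, s)
        amps.append(math.hypot(re, im))
    return amps

amps = simulate_backwall_amplitudes()
mean_amp = sum(amps) / len(amps)
```

    Histogramming `amps` shows the spread of back-wall readings one would see when scanning a transducer over a noisy microstructure.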

    Volumetric Isosurface Rendering with Deep Learning-Based Super-Resolution

    Rendering an accurate image of an isosurface in a volumetric field typically requires large numbers of data samples. Reducing the number of required samples lies at the core of research in volume rendering. With the advent of deep learning networks, a number of architectures have been proposed recently to infer missing samples in multi-dimensional fields, for applications such as image super-resolution and scan completion. In this paper, we investigate the use of such architectures for learning the upscaling of a low-resolution sampling of an isosurface to a higher resolution, with high-fidelity reconstruction of spatial detail and shading. We introduce a fully convolutional neural network to learn a latent representation that generates a smooth, edge-aware normal field and ambient occlusions from a low-resolution normal and depth field. By adding a frame-to-frame motion loss to the learning stage, the upscaling can account for temporal variations and achieves improved frame-to-frame coherence. We demonstrate the quality of the network for isosurfaces that were never seen during training, and discuss remote and in-situ visualization as well as focus+context visualization as potential applications.
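    The frame-to-frame motion loss can be sketched in one dimension: the network's current output is penalized both against the ground truth and against its previous output warped by the known motion, so temporally consistent outputs incur no extra cost. The frames, the integer motion model, and the weighting below are illustrative assumptions, not the paper's formulation.

```python
def l2(a, b):
    """Mean squared error between two equal-length 1D frames."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def warp(frame, shift):
    """Shift a 1D frame by an integer motion vector (edge-clamped)."""
    n = len(frame)
    return [frame[min(max(i - shift, 0), n - 1)] for i in range(n)]

def total_loss(pred_t, target_t, pred_prev, motion, lam=0.5):
    """Spatial reconstruction loss plus a frame-to-frame coherence term."""
    spatial = l2(pred_t, target_t)
    temporal = l2(pred_t, warp(pred_prev, motion))  # motion-compensated
    return spatial + lam * temporal

# A static scene shifted by +1 pixel between frames: when the network's
# two outputs follow the motion exactly, the temporal term vanishes.
prev = [0.0, 1.0, 2.0, 3.0]
curr = warp(prev, 1)
loss = total_loss(curr, curr, prev, motion=1)
```

    Outputs that flicker between frames raise the temporal term even when each frame individually matches its target, which is what drives the improved coherence.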

    NUMERICAL AND LABORATORY STUDY OF SEISMIC WAVES PROPAGATION, TEMPERATURE EFFECTS AND FLUID FLOWS IN MULTILAYERED MEDIA

    Steel production by continuous casting is nowadays the most efficient method and the one that yields the best-quality semi-finished products. The types of steel that can be produced vary greatly depending on the composition of the mixtures, the casting powders used to prevent oxidation and reduce heat loss, the cooling rate, and many other factors. During continuous casting, heat from the molten steel must be removed in large quantities and quickly to allow the first layer of solid skin to form, so the continuous casting moulds, i.e. large hollow tubes generally made of copper alloys, are immersed in a conveyor with a closed water circuit where water circulates at high speed and pressure. In addition to water, other parameters can be monitored to increase production quality, such as powder deposition on the casting bath and steel level control. It would be useful to have automatic systems capable of replacing manual human control, both to avoid the hazardous situations obviously present in steel mills and to increase knowledge of the production process through the acquisition of reliable data. This research aims to experimentally explore the possibility of measuring the level of molten steel in the mould by making time-of-flight measurements in the wall of the ingot using ultrasonic transducers similar to the ones used for non-destructive testing of materials. These time-of-flight measurements are then converted to temperature to determine a thermal profile along the mould wall, from which the steel level is derived using an ad-hoc algorithm. The research activity included the realization of a real-time hardware and software system that was eventually adopted in real production systems as well. To understand how to design an initial prototype and how to choose the key parameters of the measurement system, a numerical model was implemented to simulate Gaussian beams, which are used to approximate the propagation of ultrasonic beams even in heterogeneous media, as in this case. The results obtained, both from numerical simulations and laboratory tests, made it possible to implement a first measurement tool that adopted a technique already known in the literature but innovative in its application to an industrial context such as continuous casting.
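    The measurement chain described above (time of flight → temperature → steel level) can be sketched as follows; the linear calibration coefficient, the threshold, and the sensor positions are illustrative stand-ins, not values from the thesis.

```python
def tof_to_temperature(tof_us, tof_ref_us, k_us_per_degC):
    """Linear calibration sketch: the excess time of flight through the
    mould wall is taken as proportional to the temperature rise."""
    return (tof_us - tof_ref_us) / k_us_per_degC

def steel_level(positions_mm, temps_degC, threshold_degC):
    """Return the first (topmost) position where the thermal profile
    exceeds the threshold -- a simple stand-in for the ad-hoc algorithm."""
    for pos, t in zip(positions_mm, temps_degC):
        if t >= threshold_degC:
            return pos
    return None

# Measured times of flight along the wall, top to bottom (microseconds):
tofs = [10.00, 10.01, 10.02, 10.30, 10.35]
temps = [tof_to_temperature(t, 10.0, 0.002) for t in tofs]
level = steel_level([0, 50, 100, 150, 200], temps, threshold_degC=100.0)
```

    The sharp jump in the thermal profile marks where molten steel sits behind the wall, which is the quantity the real-time system tracks.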

    Efficient automatic correction and segmentation based 3D visualization of magnetic resonance images

    In recent years, the demand for automated processing techniques for digital medical image volumes has increased substantially. Existing algorithms, however, still often require manual interaction, and newly developed automated techniques are often intended for a narrow segment of processing needs. The goal of this research was to develop algorithms suitable for fast and effective correction and advanced visualization of digital MR image volumes with minimal human operator interaction. This research has resulted in a number of techniques for automated processing of MR image volumes, including a novel MR inhomogeneity correction algorithm called derivative surface fitting (dsf), an automatic tissue detection algorithm (atd), and a new fast technique for interactive 3D visualization of segmented volumes called gravitational shading (gs). These newly developed algorithms provided the foundation for the automated MR processing pipeline incorporated into the UniViewer medical imaging software developed in our group and available to the public. This allowed extensive testing and evaluation of the proposed techniques. Dsf was compared with two previously published methods on 17 digital image volumes; it demonstrated faster correction speeds and uniform image quality improvement, and was the only algorithm that did not remove anatomic detail. Gs was compared with the previously published algorithm fsvr and produced improved rendering quality while preserving real-time frame rates. These results show that the automated pipeline design principles used in this dissertation provide the necessary tools for developing a fast and effective system for the automated correction and visualization of digital MR image volumes.

    Efficient and Accurate Segmentation of Defects in Industrial CT Scans

    Industrial computed tomography (CT) is an elementary tool for the non-destructive inspection of cast light-metal or plastic parts. Comprehensive testing not only helps to ensure the stability and durability of a part; it also allows the rejection rate to be reduced by supporting the optimization of the casting process, and material (and weight) to be saved by producing equivalent but more delicate structures. With a CT scan it is theoretically possible to locate any defect in the part under examination and to determine its exact shape, which in turn helps to draw conclusions about its harmfulness. Most of the time, however, the data quality is not good enough to allow segmenting the defects with simple filter-based methods that operate directly on the gray values, especially when the inspection is expanded to the entire production. In such in-line inspection scenarios, the tight cycle times further limit the time available for the acquisition of the CT scan, which renders the scans noisy and prone to various artifacts. In recent years, dramatic advances in deep learning (and convolutional neural networks in particular) have made even the reliable detection of small objects in cluttered scenes possible. These methods are a promising approach to quickly yield a reliable and accurate defect segmentation even in unfavorable CT scans. The huge drawback: a lot of precisely labeled training data is required, which is utterly challenging to obtain, particularly for the detection of tiny defects in huge, highly artifact-afflicted, three-dimensional voxel data sets. Hence, a significant part of this work deals with the acquisition of precisely labeled training data.
    Firstly, we consider facilitating the manual labeling process: our experts annotate high-quality CT scans with high spatial and contrast resolution, and we then transfer these labels to an aligned "normal" CT scan of the same part, which exhibits all the challenging aspects we expect in production use. Nonetheless, due to the indecisiveness of the labeling experts about what to annotate as defective, the labels remain fuzzy. Thus, we additionally explore different approaches to generate artificial training data for which a precise ground truth can be computed. We find an accurate labeling to be crucial for proper training. We evaluate (i) domain randomization, which simulates a super-set of reality with simple transformations, (ii) generative models, which are trained to produce samples of the real-world data distribution, and (iii) realistic simulations, which capture the essential aspects of real CT scans. Here, we develop a fully automated simulation pipeline which provides us with an arbitrary amount of precisely labeled training data. First, we procedurally generate virtual cast parts in which we place plausible artificial casting defects. Then, we realistically simulate CT scans, including typical CT artifacts such as scatter, noise, cupping, and ring artifacts. Finally, we compute a precise ground truth by determining, for each voxel, its overlap with the defect mesh. To determine whether our realistically simulated CT data is suitable to serve as training data for machine learning methods, we compare the prediction performance of learning-based and non-learning-based defect recognition algorithms on the simulated data and on real CT scans. In an extensive evaluation, we compare our novel deep learning method to a baseline of image processing and traditional machine learning algorithms. This evaluation shows how much defect detection benefits from learning-based approaches.
    In particular, we compare (i) a filter-based anomaly detection method, which finds defect indications by subtracting the original CT data from a generated "defect-free" version, (ii) a pixel-classification method, which, based on densely extracted hand-designed features, lets a random forest decide whether an image element is part of a defect, and (iii) a novel deep learning method, which combines a U-Net-like encoder-decoder pair of three-dimensional convolutions with an additional refinement step. The encoder-decoder pair yields a high recall, which allows us to detect even very small defect instances. The refinement step yields a high precision by sorting out the false-positive responses. We extensively evaluate these models on our realistically simulated CT scans as well as on real CT scans in terms of their probability of detection, which tells us at which probability a defect of a given size can be found in a CT scan of a given quality, and their intersection over union, which tells us how precise the segmentation mask is in general. While the learning-based methods clearly outperform the image processing method, the deep learning method in particular stands out for its inference speed and its prediction performance on challenging CT scans, such as those occurring in in-line scenarios. Finally, we further explore the possibilities and limitations of combining our fully automated simulation pipeline with our deep learning model. Since the deep learning method yields reliable results even for CT scans of low data quality, we examine by how much we can reduce the scan time while still maintaining proper segmentation results. Then, we look at the transferability of these promising results to CT scans of parts made of different materials and manufactured with different techniques, including plastic injection molding, iron casting, additive manufacturing, and composed multi-material parts.
    Each of these tasks comes with its own challenges, such as an increased artifact level or different types of defects that are occasionally hard to detect even for the human eye. We tackle these challenges by employing our simulation pipeline to produce virtual counterparts that capture the tricky aspects and by fine-tuning the deep learning method on this additional training data. With that, we can tailor our approach towards specific tasks, achieving reliable and robust segmentation results even for challenging data. Lastly, we examine whether the deep learning method, based on our realistically simulated training data, can be trained to distinguish between different types of defects (the reason why we require a precise segmentation in the first place) and whether it can detect out-of-distribution data for which its predictions become less trustworthy, i.e. perform uncertainty estimation.
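    Baseline (i), the filter-based anomaly detection, can be sketched in one dimension: build a smoothed "defect-free" reference, subtract, and threshold the residual. The median filter, the sign convention (defects appear as dark, less dense regions), and the threshold are illustrative assumptions, not the dissertation's exact parameters.

```python
def median_filter(values, radius=2):
    """Sliding-window median as a crude 'defect-free' reference."""
    out = []
    n = len(values)
    for i in range(n):
        window = sorted(values[max(0, i - radius):min(n, i + radius + 1)])
        out.append(window[len(window) // 2])
    return out

def detect_defects(profile, radius=2, threshold=50.0):
    """Flag positions where the reference exceeds the measured gray value
    by more than the threshold (pores image darker than solid metal)."""
    reference = median_filter(profile, radius)
    return [i for i, (v, r) in enumerate(zip(profile, reference))
            if r - v > threshold]

# A gray-value line through a casting: one pore as a sharp dark dip.
profile = [200.0] * 10
profile[5] = 80.0
hits = detect_defects(profile)
```

    Such filter baselines work on clean data but degrade quickly on noisy, artifact-afflicted in-line scans, which is the gap the learning-based methods address.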