
    Mechanisms of fatigue crack nucleation near non-metallic inclusions in Ni-based superalloys

    Ni-based superalloys used for turbine discs are typically produced via powder metallurgy, a process which introduces undesirable non-metallic inclusions. Inclusions can be regarded as fatigue crack nucleation hot-spots due to their mechanical properties differing from those of the matrix in which they are embedded. In this thesis, a series of models and experiments were used to investigate the mechanistic drivers of fatigue crack nucleation in the vicinity of non-metallic inclusions. The drivers of decohesion and fracture of inclusions, often precursors to crack nucleation, were found to be the normal stress acting on the interface and the inclusion maximum principal stress, respectively. Critical values of each criterion were determined using a cohesive zone model within a crystal plasticity finite element (CPFE) model faithfully representative of a real microstructure under low cycle fatigue. Decohesion and inclusion fracture were contrasted against slip-driven nucleation by a stored energy criterion; the key finding here was that decohesion and inclusion fracture marginally reduce fatigue life. The comparative fatigue performance of an inclusion, a twin boundary and a triple junction was studied in a synthetic CPFE microstructure. The inclusion recorded a significantly lower fatigue life compared with the intrinsic microstructural features. Various hardening models were used to investigate cyclic decohesion in a stress-controlled regime. Under no hardening model was cyclic decohesion predicted, strongly suggesting that decohesion is purely a function of the applied stress within the first cycle. A discontinuity-tolerant digital image correlation algorithm was developed to study fatigue crack nucleation near a non-metallic agglomerate at 300°C. Decohesion and fracture of inclusions occurred within the first cycle of loading. Microcracks nucleated throughout the inclusion agglomerate after 6,000 cycles. In addition, a fatigue crack nucleated adjacent to a twin boundary in a coarse grain neighbouring the agglomerate. A high (angular) resolution electron backscatter diffraction (HR-EBSD) analysis and a discrete dislocation plasticity (DDP) model suggested that the strong build-up of geometrically necessary dislocations (GNDs) and slip near the twin boundary is due to the elastic anisotropy of the twin and parent grains.
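    To make the two mechanistic criteria concrete, the sketch below evaluates them for a given stress state. This is a minimal numpy illustration, not the thesis's CPFE/cohesive-zone implementation; the stress tensor, the interface normal, and the function names are hypothetical.

```python
import numpy as np

def interface_normal_stress(sigma, n):
    """Normal traction on the inclusion-matrix interface, t_n = n . (sigma @ n);
    the driver of decohesion identified above."""
    n = n / np.linalg.norm(n)
    return n @ sigma @ n

def max_principal_stress(sigma):
    """Largest eigenvalue of the symmetric stress tensor;
    the driver of inclusion fracture identified above."""
    return np.linalg.eigvalsh(sigma).max()

# Illustrative stress state (MPa) near an inclusion; not data from the thesis.
sigma = np.array([[900.0, 120.0,   0.0],
                  [120.0, 650.0,  80.0],
                  [  0.0,  80.0, 400.0]])
n = np.array([1.0, 1.0, 0.0])  # hypothetical interface normal

print(interface_normal_stress(sigma, n))  # decohesion indicator
print(max_principal_stress(sigma))        # inclusion-fracture indicator
```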

    Matched filter stochastic background characterization for hyperspectral target detection

    Algorithms exploiting hyperspectral imagery for target detection have continually evolved to provide improved detection results. Adaptive matched filters, which may be derived in many different scientific fields, can be used to locate spectral targets by modeling the scene background either as structured (geometric), with a set of endmembers (basis vectors), or as unstructured (stochastic), with a covariance matrix. In unstructured background research, various methods of calculating the background covariance matrix have been developed, each involving either the removal of target signatures from the background model or the segmenting of image data into spatial or spectral subsets. The objective of these methods is to derive a background which matches the source of mixture interference for the detection of subpixel targets, or matches the source of false alarms in the scene for the detection of fully resolved targets. In addition, these techniques increase the multivariate normality of the data from which the background is characterized, thus increasing adherence to the normality assumption inherent in the matched filter and ultimately improving target detection results. Such techniques for improved background characterization are widely practiced but not well documented or compared. This thesis will establish a strong theoretical foundation, describing the necessary preprocessing of hyperspectral imagery, deriving the spectral matched filter, and capturing current methods of unstructured background characterization. The extensive experimentation will allow for a comparative evaluation of several current unstructured background characterization methods, as well as some new methods which improve stochastic modeling of the background. The results will show that consistent improvements over scene-wide statistics can be achieved through spatial or spectral subsetting, and analysis of the results provides insight into the tradespaces of matching the interference, background multivariate normality, and target exclusion for these techniques.
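    As a concrete illustration of the unstructured (stochastic) background model, the following Python sketch implements one common normalization of the adaptive spectral matched filter and recovers an implanted subpixel target in a toy scene. The scene, the regularization term, and the fill fraction are illustrative assumptions, not the thesis's data or exact derivation; per-subset statistics can be passed in to mimic the subsetting methods compared above.

```python
import numpy as np

def spectral_matched_filter(cube, target, mu=None, cov=None):
    """Adaptive matched filter with an unstructured (stochastic) background:
    score(x) = s^T C^-1 (x - mu) / sqrt(s^T C^-1 s), with s = target - mu.
    By default mu and C are scene-wide statistics; passing per-subset
    statistics mimics spatial/spectral subsetting."""
    X = cube.reshape(-1, cube.shape[-1])            # (pixels, bands)
    if mu is None:
        mu = X.mean(axis=0)
    if cov is None:
        cov = np.cov(X, rowvar=False)
    Ci = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))  # mild regularization
    s = target - mu
    scores = (X - mu) @ (Ci @ s) / np.sqrt(s @ Ci @ s)
    return scores.reshape(cube.shape[:-1])

# Toy scene: 32x32 pixels, 10 bands, one implanted 30%-fill subpixel target.
rng = np.random.default_rng(0)
cube = rng.normal(0.2, 0.02, size=(32, 32, 10))
target = np.linspace(0.1, 0.9, 10)
cube[5, 7] = 0.7 * cube[5, 7] + 0.3 * target
scores = spectral_matched_filter(cube, target)
print(np.unravel_index(scores.argmax(), scores.shape))  # -> (5, 7)
```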

    Image Registration Workshop Proceedings

    Automatic image registration has often been considered a preliminary step for higher-level processing, such as object recognition or data fusion. But with the unprecedented amounts of data which are being, and will continue to be, generated by newly developed sensors, automatic image registration has become an important research topic in its own right. This workshop presents a collection of very high quality work which has been grouped into four main areas: (1) theoretical aspects of image registration; (2) applications to satellite imagery; (3) applications to medical imagery; and (4) image registration for computer vision research.

    Image enhancement in digital X-ray angiography

    "Anyone who does not look back to the beginning throughout a course of action, does not look forward to the end. Hence it necessarily follows that an intention which looks ahead depends on a recollection which looks back." (Aurelius Augustinus, De civitate Dei, VII.7, 417 A.D.)

    Chapter 1: Introduction and Summary

    Despite the development of imaging techniques based on alternative physical phenomena, such as nuclear magnetic resonance, emission of single photons (γ-radiation) by radio-pharmaceuticals and photon pairs by electron-positron annihilations, reflection of ultrasonic waves, and the Doppler effect, X-ray based image acquisition is still daily practice in medicine. Perhaps this can be attributed to the fact that, contrary to many other phenomena, X-rays lend themselves naturally to registration by means of materials and methods widely available at the time of their discovery, a fact that gave X-ray based medical imaging an at least 50-year head start over possible alternatives. Immediately after the preliminary communication on the discovery of the "new light" by Röntgen [317], late December 1895, the possible applications of X-rays were investigated intensively. In 1896 alone, almost 1,000 articles about the new phenomenon appeared in print (Glasser [119] lists all of them). Although most of the basics of the diagnostic as well as the therapeutic uses of X-rays had been worked out by the end of that year [289], research on improved acquisition and reduction of potential risks for humans continued steadily in the century to follow. The development of improved X-ray tubes, rapid film changers, image intensifiers, the introduction of television cameras into fluoroscopy, and computers in digital radiography and computerized tomography, formed a succession of achievements which increased the diagnostic potential of X-ray based imaging.

    One of the areas in medical imaging where X-rays have always played an important role is angiography, which concerns the visualization of blood vessels in the human body. (The term originates from the Greek words ἀγγεῖον (aggeion), meaning "vessel" or "bucket", and γράφειν (graphein), meaning "to write" or "to record".) As already suggested, research on the possibility of visualization of the human vasculature was initiated shortly after the discovery of X-rays. A photograph of a first "angiogram", obtained by injection of a mixture of chalk, red mercury, and petroleum into an amputated hand, followed by almost an hour of exposure to X-rays, was published as early as January 1896, by Hascheck & Lindenthal [139]. Although studies on cadavers led to greatly improved knowledge of the anatomy of the human vascular system, angiography in living man for the purpose of diagnosis and intervention became feasible only after substantial progress in the development of relatively safe contrast media and methods of administration, as well as advancements in radiological equipment. Of special interest in the context of this thesis is the improvement brought by photographic subtraction, a technique known since the early 1900s and since then used successfully in e.g. astronomy, but first introduced in X-ray angiography in 1934, by Ziedses des Plantes [425, 426]. This technique allowed for a considerable enhancement of vessel visibility by cancellation of unwanted background structures.
    In the 1960s, the time-consuming film subtraction process was replaced by analog video subtraction techniques [156, 275] which, with the introduction of digital computers, gave rise to the development of digital subtraction angiography [194], a technique still considered by many the "gold standard" for detection and quantification of vascular anomalies. Today, research on improved X-ray based imaging techniques for angiography continues, witness the recent developments in three-dimensional rotational angiography [88, 185, 186, 341, 373].

    The subject of this thesis is enhancement of digital X-ray angiography images. In contrast with the previously mentioned developments, the emphasis is not on the further improvement of image acquisition techniques, but rather on the development and evaluation of digital image processing techniques for retrospective enhancement of images acquired with existing techniques. In the context of this thesis, the term "enhancement" must be regarded in a rather broad sense. It does not only refer to improvement of image quality by reduction of disturbing artifacts and noise, but also to minimization of possible image quality degradation and loss of quantitative information, inevitably introduced by required image processing operations. These two aspects of image enhancement will be clarified further in a brief summary of each of the chapters of this thesis.

    The first three chapters deal with the problem of patient motion artifacts in digital subtraction angiography (DSA). In DSA imaging, a sequence of 2D digital X-ray projection images is acquired, at a rate of e.g. two per second, following the injection of contrast material into one of the arteries or veins feeding the part of the vasculature to be diagnosed. Acquisition usually starts about one or two seconds prior to arrival of the contrast bolus in the vessels of interest, so that the first few images included in the sequence do not show opacified vessels. In a subsequent post-processing step, one of these "pre-bolus" images is then subtracted automatically from each of the contrast images so as to mask out background structures such as bone and soft-tissue shadows. However, it is clear that in the resulting digital subtraction images, the unwanted background structures will have been removed completely only if the patient lay perfectly still during acquisition of the original images. Since most patients show at least some physical reaction to the passage of a contrast medium, this proviso is generally not met. As a result, DSA images frequently show patient-motion induced artifacts (see e.g. the bottom-left image in Fig. 1.1), which may influence the subsequent analysis and diagnosis carried out by radiologists. Since the introduction of DSA, in the early 1980s, many solutions to the problem of patient motion artifacts have been put forward. Chapter 2 presents an overview of the possible types of motion artifacts reported in the literature and the techniques that have been proposed to avoid them. The main purpose of that chapter is to review and discuss the techniques proposed over the past two decades to correct for patient motion artifacts retrospectively, by means of digital image processing.

    Figure 1.1. Example of creation and reduction of patient motion artifacts in cerebral DSA imaging. Top left: a "pre-bolus" or mask image acquired just prior to the arrival of the contrast medium. Top right: one of the contrast or live images showing opacified vessels. Bottom left: DSA image obtained after subtraction of the mask from the contrast image, followed by contrast enhancement. Due to patient motion, the background structures in the mask and contrast image were not perfectly aligned, as a result of which the DSA image does not only show blood vessels, but also additional undesired structures (in this example primarily in the bottom-left part of the image). Bottom right: DSA image resulting from subtraction of the mask and contrast image after application of the automatic registration algorithm described in Chapter 3.
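    As a schematic illustration of the subtraction step just described (not the clinical processing chain of the thesis), the sketch below performs logarithmic mask subtraction, which is natural because X-ray attenuation is multiplicative; the toy background and vessel are assumptions.

```python
import numpy as np

def dsa_subtract(mask, live):
    """Mask-mode digital subtraction: because X-ray attenuation is
    multiplicative (Lambert-Beer), subtracting log-images cancels the
    static background and leaves the iodinated vessels."""
    eps = 1e-6                       # avoid log(0)
    return np.log(live + eps) - np.log(mask + eps)

# Toy example: same background in both frames, vessel only in the live frame.
rng = np.random.default_rng(0)
bg = 0.5 + 0.1 * rng.random((64, 64))
vessel = np.zeros((64, 64)); vessel[30:34, :] = 0.3   # iodine attenuation
mask_img = bg
live_img = bg * np.exp(-vessel)
dsa = dsa_subtract(mask_img, live_img)
print(round(dsa.min(), 3))   # ~ -0.3 along the vessel, ~0 elsewhere
```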
    The chapter addresses fundamental problems, such as whether it is possible to construct a 2D geometrical transformation that exactly describes the projective effects of an originally 3D transformation, as well as practical problems, such as how to retrieve the correspondence between mask and contrast images by using only the grey-level information contained in the images, and how to align the images according to that correspondence in a computationally efficient manner.

    The review in Chapter 2 reveals that there exists quite some literature on the topic of (semi-)automatic image alignment, or image registration, for the purpose of motion artifact reduction in DSA images. However, to the best of our knowledge, research in this area has never led to algorithms which are sufficiently fast and robust to be acceptable for routine use in clinical practice. By drawing upon the suggestions put forward in Chapter 2, a new approach to automatic registration of digital X-ray angiography images is presented in Chapter 3. Apart from describing the functionality of the components of the algorithm, special attention is paid to their computationally optimal implementation. The results of preliminary experiments described in that chapter indicate that the algorithm is effective, very fast, and outperforms alternative approaches, in terms of both image quality and required computation time. It is concluded that the algorithm is most effective in cerebral and peripheral DSA imaging. An example of the image quality enhancement obtained after application of the algorithm in the case of a cerebral DSA image is provided in Fig. 1.1.

    Chapter 4 reports on a clinical evaluation of the automatic registration technique. The evaluation involved 104 cerebral DSA images, which were corrected for patient motion artifacts by the automatic technique, as well as by pixel shifting, a manual correction technique currently used in clinical practice. The quality of the DSA images resulting from the two techniques was assessed by four observers, who compared the images both mutually and to the corresponding original images. The results of the evaluation presented in Chapter 4 indicate that the difference in performance between the two correction techniques is statistically significant. From the results of the mutual comparisons it is concluded that, on average, the automatic registration technique performs comparably to, better than, or even much better than manual pixel shifting in 95% of all cases. In the other 5% of the cases, the remaining artifacts are located near the borders of the image, which are generally diagnostically non-relevant. In addition, the results show that the automatic technique implies a considerable reduction of post-processing time compared to manual pixel shifting (on average, one second versus 12 seconds per DSA image).
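    The brute-force sketch below conveys the idea behind pixel shifting: translate the mask over a small search window and keep the offset that minimizes the energy of the subtraction image. The thesis's automatic algorithm is considerably more elaborate (local correspondences from grey-level information, computationally optimized alignment); this shows only the underlying principle, with hypothetical test data.

```python
import numpy as np

def best_pixel_shift(mask, live, max_shift=5):
    """Exhaustively translate the mask over a small window and keep the
    offset whose subtraction image has the lowest energy (mean squared
    difference on a central crop, so all shifts compare the same area)."""
    h, w = mask.shape
    m = max_shift
    live_c = live[m:h - m, m:w - m]
    best, best_cost = (0, 0), np.inf
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            shifted = mask[m + dy:h - m + dy, m + dx:w - m + dx]
            cost = np.mean((live_c - shifted) ** 2)
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best

# Synthetic test: the "patient" moved by (2, -1) between mask and live frame.
rng = np.random.default_rng(0)
bg = rng.random((64, 64))
live = np.roll(bg, (2, -1), axis=(0, 1))
print(best_pixel_shift(bg, live))   # -> (-2, 1): offset that re-aligns the mask
```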
    The last two chapters deal with somewhat different topics. Chapter 5 is concerned with visualization and quantification of vascular anomalies in three-dimensional rotational angiography (3DRA). Similar to DSA imaging, 3DRA involves the acquisition of a sequence of 2D digital X-ray projection images, following a single injection of contrast material. Contrary to DSA, however, this sequence is acquired during a 180° rotation of the C-arch on which the X-ray source and detector are mounted antipodally, with the object of interest positioned in its iso-center. The rotation is completed in about eight seconds and the resulting image sequence typically contains 100 images, which form the input to a filtered back-projection algorithm for 3D reconstruction. In contrast with most other 3D medical imaging techniques, 3DRA is capable of providing high-resolution isotropic datasets. However, due to the relatively high noise level and the presence of other unwanted background variations caused by surrounding tissue, the use of noise reduction techniques is inevitable in order to obtain smooth visualizations of these datasets (see Fig. 1.2). Chapter 5 presents an inquiry into the effects of several linear and nonlinear noise reduction techniques on the visualization and subsequent quantification of vascular anomalies in 3DRA images. The evaluation is focussed on frequently occurring anomalies such as a narrowing (or stenosis) of the internal carotid artery or a circumscribed dilation (or aneurysm) of intracranial arteries. Experiments on anthropomorphic vascular phantoms indicate that, of the techniques considered, edge-enhancing anisotropic diffusion filtering is most suitable, although the practical use of this technique may currently be limited due to its memory and computation-time requirements.

    Figure 1.2. Visualizations of a clinical 3DRA dataset, illustrating the qualitative improvement obtained after noise reduction filtering. Left: volume rendering of the original, raw image. Right: volume rendering of the image after application of edge-enhancing anisotropic diffusion filtering (see Chapter 5 for a description of this technique). The visualizations were obtained by using the exact same settings for the parameters of the volume rendering algorithm.
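    The following sketch implements scalar Perona-Malik diffusion, a simpler relative of the edge-enhancing anisotropic diffusion evaluated in Chapter 5 (which steers a full diffusion tensor along edges). It is included only to show the principle of smoothing noise while preserving vessel boundaries; the parameters and test image are illustrative.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.3, dt=0.2):
    """Scalar Perona-Malik diffusion: diffuse strongly in flat regions and
    weakly across large gradients, via the edge-stopping function g.
    Periodic boundaries (np.roll) are used for brevity."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
    for _ in range(n_iter):
        dn = np.roll(u, -1, 0) - u            # differences to 4 neighbours
        ds = np.roll(u,  1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u,  1, 1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Noisy step edge: the noise is smoothed while the edge survives.
rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[:, 32:] = 1.0
noisy = img + 0.1 * rng.normal(size=img.shape)
den = perona_malik(noisy)
print(noisy[:, :16].std(), den[:, :16].std())   # flat region: std drops
print(abs(den[:, 33] - den[:, 30]).mean())      # edge contrast ~1 preserved
```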
    Finally, Chapter 6 addresses the problem of interpolation of sampled data, which occurs e.g. when applying geometrical transformations to digital medical images for the purpose of registration or visualization. In most practical situations, interpolation of a sampled image followed by resampling of the resulting continuous image on a geometrically transformed grid inevitably implies loss of grey-level information, and hence image degradation, the amount of which is dependent on image content, but also on the employed interpolation scheme (see Fig. 1.3). It follows that the choice for a particular interpolation scheme is important, since it influences the results of registrations and visualizations, and the outcome of subsequent quantitative analyses which rely on grey-level information contained in transformed images. Although many interpolation techniques have been developed over the past decades, thorough quantitative evaluations and comparisons of these techniques for medical image transformation problems are still lacking. Chapter 6 presents such a comparative evaluation. The study is limited to convolution-based interpolation techniques, as these are most frequently used for registration and visualization of medical image data. Because of the ubiquitousness of interpolation in medical image processing and analysis, the study is not restricted to XRA and 3DRA images, but also includes datasets from many other modalities. It is concluded that for all modalities, spline interpolation constitutes the best trade-off between accuracy and computational cost, and therefore is to be preferred over all other methods.

    Figure 1.3. Illustration of the fact that the loss of information due to interpolation and resampling operations is dependent on the employed interpolation scheme. Left: slice of a 3DRA image after rotation over 5.0°, by using linear interpolation. Middle: the same slice, after rotation by using cubic spline interpolation. Right: the difference between the two rotated images. Although it is not possible with such a comparison to come to conclusions as to which of the two methods yields the smallest loss of grey-level information, this example clearly illustrates the point that different interpolation methods usually yield different results.

    In summary, this thesis is concerned with the improvement of image quality and the reduction of image quality degradation and loss of quantitative information. The subsequent chapters describe techniques for reduction of patient motion artifacts in DSA images, noise reduction techniques for improved visualization and quantification of vascular anomalies in 3DRA images, and interpolation techniques for the purpose of accurate geometrical transformation of medical image data. The results and conclusions of the evaluations described in this thesis provide general guidelines for the applicability and practical use of these techniques.
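    The comparison of Fig. 1.3 is easy to reproduce in spirit: rotate an image with linear and with cubic spline interpolation and inspect the difference, plus a forward-and-backward rotation as a crude proxy for grey-level information loss. The sketch uses scipy.ndimage for the convolution-based schemes and only illustrates the kind of experiment Chapter 6 performs, not its actual methodology or data.

```python
import numpy as np
from scipy import ndimage

# Smooth random test image (a stand-in for a 3DRA slice).
rng = np.random.default_rng(1)
img = ndimage.gaussian_filter(rng.normal(size=(128, 128)), 3.0)

# Rotate over 5.0 degrees with two convolution-based schemes (cf. Fig. 1.3).
lin = ndimage.rotate(img, 5.0, reshape=False, order=1)   # linear
spl = ndimage.rotate(img, 5.0, reshape=False, order=3)   # cubic B-spline
print(np.abs(lin - spl).max())   # the two schemes disagree, as in Fig. 1.3

# Round-trip rotation as a crude proxy for grey-level information loss.
for order, name in ((1, "linear"), (3, "cubic spline")):
    back = ndimage.rotate(ndimage.rotate(img, 5.0, reshape=False, order=order),
                          -5.0, reshape=False, order=order)
    print(name, np.sqrt(np.mean((img - back) ** 2)))   # spline loses less
```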

    Compression of Spectral Images


    Computational Imaging Approach to Recovery of Target Coordinates Using Orbital Sensor Data

    This dissertation addresses the components necessary for simulating image-based recovery of a target's position using orbital image sensors. Each component is considered in detail, focusing on the effect that design choices and system parameters have on the accuracy of the position estimate. Changes in sensor resolution, varying amounts of blur, differences in image noise level, the selection of algorithms used for each component, and lag introduced by excessive processing time all contribute to the accuracy of the recovered target coordinates. Using physical targets and sensors in this scenario would be cost-prohibitive in the exploratory setting posed; therefore, a simulated target path is generated using Bezier curves which approximate representative paths followed by the targets of interest. Orbital trajectories for the sensors are designed on an elliptical model representative of the motion of physical orbital sensors. Images from each sensor are simulated based on the position and orientation of the sensor, the position of the target, and the imaging parameters selected for the experiment (resolution, noise level, blur level, etc.). Post-processing of the simulated imagery seeks to reduce noise and blur and to increase resolution. The only information available for calculating the target position in a fully implemented system is the sensor position and orientation vectors and the images from each sensor. From these data we develop a reliable method of recovering the target position and analyze the impact on near-real-time processing. We also discuss the influence of adjustments to system components on overall capabilities, and address the potential system size, weight, and power requirements of realistic implementation approaches.
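    The geometric core of recovering target coordinates from several sensors' position and orientation data can be written compactly: intersect the lines of sight in a least-squares sense. The sketch below does exactly that for two hypothetical sensors; the dissertation's full simulation additionally models resolution, blur, noise, and processing lag, none of which appear here.

```python
import numpy as np

def triangulate(positions, directions):
    """Least-squares intersection of lines of sight: sensor i at p_i sees
    the target along unit vector d_i; solve sum_i (I - d_i d_i^T)(x - p_i) = 0
    for the point x closest to all rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(positions, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Two hypothetical sensors observing a target at (1, 2, 3); in the full
# simulation the directions would come from noisy, blurred imagery.
target = np.array([1.0, 2.0, 3.0])
ps = [np.array([0.0, 0.0, 700.0]), np.array([500.0, 0.0, 700.0])]
ds = [target - p for p in ps]
print(triangulate(ps, ds))   # -> approximately [1. 2. 3.]
```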

    Characterization of Smoke Particles Toward Improved Remote Sensing Retrievals and Chemical Transport Modeling

    Wildfires have increased in extent, intensity, and frequency across the globe over recent decades. These uncontrolled fires trigger cascading effects on local ecosystems, and their emissions pose growing risks to air quality and climate. Wildfire emissions contain a variety of trace gases and particulate matter. The particle-phase emissions, especially light-absorbing species including black carbon (BC) and brown carbon (BrC), significantly affect the regional and global climate by modulating radiative transfer in the atmosphere. A great discrepancy still exists between model- and observation-based estimates of aerosol-radiation interactions (ARI). The discrepancy is partially attributed to the mischaracterization of aerosol microphysical properties in current chemical transport models and the misinterpretation of satellite observational data. Motivated by these challenges, this dissertation aims to advance the knowledge in wildfire studies from two aspects: (1) assessing the radiative effects of fire-emitted particles by incorporating their morphological and optical properties into a radiative transfer algorithm, and (2) developing an improved algorithm to retrieve subpixel fire properties.

    Objective 1: Nascent BC particles exhibit an aggregated appearance. We applied electron tomography (ET) coupled with a slice-by-slice voxel filling algorithm to reconstruct the 3D morphology of BC aggregates. The morphological and optical properties of the BC aggregates were studied with Q-space analysis and the discrete dipole approximation, respectively. Our study indicates that the ET-reconstructed aggregates differ in morphological and optical characteristics from those resolved by traditional 2D microscopic analysis or by modeling aggregation processes. Additionally, BC aerosols undergo an aging process after they are emitted into the atmosphere. The particle-scale characterization was therefore extended to aged BC particles by adding different levels of coating onto the nascent BC. In this part of the work, we numerically investigated the variation of fractal characteristics as BC is coated. The morphology of a coated BC particle fits the ideal fractal law well when its radius of gyration is identical to that of the bare BC core; however, the same law fits the structures of heavily or unevenly coated BC aggregates poorly. Our findings suggest that a more realistic parameterization of both nascent and aged BC needs to be incorporated into climate models. The microphysical characteristics of fire-emitted particles were then incorporated into radiative transfer models to evaluate their radiative effects. We integrated the Mie code with the successive order of scattering (SOS) algorithm to simulate the polarimetric signals at the top of the atmosphere. The modeled polarization quantities exhibit potential to distinguish particles with distinct light-absorbing properties. Moreover, we integrated the above-mentioned fractal particle model and the associated optical properties of aggregated particles into an optical computation module, Flexible Aerosol Optical Depth (FlexAOD), as well as an offline radiative transfer algorithm based on the DIScrete Ordinates Radiative Transfer (DISORT) principle, to re-evaluate the ARI of BC in the wildfire regions of the northwest US. Our results suggest that BC morphology has a noticeable impact on aerosol optical depth (AOD) and the resulting radiative forcing.
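    For reference, the "ideal fractal law" referred to above is N = k_f (R_g / a)^D_f, relating monomer count N, radius of gyration R_g, monomer radius a, fractal dimension D_f, and prefactor k_f. The sketch below fits it across an ensemble of synthetic random-walk aggregates; the aggregate generator is a stand-in, not the ET-reconstructed or simulated BC geometries of the dissertation.

```python
import numpy as np

def radius_of_gyration(centers):
    """R_g of an aggregate of equal monomers (centers: N x 3 array)."""
    c = centers.mean(axis=0)
    return np.sqrt(np.mean(np.sum((centers - c) ** 2, axis=1)))

def fit_fractal_law(aggregates, a):
    """Fit N = k_f (R_g / a)**D_f in log-log space over an ensemble;
    returns (k_f, D_f)."""
    N = np.array([len(c) for c in aggregates])
    Rg = np.array([radius_of_gyration(c) for c in aggregates])
    Df, log_kf = np.polyfit(np.log(Rg / a), np.log(N), 1)
    return np.exp(log_kf), Df

# Stand-in aggregates: random-walk chains of touching monomers (radius a),
# which should recover D_f near 2, the random-walk fractal dimension.
rng = np.random.default_rng(2)
def random_chain(n, a=1.0):
    steps = rng.normal(size=(n, 3))
    steps *= 2 * a / np.linalg.norm(steps, axis=1, keepdims=True)
    return np.cumsum(steps, axis=0)

aggs = [random_chain(int(n)) for n in rng.integers(30, 500, size=40)]
print(fit_fractal_law(aggs, a=1.0))   # -> (k_f, D_f ~ 2)
```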
    Objective 2: The sporadic occurrence and the dynamically evolving nature of wildfires require measurement techniques with broad spatiotemporal coverage and high resolution. Satellite-based products have therefore been widely used in estimating the emission rates of atmospheric pollutants. Additionally, many atmospheric and meteorological applications require the subpixel fire area fraction and fire temperature to estimate the plume injection height and to understand the mechanisms of subsequent pyroconvection processes. A thermodynamically constrained algorithm was developed that utilizes the radiance at middle infrared (MIR) and thermal infrared (TIR) wavelengths to retrieve subpixel fire characteristics. This algorithm considers heat transfer beyond the fire area itself to include the adjacent heated land; by doing so, we resolved a continuously varying temperature profile outside the fire area. Furthermore, a comparison of the retrieved fire temperature and area fraction between the improved and the traditional bi-spectral algorithms, via a Williams Flats fire test case during the 2019 FIREX-AQ campaign, shows that the improved algorithm outputs a lower fire temperature but a significantly larger fire area fraction than the traditional method. This implies that the new algorithm can help reconcile the significant underestimation of fire emissions produced by burned-area-based approaches.
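    The traditional bi-spectral (Dozier-type) retrieval that the improved algorithm is compared against can be sketched compactly: model the pixel radiance at a MIR and a TIR wavelength as a mixture of fire and background Planck radiances, then solve for the fire temperature and area fraction. The sketch below does this with a brute-force scan; the heated-land temperature profile that distinguishes the improved algorithm is deliberately not modeled, and the wavelengths and test pixel are illustrative assumptions.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann

def planck(wl, T):
    """Spectral radiance B(wl, T) in W m^-2 sr^-1 m^-1."""
    return 2 * H * C**2 / wl**5 / (np.exp(H * C / (wl * KB * T)) - 1.0)

def bispectral_retrieval(L_mir, L_tir, T_bg, wl_mir=3.9e-6, wl_tir=11e-6):
    """Pixel radiance modeled as f*B(wl, T_fire) + (1 - f)*B(wl, T_bg) at two
    wavelengths; scan T_fire and keep the temperature at which the two
    implied area fractions agree."""
    Ts = np.linspace(T_bg + 1.0, 1500.0, 5000)
    f_mir = (L_mir - planck(wl_mir, T_bg)) / (planck(wl_mir, Ts) - planck(wl_mir, T_bg))
    f_tir = (L_tir - planck(wl_tir, T_bg)) / (planck(wl_tir, Ts) - planck(wl_tir, T_bg))
    i = np.argmin(np.abs(f_mir - f_tir))
    return Ts[i], f_mir[i]

# Synthetic pixel: 1% of the pixel burning at 800 K over a 300 K background.
T_true, f_true, T_bg = 800.0, 0.01, 300.0
L_mir = f_true * planck(3.9e-6, T_true) + (1 - f_true) * planck(3.9e-6, T_bg)
L_tir = f_true * planck(11e-6, T_true) + (1 - f_true) * planck(11e-6, T_bg)
print(bispectral_retrieval(L_mir, L_tir, T_bg))   # -> approx. (800.0, 0.01)
```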

    Model-Based Environmental Visual Perception for Humanoid Robots

    The visual perception of a robot should answer two fundamental questions: What? and Where? In order to answer these questions properly and efficiently, it is essential to establish a bidirectional coupling between the external stimuli and the internal representations. This coupling links the physical world with the inner abstraction models by sensor transformation, recognition, matching, and optimization algorithms. The objective of this PhD thesis is to establish this sensor-model coupling.

    Image editing and interaction tools for visual expression

    Digital photography is becoming extremely common in our daily life. However, images are difficult to edit and interact with. From a user's perspective, it is important to be able to interact freely with the images on a smartphone or iPad. In this thesis we develop several image editing and interaction systems with this idea in mind. We aim to create visual models with pre-computed internal structures such that interaction is readily supported. We demonstrate that such interactive models, driven by a user's hand, can deliver powerful visual expressiveness and make static pixel arrays much more fun to play with. The first system harnesses the editing power of vector graphics. We convert raster images into a vector representation using Loop's subdivision surfaces. An image is represented by a multi-resolution, feature-preserving sparse control mesh, with which image editing can be done at a semantic level. A user can easily put a smile on a face image, or adjust the level of scene abstractness through a simple slider. The second system allows one to insert an object from an image into a new scene. The key is to correct the shading on the object such that it is consistent with the scene. Unlike traditional approaches, we use a simple shape to capture gross shading effects and a set of shading detail images to account for visual complexities. The high-frequency nature of these detail images allows a moderate range of interactive composition effects without causing alarming visual artifacts. The third system operates on video clips instead of a single image. We propose a fully automated algorithm to create…
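    The second system's shading decomposition can be caricatured in a few lines: split the object's shading into a smooth gross component and a high-frequency detail residual, then recompose with a new gross shading. The Gaussian low-pass below is a stand-in for the simple 3D shape the system actually fits, and all names and parameters are hypothetical.

```python
import numpy as np
from scipy import ndimage

def split_shading(luminance, sigma=15.0):
    """Split shading into a smooth 'gross' component and a high-frequency
    detail residual (stand-in for fitting a simple 3D shape)."""
    gross = ndimage.gaussian_filter(luminance, sigma)
    return gross, luminance - gross

def recompose(new_gross, detail):
    """Re-light the object: a new gross shading (e.g. matching the target
    scene's illumination) plus the preserved detail layer."""
    return np.clip(new_gross + detail, 0.0, 1.0)

# Toy object lit from the right; flip the illumination while keeping detail.
rng = np.random.default_rng(0)
x = np.linspace(0.2, 0.8, 128)
lum = x[None, :] + 0.05 * rng.normal(size=(128, 128))   # shading + texture
gross, detail = split_shading(lum)
relit = recompose(gross[:, ::-1], detail)               # now lit from the left
print(relit.shape)
```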