
    Spatial-Spectral Transformer for Hyperspectral Image Denoising

    Hyperspectral image (HSI) denoising is a crucial preprocessing step for subsequent HSI applications. Although deep learning has advanced HSI denoising, existing convolution-based methods face a trade-off between computational efficiency and the ability to model the non-local characteristics of HSI. In this paper, we propose a Spatial-Spectral Transformer (SST) to alleviate this problem. To fully exploit the intrinsic similarity characteristics in both the spatial and spectral dimensions, we conduct non-local spatial self-attention and global spectral self-attention within a Transformer architecture. The window-based spatial self-attention captures spatial similarity beyond the neighboring region, while the spectral self-attention exploits long-range dependencies between highly correlated bands. Experimental results show that the proposed method outperforms state-of-the-art HSI denoising methods in both quantitative quality and visual results.
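    The spectral branch of such a design is, in essence, standard self-attention applied across bands rather than across spatial tokens, so every band can attend to every other band. A minimal PyTorch sketch of that idea (illustrative only, not the authors' implementation; all names and sizes are assumptions):

```python
# Minimal sketch of global spectral self-attention (illustrative, not the
# authors' code): each band of the HSI cube is treated as a token, so the
# attention matrix spans all bands and captures inter-band dependencies.
import torch
import torch.nn as nn

class SpectralSelfAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.to_qkv = nn.Linear(dim, dim * 3)   # one projection for Q, K, V
        self.proj_out = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x):                       # x: (batch, bands, dim)
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale   # (batch, bands, bands)
        attn = attn.softmax(dim=-1)
        return self.proj_out(attn @ v)          # (batch, bands, dim)

# Example: a 31-band cube whose per-band spatial features were flattened to dim=64.
cube = torch.randn(2, 31, 64)
out = SpectralSelfAttention(dim=64)(cube)
print(out.shape)                                # torch.Size([2, 31, 64])
```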

    Spatially Enhanced Spectral Unmixing Through Data Fusion of Spectral and Visible Images from Different Sensors

    Publisher's version (published article). We propose an unmixing framework for enhancing endmember fraction maps using a combination of spectral and visible images. The new method, data fusion through spatial information-aided learning (DFuSIAL), is based on a learning process for the fusion of a multispectral image of low spatial resolution and a visible RGB image of high spatial resolution. Unlike commonly used methods, DFuSIAL allows data from different sensors to be fused. To achieve this objective, we apply a learning process using automatically extracted invariant points, which are assumed to have the same land-cover type in both images. First, we estimate the fraction maps of a set of endmembers for the spectral image. Then, we train a spatial-features aided neural network (SFFAN) to learn the relationship between the fractions, the visible bands, and rotation-invariant spatial features for learning (RISFLs) that we extract from the RGB image. Our experiments show that the proposed DFuSIAL method obtains fraction maps with significantly enhanced spatial resolution and an average mean absolute error between 2% and 4% compared to the reference ground truth. Furthermore, the proposed method is shown to be preferable to other examined state-of-the-art methods, especially when data are obtained from different instruments and in cases with missing-data pixels. This research was partially funded by the Icelandic Research Fund through the EMMIRS project, and by the Israel Science Ministry and Space Agency through the Venus project. Peer Reviewed
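    A rough sketch of the core learning step, under the recipe described above: a small network maps RGB values plus spatial features to endmember fractions at the invariant points, and is then applied to every high-resolution pixel. The class name, layer sizes, and feature/endmember counts below are placeholders, not the paper's actual design:

```python
# Illustrative sketch only, assuming the general DFuSIAL recipe: learn a mapping
# from (RGB bands + spatial features) to endmember fractions at invariant points,
# then apply it to all high-resolution pixels. Names and sizes are placeholders.
import torch
import torch.nn as nn

class FractionNet(nn.Module):
    def __init__(self, n_features: int, n_endmembers: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_endmembers),
        )

    def forward(self, x):
        # Softmax keeps predicted fractions non-negative and summing to one.
        return self.net(x).softmax(dim=-1)

# Training samples at invariant points: 3 RGB bands + 8 spatial features (assumed),
# with low-resolution fraction estimates for 4 endmembers as targets.
x_train = torch.randn(500, 11)
y_train = torch.rand(500, 4)
y_train = y_train / y_train.sum(dim=1, keepdim=True)

model = FractionNet(n_features=11, n_endmembers=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x_train), y_train)
    loss.backward()
    opt.step()

# Inference: every flattened high-resolution pixel gets an enhanced fraction vector.
x_pixels = torch.randn(10_000, 11)
enhanced_fractions = model(x_pixels)     # shape (10000, 4)
```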

    Structured-light based sensing using a single fixed fringe grating: Fringe boundary detection and 3-D reconstruction

    Advanced electronic manufacturing requires the 3-D inspection of very small surfaces, such as the solder bumps on wafers for direct die-to-die bonding. Yet the microscopic size and the highly specular, textureless nature of the surfaces make the task difficult. It is also required that the entire inspection system be small, so as to minimize restraint on the operation of the various moving parts involved in the manufacturing process. In this paper, we describe a new 3-D reconstruction mechanism for the task. The mechanism is based upon the well-known concept of structured-light projection, but adapted to a new configuration that has a particularly small system size and operates in a different manner. Unlike traditional mechanisms, which involve an array of light sources that occupy a rather extended physical space, the proposed mechanism consists of only a single light source plus a binary grating for projecting a binary pattern. To allow the projection at each position of the inspected surface to vary and form a distinct binary code, the binary grating is shifted in space. At every shift, a separate image of the illuminated surface is taken. With the use of pattern projection, and of discrete rather than analog coding in the projection, issues such as the absence of texture, image saturation, and image noise on the inspected surfaces are much reduced. Experimental results on a variety of objects are presented to illustrate the effectiveness of this mechanism. © 2008 IEEE.
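    The decoding step implied by this scheme is straightforward: each grating shift contributes one bit per pixel, and stacking the thresholded frames yields a distinct binary code that identifies the projected stripe for triangulation. A minimal NumPy sketch of that general idea (not the paper's method; the thresholding rule and bit ordering are assumptions):

```python
# Minimal sketch (illustrative): combine one thresholded frame per grating shift
# into a per-pixel binary code, which later indexes the stripe for triangulation.
import numpy as np

def decode_binary_codes(images, threshold=None):
    """images: sequence of N grayscale frames, one per grating shift."""
    stack = np.asarray(images, dtype=float)          # (N, H, W)
    if threshold is None:
        threshold = stack.mean(axis=0)               # per-pixel adaptive threshold
    bits = (stack > threshold).astype(np.uint32)     # (N, H, W) of 0/1
    weights = 1 << np.arange(bits.shape[0], dtype=np.uint32)
    codes = np.tensordot(weights, bits, axes=([0], [0]))  # (H, W) integer codes
    return codes

# Example with 4 synthetic shifts of a 480x640 frame: codes range over 0..15,
# and each code maps (via calibration) to a stripe position on the grating.
frames = np.random.rand(4, 480, 640)
codes = decode_binary_codes(frames)
print(codes.shape, codes.min(), codes.max())
```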

    LASER RANGE IMAGING FOR ON-LINE MAPPING OF 3D IMAGES TO PSEUDO-X-RAY IMAGES FOR POULTRY BONE FRAGMENT DETECTION

    A laser range imaging system was developed for on-line, high-resolution 3D shape recovery of poultry fillets. The range imaging system, in conjunction with X-ray imaging, was used to provide synergistic imaging detection of bone fragments in poultry fillets. In this research, two 5 mW diode lasers coupled with two CCD cameras were used to produce 3D information based on structured light and triangulation. Laser scattering on meat tissue was studied when calculating object thickness. To obtain accurate 3D information, the cameras were calibrated to correct for camera distortions. For pixel registration of the X-ray and laser 3D images, the range imaging system was calibrated, and noise and signal variations in the X-ray and laser 3D images were analyzed. Furthermore, the relationship between X-ray absorption and the 3D thickness of fillets was obtained, and a mapping function based on this relationship was applied to convert the fillet 3D images into pseudo-X-ray images. For the on-line system implementation, imaging hardware and software engineering issues, including data flow optimization and operating system task scheduling, were also studied. In an on-line test, the range imaging system was able to scan poultry fillets at a speed of 0.2 m/s with a resolution of 0.8 (X) × 0.7 (Y) × 0.7 (Z) mm³. The results of this study show great potential for non-invasive detection of hazardous materials in boneless poultry meat with uneven thickness.
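    The thickness-to-pseudo-X-ray mapping can be pictured with a Beer-Lambert style attenuation model; in the study the mapping function was fit empirically from registered X-ray and laser range images, so the coefficient and intensity below are assumptions for illustration only:

```python
# Illustrative sketch (not the paper's fitted mapping): convert a laser-derived
# thickness map into a pseudo-X-ray image with a Beer-Lambert attenuation model.
import numpy as np

def thickness_to_pseudo_xray(thickness_mm, i0=255.0, mu=0.05):
    """i0: gray level at zero thickness; mu: assumed attenuation per mm."""
    return i0 * np.exp(-mu * np.asarray(thickness_mm, dtype=float))

# Example: thicker regions attenuate more, so bone fragments appear as residual
# differences when the real X-ray is compared against this pseudo-X-ray image.
thickness = np.array([[5.0, 10.0], [15.0, 20.0]])   # mm
print(thickness_to_pseudo_xray(thickness))
```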

    Imaging Sensors and Applications

    In past decades, various sensor technologies have been used in all areas of our lives, improving our quality of life. In particular, imaging sensors have been widely applied in the development of various imaging approaches, such as optical imaging, ultrasound imaging, X-ray imaging, and nuclear imaging, and have contributed to achieving high sensitivity, miniaturization, and real-time imaging. These advanced image sensing technologies play an important role not only in the medical field but also in industry. This Special Issue covers broad topics on imaging sensors and applications. The scope extends to novel imaging sensors and diverse imaging systems, including hardware and software advancements. Additionally, biomedical and nondestructive sensing applications are welcome.

    In-situ Grain Scale Strain Measurements using Digital Image Correlation

    Materials used in engineering structures fatigue and ultimately fail under the various applied loads they are subjected to, a process that compromises structural performance and can pose threats to society. Commonly employed theoretical models capable of describing and predicting deformation and failure are typically validated against experimental results obtained from laboratory testing. However, such models are also often based on simplifying assumptions, including, for example, homogeneous composition and isotropic behavior, since the available experimental information relates primarily to bulk behavior.

    Metals are crystalline in nature, and their failure depends on several parameters that span a wide range of time and length scales. Significant efforts have therefore been made over the past decades to investigate the mechanical behavior of polycrystalline metals by formulating microstructure-property relations. In this context, this thesis presents a framework for obtaining reliable, non-destructive, non-contact, full-field measurements of deformation and strain at the grain scale of polycrystalline materials, to assist the understanding of materials phenomena and contribute to the development of realistic mechanics models. To this end, the method of Digital Image Correlation is used, adapted, and expanded.

    Digital Image Correlation relies on images of the surface of tested specimens, components, or structures and on the identification of surface contrast patterns, which are tracked as a function of deformation and subsequently used to compute displacements and strains. To quantify strains at the grain scale, three different approaches based on Digital Image Correlation are described. The first uses a commercial system adapted to make grain-scale measurements at the meso-scale (~4 mm); a magnesium AZ31 alloy was observed for this purpose, and full-field strain maps are reported. The second employs the same commercial system augmented with a long-distance optical microscope to quantify, in situ, the strains at the tip of a propagating crack in a Compact Tension specimen of an Al2024 aluminum alloy subjected to Mode I loading, using a field of view of ~870 x 730 μm. Finally, the third approach uses an image series acquired by loading a stainless steel sample inside a scanning electron microscope equipped with a micro-tensile stage; this information was post-processed ex situ and strains were obtained. The advantages and limitations of the proposed approaches are critically evaluated, and future work is described to further enhance the reliability and repeatability of grain-scale strain measurements using Digital Image Correlation.

    M.S., Mechanical Engineering -- Drexel University, 201
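    The core Digital Image Correlation step described above can be sketched as subset matching by normalized cross-correlation: a small patch of the surface contrast pattern in the reference image is located in the deformed image, and the offset of the best match gives its displacement. The sketch below recovers only integer-pixel displacements (real DIC adds subpixel interpolation and subset shape functions) and is an illustration, not the thesis implementation; all names and window sizes are assumptions:

```python
# Minimal DIC-style sketch (illustrative): track a reference subset into the
# deformed image with normalized cross-correlation to get its displacement.
import numpy as np
import cv2

def track_subset(ref, cur, center, half=15, search=10):
    """Integer-pixel displacement of a (2*half+1)^2 subset centered at `center`."""
    y, x = center
    subset = ref[y - half:y + half + 1, x - half:x + half + 1]
    win = cur[y - half - search:y + half + search + 1,
              x - half - search:x + half + search + 1]
    score = cv2.matchTemplate(win.astype(np.float32),
                              subset.astype(np.float32), cv2.TM_CCOEFF_NORMED)
    dy, dx = np.unravel_index(np.argmax(score), score.shape)
    return int(dy) - search, int(dx) - search   # displacement in pixels

# Example: a synthetic speckle image shifted by (2, 3) pixels is recovered.
rng = np.random.default_rng(0)
ref = rng.random((200, 200)).astype(np.float32)
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
print(track_subset(ref, cur, center=(100, 100)))  # (2, 3)
```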