
    Autonomous robotic system for thermographic detection of defects in upper layers of carbon fiber reinforced polymers

    Carbon Fiber Reinforced Polymers (CFRPs) are composites whose properties, such as a high strength-to-weight ratio and rigidity, make them attractive in many industrial fields. Many defects introduced during their production are due to an incorrect distribution of the thermosetting polymer in the upper layers. In this work, such defects are detected effectively and efficiently by automatically analyzing thermographic images obtained by Pulsed Phase Thermography (PPT) and comparing them with a defect-free reference. The flash lamp and infrared camera required by PPT are mounted on an industrial robot so that the surfaces of CFRP automotive components, car side blades in our case, can be inspected in a series of static tests. The thermographic image analysis is based on local contrast adjustment via UnSharp Masking (USM) and also takes advantage of the detailed knowledge of the entire system provided by the calibration procedures. This system could replace manual inspection, leading to a substantial increase in efficiency.
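    As a rough illustration of the detection principle, the Python sketch below applies unsharp masking to a PPT phase image and flags pixels that deviate from a defect-free reference; the sigma, amount, and threshold values are placeholders, not the paper's calibrated parameters:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def unsharp_mask(img, sigma=2.0, amount=1.5):
            # Local contrast adjustment via USM: add back a scaled
            # high-frequency residual (image minus its blurred copy).
            return img + amount * (img - gaussian_filter(img, sigma=sigma))

        def defect_mask(phase_img, reference_img, threshold=0.1):
            # Flag pixels whose enhanced phase response deviates from
            # the defect-free reference by more than a fixed threshold.
            diff = np.abs(unsharp_mask(phase_img) - unsharp_mask(reference_img))
            return diff > threshold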

    Comparing Adobe’s Unsharp Masks and High-Pass Filters in Photoshop Using the Visual Information Fidelity Metric

    The present study examines image sharpening techniques quantitatively. Unsharp masking has been the preferred image sharpening technique among imaging professionals for many years. More recently, another professional-level sharpening solution has been introduced: the high-pass filter technique. An extensive review of the literature revealed no purely quantitative studies comparing these techniques. The present research compares unsharp masking (USM) and high-pass filter (HPF) sharpening using an image quality metric known as Visual Information Fidelity (VIF). Prior researchers have used VIF data in research aimed at improving the USM sharpening technique; the present study adds to this branch of the literature by comparing the USM and HPF sharpening techniques. The objective is to determine which technique, USM or HPF, yields the highest VIF scores for two categories of images: macro images and architectural images. Each set of images was further analyzed to compare the VIF scores of subjects with high- and low-severity depth-of-field defects. Finally, the researcher proposed rules for choosing USM and HPF parameters that resulted in optimal VIF scores. For each category, the researcher captured 24 images (12 with high-severity defects and 12 with low-severity defects). Each image was sharpened using an iterative process of choosing USM and HPF sharpening parameters, applying sharpening filters with the chosen parameters, and assessing the resulting images using the VIF metric; the process was repeated until the VIF scores could no longer be improved. The highest USM and HPF VIF scores for each image were compared using a paired t-test. The t-test results demonstrated that:
    • The USM VIF scores for macro images (M = 1.86, SD = 0.59) outperformed those for HPF (M = 1.34, SD = 0.18), a statistically significant mean increase of 0.52, t(23) = 5.57, p = 0.0000115. Similar results were obtained for both the high-severity and low-severity subsets of macro images.
    • The USM VIF scores for architectural images (M = 1.40, SD = 0.24) outperformed those for HPF (M = 1.26, SD = 0.15), a statistically significant mean increase of 0.14, t(23) = 5.21, p = 0.0000276. Similar results were obtained for both the high-severity and low-severity subsets of architectural images.
    The researcher found that the optimal sharpening parameters for USM and HPF depend on the content of the image. The optimal choice of parameters for USM depends on whether the most important features are edges or objects; specific rules for choosing USM parameters were developed for each class of images. HPF is simpler in that it uses only one parameter, Radius, and specific rules for choosing the HPF Radius were likewise developed for each class of images. Based on these results, the researcher concluded that USM outperformed HPF in sharpening macro and architectural images. The superior performance of USM may be due to the fact that it gives users more parameters with which to control the sharpening process than HPF does.
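    For readers unfamiliar with the two techniques, the sketch below shows one plausible implementation of each; Photoshop's internal algorithms are not public, so the Gaussian blur and the Overlay blend used for HPF are assumptions:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def usm(img, radius=2.0, amount=0.8, threshold=0.0):
            # Photoshop-style Amount/Radius/Threshold USM: boost the
            # high-frequency residual only where it exceeds the threshold.
            residual = img - gaussian_filter(img, sigma=radius)
            boost = np.where(np.abs(residual) >= threshold, amount * residual, 0.0)
            return np.clip(img + boost, 0.0, 1.0)

        def hpf_sharpen(img, radius=2.0):
            # HPF sharpening: Overlay-blend a grey-centred high-pass copy
            # onto the original; Radius is the single parameter.
            hp = img - gaussian_filter(img, sigma=radius) + 0.5
            low = 2.0 * img * hp                            # dark base pixels
            high = 1.0 - 2.0 * (1.0 - img) * (1.0 - hp)     # bright base pixels
            return np.clip(np.where(img <= 0.5, low, high), 0.0, 1.0)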

    Image enhancement via adaptive unsharp masking

    This paper presents a new method of unsharp masking for contrast enhancement of images. Our approach employs an adaptive filter that controls the contribution of the sharpening path so that contrast enhancement occurs in high-detail areas while little or no sharpening occurs in smooth areas.
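    A minimal sketch of the idea (not the authors' exact adaptive filter, which is derived differently) scales the sharpening gain by local variance, assuming variance as the detail measure:

        import numpy as np
        from scipy.ndimage import gaussian_filter, uniform_filter

        def adaptive_usm(img, sigma=1.5, max_gain=2.0, floor=1e-4):
            # Scale the USM gain by local variance so that high-detail
            # areas are enhanced while smooth areas are left untouched.
            residual = img - gaussian_filter(img, sigma=sigma)
            mean = uniform_filter(img, size=7)
            var = np.maximum(uniform_filter(img * img, size=7) - mean * mean, 0.0)
            gain = max_gain * var / (var + floor)  # ~0 in flat regions
            return np.clip(img + gain * residual, 0.0, 1.0)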

    Nonlinear kernel based feature maps for blur-sensitive unsharp masking of JPEG images

    In this paper, a method for estimating the blurred regions of an image is first proposed, using a mixture of linear and nonlinear convolutional kernels. The resulting blur map is then used to enhance images such that the enhancement strength is an inverse function of the amount of measured blur. The blur map can also be used for tasks such as attention-based object classification, low-light image enhancement, and more. A CNN architecture with nonlinear upsampling layers is trained on a standard blur detection benchmark dataset with the help of blur target maps. The same architecture is then used to build maps of areas affected by the typical JPEG artifacts, ringing and blockiness. Together, the blur map and the artifact map make it possible to build an activation map for the enhancement of a (possibly JPEG-compressed) image. Extensive experiments on standard test images verify the quality of the maps obtained with the algorithm and their effectiveness in locally controlling the enhancement for superior perceptual quality. Finally, the computation time for generating these maps is much lower than that of comparable algorithms.
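    The local control can be pictured with the following sketch, which assumes a blur map in [0, 1] (here taken as an input rather than produced by the paper's CNN) and uses plain unsharp masking as the enhancement:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def blur_weighted_enhance(img, blur_map, sigma=1.5, max_amount=1.2):
            # Enhancement strength as an inverse function of measured blur:
            # sharp regions get full sharpening, blurred regions are left
            # alone so defocus and JPEG artifacts are not amplified.
            residual = img - gaussian_filter(img, sigma=sigma)
            amount = max_amount * (1.0 - np.clip(blur_map, 0.0, 1.0))
            return np.clip(img + amount * residual, 0.0, 1.0)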

    Detecting Image Brush Editing Using the Discarded Coefficients and Intentions

    This paper describes a quick and simple method to detect brush editing in JPEG images. The novelty of the proposed method lies in detecting the coefficients discarded during the quantization of the image. A further contribution is the development of a subjective metric named intentions. The method directly analyzes the allegedly tampered image and generates a forgery mask indicating forgery evidence for each image block. The experiments show that our method works especially well in detecting brush strokes, and reasonably well with added captions and image splicing. However, it is less effective at detecting copy-moved and blurred regions. Our method can therefore contribute to a complete image-tampering detection tool, complemented by methods better suited to the editing operations for which it is less effective.
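    A simplified sketch of the quantization check follows; the intentions metric is not reproduced here, and the quantization table must be read from the JPEG file itself:

        import numpy as np
        from scipy.fftpack import dct

        def forgery_mask(img, qtable):
            # In an untouched JPEG, every 8x8 DCT coefficient is an integer
            # multiple of its quantization step; a large remainder suggests
            # the block was edited after decompression.
            h = img.shape[0] - img.shape[0] % 8
            w = img.shape[1] - img.shape[1] % 8
            mask = np.zeros((h // 8, w // 8))
            for by in range(0, h, 8):
                for bx in range(0, w, 8):
                    block = img[by:by + 8, bx:bx + 8] - 128.0
                    coeffs = dct(dct(block, norm='ortho', axis=0), norm='ortho', axis=1)
                    ratio = coeffs / qtable
                    mask[by // 8, bx // 8] = np.abs(ratio - np.round(ratio)).mean()
            return mask  # higher values = more forgery evidence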

    A Geostatistical Filter for Remote Sensing Image Enhancement

    In this paper, a new method is investigated to enhance remote sensing images by alleviating the point spread function (PSF) effect. The PSF effect exists ubiquitously in remotely sensed imagery; as a result, image quality is greatly affected, and this imposes a fundamental limit on the amount of information captured in remotely sensed images. A geostatistical filter is proposed to enhance image quality based on a downscaling-then-upscaling scheme. The difference between this method and previous methods is that the PSF is represented by breaking each pixel down into a series of sub-pixels, facilitating downscaling using the PSF and then upscaling using a square-wave response; the sub-pixels thus allow disaggregation as an attempt to remove the PSF effect. Experimental results on both simulated and real data sets suggest that the proposed filter can enhance the original images by reducing the PSF effect, and quantify the extent to which this is possible. The predictions using the new method outperform both the original coarse PSF-contaminated imagery and a benchmark method. The proposed method represents a new solution that compensates for the limitations of remote sensors (i.e., hardware) using computer techniques (i.e., software). The method has widespread application value, particularly for applications based on remote sensing image analysis.
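    The downscaling-then-upscaling scheme can be caricatured as follows; note that the paper's geostatistical (kriging-based) downscaling is replaced here by a simple Wiener deconvolution, so this is only a structural sketch, not the proposed filter:

        import numpy as np

        def psf_reduce(img, psf, factor=4, nsr=0.01):
            # 1. Break each pixel into factor x factor sub-pixels.
            sub = np.kron(img, np.ones((factor, factor)))
            # 2. Deconvolve the sensor PSF on the sub-pixel grid
            #    (Wiener filter stands in for the geostatistical step).
            H = np.fft.fft2(psf, s=sub.shape)
            G = np.conj(H) / (np.abs(H) ** 2 + nsr)
            deconv = np.real(np.fft.ifft2(np.fft.fft2(sub) * G))
            # 3. Upscale with a square-wave response: average each block
            #    of sub-pixels back onto the original pixel grid.
            h, w = img.shape
            return deconv.reshape(h, factor, w, factor).mean(axis=(1, 3))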

    Image Quality Evaluation in Lossy Compressed Images

    This research focuses on the quantification of image quality in lossy compressed images, exploring the impact of digital artefacts and scene characteristics upon image quality evaluation. A subjective paired comparison test was implemented to assess the perceived quality of JPEG 2000 against baseline JPEG over a range of scene types. Interval scales were generated for both algorithms, which indicated a subjective preference for JPEG 2000, particularly at low bit rates, and these were confirmed by an objective distortion measure. The subjective results did not follow this trend for some scenes, however, and both algorithms were found to be scene dependent as a result of the artefacts produced at high compression rates. The scene dependencies were explored from the interval scale results, which allowed scenes to be grouped according to their susceptibilities to each of the algorithms; the groupings were correlated with scene measures applied in a linked study.
    A pilot study explored perceptibility thresholds of JPEG 2000 on the same set of images. This work was developed with a further experiment to investigate the thresholds of perceptibility and acceptability of higher-resolution JPEG 2000 compressed images. A set of images was captured using a professional full-frame digital single-lens reflex camera, using a raw workflow and a carefully controlled image-processing pipeline. The scenes were quantified using a set of simple scene metrics and classified as average, higher than average, or lower than average for a number of scene properties known to affect image compression and perceived image quality; these classifications were used to make the final selection of test images. Image fidelity was investigated using the method of constant stimuli to quantify perceptibility thresholds and just noticeable differences (JNDs) of perceptibility. Thresholds and JNDs of acceptability were also quantified to explore suprathreshold quality evaluation. The relationships between the two thresholds were examined and correlated with the results from the scene measures to identify more and less susceptible scenes. The level of, and difference between, the two thresholds was found to be an indicator of scene dependency that could be predicted by certain types of scene characteristics.
    A third study implemented the soft-copy quality ruler as an alternative psychophysical method, matching the quality of compressed images to a set of images varying in a single attribute, separated by known JND increments of quality. The imaging chain and image-processing workflow were evaluated using objective measures of tone reproduction and spatial frequency response. An alternative approach to the creation of ruler images was implemented and tested, and the resulting quality rulers were used to evaluate a subset of the images from the previous study. The quality ruler proved successful in identifying scene susceptibilities and observer sensitivity.
    The fourth investigation explored the implementation of four image quality metrics: the Modular Image Difference Metric, the Structural Similarity Metric, the Multi-Scale Structural Similarity Metric, and the Weighted Structural Similarity Metric. The metrics were tested against the subjective results, and all were found to predict image quality with a linear correlation.
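    Of the four metrics, the structural-similarity family is available off the shelf; a minimal sketch of scoring the two codecs against an uncompressed reference (using scikit-image, grayscale arrays assumed) could read:

        from skimage.metrics import structural_similarity

        def compare_codecs(reference, jpeg_img, jp2_img):
            # SSIM of each compressed image against the reference; higher
            # is better, so the codec ranking mirrors the subjective scales.
            rng = float(reference.max() - reference.min())
            return {
                'JPEG': structural_similarity(reference, jpeg_img, data_range=rng),
                'JPEG 2000': structural_similarity(reference, jp2_img, data_range=rng),
            }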

    Rauschreduktion versus Ortsauflösung in digitalen Bildern (Noise Reduction versus Spatial Resolution in Digital Images)

    In the signal processing of digital still cameras, increasingly complex algorithms are used to reduce the noise in the images. This thesis analyzes the influence of such noise reduction on spatial resolution and develops a measurement system to characterize it by various means.
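    As a crude stand-in for such a measurement system, one can compare the high-frequency energy of an image before and after denoising; the median filter and the band limit below are arbitrary choices for illustration, not the thesis' method:

        import numpy as np
        from scipy.ndimage import median_filter

        def resolution_retained(img, denoise=lambda x: median_filter(x, size=3)):
            # Ratio of high-frequency spectral energy after vs. before
            # noise reduction: 1.0 = no fine detail lost, 0.0 = all lost.
            spectrum = lambda x: np.abs(np.fft.fftshift(np.fft.fft2(x)))
            before, after = spectrum(img), spectrum(denoise(img))
            h, w = img.shape
            yy, xx = np.ogrid[:h, :w]
            r = np.hypot(yy - h / 2, xx - w / 2)
            band = r > min(h, w) / 4  # outer band = fine spatial detail
            return after[band].sum() / before[band].sum()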