
    Comparison of spatial domain optimal trade-off maximum average correlation height (OT-MACH) filter with scale invariant feature transform (SIFT) using images with poor contrast and large illumination gradient

    A spatial domain optimal trade-off Maximum Average Correlation Height (OT-MACH) filter has been previously developed and shown to have advantages over frequency domain implementations in that it can be made locally adaptive to spatial variations in the input image background clutter and normalised for local intensity changes. In this paper we compare the performance of the spatial domain (SPOT-MACH) filter with the widely applied data-driven technique known as the Scale Invariant Feature Transform (SIFT). The SPOT-MACH filter is shown to provide more robust recognition performance than SIFT for demanding images such as scenes containing large illumination gradients. The SIFT method depends on reliable local edge-based feature detection over large regions of the image plane, which is compromised in some of the demanding images examined in this work. The disadvantage of the SPOT-MACH filter is its numerically intensive nature, since it is template based and implemented in the spatial domain. © (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
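    As a rough illustration of the two approaches being compared, the following is a minimal sketch, not the authors' implementation: a locally normalised spatial-domain correlation (cv2.matchTemplate with TM_CCOEFF_NORMED stands in for the SPOT-MACH filter) versus SIFT keypoint detection, whose contrast threshold tends to suppress features in poorly lit regions. File names and parameter values are assumptions.
```python
# Sketch only: contrasting a locally normalised spatial-domain correlation
# with SIFT keypoint detection on a low-contrast image.
import cv2

scene = cv2.imread("scene_low_contrast.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
template = cv2.imread("target_template.png", cv2.IMREAD_GRAYSCALE)  # hypothetical template

# TM_CCOEFF_NORMED subtracts the local mean and divides by the local standard
# deviation, so the correlation response is insensitive to slowly varying
# illumination gradients -- the property attributed to the SPOT-MACH filter above.
response = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, peak, _, peak_loc = cv2.minMaxLoc(response)
print("correlation peak", peak, "at", peak_loc)

# SIFT relies on local gradient-based keypoints; in regions of poor contrast
# few keypoints survive the detector's contrast threshold.
sift = cv2.SIFT_create(contrastThreshold=0.04)
keypoints, _ = sift.detectAndCompute(scene, None)
print(len(keypoints), "SIFT keypoints detected")
```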

    Machine Vision Application For Automatic Defect Segmentation In Weld Radiographs

    The objective of this research is to develop an automatic weld defect segmentation methodology that can segment the different types of defects present in radiographic images of welds. The segmentation methodology consists of three main algorithms, namely a label removal algorithm, a weld extraction algorithm and a defect segmentation algorithm. The label removal algorithm was developed to detect and remove labels printed on weld radiographs automatically, before the weld extraction and defect segmentation algorithms are applied to the radiographic image. The weld extraction algorithm was developed to locate and extract the weld region automatically from intensity profiles taken across the image using graphical analysis; it is able to extract the weld from a radiograph regardless of whether the intensity profile is Gaussian, an improvement over previous weld extraction methods, which are limited to weld images with Gaussian intensity profiles. Finally, a defect segmentation algorithm was developed to segment defects automatically from the image using background subtraction and a rank leveling method.
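    A minimal sketch of the background-subtraction step described above, under the assumption that a wide median (rank) filter is used to estimate the slowly varying weld background; kernel size and thresholding rule are illustrative choices, not the thesis' exact method.
```python
# Sketch only: rank-filter background estimate, subtraction, and thresholding
# to obtain a candidate defect mask from a weld radiograph.
import cv2

radiograph = cv2.imread("weld_radiograph.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Rank leveling (approximated here by a wide median filter): small defects are
# suppressed, leaving the smooth background trend of the weld region.
background = cv2.medianBlur(radiograph, 31)

# Background subtraction: defects appear as local deviations from the trend.
difference = cv2.absdiff(radiograph, background)

# Simple Otsu threshold on the difference image to segment candidate defects.
_, defect_mask = cv2.threshold(difference, 0, 255,
                               cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("defect_mask.png", defect_mask)
```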

    Virtual Frame Technique: Ultrafast Imaging with Any Camera

    Many phenomena of interest in nature and industry occur rapidly and are difficult and cost-prohibitive to visualize properly without specialized cameras. Here we describe in detail the Virtual Frame Technique (VFT), a simple, useful, and accessible form of compressed sensing that increases the frame acquisition rate of any camera by several orders of magnitude by leveraging its dynamic range. VFT is a powerful tool for capturing rapid phenomena where the dynamics constitute a transition between two states, and are thus binary. The advantages of VFT are demonstrated by examining such dynamics in five physical processes at unprecedented rates and spatial resolution: fracture of an elastic solid, wetting of a solid surface, rapid fingerprint reading, peeling of adhesive tape, and impact of an elastic hemisphere on a hard surface. We show that the performance of the VFT exceeds that of any commercial high-speed camera not only in imaging rate but also in field of view, achieving a 65 MHz frame rate at 4 MPx resolution. Finally, we discuss the performance of the VFT with several commercially available conventional and high-speed cameras. In principle, modern cell phones can achieve imaging rates of over a million frames per second using the VFT. Comment: 7 pages, 4 figures, 1 supplementary video.
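    A minimal sketch of the core idea as described above, under the assumption that the recorded process switches pixels from dark to bright: a pixel's accumulated intensity in a single long exposure is proportional to how long it spent in the bright state, so thresholding the exposure at successive grey levels recovers a time-ordered stack of binary "virtual frames". Array shapes and the number of levels are illustrative.
```python
# Sketch only: slicing one long-exposure image into binary virtual frames.
import numpy as np

exposure = np.random.rand(512, 512)   # placeholder for one long-exposure image scaled to [0, 1]
n_levels = 256                        # one virtual frame per usable grey level

# Pixels that switched to the bright state earliest accumulate the most light,
# so higher thresholds isolate earlier moments of the transition.
levels = np.linspace(0.0, 1.0, n_levels, endpoint=False)[1:]
virtual_frames = np.stack([exposure >= level for level in levels]).astype(np.uint8)
print(virtual_frames.shape)           # (n_levels - 1, 512, 512): one binary frame per level
```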

    Review on Efficient Contrast Enhancement Technique for Low Illumination Color Images

    A digital color image, by its fundamental purpose, should provide a faithful perception of the scene to a human viewer or to a computer carrying out automation tasks such as object recognition. An image of high quality that truly represents the captured object and scene is hence in great demand. Contrast is an important factor in any subjective evaluation of image quality: it is the difference in visual properties that makes an object distinguishable from other objects and from the background. Human visual perception, however, responds to the hue (H), saturation (S) and intensity (I) attributes carried by a color image. Therefore, when such an image has to be processed, most approaches convert the RGB space into a convenient working space that is closer to human perception.
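    A minimal sketch of the working-space conversion discussed above: the RGB image is mapped to HSV, the contrast of the value (intensity) channel alone is enhanced, and the result is converted back so that hue and saturation are left untouched. The choice of CLAHE as the enhancer is an assumption for illustration; the review surveys several enhancement techniques.
```python
# Sketch only: enhance intensity in HSV space while preserving chromatic channels.
import cv2

bgr = cv2.imread("low_light_image.png")             # hypothetical input (OpenCV loads BGR)
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)

# Contrast-limited adaptive histogram equalisation on the value channel only.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
v_enhanced = clahe.apply(v)

enhanced = cv2.cvtColor(cv2.merge((h, s, v_enhanced)), cv2.COLOR_HSV2BGR)
cv2.imwrite("enhanced.png", enhanced)
```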

    Redefining A in RGBA: Towards a Standard for Graphical 3D Printing

    Advances in multimaterial 3D printing have the potential to reproduce various visual appearance attributes of an object in addition to its shape. Since many existing 3D file formats encode color and translucency by RGBA textures mapped to 3D shapes, RGBA information is particularly important for practical applications. In contrast to color (encoded by RGB), which is specified by the object's reflectance, selected viewing conditions and a standard observer, translucency (encoded by A) is not linked to any measurable physical or perceptual quantity. Thus, reproducing translucency encoded by A is open for interpretation. In this paper, we propose a rigorous definition for A suitable for use in graphical 3D printing, which is independent of the 3D printing hardware and software, and which links both optical material properties and perceptual uniformity for human observers. By deriving our definition from the absorption and scattering coefficients of virtual homogeneous reference materials with an isotropic phase function, we achieve two important properties. First, a simple adjustment of A is possible, which preserves the translucency appearance if an object is re-scaled for printing. Second, determining the value of A for a real (potentially non-homogeneous) material can be achieved by minimizing a distance function between light transport measurements of this material and simulated measurements of the reference materials. Such measurements can be conducted by commercial spectrophotometers used in graphic arts. Finally, we conduct visual experiments employing the method of constant stimuli, and derive from them an embedding of A into a nearly perceptually uniform scale of translucency for the reference materials. Comment: 20 pages (incl. appendices), 20 figures. Version with higher quality images: https://cloud-ext.igd.fraunhofer.de/s/pAMH67XjstaNcrF (main article) and https://cloud-ext.igd.fraunhofer.de/s/4rR5bH3FMfNsS5q (appendix). Supplemental material including code: https://cloud-ext.igd.fraunhofer.de/s/9BrZaj5Uh5d0cOU/downloa
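    A minimal sketch of the fitting step described above: the value of A for a real material is chosen to minimize a distance between measured light transport and simulated transport of the homogeneous reference materials. The function simulate_reference_transport below is a hypothetical stand-in for the paper's reference-material simulation, and the measurement values are placeholders; only the optimisation pattern is illustrated.
```python
# Sketch only: fit A by minimizing a distance between measured and simulated
# light-transport values of the reference materials.
import numpy as np
from scipy.optimize import minimize_scalar

measured = np.array([0.42, 0.31, 0.18])   # placeholder spectrophotometer readings

def simulate_reference_transport(alpha):
    # Hypothetical model mapping A to a few simulated transport values;
    # the paper derives these from absorption/scattering coefficients.
    return np.array([1.0 - alpha, 0.8 * (1.0 - alpha), 0.5 * (1.0 - alpha)])

def distance(alpha):
    return np.linalg.norm(simulate_reference_transport(alpha) - measured)

result = minimize_scalar(distance, bounds=(0.0, 1.0), method="bounded")
print("fitted A =", result.x)
```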

    Fusion of Visual and Thermal Images Using Genetic Algorithms

    Biometric technologies such as fingerprint, hand geometry, face and iris recognition are widely used to establish a person's identity. The face recognition system is currently one of the most important biometric technologies; it identifies a person by comparing individually acquired face images with a set of pre-stored face templates in a database.