
    Benchmarking Image Processing Algorithms for Unmanned Aerial System-Assisted Crack Detection in Concrete Structures

    This paper summarizes the results of traditional image processing algorithms for detection of defects in concrete using images taken by Unmanned Aerial Systems (UASs). Such algorithms are useful for improving the accuracy of crack detection during autonomous inspection of bridges and other structures, and they have yet to be compared and evaluated on a dataset of concrete images taken by UAS. The authors created a generic image processing algorithm for crack detection, which included the major steps of filter design, edge detection, image enhancement, and segmentation, designed to uniformly compare different edge detectors. Edge detection was carried out by six filters in the spatial (Roberts, Prewitt, Sobel, and Laplacian of Gaussian) and frequency (Butterworth and Gaussian) domains. These algorithms were applied to fifty images each of defective and sound concrete. Performances of the six filters were compared in terms of accuracy, precision, minimum detectable crack width, computational time, and noise-to-signal ratio. In general, frequency domain techniques were slower than spatial domain methods because of the computational intensity of the Fourier and inverse Fourier transformations used to move between the spatial and frequency domains. Frequency domain methods also produced noisier images than spatial domain methods. Crack detection in the spatial domain using the Laplacian of Gaussian filter proved to be the fastest, most accurate, and most precise method, and it resulted in the finest detectable crack width. The Laplacian of Gaussian filter in the spatial domain is recommended for future applications of real-time crack detection using UAS.
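
    As a rough illustration of the spatial-domain Laplacian of Gaussian approach recommended in this abstract, the sketch below smooths a grayscale concrete image, takes the Laplacian, and thresholds the response magnitude. The file name, sigma, threshold, and morphological cleanup are illustrative assumptions, not the paper's calibrated pipeline.

```python
import cv2
import numpy as np

# Load a concrete surface image in grayscale ("concrete.png" is an assumed input).
img = cv2.imread("concrete.png", cv2.IMREAD_GRAYSCALE).astype(np.float64) / 255.0

# Gaussian smoothing followed by the Laplacian approximates the LoG filter;
# thin dark cracks produce a strong second-derivative response.
blurred = cv2.GaussianBlur(img, ksize=(0, 0), sigmaX=2.0)
log_response = cv2.Laplacian(blurred, cv2.CV_64F)

# Threshold the response magnitude (0.05 is an assumed value) and remove
# isolated noise pixels with a small morphological opening.
crack_map = (np.abs(log_response) > 0.05).astype(np.uint8)
crack_map = cv2.morphologyEx(crack_map, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

print("candidate crack pixels:", int(crack_map.sum()))
```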

    The Filament Sensor for Near Real-Time Detection of Cytoskeletal Fiber Structures

    A reliable extraction of filament data from microscopic images is of high interest in the analysis of acto-myosin structures as early morphological markers in mechanically guided differentiation of human mesenchymal stem cells and in the understanding of the underlying fiber arrangement processes. In this paper, we propose the filament sensor (FS), a fast and robust processing sequence which detects and records location, orientation, length, and width for every single filament in an image, and thus allows for the above-described analysis. The extraction of these features has previously not been possible with existing methods. We evaluate the performance of the proposed FS in terms of accuracy and speed in comparison to three existing methods with respect to their limited output. Further, we provide a benchmark dataset of real cell images along with filaments manually marked by a human expert, as well as simulated benchmark images. The FS clearly outperforms existing methods in terms of computational runtime and filament extraction accuracy. The implementation of the FS and the benchmark database are available as open source. Comment: 32 pages, 21 figures.
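
    The sketch below is not the Filament Sensor itself; it only illustrates the kind of per-filament output the abstract describes (location, orientation, length) using a generic binarize, thin, and fit-segments approach. The file name and all thresholds are assumptions, and the thinning step relies on the opencv-contrib package.

```python
import cv2
import numpy as np

# Load a fluorescence image ("cell.png" is an assumed input) and binarize it.
img = cv2.imread("cell.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Thin the fibers to one-pixel-wide curves (requires opencv-contrib-python),
# then extract straight segments with a probabilistic Hough transform.
skeleton = cv2.ximgproc.thinning(binary)
segments = cv2.HoughLinesP(skeleton, rho=1, theta=np.pi / 180,
                           threshold=20, minLineLength=15, maxLineGap=3)

# Each segment yields a location, a length, and an orientation in [0, 180) degrees.
for x1, y1, x2, y2 in (segments[:, 0] if segments is not None else []):
    length = float(np.hypot(x2 - x1, y2 - y1))
    orientation = float(np.degrees(np.arctan2(y2 - y1, x2 - x1))) % 180.0
    print(f"({x1},{y1})-({x2},{y2})  length={length:.1f}px  angle={orientation:.1f}deg")
```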

    Graph Spectral Image Processing

    The recent advent of graph signal processing (GSP) has spurred intensive studies of signals that live naturally on irregular data kernels described by graphs (e.g., social networks, wireless sensor networks). Though a digital image contains pixels that reside on a regularly sampled 2D grid, if one can design an appropriate underlying graph connecting pixels with weights that reflect the image structure, then one can interpret the image (or image patch) as a signal on a graph and apply GSP tools for processing and analysis of the signal in the graph spectral domain. In this article, we overview recent graph spectral techniques in GSP specifically for image/video processing. The topics covered include image compression, image restoration, image filtering, and image segmentation.
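
    A minimal sketch of the core GSP idea on an image patch: build a 4-connected pixel graph with intensity-dependent weights, use the Laplacian eigenvectors as the graph Fourier basis, and filter the patch in the graph spectral domain. The patch size, weight kernel, and spectral cutoff are illustrative assumptions, not a specific method from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
patch = rng.random((8, 8))            # stand-in for an 8x8 image patch
n = patch.size
x = patch.ravel()                     # graph signal: one value per pixel

# Adjacency of the 4-connected pixel grid, weighted by intensity similarity.
W = np.zeros((n, n))
sigma = 0.2
for r in range(8):
    for c in range(8):
        i = r * 8 + c
        for dr, dc in ((0, 1), (1, 0)):
            rr, cc = r + dr, c + dc
            if rr < 8 and cc < 8:
                j = rr * 8 + cc
                w = np.exp(-((patch[r, c] - patch[rr, cc]) ** 2) / (2 * sigma ** 2))
                W[i, j] = W[j, i] = w

L = np.diag(W.sum(axis=1)) - W        # combinatorial graph Laplacian
evals, U = np.linalg.eigh(L)          # graph Fourier basis (Laplacian eigenvectors)

x_hat = U.T @ x                       # graph Fourier transform of the patch
x_hat[evals > 2.0] = 0.0              # crude low-pass filter in the spectral domain
x_smooth = (U @ x_hat).reshape(8, 8)  # filtered patch back in the pixel domain

print("energy kept:", float(np.sum(x_hat ** 2) / np.sum((U.T @ x) ** 2)))
```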

    Deep Bilateral Learning for Real-Time Image Enhancement

    Performance is a critical challenge in mobile image processing. Given a reference imaging pipeline, or even human-adjusted pairs of images, we seek to reproduce the enhancements and enable real-time evaluation. For this, we introduce a new neural network architecture inspired by bilateral grid processing and local affine color transforms. Using pairs of input/output images, we train a convolutional neural network to predict the coefficients of a locally-affine model in bilateral space. Our architecture learns to make local, global, and content-dependent decisions to approximate the desired image transformation. At runtime, the neural network consumes a low-resolution version of the input image, produces a set of affine transformations in bilateral space, upsamples those transformations in an edge-preserving fashion using a new slicing node, and then applies those upsampled transformations to the full-resolution image. Our algorithm processes high-resolution images on a smartphone in milliseconds, provides a real-time viewfinder at 1080p resolution, and matches the quality of state-of-the-art approximation techniques on a large class of image operators. Unlike previous work, our model is trained off-line from data and therefore does not require access to the original operator at runtime. This allows our model to learn complex, scene-dependent transformations for which no reference implementation is available, such as the photographic edits of a human retoucher. Comment: 12 pages, 14 figures, SIGGRAPH 2017.
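
    The sketch below illustrates only the final slice-and-apply stage of a bilateral-grid pipeline of this kind: a coarse grid of per-cell 3x4 affine color transforms (random here, standing in for network output) is sampled by trilinear interpolation using a full-resolution guidance map and applied per pixel. Grid dimensions, the guidance map, and all values are assumptions, not the paper's trained architecture.

```python
import numpy as np
from scipy.ndimage import map_coordinates

H, W = 256, 256
rng = np.random.default_rng(0)
image = rng.random((H, W, 3))                 # full-resolution RGB input (stand-in)
guide = image.mean(axis=2)                    # simple luminance guidance map

# Bilateral grid of affine coefficients: (grid_y, grid_x, grid_z, 3 outputs, 4 inputs).
gh, gw, gz = 16, 16, 8
grid = rng.normal(size=(gh, gw, gz, 3, 4)) * 0.1
grid[..., :3, :3] += np.eye(3)                # bias toward the identity transform

# Trilinear slicing: fractional grid coordinates for every pixel, with the
# guidance value selecting the position along the grid's intensity axis.
ys = np.linspace(0, gh - 1, H)[:, None].repeat(W, axis=1)
xs = np.linspace(0, gw - 1, W)[None, :].repeat(H, axis=0)
zs = guide * (gz - 1)
coords = np.stack([ys.ravel(), xs.ravel(), zs.ravel()])

# Interpolate each of the 12 affine coefficients at every pixel location.
coeffs = np.stack([
    map_coordinates(grid[..., i, j], coords, order=1).reshape(H, W)
    for i in range(3) for j in range(4)
], axis=-1).reshape(H, W, 3, 4)

# Apply the per-pixel affine transform to homogeneous RGB values.
rgb1 = np.concatenate([image, np.ones((H, W, 1))], axis=2)
output = np.einsum('hwij,hwj->hwi', coeffs, rgb1)
print("output range:", float(output.min()), float(output.max()))
```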

    Stylizing Face Images via Multiple Exemplars

    We address the problem of transferring the style of a headshot photo to face images. Existing methods using a single exemplar lead to inaccurate results when the exemplar does not contain sufficient stylized facial components for a given photo. In this work, we propose an algorithm to stylize face images using multiple exemplars containing different subjects in the same style. Patch correspondences between an input photo and multiple exemplars are established using a Markov Random Field (MRF), which enables accurate local energy transfer via Laplacian stacks. As image patches from multiple exemplars are used, the boundaries of facial components on the target image are inevitably inconsistent. The artifacts are removed by a post-processing step using an edge-preserving filter. Experimental results show that the proposed algorithm consistently produces visually pleasing results. Comment: In CVIU 2017. Project Page: http://www.cs.cityu.edu.hk/~yibisong/cviu17/index.htm
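
    As a rough sketch of the Laplacian-stack energy transfer mentioned in this abstract, the code below matches per-scale local contrast between an input photo and a single, already-matched exemplar; the MRF patch matching across multiple exemplars and the edge-preserving post-filter are not reproduced. File names, stack sigmas, and the gain clipping range are assumptions.

```python
import cv2
import numpy as np

def laplacian_stack(img, sigmas=(2, 4, 8, 16)):
    """Band-pass decomposition: differences of progressively blurred copies."""
    levels, prev = [], img
    for s in sigmas:
        blurred = cv2.GaussianBlur(img, (0, 0), s)
        levels.append(prev - blurred)
        prev = blurred
    levels.append(prev)                        # residual low-pass layer
    return levels

def local_energy(band, sigma):
    """Locally averaged band energy, used to match contrast per scale."""
    return np.sqrt(cv2.GaussianBlur(band * band, (0, 0), sigma) + 1e-8)

# "input.png" and "exemplar.png" are assumed, already-matched grayscale images.
inp = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE).astype(np.float64) / 255.0
ref = cv2.imread("exemplar.png", cv2.IMREAD_GRAYSCALE).astype(np.float64) / 255.0

sigmas = (2, 4, 8, 16)
out_bands = []
for k, (bi, br) in enumerate(zip(laplacian_stack(inp, sigmas),
                                 laplacian_stack(ref, sigmas))):
    if k < len(sigmas):
        s = sigmas[k]
        gain = local_energy(br, s) / local_energy(bi, s)   # transfer band energy
        out_bands.append(bi * np.clip(gain, 0.5, 3.0))
    else:
        out_bands.append(br)                   # take the exemplar's coarse layer

result = np.clip(sum(out_bands), 0.0, 1.0)
cv2.imwrite("stylized.png", (result * 255).astype(np.uint8))
```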