
    Demodulation of Spatial Carrier Images: Performance Analysis of Several Algorithms Using a Single Image

    http://link.springer.com/article/10.1007%2Fs11340-013-9741-6
    Optical full-field techniques are of great importance in modern experimental mechanics. Although they are reasonably widespread in university laboratories, their adoption by industrial companies remains very limited for several reasons, especially the lack of metrological performance assessment. A full-field measurement can be characterized by its resolution, bias, measuring range, and by a specific quantity, the spatial resolution. The present paper proposes an original procedure to estimate, in a single step, the resolution, bias and spatial resolution of a given operator (decoding algorithms such as image correlation, low-pass filters, differentiation tools, etc.). The procedure is based on the construction of a particular multi-frequential field and a Bode diagram representation of the results. The analysis is applied to various phase demodulation algorithms suited to estimating in-plane displacements. GDR CNRS 2519 “Mesures de Champs et Identification en Mécanique des Solides”
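The single-step characterization can be illustrated numerically: probe an operator with known spatial frequencies and read off its amplitude response, Bode-style. A minimal sketch in Python (NumPy assumed; the probe frequencies and the 9-point moving-average operator are illustrative choices, not the paper's algorithms):

```python
import numpy as np

def frequency_response(operator, freqs, n=2048):
    """Probe a 1-D operator with pure sinusoids and record the
    amplitude ratio at each spatial frequency (Bode-style)."""
    x = np.arange(n)
    gains = []
    for f in freqs:
        out = operator(np.sin(2 * np.pi * f * x))
        gains.append((out.max() - out.min()) / 2.0)  # input amplitude is 1
    return np.array(gains)

def moving_average(sig, w=9):
    """Illustrative operator: a 9-point moving average (low-pass)."""
    return np.convolve(sig, np.ones(w) / w, mode="same")

freqs = np.array([0.01, 0.05, 0.1, 0.2])  # cycles per pixel
gains = frequency_response(moving_average, freqs)
# Gain falls off with frequency; the cut-off sets the spatial resolution.
```

Plotting `gains` against `freqs` on log axes gives exactly the Bode-diagram view the paper uses to compare operators.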

    Automatic Main Road Extraction from High Resolution Satellite Imagery

    Road information is essential for automatic GIS (geographical information system) data acquisition, transportation and urban planning. Automatic road-network detection from high resolution satellite imagery holds great potential for significantly reducing database development/updating cost and turnaround time. From so-called low-level feature detection to high-level context-supported grouping, many algorithms and methodologies have been presented for this purpose, yet no practical system can fully automatically extract road networks from space imagery for automatic mapping. This paper presents a methodology for automatic main road detection from high resolution IKONOS satellite imagery. The strategies include a multiresolution (image pyramid) method, Gaussian blurring, a line finder using a 1-dimensional template correlation filter, line segment grouping and multi-layer result integration. The multi-layer, multi-resolution approach is a very effective strategy to save processing time and improve robustness. To realize it, the original IKONOS image is downsampled to a set of lower resolutions so that an image pyramid is generated; the line finder, a 1-dimensional template correlation filter applied after Gaussian blurring, then detects road centerlines. Extracted centerline segments may or may not belong to actual roads, and there are two ways to identify their attributes. The first uses segment grouping to form longer line segments and assigns each segment a probability of being a road based on its length and other geometric and photometric attributes; for example, a longer segment is more likely to be a road. A perceptual-grouping-based method is used for road segment linking, with a probability model that takes multiple sources of information into account, including the clues existing in the gaps. The second way to identify the segments is feature detection back in a higher-resolution layer of the image pyramid.
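The blur-then-correlate line finder described above can be sketched as follows (Python with NumPy/SciPy; a synthetic vertical road stands in for real IKONOS data, and a simple global normalisation replaces a full windowed normalised cross-correlation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detect_line_centers(image, template, threshold=0.5):
    """Correlate each row with a 1-D bright-line template and keep the
    best-responding column when its normalised response is high enough."""
    t = template - template.mean()
    t /= np.linalg.norm(t)
    centers = []
    for r, row in enumerate(image):
        rc = row - row.mean()
        resp = np.correlate(rc, t, mode="same") / (np.linalg.norm(rc) + 1e-9)
        c = int(np.argmax(resp))
        if resp[c] > threshold:
            centers.append((r, c))
    return centers

# Synthetic scene: dark background with one bright 3-pixel-wide "road".
img = np.zeros((32, 64))
img[:, 30:33] = 1.0
img = gaussian_filter(img, sigma=1.0)                    # blurring step
template = np.exp(-0.5 * (np.arange(-4, 5) / 1.5) ** 2)  # ridge profile
centers = detect_line_centers(img, template)
```

Running the same detector on each level of an image pyramid, coarse to fine, is what gives the multi-layer strategy its speed and robustness.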

    Standardised convolutional filtering for radiomics

    The Image Biomarker Standardisation Initiative (IBSI) aims to improve reproducibility of radiomics studies by standardising the computational process of extracting image biomarkers (features) from images. We have previously established reference values for 169 commonly used features, created a standard radiomics image processing scheme, and developed reporting guidelines for radiomic studies. However, several aspects are not standardised. Here we present a preliminary version of a reference manual on the use of convolutional image filters in radiomics. Filters, such as wavelets or Laplacian of Gaussian filters, play an important part in emphasising specific image characteristics such as edges and blobs. Features derived from filter response maps have been found to be poorly reproducible. This reference manual forms the basis of ongoing work on standardising convolutional filters in radiomics, and will be updated as this work progresses.
    Comment: 62 pages. For additional information see https://theibsi.github.io
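As an illustration of the kind of filter-based feature extraction the manual standardises, the following sketch (Python with SciPy; the feature names and summary statistics are illustrative, not IBSI reference definitions) computes Laplacian-of-Gaussian response maps and summarises each one:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_features(image, sigmas=(1.0, 2.0, 4.0)):
    """Compute Laplacian-of-Gaussian response maps at several scales and
    summarise each map with first-order statistics (illustrative names)."""
    feats = {}
    for s in sigmas:
        resp = gaussian_laplace(image, sigma=s)  # LoG filter response map
        feats[f"log_sigma{s}_mean"] = float(resp.mean())
        feats[f"log_sigma{s}_var"] = float(resp.var())
    return feats

# Synthetic "lesion": a bright Gaussian blob on a flat background.
y, x = np.mgrid[0:64, 0:64]
blob = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / (2 * 3.0 ** 2))
feats = log_features(blob)
```

The reproducibility problem the manual addresses lives in details like boundary handling and sigma conventions, which vary between implementations of exactly this pipeline.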


    Deep learning approach to Fourier ptychographic microscopy

    Convolutional neural networks (CNNs) have gained tremendous success in solving complex inverse problems. The aim of this work is to develop a novel CNN framework to reconstruct video sequences of dynamic live cells captured using a computational microscopy technique, Fourier ptychographic microscopy (FPM). The unique feature of the FPM is its capability to reconstruct images with both wide field-of-view (FOV) and high resolution, i.e. a large space-bandwidth-product (SBP), by taking a series of low resolution intensity images. For live cell imaging, a single FPM frame contains thousands of cell samples with different morphological features. Our idea is to fully exploit the statistical information provided by these large spatial ensembles so as to make predictions in a sequential measurement, without using any additional temporal dataset. Specifically, we show that it is possible to reconstruct high-SBP dynamic cell videos by a CNN trained only on the first FPM dataset captured at the beginning of a time-series experiment. Our CNN approach reconstructs a 12800×10800 pixel phase image using only ∼25 seconds, a 50× speedup compared to the model-based FPM algorithm. In addition, the CNN further reduces the required number of images in each time frame by ∼ 6×. Overall, this significantly improves the imaging throughput by reducing both the acquisition and computational times. The proposed CNN is based on the conditional generative adversarial network (cGAN) framework. We further propose a mixed loss function that combines the standard image domain loss and a weighted Fourier domain loss, which leads to improved reconstruction of the high frequency information. Additionally, we also exploit transfer learning so that our pre-trained CNN can be further optimized to image other cell types. 
    Our technique demonstrates a promising deep learning approach to continuously monitor large live-cell populations over an extended time and gather useful spatial and temporal information with sub-cellular resolution. We would like to thank NVIDIA Corporation for supporting us with the GeForce Titan Xp through the GPU Grant Program. First author draft.
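The mixed loss described above, a standard image-domain term plus a weighted Fourier-domain term, can be sketched as follows (NumPy; the L1 distances and scalar weight are assumptions for illustration, and the paper's exact formulation and weighting may differ):

```python
import numpy as np

def mixed_loss(pred, target, fourier_weight=0.1):
    """Pixel-wise L1 loss plus a weighted L1 loss on Fourier magnitudes,
    which explicitly penalises errors in high-frequency content."""
    image_loss = np.mean(np.abs(pred - target))
    f_pred, f_tgt = np.fft.fft2(pred), np.fft.fft2(target)
    fourier_loss = np.mean(np.abs(np.abs(f_pred) - np.abs(f_tgt)))
    return image_loss + fourier_weight * fourier_loss

rng = np.random.default_rng(0)
target = rng.random((32, 32))
loss_perfect = mixed_loss(target, target)        # zero for identical images
loss_biased = mixed_loss(target + 0.05, target)  # positive for any mismatch
```

Since every spatial frequency contributes equally to the plain image loss, the extra Fourier term is one way to keep the generator from smoothing away fine phase detail.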

    Developing a remote sensing system based on X-band radar technology for coastal morphodynamics study

    New data processing techniques are proposed to assess the scope and limitations of radar-derived sea state parameters, coastline evolution and water depth estimates. Most of the research focuses on the Colombian Caribbean coast and the Western Mediterranean Sea. First, a novel procedure to mitigate shadowing in radar images is proposed. The method compensates for distortions introduced by the radar acquisition process and for the power decay of the radar signal along range, applying image enhancement techniques through a pair of pre-processing steps based on filtering and interpolation. Results reveal that the proposed methodology reproduces sea state parameters in nearshore areas with high accuracy. The improvement resulting from the proposed method is assessed on a coral reef barrier, introducing a completely novel use of X-band radar in coastal environments. So far, wave energy dissipation on a coral reef barrier has been studied with a few in-situ sensors placed in a straight line perpendicular to the coastline, but it had never been described using marine radars. In this context, marine radar images are used to describe prominent features of coral reefs, including the delineation of the reef's morphological structure, wave energy dissipation and wave transformation processes in the lagoon of the San Andres Island barrier-reef system. Results show that the reef attenuates incident waves by approximately 75% through both frictional and wave-breaking dissipation, with an equivalent bottom roughness of 0.20 m and a wave friction factor of 0.18. These parameters are comparable with estimates reported for other shallow coral reef lagoons as well as for meadow canopies, obtained using in-situ measurements of wave parameters. Doctoral thesis (Doctorado en Ingeniería Eléctrica y Electrónica).
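The two pre-processing ideas, range-decay compensation and interpolation across shadowed pixels, might be sketched as follows (Python/NumPy; the r**-k power-decay model, the exponent, and the zero-valued shadow mask are simplifying assumptions, not the thesis's actual method):

```python
import numpy as np

def preprocess_radar(image_polar, decay_exponent=3.0):
    """(1) Compensate the power decay of the radar signal along range
    with an r**-k model; (2) fill shadowed (zero-valued) pixels by
    linear interpolation along each azimuth line."""
    n_rng = image_polar.shape[1]
    rng_axis = np.arange(1, n_rng + 1, dtype=float)
    out = image_polar * rng_axis ** decay_exponent  # range-decay compensation
    idx = np.arange(n_rng)
    for beam in out:                                # one azimuth line at a time
        shadow = beam == 0.0
        if shadow.any() and not shadow.all():
            beam[shadow] = np.interp(idx[shadow], idx[~shadow], beam[~shadow])
    return out

# Demo: unit reflectivity seen through r**-3 decay, with a shadowed band.
r = np.arange(1, 65, dtype=float)
scene = np.tile(1.0 / r ** 3, (4, 1))
scene[:, 20:25] = 0.0                               # shadowed pixels
restored = preprocess_radar(scene)
```

Restoring a spatially uniform backscatter level like this is what lets the wave-inversion step recover sea state parameters in the nearshore zone.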

    Solving Inverse Problems with Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity

    A general framework for solving image inverse problems is introduced in this paper. The approach is based on Gaussian mixture models, estimated via a computationally efficient MAP-EM algorithm. A dual mathematical interpretation of the proposed framework with structured sparse estimation is described, which shows that the resulting piecewise linear estimate stabilizes the estimation when compared to traditional sparse inverse problem techniques. This interpretation also suggests an effective, dictionary-motivated initialization for the MAP-EM algorithm. We demonstrate that in a number of image inverse problems, including inpainting, zooming, and deblurring, the same algorithm produces results that are equal to, often significantly better than, or within a very small margin of the best published ones, at a lower computational cost.
    Comment: 30 pages
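The piecewise linear character of the estimator can be illustrated in the simplest setting: pick the Gaussian model with the highest likelihood for the observed patch, then apply that model's linear (Wiener) estimate. A sketch in Python/NumPy, restricted to denoising rather than the general degradation-plus-EM setting of the paper:

```python
import numpy as np

def piecewise_linear_estimate(y, means, covs, sigma2):
    """Select the Gaussian model with the highest likelihood for the
    noisy patch y, then apply that model's linear (Wiener) estimate.
    Linear within each model => piecewise linear overall."""
    d = y.shape[0]
    best_k, best_ll = 0, -np.inf
    for k, (mu, sig) in enumerate(zip(means, covs)):
        s = sig + sigma2 * np.eye(d)   # covariance of y under model k
        diff = y - mu
        _, logdet = np.linalg.slogdet(s)
        ll = -0.5 * (diff @ np.linalg.solve(s, diff) + logdet)
        if ll > best_ll:
            best_k, best_ll = k, ll
    mu, sig = means[best_k], covs[best_k]
    s = sig + sigma2 * np.eye(d)
    x_hat = mu + sig @ np.linalg.solve(s, y - mu)  # Wiener / linear MMSE step
    return x_hat, best_k

# Two toy models with well-separated means; y sits on the first one.
means = [np.full(4, 5.0), np.full(4, -5.0)]
covs = [np.eye(4), np.eye(4)]
x_hat, k = piecewise_linear_estimate(np.full(4, 5.0), means, covs, sigma2=1.0)
```

In the full MAP-EM algorithm this selection-plus-Wiener step is the E/MAP stage, alternated with re-estimation of the Gaussian parameters from the current patch assignments.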

    Curvelets and Ridgelets

    Despite the fact that wavelets have had a wide impact in image processing, they fail to efficiently represent objects with highly anisotropic elements such as lines or curvilinear structures (e.g. edges). The reason is that wavelets are non-geometrical and do not exploit the regularity of the edge curve. The ridgelet and curvelet transforms [3, 4] were developed in answer to the weakness of the separable wavelet transform in sparsely representing what appear to be simple building atoms in an image, namely lines, curves and edges. Curvelets and ridgelets take the form of basis elements which exhibit high directional sensitivity and are highly anisotropic [5, 6, 7, 8]. These recent geometric image representations are built upon ideas of multiscale analysis and geometry. They have had important success in a wide range of image processing applications including denoising [8, 9, 10], deconvolution [11, 12], contrast enhancement [13], texture analysis [14, 15], detection [16], watermarking [17], component separation [18], inpainting [19, 20] and blind source separation [21, 22]. Curvelets have also proven useful in diverse fields beyond traditional image processing applications, for example seismic imaging [10, 23, 24], astronomical imaging [25, 26, 27], and scientific computing and the analysis of partial differential equations [28, 29]. Another reason for the success of ridgelets and curvelets is the availability of fast transform algorithms in non-commercial software packages following the philosophy of reproducible research [30, 31].
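The idea behind ridgelets, that a line in the image becomes a point in the Radon domain and is therefore captured by a few wavelet coefficients along projections, can be illustrated with a toy transform (Python with NumPy/SciPy; the rotation-based projections and single-level Haar detail are simplifications, not the fast algorithms of [30, 31]):

```python
import numpy as np
from scipy.ndimage import rotate

def haar_detail(sig):
    """One level of 1-D Haar wavelet detail coefficients."""
    sig = sig[: len(sig) // 2 * 2]
    return (sig[0::2] - sig[1::2]) / np.sqrt(2)

def ridgelet_like(image, angles):
    """Toy ridgelet analysis: Radon-style projections (rotate, then sum
    columns) followed by a 1-D wavelet along each projection."""
    coeffs = {}
    for a in angles:
        proj = rotate(image, a, reshape=False, order=1).sum(axis=0)
        coeffs[a] = haar_detail(proj)
    return coeffs

# A vertical line: its energy concentrates in the 0-degree projection.
img = np.zeros((33, 33))
img[:, 16] = 1.0
coeffs = ridgelet_like(img, angles=[0, 45, 90])
energy = {a: float(np.sum(c ** 2)) for a, c in coeffs.items()}
```

The same line would need many coefficients in a separable 2-D wavelet basis; concentrating its energy at one angle is precisely the anisotropy the passage describes.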