
    Fast Detection of Curved Edges at Low SNR

    Detecting edges is a fundamental problem in computer vision with many applications, some involving very noisy images. While most edge detection methods are fast, they perform well only on relatively clean images. Indeed, edges in such images can be reliably detected using only local filters. Detecting faint edges under high levels of noise cannot be done locally at the individual pixel level, and requires more sophisticated global processing. Unfortunately, existing methods that achieve this goal are quite slow. In this paper we develop a novel multiscale method to detect curved edges in noisy images. While our algorithm searches for edges over a huge set of candidate curves, it does so in a practical runtime, nearly linear in the total number of image pixels. As we demonstrate experimentally, our algorithm is orders of magnitude faster than previous methods designed to deal with high noise levels. Nevertheless, it obtains comparable, if not better, edge detection quality on a variety of challenging noisy images. Comment: 9 pages, 11 figures
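    The central claim above — that faint edges cannot be detected per pixel but become detectable once evidence is accumulated along an entire candidate curve — follows from standard signal averaging: the noise shrinks roughly with the square root of the curve length. A minimal sketch of this effect (generic matched filtering, not the paper's algorithm; all numbers are illustrative):

```python
import random
import statistics

random.seed(0)

def edge_response(length, contrast, sigma):
    """Mean of noisy responses accumulated along a candidate curve."""
    samples = [contrast + random.gauss(0.0, sigma) for _ in range(length)]
    return sum(samples) / length

# A faint edge: contrast well below the per-pixel noise level.
contrast, sigma = 0.2, 1.0

# Per-pixel detection is hopeless; averaging along a long curve
# shrinks the noise by ~sqrt(length), lifting the edge above it.
trials = 2000
short = [edge_response(1, contrast, sigma) for _ in range(trials)]
long_ = [edge_response(100, contrast, sigma) for _ in range(trials)]

print(statistics.stdev(short))  # noise std ~1.0 at a single pixel
print(statistics.stdev(long_))  # noise std ~0.1 along a length-100 curve
```

The residual noise along the length-100 curve (~0.1) now sits well below the 0.2 contrast, which is why searching over long candidate curves pays off despite its cost.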

    The curvelet transform for image denoising

    We describe approximate digital implementations of two new mathematical transforms, namely, the ridgelet transform and the curvelet transform. Our implementations offer exact reconstruction, stability against perturbations, ease of implementation, and low computational complexity. A central tool is Fourier-domain computation of an approximate digital Radon transform. We introduce a very simple interpolation in the Fourier space which takes Cartesian samples and yields samples on a rectopolar grid, which is a pseudo-polar sampling set based on a concentric squares geometry. Despite the crudeness of our interpolation, the visual performance is surprisingly good. Our ridgelet transform applies to the Radon transform a special overcomplete wavelet pyramid whose wavelets have compact support in the frequency domain. Our curvelet transform uses our ridgelet transform as a component step, and implements curvelet subbands using a filter bank of à trous wavelet filters. Our philosophy throughout is that transforms should be overcomplete, rather than critically sampled. We apply these digital transforms to the denoising of some standard images embedded in white noise. In the tests reported here, simple thresholding of the curvelet coefficients is very competitive with "state of the art" techniques based on wavelets, including thresholding of decimated or undecimated wavelet transforms and also including tree-based Bayesian posterior mean methods. Moreover, the curvelet reconstructions exhibit higher perceptual quality than wavelet-based reconstructions, offering visually sharper images and, in particular, higher quality recovery of edges and of faint linear and curvilinear features. Existing theory for curvelet and ridgelet transforms suggests that these new approaches can outperform wavelet methods in certain image reconstruction problems. The empirical results reported here are in encouraging agreement.
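    The à trous ("with holes") filter bank mentioned above is easy to illustrate in one dimension. The sketch below uses the common B3-spline kernel and clamped borders (assumptions; the paper's implementation is 2-D) and demonstrates the property the authors emphasize: the transform is overcomplete, yet summing all planes reconstructs the input exactly.

```python
def a_trous_1d(signal, levels):
    """1D à trous wavelet decomposition.

    At level j the B3-spline kernel [1, 4, 6, 4, 1] / 16 is applied with
    gaps of 2**j between taps; detail planes are simple differences, so
    summing all planes reconstructs the input exactly.
    """
    h = [1 / 16, 4 / 16, 6 / 16, 4 / 16, 1 / 16]
    n = len(signal)
    smooth = list(signal)
    details = []
    for j in range(levels):
        step = 2 ** j
        nxt = []
        for i in range(n):
            acc = 0.0
            for k, w in enumerate(h):
                idx = i + (k - 2) * step
                idx = min(max(idx, 0), n - 1)  # clamp at the borders
                acc += w * smooth[idx]
            nxt.append(acc)
        details.append([a - b for a, b in zip(smooth, nxt)])
        smooth = nxt
    return details, smooth

signal = [0, 0, 1, 3, 7, 3, 1, 0, 0, 0, 5, 0]
details, residual = a_trous_1d(signal, 3)

# Exact reconstruction: sum of detail planes plus the coarse residual.
recon = [sum(vals) + r for vals, r in zip(zip(*details), residual)]
```

Denoising in this framework amounts to thresholding the `details` coefficients before summing — small coefficients are mostly noise, large ones carry structure.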

    Coronal Mass Ejection Detection using Wavelets, Curvelets and Ridgelets: Applications for Space Weather Monitoring

    Coronal mass ejections (CMEs) are large-scale eruptions of plasma and magnetic field that can produce adverse space weather at Earth and other locations in the Heliosphere. Due to the intrinsic multiscale nature of features in coronagraph images, wavelet and multiscale image processing techniques are well suited to enhancing the visibility of CMEs and suppressing noise. However, wavelets are better suited to identifying point-like features, such as noise or background stars, than to enhancing the visibility of the curved form of a typical CME front. Higher order multiscale techniques, such as ridgelets and curvelets, were therefore explored to characterise the morphology (width, curvature) and kinematics (position, velocity, acceleration) of CMEs. Curvelets in particular were found to be well suited to characterising CME properties in a self-consistent manner. Curvelets are thus likely to be of benefit to autonomous monitoring of CME properties for space weather applications. Comment: Accepted for publication in Advances in Space Research (3 April 2010)
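    The kinematic quantities listed above (position, velocity, acceleration) follow from finite differences of the tracked front height. A minimal sketch with hypothetical front heights and cadence (all values illustrative, not from the paper):

```python
R_SUN_KM = 695_700.0   # solar radius in km
CADENCE_S = 720.0      # assumed 12-minute coronagraph cadence

# Hypothetical tracked CME front heights (solar radii) per frame.
heights_rsun = [2.5, 3.1, 3.8, 4.6, 5.5, 6.5]
heights_km = [h * R_SUN_KM for h in heights_rsun]

# Velocity and acceleration from successive differences.
velocity_km_s = [
    (b - a) / CADENCE_S for a, b in zip(heights_km, heights_km[1:])
]
accel_km_s2 = [
    (b - a) / CADENCE_S for a, b in zip(velocity_km_s, velocity_km_s[1:])
]

print(velocity_km_s[0])  # 579.75 km/s over the first interval
```

In practice the front position would come from the curvelet-enhanced images, and the differences would be fitted rather than taken raw, since per-frame position errors are amplified by differencing.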

    A multi-scale filament extraction method: getfilaments

    Far-infrared imaging surveys of Galactic star-forming regions with Herschel have shown that a substantial part of the cold interstellar medium appears as a fascinating web of omnipresent filamentary structures. This highly anisotropic ingredient of the interstellar material further complicates the difficult problem of the systematic detection and measurement of dense cores in the strongly variable but (relatively) isotropic backgrounds. Observational evidence that stars form in dense filaments creates severe problems for automated source extraction methods that must reliably distinguish sources not only from fluctuating backgrounds and noise, but also from the filamentary structures. A previous paper presented the multi-scale, multi-wavelength source extraction method getsources based on a fine spatial scale decomposition and filtering of irrelevant scales from images. In this paper, a multi-scale, multi-wavelength filament extraction method getfilaments is presented that solves this problem, substantially improving the robustness of source extraction with getsources in filamentary backgrounds. The main difference is that the filaments extracted by getfilaments are now subtracted by getsources from detection images during source extraction, greatly reducing the chances of contaminating catalogs with spurious sources. The intimate physical relationship between forming stars and filaments seen in Herschel observations demands that accurate filament extraction methods must remove the contribution of sources and that accurate source extraction methods must be able to remove underlying filamentary structures. Source extraction with getsources now provides researchers also with clean images of filaments, free of sources, noise, and isotropic backgrounds. Comment: 15 pages, 19 figures, to be published in Astronomy & Astrophysics; language polished for better readability
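    The core idea — subtract the filamentary (large-scale) component from the detection image so that only compact sources remain — can be caricatured in one dimension, where a crude running mean stands in for the multi-scale filament estimate. This is only a sketch of scale separation, not the getfilaments algorithm:

```python
def boxcar(profile, width):
    """Running mean with clamped borders (a crude large-scale estimate)."""
    n, half = len(profile), width // 2
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(sum(profile[lo:hi]) / (hi - lo))
    return out

# A broad 'filament' ridge with a narrow 'source' on top at index 12.
filament = [5.0 if 5 <= i <= 20 else 0.0 for i in range(30)]
profile = list(filament)
profile[12] += 10.0

# Large scales approximate the filament; the residual isolates the source.
background = boxcar(profile, 11)
residual = [p - b for p, b in zip(profile, background)]

peak = max(range(len(residual)), key=residual.__getitem__)
print(peak)  # 12 — the source, recovered despite sitting on the filament
```

A peak finder run on the raw `profile` would be biased by the filament pedestal; running it on `residual` is the 1-D analogue of getsources detecting on filament-subtracted images.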

    Multiscale Fields of Patterns

    We describe a framework for defining high-order image models that can be used in a variety of applications. The approach involves modeling local patterns in a multiscale representation of an image. Local properties of a coarsened image reflect non-local properties of the original image. In the case of binary images local properties are defined by the binary patterns observed over small neighborhoods around each pixel. With the multiscale representation we capture the frequency of patterns observed at different scales of resolution. This framework leads to expressive priors that depend on a relatively small number of parameters. For inference and learning we use an MCMC method for block sampling with very large blocks. We evaluate the approach with two example applications. One involves contour detection. The other involves binary segmentation. Comment: In NIPS 2014
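    The two ingredients of the model — coarsening a binary image and tallying the binary patterns seen in small neighborhoods at each scale — can be sketched as follows. Majority-vote 2x2 coarsening and 3x3 patterns are illustrative choices; the paper's exact construction may differ.

```python
from collections import Counter

def coarsen(img):
    """Halve resolution by majority vote over non-overlapping 2x2 blocks."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [
        [
            1 if img[2*r][2*c] + img[2*r][2*c+1]
               + img[2*r+1][2*c] + img[2*r+1][2*c+1] >= 2 else 0
            for c in range(w)
        ]
        for r in range(h)
    ]

def pattern_counts(img):
    """Frequency of each 3x3 binary pattern, encoded as a 9-bit integer."""
    counts = Counter()
    for r in range(len(img) - 2):
        for c in range(len(img[0]) - 2):
            code = 0
            for dr in range(3):
                for dc in range(3):
                    code = (code << 1) | img[r + dr][c + dc]
            counts[code] += 1
    return counts

# An 8x8 binary image: left half on, right half off (a vertical contour).
img = [[1] * 4 + [0] * 4 for _ in range(8)]

# Pattern statistics at the original and the coarsened scale together
# form the multiscale description the model's prior is built on.
scales = [pattern_counts(img), pattern_counts(coarsen(img))]
```

A prior then scores an image by how well its per-scale pattern frequencies match those learned from training data, which is what makes the model high-order despite the small 3x3 support.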

    Mini Kirsch Edge Detection and Its Sharpening Effect

    In computer vision, edge detection is a crucial step in identifying the objects' boundaries in an image. Existing edge detection methods, operating in either the spatial or the frequency domain, often fail to outline high-continuity object boundaries. In this work, we modified the four-directional mini Kirsch edge detection kernels to enable full-directional edge detection. We also introduced a novel application of the proposed method to image sharpening, adding the resulting edge map onto the original input image to enhance the edge details in the image. In the edge detection performance tests, our proposed method acquired the highest true-edge-pixel and true-non-edge-pixel detection rates, yielding the highest accuracy among all the compared methods. Moreover, the sharpening effect offered by our proposed framework achieves a more favorable visual appearance, with competitive peak signal-to-noise ratio and structural similarity index values compared to the widely used unsharp masking and Laplacian of Gaussian sharpening methods. The further enhanced edges of the sharpened image could potentially contribute to better boundary tracking and higher segmentation accuracy.
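    Since the paper's modified four-directional "mini" kernels are not reproduced here, the classic eight-direction Kirsch compass operator serves to illustrate the compass-kernel idea, together with the paper's sharpening step of adding the (scaled) edge map back onto the input:

```python
KIRSCH_N = [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]]  # classic north kernel

def rotate45(k):
    """Rotate a 3x3 compass kernel by one 45-degree step."""
    ring = [k[0][0], k[0][1], k[0][2], k[1][2],
            k[2][2], k[2][1], k[2][0], k[1][0]]
    ring = ring[-1:] + ring[:-1]
    return [[ring[0], ring[1], ring[2]],
            [ring[7], k[1][1], ring[3]],
            [ring[6], ring[5], ring[4]]]

def kirsch_edges(img):
    """Max response over all eight compass directions at each pixel."""
    kernels, k = [], KIRSCH_N
    for _ in range(8):
        kernels.append(k)
        k = rotate45(k)
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r][c] = max(
                sum(kern[dr][dc] * img[r - 1 + dr][c - 1 + dc]
                    for dr in range(3) for dc in range(3))
                for kern in kernels
            )
    return out

def sharpen(img, edges, weight=0.1):
    """Add the scaled edge map back onto the image, as the paper proposes."""
    return [[p + weight * e for p, e in zip(prow, erow)]
            for prow, erow in zip(img, edges)]

# A horizontal step edge: bright top half, dark bottom half.
img = [[10] * 6] * 3 + [[0] * 6] * 3
edges = kirsch_edges(img)
sharp = sharpen(img, edges)
```

Because the Kirsch weights sum to zero, flat regions respond with 0 and only boundary pixels are boosted, which is what makes the additive sharpening step edge-selective.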

    Classic versus deep learning approaches to address computer vision challenges: a study of faint edge detection and multispectral image registration

    Computer vision involves many challenging problems. While early work utilized classic methods, in recent years solutions have often relied on deep neural networks. In this study, we explore those two classes of methods through two applications at the limit of the ability of current computer vision algorithms: faint edge detection and multispectral image registration. We show that the detection of edges at a low signal-to-noise ratio is a demanding task with proven lower bounds. The introduced method processes straight and curved edges in nearly linear complexity. Moreover, it performs well on noisy simulations, boundary datasets, and real images. However, to improve accuracy and runtime further, a deep solution was also explored. It utilizes a multiscale neural network for the detection of edges in binary images using an edge preservation loss. The second group of work considered in this study addresses multispectral image alignment. Since multispectral fusion is particularly informative, robust image alignment algorithms are required. However, as this cannot be carried out by single-channel registration methods, we propose a traditional approach that relies on a novel edge descriptor within a feature-based registration scheme. Experiments demonstrate that, although it is able to align a wide range of spectral channels, it is not robust to every geometric transformation. To that end, we developed a deep approach for such alignment. In contrast to the previously suggested edge descriptor, our deep approach uses an invariant representation for spectral patches via metric learning that can be seen as a teacher-student method. This work is reported in five published papers with state-of-the-art experimental results and proven theory. As a whole, this research reveals that, while traditional methods are rooted in theoretical principles and are robust across a wide range of images, deep approaches run faster and achieve better performance, provided that sufficient training data are available and that the training data are of the same image type as the data to which the models are applied.
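    The idea behind an edge-based descriptor for cross-spectral alignment — intensities differ between channels, but edge locations largely coincide — can be demonstrated with a crude 1-D sketch: binarize gradients in each channel, then search for the shift that maximizes edge overlap. This is an illustrative toy, not the thesis's descriptor or its deep counterpart.

```python
def edge_map(signal, thresh=1.0):
    """Binary edge indicator from absolute finite differences."""
    return [1 if abs(b - a) > thresh else 0
            for a, b in zip(signal, signal[1:])]

def best_shift(edges_a, edges_b, max_shift=5):
    """Shift of channel b (in samples) maximizing edge overlap with a."""
    def overlap(shift):
        return sum(
            edges_a[i] * edges_b[i + shift]
            for i in range(len(edges_a))
            if 0 <= i + shift < len(edges_b)
        )
    return max(range(-max_shift, max_shift + 1), key=overlap)

# Two 'spectral channels': different intensities, same scene geometry,
# with channel b translated 3 samples to the right.
scene = [0] * 10 + [1] * 10 + [0] * 10
chan_a = [10 * v for v in scene]
chan_b = [0] * 3 + [4 * v for v in scene[:-3]]

print(best_shift(edge_map(chan_a), edge_map(chan_b)))  # 3
```

Direct intensity correlation between `chan_a` and `chan_b` is unreliable because their gains differ; matching on the binarized edge maps sidesteps that, which is the intuition the thesis's feature-based scheme builds on.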