
    Focusing on out-of-focus : assessing defocus estimation algorithms for the benefit of automated image masking

    Acquiring photographs as input for an image-based modelling pipeline is less trivial than often assumed. Photographs should be correctly exposed, cover the subject sufficiently from all possible angles, have the required spatial resolution, be devoid of any motion blur, exhibit accurate focus and feature an adequate depth of field. The last four characteristics all determine the "sharpness" of an image, and the photogrammetric, computer vision and hybrid photogrammetric computer vision communities all assume that the object to be modelled is depicted "acceptably" sharp throughout the whole image collection. Although none of these three fields has ever properly quantified "acceptably sharp", it is more or less standard practice to mask those image portions that appear unsharp due to the limited depth of field around the plane of focus (whether this means blurry object parts or completely out-of-focus backgrounds). This paper assesses how well- or ill-suited defocus estimation algorithms are for automatically masking a series of photographs, since this could speed up modelling pipelines with many hundreds or thousands of photographs. To that end, the paper uses five different real-world datasets and compares the output of three state-of-the-art edge-based defocus estimators. Afterwards, critical comments and plans for the future conclude this paper.
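    The specific edge-based estimators the paper compares are not named in this abstract, but the general idea of turning a local sharpness measure into a binary mask can be sketched with a much simpler stand-in: tile-wise variance of the Laplacian. Everything below (function names, tile size, threshold) is illustrative, not the paper's algorithm.

```python
import numpy as np

def laplacian(img):
    # 4-neighbour Laplacian computed on the interior via shifted views.
    return (-4 * img[1:-1, 1:-1]
            + img[:-2, 1:-1] + img[2:, 1:-1]
            + img[1:-1, :-2] + img[1:-1, 2:])

def sharpness_mask(img, tile=8, thresh=1e-3):
    """Mark each tile True (in focus) if the local variance of the
    Laplacian exceeds `thresh`; low-variance (blurry/flat) tiles
    come out False and would be masked away."""
    h, w = img.shape
    lap = np.zeros_like(img)
    lap[1:-1, 1:-1] = laplacian(img)
    mask = np.zeros((h // tile, w // tile), dtype=bool)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            block = lap[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]
            mask[i, j] = block.var() > thresh
    return mask

# Textured (sharp) left half, featureless (defocused-looking) right half.
rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[:, :16] = rng.random((32, 16))
mask = sharpness_mask(img)   # left tiles True, far-right tiles False
```

    Real defocus estimators refine this per-pixel and only at edges, then propagate the estimate inward, but the thresholding step that produces the mask is the same in spirit.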

    Learning Blind Motion Deblurring

    As handheld video cameras are now commonplace and available in every smartphone, images and videos can be recorded almost everywhere at any time. However, taking a quick shot frequently yields a blurry result due to unwanted camera shake during recording or moving objects in the scene. Removing these artifacts from the blurry recordings is a highly ill-posed problem, as neither the sharp image nor the motion blur kernel is known. Propagating information between multiple consecutive blurry observations can help restore the desired sharp image or video. Solutions for blind deconvolution based on neural networks rely on a massive amount of ground-truth data, which is hard to acquire. In this work, we propose an efficient approach to produce a significant amount of realistic training data and introduce a novel recurrent network architecture to deblur frames taking temporal information into account, which can efficiently handle arbitrary spatial and temporal input sizes. We demonstrate the versatility of our approach in a comprehensive comparison on a number of challenging real-world examples.
    Comment: International Conference on Computer Vision (ICCV), 2017
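    A common way to fabricate the kind of realistic training pairs this abstract describes is to average a short burst of sharp frames, so that the mean mimics the blur a single long exposure would accumulate while the individual frames serve as sharp targets. The sketch below illustrates only that averaging idea under a toy 1-pixel-per-frame shake; it is not the authors' data-generation pipeline.

```python
import numpy as np

def synth_blurry(frames):
    """Average a burst of sharp frames: the mean approximates the
    motion blur integrated over one long exposure."""
    return np.stack(frames).astype(float).mean(axis=0)

# Simulate camera shake: shift a sharp image one pixel per frame.
sharp = np.zeros((8, 8))
sharp[:, 4] = 1.0                                    # a sharp vertical edge
burst = [np.roll(sharp, d, axis=1) for d in range(3)]
blurry = synth_blurry(burst)                         # edge smeared over 3 px
```

    The (sharp, blurry) pair then becomes one training example; the network sees `blurry` (or several consecutive blurry frames) and regresses toward `sharp`.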

    Single Frame Image Super Resolution using Learned Directionlets

    In this paper, a new directionally adaptive, learning-based, single-image super-resolution method using a multiple-direction wavelet transform, called Directionlets, is presented. This method uses directionlets to effectively capture directional features and to extract edge information along different directions of a set of available high-resolution images. This information is used as the training set for super-resolving a low-resolution input image: the Directionlet coefficients at finer scales of its high-resolution image are learned locally from this training set, and the inverse Directionlet transform recovers the super-resolved high-resolution image. The simulation results showed that the proposed approach outperforms standard interpolation techniques such as cubic spline interpolation, as well as standard wavelet-based learning, both visually and in terms of mean squared error (MSE). This method also gives good results with aliased images.
    Comment: 14 pages, 6 figures
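    The evaluation the abstract reports (reconstruction scored by MSE against an interpolation baseline) reduces to a short loop. The sketch below uses nearest-neighbour upsampling as a stand-in baseline, since a faithful cubic-spline or Directionlet implementation would be much longer; all names are illustrative.

```python
import numpy as np

def mse(ref, est):
    """Mean squared error, the metric reported in the abstract."""
    ref = np.asarray(ref, dtype=float)
    est = np.asarray(est, dtype=float)
    return float(np.mean((ref - est) ** 2))

def upsample_nearest(lowres, factor):
    """Crude nearest-neighbour upsampling as a baseline to score."""
    return np.repeat(np.repeat(lowres, factor, axis=0), factor, axis=1)

# Score a baseline against a known high-resolution image.
hires = np.arange(16, dtype=float).reshape(4, 4)
lowres = hires[::2, ::2]                 # simulated low-res input
estimate = upsample_nearest(lowres, 2)   # baseline reconstruction
score = mse(hires, estimate)             # lower is better
```

    A learned method like the one in the paper would be judged a success exactly when its `score` falls below the baselines' on the same held-out images.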

    Computational illumination for high-speed in vitro Fourier ptychographic microscopy

    We demonstrate a new computational illumination technique that achieves large space-bandwidth-time product, for quantitative phase imaging of unstained live samples in vitro. Microscope lenses can have either large field of view (FOV) or high resolution, not both. Fourier ptychographic microscopy (FPM) is a new computational imaging technique that circumvents this limit by fusing information from multiple images taken with different illumination angles. The result is a gigapixel-scale image having both wide FOV and high resolution, i.e., large space-bandwidth product (SBP). FPM has enormous potential for revolutionizing microscopy and has already found application in digital pathology. However, it suffers from long acquisition times (on the order of minutes), limiting throughput. Faster capture times would not only improve imaging speed, but also allow studies of live samples, where motion artifacts degrade results. In contrast to fixed (e.g. pathology) slides, live samples are continuously evolving at various spatial and temporal scales. Here, we present a new source coding scheme, along with real-time hardware control, to achieve 0.8 NA resolution across a 4x FOV with sub-second capture times. We propose an improved algorithm and new initialization scheme, which allow robust phase reconstruction over long time-lapse experiments. We present the first FPM results for both growing and confluent in vitro cell cultures, capturing videos of subcellular dynamical phenomena in popular cell lines undergoing division and migration. Our method opens up FPM to applications with live samples, for observing rare events in both space and time.
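    The space-bandwidth product the abstract invokes is just the field-of-view area divided by the area of one resolved, Nyquist-sampled spot. With back-of-the-envelope numbers for a 4x objective (the 5.5 mm field diameter and 0.5 um wavelength below are assumptions for illustration, not values from the paper), a 0.8 NA synthetic aperture indeed lands at gigapixel scale:

```python
import math

def space_bandwidth_product(fov_diameter_mm, na, wavelength_um=0.5):
    """Rough resolvable-pixel count: Abbe limit d = wavelength / (2 NA),
    sampled at d/2 (Nyquist) over a circular field of view."""
    d = wavelength_um / (2.0 * na)                         # resolved spot (um)
    fov_area = math.pi * (fov_diameter_mm * 1e3 / 2) ** 2  # FOV area (um^2)
    return fov_area / (d / 2) ** 2

sbp = space_bandwidth_product(5.5, 0.8)   # on the order of 1e9 "pixels"
```

    The same formula shows why a conventional 4x objective (NA around 0.1 to 0.2) has an SBP tens of times smaller: FPM keeps the wide 4x field while synthesizing the 0.8 NA resolution from many angled illuminations.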

    Modern optical astronomy: technology and impact of interferometry

    The present 'state of the art' and the path to future progress in high spatial resolution imaging interferometry are reviewed. The review begins with a treatment of the fundamentals of stellar optical interferometry; the origin, properties and optical effects of turbulence in the Earth's atmosphere; the passive methods that are applied on a single telescope to overcome atmospheric image degradation, such as speckle interferometry; and various other techniques. These topics include differential speckle interferometry, speckle spectroscopy and polarimetry, phase diversity, wavefront shearing interferometry, phase-closure methods, dark speckle imaging, as well as the limitations imposed by the detectors on the performance of speckle imaging. A brief account is given of the technological innovation of adaptive optics (AO) to compensate such atmospheric effects on the image in real time. A major advancement involves the transition from single-aperture to dilute-aperture interferometry using multiple telescopes. Therefore, the review deals with recent developments involving ground-based and space-based optical arrays. Emphasis is placed on the problems specific to delay lines, beam recombination, polarization, dispersion, fringe tracking, bootstrapping, coherencing and cophasing, and recovery of the visibility functions. The role of AO in enhancing visibilities is also discussed. The applications of interferometry, such as imaging, astrometry, and nulling, are described. The mathematical intricacies of the various 'post-detection' image-processing techniques are examined critically. The review concludes with a discussion of the astrophysical importance and the perspectives of interferometry.
    Comment: 65-page LaTeX file including 23 figures. Reviews of Modern Physics, 2002, to appear in the April issue.

    General-purpose and special-purpose visual systems

    The information that eyes supply supports a wide variety of functions, from the guidance systems that enable an animal to navigate successfully around the environment, to the detection and identification of predators, prey, and conspecifics. The eyes with which we are most familiar (the single-chambered eyes of vertebrates and cephalopod molluscs, and the compound eyes of insects and higher crustaceans) allow these animals to perform the full range of visual tasks. These eyes have evidently evolved in conjunction with brains that are capable of subjecting the raw visual information to many different kinds of analysis, depending on the nature of the task that the animal is engaged in. However, not all eyes evolved to provide such comprehensive information. For example, in bivalve molluscs we find eyes of very varied design (pinholes, concave mirrors, and apposition compound eyes) whose only function is to detect approaching predators and thereby allow the animal to protect itself by closing its shell. Thus, there are special-purpose eyes as well as eyes with multiple functions.