Separable Image Warping with Spatial Lookup Tables
Image warping refers to the 2-D resampling of a source image onto a target image. In the general case, this requires costly 2-D filtering operations. Simplifications are possible when the warp can be expressed as a cascade of orthogonal 1-D transformations; in such cases, separable transformations have been introduced to realize large performance gains. The central ideas in this area were formulated in the 2-pass algorithm by Catmull and Smith. Although that method applies to an important class of transformations, intrinsic problems limit its usefulness. The goal of this work is to extend the 2-pass approach to handle arbitrary spatial mapping functions. We address the difficulties intrinsic to 2-pass scanline algorithms: bottlenecking, foldovers, and the lack of closed-form inverse solutions. These problems are shown to be resolved in a general, efficient, separable technique, with graceful degradation for transformations of increasing complexity.
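The two-pass idea described above, decomposing a 2-D warp into a horizontal pass of per-row 1-D resamplings followed by a vertical pass of per-column resamplings, can be sketched as follows. This is a minimal NumPy illustration with linear interpolation; the function names and the shear/stretch example are illustrative, not taken from the paper:

```python
import numpy as np

def warp_rows(img, inv_x):
    """First pass: for each row v, resample horizontally.
    inv_x(x, v) gives the source column u for output column x."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for v in range(h):
        u = inv_x(np.arange(w), v)
        out[v] = np.interp(u, np.arange(w), img[v], left=0.0, right=0.0)
    return out

def warp_cols(img, inv_y):
    """Second pass: for each column x, resample vertically.
    inv_y(y, x) gives the source row v for output row y."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for x in range(w):
        v = inv_y(np.arange(h), x)
        out[:, x] = np.interp(v, np.arange(h), img[:, x], left=0.0, right=0.0)
    return out

# Example: a horizontal shear followed by a vertical stretch,
# each expressed as a 1-D inverse mapping per scanline.
img = np.eye(8)
sheared = warp_rows(img, lambda x, v: x - 0.5 * v)   # u = x - 0.5 v
warped = warp_cols(sheared, lambda y, x: y / 1.5)    # v = y / 1.5
```

Each pass touches only one image axis, which is what makes the cascade cheap; the bottleneck and foldover problems the abstract mentions arise when the intermediate image collapses or folds between the two passes.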
Image Warping Among Arbitrary Planar Shapes
Image warping refers to the 2D resampling of a source image onto a target image. Despite the variety of techniques proposed, a large class of image warping problems remains inadequately solved: mapping between two images that are delimited by arbitrary, closed, planar curves, e.g., hand-drawn curves. This paper describes a novel algorithm to perform image warping among arbitrary planar shapes whose boundary correspondences are known. A generalized polar coordinate parameterization is introduced to facilitate an efficient mapping procedure. Images are treated as collections of interior layers, extracted via a thinning process. Mapping these layers between the source and target images generates the 2D resampling grid that defines the warping. The thinning operation extends the standard polar coordinate representation to deal with arbitrary shapes.
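For intuition, a polar-coordinate mapping between shapes can be illustrated in the simple star-shaped case, where each boundary is single-valued in angle about a center point. The paper's thinning-based layers generalize beyond this case; the sketch below and its names are illustrative only:

```python
import numpy as np

def polar_map(pt, src_center, src_boundary, dst_center, dst_boundary):
    """Map a point from a source shape to a target shape via polar coordinates.
    src_boundary/dst_boundary: boundary radius sampled at n equally spaced
    angles in [-pi, pi), with corresponding indices giving the correspondence."""
    v = pt - src_center
    theta = np.arctan2(v[1], v[0])
    r = np.hypot(v[0], v[1])
    n = len(src_boundary)
    idx = int(((theta + np.pi) / (2 * np.pi)) * n) % n
    t = r / src_boundary[idx]          # normalized radial coordinate in [0, 1]
    r2 = t * dst_boundary[idx]         # same fraction of the target radius
    return dst_center + r2 * np.array([np.cos(theta), np.sin(theta)])

# Example: unit disk to a disk of radius 2 about the origin.
src_b = np.ones(360)
dst_b = 2 * np.ones(360)
p = polar_map(np.array([0.5, 0.0]), np.zeros(2), src_b, np.zeros(2), dst_b)
```

Sampling this map over a grid of points yields the 2D resampling grid the abstract refers to, with the boundary correspondence fixed by matching angular indices.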
BiFormer: Learning Bilateral Motion Estimation via Bilateral Transformer for 4K Video Frame Interpolation
A novel 4K video frame interpolator based on bilateral transformer (BiFormer)
is proposed in this paper, which performs three steps: global motion
estimation, local motion refinement, and frame synthesis. First, in global
motion estimation, we predict symmetric bilateral motion fields at a coarse
scale. To this end, we propose BiFormer, the first transformer-based bilateral
motion estimator. Second, we refine the global motion fields efficiently using
blockwise bilateral cost volumes (BBCVs). Third, we warp the input frames using
the refined motion fields and blend them to synthesize an intermediate frame.
Extensive experiments demonstrate that the proposed BiFormer algorithm achieves
excellent interpolation performance on 4K datasets. The source codes are
available at https://github.com/JunHeum/BiFormer.
Comment: Accepted to CVPR202
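The third step, warping the two input frames with the refined motion fields and blending them into an intermediate frame, can be sketched as below. This is a toy NumPy version with nearest-neighbour sampling and a scalar blend weight; the actual method uses learned sub-pixel warping and refinement, and all names here are illustrative:

```python
import numpy as np

def backward_warp(img, flow):
    """Sample img at (x + flow_x, y + flow_y), nearest-neighbour.
    img: (H, W); flow: (H, W, 2) motion field pointing into the source frame."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return img[sy, sx]

def synthesize(frame0, frame1, flow_t0, flow_t1, weight=0.5):
    """Warp both input frames toward time t with the bilateral motion
    fields, then blend them into the intermediate frame."""
    w0 = backward_warp(frame0, flow_t0)
    w1 = backward_warp(frame1, flow_t1)
    return weight * w0 + (1 - weight) * w1
```

With symmetric bilateral motion, `flow_t0` and `flow_t1` point from the intermediate time toward frames 0 and 1 respectively, which is why a single estimated field pair suffices for both warps.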
Predicting and Optimizing Image Compression
Image compression is a core task for mobile devices, social media, and cloud storage backend services. Key evaluation criteria for compression are the quality of the output, the compression ratio achieved, and the computational time (and energy) expended. Predicting the effectiveness of standard compression implementations like libjpeg and WebP on a novel image is challenging, and often leads to non-optimal compression. This paper presents a machine learning-based technique to accurately model the outcome of image compression for arbitrary new images in terms of quality and compression ratio, without requiring significant additional computational time and energy. Using this model, we can actively adapt the aggressiveness of compression on a per-image basis to accurately fit user requirements, leading to more optimal compression.
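The core idea, fitting a lightweight model that maps cheap image features to a predicted compression outcome so the codec's aggressiveness can be tuned per image without trial compressions, might be sketched like this. The least-squares model and the feature choices below are hypothetical stand-ins for the paper's learned model:

```python
import numpy as np

def features(img):
    """Cheap proxies for compressibility: a bias term, mean gradient
    magnitude, and pixel variance."""
    gx = np.abs(np.diff(img, axis=1)).mean()
    gy = np.abs(np.diff(img, axis=0)).mean()
    return np.array([1.0, gx + gy, img.var()])

def fit_ratio_model(train_imgs, ratios):
    """Least-squares fit mapping features -> observed compression ratio."""
    X = np.stack([features(im) for im in train_imgs])
    coef, *_ = np.linalg.lstsq(X, np.asarray(ratios), rcond=None)
    return coef

def predict_ratio(coef, img):
    """Predict the compression ratio for a new image without compressing it."""
    return float(features(img) @ coef)
```

Given such a predictor, one could sweep the codec's quality parameter in the model rather than in the codec, and pick the most aggressive setting whose predicted quality still meets the user's requirement.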
Improved content aware scene retargeting for retinitis pigmentosa patients
Background: In this paper we present a novel scene retargeting technique to reduce the visual scene while maintaining the size of the key features. The algorithm is scalable for implementation on portable devices, and thus has potential for augmented reality systems that provide visual support for those with tunnel vision. We therefore test the efficacy of our algorithm at shrinking the visual scene into the remaining field of view for these patients.
Methods: Simple spatial compression of visual scenes makes objects appear further away. We have therefore developed an algorithm that removes low-importance information while maintaining the size of the significant features. Previous approaches in this field include seam carving, which removes low-importance seams from the scene, and shrinkability, which dynamically shrinks the scene according to a generated importance map. The former method causes significant artifacts and the latter is inefficient. In this work we have developed a new algorithm combining the best aspects of both previous methods. In particular, our approach is to generate a shrinkability importance map using a seam-based approach, and then to use it to dynamically shrink the scene in a similar fashion to the shrinkability method. Importantly, we have implemented it so that it can be used in real time, without prior knowledge of future frames.
Results: We have evaluated and compared our algorithm to the seam carving and image shrinkability approaches from a content-preservation perspective and a compression-quality perspective. Our technique was also evaluated in a trial with 20 participants with simulated tunnel vision. Results show the robustness of our method at reducing scenes by up to 50% with minimal distortion. We also demonstrate its efficacy for those with simulated tunnel vision of 22 degrees of field of view or less.
Conclusions: Our approach allows us to perform content-aware video resizing in real time, using only information from previous frames to avoid jitter. Our method also offers clear benefits over ordinary resizing and over other image retargeting methods. We show that the benefit derived from this algorithm is significant for patients with fields of view of 20° or less.
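The combination the abstract describes, a seam-style importance map driving shrinkability-style non-uniform shrinking, can be illustrated in one dimension: columns with high gradient energy keep their width while low-energy columns are compressed more. This is a toy NumPy sketch, not the authors' implementation, and the function names are illustrative:

```python
import numpy as np

def column_importance(img):
    """Per-column importance from mean horizontal gradient energy,
    in the spirit of a seam-based energy map."""
    energy = np.abs(np.diff(img, axis=1)).mean(axis=0)
    col = np.zeros(img.shape[1])
    col[:-1] += energy / 2
    col[1:] += energy / 2
    return col + 1e-6  # keep every column a little width

def shrink_width(img, new_w):
    """Shrink horizontally so that important columns keep their size
    and unimportant columns absorb most of the compression."""
    imp = column_importance(img)
    # cumulative importance maps old column positions onto [0, new_w - 1]
    pos = np.cumsum(imp)
    pos = pos / pos[-1] * (new_w - 1)
    # invert the map: which (fractional) old column lands at each new x
    src = np.interp(np.arange(new_w), pos, np.arange(img.shape[1]))
    out = np.zeros((img.shape[0], new_w))
    for v in range(img.shape[0]):
        out[v] = np.interp(src, np.arange(img.shape[1]), img[v])
    return out
```

Because the map is a smooth per-column resampling rather than a discrete seam removal, it avoids the hard artifacts of seam carving; computing the importance from the current frame only matches the abstract's real-time, no-lookahead constraint.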
Eddy formation near the west coast of Greenland
Author Posting. © American Meteorological Society, 2008. This article is posted here by permission of American Meteorological Society for personal use, not for redistribution. The definitive version was published in Journal of Physical Oceanography 38 (2008): 1992-2002, doi:10.1175/2008JPO3669.1.
This paper extends A. Bracco and J. Pedlosky's investigation of the eddy-formation mechanism in the eastern Labrador Sea by including a more realistic depiction of the boundary current. The quasigeostrophic model consists of a meridional, coastally trapped current with three vertical layers. The current configuration and topographic domain are chosen to match, as closely as possible, the observations of the boundary current and the varying topographic slope along the West Greenland coast. The role played by the bottom-intensified component of the boundary current on the formation of the Labrador Sea Irminger Rings is explored. Consistent with the earlier study, a short, localized bottom-trapped wave is responsible for most of the perturbation energy growth. However, for the instability to occur in the three-layer model, the deepest component of the boundary current must be sufficiently strong, highlighting the importance of the near-bottom flow. The model is able to reproduce important features of the observed vortices in the eastern Labrador Sea, including the polarity, radius, rate of formation, and vertical structure. At the time of formation, the eddies have a surface signature as well as a strong circulation at depth, possibly allowing for the transport of both surface and near-bottom water from the boundary current into the interior basin. This work also supports the idea that changes in the current structure could be responsible for the observed interannual variability in the number of Irminger Rings formed.
AB is supported by WHOI unrestricted funds, JP by the National Science Foundation OCE 85108600, and RP by 0450658.
Geometric Transformation Techniques for Digital Images: A Survey
This survey presents a wide collection of algorithms for the geometric transformation of digital images. Efficient image transformation algorithms are critically important to the remote sensing, medical imaging, computer vision, and computer graphics communities. We review the growth of this field and compare all the described algorithms. Since this subject is interdisciplinary, emphasis is placed on the unification of the terminology, motivation, and contributions of each technique to yield a single coherent framework. This paper attempts to serve a dual role as a survey and a tutorial. It is comprehensive in scope and detailed in style. The primary focus centers on the three components that comprise all geometric transformations: spatial transformations, resampling, and antialiasing. In addition, considerable attention is directed to the dramatic progress made in the development of separable algorithms. The text is supplemented with numerous examples and an extensive bibliography.
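Two of the three components the survey names, a spatial transformation and resampling, come together in even the simplest geometric transform. Below is a minimal sketch of inverse mapping through an affine transform with bilinear resampling; antialiasing prefiltering is omitted, and the code is illustrative rather than drawn from the survey:

```python
import numpy as np

def affine_warp(img, A, b):
    """Inverse-map each output pixel through x_src = A @ x_out + b,
    then bilinearly resample the source image at that location."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    sx = A[0, 0] * xs + A[0, 1] * ys + b[0]   # source x per output pixel
    sy = A[1, 0] * xs + A[1, 1] * ys + b[1]   # source y per output pixel
    x0 = np.clip(np.floor(sx).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(sy).astype(int), 0, h - 2)
    fx = np.clip(sx - x0, 0.0, 1.0)
    fy = np.clip(sy - y0, 0.0, 1.0)
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x0 + 1]
    bot = (1 - fx) * img[y0 + 1, x0] + fx * img[y0 + 1, x0 + 1]
    return (1 - fy) * top + fy * bot
```

Inverse mapping is preferred here because every output pixel receives exactly one value; forward mapping would leave holes, which is one of the classic issues the survey's resampling discussion addresses.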
Image Reconstruction for Discrete Cosine Transform Compression Schemes
Electrical Engineering