Geometric Transformation Techniques for Digital Images: A Survey
This survey presents a wide collection of algorithms for the geometric transformation of digital images. Efficient image transformation algorithms are critically important to the remote sensing, medical imaging, computer vision, and computer graphics communities. We review the growth of this field and compare all the described algorithms. Since this subject is interdisciplinary, emphasis is placed on the unification of the terminology, motivation, and contributions of each technique to yield a single coherent framework. This paper attempts to serve a dual role as a survey and a tutorial. It is comprehensive in scope and detailed in style. The primary focus centers on the three components that comprise all geometric transformations: spatial transformations, resampling, and antialiasing. In addition, considerable attention is directed to the dramatic progress made in the development of separable algorithms. The text is supplemented with numerous examples and an extensive bibliography.
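As a concrete illustration of two of the components named above, a minimal sketch of a spatial transformation (an affine map, applied by inverse mapping) combined with bilinear resampling might look as follows; the function `affine_warp` and its argument names are hypothetical, not taken from the survey:

```python
import numpy as np

def affine_warp(img, A, b):
    """Warp a grayscale image by the affine transform x_src = A @ x_dst + b,
    applied by inverse mapping, with bilinear resampling.
    (Illustrative sketch; names are not from the survey.)"""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse mapping: for each output pixel, find its source coordinates.
    src = (np.tensordot(np.asarray(A, float), np.stack([ys, xs]).astype(float), axes=1)
           + np.asarray(b, float)[:, None, None])
    y, x = src
    y0 = np.clip(np.floor(y).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(x).astype(int), 0, w - 2)
    dy, dx = y - y0, x - x0
    # Bilinear resampling: weighted average of the four nearest neighbours.
    out = ((1 - dy) * (1 - dx) * img[y0, x0]
           + (1 - dy) * dx * img[y0, x0 + 1]
           + dy * (1 - dx) * img[y0 + 1, x0]
           + dy * dx * img[y0 + 1, x0 + 1])
    # Zero out samples whose source coordinates fell outside the image.
    inside = (y >= 0) & (y <= h - 1) & (x >= 0) & (x <= w - 1)
    return np.where(inside, out, 0.0)
```

The third component, antialiasing, would additionally prefilter the source image when the map shrinks it, so that frequencies above the new Nyquist rate are suppressed before resampling.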
Multiresolution Approximation Using Shifted Splines
We consider the construction of least squares pyramids using shifted polynomial spline basis functions. We derive the pre- and post-filters as a function of the degree n and the shift parameter Δ. We show that the underlying projection operator is entirely specified by two transfer functions acting on the even and odd signal samples, respectively. We introduce a measure of shift-invariance and show that the most favorable configuration is obtained when the knots of the splines are centered with respect to the grid points (i.e., Δ=1/2 when n is odd, and Δ=0 when n is even). The worst case corresponds to the standard multiresolution setting where the spline spaces are nested.
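A minimal numerical sketch of the projection at the heart of this construction, assuming linear splines (n = 1) on a twice-coarser grid with knots offset by a shift Δ; the function names are illustrative, and the paper's efficient pre-/post-filter formulation is replaced here by a direct least-squares solve:

```python
import numpy as np

def bspline1(t):
    """Linear B-spline (degree n = 1, the 'hat' function)."""
    return np.maximum(1.0 - np.abs(t), 0.0)

def ls_pyramid_coeffs(signal, shift):
    """Coarse-level coefficients of the least-squares projection of `signal`
    onto linear splines on a grid of spacing 2, with knots offset by `shift`.
    (Illustrative sketch, not the paper's filter-bank implementation.)"""
    n = len(signal)
    x = np.arange(n, dtype=float)
    knots = np.arange(0, n, 2, dtype=float) + shift
    # Design matrix: one shifted, dilated B-spline per coarse-grid knot.
    B = bspline1((x[:, None] - knots[None, :]) / 2.0)
    coeffs, *_ = np.linalg.lstsq(B, signal, rcond=None)
    return coeffs, B
```

By construction, any signal already lying in the coarse spline space is reproduced exactly by the projection; the pre-/post-filters derived in the paper compute the same coefficients without forming or solving the full least-squares system.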
One-sided smoothness-increasing accuracy-conserving filtering for enhanced streamline integration through discontinuous fields
The discontinuous Galerkin (DG) method continues to attract heightened interest within the simulation community because of the discretization flexibility it provides. One of the fundamental properties of the DG methodology, and arguably its most powerful, is the ability to combine high-order discretizations at the inter-element level while allowing discontinuities between elements. This flexibility, however, generates a plethora of difficulties when one attempts to use DG fields for feature extraction and visualization, as most post-processing schemes are not designed to handle explicitly discontinuous fields. This work introduces a new method of applying smoothness-increasing, accuracy-conserving filtering on discontinuous Galerkin vector fields for the purpose of enhancing streamline integration. The filtering discussed in this paper enhances the smoothness of the field and eliminates the discontinuity between elements, thus resulting in more accurate streamlines. Furthermore, as a means of minimizing the computational cost of the method, the filtering is done in a one-dimensional manner along the streamline.
United States. Army Research Office (Grant No. W911NF-05-1-0395); National Science Foundation (U.S.) (CAREER Award NSF-CCF0347791)
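The one-dimensional filtering idea can be sketched as a convolution of point samples taken along a streamline with a B-spline-based smoothing kernel. The sketch below uses a generic symmetric kernel with zero padding at the ends, not the paper's one-sided SIAC kernel, and all function names are illustrative:

```python
import numpy as np

def bspline_kernel(degree, dx):
    """Discrete smoothing kernel built by repeatedly convolving unit boxes
    sampled at spacing dx (a generic B-spline-like kernel, NOT the paper's
    full smoothness-increasing accuracy-conserving kernel)."""
    m = int(np.ceil(1.0 / dx))
    box = np.ones(m) / m
    k = box.copy()
    for _ in range(degree):
        k = np.convolve(k, box)
    return k / k.sum()

def filter_along_line(samples, degree=2, dx=0.1):
    """Post-process samples taken along a streamline by 1D convolution,
    smoothing out the inter-element jumps of a discontinuous field."""
    k = bspline_kernel(degree, dx)
    return np.convolve(samples, k, mode="same")
```

Near domain boundaries this symmetric kernel with zero padding produces edge artifacts; the one-sided kernels developed in the paper are designed precisely to avoid crossing boundaries and discontinuities.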
Least-Squares Image Resizing Using Finite Differences
We present an optimal spline-based algorithm for the enlargement or reduction of digital images with arbitrary (noninteger) scaling factors. This projection-based approach can be realized thanks to a new finite difference method that allows the computation of inner products with analysis functions that are B-splines of any degree n. A noteworthy property of the algorithm is that the computational complexity per pixel does not depend on the scaling factor a. For a given choice of basis functions, the results of our method are consistently better than those of the standard interpolation procedure; the present scheme achieves a reduction of artifacts such as aliasing and blocking and a significant improvement of the signal-to-noise ratio. The method can be generalized to include other classes of piecewise polynomial functions, expressed as linear combinations of B-splines and their derivatives.
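In the simplest case, degree n = 0, the least-squares projection reduces to exact area averaging over each destination interval, which already handles arbitrary noninteger scaling factors. A hedged 1D sketch of this special case (the function name and the cumulative-sum trick are illustrative; the paper's finite-difference machinery is what extends this to B-splines of any degree n):

```python
import numpy as np

def ls_resize_box(signal, m):
    """Least-squares resize of a 1D signal to m samples using degree-0
    B-splines (box functions): each output sample is the exact average of
    the piecewise-constant signal over its destination interval.
    (Illustrative degree-0 case only.)"""
    n = len(signal)
    # Running integral of the piecewise-constant signal at integer points.
    cum = np.concatenate([[0.0], np.cumsum(signal)])
    edges = np.linspace(0.0, n, m + 1)   # destination interval edges
    def integral(t):
        # Exact integral of the signal from 0 to (possibly fractional) t.
        i = np.clip(np.floor(t).astype(int), 0, n - 1)
        return cum[i] + (t - i) * signal[i]
    lo, hi = edges[:-1], edges[1:]
    return (integral(hi) - integral(lo)) / (hi - lo)
```

Unlike nearest-neighbour decimation, this averaging suppresses the high frequencies that would otherwise alias into the reduced image, which is the source of the SNR improvement the abstract reports.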
A Chronology of Interpolation: From Ancient Astronomy to Modern Signal and Image Processing
This paper presents a chronological overview of the developments in interpolation theory, from the earliest times to the present date. It brings out the connections between the results obtained in different ages, thereby putting the techniques currently used in signal and image processing into historical perspective. A summary of the insights and recommendations that follow from relatively recent theoretical as well as experimental studies concludes the presentation.
Digital FPGA Circuits Design for Real-Time Video Processing with Reference to Two Application Scenarios
In the present days of the digital revolution, image and/or video processing has become a ubiquitous task: from mobile devices to special environments, the need for a real-time approach becomes every day more evident. Whatever the reason, either for user experience in recreational or internet-based applications or for safety-related timeliness in hard real-time scenarios, the exploration of technologies and techniques which allow this requirement to be satisfied is a crucial point. General-purpose CPU or GPU software implementations of these applications are quite simple and widespread, but commonly do not achieve high performance: the many layers that separate high-level languages and libraries, which express complicated procedures and algorithms, from the base architecture of the CPU, which offers only limited and basic (although rapidly executed) arithmetic operations, impose a significant overhead. The most practised approach nowadays is therefore based on the use of Very-Large-Scale Integrated (VLSI) digital electronic circuits.
Field Programmable Gate Arrays (FPGAs) are integrated digital circuits designed to be configured after manufacturing, "in the field". They typically provide lower performance than Application Specific Integrated Circuits (ASICs), but at a lower cost, especially when dealing with limited production volumes. Of course, in-the-field programmability itself (and re-programmability, in the vast majority of cases) is also a characteristic feature that makes FPGAs more suitable for applications with changing specifications, where an update of capabilities may be a desirable benefit. Moreover, the time needed to complete the design cycle for FPGA-based circuits (including, of course, testing and debugging) is much shorter than the design flow and time-to-market of ASICs.
In this thesis work, we will examine (Chapter 1) some common problems and strategies involved in the use of FPGAs and FPGA-based systems for Real-Time Image Processing and Real-Time Video Processing (in the following also indicated interchangeably with the acronym RTVP); we will then focus, in particular, on two applications.
Firstly, Chapter 2 will cover the implementation of a novel algorithm for Visual Search, known as CDVS, which has recently been standardised as part of the MPEG-7 standard. Visual search is an emerging field in mobile applications which is rapidly becoming ubiquitous. However, algorithms for this kind of application typically demand high computational power and complex processing: as a consequence, implementation efficiency is a crucial point, and this generally results in the need for custom-designed hardware.
Chapter 3 will cover the implementation of an algorithm for the compression of hyperspectral images which is bit-true compatible with the CCSDS-123.0 standard algorithm. Hyperspectral images are three-dimensional matrices in which each 2D plane represents the image, as captured by the sensor, in a given spectral band; their size may range from several million pixels up to billions of pixels. Typical scenarios for the use of hyperspectral images include airborne and satellite-borne remote sensing. As a consequence, the major concerns are the limited processing power and communication-link bandwidth on board: thus, a proper compression algorithm, as well as an efficient implementation of it, is crucial.
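The predict-then-encode structure underlying CCSDS-123-style compressors can be sketched as follows. This toy version predicts each band from the previous one and zigzag-maps the signed residuals to non-negative integers ready for an entropy coder; the actual standard uses an adaptive multi-band predictor, and all names here are illustrative:

```python
import numpy as np

def predict_and_map(cube):
    """Predict each spectral band of a (bands, rows, cols) cube from the
    previous band, then map signed residuals to non-negative integers.
    (Toy sketch of the predict-then-encode idea, NOT the CCSDS-123
    adaptive predictor.)"""
    cube = cube.astype(np.int64)
    pred = np.empty_like(cube)
    pred[0] = 0                      # first band: no predictor available
    pred[1:] = cube[:-1]             # later bands: previous band, per pixel
    resid = cube - pred
    # Zigzag map 0,-1,1,-2,2,... -> 0,1,2,3,4,...: small residuals become
    # small codes, which a downstream entropy coder exploits.
    return np.where(resid >= 0, 2 * resid, -2 * resid - 1)

def unmap_and_reconstruct(mapped):
    """Exact inverse: unmap the residuals, then integrate along bands."""
    resid = np.where(mapped % 2 == 0, mapped // 2, -(mapped + 1) // 2)
    cube = np.empty_like(resid)
    cube[0] = resid[0]
    for z in range(1, len(resid)):
        cube[z] = cube[z - 1] + resid[z]
    return cube
```

Because prediction and mapping are exactly invertible in integer arithmetic, the scheme is lossless, which is the property a bit-true hardware implementation of the standard must preserve.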
In both cases we will first examine the scope of the work with reference to the current state of the art. We will then present the main characteristics of the proposed implementations and, to conclude, discuss the primary experimental results.