
    Development and Modelling of High-Efficiency Computing Structure for Digital Signal Processing

    The paper is devoted to the problem of spline approximation. A new method of node placement for the computer construction of curves and surfaces by means of B-splines is presented, together with results of Simulink modeling. The basic spline is also compared with classical polynomials in terms of both accuracy and the degree to which the calculations can be parallelized. Comment: 4 pages, 5 figures, IEEE International Conference on Multimedia, Signal Processing and Communication Technologies (IMPACT '09), 2009.
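    As an illustration of the kind of B-spline construction the paper builds on, the sketch below evaluates a B-spline curve with the Cox-de Boor recursion. The knot vector, control points, and degree are illustrative assumptions; the paper's node-placement method and Simulink models are not reproduced here.

```python
# Hypothetical sketch: B-spline curve evaluation via the Cox-de Boor
# recursion. Knots, control points, and degree are illustrative only.
def bspline_basis(i, k, t, knots):
    """Value of the i-th B-spline basis function of degree k at t."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k] > knots[i]:
        left = (t - knots[i]) / (knots[i + k] - knots[i]) \
               * bspline_basis(i, k - 1, t, knots)
    if knots[i + k + 1] > knots[i + 1]:
        right = (knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

def bspline_point(t, ctrl, k, knots):
    """Evaluate sum_i N_{i,k}(t) * P_i for 2-D control points P_i."""
    weights = [bspline_basis(i, k, t, knots) for i in range(len(ctrl))]
    x = sum(w * p[0] for w, p in zip(weights, ctrl))
    y = sum(w * p[1] for w, p in zip(weights, ctrl))
    return (x, y)

# Clamped cubic example: 6 control points need len(ctrl) + k + 1 = 10 knots.
knots = [0, 0, 0, 0, 1, 2, 3, 3, 3, 3]
ctrl = [(0, 0), (1, 2), (2, -1), (3, 3), (4, 0), (5, 1)]
print(bspline_point(1.5, ctrl, 3, knots))
```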

    Survey of semi-regular multiresolution models for interactive terrain rendering

    Rendering high-quality digital terrains at interactive rates requires carefully crafted algorithms and data structures able to balance the competing requirements of realism and frame rates, while taking into account the memory and speed limitations of the underlying graphics platform. In this survey, we analyze multiresolution approaches that exploit a certain semi-regularity of the data. These approaches have produced some of the most efficient systems to date. After providing a short background and motivation for the methods, we focus on illustrating models based on tiled blocks and nested regular grids, quadtree and triangle-bintree triangulations, as well as cluster-based approaches. We then discuss LOD error metrics and system-level data management aspects of interactive terrain visualization, including dynamic scene management, out-of-core data organization and compression, as well as numerical accuracy.
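    To make the tiled-block idea concrete, here is a minimal sketch of distance-based LOD selection over a terrain quadtree. The node layout, the screen-space error model, and the threshold tau are assumptions for illustration, not any specific system from the survey.

```python
# Hypothetical sketch: refine a terrain quadtree until the projected
# geometric error falls below a pixel threshold tau (simplified model).
from dataclasses import dataclass, field

@dataclass
class Tile:
    center: tuple           # (x, z) of the tile center in world units
    geometric_error: float  # max deviation from the full-resolution mesh
    children: list = field(default_factory=list)

def select_tiles(tile, eye, tau, out):
    """Append the tiles to render; recurse while the error is too large."""
    dx = tile.center[0] - eye[0]
    dz = tile.center[1] - eye[1]
    dist = max((dx * dx + dz * dz) ** 0.5, 1e-6)
    if tile.geometric_error / dist <= tau or not tile.children:
        out.append(tile)    # coarse enough (or a leaf): render as-is
    else:
        for child in tile.children:
            select_tiles(child, eye, tau, out)
```

    Real systems layer frustum culling, geomorphing to hide popping, and out-of-core paging on top of this basic refinement loop, which is exactly the system-level machinery the survey discusses.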

    Application of Bandelet Transform in Image and Video Compression

    The need for large-scale storage and transmission of data is growing exponentially with the widespread use of computers, so efficient ways of storing data have become important. With the advancement of technology, the world has found itself amid a vast amount of information, and efficient methods are needed to deal with it. Data compression is a technique that minimizes the size of a file while keeping the quality close to the original, so that more data can be stored in the same memory space. There are various image compression standards, such as JPEG, which uses the discrete cosine transform, and JPEG 2000, which uses the discrete wavelet transform. The discrete cosine transform gives excellent compaction for highly correlated information, and its computational complexity is low given its good information-packing ability. However, it produces blocking artifacts, graininess, and blurring in the output, which are overcome by the discrete wavelet transform. The image size is reduced by discarding coefficients smaller than a prespecified threshold without losing much information. But wavelets also have limitations as the complexity of the image increases: they are optimal for point singularities, but not for line and curve singularities, and they do not take into account the image geometry, which is a vital source of redundancy. Here we analyze a new type of basis, known as bandelets, which can be constructed from a wavelet basis and which exploits an important source of regularity: geometrical redundancy. The image is decomposed along the direction of the geometry. This is better than other methods because the geometry is described by a flow vector rather than by edges; the flow indicates the direction in which the image intensity varies smoothly. It gives a better compression measure than wavelet bases. A fast subband coding scheme is used for image decomposition in a bandelet basis, and the approach has been extended to video compression. The bandelet-based image and video compression method is compared with the corresponding wavelet scheme. Performance measures such as peak signal-to-noise ratio (PSNR), compression ratio, bits per pixel (bpp), and entropy are evaluated for both image and video compression.
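    The "discard coefficients below a threshold" step is the core of transform coding. The sketch below shows it with a one-level 1-D Haar transform; the signal and threshold are illustrative assumptions, and the bandelet-specific flow-directed decomposition is not modeled here.

```python
# Hypothetical sketch: lossy compression by thresholding the detail
# coefficients of a one-level Haar transform. Values are illustrative.
def haar_forward(x):
    avg = [(a + b) / 2.0 for a, b in zip(x[0::2], x[1::2])]
    det = [(a - b) / 2.0 for a, b in zip(x[0::2], x[1::2])]
    return avg, det

def haar_inverse(avg, det):
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

def compress(x, threshold):
    avg, det = haar_forward(x)
    det = [d if abs(d) >= threshold else 0.0 for d in det]  # discard small details
    return haar_inverse(avg, det)

signal = [4, 6, 10, 12, 8, 6, 5, 5]
print(compress(signal, threshold=1.5))  # smooth approximation of the input
```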

    A Panorama on Multiscale Geometric Representations, Intertwining Spatial, Directional and Frequency Selectivity

    The richness of natural images makes the quest for optimal representations in image processing and computer vision challenging. This observation has not prevented the design of image representations that trade off between efficiency and complexity, while achieving accurate rendering of smooth regions as well as reproducing faithful contours and textures. The most recent ones, proposed in the past decade, share a hybrid heritage highlighting the multiscale and oriented nature of edges and patterns in images. This paper presents a panorama of the aforementioned literature on decompositions in multiscale, multi-orientation bases or dictionaries. They typically exhibit redundancy to improve sparsity in the transformed domain and sometimes its invariance with respect to simple geometric deformations (translation, rotation). Oriented multiscale dictionaries extend traditional wavelet processing and may offer rotation invariance. Highly redundant dictionaries require specific algorithms to simplify the search for an efficient (sparse) representation. We also discuss the extension of multiscale geometric decompositions to non-Euclidean domains such as the sphere or arbitrary meshed surfaces. The etymology of panorama suggests an overview, based on a choice of partially overlapping "pictures". We hope that this paper will contribute to the appreciation and apprehension of a stream of current research directions in image understanding. Comment: 65 pages, 33 figures, 303 references.

    Compression of Graphical Information Using Thematic Textures

    Thematic textures are used to decorate the terrain surface. It is assumed that the observer is not usually interested in exact photographic information about wide areas covered, for instance, by forest or water, so the benefits of photographic texture can be obtained by composing a number of patterns called "themes" to produce the final texture of a specific area. The goal of using thematic textures is to eliminate the need for a global area texture; in addition, animation features become available by simple means.
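    A minimal sketch of the composition idea follows, assuming a per-cell theme map and small tileable patterns; the theme names, map, and sizes are illustrative stand-ins, not the paper's data.

```python
# Hypothetical sketch: sample a terrain texture composed from thematic
# patterns. Each cell of the theme map selects one tileable pattern.
PATTERNS = {
    "water":  [[0, 1], [1, 0]],   # tiny stand-ins for real pattern images
    "forest": [[2, 2], [2, 3]],
}
THEME_MAP = [["water", "forest"],
             ["forest", "forest"]]

def texel(u, v, cell=8):
    """Return the composed texture value at pixel (u, v)."""
    theme = THEME_MAP[(v // cell) % len(THEME_MAP)][(u // cell) % len(THEME_MAP[0])]
    pat = PATTERNS[theme]
    return pat[v % len(pat)][u % len(pat[0])]

print([texel(u, 3) for u in range(16)])  # crosses a water/forest cell boundary
```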

    Hierarchical Variance Reduction Techniques for Monte Carlo Rendering

    Ever since the first three-dimensional computer graphics appeared half a century ago, the goal has been to model and simulate how light interacts with materials and objects to form an image. The ultimate goal is photorealistic rendering, where the created images reach a level of accuracy that makes them indistinguishable from photographs of the real world. There are many applications: visualization of products and architectural designs yet to be built, special effects, computer-generated films, virtual reality, and video games, to name a few. However, the problem has proven tremendously complex; the illumination at any point is described by a recursive integral to which a closed-form solution seldom exists. Instead, computer simulation and Monte Carlo methods are commonly used to statistically estimate the result. This introduces undesirable noise, or variance, and a large body of research has been devoted to finding ways to reduce the variance. I continue along this line of research, and present several novel techniques for variance reduction in Monte Carlo rendering, as well as a few related tools. The research in this dissertation focuses on using importance sampling to pick a small set of well-distributed point samples. As the primary contribution, I have developed the first methods to explicitly draw samples from the product of distant high-frequency lighting and complex reflectance functions. By sampling the product, low-noise results can be achieved using a very small number of samples, which is important to minimize the rendering times. Several different hierarchical representations are explored to allow efficient product sampling. In the first publication, the key idea is to work in a compressed wavelet basis, which allows fast evaluation of the product. Many of the initial restrictions of this technique were removed in follow-up work, allowing higher-resolution uncompressed lighting and avoiding precomputation of reflectance functions. My second main contribution is one of the first techniques to take the triple product of lighting, visibility, and reflectance into account to further reduce the variance in Monte Carlo rendering. For this purpose, control variates are combined with importance sampling to solve the problem in a novel way. A large part of the technique also focuses on analysis and approximation of the visibility function. To further refine the above techniques, several useful tools are introduced. These include a fast, low-distortion map to represent (hemi)spherical functions, a method to create high-quality quasi-random points, and an optimizing compiler for analyzing shaders using interval arithmetic. The latter automatically extracts bounds for importance sampling of arbitrary shaders, as opposed to using a priori known reflectance functions. In summary, the work presented here takes the field of computer graphics one step further towards making photorealistic rendering practical for a wide range of uses. By introducing several novel Monte Carlo methods, more sophisticated lighting and materials can be used without increasing the computation times. The research is aimed at domain-specific solutions to the rendering problem, but I believe that much of the new theory is applicable in other parts of computer graphics, as well as in other fields.
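    To make the two key ideas concrete, the sketch below combines importance sampling with a control variate on a toy 1-D integral. The integrand f stands in for a lighting-reflectance product, the density p(x) = 2x for a product-proportional sampling density, and g(x) = x for a control variate with a known integral; none of this is the thesis's actual machinery.

```python
# Hypothetical sketch: Monte Carlo estimate of I = integral of f over [0, 1]
# using importance sampling (x ~ p) plus a control variate g of known
# integral G. All functions here are toy stand-ins.
import math, random

f = lambda x: x * math.sin(math.pi * x)  # stand-in for lighting * reflectance
g = lambda x: x                          # control variate; G = integral of g = 0.5
G = 0.5

def estimate(n, beta=1.0):
    total = 0.0
    for _ in range(n):
        u = 1.0 - random.random()        # u in (0, 1], avoids p(x) = 0
        x = math.sqrt(u)                 # inversion sampling of p(x) = 2x
        total += (f(x) - beta * g(x)) / (2.0 * x)
    return total / n + beta * G          # unbiased: expectation + beta*G = I

print(estimate(100_000))                 # ~0.3183 (the exact value is 1/pi)
```

    Subtracting beta*g before dividing by the density and adding back beta*G leaves the estimator unbiased while cancelling much of the integrand's variation, which is the same mechanism, in miniature, as combining control variates with importance sampling for the triple product.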

    A Review of Adaptive Image Representations


    Sparse image approximation with application to flexible image coding

    Natural images are often modeled as piecewise-smooth regions. Region edges, which correspond to the contours of objects, become, in this model, the main information of the signal. Contours have the property of being smooth along the direction of the edge and irregular in the perpendicular direction. Modeling edges with the minimum possible number of terms is of key importance for numerous applications, such as image coding, segmentation, or denoising. Standard separable bases fail to provide sparse enough representations of contours, because such bases do not see the regularity of edges. In order to detect this regularity, a new method is needed, based on (possibly redundant) sets of basis functions able to capture the geometry of images. This thesis presents, in a first stage, a study of the features that basis functions should have in order to provide sparse representations of a piecewise-smooth image. This study emphasizes the need for edge-adapted basis functions, capable of accurately capturing the local orientation and anisotropic scaling of image structures. The need for different anisotropy degrees and orientations in the basis function set leads to the use of redundant dictionaries. However, redundant dictionaries have the inconvenience that sparse image decompositions are no longer unique, and of all the possible decompositions of a signal in a redundant dictionary, only the sparsest is needed. Several algorithms exist to find sparse decompositions over redundant dictionaries, but most of them do not guarantee that the optimal approximation has been recovered. To cope with this problem, a mathematical study of the properties of sparse approximations is performed, from which a test to check whether a given sparse approximation is the sparsest is derived. The second part of this thesis presents a novel image approximation scheme based on a redundant dictionary. This scheme yields a good approximation of an image with a number of terms much smaller than the dimension of the signal. It is based on a dictionary formed by a combination of anisotropically refined and rotated wavelet-like mother functions and Gaussians. An efficient Full Search Matching Pursuit algorithm is designed to perform the image decomposition in such a dictionary. Finally, a geometric image coding scheme based on the image approximated over the anisotropic and rotated dictionary is designed, and its coding performance is studied. Coefficient quantization turns out to be of crucial importance in the design of a Matching Pursuit based coding scheme. Thus, a quantization scheme for the MP coefficients is designed, based on the theoretical energy upper bound of the MP algorithm and on empirical observations of the coefficient distribution and evolution. Thanks to this quantization, our image coder provides low to medium bit-rate image approximations, while allowing on-the-fly resolution switching and several other affine image transformations to be performed directly in the transformed domain.
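    As a concrete reference point for the greedy decomposition described above, here is a minimal Matching Pursuit sketch over a toy overcomplete dictionary. The random unit-norm atoms are an assumption for illustration; the thesis's dictionary of anisotropically refined, rotated atoms and its Full Search strategy are not reproduced.

```python
# Hypothetical sketch: Matching Pursuit over a redundant dictionary of
# unit-norm atoms (rows of D). Greedy loop: pick the atom most
# correlated with the residual, subtract its contribution, repeat.
import numpy as np

def matching_pursuit(signal, D, n_terms):
    residual = signal.astype(float).copy()
    picks = []
    for _ in range(n_terms):
        inner = D @ residual                 # correlation with every atom
        k = int(np.argmax(np.abs(inner)))
        c = float(inner[k])                  # optimal coefficient (unit-norm atoms)
        residual -= c * D[k]
        picks.append((k, c))
    return picks, residual

rng = np.random.default_rng(0)
D = rng.normal(size=(32, 8))                 # 32 atoms in an 8-D signal space
D /= np.linalg.norm(D, axis=1, keepdims=True)
x = 2.0 * D[3] - 0.5 * D[17]                 # sparse synthetic signal
picks, res = matching_pursuit(x, D, 3)
print(picks, float(np.linalg.norm(res)))
```

    The residual energy of MP decays with a known theoretical upper bound, which is the property the quantization scheme described above builds on.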

    Hierarchical representation and coding of surfaces using 3-D polygon meshes
