
    Multiresolution vector quantization

    Multiresolution source codes are data compression algorithms yielding embedded source descriptions. The decoder of a multiresolution code can build a source reproduction by decoding the embedded bit stream in part or in whole. All decoding procedures start at the beginning of the binary source description and decode some fraction of that string. Decoding a small portion of the binary string gives a low-resolution reproduction; decoding more yields a higher-resolution reproduction; and so on. Multiresolution vector quantizers are block multiresolution source codes. This paper introduces algorithms for designing fixed- and variable-rate multiresolution vector quantizers. Experiments on synthetic data demonstrate performance close to the theoretical performance limit. Experiments on natural images demonstrate performance improvements of up to 8 dB over tree-structured vector quantizers. Some of the lessons learned through multiresolution vector quantizer design lend insight into the design of more sophisticated multiresolution codes.
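
    The embedded-description idea can be illustrated with a toy residual (multistage) vector quantizer: a decoder that reads only the first-stage indices obtains a coarse reproduction, while reading the refinement indices as well improves it. The Python/NumPy sketch below designs the two codebooks with a small Lloyd (k-means) routine; it is only an illustration of embedded decoding, not the fixed- or variable-rate design algorithms introduced in the paper.

        import numpy as np

        # A minimal sketch (not the paper's design algorithm): a two-stage
        # residual vector quantizer whose description is embedded. Decoding
        # only the stage-1 indices gives a low-resolution reproduction;
        # decoding both stages gives a higher-resolution one.

        rng = np.random.default_rng(0)

        def nearest(codebook, vectors):
            """Index of the nearest codeword for each input vector."""
            d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            return d.argmin(axis=1)

        def lloyd(data, k, iters=20):
            """A few Lloyd iterations to design a k-word codebook (toy k-means)."""
            cb = data[rng.choice(len(data), k, replace=False)].copy()
            for _ in range(iters):
                idx = nearest(cb, data)
                for j in range(k):
                    members = data[idx == j]
                    if len(members):
                        cb[j] = members.mean(axis=0)
            return cb

        # Toy source: 2-D Gaussian vectors.
        x = rng.normal(size=(1000, 2))

        cb1 = lloyd(x, 4)                      # coarse (stage-1) codebook
        i1 = nearest(cb1, x)                   # first part of the description
        cb2 = lloyd(x - cb1[i1], 8)            # refinement codebook on residuals
        i2 = nearest(cb2, x - cb1[i1])         # second part of the description

        x_lo = cb1[i1]                         # decode a prefix: low resolution
        x_hi = cb1[i1] + cb2[i2]               # decode everything: refined

        mse = lambda a, b: float(((a - b) ** 2).mean())
        print("coarse MSE:", mse(x, x_lo), "refined MSE:", mse(x, x_hi))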

    BSP-fields: An Exact Representation of Polygonal Objects by Differentiable Scalar Fields Based on Binary Space Partitioning

    The problem considered in this work is to find a dimension-independent algorithm for generating signed scalar fields that exactly represent polygonal objects and satisfy the following requirements: the defining real function takes the value zero exactly at the polygonal object boundary; no extra zero-value isosurfaces are generated; and the function is C1 continuous over the entire domain. The proposed algorithms are based on a binary space partitioning (BSP) of the object by the planes passing through its polygonal faces and are independent of the object genus, the number of disjoint components, and holes in the initial polygonal mesh. Several extensions to the basic algorithm are proposed to satisfy the selected optimization criteria. The generated BSP-fields allow function-based modeling techniques to be applied to existing legacy objects from the CAD and computer animation areas, which is illustrated by several examples.
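
    The basic building block of such signed fields can be sketched for a single convex cell: the signed distances to the supporting lines (in 2D) or planes (in 3D) of the faces are merged with an R-function conjunction, which is zero exactly on the boundary, positive inside and negative outside. The Python sketch below shows this for a triangle only; the handling of non-convex objects via the BSP tree, and the guarantees about C1 continuity and the absence of spurious zero sets, belong to the paper's full algorithm and are not reproduced here.

        import numpy as np

        # Hedged sketch of the half-space/R-function building block only,
        # not the full BSP-fields construction.

        def plane_field(p, a, b):
            """Signed distance from point p to the oriented line a->b
            (positive on the left, i.e. inside a counter-clockwise polygon)."""
            a, b = np.asarray(a, float), np.asarray(b, float)
            n = np.array([-(b - a)[1], (b - a)[0]])   # left normal of the edge
            return (np.asarray(p, float) - a) @ (n / np.linalg.norm(n))

        def r_and(f1, f2):
            """R-function conjunction (set intersection); its zero set is the
            boundary of the intersection of the two half-planes."""
            return f1 + f2 - np.sqrt(f1 ** 2 + f2 ** 2)

        # Counter-clockwise triangle.
        A, B, C = (0.0, 0.0), (2.0, 0.0), (0.0, 2.0)

        def field(p):
            f = r_and(plane_field(p, A, B), plane_field(p, B, C))
            return r_and(f, plane_field(p, C, A))

        print(field((0.5, 0.5)))   # inside      -> positive
        print(field((1.0, 0.0)))   # on edge A-B -> zero
        print(field((3.0, 3.0)))   # outside     -> negative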

    TetSplat: Real-time Rendering and Volume Clipping of Large Unstructured Tetrahedral Meshes

    We present a novel approach to interactive visualization and exploration of large unstructured tetrahedral meshes. These massive 3D meshes are used in mission-critical CFD and structural mechanics simulations, and typically sample multiple field values on several million unstructured grid points. Our method relies on pre-processing the tetrahedral mesh to partition it into non-convex boundary and internal fragments that are subsequently encoded into compressed multi-resolution data representations. These compact hierarchical data structures are then adaptively rendered and probed in real time on a commodity PC. Our point-based rendering algorithm, inspired by QSplat, employs a simple but highly efficient splatting technique that guarantees interactive frame rates regardless of the size of the input mesh and the available rendering hardware. It furthermore allows real-time probing of the volumetric data set through constructive solid geometry operations, as well as interactive editing of color transfer functions for an arbitrary number of field values. Thus, the presented visualization technique allows end-users, for the first time, to interactively render and explore very large unstructured tetrahedral meshes on relatively inexpensive hardware.
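
    One way such QSplat-inspired renderers keep frame rates independent of mesh size is budget-limited traversal of a precomputed bounding-sphere hierarchy: each frame, the nodes with the largest screen-space extent are refined first, and traversal stops once a fixed splat budget is reached. The Python sketch below illustrates only that selection loop under assumed, simplified data structures; the tetrahedral-mesh partitioning, compression, transfer-function editing and CSG probing described in the paper are not reproduced.

        import heapq
        from dataclasses import dataclass, field
        from typing import List, Tuple

        # Illustrative only: budget-limited selection of splats from a
        # bounding-sphere hierarchy (names and structures are assumptions).

        @dataclass
        class Node:
            center: Tuple[float, float, float]
            radius: float
            value: float                       # averaged field value at this node
            children: List["Node"] = field(default_factory=list)

        def projected_size(node: Node, eye) -> float:
            """Crude screen-space extent: radius over distance to the eye."""
            d = sum((c - e) ** 2 for c, e in zip(node.center, eye)) ** 0.5
            return node.radius / max(d, 1e-9)

        def select_splats(root: Node, eye, budget: int) -> List[Node]:
            """Refine the visually largest nodes first; stop near the budget."""
            heap = [(-projected_size(root, eye), id(root), root)]
            splats: List[Node] = []
            while heap and len(splats) + len(heap) < budget:
                _, _, node = heapq.heappop(heap)
                if not node.children:          # leaf: drawn as-is
                    splats.append(node)
                    continue
                for child in node.children:    # refine: replace node by children
                    heapq.heappush(heap, (-projected_size(child, eye), id(child), child))
            # Budget reached: draw the remaining nodes at their current level.
            splats.extend(node for _, _, node in heap)
            return splats

        leaf1 = Node((0.0, 0.0, 0.0), 0.5, 1.0)
        leaf2 = Node((1.0, 0.0, 0.0), 0.5, 2.0)
        root = Node((0.5, 0.0, 0.0), 1.0, 1.5, [leaf1, leaf2])

        for n in select_splats(root, eye=(0.0, 0.0, 5.0), budget=8):
            print("splat at", n.center, "radius", n.radius, "value", n.value)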

    Methods in general model localization

    The aim of this study was to evaluate and test methods that could improve local estimates of a general model fitted to a large area. In the first three studies, the intention was to divide the study area into sub-areas that were as homogeneous as possible with respect to the residuals of the general model; in the fourth study, the localization was based on the local neighbourhood. Under spatial autocorrelation (SA), points that are closer together in space are more likely to be similar than points that are farther apart. Local indicators of SA (LISAs) test the similarity of data clusters. A LISA was calculated for every observation in the dataset, and, together with the spatial position and the residual of the global model, the data were segmented using two different methods: classification and regression trees (CART) and the multiresolution segmentation algorithm (MS) of the eCognition software. The general model was then re-fitted (localized) to the resulting sub-areas. In kriging, the SA is modelled with a variogram, and the spatial correlation is a function of the distance (and direction) between the observation and the point of calculation. A general trend is corrected with the residual information of the neighbourhood, whose size is controlled by the number of nearest neighbours; nearness is measured as Euclidean distance. With all methods, the root mean square errors (RMSEs) were lower, but with the methods that segmented the study area, the spread of the individual localized RMSEs was wide. Therefore, an element capable of controlling the division or localization should be included in the segmentation-localization process. Kriging, on the other hand, provided stable estimates when the number of neighbours was sufficient (over 30), and thus offers the best potential for further studies. CART could also be combined with kriging or with non-parametric methods, such as most similar neighbours (MSN).

    Spatial statistics can be used to calibrate the predictions of general regression models covering a large study area, yielding increasingly accurate local estimates. This regional inaccuracy has been an obstacle to using general models, but if it can be reduced, general models can be incorporated into, for example, inventory and assessment systems for large forest areas. The advantage of general models is their simplicity and ease of use. The study area of the dissertation was southern Finland. The dissertation examined and compared different methods for localizing the predictions given by a regression model. In localization, the local bias, that is, the difference between the true measured value and the model prediction, is reduced or removed entirely for a region. Common to all the methods is that they exploit the spatial autocorrelation between observations: two nearby objects are more likely to be similar than two objects farther apart, so deviations of the surroundings from the general mean can be used to estimate a neighbour. In some of the methods, the study area was divided into smaller, maximally homogeneous sub-areas to which the original model was re-fitted (localized); in others, localization is always carried out using the observations in the local surroundings of each observation, the so-called neighbourhood. With all methods, the residual error (RMSE) decreased, but with the methods that localized by dividing the area, there was large variation in the localized RMSEs. Therefore, an additional element should be attached to these methods to control the division and localization; this would make it possible to assess whether particular divisions as a whole, or individual sub-areas, are worthwhile from the localization point of view. Neighbourhood-based localization, on the other hand, gave stable predictions when the number of neighbours was sufficient (over 30). This option therefore offers the best potential for further research, since it can be combined with the other methods used in the dissertation or with non-parametric methods.

    Multiscale likelihood analysis and complexity penalized estimation

    We describe here a framework for a certain class of multiscale likelihood factorizations wherein, in analogy to a wavelet decomposition of an L^2 function, a given likelihood function has an alternative representation as a product of conditional densities reflecting information in both the data and the parameter vector localized in position and scale. The framework is developed as a set of sufficient conditions for the existence of such factorizations, formulated in analogy to those underlying a standard multiresolution analysis for wavelets, and hence can be viewed as a multiresolution analysis for likelihoods. We then consider the use of these factorizations in the task of nonparametric, complexity penalized likelihood estimation. We study the risk properties of certain thresholding and partitioning estimators, and demonstrate their adaptivity and near-optimality, in a minimax sense over a broad range of function spaces, based on squared Hellinger distance as a loss function. In particular, our results provide an illustration of how properties of classical wavelet-based estimators can be obtained in a single, unified framework that includes models for continuous, count and categorical data types.
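
    A classical instance of such a factorization is the multiscale representation of independent Poisson counts on 2^J bins: the joint likelihood factors into a Poisson term for the grand total and a binomial "splitting" term at every internal node of a recursive dyadic partition, much as a Haar decomposition separates coarse sums from local differences. The Python check below verifies this identity numerically; it does not implement the thresholding or complexity-penalized estimators analysed in the paper.

        import numpy as np
        from scipy import stats

        # Numerical check of the Poisson multiscale factorization on a
        # recursive dyadic partition (illustration only).

        rng = np.random.default_rng(3)
        J = 3
        lam = rng.uniform(0.5, 5.0, size=2 ** J)   # per-bin Poisson intensities
        x = rng.poisson(lam)                        # observed counts

        # Direct likelihood: independent Poisson terms.
        loglik_direct = stats.poisson.logpmf(x, lam).sum()

        # Multiscale likelihood: one Poisson term for the total, then a
        # binomial split (left child given parent total) at every node.
        loglik_ms = stats.poisson.logpmf(x.sum(), lam.sum())
        counts, intens = x.copy(), lam.copy()
        while len(counts) > 1:
            parent_counts = counts[0::2] + counts[1::2]
            parent_intens = intens[0::2] + intens[1::2]
            rho = intens[0::2] / parent_intens      # success prob. of left child
            loglik_ms += stats.binom.logpmf(counts[0::2], parent_counts, rho).sum()
            counts, intens = parent_counts, parent_intens

        print("direct    :", loglik_direct)
        print("multiscale:", loglik_ms)             # agree up to rounding error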
