24 research outputs found

    Context-based Space Filling Curves


    Segmentation Based Image Scanning

    This paper deals with the separate scanning of individual image segments. A new image-processing approach based on image segmentation and per-segment scanning is presented. The 1-dimensional representation of the individual segments exhibits higher neighboring-pixel similarity than the 1-dimensional representation of the original image. This increased adjacent-pixel similarity is achieved even without applying recursive 2-dimensional scanning methods [4], such as Peano-Hilbert scanning [1]. The resulting 1-dimensional image representation provides a good basis for lossless compression methods such as entropy coding. The paper also analyzes, from the entropy point of view, segment pixels scanned by the traditional method and their adjacent-pixel differences. These results indicate that lossy compression methods could also be applied with this approach and might improve the final results, as confirmed by the results of a simple prediction algorithm presented in the paper. Applying more complex and sophisticated lossy compression algorithms will be part of future work.
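    The abstract's central claim — that scanning each segment separately yields smaller adjacent-pixel differences, and hence lower difference entropy, than a raster scan of the whole image — can be illustrated with a minimal sketch on a hypothetical two-segment toy image (not the paper's algorithm or data):

```python
import numpy as np

def diff_entropy(seq):
    """Shannon entropy (bits/symbol) of adjacent-sample differences."""
    d = np.diff(seq.astype(np.int32))
    _, counts = np.unique(d, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# Toy image: two flat segments with different gray levels.
img = np.zeros((8, 8), dtype=np.uint8)
img[:, 4:] = 200  # right half is a second "segment"

# A raster scan of the whole image crosses the segment boundary in every row,
# producing large difference values that inflate the entropy.
raster = img.flatten()

# Scanning each segment separately keeps similar pixels adjacent.
seg_seqs = [img[:, :4].flatten(), img[:, 4:].flatten()]

print(diff_entropy(raster))                    # > 1 bit: boundary jumps
print(max(diff_entropy(s) for s in seg_seqs))  # 0.0: each segment is flat
```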

    Scale-Space Splatting: Reforming Spacetime for the Cross-Scale Exploration of Integral Measures in Molecular Dynamics

    Understanding large amounts of spatiotemporal data from particle-based simulations, such as molecular dynamics, often relies on the computation and analysis of aggregate measures. These, however, by virtue of aggregation, hide structural information about the space/time localization of the studied phenomena. This leads to degenerate cases where the measures fail to capture distinct behaviour. In order to drill into these aggregate values, we propose a multi-scale visual exploration technique. Our novel representation, based on partial domain aggregation, enables the construction of a continuous scale-space for discrete datasets and the simultaneous exploration of scales in both space and time. We link these two scale-spaces in a scale-space space-time cube and model linked views as orthogonal slices through this cube, thus enabling the rapid identification of spatio-temporal patterns at multiple scales. To demonstrate the effectiveness of our approach, we showcase an advanced exploration of a protein-ligand simulation. Comment: 11 pages, 9 figures, IEEE SciVis 201
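    The general idea of a scale-space over an aggregate measure — smoothing a per-timestep quantity at progressively coarser scales so that short-lived events visible at fine scales are averaged out at coarse ones — can be sketched with generic Gaussian smoothing on a hypothetical 1-D measure (this is a textbook scale-space construction, not the paper's partial-domain-aggregation method):

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian filter kernel."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def scale_space(signal, sigmas):
    """Stack of progressively smoothed copies of a per-frame measure:
    fine rows reveal, coarse rows hide, localized events."""
    return np.stack([
        np.convolve(signal, gaussian_kernel(s, int(3 * s) + 1), mode="same")
        for s in sigmas
    ])

# Hypothetical per-timestep aggregate with one short-lived spike at t = 80.
t = np.arange(200)
measure = np.exp(-((t - 80) ** 2) / 20.0)

stack = scale_space(measure, sigmas=[1, 4, 16])
print(stack.shape)  # (3, 200): one row per scale
```

Slicing such a stack along the scale axis is the 1-D analogue of the orthogonal slices through the scale-space space-time cube described above.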

    Scanning and Sequential Decision Making for Multi-Dimensional Data - Part I: the Noiseless Case

    We investigate the problem of scanning and prediction ("scandiction", for short) of multidimensional data arrays. This problem arises in several aspects of image and video processing, such as predictive coding, for example, where an image is compressed by coding the error sequence resulting from scandicting it. Thus, it is natural to ask what the optimal method to scan and predict a given image is, what the resulting minimum prediction loss is, and whether there exist specific scandiction schemes which are universal in some sense. Specifically, we investigate the following problems: First, modeling the data array as a random field, we wish to examine whether there exists a scandiction scheme which is independent of the field's distribution, yet asymptotically achieves the same performance as if this distribution were known. This question is answered in the affirmative for the set of all spatially stationary random fields and under mild conditions on the loss function. We then discuss the scenario where a non-optimal scanning order is used, yet accompanied by an optimal predictor, and derive bounds on the excess loss compared to optimal scanning and prediction. This paper is the first part of a two-part paper on sequential decision making for multi-dimensional data. It deals with clean, noiseless data arrays. The second part deals with noisy data arrays, namely, with the case where the decision maker observes only a noisy version of the data, yet it is judged with respect to the original, clean data. Comment: 46 pages, 2 figures. Revised version: title changed, section 1 revised, section 3.1 added, a few minor/technical corrections made
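    The notion of excess loss under a suboptimal scan order can be made concrete with a toy scandiction experiment. Everything below is hypothetical (a smooth gradient field, a last-value predictor, squared-error loss) and illustrates only the problem setup, not the paper's universal schemes or bounds:

```python
import numpy as np

def scandiction_loss(field, scan, predictor):
    """Average squared-error loss when visiting `field` in `scan` order
    and predicting each value from the previously observed ones."""
    seq = np.array([field[i, j] for i, j in scan], dtype=float)
    preds = np.array([predictor(seq[:t]) for t in range(len(seq))])
    return np.mean((seq - preds) ** 2)

# Simple predictor: repeat the last observed value (0 before any observation).
last_value = lambda past: past[-1] if len(past) else 0.0

# A field with smooth horizontal gradients: rows are 0, 1, ..., 7.
field = np.tile(np.arange(8.0), (8, 1))

raster = [(i, j) for i in range(8) for j in range(8)]
# Boustrophedon ("snake") scan avoids the large jump at each row end,
# so the same predictor incurs a smaller loss under this scan order.
snake = [(i, j if i % 2 == 0 else 7 - j) for i in range(8) for j in range(8)]

print(scandiction_loss(field, raster, last_value))  # large row-wrap errors
print(scandiction_loss(field, snake, last_value))   # lower loss
```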

    Neural Space-filling Curves

    We present Neural Space-filling Curves (SFCs), a data-driven approach to infer a context-based scan order for a set of images. Linear ordering of pixels forms the basis for many applications such as video scrambling, compression, and auto-regressive models used in generative modeling for images. Existing algorithms resort to a fixed scanning order such as raster scan or Hilbert scan. Instead, our work learns a spatially coherent linear ordering of pixels from a dataset of images using a graph-based neural network. The resulting Neural SFC is optimized for an objective suitable for the downstream task when the image is traversed in the scan order. We show the advantage of using Neural SFCs in downstream applications such as image compression. Code and additional results will be made available at https://hywang66.github.io/publication/neuralsfc
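    For reference, the fixed Hilbert scan that such learned orderings are compared against can be generated with the standard distance-to-coordinate bit manipulation (a textbook sketch, not the paper's code). Unlike raster order, which jumps across the image at every row end, consecutive cells along a Hilbert curve are always grid neighbours:

```python
def hilbert_d2xy(order, d):
    """Map distance d along an order-`order` Hilbert curve to (x, y)
    on a 2**order x 2**order grid (standard bit-manipulation algorithm)."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

order = 3  # 8x8 grid
scan = [hilbert_d2xy(order, d) for d in range(64)]

# Every step between consecutive cells has Manhattan distance 1.
steps = [abs(a[0] - b[0]) + abs(a[1] - b[1]) for a, b in zip(scan, scan[1:])]
print(max(steps))  # 1
```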

    Topographic map visualization from adaptively compressed textures

    Raster-based topographic maps are commonly used in geoinformation systems to overlay geographic entities on top of digital terrain models. Using compressed texture formats for encoding topographic maps reduces latency while visualizing large geographic datasets. Topographic maps combine high-frequency content with large uniform regions, making current compressed texture formats inappropriate for encoding them. In this paper we present a method for locally-adaptive compression of topographic maps. Key elements include a Hilbert scan to maximize spatial coherence, efficient encoding of homogeneous image regions through arbitrarily-sized texel runs, a cumulative run-length encoding supporting fast random access, and a compression algorithm supporting lossless and lossy compression. Our scheme can be easily implemented on current programmable graphics hardware, allowing real-time GPU decompression and rendering of bilinear-filtered topographic maps. Postprint (published version)
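    The cumulative run-length idea — storing each run's end position so that any texel can be fetched by binary search rather than by walking all preceding runs — can be sketched as follows (an illustrative CPU sketch under assumed data, not the paper's GPU encoding):

```python
import bisect
from itertools import groupby

def rle_encode(texels):
    """Run-length encode, storing cumulative run ends for random access."""
    values, ends = [], []
    pos = 0
    for v, run in groupby(texels):
        pos += len(list(run))
        values.append(v)
        ends.append(pos)  # cumulative length up to and including this run
    return values, ends

def rle_lookup(values, ends, i):
    """Fetch texel i in O(log #runs) via binary search on cumulative ends."""
    return values[bisect.bisect_right(ends, i)]

# Hypothetical texel stream (e.g. one Hilbert-ordered strip of a map).
texels = [5, 5, 5, 5, 9, 9, 2, 2, 2, 2, 2, 7]
values, ends = rle_encode(texels)

print(values, ends)  # [5, 9, 2, 7] [4, 6, 11, 12]
assert all(rle_lookup(values, ends, i) == texels[i] for i in range(len(texels)))
```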

    Hedgerow structures in an agricultural landscape: a study using an adaptive Hilbert curve and Markov chains

    In this article we present an approach coupling a space-filling curve with a Markov chain to analyze spatial data on the location of hedgerows. Because of the spatial heterogeneity of the data, we use an adaptive Hilbert curve, which linearizes space while adjusting locally to the data density. To exploit the resulting sequence, it is then necessary to characterize the distance between a point and its predecessor on the curve, as well as the local density. We propose to compute an access time to a point from the preceding point using the notion of cut depth. This variable, coupled with the variables characterizing the hedgerows, is then analyzed with a Markov model. We present and interpret the results obtained on a dataset of about 10,000 hedgerow segments from an area of the lower Durance valley (Basse vallée de la Durance).
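    The final analysis step — fitting a Markov model to a linearized spatial sequence — can be sketched by estimating a first-order transition matrix from a symbol sequence. The two states and the sequence below are hypothetical stand-ins for variables observed along the curve, not the paper's actual variables or data:

```python
import numpy as np

def transition_matrix(seq, n_states):
    """Estimate a first-order Markov transition matrix from a linearized
    sequence of discrete states (row-normalized pair counts)."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(seq, seq[1:]):
        counts[a, b] += 1
    row = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row, out=np.zeros_like(counts), where=row > 0)

# Hypothetical states along the curve: 0 = no hedgerow, 1 = hedgerow.
seq = [0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1]
P = transition_matrix(seq, 2)
print(P)  # each row sums to 1; diagonal shows persistence of runs
```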