
    Variable-resolution Compression of Vector Data

    The compression of spatial data is a promising solution to reduce data storage space and to decrease the transmission time of spatial data over the Internet. This paper proposes a new method for variable-resolution compression of vector data. Three key steps are encompassed in the proposed method, namely the simplification of vector data via the elimination of vertices, the compression of the removed vertices, and the decoding of the compressed vector data. The proposed compression method was implemented and applied to compress vector data to investigate its performance in terms of compression ratio and distortion of geometric shapes. The results show that the proposed method provides a feasible and efficient solution for the compression of vector data: it achieves good compression ratios and maintains the main shape characteristics of the spatial objects within the compressed vector data.
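
    The abstract does not spell out the elimination criterion, so as a rough, hypothetical illustration of the first step (simplification by vertex elimination), the following Python sketch uses a Douglas-Peucker-style perpendicular-distance test to split a polyline into retained and removed vertices; in the paper's pipeline the removed vertices would then be compressed separately and restored during decoding.

        # Sketch only: vertex elimination for one polyline, assuming a
        # Douglas-Peucker-style distance threshold; the paper's actual rule
        # and its encoding of the removed vertices may differ.
        def point_segment_distance(p, a, b):
            (px, py), (ax, ay), (bx, by) = p, a, b
            dx, dy = bx - ax, by - ay
            if dx == 0 and dy == 0:
                return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
            t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
            cx, cy = ax + t * dx, ay + t * dy
            return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

        def simplify(points, tol):
            """Return (kept_indices, removed_indices) for a list of (x, y) vertices."""
            keep = {0, len(points) - 1}
            stack = [(0, len(points) - 1)]
            while stack:
                i, j = stack.pop()
                if j - i < 2:
                    continue
                k, d = max(((m, point_segment_distance(points[m], points[i], points[j]))
                            for m in range(i + 1, j)), key=lambda md: md[1])
                if d > tol:                          # vertex too important to eliminate
                    keep.add(k)
                    stack.extend([(i, k), (k, j)])
            kept = sorted(keep)
            removed = [m for m in range(len(points)) if m not in keep]
            return kept, removed                     # removed vertices go to the compression step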

    Management of spatial data for visualization on mobile devices

    Vector-based mapping is emerging as a preferred format in Location-Based Services (LBS), because it can deliver an up-to-date and interactive map visualization. The Progressive Transmission (PT) technique has been developed to enable the efficient transmission of vector data over the internet by delivering various incremental levels of detail (LoD). However, it is still challenging to apply this technique in a mobile context due to many inherent limitations of mobile devices, such as small screen size, slow processors and limited memory. Taking account of these limitations, PT has been extended by developing a framework of efficient data management for the visualization of spatial data on mobile devices. A data generalization framework is proposed and implemented in a software application. This application can significantly reduce the volume of data for transmission and enable quick access to a simplified version of the data while preserving appropriate visualization quality. Using volunteered geographic information as a case study, the framework shows flexibility in delivering up-to-date spatial information from dynamic data sources. Three models of PT are designed and implemented to transmit the additional LoD refinements: a full-scale PT as an inverse of generalisation, a view-dependent PT, and a heuristically optimised view-dependent PT. These models are evaluated with user trials and application examples. The heuristically optimised view-dependent PT has shown a significant enhancement over the traditional PT in terms of bandwidth saving and smoothness of transitions. A parallel data management strategy, with three corresponding algorithms, has been developed to handle LoD spatial data on mobile clients. This strategy enables map rendering to be performed in parallel with a process which retrieves the data for the next map location the user will require. A view-dependent approach has been integrated to monitor the volume of each LoD for the visible area. The demonstration of a flexible rendering style shows its potential use in visualizing dynamic geoprocessed data. Future work may extend this to integrate topological constraints and semantic constraints for enhancing the vector map visualization.
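
    As a minimal sketch of the view-dependent progressive transmission idea, the following Python fragment picks the next batch of LoD refinements to send: refinements not yet transmitted are filtered to the visible area and sent coarse levels first. The Refinement record, the bounding-box test and the batching policy are illustrative assumptions, not the thesis' actual data structures; the client would apply each batch on top of the simplified base map it already holds.

        # Sketch only: selecting the next view-dependent LoD increment to transmit.
        from dataclasses import dataclass
        from typing import List, Set, Tuple

        BBox = Tuple[float, float, float, float]           # (min_x, min_y, max_x, max_y)

        @dataclass(frozen=True)
        class Refinement:
            level: int                                      # LoD level this refinement belongs to
            x: float                                        # representative coordinate
            y: float

        def in_view(r: Refinement, view: BBox) -> bool:
            min_x, min_y, max_x, max_y = view
            return min_x <= r.x <= max_x and min_y <= r.y <= max_y

        def next_batch(refinements: List[Refinement], sent: Set[Refinement],
                       view: BBox, batch_size: int) -> List[Refinement]:
            # unsent refinements inside the visible area, coarse levels first
            candidates = [r for r in refinements if r not in sent and in_view(r, view)]
            candidates.sort(key=lambda r: r.level)
            return candidates[:batch_size]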

    Compression of 3D models with NURBS

    With recent progress in computing, algorithmics and telecommunications, 3D models are increasingly used in various multimedia applications. Examples include visualization, gaming, entertainment and virtual reality. In the multimedia domain 3D models have traditionally been represented as polygonal meshes. This piecewise planar representation can be thought of as the analogue of bitmap images for 3D surfaces. Like bitmap images, they enjoy great flexibility and are particularly well suited to describing information captured from the real world, through, for instance, scanning processes. They suffer, however, from the same shortcomings, namely limited resolution and large storage size. The compression of polygonal meshes has been a very active field of research in the last decade and rather efficient compression algorithms have been proposed in the literature that greatly mitigate the high storage costs. However, such a low-level description of a 3D shape has a bounded performance. More efficient compression should be reachable through the use of higher-level primitives. This idea has been explored to a great extent in the context of model-based coding of visual information. In such an approach, when compressing the visual information a higher-level representation (e.g., a 3D model of a talking head) is obtained through analysis methods. This can be seen as an inverse projection problem. Once this task is fulfilled, the resulting parameters of the model are coded instead of the original information. It is believed that if the analysis module is efficient enough, the total cost of coding (in a rate-distortion sense) will be greatly reduced. The relatively poor performance and high complexity of currently available analysis methods (except for specific cases where a priori knowledge about the nature of the objects is available) have prevented a wide deployment of coding techniques based on such an approach. Progress in computer graphics has, however, changed this situation. In fact, nowadays, an increasing amount of picture, video and 3D content is generated by synthesis processing rather than coming from a capture device such as a camera or a scanner. This means that the underlying model in the synthesis stage can be used for their efficient coding without the need for a complex analysis module. In other words, it would be a mistake to attempt to compress a low-level description (e.g., a polygonal mesh) when a higher-level one is available from the synthesis process (e.g., a parametric surface). This is, however, what is usually done in the multimedia domain, where higher-level 3D model descriptions are converted to polygonal meshes, if only because of the lack of standard coded formats for the former. On a parallel but related path, the way we consume audio-visual information is changing. In contrast to the recent past and a large part of today's applications, interactivity is becoming a key element in the way we consume information. In the context of interest in this dissertation, this means that when coding visual information (an image or a video, for instance), previously obvious considerations such as the choice of sampling parameters are not so obvious anymore. In fact, as in an interactive environment the effective display resolution can be controlled by the user through zooming, there is no clear optimal setting for the sampling period.
    This means that, because of interactivity, the representation used to code the scene should allow the display of objects at a variety of resolutions, ideally up to infinity. One way to resolve this problem would be extensive over-sampling, but this approach is unrealistic and too expensive to implement in many situations. The alternative would be to use a resolution-independent representation. In the realm of 3D modeling, such representations are usually available when the models are created by an artist on a computer. The scope of this dissertation is precisely the compression of 3D models in higher-level forms. Direct coding in such a form should yield improved rate-distortion performance while providing a large degree of resolution independence. There has not been, so far, any major attempt to efficiently compress these representations, such as parametric surfaces. This thesis proposes a solution to fill this gap. A variety of higher-level 3D representations exist, of which parametric surfaces are a popular choice among designers. Within parametric surfaces, Non-Uniform Rational B-Splines (NURBS) enjoy great popularity, as a wide range of NURBS-based modeling tools are readily available. Recently, NURBS have been included in the Virtual Reality Modeling Language (VRML) and its next-generation descendant, eXtensible 3D (X3D). The nice properties of NURBS and their widespread use have led us to choose them as the form we use for the coded representation. The primary goal of this dissertation is the definition of a system for coding 3D NURBS models with guaranteed distortion. The basis of the system is entropy-coded differential pulse code modulation (DPCM). In the case of NURBS, guaranteeing the distortion is not trivial, as some of its parameters (e.g., knots) have a complicated influence on the overall surface distortion. To this end, a detailed distortion analysis is performed. In particular, previously unknown relations between the distortion of knots and the resulting surface distortion are demonstrated. Compression efficiency is pursued at every stage and simple yet efficient entropy coder realizations are defined. The special case of degenerate and closed surfaces with duplicate control points is addressed and an efficient yet simple coding is proposed to compress the duplicate relationships. Encoder aspects are also analyzed. Optimal predictors are found that perform well across a wide class of models. Simplification techniques are also considered for improved compression efficiency at negligible distortion cost. Transmission over error-prone channels is also considered and an error-resilient extension is defined. The data stream is partitioned by independently coding small groups of surfaces and inserting the necessary resynchronization markers. Simple strategies for achieving the desired level of protection are proposed. The same extension also serves the purpose of random access and on-the-fly reordering of the data stream.
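
    As a rough illustration of the entropy-coded DPCM idea, the following Python sketch applies a simple previous-sample predictor to a sequence of control points and uniformly quantizes the residuals, so the quantization step bounds the per-coordinate reconstruction error by delta/2. The predictor, the scan order and the quantizer are assumptions for illustration; the dissertation's knot and weight handling and its actual entropy coder are not reproduced here.

        import numpy as np

        # Sketch only: closed-loop DPCM of control-point coordinates with a uniform
        # residual quantizer; the integers in 'symbols' would be entropy coded.
        def dpcm_encode(points, delta):
            """points: (N, 3) array of control points in a fixed scan order."""
            symbols, prev = [], np.zeros(3)
            for p in points:
                residual = p - prev                        # previous-sample predictor
                q = np.rint(residual / delta).astype(int)  # uniform quantization
                symbols.append(q)
                prev = prev + q * delta                    # track the decoder's reconstruction
            return np.asarray(symbols)

        def dpcm_decode(symbols, delta):
            recon, prev = [], np.zeros(3)
            for q in symbols:
                prev = prev + q * delta
                recon.append(prev)
            return np.asarray(recon)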

    Realistic Visualization of Animated Virtual Cloth

    Photo-realistic rendering of real-world objects is a broad research area with applications in many fields, such as computer-generated films, entertainment and e-commerce. Within photo-realistic rendering, the rendering of cloth is a subarea which involves many important aspects, ranging from material surface reflection properties and macroscopic self-shadowing to animation sequence generation and compression. In this thesis, besides an introduction to the topic and a broad overview of related work, different methods to handle the major aspects of cloth rendering are described. Material surface reflection properties play an important part in reproducing the look & feel of materials, that is, in identifying a material only by looking at it. The BTF (bidirectional texture function), as a function of viewing and illumination direction, is an appropriate representation of reflection properties. It captures effects caused by the mesostructure of a surface, like roughness, self-shadowing, occlusion, inter-reflections, subsurface scattering and color bleeding. Unfortunately, a BTF data set of a material consists of hundreds to thousands of images, which far exceeds the memory capacity of current personal computers. This work describes the first usable method to efficiently compress and decompress BTF data for rendering at interactive to real-time frame rates. It is based on PCA (principal component analysis) of the BTF data set. While preserving the important visual aspects of the BTF, the achieved compression rates allow the storage of several different data sets in the main memory of consumer hardware, while maintaining a high rendering quality. Correct handling of complex illumination conditions plays another key role for the realistic appearance of cloth. Therefore, an extension of the BTF compression and rendering algorithm is described, which adds support for distant direct HDR (high-dynamic-range) illumination stored in environment maps. To further enhance the appearance, macroscopic self-shadowing has to be taken into account. For the visualization of folds and a life-like 3D impression, these kinds of shadows are absolutely necessary. This work describes two methods to compute these shadows. The first is seamlessly integrated into the illumination part of the rendering algorithm and optimized for static meshes. Furthermore, another method is proposed which allows the handling of dynamic objects. It uses hardware-accelerated occlusion queries for the visibility determination. In contrast to other algorithms, the presented algorithm, despite its simplicity, is fast and produces fewer artifacts than other methods. As a plus, it incorporates changeable distant direct high-dynamic-range illumination. The human perception system is the main target of any computer graphics application and can also be treated as part of the rendering pipeline. Therefore, optimization of the rendering itself can be achieved by analyzing human perception of certain visual aspects in the image. As part of this thesis, an experiment is introduced that evaluates human shadow perception in order to speed up shadow rendering, and optimization approaches are provided. Another subarea of cloth visualization in computer graphics is the animation of cloth and avatars for presentations. This work also describes two new methods for automatic generation and compression of animation sequences.
    The first method, which generates completely new, customizable animation sequences, is based on the concept of finding similarities in animation frames of a given basis sequence. Identifying these similarities allows jumps within the basis sequence to generate endless new sequences. Transmission of any animated 3D data over bandwidth-limited channels, such as wide-area networks or links to less powerful clients, requires efficient compression schemes. The second method included in this thesis in the animation field is a geometry data compression scheme. Similar to the BTF compression, it uses PCA in combination with clustering algorithms to segment similarly moving parts of the animated objects, achieving high compression rates combined with very accurate reconstruction quality.
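
    As a generic illustration of the PCA-based compression used above for both the BTF data and the animated geometry, the following Python sketch arranges the data as a matrix (for a BTF: one row per texel, one column per view/light image), keeps a few principal components via a truncated SVD and reconstructs by a weighted sum of the retained basis vectors. The layout and component count are assumptions, and the clustering step used for the animation compression is not shown.

        import numpy as np

        # Sketch only: truncated-SVD (PCA) compression of a data matrix.
        def compress(matrix, n_components):
            mean = matrix.mean(axis=0)
            u, s, vt = np.linalg.svd(matrix - mean, full_matrices=False)
            weights = u[:, :n_components] * s[:n_components]   # e.g. per-texel weights
            basis = vt[:n_components]                           # e.g. eigen-images over (view, light)
            return mean, weights, basis                         # stored instead of the full matrix

        def reconstruct(mean, weights, basis):
            return weights @ basis + mean                       # evaluated per texel at render time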

    Limited resource visualization with region-of-interest

    Ph.D. thesis (Doctor of Philosophy)

    Efficient Geometry and Illumination Representations for Interactive Protein Visualization

    This dissertation explores techniques for interactive simulation and visualization for large protein datasets. My thesis is that using efficient representations for geometric and illumination data can help in developing algorithms that achieve better interactivity for visual and computational proteomics. I show this by developing new algorithms for computation and visualization for proteins. I also show that the same insights that resulted in better algorithms for visual proteomics can also be turned around and used for more efficient graphics rendering. Molecular electrostatics is important for studying the structures and interactions of proteins, and is vital in many computational biology applications, such as protein folding and rational drug design. We have developed a system to efficiently solve the non-linear Poisson-Boltzmann equation governing molecular electrostatics. Our system simultaneously improves the accuracy and the efficiency of the solution by adaptively refining the computational grid near the solute-solvent interface. In addition, we have explored the possibility of mapping the PBE solution onto GPUs. We use pre-computed accumulation of transparency with spherical-harmonics-based compression to accelerate volume rendering of molecular electrostatics. We have also designed a time- and memory-efficient algorithm for interactive visualization of large dynamic molecules. With view-dependent precision control and memory-bandwidth reduction, we have achieved real-time visualization of dynamic molecular datasets with tens of thousands of atoms. Our algorithm is linearly scalable in the size of the molecular datasets. In addition, we present a compact mathematical model to efficiently represent the six-dimensional integrals of bidirectional surface scattering reflectance distribution functions (BSSRDFs) to render scattering effects in translucent materials interactively. Our analysis first reduces the complexity and dimensionality of the problem by decomposing the reflectance field into non-scattered and subsurface-scattered reflectance fields. While the non-scattered reflectance field can be described by 4D bidirectional reflectance distribution functions (BRDFs), we show that the scattered reflectance field can also be represented by a 4D field through pre-processing the neighborhood scattering radiance transfer integrals. We use a novel reference-points scheme to compactly represent the pre-computed integrals using a hierarchical and progressive spherical harmonics representation. Our algorithm scales linearly with the number of mesh vertices
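
    As a generic sketch of spherical-harmonics-based compression of a directional quantity (such as the pre-computed accumulated transparency mentioned above), the following Python fragment projects a spherical function onto a low-order SH basis by Monte-Carlo integration and evaluates the compact representation. The basis order, the sampling scheme and the use of scipy.special.sph_harm are illustrative assumptions, not the dissertation's actual implementation.

        import numpy as np
        from scipy.special import sph_harm

        # Sketch only: (l_max + 1)^2 SH coefficients replace the sampled spherical function.
        def sh_project(f, l_max, n_samples=4096, seed=0):
            rng = np.random.default_rng(seed)
            u, v = rng.random(n_samples), rng.random(n_samples)
            theta = 2.0 * np.pi * u                    # azimuth in [0, 2*pi)
            phi = np.arccos(2.0 * v - 1.0)             # polar angle, uniform over the sphere
            values = f(theta, phi)
            coeffs = {}
            for l in range(l_max + 1):
                for m in range(-l, l + 1):
                    y = sph_harm(m, l, theta, phi)     # SciPy argument order: (m, l, azimuth, polar)
                    coeffs[(l, m)] = 4.0 * np.pi * np.mean(values * np.conj(y))
            return coeffs

        def sh_eval(coeffs, theta, phi):
            total = 0.0
            for (l, m), c in coeffs.items():
                total = total + c * sph_harm(m, l, theta, phi)
            return np.real(total)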

    3D Mesh Simplification Techniques for Enhanced Image Based Rendering

    Three-dimensional videos and virtual reality applications have gained widespread popularity in recent years. Virtual reality creates the feeling of 'being there' and provides a more realistic experience than conventional 2D media. In order to deliver this immersive experience, it is important to satisfy two criteria, namely the visual quality of the video and timely rendering. However, it is quite difficult to satisfy both goals, especially on low-capability devices such as mobile phones. Careful analysis of the depth map and further processing may help considerably in achieving these goals. Advances in graphics hardware have tremendously reduced the time required to render the images to be displayed. However, along with this development, the demand for more realism tends to increase the complexity of the model of the virtual environment. Complex models require millions of primitives, which subsequently means millions of polygons to represent them. A wise selection of rendering technique offers one way to reduce rendering time. Mesh-based rendering is one technique which enhances the speed of rendering compared to its counterpart, pixel-based rendering. However, due to the demand for a richer experience, the number of polygons required always seems to exceed the number of polygons the graphics hardware can efficiently render. In practice, it is not feasible to store a large number of polygons because of storage limitations in mobile phone hardware. Furthermore, a larger number of polygons increases the rendering cost, which would necessitate more powerful devices. Mesh simplification techniques offer a solution for dealing with complex models. These methods simplify unimportant and redundant parts of the model, which helps in reducing the rendering cost without negatively affecting the visual quality of the scene. Mesh simplification has been extensively studied; however, it has not been applied to all areas. For example, depth is one area where generally available simplification methods are not well suited, as most of them do not handle depth discontinuities well. Moreover, some of the state-of-the-art methods are not capable of handling high-resolution depth maps. In this thesis, an attempt is made to address the problem of combining depth maps with mesh simplification. The aim of the thesis is to reduce the computational cost of rendering by taking the homogeneous and planar areas of the depth map into account, while still maintaining suitable visual quality of the rendered image. Different depth decimation techniques are implemented and compared with available state-of-the-art methods. We demonstrate that the depth decimation technique which fits planes to depth areas and considers depth discontinuities clearly outperforms the state-of-the-art methods.
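
    The following Python sketch illustrates the kind of plane-fitting depth decimation discussed above: the depth map is split into blocks, a plane is least-squares fitted to each block, and blocks that are well approximated and free of depth discontinuities can be represented by two triangles while the remaining blocks keep full detail. Block size, thresholds and the bookkeeping are hypothetical placeholders rather than the thesis' exact method.

        import numpy as np

        # Sketch only: block-wise depth decimation with plane fitting and a
        # discontinuity check; thresholds are illustrative.
        def fit_plane(block):
            h, w = block.shape
            ys, xs = np.mgrid[0:h, 0:w]
            A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
            coeff, *_ = np.linalg.lstsq(A, block.ravel(), rcond=None)
            return coeff, np.abs(A @ coeff - block.ravel()).max()

        def decimate(depth, block=16, max_err=0.01, max_jump=0.1):
            planar, detailed = [], []
            h, w = depth.shape
            for y in range(0, h - block + 1, block):
                for x in range(0, w - block + 1, block):
                    tile = depth[y:y + block, x:x + block]
                    jump = max(np.abs(np.diff(tile, axis=0)).max(),
                               np.abs(np.diff(tile, axis=1)).max())
                    coeff, err = fit_plane(tile)
                    if err <= max_err and jump <= max_jump:
                        planar.append((x, y, coeff))      # two triangles suffice here
                    else:
                        detailed.append((x, y))           # keep the full-resolution mesh
            return planar, detailed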

    Design and development of a system for vario-scale maps

    Nowadays, there are many geo-information data sources available, such as maps on the Internet, in-car navigation devices and mobile apps. All datasets used in these applications are the same in principle, and face the same issues, namely: maps of different scales are stored separately; with many separate fixed levels, a lot of information is the same but still needs to be included, which leads to duplication; with much redundant data throughout the scales, features are represented again and again, which may lead to inconsistency. Currently available maps contain significantly more levels of detail (twenty map scales on average) than in the past. These levels must be created, but the optimal strategy to do so is not known. For every user's data request, a significant part of the data remains the same, but still needs to be included, which leads to more data transfer and slower response. The interactive Internet environment is not used to its full potential for user navigation: it is common to observe lagging, popping features or flickering of newly retrieved map-scale features while using the map. This research develops principles of variable-scale (vario-scale) maps to address these issues. The vario-scale approach is an alternative for obtaining and maintaining geographical data sets at different map scales. It is based on a specific topological structure called tGAP (topological Generalized Area Partitioning), which addresses the main open issues of current solutions for managing spatial data sets of different scales, such as data redundancy, inconsistency between map scales and dynamic transfer. The objective of this thesis is to design, develop and extend the variable-scale data structures, expressed as the following research question: How to design and develop a system for vario-scale maps? To address this research question, the research has been conducted along the following outline: 1) investigate the state of the art in map generalization; 2) study the development of the vario-scale structure so far; 3) propose techniques for generating better vario-scale map content; 4) implement strategies to process really massive datasets; 5) research smooth representation of map features and its impact on user interaction. Results of our research led to new functionality, were realized in prototype developments and were tested against real-world data sets. Throughout this research we have made the following main contributions to the design and development of a system for vario-scale maps. We have: studied past vario-scale development and identified the most urgent research needs; designed the concept of granularity and presented our strategy that changes in map content should be as small and as gradual as possible (e.g., use groups, maintain the road network, support line-feature representation); introduced line features in the solution and presented a fully automated generalization process that preserves road-network features throughout all scales; proposed an approach to create a vario-scale data structure for massive datasets; demonstrated a method to generate an explicit 3D representation from the structure, which can provide a smoother user experience; developed a software prototype in which a 3D vario-scale dataset can be used to its full potential; and conducted an initial usability test. All these aspects, together with the already developed functionality, provide a more comprehensive and more unified solution for vario-scale mapping.
    Based on our research, the design and development of a system for vario-scale maps should be clearer now, and it is easier to identify the necessary steps to be taken towards an optimal solution. Our recommendations for future work are as follows. One of the contributions has been the integration of road features in the structure and their automated generalization throughout the process; integrating more map features besides roads deserves attention. We have investigated how to deal with massive datasets which do not fit in the main memory of the computer; our experiments used datasets of one province or state, with records in the order of millions. To verify our findings, it will be interesting to process even bigger datasets, with records in the order of billions (a whole continent). We have introduced a representation where map content changes as gradually as possible. It is based on a process in which: 1) explicit 3D geometry is generated from the structure; 2) a slice of the geometry is calculated; and 3) a final map is constructed from the slice. Investigating how to integrate this in a server-client pipeline on the Internet is another point of further research. Our research focus has been mainly on one specific aspect of the concept at a time; bringing all aspects together, where integration, tuning and orchestration play an important role, is another interesting research direction that deserves attention. Finally, more user testing should be carried out, including: 1) maps of sufficient cartographic quality, 2) a large testing region, and 3) the finest version of the visualization prototype.
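
    As a minimal sketch of the generalization process behind a tGAP-like structure, the following Python fragment repeatedly merges the least important face into its most compatible neighbour and records each merge event; replaying a prefix of the recorded sequence yields the map at any intermediate scale. The importance and compatibility functions (for example area and shared boundary length) and the adjacency bookkeeping are simplified assumptions; the real structure also stores the topology and geometry of every face.

        # Sketch only: building the merge hierarchy that underlies a vario-scale map.
        def build_merge_hierarchy(faces, neighbours, importance, compatibility):
            """faces: iterable of face ids; neighbours: dict face -> set of adjacent faces."""
            merges, step, active = [], 0, set(faces)
            while len(active) > 1:
                victim = min(active, key=importance)               # least important face goes first
                candidates = neighbours[victim] & active
                if not candidates:                                 # isolated face: just drop it
                    active.remove(victim)
                    continue
                parent = max(candidates, key=lambda n: compatibility(victim, n))
                merges.append((victim, parent, step))              # one record per generalization step
                for n in neighbours[victim]:                       # rewire adjacency to the parent
                    neighbours[n].discard(victim)
                    if n != parent:
                        neighbours[n].add(parent)
                        neighbours[parent].add(n)
                active.remove(victim)
                step += 1
            return merges                                          # replaying a prefix gives any map scale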