    Coarse-grained Multiresolution Structures for Mobile Exploration of Gigantic Surface Models

    We discuss our experience in creating scalable systems for distributing and rendering gigantic 3D surfaces in web environments and on common handheld devices. Our methods are based on compressed, streamable, coarse-grained multiresolution structures. By combining CPU and GPU compression technology with our multiresolution data representation, we are able to incrementally transfer, locally store, and render extremely detailed 3D mesh models with unprecedented performance on WebGL-enabled browsers, as well as on hardware-constrained mobile devices.
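As an illustration of the kind of view-dependent refinement such coarse-grained multiresolution structures enable, the following sketch selects the coarsest set of patches whose projected screen-space error stays below a pixel tolerance. This is a minimal illustration under assumed names and thresholds, not the authors' implementation:

```python
# Hypothetical sketch of coarse-grained multiresolution refinement:
# traverse a patch hierarchy and refine any node whose projected
# screen-space error exceeds a pixel tolerance.
from dataclasses import dataclass, field

@dataclass
class PatchNode:
    error: float               # object-space geometric error of this patch
    distance: float            # distance from the viewpoint
    children: list = field(default_factory=list)

def projected_error(node, screen_height_px=1080.0, fov_scale=1.0):
    # Perspective projection: object-space error shrinks with distance.
    return node.error / max(node.distance, 1e-6) * screen_height_px * fov_scale

def select_lod(node, tolerance_px=1.0, selected=None):
    """Collect the coarsest set of patches whose projected error
    stays below the pixel tolerance."""
    if selected is None:
        selected = []
    if projected_error(node) <= tolerance_px or not node.children:
        selected.append(node)   # coarse patch is good enough (or is a leaf)
    else:
        for child in node.children:
            select_lod(child, tolerance_px, selected)
    return selected
```

Because refinement decisions are made per coarse patch rather than per triangle, each selected node can map to one compressed, GPU-friendly batch of geometry, which is what makes the approach stream well.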

    Scalable exploration of highly detailed and annotated 3D models

    With the widespread availability of mobile graphics terminals and WebGL-enabled browsers, 3D graphics over the Internet is thriving. Thanks to recent advances in 3D acquisition and modeling systems, high-quality 3D models are becoming increasingly common, and are now potentially available for ubiquitous exploration. In current 3D repositories, such as Blend Swap, 3D Café or Archive3D, 3D models available for download are mostly presented through a few user-selected static images. Online exploration is limited to simple orbiting and/or low-fidelity exploration of simplified models, since photorealistic rendering of complex synthetic environments is still hardly achievable within the real-time constraints of interactive applications, especially on low-powered mobile devices or script-based Internet browsers. Moreover, navigating inside 3D environments, especially on the now-pervasive touch devices, is a non-trivial task, and usability is consistently improved by employing assisted navigation controls. In addition, 3D annotations are often used to integrate and enhance the visual information by providing spatially coherent contextual information, typically at the expense of introducing visual clutter. In this thesis, we focus on efficient representations for interactive exploration and understanding of highly detailed 3D meshes on common 3D platforms. For this purpose, we present several approaches that exploit constraints on the data representation to improve streaming and rendering performance, and camera-movement constraints to provide scalable navigation methods for interactive exploration of complex 3D environments. Furthermore, we study visualization and interaction techniques that improve the exploration and understanding of complex 3D models by exploiting guided motion control to aid the user in discovering contextual information while avoiding cluttering the visualization.
    We demonstrate the effectiveness and scalability of our approaches both in large-screen museum installations and on mobile devices, by performing interactive exploration of models ranging from 9M to 940M triangles.
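The assisted navigation mentioned above can be illustrated with a minimal constrained-orbit camera that keeps the viewpoint inside a safe envelope around the model. All names, limits, and the fixed orbit center here are illustrative assumptions, not the thesis' actual controls:

```python
# Toy constrained-orbit camera: azimuth wraps freely, while elevation
# and distance are clamped so the user can never get "lost" behind or
# inside the model. Limits are hypothetical.
import math

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def constrained_orbit(azimuth, elevation, distance,
                      min_elev=-math.pi / 3, max_elev=math.pi / 3,
                      min_dist=1.0, max_dist=50.0):
    """Return the camera position on a clamped orbit around the origin."""
    azimuth = azimuth % (2 * math.pi)                 # wrap around freely
    elevation = clamp(elevation, min_elev, max_elev)  # no flipping over poles
    distance = clamp(distance, min_dist, max_dist)    # no clipping/zoom-out
    # Spherical-to-Cartesian camera position around the origin.
    x = distance * math.cos(elevation) * math.cos(azimuth)
    y = distance * math.sin(elevation)
    z = distance * math.cos(elevation) * math.sin(azimuth)
    return (x, y, z)
```

Mapping touch drags to azimuth/elevation deltas and pinch to distance through such a clamped parametrization is one common way assisted controls keep casual users oriented.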

    Scalable exploration of 3D massive models

    Programa Oficial de Doutoramento en Tecnoloxías da Información e as Comunicacións (5032V01). This thesis introduces scalable techniques that advance the state of the art in massive model creation and exploration. Concerning model creation, we present methods for improving reality-based scene acquisition and processing, introducing an efficient implementation of scalable out-of-core point clouds and a data-fusion approach for creating detailed colored models from cluttered scene acquisitions. The core of this thesis concerns enabling technology for the exploration of general large datasets. Two novel solutions are introduced. The first is an adaptive out-of-core technique exploiting the GPU rasterization pipeline and hardware occlusion queries to create coherent batches of work for localized shader-based ray-tracing kernels, opening the door to out-of-core ray tracing with shadowing and global illumination. The second is an aggressive compression method that exploits redundancy in large models to compress data so that it fits, in a fully renderable format, in GPU memory. The method targets voxelized representations of 3D scenes, which are widely used to accelerate visibility queries on the GPU. Compression is achieved by merging subtrees that are identical up to a similarity transform, and by exploiting the skewed distribution of references to shared nodes to store child pointers using a variable-bitrate encoding. The capability and performance of all methods are evaluated on many very massive real-world scenes from several domains, including cultural heritage, engineering, and gaming.
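The subtree-merging idea behind the compression method can be sketched as a bottom-up deduplication of an octree into a directed acyclic graph. This toy version merges exactly identical subtrees only; the thesis' method additionally merges subtrees equal up to a similarity transform and compresses pointers with variable-bitrate encoding:

```python
# Toy octree-to-DAG compression: hash each node's canonical form
# bottom-up so that identical subtrees collapse into one shared entry.
# A node is either a non-tuple marker (e.g. None = empty, "full" = solid
# voxel) or a tuple of 8 children.

def compress(node, unique=None):
    """Return (canonical_key, unique), filling `unique` with exactly one
    entry per distinct subtree encountered."""
    if unique is None:
        unique = {}
    if not isinstance(node, tuple):
        return node, unique            # leaf markers are trivially shared
    # The canonical key of an inner node is the tuple of its children's keys.
    key = tuple(compress(child, unique)[0] for child in node)
    unique.setdefault(key, node)       # first occurrence becomes the shared copy
    return key, unique
```

Highly structured scenes (walls, repeated machinery, terrain) contain many such repeated subvolumes, which is why the deduplicated representation can fit in GPU memory in a still-renderable form.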

    Visual Techniques for Geological Fieldwork Using Mobile Devices

    Visual techniques in general, and 3D visualisation in particular, have seen considerable adoption within the last 30 years in the geosciences and geology. Techniques such as volume visualisation, for analysing subsurface processes, and photo-coloured LiDAR point-based rendering, for digitally exploring rock exposures at the earth's surface, were applied within geology as one of the first adopting branches of science. A large amount of digital geological surface and volume data is nowadays available to desktop-based workflows for geological applications such as hydrocarbon reservoir exploration, groundwater modelling, CO2 sequestration and, in the future, geothermal energy planning. On the other hand, analysis and data collection during fieldwork have yet to embrace this "digital revolution": sedimentary logs, geological maps and stratigraphic sketches are still captured in each geologist's individual fieldbook, and physical rock samples are still transported to the lab for subsequent analysis. Is this still necessary, or are there extended digital means of data collection and exploration in the field? Are modern digital interpretation techniques accurate and intuitive enough to meaningfully support fieldwork in geology and other geoscience disciplines? This dissertation aims to address these questions and, by doing so, close the technological gap between geological fieldwork and office workflows in geology. The emergence of mobile devices, with their vast array of physical sensors, touch-based user interfaces, high-resolution screens and digital cameras, provides a possible digital platform for field geologists. Their ubiquitous availability increases the chances of adopting digital workflows in the field without additional, expensive equipment. The use of 3D data on mobile devices in the field is furthered by the availability of 3D digital outcrop models and the increasing ease of their acquisition.
    This dissertation assesses the prospects of adopting 3D visual techniques and mobile devices within field geology. The research uses previously acquired and processed digital outcrop models in the form of textured surfaces from optical remote sensing and photogrammetry. The scientific papers in this thesis present visual techniques and algorithms to map outcrop photographs taken in the field directly onto the surface models. Automatic mapping allows the projection of photo interpretations of stratigraphy and sedimentary facies onto the 3D textured surface, while providing the domain expert with simple-to-use, intuitive tools for the photo interpretation itself. The developed visual approach, combining insight from across the computer sciences dealing with visual information, culminates in the mobile Geological Registration and Interpretation Toolset (GRIT) app, which is assessed in an outcrop analogue study of the Saltwick Formation exposed at Whitby, North Yorkshire, UK. Although applicable to a diversity of study scenarios within petroleum geology and the geosciences, the particular target application of the visual techniques is to easily provide field-based outcrop interpretations for the subsequent construction of training images for multiple-point statistics reservoir modelling, as envisaged within the VOM2MPS project. Despite the success and applicability of the visual approach, numerous drawbacks and probable future extensions are discussed in the thesis based on the conducted studies. Apart from elaborating on the more obvious limitations originating from the use of mobile devices, with their limited computing capabilities and sensor accuracies, a major contribution of this thesis is the careful analysis of conceptual drawbacks of established procedures for modelling, representing, constructing and disseminating the available surface geometry.
    A more mathematically accurate geometric description of the underlying algebraic surfaces yields improvements and future applications so far unaddressed in the literature of geology and the computational geosciences. Future extensions to the visual techniques proposed in this thesis also allow for expanded analysis, 3D exploration and improved geological subsurface modelling in general.
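The photo-to-surface mapping described above comes down to projecting surface points into the photograph through a camera model, so per-pixel interpretations can be transferred to the mesh. A minimal pinhole-projection sketch, with a hypothetical rotation-free camera (a real registration pipeline also estimates camera orientation and lens distortion), is:

```python
# Illustrative pinhole projection: map a world-space surface point to
# pixel coordinates of a camera at camera_pos looking down +Z (no
# rotation, for brevity). focal_px and image_size are hypothetical.

def project_to_photo(point, camera_pos, focal_px, image_size):
    """Return (u, v) pixel coordinates of `point`, or None if the point
    lies behind the camera or outside the photo frame."""
    x = point[0] - camera_pos[0]
    y = point[1] - camera_pos[1]
    z = point[2] - camera_pos[2]
    if z <= 0:
        return None                                   # behind the camera
    u = focal_px * x / z + image_size[0] / 2.0        # principal point at center
    v = focal_px * y / z + image_size[1] / 2.0
    if 0 <= u < image_size[0] and 0 <= v < image_size[1]:
        return (u, v)
    return None                                       # outside the frame
```

Running this per mesh vertex (or per texel) yields the lookup that lets an interpretation drawn on the photograph land on the 3D textured surface.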

    ISCR Annual Report: Fiscal Year 2004

    Interactive Spaces: Natural interfaces supporting gestures and manipulations in interactive spaces

    This doctoral dissertation focuses on the development of interactive spaces through the use of natural interfaces based on gestures and manipulative actions. In the real world, people use their senses to perceive the external environment, and they use manipulations and gestures to explore the world around them, communicate, and interact with other individuals. From this perspective, natural interfaces that exploit human sensory and explorative abilities help fill the gap between the physical and digital worlds. In the first part of this thesis we describe the work done to improve interfaces and devices for tangible, multi-touch and free-hand interaction. The idea is to design devices that also work in uncontrolled environments, and in situations where control is mostly physical, so that even the least experienced users can express their manipulative exploration and gesture communication abilities. We also analyze how these techniques can be combined to create an interactive space specifically designed for teamwork, where the natural interfaces are distributed in order to encourage collaboration. We then give examples of how these interactive scenarios can host various types of applications, facilitating, for instance, the exploration of 3D models, the enjoyment of multimedia contents, and social interaction. Finally, we discuss our results and put them in a wider context, focusing in particular on how the proposed interfaces actually improve people's lives and activities, and on how interactive spaces become places of aggregation where we can pursue objectives that are both personal and shared with others.
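As a small illustration of the multi-touch techniques discussed above, a pinch-zoom gesture can be reduced to the ratio of successive distances between two touch points. This is a generic sketch, not the dissertation's actual gesture recognizer:

```python
# Toy pinch-gesture handler: the zoom factor between two touch frames
# is the ratio of the current to the previous finger separation.
import math

def pinch_scale(prev_touches, curr_touches):
    """prev_touches/curr_touches: pairs of (x, y) touch points.
    Returns the scale factor implied by the pinch."""
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])
    d0 = dist(*prev_touches)
    d1 = dist(*curr_touches)
    if d0 == 0:
        return 1.0          # degenerate start; apply no zoom
    return d1 / d0
```

Applied frame by frame, the returned factor multiplies the current zoom level, which is the usual way touch surfaces turn a physical spreading motion into continuous scaling.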

    Seventh Biennial Report: June 2003 - March 2005

    Enhancing detailed haptic relief for real-time interaction

    This thesis presents an approach to haptic rendering, defined as the simulation of force interactions to reproduce the sensation of surface relief in dense models. Current research shows open issues in real-time haptic interaction with large meshes, with several problems affecting performance and fidelity and no dominant technique that treats them properly. Relying on pure geometric collisions when rendering highly dense mesh models (hundreds of thousands of triangles) severely degrades haptic rates, due to the sheer number of collisions that must be tracked between the mesh's faces and a haptic probe. Several bottlenecks were identified in order to enhance haptic performance: software architecture and data structures, collision detection, and accurate rendering of surface relief. To address overall software architecture and data structures, we derived a complete component framework for transforming standalone VR applications into full-fledged multi-threaded Collaborative Virtual Reality Environments (CVREs), after characterizing existing implementations into a feature-rich superset. Enhancements include: a scalable arbitrated peer-to-peer topology for scene sharing; multi-threaded components for graphics rendering, user interaction and network communications; a collaborative user-interface model for session handling; and interchangeable user roles with multi-camera perspectives, avatar awareness and shared annotations. We validate the framework by converting the existing ALICE VR Navigator into a complete CVRE, showing good performance in collaborative manipulation of complex models. To specifically address collision-detection computation, we derive a conformal-algebra treatment of collisions among points, segments, areas, and volumes, based on collision detection in conformal R{4,1} (5D) space, implemented on the GPU for faster parallel queries.
    Results show order-of-magnitude reductions in collision computation times, allowing interactive rates. Finally, the main core of the research is the haptic rendering of surface mesostructure in large meshes. Initially, a method for surface haptic rendering was proposed, using image-based Hybrid Rugosity Mesostructures (HRMs) of per-face heightfield displacements and normal maps layered on top of a simpler mesh, adding greater surface detail than is actually present. Haptic perception is achieved by modulating the haptic probe's force response using the HRM coat. A usability testbed framework was built to measure experimental performance with a common set of tests, meshes and HRMs. Trial results show the effectiveness of the proposed technique, rendering accurate 3D surface detail at high sampling rates. This local per-face method is extended into a fast global approach for haptic rendering, building a mesostructure-based atlas of depth/normal textures (HyRMA), computed from surface differences of the same mesh object at two different resolutions: original and simplified. For each triangle in the simplified mesh, an irregular prism is considered, defined by the triangle's vertices and their normals; this prism completely covers the original mesh relief over the triangle. Depth distances and surface normals within each prism are warped from object volume space to orthogonal tangent space, by means of a novel and fast method for computing barycentric coordinates at the prism, storing normals and relief in a sorted atlas. Haptic rendering is performed by colliding the probe against the atlas and applying a modulated force response at the haptic probe.
    The method is validated numerically, statistically, and perceptually in controlled user trials, achieving accurate haptic sensation of the fine features of large meshes at interactive rendering rates, with only minute loss of mesostructure detail.
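The HRM idea of modulating the probe's force response with an image-based relief coat can be sketched as follows. This is a toy version with nearest-neighbour sampling and a hypothetical spring stiffness; the actual method uses richer height/normal data and full collision handling against the base mesh:

```python
# Toy heightfield-modulated haptic response: the probe's penetration
# into a flat base face is measured against a sampled relief height,
# and a simple spring force pushes the probe back out.

def sample_height(heightfield, u, v):
    """Nearest-neighbour lookup in a heightfield over normalized
    [0,1]^2 face coordinates (rows index v, columns index u)."""
    rows, cols = len(heightfield), len(heightfield[0])
    i = min(int(v * rows), rows - 1)
    j = min(int(u * cols), cols - 1)
    return heightfield[i][j]

def haptic_force(probe_height, u, v, heightfield, stiffness=500.0):
    """Force magnitude along the face normal; zero when the probe is
    above the relief surface (no contact)."""
    surface = sample_height(heightfield, u, v)
    penetration = surface - probe_height
    return stiffness * penetration if penetration > 0 else 0.0
```

Because the lookup replaces per-triangle collision tests over the dense geometry, the force loop can run at the kilohertz rates haptic devices require while still conveying fine relief.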