64 research outputs found

    INTERACTIVE ONLINE VISUALIZATION OF COMPLEX 3D GEOMETRIES

    In the last decade, 3D datasets in the Cultural Heritage field have become extremely rich and highly detailed due to the evolution of the technologies they derive from. However, their online deployment, both for scientific and general public purposes, usually falls short in user interaction and multimedia integration. This paper presents a single solution that efficiently addresses these issues. The developed framework provides interactive, lightweight visualization of high-resolution 3D models in a web browser. It is based on the 3D Heritage Online Presenter (3DHOP) and the Three.js library, implemented on top of the WebGL API. 3DHOP's capabilities are fully exploited and enhanced with new, high-level functionalities. The approach is especially suited to complex geometry and is adapted to archaeological and architectural environments. Thus, the multi-dimensional documentation of the archaeological site of Meteora, in central Greece, is chosen as the case study. Various navigation paradigms are implemented, and the data structure is enriched with the incorporation of multiple 3D model viewers. Furthermore, a metadata repository, comprising ortho-images, photographic documentation, video, and text, is accessed straightforwardly through the inspection of the main 3D scene of Meteora via a system of interconnections.
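The interconnection system described above, linking the inspected 3D scene to a metadata repository, can be sketched as a simple lookup from pickable scene elements to media records. All identifiers and file names below are hypothetical, and 3DHOP itself is a JavaScript framework, so this Python sketch only illustrates the data flow, not the actual API.

```python
# Hypothetical mapping from pickable hotspots in the 3D scene to
# records in the metadata repository (ortho-images, video, text).
METADATA = {
    "monastery_great_meteoron": {
        "ortho_images": ["great_meteoron_facade.tif"],
        "video": "great_meteoron_tour.mp4",
        "text": "Founded in the 14th century ...",
    },
}

def on_hotspot_picked(hotspot_id, repository=METADATA):
    """Return the media records attached to a picked hotspot, or None."""
    return repository.get(hotspot_id)
```

A pick event in the viewer would call `on_hotspot_picked` and hand the returned records to the appropriate media panel.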

    Interactive high fidelity visualization of complex materials on the GPU

    Document submitted for peer review. To be published in Computers & Graphics, ISSN 0097-8493, 37:7 (Nov. 2013), pp. 809–819. High-fidelity interactive rendering is of major importance for footwear designers, since it allows experimenting with virtual prototypes of new products rather than producing expensive physical mock-ups. This requires capturing the appearance of complex materials by resorting to image-based approaches, such as the Bidirectional Texture Function (BTF), to allow subsequent interactive visualization while still maintaining the capability to edit the materials' appearance. However, interactive global illumination rendering of compressed editable BTFs with ordinary computing resources remains to be demonstrated. In this paper we demonstrate interactive global illumination by using a GPU ray tracing engine and the Sparse Parametric Mixture Model representation of BTFs, which is particularly well suited to BTF editing. We propose a rendering pipeline and data layout which allow for interactive frame rates, and we provide a scalability analysis with respect to the scene's complexity. We also include soft shadows from area light sources and approximate global illumination with ambient occlusion by resorting to progressive refinement, which quickly converges to a high-quality image while maintaining interactive frame rates by limiting the number of rays shot per frame. Acceptable performance is also demonstrated under dynamic settings, including camera movements, changing lighting conditions, and dynamic geometry. Work partially funded by QREN project no. 13114 TOPICShoe and by National Funds through the FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) within project PEst-OE/EEI/UI0752/2011.
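The progressive-refinement strategy mentioned above, shooting only a bounded number of rays per frame and folding the samples into a running average, can be sketched as follows. The sampling function and budgets are illustrative assumptions, not the paper's actual implementation.

```python
def progressive_estimate(sample_fn, rays_per_frame=64, frames=100):
    """Progressively refine a Monte Carlo estimate (e.g. ambient occlusion).

    Each frame shoots only `rays_per_frame` rays, keeping the frame
    interactive, and folds them into a running average that converges
    to the true mean over successive frames.
    """
    total, count = 0.0, 0
    for _ in range(frames):
        for _ in range(rays_per_frame):
            total += sample_fn()   # e.g. 1.0 if the ray escapes, 0.0 if occluded
            count += 1
        yield total / count        # current estimate, displayable every frame
```

With a stochastic `sample_fn`, early frames are noisy but cheap, and the displayed estimate sharpens frame by frame without ever exceeding the ray budget.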

    Immersive Telerobotic Modular Framework using stereoscopic HMDs

    Telepresence is the term used to describe the set of technologies that enable people to feel or appear as if they were present in a location where they are not physically. Immersive telepresence is the next step: the objective is to make the operator feel immersed in a remote location, engaging as many senses as possible and using new technologies such as stereoscopic vision, panoramic vision, 3D audio, and Head Mounted Displays (HMDs). Telerobotics is a subfield of telepresence that merges it with robotics, providing the operator with the ability to control a robot remotely. In current state-of-the-art solutions there is a gap, since telerobotics has not, in general, benefited from recent developments in control and human-computer interface technology. Besides the lack of studies investing in immersive solutions such as stereoscopic vision, immersive telerobotics can also include more intuitive control capabilities, such as haptic-based controls or movement- and gesture-based controls, which feel more natural and translate more naturally into the system. In this paper we propose an alternative approach to common teleoperation methods; as an example of common solutions, the reader can think of some of the methods found in search and rescue (SAR) robots. Our main focus is to test the impact that immersive characteristics like stereoscopic vision and HMDs can bring to telepresence robots and telerobotics systems. Besides that, and since this is a new and growing field, we are also aiming at a modular framework capable of being extended with different robots, in order to test different cases and aid researchers with an extensible platform. We claim that with immersive solutions the operator of a telerobotics system will have a more intuitive perception of the remote environment and will be less prone to errors induced by a wrong perception of, and interaction with, the teleoperation of the robot. We believe that the operator's depth perception and situational awareness are significantly improved when using immersive solutions, and that performance, both in terms of operation time and in the successful identification of particular objects in remote environments, is also enhanced. We have developed a low-cost immersive telerobotic modular platform that can be extended with hardware-based Android applications on the robot side. This solution provides the possibility of using the same platform in any type of case study by simply extending it with different robots. In addition to the modular and extensible framework, the project also features three main modules of interaction, namely:
* A module that supports a head-mounted display and head tracking in the operator environment
* Streaming of stereoscopic vision through Android, with software synchronization
* A module that enables the operator to control the robot with positional tracking
On the hardware side, not only has the mobile area (e.g. smartphones, tablets, Arduino) expanded greatly in recent years, but we have also seen the rise of low-cost immersive technologies like the Oculus Rift DK2, Google Cardboard, or Leap Motion. These cost-effective hardware solutions, together with the advances in video and audio streaming provided by WebRTC technologies, achieved mostly by Google, make the development of a real-time software solution possible. Currently there is a lack of real-time software methods for stereoscopy, but the arrival of WebRTC technologies can be a game changer. We take advantage of this recent evolution in hardware and software in order to keep the platform economical and low-cost, while at the same time raising the bar in terms of performance and technical specifications for this kind of platform.
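The software synchronization of the stereoscopic stream is not specified in detail above; one plausible sketch pairs left- and right-eye frames by capture timestamp and drops frames without a close enough partner. The data layout and skew threshold below are assumptions for illustration.

```python
def pair_stereo_frames(left, right, max_skew_ms=20):
    """Pair left/right camera frames whose timestamps are close enough.

    `left` and `right` are lists of (timestamp_ms, frame) tuples in
    capture order. Frames without a partner within `max_skew_ms` are
    dropped, keeping the stereoscopic stream in sync.
    """
    pairs, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        tl, tr = left[i][0], right[j][0]
        if abs(tl - tr) <= max_skew_ms:
            pairs.append((left[i][1], right[j][1]))
            i += 1
            j += 1
        elif tl < tr:
            i += 1   # left frame too old to match, discard it
        else:
            j += 1   # right frame too old to match, discard it
    return pairs
```

Dropping unmatched frames trades a slightly lower frame rate for the consistent binocular disparity that an HMD viewer requires.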

    Interactive web-based visualization

    The visualization of large amounts of data, which cannot be easily copied for processing on a user's local machine, is not yet a fully solved problem. Remote visualization represents one possible solution approach and has long been an important research topic. Depending on the device used, modern hardware such as high-performance GPUs is sometimes not available, which is another reason to use remote visualization. Additionally, due to the growing global networking and collaboration among research groups, collaborative remote visualization solutions are becoming more important. The attractiveness of web-based remote visualization is greatly increased by the wide availability of web browsers on almost all devices; these are available today on all systems, from desktop computers to smartphones. In order to ensure interactivity, network bandwidth and latency are the biggest challenges that web-based visualization algorithms have to overcome. Despite steady improvements, available bandwidth still grows significantly more slowly than, for example, processor performance, so the impact of this bottleneck keeps increasing. For example, the visualization of large dynamic data in low-bandwidth environments can be challenging because it requires continuous data transfer. Bandwidth improvements alone, however, cannot reduce latency, because latency is also affected by factors such as the distance between server and client and network utilization. To overcome these challenges, a combination of techniques is needed to customize the individual processing steps of the visualization pipeline, from efficient data representation to hardware-accelerated rendering on the client side.
This thesis first deals with related work in the field of remote visualization, with a particular focus on interactive web-based visualization, and then presents techniques for interactive visualization in the browser using modern web standards such as WebGL and HTML5. These techniques enable the visualization of dynamic molecular data sets with more than one million atoms at interactive frame rates using GPU-based ray casting. Due to the limitations of a browser-based environment, the concrete implementation of the GPU-based ray casting had to be customized. Evaluation of the resulting performance shows that GPU-based techniques enable the interactive rendering of large data sets and achieve higher image quality compared to polygon-based techniques. In order to reduce data transfer times and network latency and to improve rendering speed, efficient approaches for data representation and transmission are used. Furthermore, this thesis introduces a GPU-based volume ray marching technique based on WebGL 2.0, which uses progressive brick-wise data transfer as well as multiple levels of detail in order to achieve interactive volume rendering of datasets stored on a server. The concepts and results presented in this thesis contribute to the further spread of interactive web-based visualization. The algorithmic and technological advances that have been achieved form a basis for the further development of interactive browser-based visualization applications. At the same time, this approach has the potential to enable future collaborative visualization in the cloud.
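The progressive brick-wise transfer with multiple levels of detail can be illustrated with a coarse-to-fine brick schedule: a complete coarse picture is streamed and shown first, and finer bricks refine it as bandwidth permits. The level layout below is an assumption for illustration, not the thesis's actual data structure.

```python
def brick_schedule(levels, bricks_per_axis_at_finest=8):
    """Order volume bricks for progressive transfer: coarsest LOD first.

    Level 0 is the coarsest (a single brick covering the whole volume);
    each finer level doubles the brick count per axis, so the client can
    render a complete low-resolution volume before detail streams in.
    """
    schedule = []
    for level in range(levels):
        n = min(2 ** level, bricks_per_axis_at_finest)
        for x in range(n):
            for y in range(n):
                for z in range(n):
                    schedule.append((level, (x, y, z)))
    return schedule
```

A practical client would additionally reorder bricks within a level by view-dependent priority, but the coarse-to-fine ordering alone already bounds time-to-first-image in low-bandwidth environments.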

    Developing an interoperable cloud-based visualization workflow for 3D archaeological heritage data. The Palenque 3D Archaeological Atlas

    In archaeology, 3D data has become ubiquitous, as researchers routinely capture high-resolution photogrammetry and LiDAR models and engage in laborious 3D analysis and reconstruction projects at every scale: artifacts, buildings, and entire sites. The raw data and processed 3D models are rarely shared, as their computational dependencies leave them unusable by other scholars. In this paper we outline a novel approach for cloud-based collaboration, visualization, analysis, contextualization, and archiving of multi-modal, giga-resolution archaeological heritage 3D data. The Palenque 3D Archaeological Atlas builds on open-source WebGL systems that efficiently interlink, merge, present, and contextualize the Big Data collected at the ancient Maya city of Palenque, Mexico, allowing researchers and stakeholders to visualize, access, share, measure, compare, annotate, and repurpose massive, complex archaeological datasets from their web browsers.

    A Common Digital Twin Platform for Education, Training and Collaboration

    The world is in transition, driven by digitalization; industrial companies and educational institutions are adopting Industry 4.0 and Education 4.0 technologies enabled by digitalization. Furthermore, digitalization and the availability of smart devices and virtual environments have evolved to produce a generation of digital natives. These digital natives, whose smart devices have surrounded them since birth, have developed a new way to process information; instead of reading literature and writing essays, the digital native generation uses search engines, discussion forums, and online video content to study and learn. The evolved learning process of the digital native generation challenges the educational and industrial sectors to create natural training, learning, and collaboration environments for digital natives. Digitalization provides the tools to overcome this challenge: extended reality and digital twins enable high-level user interfaces that are natural for digital natives and their interaction with physical devices. Simulated training and education environments enable a risk-free way of training safety aspects, programming, and controlling robots. To create a more realistic training environment, digital twins enable interfacing virtual and physical robots to train and learn on real devices utilizing the virtual environment. This thesis proposes a common digital twin platform for education, training, and collaboration. The proposed solution enables the teleoperation of physical robots from distant locations, enabling location- and time-independent training and collaboration in robotics. In addition to teleoperation, the proposed platform supports social communication, video streaming, and resource sharing for efficient collaboration and education. The proposed solution enables research collaboration in robotics by allowing collaborators to utilize each other's equipment independent of the distance between the physical locations.
Sharing resources saves time and travel costs. Social communication provides the possibility to exchange ideas and discuss research. Students and trainees can utilize the platform to learn new skills in robot programming, control, and safety. Cybersecurity is considered from the planning phase to the implementation phase; only cybersecure methods, protocols, services, and components are used to implement the presented platform. Securing the low-level communication layer of the digital twins is essential for the safe teleoperation of the robots. Cybersecurity is the key enabler of the proposed platform, and after implementation, periodic vulnerability scans and updates maintain cybersecurity. This thesis discusses solutions and methods for securing an online digital twin platform. In conclusion, the thesis presents a common digital twin platform for education, training, and collaboration. The presented solution is cybersecure and accessible using mobile devices. The proposed platform, digital twin, and extended reality user interfaces contribute to the transitions to Education 4.0 and Industry 4.0.

    Architectures for ubiquitous 3D on heterogeneous computing platforms

    Today, a wide scope for 3D graphics applications exists, including domains such as scientific visualization, 3D-enabled web pages, and entertainment. At the same time, the devices and platforms that run and display the applications are more heterogeneous than ever. Display environments range from mobile devices to desktop systems and ultimately to distributed displays that facilitate collaborative interaction. While the capability of the client devices may vary considerably, the visualization experiences running on them should be consistent. The field of application should dictate how and on what devices users access the application, not the technical requirements to realize the 3D output. The goal of this thesis is to examine the diverse challenges involved in providing consistent and scalable visualization experiences to heterogeneous computing platforms and display setups. While we could not address the myriad of possible use cases, we developed a comprehensive set of rendering architectures in the major domains of scientific and medical visualization, web-based 3D applications, and movie virtual production. To provide the required service quality, performance, and scalability for different client devices and displays, our architectures focus on the efficient utilization and combination of the available client, server, and network resources. We present innovative solutions that incorporate methods for hybrid and distributed rendering as well as means to manage data sets and stream rendering results. We establish the browser as a promising platform for accessible and portable visualization services. We collaborated with experts from the medical field and the movie industry to evaluate the usability of our technology in real-world scenarios. The presented architectures achieve wide coverage of display and rendering setups while sharing major components and concepts. Thus, they build a strong foundation for a unified system that supports a variety of use cases.

    Adaptive 3D web-based environment for heterogeneous volume objects.

    The Internet grew fast over the last decade, and interaction and visualisation became essential features online. The demand for online modelling and rendering in a real-time, adaptive, and interactive manner exceeded the growth of hardware resources, including computational power and memory. Building and accessing an instant, 3D Web-based, plugin-free platform became a must in order to generate 3D volumes. Modelling and rendering complicated heterogeneous volumes using online applications requires good Internet bandwidth and high computational power. A large number of 3D modelling tools designed to create complicated models in an interactive manner are now available online; the problem with using such tools is that the user needs to acquire a certain level of modelling knowledge. In this work, we identify the problem, introduce the theoretical background, and discuss the theory of Web-based modelling and rendering, including the client-server approach, scenario optimization by solving a constraint satisfaction problem, and complexity analysis. We address the challenges of designing, implementing, and testing an online, Web-based, instant 3D modelling and rendering environment, and we discuss some of its characteristics, including adaptivity, platform independence, interactivity, and ease of use, after presenting the theoretical part of implementing such an environment. We also introduce a platform-independent modelling and rendering environment for complicated heterogeneous volumes with colour attributes, based on a client-server architecture. The work includes analysis and implementation of different rendering approaches suitable for different kinds of users. We also discuss the performance of the proposed environment by comparing the rendering approaches. As an additional feature of our modelling system, we discuss aspects of securing model transfer between the client and the server.
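The idea of matching rendering approaches to different kinds of users and client devices can be sketched as a small rule-based selection on the client side of the client-server architecture. The capability fields and thresholds below are purely illustrative assumptions, not the environment's actual logic.

```python
def choose_renderer(client):
    """Pick a rendering approach that satisfies a client's constraints.

    `client` is a dict of hypothetical capability fields. Capable,
    well-connected clients render locally; weaker clients fall back to
    hybrid or purely server-side rendering.
    """
    if client.get("webgl") and client.get("bandwidth_mbps", 0) >= 10:
        return "client-side"      # download the volume model, render locally
    if client.get("webgl"):
        return "hybrid"           # coarse proxy locally, full detail on server
    return "server-side"          # server renders; client receives images only
```

In a fuller system this choice would be one variable of the scenario-optimization constraint satisfaction problem mentioned above, solved together with bandwidth and quality constraints.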