Ubiquitous volume rendering in the web platform
176 p. The central hypothesis of the thesis is that ubiquitous volume rendering can be achieved using WebGL. The thesis enumerates the challenges that must be met to achieve that goal. The results allow web content developers to integrate interactive volume rendering within standard HTML5 web pages: content developers only need to declare the X3D nodes that provide the rendering characteristics they desire. In contrast to systems that ship specific GPU programs, the presented architecture automatically creates the GPU code required by the WebGL graphics pipeline. This code is generated directly from the X3D nodes declared in the virtual scene, so content developers do not need GPU programming knowledge. The thesis extends previous research on web-compatible volume data structures for WebGL, hybrid surface-and-volume ray casting, progressive volume rendering, and specific problems related to the visualization of medical datasets. Finally, the thesis contributes to the X3D standard with proposals to extend and improve its volume rendering component; the proposals are at an advanced stage towards acceptance by the Web3D Consortium.
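The core idea above, that declarative scene nodes can be compiled into GPU code the author never writes, can be sketched as follows. This is an illustrative sketch only: the node names and GLSL snippets are assumptions for demonstration, not the thesis implementation or the actual X3D volume rendering API.

```python
# Hedged sketch: compiling declarative render-style nodes into a WebGL
# fragment shader, in the spirit of the architecture described above.
# Node names and GLSL snippets below are illustrative assumptions.

STYLE_SNIPPETS = {
    "OpacityMapVolumeStyle":
        "  color = texture2D(u_transferFunction, vec2(intensity, 0.5));",
    "EdgeEnhancementVolumeStyle":
        "  color.rgb = mix(u_edgeColor, color.rgb, abs(dot(normal, viewDir)));",
}

def build_fragment_shader(declared_styles):
    """Assemble a GLSL ES fragment shader from declared style nodes.

    Authors only declare node names; the GPU code is generated for them,
    mirroring the 'no GPU knowledge needed' claim of the abstract.
    """
    per_sample = "\n".join(STYLE_SNIPPETS[s] for s in declared_styles)
    return (
        "precision mediump float;\n"
        "uniform sampler2D u_transferFunction;\n"
        "uniform vec3 u_edgeColor;\n"
        "void main() {\n"
        "  // ray-casting loop and volume sampling elided for brevity\n"
        "  float intensity = 0.0;\n"
        "  vec4 color = vec4(0.0);\n"
        "  vec3 normal = vec3(0.0, 0.0, 1.0);\n"
        "  vec3 viewDir = vec3(0.0, 0.0, 1.0);\n"
        + per_sample + "\n"
        "  gl_FragColor = color;\n"
        "}\n"
    )

# Declaring one style node yields a complete shader string.
shader = build_fragment_shader(["OpacityMapVolumeStyle"])
```

The design point is that authors select behaviour by name while the generator owns all GLSL, so invalid shader combinations never reach the content developer.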
Share - Publish - Store - Preserve. Methodologies, Tools and Challenges for 3D Use in Social Sciences and Humanities
Through this White Paper, which gathers contributions from experts in 3D data as well as professionals concerned with the interoperability and sustainability of 3D research data, the PARTHENOS project aims to highlight some of the current issues they face, with discipline-specific points where relevant, and potential practices and methodologies to address these issues. During the workshop, several tools addressing these issues were introduced and confronted with the participants' experiences; this White Paper now goes further by also integrating participants' feedback and suggestions for potential improvements. Therefore, even if the focus is on specific tools, the main goal is to contribute to the development of standardized good practices for the sharing, publication, storage and long-term preservation of 3D data.
Management and Visualisation of Non-linear History of Polygonal 3D Models
The research presented in this thesis concerns the problems of maintenance and revision control of large-scale three-dimensional (3D) models over the Internet. As the models grow in size and the authoring tools grow in complexity, standard approaches to collaborative asset development become impractical. The prevalent paradigm of sharing files on a file system poses serious risks with regard to, among other things, ensuring the consistency and concurrency of multi-user 3D editing. Although modifications might be tracked manually using naming conventions, or automatically in a version control system (VCS), understanding the provenance of a large 3D dataset is hard because revision metadata is not associated with the underlying scene structures. Some tools and protocols enable seamless synchronisation of file and directory changes across remote locations. However, existing web-based technologies are not yet fully exploiting modern design patterns for access to and management of shared resources online. Therefore, four distinct but highly interconnected conceptual tools are explored. The first is the organisation of 3D assets within recent document-oriented NoSQL databases. These "schemaless" databases, unlike their relational counterparts, do not represent data in rigid table structures. Instead, they rely on polymorphic documents composed of key-value pairs that are much better suited to the diverse nature of 3D assets. Hence, a domain-specific non-linear revision control system, 3D Repo, is built around a NoSQL database to enable asynchronous editing similar to traditional VCSs. The second concept is that of visual 3D differencing and merging. The accompanying 3D Diff tool supports interactive conflict resolution at the level of scene-graph nodes, which are de facto the delta changes stored in the repository. The third is the utilisation of the HyperText Transfer Protocol (HTTP) for the purposes of 3D data management.
The XML3DRepo daemon application exposes the contents of the repository and the version control logic in a Representational State Transfer (REST) style of architecture. At the same time, it demonstrates the effects of various 3D encoding strategies on file sizes and download times in modern web browsers. The fourth and final concept is the reverse-engineering of an editing history. Even if the models are being version controlled, the extracted provenance is limited to additions, deletions and modifications. The 3D Timeline tool, therefore, infers a plausible history of common modelling operations such as duplications, transformations, etc. Given a collection of 3D models, it estimates a part-based correspondence and visualises it in a temporal flow. The prototype tools developed as part of the research were evaluated in pilot user studies which suggest that they are usable by end users and well suited to their respective tasks. Together, the results constitute a novel framework that demonstrates the feasibility of domain-specific 3D version control.
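The "polymorphic documents of key-value pairs" idea above can be sketched in a few lines. The field names below are assumptions for illustration, not the actual 3D Repo schema: the point is that each scene-graph node carries its own revision metadata, so provenance and node-level diffing fall out naturally.

```python
# Illustrative sketch (field names are assumed, not the real 3D Repo schema):
# each scene-graph node revision is a schemaless key-value document.

import datetime
import uuid

def make_node_document(node_type, fields, author, parents=()):
    """Build a polymorphic document for one scene-graph node revision."""
    doc = {
        "_id": str(uuid.uuid4()),
        "type": node_type,          # e.g. "mesh", "transform", "material"
        "parents": list(parents),   # scene-graph edges, not table joins
        "author": author,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    doc.update(fields)              # heterogeneous, per-type payload
    return doc

def node_diff(old, new, ignore=("_id", "timestamp", "author")):
    """Node-level delta: the unit a visual 3D diff/merge tool would resolve."""
    keys = set(old) | set(new)
    return {k: (old.get(k), new.get(k))
            for k in keys
            if k not in ignore and old.get(k) != new.get(k)}

# Two revisions of one material node by two users, then their conflict set.
a = make_node_document("material", {"diffuse": [1, 0, 0]}, "alice")
b = make_node_document("material", {"diffuse": [0, 1, 0]}, "bob")
delta = node_diff(a, b)
```

Because the diff operates on whole documents rather than text lines, a merge tool can present "diffuse changed from red to green" instead of an opaque binary conflict.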
Synergistic Visualization And Quantitative Analysis Of Volumetric Medical Images
The medical diagnosis process starts with an interview with the patient and continues with the physical exam. In practice, the medical professional may require additional screenings to diagnose precisely. Medical imaging is one of the most frequently used non-invasive screening methods to gain insight into the human body. Medical imaging is not only essential for accurate diagnosis, but it can also enable early prevention. Medical data visualization refers to projecting medical data into a human-understandable format on mediums such as 2D or head-mounted displays, without introducing any interpretation that may lead to clinical intervention. In contrast to medical visualization, quantification refers to extracting the information in a medical scan to enable clinicians to make fast and accurate decisions. Despite the extraordinary progress in both medical visualization and quantitative radiology, efforts to improve these two complementary fields are often made independently, and their synergistic combination is under-studied. Existing image-based software platforms mostly fail to be adopted in routine clinical use due to the lack of a unified strategy that guides clinicians both visually and quantitatively. Hence, there is an urgent need for a bridge connecting medical visualization and automatic quantification algorithms in the same software platform. In this thesis, we aim to fill this research gap by visualizing medical images interactively from anywhere and by performing fast, accurate and fully automatic quantification of the medical imaging data. To this end, we propose several innovative and novel methods.
Specifically, we solve the following sub-problems of the ultimate goal: (1) direct web-based out-of-core volume rendering; (2) robust, accurate and efficient learning-based algorithms to segment highly pathological medical data; (3) automatic landmarking for aiding diagnosis and surgical planning; and (4) novel artificial intelligence algorithms to determine the data that is sufficient and necessary for large-scale problems.
3D photogrammetric data modeling and optimization for multipurpose analysis and representation of Cultural Heritage assets
This research deals with the issues concerning the processing, management, representation and dissemination of the large amount of 3D data that the modern geomatic techniques of 3D metric survey make achievable and storable today. In particular, this thesis focuses on the optimization process applied to 3D photogrammetric data of Cultural Heritage assets.
Modern geomatic techniques enable the acquisition and storage of large amounts of data, with high metric and radiometric accuracy and precision, also in the very close range field, and the processing of very detailed 3D textured models. Nowadays, the photogrammetric pipeline has well-established potentialities and is considered one of the principal techniques for producing detailed 3D textured models at low cost.
The potential offered by high-resolution textured 3D models is today well known, and such representations are a powerful tool for many multidisciplinary purposes, at different scales and resolutions, from documentation, conservation and restoration to visualization and education. For example, their sub-millimetric precision makes them suitable for scientific studies of geometry and materials (e.g. for structural and static tests, for planning restoration activities or for historical sources); their high fidelity to the real object and their navigability make them optimal for web-based visualization and dissemination applications. Thanks to improvements in new visualization standards, they can easily be used as a visualization interface linking different kinds of information in a highly intuitive way. Furthermore, many museums today look for more interactive exhibitions that may heighten visitors' emotions, and many recent applications make use of 3D content (e.g. in virtual or augmented reality applications and through virtual museums).
What all of these applications have in common is the difficulty of managing the large amount of data that has to be represented and navigated. Indeed, reality-based models have very heavy file sizes (up to tens of GB), which makes them difficult to handle on common and portable devices, publish on the internet or manage in real-time applications. Even though recent advances produce ever more sophisticated and capable hardware and internet standards, empowering the ability to easily handle, visualize and share such content, other research aims to define a common pipeline for the generation and optimization of 3D models with a reduced number of polygons that are nevertheless able to satisfy demanding radiometric and geometric requirements.
This thesis is set in this scenario and focuses on the 3D modeling process of photogrammetric data aimed at easy sharing and visualization. In particular, this research tested a 3D model optimization process which aims at the generation of Low Poly models, with very low file sizes, processed starting from the data of High Poly ones, yet offering a level of detail comparable to the original models. To do this, several tools borrowed from the game industry and game engines have been used.
For this test, three case studies were chosen: a modern sculpture by a contemporary Italian artist, a Roman marble statue preserved in the Civic Archaeological Museum of Torino, and the frieze of the Augustus arch preserved in the city of Susa (Piedmont, Italy). All the test cases were surveyed by means of a close-range photogrammetric acquisition, and three highly detailed 3D models were generated through a Structure from Motion and image matching pipeline. On the final High Poly models, different optimization and decimation tools were tested with the aim of evaluating the quality of the information that can be extracted from the final optimized models in comparison to that of the original High Poly ones. This study showed how tools borrowed from Computer Graphics offer great potential also in the Cultural Heritage field. This approach may meet the needs of multipurpose and multiscale studies, using different levels of optimization, and the procedure could be applied to different kinds of objects with a variety of sizes and shapes, as well as to multiscale and multisensor data, such as buildings, architectural complexes, data from UAV surveys and so on.
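The evaluation step described above, comparing what an optimized model preserves of the original High Poly one, can be sketched with a sampled nearest-neighbour deviation measure. This is a simplified stand-in for dedicated cloud-to-mesh comparison tools, not the thesis's exact workflow, and the point sets are illustrative.

```python
# Hedged sketch: estimating geometric deviation between a High Poly model
# (sampled as points) and its optimized Low Poly counterpart, via a simple
# nearest-neighbour distance. A simplified stand-in for cloud-to-mesh tools.

import math

def nearest_distance(p, vertices):
    """Distance from point p to the closest vertex in the other model."""
    return min(math.dist(p, v) for v in vertices)

def deviation_stats(high_poly_pts, low_poly_pts):
    """Mean and max deviation of high-poly samples from the low-poly model.

    The max is a (crude) Hausdorff-style bound on the detail lost by
    decimation; the mean summarizes overall fidelity.
    """
    d = [nearest_distance(p, low_poly_pts) for p in high_poly_pts]
    return sum(d) / len(d), max(d)

# Toy example: a two-point "high poly" set against a one-vertex "low poly" set.
high = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
low = [(0.0, 0.0, 0.0)]
mean_dev, max_dev = deviation_stats(high, low)
```

In practice one would sample the mesh surfaces densely and use a spatial index (k-d tree) instead of the brute-force minimum, but the metric being computed is the same.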
Automated Indoor Mapping with Point Clouds
This dissertation examines the current state of automated indoor mapping and modeling using point cloud data produced by close range remote sensing systems. The first part looks at reality capture techniques that convert the physical form of indoor spaces into point clouds of millions of measured points, each with an (x,y,z) coordinate value. The second part examines methods for teasing out geometries from these point clouds -- often complicated by noise and voids -- and converting them into 3D geometric models. The final part examines techniques for merging the coordinate reference systems of these indoor maps and models with those of the outdoor world, resulting in a seamless representation of space. Lessons learned in this study revealed that theories, techniques, and practices in indoor mapping remain relatively elementary compared to those for the outdoors, yet they also present significant opportunities for future research propelled by emerging developments in remote sensing and a growing demand for indoor maps
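A standard technique for the geometry-extraction step described above, recovering planar structures such as walls and floors from noisy point clouds, is RANSAC plane fitting. The sketch below shows the generic method under illustrative data; it is not this dissertation's specific pipeline.

```python
# Illustrative sketch: RANSAC plane fitting, a common way to extract wall
# and floor geometry from noisy indoor point clouds (generic method, not
# the dissertation's specific pipeline).

import random

def plane_from_points(p1, p2, p3):
    """Plane (unit normal n, offset d) through three points, or None if collinear."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:
        return None
    n = [c / norm for c in n]
    d = -sum(n[i] * p1[i] for i in range(3))
    return n, d

def ransac_plane(points, iters=200, tol=0.02, seed=0):
    """Return the indices of the largest inlier set over random plane hypotheses."""
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        plane = plane_from_points(*rng.sample(points, 3))
        if plane is None:
            continue  # degenerate (collinear) sample
        n, d = plane
        inliers = [i for i, p in enumerate(points)
                   if abs(sum(n[j] * p[j] for j in range(3)) + d) < tol]
        if len(inliers) > len(best):
            best = inliers
    return best

# Toy scan: a 3x3 grid on the floor plane z=0, plus one noisy point above it.
floor = [(x / 10.0, y / 10.0, 0.0) for x in range(3) for y in range(3)]
points = floor + [(0.5, 0.5, 1.0)]
inliers = ransac_plane(points)
```

Real pipelines iterate this: detect the dominant plane, remove its inliers, and repeat, which is how a room decomposes into floor, ceiling and walls despite noise and voids.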
Remote sensing image fusion on 3D scenarios: A review of applications for agriculture and forestry
Three-dimensional (3D) image mapping of real-world scenarios has great potential to provide the user with a more accurate scene understanding. This will enable, among other things, unsupervised automatic sampling of meaningful material classes from the target area for adaptive semi-supervised deep learning techniques. This path is already being taken by recent and fast-developing research in computational fields; however, some issues related to computationally expensive processes in the integration of multi-source sensing data remain.
Recent studies focused on Earth observation and characterization are enhanced by the proliferation of Unmanned
Aerial Vehicles (UAV) and sensors able to capture massive datasets with a high spatial resolution. In this scope,
many approaches have been presented for 3D modeling, remote sensing, image processing and mapping, and
multi-source data fusion. This survey aims to present a summary of previous work according to the most relevant
contributions for the reconstruction and analysis of 3D models of real scenarios using multispectral, thermal and
hyperspectral imagery. Surveyed applications are focused on agriculture and forestry since these fields
concentrate most applications and are widely studied. Many challenges are currently being overcome by recent
methods based on the reconstruction of multi-sensorial 3D scenarios. In parallel, the processing of large image
datasets has recently been accelerated by General-Purpose Graphics Processing Unit (GPGPU) approaches that
are also summarized in this work. Finally, as a conclusion, some open issues and future research directions are
presented.
Funding: European Commission; Junta de Andalucia; Instituto de Estudios Giennesses; Spanish Government; Portuguese Foundation for Science and Technology; DATI-Digital Agriculture Technologies. Grants: 1381202-GEU; PYC20-RE-005-UJA; IEG-2021; UIDB/04033/2020; FPU19/0010.
A novel method of cadaveric data acquisition from a dissection of the male lower limb using the Perceptron ScanWorks® V5 scanner
Introduction: Under the current pressures of an expanding medical curriculum, the time allocated to anatomical training in medical education has been greatly reduced, resulting in an increasing number of doctors qualifying from medical school with an inadequate, and arguably unsafe, level of anatomical understanding. Given the limited time now available for cadaveric dissection in medical training, future rectification of these deficits is becoming heavily dependent on supplementation from virtual anatomical training tools. In light of this, many attempts have been made to acquire cadaveric data for the creation of realistic virtual specimens. Until now, however, the educational value of these training tools has been heavily scrutinised, with many sharing the view that they are oversimplified and anatomically inaccurate.
The main problems associated with the usability of pre-existing datasets arise primarily as a result of the methodology used to acquire their cadaveric data. Projects in this field have previously approached the task of cadaveric data acquisition by creating comprehensive libraries of anatomical cross-sections, from which three-dimensional processing can be technically difficult and not always successful for the reconstruction of fine or complex anatomical structures.
Aim: The aim of this study therefore was to approach cadaveric data acquisition, for the purpose of creating a digital cadaveric specimen, in an unconventional manner, by obtaining three-dimensional data directly from cadaveric tissues with a Perceptron ScanWorksV5 non-contact laser scanner.
Methods: To do this, a carefully planned dissection of the lower limb was performed on a 68-year-old male cadaver, and laser scanning of all clinically relevant structures was undertaken at sequential stages throughout. In addition, colour and texture information was collected from the cadaveric tissues by high-resolution digital photography.
Results: A comprehensive three-dimensional dataset was acquired from all clinically relevant anatomy of the lower limb as a result of the methodology used in this study. Data was obtained at extremely high point-to-point resolutions, with a measurement accuracy of 24 μm (2σ).
Discussion: By obtaining cadaveric data in this way, the problems associated with the three-dimensional processing of conventional cross-sectional data, such as image segmentation, are largely overcome and fine anatomical details can be accurately documented with high precision. This data can be processed further to create an accurate and realistic virtual reconstruction of the lower limb for both three-dimensional anatomical training and surgical rehearsal in the future.
Conclusion: Whilst the value of cross-sectional datasets in their own right should not be disputed, the methodology used for cadaveric data acquisition in this study has proved very successful in collecting three-dimensional data directly from the specimen, and could be used to acquire detailed datasets for the reconstruction of other complex body regions for virtual anatomical training in the future.