43 research outputs found

    A Web3D Enabled Information Integration Framework for Facility Management

    Full text link
    Managing capital oil and gas and civil engineering facilities requires a large amount of heterogeneous information that is generated by different project stakeholders across the facility lifecycle phases and is stored in various databases and technical documents. The amount of information reaches its peak during the commissioning and handover phases, when the project is handed over to the operator. The operational phase of facilities spans multiple decades, and the way facilities are used and maintained has a huge impact on costs, environment, productivity, and health and safety. Thus, the client and the operator bear most of the additional costs associated with incomplete, incorrect or not immediately usable information. Web applications can provide quick and convenient access to information regardless of user location. However, the integration and delivery of engineering information, including 3D content, over the Web is still in its infancy and is affected by numerous technical (i.e. data and tools) and procedural (i.e. process and people) challenges. This paper addresses the technical issues and proposes a Web3D-enabled information integration framework that delivers engineering information together with 3D content without any plug-ins. In the proposed framework, a class library defines the engineering data requirements and a semi-structured database provides the means to integrate heterogeneous technical asset information. The framework also separates the 3D model content into fragments, stores them together with the digital assets, and delivers them to the client browser on demand. Such a framework partially alleviates the current limitations of JavaScript-based 3D content delivery, such as application speed and latency. Hence, the proposed framework is particularly valuable to petroleum and civil engineering companies working with large amounts of data.
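    As a concrete illustration of the fragment-based delivery described above, the sketch below shows how a browser client might fetch only the model fragments the current view needs. It is a minimal sketch: the /assets/:id/fragments endpoint, the field names and the visibility test are assumptions for illustration, not the paper's actual API.

```typescript
// Minimal sketch of on-demand 3D fragment delivery. The endpoint and
// data shapes are hypothetical; they stand in for the semi-structured
// store of model fragments described in the abstract.
interface FragmentInfo {
  id: string;              // fragment identifier
  url: string;             // where the geometry payload lives
  boundingRadius: number;  // coarse bound used by the visibility test
}

async function loadVisibleFragments(
  assetId: string,
  isVisible: (f: FragmentInfo) => boolean
): Promise<ArrayBuffer[]> {
  // 1. Fetch the lightweight fragment index first.
  const index: FragmentInfo[] = await (
    await fetch(`/assets/${assetId}/fragments`)
  ).json();

  // 2. Download only the fragments the current view actually needs,
  //    instead of the whole monolithic model.
  const wanted = index.filter(isVisible);
  return Promise.all(
    wanted.map(async (f) => (await fetch(f.url)).arrayBuffer())
  );
}
```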

    Adaptivity of 3D web content in web-based virtual museums: a quality of service and quality of experience perspective

    Get PDF
    The 3D Web emerged as an agglomeration of technologies that brought the third dimension to the World Wide Web. Its forms span from systems with limited 3D capabilities to complete and complex Web-Based Virtual Worlds. The advent of the 3D Web provided great opportunities to museums by giving them an innovative medium to disseminate collections' information and associated interpretations in the form of digital artefacts and virtual reconstructions, thus revolutionising cultural heritage curation, preservation and dissemination and reaching a wider audience. This audience consumes 3D Web material on a myriad of devices (mobile devices, tablets and personal computers) and network regimes (WiFi, 4G, 3G, etc.). Choreographing and presenting 3D Web components across all these heterogeneous platforms and network regimes presents a significant challenge yet to be overcome. The challenge is to achieve a good user Quality of Experience (QoE) across all these platforms, which means that different levels of media fidelity may be appropriate; servers hosting those media types therefore need to adapt to the capabilities of a wide range of networks and devices. To achieve this, the research contributes the design and implementation of Hannibal, an adaptive QoS- and QoE-aware engine that allows Web-Based Virtual Museums to deliver the best possible user experience across those platforms. In order to ensure effective adaptivity of 3D content, this research furthers the understanding of the 3D Web in terms of Quality of Service (QoS), through empirical investigations of how 3D Web components perform and what their bottlenecks are, and in terms of QoE, through studies of the subjective perception of fidelity of 3D Digital Heritage artefacts. The results of these experiments led to the design and implementation of Hannibal.
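    To illustrate the kind of adaptation such an engine performs, here is a minimal sketch of QoS-aware fidelity selection. The type names, capability tiers and thresholds are invented for the example; Hannibal's actual heuristics come from the empirical QoS and QoE studies described above.

```typescript
// Illustrative sketch of QoS/QoE-aware fidelity selection; all names
// and thresholds are assumptions, not Hannibal's real implementation.
type Fidelity = "low" | "medium" | "high";

interface ClientProfile {
  downlinkMbps: number; // measured or reported network throughput
  gpuTier: 1 | 2 | 3;   // coarse device capability class
}

function pickFidelity(p: ClientProfile): Fidelity {
  // Serve the richest mesh/texture variant the client can consume
  // without hurting responsiveness (a proxy for good QoE).
  if (p.downlinkMbps > 20 && p.gpuTier === 3) return "high";
  if (p.downlinkMbps > 5 && p.gpuTier >= 2) return "medium";
  return "low"; // e.g. a 3G phone: decimated mesh, compressed textures
}
```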

    Web Cube: A New Model for 3-D Web Browsing Based on Hand Gesture Interaction

    Get PDF
    3-D web browsing is a promising trend for interaction with web content. However, it remains elusive, caught between virtual reality applications on the one side and conventional web browsing on the other. In this research we propose a new model for 3-D web browsing that combines features of virtual reality technology with those of conventional browsing in order to provide an enhanced interactive user experience with web content. The new model is based on representing information content elements in 3-D perspective and organizing them inside a 3-D container that we call a “Web Cube”. Furthermore, the model defines appropriate interaction mechanisms based on hand gestures. The model was evaluated experimentally for efficiency and with a questionnaire for user satisfaction.
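    As a rough sketch of the gesture-based interaction, the snippet below maps a tracked hand swipe to a rotation of the cube so that each face can present a content panel. The gesture event shape and the sensitivity constant are assumptions for illustration, not taken from the paper.

```typescript
// Hypothetical mapping from a hand-swipe gesture to Web Cube rotation.
interface SwipeGesture {
  dx: number; // horizontal hand displacement, normalized to [-1, 1]
  dy: number; // vertical hand displacement, normalized to [-1, 1]
}

interface CubeOrientation { yaw: number; pitch: number; }

const SENSITIVITY = Math.PI / 2; // a full swipe turns one face (90 degrees)

function rotateCube(o: CubeOrientation, g: SwipeGesture): CubeOrientation {
  // Horizontal swipes spin the cube toward a neighbouring face; vertical
  // swipes tilt it, so every face can host a distinct content element.
  return {
    yaw: o.yaw + g.dx * SENSITIVITY,
    pitch: o.pitch + g.dy * SENSITIVITY,
  };
}
```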

    Management and Visualisation of Non-linear History of Polygonal 3D Models

    Get PDF
    The research presented in this thesis concerns the problems of maintenance and revision control of large-scale three-dimensional (3D) models over the Internet. As the models grow in size and the authoring tools grow in complexity, standard approaches to collaborative asset development become impractical. The prevalent paradigm of sharing files on a file system poses serious risks with regard to, among other things, the consistency and concurrency of multi-user 3D editing. Although modifications might be tracked manually using naming conventions, or automatically in a version control system (VCS), understanding the provenance of a large 3D dataset is hard because revision metadata is not associated with the underlying scene structures. Some tools and protocols enable seamless synchronisation of file and directory changes across remote locations. However, the existing web-based technologies do not yet fully exploit the modern design patterns for access to and management of shared resources online. Therefore, four distinct but highly interconnected conceptual tools are explored. The first is the organisation of 3D assets within recent document-oriented NoSQL databases. These "schemaless" databases, unlike their relational counterparts, do not represent data in rigid table structures. Instead, they rely on polymorphic documents composed of key-value pairs that are much better suited to the diverse nature of 3D assets. Hence, a domain-specific non-linear revision control system, 3D Repo, is built around a NoSQL database to enable asynchronous editing similar to traditional VCSs. The second concept is that of visual 3D differencing and merging. The accompanying 3D Diff tool supports interactive conflict resolution at the level of scene graph nodes, which are de facto the delta changes stored in the repository. The third is the utilisation of the HyperText Transfer Protocol (HTTP) for the purposes of 3D data management. The XML3DRepo daemon application exposes the contents of the repository and the version control logic in a Representational State Transfer (REST) style of architecture. At the same time, it manifests the effects of various 3D encoding strategies on file sizes and download times in modern web browsers. The fourth and final concept is the reverse-engineering of an editing history. Even if the models are being version controlled, the extracted provenance is limited to additions, deletions and modifications. The 3D Timeline tool therefore infers a plausible history of common modelling operations such as duplications, transformations, etc. Given a collection of 3D models, it estimates a part-based correspondence and visualises it in a temporal flow. The prototype tools developed as part of the research were evaluated in pilot user studies that suggest they are usable by the end users and well suited to their respective tasks. Together, the results constitute a novel framework that demonstrates the feasibility of domain-specific 3D version control.
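    To make the first concept concrete, the sketch below shows one plausible document shape for version-controlled scene graph nodes, and how a node-level diff falls out of it. The field names are assumptions for illustration; they are not 3D Repo's actual schema.

```typescript
// Hypothetical document shape for a scene-graph node in a
// document-oriented (NoSQL) store, carrying its own revision metadata.
interface SceneNodeDocument {
  _id: string;        // stable node identity across revisions
  revision: string;   // revision this version of the node belongs to
  parents: string[];  // parent revisions (enables non-linear history)
  author: string;
  timestamp: number;
  type: "mesh" | "transform" | "material";
  payload: Record<string, unknown>; // vertices, matrices, etc.
}

// A diff between two revisions is then simply the set of nodes whose
// stored revision differs -- the granularity at which a tool like
// 3D Diff can resolve conflicts interactively.
function changedNodes(
  a: SceneNodeDocument[],
  b: SceneNodeDocument[]
): SceneNodeDocument[] {
  const seen = new Map(a.map((n) => [n._id, n.revision] as [string, string]));
  return b.filter((n) => seen.get(n._id) !== n.revision);
}
```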

    The delta radiance field

    Get PDF
    The wide availability of mobile devices capable of computing high-fidelity graphics in real time has sparked a renewed interest in the development and research of Augmented Reality applications. Within the large spectrum of mixed real and virtual elements, one specific area is dedicated to producing realistic augmentations with the aim of presenting virtual copies of real existing objects or soon-to-be-produced products. Surprisingly though, the current state of this area leaves much to be desired: augmented objects in current systems are often presented without any reconstructed lighting whatsoever and therefore give the impression of being glued over a camera image rather than augmenting reality. In light of the advances in the movie industry, which has handled cases of mixed realities from one extreme end to the other, it is legitimate to ask why such advances have not fully carried over to Augmented Reality simulations as well. Generally understood to be real-time applications which reconstruct the spatial relation of real-world elements and virtual objects, Augmented Reality has to deal with several uncertainties. Among them, unknown illumination and real scene conditions are the most important. Any reconstruction of real-world properties obtained in an ad-hoc manner must likewise be incorporated, in an ad-hoc fashion, into an algorithm responsible for shading virtual objects and transferring virtual light to real surfaces. The immersiveness of an Augmented Reality simulation is, next to its realism and accuracy, primarily dependent on its responsiveness: any computation affecting the final image must be performed in real time. This condition rules out many of the methods used for movie production. The remaining real-time options face three problems: the shading of virtual surfaces under real natural illumination, the relighting of real surfaces according to the change in illumination due to the introduction of a new object into the scene, and the believable global interaction of real and virtual light. This dissertation presents contributions to answer the problems at hand. Current state-of-the-art methods build on Differential Rendering techniques to fuse global illumination algorithms into AR environments. This simple approach has a computationally costly downside, which limits the options for believable light transfer even further. This dissertation explores new shading and relighting algorithms built on a mathematical foundation that replaces Differential Rendering. The result not only presents a more efficient competitor to the current state of the art in global illumination relighting, but also advances the field with the ability to simulate effects which have not been demonstrated by contemporary publications until now.
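    For context, the Differential Rendering baseline that the dissertation sets out to replace is commonly written as the following composite (a standard textbook formulation, not quoted from the thesis itself):

```latex
% Classic differential rendering composite (Debevec-style):
%   L_b      : captured background radiance (the camera image)
%   L_obj    : rendering of the modelled local scene WITH the virtual object
%   L_noobj  : rendering of the same scene WITHOUT it
% The difference term is the "delta" that carries virtual shadows and
% interreflections onto real surfaces -- computing both renderings per
% frame is what makes the approach computationally costly.
\[
  L_{\mathrm{final}} = L_b + \left( L_{\mathrm{obj}} - L_{\mathrm{noobj}} \right)
\]
```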

    ImMApp: An immersive database of sound art

    Full text link
    The ImMApp (Immersive Mapping Application) thesis addresses contemporary and historical sound art from a position informed by, on one hand, post-structural critical theory, and on the other, a practice-based exploration of contemporary digital technologies (MySQL, XML, XSLT, X3D). It proposes a critical ontological schema derived from Michel Foucault's Archaeology of Knowledge (1972) and applies this to pre-existing information resources dealing with sound art. First, an analysis of print-based discourses (Sound by Artists, Lander and Lexier (1990); Noise, Water, Meat, Kahn (2001); and Background Noise: Perspectives on Sound Art, LaBelle (2006)) is carried out according to Foucauldian notions of genealogy, subject positions, the statement, institutional affordances and the productive nature of discursive formation. The discursive field (the archive) presented by these major canonical texts is then contrasted with a formulation derived from Gilles Deleuze and Félix Guattari: that of a 'minor' history of sound art practices. This is then extended by media theory (McLuhan, Kittler, Manovich) into a critique of two digital sound art resources: The Australian Sound Design Project, Bandt and Paine (2005), and soundtoys.net, Stanza (1998). The divergences between the two forms of information technologies (print vs. digital) are discussed. The means by which such digitised methodologies may enhance Foucauldian discourse analysis points towards the two practice-based elements of the thesis. Surface, the first iterative part, is a web-browser-based database built on an Apache/MySQL/XML architecture. It is the most extensive mapping of sound art undertaken to date and extends the theoretical framework discussed above into the digital domain. Immersion, the second part, is a re-presentation of this material in an immersive digital environment, following the transformation of the source material via XSLT into X3D. Immersion is a real-time, large-format video, surround sound (5.1) installation, and the thesis concludes with a discussion of how this outcome has articulated Foucauldian archaeological method and unframed pre-existing notions of the nature of sound art.
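    The XSLT-to-X3D step mentioned above can be illustrated with the standard browser XSLTProcessor API; a minimal sketch, with placeholder file locations, might look like this:

```typescript
// Sketch of transforming database-derived XML into an X3D scene using
// the browser's built-in XSLT support. URLs are placeholders.
async function transformToX3D(
  xmlUrl: string,
  xslUrl: string
): Promise<Document> {
  const [xmlText, xslText] = await Promise.all([
    fetch(xmlUrl).then((r) => r.text()),
    fetch(xslUrl).then((r) => r.text()),
  ]);

  const parser = new DOMParser();
  const xmlDoc = parser.parseFromString(xmlText, "application/xml");
  const xslDoc = parser.parseFromString(xslText, "application/xml");

  // Apply the stylesheet: XML records come out as an X3D scene
  // description ready for the immersive presentation layer.
  const proc = new XSLTProcessor();
  proc.importStylesheet(xslDoc);
  return proc.transformToDocument(xmlDoc);
}
```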

    Multiresolution Techniques for Real-Time Visualization of Urban Environments and Terrains

    Get PDF
    In recent times we are witnessing a steep increase in the availability of data coming from real-life environments. Nowadays, virtually everyone connected to the Internet may have instant access to a tremendous amount of data coming from satellite elevation maps, airborne time-of-flight scanners, digital cameras, street-level photographs and even cadastral maps. As for other, more traditional types of media such as pictures and videos, users of digital exploration software expect commodity hardware to exhibit good performance for interactive purposes, regardless of the dataset size. In this thesis we propose novel solutions to the problem of rendering large terrain and urban models on commodity platforms, both for local and remote exploration. Our solutions build on the concept of multiresolution representation, where alternative representations of the same data with different accuracy are used to selectively distribute the computational power, and consequently the visual accuracy, where it is most needed, based on the user's point of view. In particular, we will introduce an efficient multiresolution data compression technique for planar and spherical surfaces applied to terrain datasets, which is able to handle huge amounts of information at a planetary scale. We will also describe a novel data structure for compact storage and rendering of urban entities such as buildings, allowing real-time exploration of cityscapes from a remote online repository. Moreover, we will show how recent technologies can be exploited to transparently integrate virtual exploration and general computer graphics techniques with web applications.
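    The core mechanism behind such multiresolution rendering can be sketched as viewpoint-driven level-of-detail selection: refine a node only while its approximation error, projected to the screen, exceeds a pixel budget. The data structure and constants below are illustrative assumptions, not the thesis's actual algorithm.

```typescript
// Minimal sketch of screen-space-error LOD selection over a
// multiresolution hierarchy (e.g. a quadtree of terrain patches).
interface LodNode {
  geometricError: number; // world-space error of this approximation
  distance: number;       // current distance from the viewpoint
  children: LodNode[];    // finer representations of the same region
}

const PIXEL_ERROR_BUDGET = 2.0; // max tolerated screen-space error

function selectLod(node: LodNode, focalPx: number, out: LodNode[]): void {
  // Project the world-space error to the screen: distant terrain
  // tolerates a coarser representation for the same pixel error.
  const screenError = (node.geometricError * focalPx) / node.distance;
  if (screenError <= PIXEL_ERROR_BUDGET || node.children.length === 0) {
    out.push(node); // accurate enough (or a leaf): render this level
  } else {
    for (const child of node.children) selectLod(child, focalPx, out);
  }
}
```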

    Fifth Biennial Report: June 1999 - August 2001

    No full text

    On Data-driven systems analyzing, supporting and enhancing users’ interaction and experience

    Get PDF
    Doctoral thesis in English with an extended summary in Spanish. The research areas of Human-Computer Interaction and Software Architectures have traditionally been treated separately, but in the literature many authors have made efforts to merge them to build better software systems. One of the common gaps between software engineering and usability is the lack of strategies for applying usability principles in the initial design of software architectures. Including these principles from the early phases of software design would help avoid later architectural changes to accommodate user experience requirements. The combination of both fields (software architectures and Human-Computer Interaction) would contribute to building better interactive software that includes the best of both system-centered and user-centered designs. In that combination, the software architectures should enclose the fundamental structure and ideas of the system in order to offer the desired quality based on sound design decisions. Moreover, the information kept within a system is an opportunity to extract knowledge about the system itself, its components, the software included, the users and the interaction occurring inside. The knowledge gained from the information generated in a software environment can be used to improve the system itself, its software, the users' experience and the results. The combination of the areas of Knowledge Discovery and Human-Computer Interaction therefore offers ideal conditions for addressing Human-Computer-Interaction-related challenges: Human-Computer Interaction focuses on human intelligence, Knowledge Discovery on computational intelligence, and the combination of both can augment human intelligence with machine intelligence to discover new insights in a world crowded with data. This Ph.D. Thesis deals with these kinds of challenges: how approaches like data-driven software architectures (using Knowledge Discovery techniques) can help to improve the users' interaction and experience within an interactive system. Specifically, it deals with how to improve the human-computer interaction processes of different kinds of stakeholders in order to improve aspects such as the user experience or the ease of accomplishing a specific task. Several research actions and experiments support this investigation. These research actions included a systematic review and mapping of the literature aimed at finding how software architectures in the literature have been used to support, analyze or enhance human-computer interaction. They also included work on four different research scenarios that present common challenges in the Human-Computer Interaction knowledge area. The case studies that fit these scenarios were chosen based on the Human-Computer Interaction challenges they present and on their accessibility to the authors. The four case studies were: an educational laboratory virtual world; a Massive Open Online Course and the social networks where the students discuss and learn; a system that includes very large web forms; and an environment where programmers develop code in the context of quantum computing. The development of these experiences involved the review of more than 2700 papers (in the literature review phase alone), the analysis of the interaction of 6000 users in four different contexts, and the analysis of 500,000 quantum computing programs.
As outcomes of these experiences, solutions are presented regarding the minimal software artifacts to include in software architectures, the behavior they should exhibit, the features desired in the extended software architecture, the analytic workflows and approaches to use, and the different kinds of feedback needed to reinforce the users' interaction and experience. The results achieved led to the conclusion that, although this is not yet standard practice in the literature, software environments should embrace Knowledge Discovery and data-driven principles to analyze and respond appropriately to users' needs and to improve or support the interaction. To adopt Knowledge Discovery and data-driven principles, software environments need to extend their software architectures to also cover the challenges related to Human-Computer Interaction. Finally, to tackle the current challenges related to users' interaction and experience, and aiming to automate the software response to users' actions, desires and behaviors, interactive systems should also incorporate intelligent behaviors by embracing Artificial Intelligence procedures and techniques.
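    A minimal sketch of the data-driven principle is an architecture-level hook that captures interaction events for later Knowledge Discovery analysis; everything below (field names, endpoint) is an illustrative assumption rather than the thesis's actual design.

```typescript
// Hypothetical interaction-event capture point: the extended software
// architecture records what users do so analytic workflows can feed
// improvements back into the interactive system.
interface InteractionEvent {
  userId: string;
  action: string;    // e.g. "form-field-edited", "exercise-submitted"
  context: string;   // which subsystem or case study emitted the event
  timestamp: number;
}

async function recordEvent(e: InteractionEvent): Promise<void> {
  // Fire-and-forget logging to an assumed analytics endpoint.
  await fetch("/analytics/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(e),
  });
}
```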
