    Modelling the Witwatersrand basin: a window into neoarchaean-palaeoproterozoic crustal-scale tectonics

    Master's dissertation, School of Geosciences, University of the Witwatersrand, 2017. The aim of this study was to investigate and evaluate the 3D structural architecture around the Vredefort dome in the Witwatersrand basin, in particular the unexposed southern portion. This was done in order to establish strato-tectonic relationships, first-order deformation structures, and basement architecture. The outcomes provide a more detailed architecture around the central uplift that may be used in future work aimed at examining the nature of giant terrestrial impacts. In summary, the integration of borehole, surface mapping, and 2D reflection seismic data provides a well-constrained 3D geological model of the dome, central uplift, and adjacent areas (covering approximately 11,600 km²). Seven structural features are discussed from the 3D modelling results. These include (1) a normal fault in the lower West Rand Group; (2) an undulating, normal-faulted truncation plane, constrained as post-West Rand Group and pre- or early Central Rand Group; (3) a truncation plane and local enhanced uplift, constrained as pre- to syn-VCF; (4) a listric fault system, constrained as post-Klipriviersberg Group and syn-Platberg Group; (5) a truncation plane, constrained as syn-Black Reef Formation; (6) folds, including a large asymmetric, gentle anticline here named the Vaal Dam Anticline, constrained as post-Magaliesberg Formation and pre-Vredefort impact; and (7) a listric fault across the southeastern margin of the Vredefort dome, constrained as late- to post-central-uplift formation. The findings support previous work by Tinker et al. (2002), Ivanov (2005), Alexandre et al. (2006), Dankert and Hein (2010), Manzi et al. (2013), Jahn and Riller (2015), and Reimold and Hoffmann (2016). However, the findings oppose various parts of previous work by Friese et al. (1995), Henkel and Reimold (1998), and Reimold and Koeberl (2014). A new term is also proposed for the periclinal folds located around the central uplift, i.e., impact-type curvature-accommodation folds. This study demonstrates the importance of integrating multiple sources of data into a single 3D spatial environment in order to better refine and distinguish impact-related deformation from the pre-existing basement architecture.

    Transformation Model With Constraints for High Accuracy of 2D-3D Building Registration in Aerial Imagery

    This paper proposes a novel rigorous transformation model for 2D-3D registration to address the difficult problem of obtaining a sufficient number of well-distributed ground control points (GCPs) in urban areas with tall buildings. The proposed model applies two types of geometric constraints, co-planarity and perpendicularity, to the conventional photogrammetric collinearity model. Both types of geometric information are directly obtained from building structures, with which the geometric constraints are automatically created and combined into the conventional transformation model. A test field located in downtown Denver, Colorado, is used to evaluate the accuracy and reliability of the proposed method, and a comparative analysis of the accuracy achieved by the proposed and conventional methods is conducted. Experimental results demonstrate that: (1) the theoretical accuracy of the solved registration parameters can reach 0.47 pixels, whereas the other methods reach 1.23 and 1.09 pixels; (2) the RMS values of 2D-3D registration achieved by the proposed model are only two pixels along the x and y directions, much smaller than the RMS values of the conventional model, which are approximately 10 pixels along the x and y directions. These results demonstrate that the proposed method is able to significantly improve the accuracy of 2D-3D registration with far fewer GCPs in urban areas with tall buildings.
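
    For context, a minimal sketch of the conventional photogrammetric collinearity model that such constraints extend (standard textbook form; the symbols here are generic and not the paper's notation). An object point (X, Y, Z) projects to image coordinates (x, y) via

        \begin{aligned}
        x - x_0 &= -f\,\frac{m_{11}(X - X_S) + m_{12}(Y - Y_S) + m_{13}(Z - Z_S)}{m_{31}(X - X_S) + m_{32}(Y - Y_S) + m_{33}(Z - Z_S)},\\
        y - y_0 &= -f\,\frac{m_{21}(X - X_S) + m_{22}(Y - Y_S) + m_{23}(Z - Z_S)}{m_{31}(X - X_S) + m_{32}(Y - Y_S) + m_{33}(Z - Z_S)},
        \end{aligned}

    where (x_0, y_0, f) are the interior orientation parameters, (X_S, Y_S, Z_S) is the camera position, and m_{ij} are elements of the rotation matrix. Building-derived constraints of the kind the paper describes then take forms such as co-planarity, a X_i + b Y_i + c Z_i + d = 0 for all points i on one facade, and perpendicularity, \mathbf{v}_1 \cdot \mathbf{v}_2 = 0 for direction vectors of adjacent facade edges.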

    Towards Scene Understanding with Detailed 3D Object Representations

    Current approaches to semantic image and scene understanding typically employ rather simple object representations such as 2D or 3D bounding boxes. While such coarse models are robust and allow for reliable object detection, they discard much of the information about objects' 3D shape and pose, and thus do not lend themselves well to higher-level reasoning. Here, we propose to base scene understanding on a high-resolution object representation. An object class - in our case cars - is modeled as a deformable 3D wireframe, which enables fine-grained modeling at the level of individual vertices and faces. We augment that model to explicitly include vertex-level occlusion, and embed all instances in a common coordinate frame, in order to infer and exploit object-object interactions. Specifically, from a single view we jointly estimate the shapes and poses of multiple objects in a common 3D frame. A ground plane in that frame is estimated by consensus among different objects, which significantly stabilizes monocular 3D pose estimation. The fine-grained model, in conjunction with the explicit 3D scene model, further allows one to infer part-level occlusions between the modeled objects, as well as occlusions by other, unmodeled scene elements. To demonstrate the benefits of such detailed object class models in the context of scene understanding, we systematically evaluate our approach on the challenging KITTI street scene dataset. The experiments show that the model's ability to utilize image evidence at the level of individual parts improves monocular 3D pose estimation w.r.t. both location and (continuous) viewpoint.
    Comment: International Journal of Computer Vision (appeared online on 4 November 2014). Online version: http://link.springer.com/article/10.1007/s11263-014-0780-
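
    The ground-plane-by-consensus step lends itself to a short illustration. Below is a minimal sketch, assuming one wheel-contact point per reconstructed car and a plain least-squares plane fit (function and variable names are hypothetical; the paper's actual estimator may differ):

        import numpy as np

        def fit_ground_plane(contact_points):
            """Fit a plane n.x + d = 0 to the wheel-contact points of all
            detected cars; pooling many objects stabilizes the estimate."""
            pts = np.asarray(contact_points, dtype=float)  # shape (N, 3)
            centroid = pts.mean(axis=0)
            # The singular vector with the smallest singular value of the
            # centered points is the direction of least variance: the normal.
            _, _, vt = np.linalg.svd(pts - centroid)
            normal = vt[-1]
            d = -normal.dot(centroid)
            return normal, d

        # Hypothetical contact points (camera coordinates) for four cars.
        cars = [(1.2, -1.5, 8.0), (-2.0, -1.4, 12.5),
                (0.5, -1.6, 20.1), (3.1, -1.5, 15.0)]
        normal, d = fit_ground_plane(cars)
        print("ground plane normal:", normal, "offset:", d)

    In practice a robust variant (e.g., RANSAC over the contact points) would down-weight cars whose pose estimates are poor, which is the sense in which the plane emerges by consensus.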

    LandMarkAR: An application to study virtual route instructions and the design of 3D landmarks for indoor pedestrian navigation with a mixed reality head-mounted display

    Mixed Reality (MR) interfaces on head-mounted displays (HMDs) have the potential to replace screen-based interfaces as the primary interface to the digital world. They potentially offer a more immersive and less distracting experience compared to mobile phones, allowing users to stay focused on their environment and main goals while accessing digital information. Due to their ability to gracefully embed virtual information in the environment, MR HMDs could alleviate some of the issues plaguing users of mobile pedestrian navigation systems, such as distraction, diminished route recall, and reduced spatial knowledge acquisition. However, the complexity of MR technology presents significant challenges, particularly for researchers with limited programming knowledge. This thesis presents “LandMarkAR” to address those challenges. “LandMarkAR” is a HoloLens application that allows researchers to create augmented territories to study human navigation with MR interfaces, even if they have little programming knowledge. “LandMarkAR” was designed using different methods from human-centered design (HCD), such as design thinking and think-aloud testing, and was developed with Unity and the Mixed Reality Toolkit (MRTK). With “LandMarkAR”, researchers can place and manipulate 3D objects as holograms in real time, facilitating indoor navigation experiments using 3D objects that serve as turn-by-turn instructions, highlights of physical landmarks, or other information researchers may come up with. Researchers with varying technical expertise can use “LandMarkAR” for MR navigation studies: they can opt to utilize the easy-to-use user interface (UI) on the HoloLens, or add custom functionality to the application directly in Unity. “LandMarkAR” empowers researchers to explore the full potential of MR interfaces in human navigation and create meaningful insights for their studies.

    An interactive camera placement and visibility simulator for image-based VR applications


    Shopping Using Gesture-Driven Interaction


    Enhancing Expressiveness of Speech through Animated Avatars for Instant Messaging and Mobile Phones

    This thesis aims to create a chat program that allows users to communicate via an animated avatar that provides believable lip-synchronization and expressive emotion. Currently, many avatars do not attempt lip-synchronization; those that do are not well synchronized and have little or no emotional expression. Most avatars with lip-synch use realistic-looking 3D models or stylized renderings of complex models. This work utilizes images rendered in a cartoon style and lip-synchronization rules based on traditional animation. The cartoon style, as opposed to a more realistic look, makes the mouth motion more believable and the characters more appealing. The cartoon look and image-based animation (as opposed to a graphic model animated through manipulation of a skeleton or wireframe) also allow for fewer key frames, resulting in greater speed and more room for expressiveness. When text is entered into the program, the Festival Text-to-Speech engine creates a speech file and extracts phoneme and phoneme-duration data. Believable and fluid lip-synchronization is then achieved by means of a number of phoneme-to-image rules. Alternatively, phoneme and phoneme-duration data can be obtained for speech dictated into a microphone using Microsoft SAPI and the CSLU Toolkit. Once lip-synchronization has been completed, rules for non-verbal animation are added. Emotions are appended to the animation of speech in two ways: automatically, by recognition of key words and punctuation, or deliberately, by user-defined tags. Additionally, rules are defined for idle-time animation. Preliminary results indicate that the animated avatar program offers an improvement over currently available software. It aids in the understandability of speech, combines easily recognizable and expressive emotions with speech, and successfully enhances overall enjoyment of the chat experience. Applications for the program include use in cell phones for the deaf or hearing impaired, instant messaging, video conferencing, instructional software, and speech and animation synthesis.
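
    The phoneme-to-image rules described above are easy to picture with a short sketch. The mapping table and the phoneme/duration values below are invented for illustration; the thesis derives its rules from traditional animation practice and the Festival engine's actual output:

        # Several phonemes share one mouth shape, as in traditional animation.
        PHONEME_TO_IMAGE = {
            "m": "mouth_closed.png", "b": "mouth_closed.png", "p": "mouth_closed.png",
            "f": "mouth_f.png", "v": "mouth_f.png",
            "aa": "mouth_open.png", "ae": "mouth_open.png",
            "iy": "mouth_wide.png", "eh": "mouth_mid.png",
            "sil": "mouth_rest.png",  # silence maps to the rest pose
        }

        def schedule_frames(phonemes):
            """Turn (phoneme, duration_in_seconds) pairs, as produced by a
            text-to-speech engine, into a timed list of avatar key frames."""
            frames, t = [], 0.0
            for ph, dur in phonemes:
                frames.append((t, PHONEME_TO_IMAGE.get(ph, "mouth_rest.png")))
                t += dur
            return frames

        # Hypothetical phoneme/duration output for the word "map".
        print(schedule_frames([("m", 0.08), ("ae", 0.15), ("p", 0.07), ("sil", 0.1)]))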

    Surface Reconstruction and Evolution from Multiple Views

    Applications like 3D Telepresence necessitate faithful 3D surface reconstruction of the object and 3D data compression in both the spatial and temporal domains. These capabilities make users feel immersed in virtual environments, thereby making 3D Telepresence a powerful tool in many applications. Hence, 3D surface reconstruction and 3D compression are two challenging problems, and both are addressed in this thesis.

    3D object reconstruction from 2D and 3D line drawings.

    Chen, Yu. Thesis (M.Phil.), Chinese University of Hong Kong, 2008. Includes bibliographical references (leaves 78-85). Abstracts in English and Chinese.

    Chapter 1: Introduction and Related Work
        1.1 Reconstruction from 2D Line Drawings and the Applications
        1.2 Previous Work on 3D Reconstruction from Single 2D Line Drawings
        1.3 Other Related Work on Interpretation of 2D Line Drawings
            1.3.1 Line Labeling and Superstrictness Problem
            1.3.2 CAD Reconstruction
            1.3.3 Modeling from Images
            1.3.4 Identifying Faces in the Line Drawings
        1.4 3D Modeling Systems
        1.5 Research Problems and Our Contributions
            1.5.1 Recovering Complex Manifold Objects from Line Drawings
            1.5.2 The Vision-based Sketching System
    Chapter 2: Reconstruction from Complex Line Drawings
        2.1 Introduction
        2.2 Assumptions and Terminology
        2.3 Separation of a Line Drawing
            2.3.1 Classification of Internal Faces
            2.3.2 Separating a Line Drawing along Internal Faces of Type 1
            2.3.3 Detecting Internal Faces of Type 2
            2.3.4 Separating a Line Drawing along Internal Faces of Type 2
        2.4 3D Reconstruction
            2.4.1 3D Reconstruction from a Line Drawing
            2.4.2 Merging 3D Manifolds
            2.4.3 The Complete 3D Reconstruction Algorithm
        2.5 Experimental Results
        2.6 Summary
    Chapter 3: A Vision-Based Sketching System for 3D Object Design
        3.1 Introduction
        3.2 The Sketching System
        3.3 3D Geometry of the System
            3.3.1 Locating the Wand
            3.3.2 Calibration
            3.3.3 Working Space
        3.4 Wireframe Input and Object Editing
        3.5 Surface Generation
            3.5.1 Face Identification
            3.5.2 Planar Surface Generation
            3.5.3 Smooth Curved Surface Generation
        3.6 Experiments
        3.7 Summary
    Chapter 4: Conclusion and Future Work
        4.1 Conclusion
        4.2 Future Work
            4.2.1 Learning-Based Line Drawing Reconstruction
            4.2.2 New Query Interface for 3D Object Retrieval
            4.2.3 Curved Object Reconstruction
            4.2.4 Improving the 3D Sketch System
            4.2.5 Other Directions
    Bibliography