
    Modeling And Simulation Of Soft Bodies

    As graphics and simulations become more realistic, techniques for approximating soft body objects, that is, non-solid objects such as liquids, gases, and cloth, are becoming increasingly common. The proposed generalized soft body method encompasses some specific cases of other existing models, enabling simulation of a variety of soft body materials by parameter adjustment. This research presents a general method of soft body modeling and simulation in which parameters for body control, surface deformation, volume control, and gravitation can be adjusted to simulate different types of soft bodies. In this method, the soft body mesh structure maintains the configuration among surface points while fluid modeling deforms the details of the surface. To maintain volume, an internal pressure is approximated by simulated molecules within the soft body. Free-fall motion of the soft body is generated by a gravitational field. Additionally, a constraint is specified based on the properties of the soft body being modeled. There are several standard methods to control soft body volume. This work illustrates the simplicity of the simulation by selecting a mass-spring system for the deformation of the connected points of a three-dimensional mesh, while an internal pressure force acts upon the surface triangles. To incorporate fluidity, smoothed particle hydrodynamics (SPH) is applied, where surface points are treated as free-moving particles interacting with neighboring surface points within an SPH radius. Because SPH is computationally expensive, it requires an efficient method to determine neighboring surface points. Collision detection between soft bodies and other rigid body objects also requires such fast neighbor detection. To determine neighboring surface points, axis-aligned bounding boxes (AABB), octrees, and a partitioning and hashing scheme have been investigated, and the results show that the partitioning and hashing scheme provides the best frame rate. Thus a fast partitioning and hashing scheme is used in this research to reduce both computational time and memory requirements. The proposed soft body model is intended for several types of soft body applications, depending on the specific type of soft body deformation required. The work presented in this dissertation details experiments with a variety of visually appealing fluid-like surfaces and organic materials animated at interactive speeds. The algorithm is also used to implement animated space-blob creatures in the Galactic Arms Race video game and a human lung simulation, demonstrating the effectiveness of the algorithm in both an actual video game engine and a medical application. The simulation results show that the general soft body model can be applied to several applications by adjusting the soft body parameters to achieve the desired appearance.
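
    The abstract above highlights a partitioning and hashing scheme as the fastest way to find neighboring surface points within the SPH radius. The following Python sketch is a generic illustration of that idea, not the dissertation's code: a uniform grid keyed by integer cell coordinates, where a neighbor query only inspects the 27 cells around a point. All names, and the choice of cell size equal to the SPH radius h, are assumptions made here for illustration.

```python
# Minimal sketch (not the dissertation's code): a uniform spatial hash grid
# for finding neighboring surface points within an SPH radius h.
from collections import defaultdict

def build_hash_grid(points, h):
    """Map each point index to a cell keyed by its integer grid coordinates."""
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(points):
        grid[(int(x // h), int(y // h), int(z // h))].append(i)
    return grid

def neighbors_within(points, grid, i, h):
    """Return indices of points within distance h of point i (excluding i itself)."""
    px, py, pz = points[i]
    cx, cy, cz = int(px // h), int(py // h), int(pz // h)
    result = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy, cz + dz), []):
                    if j == i:
                        continue
                    qx, qy, qz = points[j]
                    if (px - qx) ** 2 + (py - qy) ** 2 + (pz - qz) ** 2 <= h * h:
                        result.append(j)
    return result

# Example: query neighbors of point 0 among a few surface points.
pts = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (1.0, 1.0, 1.0)]
grid = build_hash_grid(pts, h=0.25)
print(neighbors_within(pts, grid, 0, h=0.25))  # -> [1]
```

    Keeping the cell size equal to the query radius means a query never has to look beyond the immediately adjacent cells, which is what makes this kind of scheme cheaper in practice than tree traversals such as an octree.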

    Map-Based Localization for Unmanned Aerial Vehicle Navigation

    Unmanned Aerial Vehicles (UAVs) require precise pose estimation when navigating in indoor and GNSS-denied / GNSS-degraded outdoor environments. The possibility of crashing in these environments is high, as spaces are confined and contain many moving obstacles. There are many solutions for localization in GNSS-denied environments, and many different technologies are used. Common solutions involve setting up or using existing infrastructure, such as beacons, Wi-Fi, or surveyed targets. These solutions were avoided because the cost should be proportional to the number of users, not the coverage area. Heavy and expensive sensors, for example a high-end IMU, were also avoided. Given these requirements, a camera-based localization solution was selected for the sensor pose estimation. Several camera-based localization approaches were investigated. Map-based localization methods were shown to be the most efficient because they close loops using a pre-existing map; the amount of data and the time spent collecting it are thus reduced, as there is no need to re-observe the same areas multiple times. This dissertation proposes a solution to the task of fully localizing a monocular camera onboard a UAV with respect to a known environment (i.e., it is assumed that a 3D model of the environment is available) for the purpose of UAV navigation in structured environments. Incremental map-based localization involves tracking a map through an image sequence. When the map is a 3D model, this task is referred to as model-based tracking. A by-product of the tracker is the relative 3D pose (position and orientation) between the camera and the object being tracked. State-of-the-art solutions advocate that tracking geometry is more robust than tracking image texture because edges are more invariant to changes in object appearance and lighting. However, model-based trackers have been limited to tracking small, simple objects in small environments. An assessment was performed on tracking larger, more complex building models in larger environments. A state-of-the-art model-based tracker called ViSP (Visual Servoing Platform) was applied to tracking outdoor and indoor buildings using a UAV's low-cost camera. The assessment revealed weaknesses at large scales. Specifically, ViSP failed when tracking was lost and needed to be manually re-initialized. Failure occurred when there was a lack of model features in the camera's field of view, and because of rapid camera motion. Experiments revealed that ViSP achieved positional accuracies similar to single point positioning solutions obtained from single-frequency (L1) GPS observations, with standard deviations around 10 metres. These errors were considered large, given that the geometric accuracy of the 3D model used in the experiments was 10 to 40 cm. The first contribution of this dissertation proposes to increase the performance of the localization system by combining ViSP with map-building incremental localization, also referred to as simultaneous localization and mapping (SLAM). Experimental results in both indoor and outdoor environments show that sub-metre positional accuracies were achieved, while reducing the number of tracking losses throughout the image sequence. It is shown that by integrating model-based tracking with SLAM, not only does SLAM improve model tracking performance, but the model-based tracker also alleviates the computational expense of SLAM's loop-closing procedure, improving runtime performance.
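
    As a rough illustration of how a model-based tracker and SLAM can complement each other, the sketch below (my own simplification, not the dissertation's implementation) uses the map-anchored pose reported by a model-based tracker to compute a correction that re-anchors subsequent, drifting incremental (SLAM-style) pose estimates; poses are represented as 4x4 homogeneous matrices.

```python
# Minimal sketch (illustration only): correct SLAM drift with a model-based fix.
import numpy as np

def drift_correction(T_model, T_slam):
    """Transform that maps the drifted SLAM frame onto the model (map) frame."""
    return T_model @ np.linalg.inv(T_slam)

def corrected_pose(T_corr, T_slam):
    """Apply the latest correction to a new incremental SLAM pose estimate."""
    return T_corr @ T_slam

# Example: a SLAM pose that has drifted 0.5 m along x is re-anchored when the
# model-based tracker reports the true camera pose.
T_true = np.eye(4); T_true[0, 3] = 10.0            # pose from model-based tracking
T_slam = np.eye(4); T_slam[0, 3] = 10.5            # drifted incremental estimate
T_corr = drift_correction(T_true, T_slam)
T_next_slam = np.eye(4); T_next_slam[0, 3] = 11.5  # next drifted SLAM pose
print(corrected_pose(T_corr, T_next_slam)[:3, 3])  # -> [11.  0.  0.]
```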
    Experiments also revealed that ViSP was unable to handle occlusions when a complete 3D building model was used, resulting in large errors in its pose estimates. The second contribution of this dissertation is a novel map-based incremental localization algorithm that improves tracking performance and increases pose estimation accuracy over ViSP. The novelty of this algorithm is the implementation of an efficient matching process that identifies corresponding linear features between the UAV's RGB image data and a large, complex, and untextured 3D model. The proposed model-based tracker improved positional accuracies from 10 m (obtained with ViSP) to 46 cm in outdoor environments, and from an unattainable result using ViSP to 2 cm positional accuracies in large indoor environments. The main disadvantage of any incremental algorithm is that it requires the camera pose of the first frame. Initialization is often a manual process. The third contribution of this dissertation is a map-based absolute localization algorithm that automatically estimates the camera pose when no prior pose information is available. The method uses vertical line matching to register reference model views with a set of initial input images via geometric hashing. Results demonstrate that sub-metre positional accuracies were achieved and that a proposed enhancement of conventional geometric hashing produced more correct matches: 75% of the correct matches were identified, compared to 11% with the conventional approach. Further, the number of incorrect matches was reduced by 80%.
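
    The registration step mentioned above relies on geometric hashing. The sketch below illustrates conventional 2D geometric hashing on point features only, as a hedged simplification: the dissertation matches vertical lines against views of a 3D model, and its proposed enhancement is not reproduced here. Model features are stored in a hash table indexed by their coordinates under every ordered basis pair, and a scene basis then votes for the model basis that explains the most features.

```python
# Minimal sketch of conventional 2D geometric hashing with point features
# (my own simplification; not the dissertation's line-based algorithm).
from collections import defaultdict
from itertools import permutations
import numpy as np

def basis_coords(p, origin, axis):
    """Express point p in the frame defined by a basis pair (origin, origin+axis)."""
    u = axis / np.linalg.norm(axis)
    v = np.array([-u[1], u[0]])          # perpendicular to u
    d = p - origin
    return (round(float(d @ u), 1), round(float(d @ v), 1))

def build_table(model_pts):
    """Hash every model point under every ordered basis pair of model points."""
    table = defaultdict(list)
    for i, j in permutations(range(len(model_pts)), 2):
        o, a = model_pts[i], model_pts[j] - model_pts[i]
        for p in model_pts:
            table[basis_coords(p, o, a)].append((i, j))
    return table

def vote(table, scene_pts):
    """Vote for the model basis pair that best explains the scene points."""
    votes = defaultdict(int)
    o, a = scene_pts[0], scene_pts[1] - scene_pts[0]   # pick one scene basis
    for p in scene_pts:
        for basis in table.get(basis_coords(p, o, a), []):
            votes[basis] += 1
    return max(votes, key=votes.get) if votes else None

model = [np.array(p, float) for p in [(0, 0), (2, 0), (1, 1)]]
scene = [np.array(p, float) for p in [(5, 5), (7, 5), (6, 6)]]  # translated copy
print(vote(build_table(model), scene))  # -> (0, 1), the matching model basis
```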

    MVDream: Multi-view Diffusion for 3D Generation

    We propose MVDream, a multi-view diffusion model that is able to generate geometrically consistent multi-view images from a given text prompt. By leveraging image diffusion models pre-trained on large-scale web datasets and a multi-view dataset rendered from 3D assets, the resulting multi-view diffusion model can achieve both the generalizability of 2D diffusion and the consistency of 3D data. Such a model can thus be applied as a multi-view prior for 3D generation via Score Distillation Sampling, where it greatly improves the stability of existing 2D-lifting methods by solving the 3D consistency problem. Finally, we show that the multi-view diffusion model can also be fine-tuned under a few-shot setting for personalized 3D generation, i.e., the DreamBooth3D application, where consistency can be maintained after learning the subject identity.
    Comment: Our project page is https://MV-Dream.github.io
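
    Since the abstract describes using the multi-view diffusion model as a prior via Score Distillation Sampling (SDS), the following Python (PyTorch-style) sketch outlines one generic SDS update step. It is a hedged simplification: `render`, `diffusion_model.alpha_bar`, and `diffusion_model.predict_noise` are placeholder interfaces assumed here, not MVDream's actual API, and the multi-view aspect (rendering and denoising several consistent views at once) is omitted.

```python
# Minimal sketch of a generic Score Distillation Sampling (SDS) update step.
# `render` and `diffusion_model` are placeholders, not MVDream's interfaces.
import torch

def sds_step(params, camera, prompt_embedding, diffusion_model, render, optimizer,
             num_timesteps=1000, guidance_scale=7.5):
    """One SDS update: render the 3D representation, noise the rendering, and push
    it toward what the frozen diffusion model predicts it should look like."""
    image = render(params, camera)                 # differentiable rendering
    t = torch.randint(1, num_timesteps, (1,))      # random diffusion timestep
    noise = torch.randn_like(image)
    alpha_bar = diffusion_model.alpha_bar(t)       # cumulative noise schedule term
    noisy = alpha_bar.sqrt() * image + (1 - alpha_bar).sqrt() * noise
    with torch.no_grad():                          # the diffusion model stays frozen
        eps_pred = diffusion_model.predict_noise(noisy, t, prompt_embedding,
                                                 guidance_scale=guidance_scale)
    w = 1 - alpha_bar                              # common timestep weighting
    # SDS gradient: skip the U-Net Jacobian; backprop (eps_pred - noise) into the renderer.
    grad = w * (eps_pred - noise)
    loss = (grad.detach() * image).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```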