
    GENERATION OF FORESTS ON TERRAIN WITH DYNAMIC LIGHTING AND SHADOWING

    The purpose of this research project is to demonstrate an efficient method of creating dynamic lighting and shadowing for the generation of forests on terrain. In this project, I use textures containing bird's-eye-view images of trees in order to create a large-scale forest. By manipulating the transparency and color of these textures according to algorithmic calculations of light and shadow on the terrain, I provide dynamic lighting and shadowing. Finally, by analyzing the OpenGL pipeline, I design my code to allow efficient rendering of the forest.
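
    As a rough illustration of the billboard-shading idea in the abstract above, the following Python/NumPy sketch computes a per-cell light factor from a terrain height-map and uses it to darken a tree billboard texture. The function names, data layout, and Lambertian shading model are assumptions made for illustration, not the author's code; a real implementation would perform this modulation per fragment inside the OpenGL pipeline.

        import numpy as np

        def terrain_light_factor(heightmap, light_dir=(0.5, 0.5, 0.7)):
            # Lambert-style shade per terrain cell, from height-map normals.
            gy, gx = np.gradient(heightmap.astype(np.float32))
            normals = np.dstack((-gx, -gy, np.ones_like(gx)))
            normals /= np.linalg.norm(normals, axis=2, keepdims=True)
            light = np.asarray(light_dir, dtype=np.float32)
            light /= np.linalg.norm(light)
            return np.clip(normals @ light, 0.0, 1.0)        # (H, W), values in [0, 1]

        def shade_billboard(tree_rgba, light, ambient=0.35):
            # Darken only the RGB channels of a tree billboard placed on a cell
            # with light factor `light`; alpha is kept unchanged so the billboard
            # still blends correctly over the terrain.
            shaded = tree_rgba.astype(np.float32).copy()
            shaded[..., :3] *= ambient + (1.0 - ambient) * light
            return shaded.astype(tree_rgba.dtype)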

    Asteroid modeling for testing spacecraft approach and landing


    Fast, Realistic Terrain Synthesis

    The authoring of realistic terrain models is necessary to generate immersive virtual environments for computer games and film visual effects. However, creating these landscapes is difficult – it usually involves an artist spending many hours sculpting a model in a 3D design program. Specialised terrain generation programs such as Bryce (2013) and Terragen (2013) exist to rapidly create artificial terrains. These use complex algorithms to pseudo-randomly generate terrains, which can then be exported into a 3D editing program for fine-tuning. A height-map is a 2D data structure that stores elevation values and can be used to represent terrain data; it is also a common format in terrain generation and editing systems. Height-maps share the same storage design as image files, so they can be viewed like any picture and image transformation algorithms can be applied to them. Early techniques for generating terrains include fractal generation and physical simulation. These methods proved difficult to use because the algorithms are controlled by a set of parameters whose effect on the output is hard to predict, forcing the user to adjust values over several iterations to produce the desired terrain. An improved technique, known as texture-based terrain synthesis, brings a higher degree of user control as well as improved realism. It borrows from texture synthesis, the process of algorithmically generating a larger image from a smaller sample image. Texture-based terrain synthesis makes use of real-world terrain data to produce highly realistic landscapes, improving upon previous techniques. Recent work in texture-based synthesis has focused on improving both realism and user control through sketching interfaces. We present a patch-based terrain synthesis system that utilises a user sketch to control the location of desired terrain features, such as ridges and valleys. Digital Elevation Models (DEMs) of real landscapes are used as exemplars, from which candidate patches of data are extracted and matched against the user's sketch. The best candidates are merged seamlessly into the final terrain. Because real landscapes are used, the resulting terrain appears highly realistic. Our research contributes a new version of this approach that employs multiple input terrains and acceleration on a modern Graphics Processing Unit (GPU). The use of multiple inputs increases the candidate pool of patches, so the system can produce more varied terrains. This addresses the limitation whereby supplying the wrong type of input terrain fails to synthesise anything useful, for example supplying the system with a mountainous DEM and expecting deep valleys in the output. We developed a hybrid multithreaded CPU and GPU implementation that achieves a 45-times speedup.
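
    A minimal Python/NumPy sketch of the patch-matching step described above, assuming a plain sliding-window candidate extraction and a sum-of-squared-differences cost; the actual system additionally merges the chosen patches seamlessly and accelerates matching on the GPU.

        import numpy as np

        def normalize(block):
            # Rescale heights to [0, 1] so patches from different DEMs are comparable.
            rng = block.max() - block.min()
            return (block - block.min()) / (rng if rng > 0 else 1.0)

        def extract_patches(dem, size=64, stride=32):
            # Slide a window over a DEM exemplar and yield candidate patches.
            for y in range(0, dem.shape[0] - size + 1, stride):
                for x in range(0, dem.shape[1] - size + 1, stride):
                    yield dem[y:y + size, x:x + size]

        def best_patch(sketch_region, exemplars, size=64):
            # Pick the candidate patch (from any input DEM) whose normalized
            # heights are closest, in SSD terms, to the user's sketched region.
            target = normalize(sketch_region)
            best, best_cost = None, np.inf
            for dem in exemplars:                        # multiple input terrains
                for patch in extract_patches(dem, size):
                    cost = np.sum((normalize(patch) - target) ** 2)
                    if cost < best_cost:
                        best, best_cost = patch, cost
            return best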

    Procedural Modeling and Physically Based Rendering for Synthetic Data Generation in Automotive Applications

    We present an overview and evaluation of a new, systematic approach for generating highly realistic, annotated synthetic data for training deep neural networks on computer vision tasks. The main contribution is a procedural world modeling approach that enables high variability coupled with physically accurate image synthesis, a departure from the hand-modeled virtual worlds and approximate image synthesis methods used in real-time applications. The benefits of our approach include flexible, physically accurate and scalable image synthesis, implicit wide coverage of classes and features, and complete data introspection for annotations, all of which contribute to quality and cost efficiency. To evaluate our approach and the efficacy of the resulting data, we use semantic segmentation for autonomous vehicles and robotic navigation as the main application, and we train multiple deep learning architectures using synthetic data with and without fine-tuning on organic (i.e. real-world) data. The evaluation shows that our approach improves the neural networks' performance and that even modest implementation efforts produce state-of-the-art results. Comment: The project web page at http://vcl.itn.liu.se/publications/2017/TKWU17/ contains a version of the paper with high-resolution images as well as additional material.
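
    The evaluation protocol sketched above (training on synthetic data, then optionally fine-tuning on organic data) can be summarized by the following hedged PyTorch-style recipe; the data loaders, model, and hyper-parameters are placeholders rather than the authors' configuration.

        import torch

        def train(model, loader, epochs, lr):
            # Plain supervised training loop for semantic segmentation.
            opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
            loss_fn = torch.nn.CrossEntropyLoss()        # per-pixel class labels
            model.train()
            for _ in range(epochs):
                for images, labels in loader:
                    opt.zero_grad()
                    loss = loss_fn(model(images), labels)
                    loss.backward()
                    opt.step()
            return model

        # model = train(model, synthetic_loader, epochs=30, lr=1e-2)  # synthetic data only
        # model = train(model, real_loader, epochs=5, lr=1e-3)        # + fine-tuning on real data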

    Automatic Model Based Dataset Generation for Fast and Accurate Crop and Weeds Detection

    Selective weeding is one of the key challenges in the field of agricultural robotics. To accomplish this task, a farm robot should be able to accurately detect plants and to distinguish between crops and weeds. Most of the promising state-of-the-art approaches make use of appearance-based models trained on large annotated datasets. Unfortunately, creating large agricultural datasets with pixel-level annotations is an extremely time-consuming task, which in practice penalizes the use of data-driven techniques. In this paper, we address this problem by proposing a novel and effective approach that aims to dramatically minimize the human intervention needed to train the detection and classification algorithms. The idea is to procedurally generate large synthetic training datasets by randomizing the key features of the target environment (i.e., crop and weed species, type of soil, light conditions). More specifically, by tuning these model parameters and exploiting a few real-world textures, it is possible to render a large number of realistic views of an artificial agricultural scenario with no effort. The generated data can be used directly to train the model or to supplement real-world images. We validate the proposed methodology by using a modern deep learning based image segmentation architecture as a testbed. We compare the classification results obtained using both real and synthetic images as training data. The reported results confirm the effectiveness and the potential of our approach. Comment: To appear in IEEE/RSJ IROS 2017.
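
    The following Python sketch illustrates the domain-randomization loop implied by the abstract; render_scene stands in for the actual procedural renderer and is purely hypothetical, as are the species and soil names and the parameter ranges.

        import random

        SPECIES = ["sugar_beet", "capsella", "galium"]   # assumed class names
        SOILS = ["dry", "wet", "stony"]

        def generate_dataset(n_images, render_scene):
            # Yield (image, pixel-label mask) pairs with randomized scene parameters;
            # the annotation comes for free because the generator knows every plant's pose.
            for _ in range(n_images):
                params = {
                    "crop": random.choice(SPECIES),
                    "weed_density": random.uniform(0.0, 0.4),
                    "soil": random.choice(SOILS),
                    "sun_elevation_deg": random.uniform(15.0, 75.0),
                }
                image, mask = render_scene(**params)
                yield image, mask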

    On Advanced Mobility Concepts for Intelligent Planetary Surface Exploration

    Surface exploration by wheeled rovers on Earth's Moon (the two Lunokhods) and Mars (NASA's Sojourner and the two MERs) has been conducted very successfully for many years, particularly with respect to long-duration operations. Despite this success, however, the explored surface area has been very small: the total driving distance is about 8 km for Spirit and 21 km for Opportunity over six years of operation. Moreover, ESA will send its ExoMars rover to Mars in 2018, and NASA will probably launch its MSL rover this year. However, all of these rovers lack sufficient on-board intelligence to cover longer distances, drive much faster, and autonomously plan the best trajectory to follow. To increase the scientific output of a rover mission, it appears necessary to explore much larger surface areas reliably in much less time. This is the main driver for a robotics institute to combine mechatronic functionalities into an intelligent mobile wheeled rover with four or six wheels, with specific kinematics and locomotion suspension chosen according to the terrain in which the rover is to operate. DLR's Robotics and Mechatronics Center has a long tradition of developing advanced components in the fields of light-weight motion actuation, intelligent and soft manipulation, skilled hands and tools, and perception and cognition, and of increasing the autonomy of all kinds of mechatronic systems. The whole design is supported by and based upon detailed modeling, optimization, and simulation. We have developed efficient software tools to simulate rover driveability on various terrain types, such as soft sandy and hard rocky terrain as well as inclined planes, where wheel and grouser geometry plays a dominant role. Moreover, rover optimization is performed to support the best engineering intuitions: it optimizes structural and geometric parameters, compares various kinematic suspension concepts, and makes use of realistic cost functions such as mass and consumed-energy minimization and static stability. For self-localization and safe navigation through unknown terrain, we use fast 3D stereo algorithms that have been applied successfully, e.g., in unmanned aerial vehicle applications and on terrestrial mobile systems. The advanced rover design approach is applicable to both lunar and Martian surface exploration. A first mobility concept for a lunar vehicle will be presented.
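
    As a hedged illustration of the kind of scalarized cost function mentioned above (mass, consumed energy, static stability), the following Python snippet shows one possible formulation; the weights, units, and the soft-constraint stability model are assumptions, not DLR's actual optimization setup.

        def rover_design_cost(mass_kg, energy_wh_per_km, tipover_margin_deg,
                              w_mass=1.0, w_energy=0.5, min_margin_deg=30.0):
            # Lower is better; designs violating the static-stability margin
            # receive a large soft-constraint penalty.
            cost = w_mass * mass_kg + w_energy * energy_wh_per_km
            if tipover_margin_deg < min_margin_deg:
                cost += 1e3 * (min_margin_deg - tipover_margin_deg)
            return cost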

    A Hierarchical Planning Framework for AUV Mission Management in a Spatio-Temporal Varying Ocean

    The purpose of this paper is to provide a hierarchical dynamic mission planning framework for a single autonomous underwater vehicle (AUV) to accomplish a task-assignment process within a limited time interval while operating in an uncertain undersea environment, where the spatio-temporal variability of the operating field is taken into account. To this end, a high-level reactive mission planner and a low-level motion planning system are constructed. The high-level system is responsible for task priority assignment and for guiding the vehicle toward a target of interest while ensuring on-time termination of the mission. The lower layer is in charge of generating optimal trajectories based on the sequence of tasks and the dynamics of the operating terrain. The mission planner is able to reactively re-arrange the tasks based on mission/terrain updates, while the low-level planner is capable of coping with unexpected changes in the terrain by correcting the old path and generating a new trajectory. As a result, the vehicle is able to undertake the maximum number of tasks with a certain degree of maneuverability and situational awareness of the operating field. The computational engine of this framework is the biogeography-based optimization (BBO) algorithm, which is capable of providing efficient solutions. To evaluate the performance of the proposed framework, a realistic model of the undersea environment is first constructed from real map data, and then several scenarios, treated as real experiments, are designed in a simulation study. Additionally, to show the robustness and reliability of the framework, Monte Carlo simulations are carried out and a statistical analysis is performed. The results of the simulations indicate the significant potential of the two-level hierarchical mission planning system for mission success and its applicability to real-time implementation.
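
    A simplified Python sketch of biogeography-based optimization (BBO) applied to task ordering, using a random-key encoding; the migration scheme is reduced to its essentials and mission_reward is a placeholder for the paper's time- and maneuverability-aware objective, so this is an illustration of the algorithm family rather than the authors' planner.

        import random

        def order(keys):
            # Decode a random-key vector into a task visiting order.
            return sorted(range(len(keys)), key=lambda t: keys[t])

        def bbo_plan(n_tasks, mission_reward, pop=20, gens=100, p_mut=0.05):
            # Each habitat is a random-key vector; its sort order is a candidate task sequence.
            habitats = [[random.random() for _ in range(n_tasks)] for _ in range(pop)]
            for _ in range(gens):
                # Rank habitats: higher reward -> more emigration, less immigration.
                habitats.sort(key=lambda h: mission_reward(order(h)), reverse=True)
                for i, hab in enumerate(habitats):
                    immigration = (i + 1) / pop          # worse habitats import more features
                    for j in range(n_tasks):
                        if random.random() < immigration:
                            donor = habitats[random.randrange(max(i, 1))]
                            hab[j] = donor[j]            # migrate a feature from a better habitat
                        if random.random() < p_mut:
                            hab[j] = random.random()     # mutation
            return order(max(habitats, key=lambda h: mission_reward(order(h))))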