
    Procedural Generation and Rendering of Realistic, Navigable Forest Environments: An Open-Source Tool

    Simulation of forest environments has applications ranging from entertainment and art creation to commercial and scientific modelling. Due to the unique features and lighting in forests, a forest-specific simulator is desirable; however, many current forest simulators are proprietary or highly tailored to a particular application. Here we review several areas of procedural generation and rendering specific to forest generation, and utilise this to create a generalised, open-source tool for generating and rendering interactive, realistic forest scenes. The system uses specialised L-systems to generate trees, which are distributed using an ecosystem simulation algorithm. The resulting scene is rendered using a deferred rendering pipeline, a Blinn-Phong lighting model with real-time leaf transparency, and post-processing lighting effects. The result is a system that achieves a balance between high natural realism and visual appeal, suitable for tasks including training computer vision algorithms for autonomous robots and visual media generation. Comment: 14 pages, 11 figures. Submitted to Computer Graphics Forum (CGF). The application and supporting configuration files can be found at https://github.com/callumnewlands/ForestGenerato
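
    To make the generation pipeline concrete, the following is a minimal Python sketch of bracketed L-system rewriting only; the grammar, axiom and iteration count are invented for illustration and are not the paper's actual rules, and the ecosystem-distribution and rendering stages are omitted.

```python
# Minimal bracketed L-system rewriting sketch (illustrative only; the paper's
# actual grammar, parameters and turtle interpretation are not reproduced).

RULES = {
    "F": "FF",            # hypothetical production rules
    "X": "F[+X][-X]FX",
}

def expand(axiom: str, rules: dict, iterations: int) -> str:
    """Apply the production rules to every symbol, `iterations` times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

if __name__ == "__main__":
    derivation = expand("X", RULES, 4)
    # A turtle-graphics interpreter ('F' = draw segment, '+'/'-' = rotate,
    # '[' / ']' = push/pop state) would turn this string into tree geometry.
    print(len(derivation), derivation[:60])
```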

    Mapping textures on 3d terrains: a hybrid cellular automata approach

    It is a time-consuming task to generate textures for large 3D terrain surfaces in computer games, flight simulations and computer animations. This work explores the use of cellular automata in the automatic generation of textures for large surfaces. I propose a method for generating textures for 3D terrains using various approaches, in particular a hybrid approach that integrates the concepts of cellular automata, probabilistic distribution according to height, and Wang tiles. I also look at other hybrid combinations using cellular automata to generate textures for 3D terrains. Work for this thesis includes the development of a tool called "Texullar" that allows users to generate textures for 3D terrain surfaces by configuring various input parameters and choosing cellular automata rules. I evaluate the effectiveness of the approach by conducting a user survey comparing the results obtained with different inputs and analyzing the results. The findings show that incorporating concepts of cellular automata in texture generation for terrains can lead to better results than random generation of textures. The analysis also reveals that incorporating height information along with cellular automata yields better results than using cellular automata alone. Results from the user survey indicate that, for texture generation on 3D meshes, a hybrid approach combining height information, cellular automata and Wang tiles is better than one combining only height information and cellular automata. The survey did not yield enough evidence to determine whether the use of Wang tiles in combination with cellular automata and probabilistic distribution according to height results in a higher mean score than the use of only cellular automata and probabilistic distribution. However, this outcome could have been influenced by the fact that the survey respondents did not have information about the parameters used to generate the final image, such as the probabilistic distributions, the population configurations and the rules of the cellular automata.
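
    As a rough illustration of how a hybrid of height-based probabilistic initialisation and cellular-automaton smoothing could assign texture labels, here is a small Python sketch; the label classes, rules and thresholds are assumptions made for illustration and do not reproduce Texullar's actual rules, Wang-tile step or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_by_height(height: np.ndarray) -> np.ndarray:
    """Assign initial texture labels probabilistically from terrain height
    (0 = water, 1 = grass, 2 = rock); higher cells are more likely rock."""
    p_rock = (height - height.min()) / (np.ptp(height) + 1e-9)
    labels = np.where(rng.random(height.shape) < p_rock, 2, 1)
    labels[height < np.quantile(height, 0.15)] = 0   # lowest cells become water
    return labels

def ca_step(labels: np.ndarray) -> np.ndarray:
    """One majority-vote cellular-automaton step over the 8-neighbourhood."""
    out = labels.copy()
    h, w = labels.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = labels[y - 1:y + 2, x - 1:x + 2].ravel()
            out[y, x] = np.bincount(window).argmax()
    return out

height = rng.random((64, 64))     # toy height map, not real terrain data
labels = init_by_height(height)
for _ in range(5):                # repeated CA steps smooth labels into patches
    labels = ca_step(labels)
```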

    Hyperspectral Clustering and Unmixing of Satellite Imagery for the Study of Complex Society State Formation

    This project is an application of remote sensing techniques to the field of archaeology. Clustering and unmixing algorithms are applied to hyperspectral Hyperion imagery over Oaxaca, Mexico. Oaxaca is the birthplace of the Zapotec civilization, the earliest state-level society in Mesoamerica. A passionate debate is ongoing over whether the Zapotecs' evolution was driven by environmental determinism or by socioeconomic factors. Previous archaeological remote sensing has focused on the difficult tasks of feature detection using low spatial resolution imagery or visual inspection of spectral data. This project attempts to learn about a civilization at the macro level, using unsupervised land classification techniques. Overlapping 158-band Hyperion data are tasked for approximately 30,000 km², to be collected over several years. K-means and ISODATA are implemented for clustering. MaxD is used to find endmembers for stepwise spectral unmixing. Case studies are performed that provide insights into the best use of the various algorithms. To produce results with spatial context, a method is devised to tile long hyperspectral flight lines, process them, and then merge the tiles back into a single coherent image. Google Earth is utilized to effectively share the produced classification and abundance maps. All the processes are automated to efficiently handle the large amount of data. In summary, this project focuses on spectral over spatial exploitation for a land survey study, using open source tools to facilitate results. Classification and abundance maps are generated highlighting basic material spatial patterns (e.g., soil, vegetation and water). Additional remote sensing techniques that are potentially useful to archaeologists are briefly described for use in future work.
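
    For the clustering step, a minimal Python sketch of unsupervised k-means over hyperspectral pixels is given below; it uses scikit-learn with placeholder random data and an arbitrary class count, not the project's actual Hyperion scenes, ISODATA/MaxD implementations or parameters.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical hyperspectral cube: rows x cols x bands (Hyperion scenes in the
# project use 158 bands; the values here are random placeholders).
rows, cols, bands = 100, 100, 158
cube = np.random.rand(rows, cols, bands)

# Flatten to a (pixels x bands) matrix and cluster the spectra into k classes.
pixels = cube.reshape(-1, bands)
k = 6   # number of land-cover classes is a free choice, not taken from the paper
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)

# Reshape the cluster labels back into image form -> an unsupervised class map.
class_map = kmeans.labels_.reshape(rows, cols)
```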

    Visualization Tools for Comparative Genomics applied to Convergent Evolution in Ash Trees

    Assembly and analysis of whole genomes is now a routine part of genetic research, but effective tools for the visualization of whole genomes and their alignments are few. Here we present two approaches to allow such visualizations to be done in an efficient and user-friendly manner. These allow researchers to spot problems and patterns in their data and present them effectively. First, FluentDNA is developed to tackle single full genome visualization and assembly tasks by representing nucleotides as colored pixels in a zooming interface. This enables users to identify features without relying on algorithmic annotation. FluentDNA also supports visualizing pairwise alignments of well-assembled whole genomes from chromosome to nucleotide resolution. Second, Pantograph is developed to tackle the problem of visualizing variation among large numbers of whole genome sequences. This uses a graph genome approach, which addresses many of the technical challenges of whole genome multiple sequence alignments by representing aligned sequences as nodes that can be shared by many individuals. Pantograph is capable of scaling to thousands of individuals and is applied to SARS and A. thaliana pangenomes. Alongside the development of these new genomics tools, comparative genomic research was undertaken on worldwide species of ash trees. I assembled 13 ash genomes and used FluentDNA to quality-check the results, discovering contaminants and a mitochondrial integration. I annotated protein coding genes in 28 ash assemblies and aligned their gene families. Using phylogenetic analysis, I identified gene duplications that likely occurred in an ancient whole genome duplication shared by all ash species. I examined the fate of these duplicated genes, showing that losses are concentrated in a subset of gene families more often than predicted by a null model simulation. I conclude that convergent evolution has occurred in the loss and retention of duplicated genes in different ash species. BBSRC BB/S004661/
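
    To illustrate the FluentDNA idea of representing nucleotides as coloured pixels, here is a small Python sketch; the colour palette, image layout and toy sequence are assumptions for illustration and do not reproduce FluentDNA's actual rendering or zooming interface.

```python
import numpy as np

# Hypothetical colour scheme: each nucleotide becomes one RGB pixel.
COLORS = {
    "A": (0, 255, 0), "T": (255, 0, 0),
    "G": (0, 0, 255), "C": (255, 255, 0),
    "N": (128, 128, 128),
}

def sequence_to_image(seq: str, width: int = 100) -> np.ndarray:
    """Lay the sequence out row by row as an RGB image array."""
    height = -(-len(seq) // width)              # ceiling division
    img = np.zeros((height, width, 3), dtype=np.uint8)
    for i, base in enumerate(seq.upper()):
        img[i // width, i % width] = COLORS.get(base, COLORS["N"])
    return img

img = sequence_to_image("ACGT" * 500)           # toy sequence, not real data
```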

    Observing the Cell in Its Native State: Imaging Subcellular Dynamics in Multicellular Organisms

    True physiological imaging of subcellular dynamics requires studying cells within their parent organisms, where all the environmental cues that drive gene expression, and hence the phenotypes that we actually observe, are present. A complete understanding also requires volumetric imaging of the cell and its surroundings at high spatiotemporal resolution, without inducing undue stress on either. We combined lattice light-sheet microscopy with adaptive optics to achieve, across large multicellular volumes, noninvasive aberration-free imaging of subcellular processes, including endocytosis, organelle remodeling during mitosis, and the migration of axons, immune cells, and metastatic cancer cells in vivo. The technology reveals the phenotypic diversity within cells across different organisms and developmental stages and may offer insights into how cells harness their intrinsic variability to adapt to different physiological environments.

    RootNav 2.0: Deep learning for automatic navigation of complex plant root architectures

    © The Author(s) 2019. Published by Oxford University Press. BACKGROUND: In recent years, quantitative analysis of root growth has become increasingly important as a way to explore the influence of abiotic stress, such as high temperature and drought, on a plant's ability to take up water and nutrients. Segmentation and feature extraction of plant roots from images presents a significant computer vision challenge. Root images contain complicated structures and vary in size, background, occlusion, clutter and lighting conditions. We present a new image analysis approach that provides fully automatic extraction of complex root system architectures from a range of plant species in varied imaging set-ups. Driven by modern deep-learning approaches, RootNav 2.0 replaces previously manual and semi-automatic feature extraction with an extremely deep multi-task convolutional neural network architecture. The network also locates seeds and first- and second-order root tips to drive a search algorithm that seeks optimal paths throughout the image, extracting accurate architectures without user interaction. RESULTS: We develop and train a novel deep network architecture to explicitly combine local pixel information with global scene information in order to accurately segment small root features across high-resolution images. The proposed method was evaluated on images of wheat (Triticum aestivum L.) from a seedling assay. Compared with semi-automatic analysis via the original RootNav tool, the proposed method demonstrated comparable accuracy, with a 10-fold increase in speed. The network was able to adapt to different plant species via transfer learning, offering similar accuracy when transferred to an Arabidopsis thaliana plate assay. A final instance of transfer learning, to images of Brassica napus from a hydroponic assay, still demonstrated good accuracy despite many fewer training images. CONCLUSIONS: We present RootNav 2.0, a new approach to root image analysis driven by a deep neural network. The tool can be adapted to new image domains with a reduced number of images, and offers substantial speed improvements over semi-automatic and manual approaches. The tool outputs root architectures in the widely accepted RSML standard, for which numerous analysis packages exist (http://rootsystemml.github.io/), as well as segmentation masks compatible with other automated measurement tools. The tool will provide researchers with the ability to analyse root systems at larger scales than ever before, at a time when large-scale genomic studies have made this more important than ever.
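
    To illustrate the general two-stage idea of segmenting root material and then searching for optimal paths, here is a small Python sketch using a placeholder probability map and scikit-image's least-cost path routine; the probability map, seed/tip coordinates and cost function are assumptions and are not RootNav 2.0's actual network or search algorithm.

```python
import numpy as np
from skimage.graph import route_through_array

# Hypothetical root-probability map as produced by a segmentation network;
# values in [0, 1], higher = more likely to be root material.
prob = np.random.rand(256, 256)

# Convert to a traversal cost: cheap to walk along likely root pixels.
cost = 1.0 - prob + 1e-3

# Illustrative seed and root-tip coordinates (RootNav locates these with
# dedicated network outputs; here they are simply chosen by hand).
seed, tip = (10, 128), (250, 100)

# A least-cost path through the probability map approximates the root's
# centreline between tip and seed.
path, total_cost = route_through_array(cost, seed, tip, fully_connected=True)
```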

    CartoCell, a high-content pipeline for 3D image analysis, unveils cell morphology patterns in epithelia

    Decades of research have not yet fully explained the mechanisms of epithelial self-organization and 3D packing. Single-cell analysis of large 3D epithelial libraries is crucial for understanding the assembly and function of whole tissues. Combining 3D epithelial imaging with advanced deep-learning segmentation methods is essential for enabling this high-content analysis. We introduce CartoCell, a deep-learning-based pipeline that uses small datasets to generate accurate labels for hundreds of whole 3D epithelial cysts. Our method detects the realistic morphology of epithelial cells and their contacts in the 3D structure of the tissue. CartoCell enables the quantification of geometric and packing features at the cellular level. Our single-cell cartography approach then maps the distribution of these features on 2D plots and 3D surface maps, revealing cell morphology patterns in epithelial cysts. Additionally, we show that CartoCell can be adapted to other types of epithelial tissues.
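
    To illustrate the kind of per-cell geometric quantification that follows a 3D segmentation, here is a small Python sketch using scikit-image region properties on a toy label volume; the labels and feature set are placeholders and do not reproduce CartoCell's actual pipeline or cartography step.

```python
import numpy as np
from skimage.measure import regionprops_table

# Hypothetical labelled 3D cyst: one integer label per cell, 0 = background
# (the deep-learning segmentation step that would produce this is not shown).
labels = np.zeros((64, 64, 64), dtype=np.int32)
labels[10:30, 10:30, 10:30] = 1
labels[30:50, 25:45, 20:40] = 2

# Per-cell geometric features of the kind a single-cell cartography analysis
# could map onto 2D plots or the tissue surface; for 3D label images, 'area'
# is the voxel count (i.e. cell volume).
props = regionprops_table(labels, properties=("label", "area", "centroid"))
print(props)
```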