
    A review of the Discrete Element Method/Modelling (DEM) in agricultural engineering

    With the development of high-performance computing technology, the number of scientific publications on computational modelling with the Discrete Element Method (DEM) in agricultural engineering has risen over the past decades. Many granular materials, e.g. grains, fruits and soils, are processed in agricultural engineering, so a better understanding of these granular media through DEM is of great significance for the design and optimisation of tools and processes. In this review, the theory and background of DEM are introduced. Improved contact models proposed in the literature for accurately predicting the contact force between two interacting particles are compared. Accurate approximation of irregular particle shapes is of great importance in DEM simulations to model real particles in agricultural engineering, and new algorithms for approximating them, e.g. the overlapping multi-sphere approach and ellipsoids, are summarized. Some notable engineering applications of the improved numerical models developed and implemented in DEM are discussed. Finally, potential applications of DEM and suggested further work are addressed in the last section of this review.
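As a concrete illustration of the contact-force calculations such reviews compare, the sketch below implements the simplest classical contact law, a linear spring-dashpot normal force between two spheres. The stiffness and damping values are illustrative assumptions, not taken from any cited model:

```python
import numpy as np

def normal_contact_force(x1, x2, v1, v2, r1, r2, k=1.0e4, c=5.0):
    """Linear spring-dashpot normal contact force acting on particle 1.

    x1, x2 : centre positions (m);  v1, v2 : velocities (m/s)
    r1, r2 : radii (m)
    k, c   : spring stiffness (N/m) and damping (N*s/m), illustrative values
    """
    d = x2 - x1
    dist = np.linalg.norm(d)
    overlap = r1 + r2 - dist            # positive when spheres interpenetrate
    if overlap <= 0.0:
        return np.zeros(3)              # no contact, no force
    n = d / dist                        # unit normal from particle 1 to 2
    overlap_rate = -np.dot(v2 - v1, n)  # rate of change of the overlap
    fn = k * overlap + c * overlap_rate # spring + dashpot magnitude
    return -fn * n                      # repulsion pushes particle 1 away
```

In a full DEM step this force would be accumulated for every contacting pair before integrating the equations of motion.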

    The Effects of Spatio-Temporal Heterogeneities on the Emergence and Spread of Dengue Virus

    The dengue virus (DENV) remains a considerable global public health concern. The interactions between the virus, its mosquito vectors and the human host are complex and only partially understood. Dependencies of vector ecology on environmental attributes, such as temperature and rainfall, together with host population density, introduce strong spatio-temporal heterogeneities, resulting in irregular epidemic outbreaks and asynchronous oscillations in serotype prevalence. Human movements across different spatial scales have also been implicated as important drivers of dengue epidemiology across space and time, and further create the conditions for the geographic expansion of dengue into new habitats. Previously proposed transmission models often relied on strong, unrealistic assumptions regarding key epidemiological and ecological interactions to elucidate the effects of these spatio-temporal heterogeneities on the emergence, spread and persistence of dengue. Furthermore, the computational limitations of individual-based models have hindered the development of more detailed descriptions of the influence of vector ecology, environment and human mobility on dengue epidemiology. In order to address these shortcomings, the main aim of this thesis was to rigorously quantify the effects of ecological drivers on dengue epidemiology within a robust and computationally efficient framework. The individual-based model presented included an explicit spatial structure, vector and human movement, spatio-temporal heterogeneity in population densities, and climate effects. The flexibility of the framework allowed robust assessment of the implications of classical modelling assumptions on the basic reproduction number, R₀, demonstrating that traditional approaches grossly inflate R₀ estimates.
The model's more realistic meta-population formulation was then exploited to elucidate the effects of ecological heterogeneities on dengue incidence, showing that sufficient levels of community connectivity are required for the spread and persistence of dengue virus. By fitting the individual-based model to empirical data, the influence of climate on dengue was quantified, revealing the strong benefits that cross-sectional serological data could bring to more precise inference of the ecological drivers of arboviral epidemiology. Overall, the findings presented here demonstrate the wide epidemiological landscape which ecological drivers induce, cautioning against generalising interpretations from one particular setting across wider spatial contexts. These findings will prove invaluable for the assessment of vector-borne control strategies, such as mosquito elimination or vaccination deployment programs.
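For context on why classical assumptions can inflate R₀: traditional estimates often come from well-mixed compartmental theory, such as the Ross-Macdonald expression sketched below. This is a standard textbook formula, not the thesis's individual-based estimator, and the parameter values in the example are purely illustrative:

```python
import math

def ross_macdonald_R0(m, a, b, c, mu, r, tau):
    """Classical Ross-Macdonald basic reproduction number.

    m   : vectors per host
    a   : bites per mosquito per day
    b, c: mosquito-to-human / human-to-mosquito transmission prob. per bite
    mu  : mosquito mortality rate (1/day)
    r   : human recovery rate (1/day)
    tau : extrinsic incubation period (days)
    """
    # exp(-mu*tau) is the probability a mosquito survives incubation
    return (m * a**2 * b * c / (r * mu)) * math.exp(-mu * tau)
```

Because the formula assumes homogeneous mixing of hosts and vectors, spatially structured models like the one in the thesis typically yield lower R₀ for the same parameters.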

    A novel learning automata game with local feedback for parallel optimization of hydropower production

    Master's thesis, Information and Communication Technology IKT590, University of Agder, 2017.
    Hydropower optimization for multi-reservoir systems is classified as a combinatorial optimization problem with a large state-space that is particularly difficult to solve. There exists no gold standard for solving such problems, and many proposed algorithms are domain-specific. The literature describes several different techniques, in which linear programming approaches are extensively discussed but tend to succumb to the curse of dimensionality as the state vector dimensions increase. This thesis introduces LA LCS, a novel learning automata algorithm that utilizes a parallel form of local feedback. This enables each individual automaton to receive direct feedback, resulting in faster convergence. In addition, the algorithm is implemented using a parallel architecture on a CUDA-enabled GPU, along with exhaustive and random search. LA LCS has been verified through several scenarios. Experiments show that the algorithm is able to quickly adapt and find optimal production strategies for problems of variable complexity. The algorithm is empirically verified and shown to hold great promise for solving optimization problems, including hydropower production strategies.
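The thesis's LA LCS algorithm is not reproduced here, but the building block it extends, a learning automaton with a linear reward-inaction update, can be sketched as follows. The learning rate and the always-reward-action-1 environment are illustrative assumptions:

```python
import random

def lri_update(p, action, rewarded, lr=0.1):
    """Linear reward-inaction (L_RI) update: on reward, shift probability
    mass toward the chosen action; on penalty, change nothing."""
    if not rewarded:
        return list(p)
    return [pi + lr * (1.0 - pi) if i == action else pi * (1.0 - lr)
            for i, pi in enumerate(p)]

def run(n_steps=500, seed=1):
    """Two-action automaton in a toy environment that always rewards
    action 1; the action probabilities should converge toward [0, 1]."""
    rng = random.Random(seed)
    p = [0.5, 0.5]
    for _ in range(n_steps):
        action = 0 if rng.random() < p[0] else 1
        p = lri_update(p, action, rewarded=(action == 1))
    return p
```

In the thesis's parallel setting, each automaton in a team would receive its own local feedback signal rather than one shared environment response, which is what speeds up convergence.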

    CGAMES'2009


    Stellar Populations in STARFORGE: The Origin and Evolution of Star Clusters and Associations

    Most stars form in highly clustered environments within molecular clouds, but eventually disperse into the distributed stellar field population. Exactly how the stellar distribution evolves from the embedded stage into gas-free associations and (bound) clusters is poorly understood. We investigate the long-term evolution of stars formed in the STARFORGE simulation suite -- a set of radiation-magnetohydrodynamic simulations of star-forming turbulent clouds that include all key stellar feedback processes inherent to star formation. We use Nbody6++GPU to follow the evolution of the young stellar systems after gas removal. We use HDBSCAN to define stellar groups and analyze the stellar kinematics to identify the true bound star clusters. The conditions modeled by the simulations, i.e., global cloud surface densities below 0.15 g cm⁻², star formation efficiencies below 15%, and gas expulsion timescales shorter than a free-fall time, primarily produce expanding stellar associations and small clusters. The largest star clusters, which have ~1000 bound members, form in the densest and lowest velocity dispersion clouds, representing ~32% and ~39% of the stars in the simulations, respectively. The cloud's early dynamical state plays a significant role in setting the classical star formation efficiency versus bound fraction relation. All stellar groups follow a narrow mass-velocity dispersion power-law relation at 10 Myr, with a power-law index of 0.21. This correlation results in a distinct mass-size relationship for bound clusters. We also provide valuable constraints on the gas dispersal timescale during the star formation process and analyze the implications for the formation of bound systems.
    Comment: 20 pages, 10 figures, submitted to MNRA
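The mass-velocity dispersion relation quoted above is the kind of power law typically fit in log-log space. A minimal sketch with synthetic data follows; the index 0.21 comes from the abstract, while the amplitude and mass range are invented for illustration:

```python
import numpy as np

def fit_power_law(mass, sigma):
    """Least-squares fit of sigma = A * mass**alpha in log-log space."""
    logm, logs = np.log10(mass), np.log10(sigma)
    alpha, logA = np.polyfit(logm, logs, 1)  # slope = power-law index
    return 10.0**logA, alpha

# synthetic stellar groups obeying sigma = 0.5 * M**0.21 exactly
rng = np.random.default_rng(0)
mass = rng.uniform(10.0, 1000.0, 50)
sigma = 0.5 * mass**0.21
A, alpha = fit_power_law(mass, sigma)
```

With real group catalogues one would add measurement scatter and propagate uncertainties, but the log-log regression step is the same.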

    Contributions to Big Geospatial Data Rendering and Visualisations

    Current geographical information systems lack features and components that are commonly found in rendering and game engines. When combined with computer game technologies, a modern geographical information system capable of advanced rendering and data visualisation is achievable. We have investigated combining big geospatial data with computer game engines to create a modern geographical information system framework capable of visualising densely populated real-world scenes using advanced rendering algorithms. The pipeline imports raw geospatial data in the form of Ordnance Survey data provided by the UK government, LiDAR data provided by a private company, and the global open mapping project OpenStreetMap. The data sources are combined, with interpolated Ordnance Survey data filling in terrain wherever the high-resolution LiDAR coverage is missing. Once a high-resolution terrain data set with complete coverage is generated, sub-datasets can be extracted from the LiDAR using OSM boundary data as a perimeter. The OSM boundaries represent buildings or assets, so data such as building heights can be extracted and used to update the OSM database. Using a novel adjacency-matrix extraction technique, 3D model mesh objects can be generated from both LiDAR and OSM information. The generation of model mesh objects from OSM data utilises procedural content generation techniques, enabling the generation of GIS-based 3D real-world scenes. Although LiDAR and Ordnance Survey data are only available for the UK, restricting that generation to UK borders, using OSM alone the system is able to procedurally generate any place in the world covered by OSM.
In this research, to manage the large amounts of data, a novel scenegraph structure has been developed to spatially separate OSM data according to OS coordinates, splitting the UK into 1 km² tiles and categorising OSM assets such as buildings, highways and amenities. Once spatially organised and categorised as assets of importance, the novel scenegraph allows for data dispersal through an entire scene in real time. The 3D real-world scenes visualised within the runtime simulator can be manipulated in four main aspects:
• Viewing at any angle or location through the use of a 3D and 2D camera system.
• Modifying the effects or effect parameters applied to the 3D model mesh objects to visualise user-defined data, by use of our novel algorithms and unique lighting data-structure effect file with accompanying material interface.
• Procedurally generating animations which can be applied to the spatial parameters of objects, or to their visual properties.
• Applying the Indexed Array Shader Function and taking advantage of the novel big geospatial scenegraph structure to exploit better rendering techniques in the context of a modern geographical information system, which, to the best of our knowledge, has not been done before.
Combined with the novel scenegraph structure layout, the user can view and manipulate real-world procedurally generated worlds with additional user-generated content in a number of ways unseen in current geographical information system implementations. We evaluate multiple functionalities and aspects of the framework. We evaluate the performance of the system by stress testing, measuring frame rates for maps of multiple sizes, as well as evaluating the benefits of the novel scenegraph structure for categorising, separating, manoeuvring, and data dispersal. We also evaluate uniform scaling by n² of scenegraph nodes containing no model mesh data, procedurally generated model data, and user-generated model data.
The experiment compared runtime parameters and memory consumption. We have compared the technical features of the framework against those of related real-world commercial projects: Google Maps, OSM2World, OSM-3D, OSM-Buildings, OpenStreetMap, ArcGIS, Sustainability Assessment Visualisation and Enhancement (SAVE), and Autonomous Learning Agents for Decentralised Data and Information (ALLADIN). We conclude that, compared to related research, the framework produces data-sets relevant for visualising geospatial assets from combined real-world data-sets, capable of being used by a multitude of external game engines, applications, and geographical information systems. The ability to manipulate the production of these data-sets at pre-compile time, provided by the pre-processor, aids processing speeds for runtime simulation. The added benefit is that users can manipulate the spatial and visual parameters in a number of varying ways with minimal domain knowledge. The ability to create procedural animations attached to each of the spatial and visual shading parameters allows users to view and encode their own representations of scenes, which is unavailable in all of the products stated. Each of the alternative projects has similar features, but none allows full animation of all parameters of an asset, spatially, visually, or both. We also evaluated the framework on its implemented features, developing the needed algorithms and novelties as problems arose during development. An example is the algorithm for combining our multiple terrain data-sets (Ordnance Survey terrain data together with Light Detection and Ranging Digital Surface Model and Digital Terrain Model data) in a justifiable way to produce maps with no missing data values for further analysis and visualisation.
The majority of visualisations are rendered using an Indexed Array Shader Function effect file, structured as a novel design to encapsulate common rendering effects found in commercial computer games and apply them to the rendering of real-world assets in a modern geographical information system. Maps of various sizes, in dimensions, polygonal density, asset counts, and memory consumption, prove successful with respect to real-time rendering parameters, i.e. the visualisation of maps does not create a processing bottleneck. The visualised scenes allow users to view large, dense environments which include terrain models with procedural and user-generated buildings, highways, amenities, and boundaries. The novel scenegraph structure allows fast iteration and search from user-defined dynamic queries. Interaction with the framework is provided through a novel Interactive Visualisation Interface; utilising this interface, a user can apply procedurally generated animations to both the spatial and visual properties of any node or model mesh within the scene. We conclude that the framework has been a success. We have completed what we set out to develop and create: we have combined multiple data-sets to create improved terrain data-sets for further research and development, and we have created a framework which combines the real-world data of Ordnance Survey, LiDAR, and OpenStreetMap, with algorithms to create procedural assets of buildings, highways, terrain, amenities, model meshes, and boundaries for visualisation, and with features which allow users to search and manipulate a city's worth of data on a per-object basis or in user-defined combinations. The successful framework has been built with the cross-domain specialism needed for such a project.
We have combined the areas of computer games technology, engine and framework development, procedural generation techniques and algorithms, the use of real-world data-sets, geographical information system development, data parsing, big-data algorithmic reduction techniques, and visualisation using shader techniques.
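The scenegraph's spatial separation of OSM assets into 1 km tiles can be sketched as a simple bucketing step. The asset record format below is a hypothetical stand-in for the thesis's actual data structures:

```python
from collections import defaultdict

def tile_key(easting, northing, tile_size=1000):
    """Map an Ordnance Survey coordinate (metres) to its 1 km tile index."""
    return (int(easting // tile_size), int(northing // tile_size))

def build_scenegraph(assets):
    """Group categorised assets into per-tile, per-category buckets,
    mirroring the spatial separation described above (hypothetical asset
    format: dicts with 'easting', 'northing', 'category' keys)."""
    grid = defaultdict(lambda: defaultdict(list))
    for asset in assets:
        key = tile_key(asset['easting'], asset['northing'])
        grid[key][asset['category']].append(asset)
    return grid
```

Keying by tile first and category second means a runtime query can discard whole kilometre squares before inspecting individual assets, which is the property the thesis exploits for real-time data dispersal.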

    Procedural Generation and Rendering of Realistic, Navigable Forest Environments: An Open-Source Tool

    Simulation of forest environments has applications from entertainment and art creation to commercial and scientific modelling. Due to the unique features and lighting in forests, a forest-specific simulator is desirable; however, many current forest simulators are proprietary or highly tailored to a particular application. Here we review several areas of procedural generation and rendering specific to forest generation, and utilise this to create a generalised, open-source tool for generating and rendering interactive, realistic forest scenes. The system uses specialised L-systems to generate trees, which are distributed using an ecosystem simulation algorithm. The resulting scene is rendered using a deferred rendering pipeline, a Blinn-Phong lighting model with real-time leaf transparency, and post-processing lighting effects. The result is a system that achieves a balance between natural realism and visual appeal, suitable for tasks including training computer vision algorithms for autonomous robots and visual media generation.
    Comment: 14 pages, 11 figures. Submitted to Computer Graphics Forum (CGF). The application and supporting configuration files can be found at https://github.com/callumnewlands/ForestGenerato
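The L-system tree generation mentioned above rests on iterative string rewriting. A minimal sketch using Lindenmayer's textbook algae grammar follows (this is the standard introductory example, not the tool's actual tree grammar):

```python
def expand_lsystem(axiom, rules, iterations):
    """Iteratively rewrite the axiom string using the production rules;
    symbols without a rule are copied through unchanged."""
    s = axiom
    for _ in range(iterations):
        s = ''.join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's original algae system: A -> AB, B -> A
algae_rules = {'A': 'AB', 'B': 'A'}
```

A tree-generating L-system would use symbols for branch segments, turns, and bracketed push/pop state, with a turtle-graphics interpreter turning the final string into geometry.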

    Image Segmentation of Bacterial Cells in Biofilms

    Bacterial biofilms are three-dimensional cell communities that live embedded in a self-produced extracellular matrix. Due to the protective properties of this dense coexistence of microorganisms, single bacteria inside the communities are hard to eradicate by antibacterial agents and bacteriophages. This increased resilience gives rise to severe problems in medical and technological settings. To fight the bacterial cells, an in-depth understanding of the underlying mechanisms of biofilm formation and development is required. Due to spatio-temporal variances in environmental conditions inside a single biofilm, such as metabolite, nutrient and oxygen gradients, these mechanisms can only be investigated by probing single cells at different locations over time. Currently, the mechanistic information is primarily encoded in volumetric image data gathered with confocal fluorescence microscopy. To quantify features of single-cell behaviour, single objects need to be detected. This identification of objects inside biofilm image data is called segmentation and is a key step towards understanding the biological processes inside biofilms. In the first part of this work, a user-friendly computer program is presented which simplifies the analysis of bacterial biofilms. It provides a comprehensive set of tools to segment, analyse, and visualize fluorescence microscopy data without writing a single line of analysis code. This allows for faster feedback loops between experiment and analysis and gives fast insights into the gathered data. The single-cell segmentation accuracy of a recent segmentation algorithm is then discussed in detail. In this discussion, points for improvement are identified and a new, optimized segmentation approach is presented. The improved algorithm achieves superior segmentation accuracy on bacterial biofilms when compared to the current state-of-the-art algorithms. Finally, the possibility of deep learning-based end-to-end segmentation of biofilm data is investigated.
A method for the quick generation of training data is presented, and two single-cell segmentation approaches developed for eukaryotic cells are adapted to the segmentation of bacterial biofilms.
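As a baseline for the single-cell segmentation discussed above, the simplest classical pipeline thresholds the fluorescence volume and labels connected voxel components. The sketch below is this naive baseline, not the thesis's improved algorithm, which must additionally split touching cells:

```python
import numpy as np
from collections import deque

def label_components(mask):
    """Label 6-connected components in a 3-D boolean mask via flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue                      # already assigned to a component
        current += 1
        labels[seed] = current
        queue = deque([seed])
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                nb = (z + dz, y + dy, x + dx)
                if all(0 <= nb[i] < mask.shape[i] for i in range(3)) \
                        and mask[nb] and not labels[nb]:
                    labels[nb] = current
                    queue.append(nb)
    return labels, current

def segment_cells(volume, threshold):
    """Baseline segmentation: threshold the fluorescence volume, then
    label each connected component as one putative cell."""
    return label_components(volume > threshold)
```

The thesis's contribution lies precisely where this baseline fails: densely packed cells merge into one component under thresholding, so improved filters or learned models are needed to separate them.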

    Drone and sensor technology for sustainable weed management: a review

    Weeds are amongst the most impactful biotic factors in agriculture, causing important yield losses worldwide. Integrated Weed Management, coupled with the use of Unmanned Aerial Vehicles (drones), allows for Site-Specific Weed Management, a highly efficient methodology that is also beneficial to the environment. The identification of weed patches in a cultivated field can be achieved by combining image acquisition by drones with further processing by machine learning techniques. Specific algorithms can be trained to manage weed removal by Autonomous Weeding Robot systems via herbicide spraying or mechanical procedures. However, scientific and technical understanding of the specific goals and available technology is necessary to advance rapidly in this field. In this review, we provide an overview of precision weed control with a focus on the potential and practical use of the most advanced sensors available on the market. Much effort is still needed to fully understand weed population dynamics and weed-crop competition so as to implement this approach in real agricultural contexts.
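A common first step in drone-based weed patch identification is a vegetation index that separates plants from soil before any machine learning is applied. The Excess Green index sketched below is one standard choice; the threshold used to turn it into a vegetation mask is application-dependent and given here only as an assumption:

```python
import numpy as np

def excess_green(rgb):
    """Excess Green index, ExG = 2g - r - b, computed on channel-normalised
    values. High ExG indicates green vegetation; bare soil scores near zero."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=-1)
    total[total == 0] = 1.0  # avoid division by zero on black pixels
    r, g, b = (rgb[..., i] / total for i in range(3))
    return 2.0 * g - r - b

def vegetation_mask(rgb, threshold=0.1):
    """Boolean mask of likely-vegetation pixels (threshold is illustrative)."""
    return excess_green(rgb) > threshold
```

In a full pipeline the mask would feed a classifier that distinguishes crop rows from weed patches, which is where the trained algorithms mentioned above come in.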

    Development of a GPGPU accelerated tool to simulate advection-reaction-diffusion phenomena in 2D

    Computational models are powerful tools for the study of environmental systems, playing a fundamental role in several fields of research (hydrological sciences, biomathematics, atmospheric sciences, geosciences, among others). Most of these models require high computational capacity, especially when one considers high spatial resolution and application to large areas. In this context, the exponential increase in computational power brought by General Purpose Graphics Processing Units (GPGPU) has drawn the attention of scientists and engineers to the development of low-cost and high-performance parallel implementations of environmental models. In this research, we apply GPGPU computing to the development of a model that describes the physical processes of advection, reaction and diffusion. The work is presented in the form of three self-contained articles. In the first, we present a GPGPU implementation for the solution of the 2D groundwater flow equation in unconfined aquifers for heterogeneous and anisotropic media. We implement a finite-difference solution scheme based on the Crank-Nicolson method and show that the GPGPU-accelerated solution implemented using CUDA C/C++ (Compute Unified Device Architecture) greatly outperforms the corresponding serial solution implemented in C/C++. The results show that the accelerated GPGPU implementation is capable of delivering a speed-up of up to 56 times in the solution process using an ordinary office computer. In the second article, we study the application of a diffusive-logistic growth (DLG) model to the problem of forest growth and regeneration. The study focuses on vegetation belonging to preservation areas, such as riparian buffer zones.
The study was developed in two stages: (i) a methodology based on Artificial Neural Network Ensembles (ANNE) was applied to evaluate the width of the riparian buffer required to filter 90% of the residual nitrogen; (ii) the DLG model was calibrated and validated to generate a prognosis of forest regeneration in riparian protection bands, considering the minimum widths indicated by the ANNE. The solution was implemented in GPGPU and applied to simulate the forest regeneration process over forty years on the riparian protection bands along the Ligeiro river, in Brazil. The results from calibration and validation showed that the DLG model provides fairly accurate results for the modelling of forest regeneration. In the third manuscript, we present a GPGPU implementation of the solution of the advection-reaction-diffusion equation in 2D. The implementation is designed to be general and flexible, allowing the modeling of a wide range of processes, including those with heterogeneity and anisotropy of the medium. We show that simulations performed in GPGPU allow the use of mesh grids containing more than 20 million points, corresponding to an area of 18,000 km² at the standard 30 m resolution of Landsat images.
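The solver described in the third manuscript is GPGPU-based and, for the groundwater article, uses Crank-Nicolson. As a serial stand-in that shows the same five-point stencil structure, a single explicit forward-Euler diffusion step might look like the sketch below (periodic boundaries and the stability bound are illustrative assumptions, not the thesis's scheme):

```python
import numpy as np

def diffusion_step(u, D, dt, dx):
    """One explicit finite-difference step of du/dt = D * laplacian(u) on a
    2-D grid with periodic boundaries; stable for dt <= dx**2 / (4 * D)."""
    lap = (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0) +
           np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4.0 * u) / dx**2
    return u + dt * D * lap
```

On a GPU each grid point's stencil update is independent, which is exactly the data parallelism that lets meshes of tens of millions of points be advanced per time step; an implicit Crank-Nicolson scheme instead solves a sparse linear system each step, trading per-step cost for unconditional stability.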