863 research outputs found

    Learning in Real-Time Search: A Unifying Framework

    Real-time search methods are suited for tasks in which the agent interacts with an initially unknown environment in real time. In such simultaneous planning and learning problems, the agent has to select its actions in a limited amount of time, while sensing only a local part of the environment centered at the agent's current location. Real-time heuristic search agents select actions by running a limited lookahead search and evaluating the frontier states with a heuristic function. Over repeated experiences, they refine the heuristic values of states to avoid infinite loops and to converge to better solutions. The wide spread of such settings in autonomous software and hardware agents has led to an explosion of real-time search algorithms over the last two decades. Not only is a potential user confronted with a hodgepodge of algorithms, but he also faces the choice of the control parameters they use. In this paper we address both problems. The first contribution is the introduction of a simple three-parameter framework (named LRTS) which extracts the core ideas behind many existing algorithms. We then prove that the LRTA*, epsilon-LRTA*, SLA*, and gamma-Trap algorithms are special cases of our framework. Thus, they are unified and extended with additional features. Second, we prove completeness and convergence of any algorithm covered by the LRTS framework. Third, we prove several upper bounds relating the control parameters and solution quality. Finally, we analyze the influence of the three control parameters empirically in the realistic, scalable domains of real-time navigation on initially unknown maps from a commercial role-playing game, as well as routing in ad hoc sensor networks.
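    A minimal sketch of the LRTA*-style step that frameworks such as LRTS generalize: the agent performs a depth-1 lookahead, moves toward the neighbor minimizing edge cost plus heuristic, and raises the heuristic of its current state so repeated visits cannot loop forever. The function names and graph representation below are illustrative assumptions, not the paper's API.

```python
def lrta_star_step(state, neighbors, cost, h, goal):
    """One LRTA*-style step: depth-1 lookahead, heuristic update, greedy move.

    neighbors(state) -> iterable of successor states
    cost(s, s2)      -> non-negative edge cost between adjacent states
    h                -> dict mapping state -> current heuristic estimate
    """
    if state == goal:
        return state
    # Evaluate the frontier (here: immediate successors) with the heuristic.
    best_next = min(neighbors(state), key=lambda s2: cost(state, s2) + h[s2])
    best_f = cost(state, best_next) + h[best_next]
    # Learning rule: raise h(state) toward the best lookahead value,
    # which is what eventually rules out infinite loops.
    h[state] = max(h[state], best_f)
    return best_next
```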

    Accelerating Scientific Computing Models Using GPU Processing

    GPGPUs offer significant computational power for programmers to leverage. This computational power is especially useful for accelerating scientific models. This thesis analyzes the use of GPGPU programming to accelerate scientific computing models. First, the construction of hardware for the visualization and computation of scientific models is discussed; several factors in the construction of the machines focus on the performance impacts related to scientific modeling. Image processing is an embarrassingly parallel problem well suited for GPGPU acceleration. An image processing library was developed to show the process of recognizing embarrassingly parallel problems, and it serves as an example of converting a serial CPU implementation to a GPU-accelerated one. Genetic algorithms are biologically inspired heuristic search algorithms based on natural selection. The Tetris genetic algorithm with A* pathfinding illustrates memory-bound limitations that can prevent direct algorithm conversions from the CPU to the GPU. An analysis of an existing landscape evolution model, CHILD, for GPU acceleration shows that even when a model appears promising for GPU acceleration, its underlying data structures can significantly limit the ability to move to a GPU implementation. CHILD also offers an example of creating tighter MATLAB integration with existing models. Lastly, a parallel spatial sorting algorithm is discussed as a possible replacement for the spatial sorting algorithms currently implemented in models such as smoothed particle hydrodynamics.
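    As a conceptual illustration of why per-pixel image processing is embarrassingly parallel, the sketch below contrasts a serial loop with a form in which every pixel is computed independently; that independence is exactly what a GPU kernel maps one thread per pixel onto. NumPy is used here purely for illustration and is not the library used in the thesis.

```python
import numpy as np

def grayscale_serial(rgb):
    """Serial reference: one pixel at a time (what a single CPU thread does)."""
    h, w, _ = rgb.shape
    out = np.empty((h, w), dtype=np.float32)
    for y in range(h):
        for x in range(w):
            r, g, b = rgb[y, x]
            out[y, x] = 0.299 * r + 0.587 * g + 0.114 * b
    return out

def grayscale_parallel(rgb):
    """Each output pixel depends only on its own input pixel, so all pixels
    can be computed at once -- the pattern a GPU kernel exploits."""
    return (rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587
            + rgb[..., 2] * 0.114).astype(np.float32)
```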

    Delta Building Visualization - Agent Logic

    This Bachelor's thesis describes a continuation of the Delta Building Visualization project. First, the initial state of the source code and its refactoring are described. The main part of the thesis covers the implementation of new agent logic, which was introduced to make the agents' behavior more realistic and to improve the performance of the visualization. This was achieved by implementing two-layered pathfinding, modifying the precalculated paths, and grouping agents. Lastly, the testing of the performance and stability of the project is described.
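    A rough sketch of the agent-grouping idea mentioned above, under the assumption that agents starting in the same coarse cell and heading to the same destination can reuse one precalculated path; the data structures and the `find_path`/`assign_path` calls are hypothetical stand-ins, not the project's code.

```python
from collections import defaultdict

def group_agents(agents, cell_size=5.0):
    """Group agents by (coarse start cell, destination) so each group can
    reuse a single precalculated path instead of pathfinding per agent.

    agents: iterable of (x, y, destination_id) tuples.
    """
    groups = defaultdict(list)
    for x, y, dest in agents:
        cell = (int(x // cell_size), int(y // cell_size))
        groups[(cell, dest)].append((x, y, dest))
    return groups

# Usage sketch: one path request per group rather than per agent.
# for (cell, dest), members in group_agents(agents).items():
#     path = find_path(cell, dest)      # hypothetical pathfinding call
#     assign_path(members, path)        # hypothetical assignment call
```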

    Pathfinding in hierarchical representation of large realistic virtual terrains

    Pathfinding is critical to virtual simulation applications. One of the most prominent pathfinding challenges is the fast computation of path plans in large and realistic virtual terrain environments. To tackle this problem, this work explores a quadtree structure for the navigation map representation of large real-world virtual terrains. Using this hierarchical terrain representation, we detail how a global hierarchical pathfinding algorithm searches for a path in a coarse initial navigation map. Then, at execution time, the pathfinding algorithm refines regions of interest in this terrain representation in order to compute higher-quality paths in areas where a large number of navigation obstacles is found. The computational time of this hierarchical pathfinding algorithm is systematically measured on different hierarchical and non-hierarchical terrain representation structures instantiated in the modeling of a small real-world terrain scenario. Similar experiments are then carried out on a large real-world virtual terrain that is part of a real-life simulation system for the development of military tactical training exercises. The results show that the computational time required to generate pathfinding answers can be reduced when the proposed hierarchical pathfinding algorithm, together with an easy and reliable implementation of the quadtree-based navigation map representation of the large virtual terrain, is used in the development of simulation systems.
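    A minimal sketch of a quadtree navigation map with on-demand refinement, in the spirit of the approach described above: the coarse build stops at a large leaf size and conservatively marks mixed leaves as non-traversable, and a refinement pass subdivides mixed leaves that intersect a region of interest down to a finer resolution. It assumes a square, power-of-two boolean occupancy grid and is not the thesis's implementation.

```python
import numpy as np

class QuadNode:
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size   # top-left cell and side length
        self.children = None                     # None => leaf
        self.free = False                        # leaf is traversable only if fully free

def build(grid, x, y, size, min_size):
    """Quadtree over a boolean obstacle grid (True = obstacle), down to min_size.
    Mixed leaves at the resolution limit are conservatively non-traversable."""
    node = QuadNode(x, y, size)
    block = grid[y:y + size, x:x + size]
    if not block.any():                          # fully free region
        node.free = True
    elif block.all() or size <= min_size:        # fully blocked, or at the limit
        node.free = False
    else:
        half = size // 2
        node.children = [build(grid, x + dx, y + dy, half, min_size)
                         for dx in (0, half) for dy in (0, half)]
    return node

def refine(node, grid, roi, fine_size):
    """Subdivide mixed leaves intersecting the region of interest (x0, y0, x1, y1)
    down to fine_size, improving path quality only where obstacles are dense."""
    x0, y0, x1, y1 = roi
    if (node.x >= x1 or node.y >= y1 or
            node.x + node.size <= x0 or node.y + node.size <= y0):
        return                                   # no overlap with the ROI
    if node.children is not None:
        for child in node.children:
            refine(child, grid, roi, fine_size)
    elif not node.free and node.size > fine_size:
        block = grid[node.y:node.y + node.size, node.x:node.x + node.size]
        if block.any() and not block.all():      # truly mixed: worth refining
            half = node.size // 2
            node.children = [build(grid, node.x + dx, node.y + dy, half, fine_size)
                             for dx in (0, half) for dy in (0, half)]
```

    A coarse path would be planned over the free leaves first; `refine` is then applied only around that corridor before the path is recomputed locally.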

    Performance Evaluation of Pathfinding Algorithms

    Pathfinding is the search for an optimal path from a start location to a goal location in a given environment. In Artificial Intelligence, pathfinding algorithms are typically designed as a kind of graph search. These algorithms are applicable in a wide variety of applications such as computer games, robotics, networks, and navigation systems. The performance of these algorithms is affected by several factors, such as the problem size, path length, the number and distribution of obstacles, data structures, and heuristics. When new pathfinding algorithms are proposed in the literature, their performance is often investigated empirically (if at all). Proper experimental design and analysis are crucial to provide an informative and non-misleading evaluation. In this research, we survey many papers and classify them according to their methodology, experimental design, and analytical techniques. We identify some weaknesses in these areas that are all too frequently found in reported approaches. We first identify the pitfalls in pathfinding research and then provide solutions by constructing example problems; our research shows how spurious effects can arise and how control conditions help to avoid these pitfalls.
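    A short sketch of the kind of controlled, reproducible benchmark the paper argues for: every algorithm is run on the same problem instances (a paired design), problem size is varied one factor at a time, and repeated trials with fixed seeds keep the experiment reproducible. The `algorithms` and `generate_instance` callables are assumptions standing in for whatever solvers and map generators are under test.

```python
import random
import statistics
import time

def benchmark(algorithms, generate_instance, sizes, trials=30, seed=0):
    """Controlled comparison of pathfinding algorithms.

    algorithms: dict mapping name -> solve(instance) callable.
    generate_instance(size, rng): builds one benchmark map of the given size.
    Returns {(name, size): (mean_seconds, stdev_seconds)}.
    """
    results = {}
    for size in sizes:
        rng = random.Random(seed)                       # same instances for all algorithms
        instances = [generate_instance(size, rng) for _ in range(trials)]
        for name, solve in algorithms.items():
            times = []
            for inst in instances:
                t0 = time.perf_counter()
                solve(inst)
                times.append(time.perf_counter() - t0)
            results[(name, size)] = (statistics.mean(times), statistics.stdev(times))
    return results
```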

    Identification of metabolic pathways using pathfinding approaches: A systematic review

    Metabolic pathways have become increasingly available for various microorganisms. Such pathways have spurred the development of a wide array of computational tools, in particular, mathematical pathfinding approaches. This article can facilitate the understanding of computational analysis of metabolic pathways in genomics. Moreover, stoichiometric and pathfinding approaches in metabolic pathway analysis are discussed. Three major types of studies are elaborated: stoichiometric identification models, pathway-based graph analysis and pathfinding approaches in cellular metabolism. Furthermore, evaluation of the outcomes of the pathways with mathematical benchmarking metrics is provided. This review would lead to better comprehension of metabolism behaviors in living cells, in terms of computed pathfinding approaches. © The Author 2016
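    A toy illustration of the graph-based pathfinding these approaches apply to metabolic networks: reactions are treated as directed edges between metabolite nodes, and a shortest substrate-to-product route is found with breadth-first search. The metabolite names are placeholders, and real tools use far richer reaction models than this sketch assumes.

```python
from collections import deque

def shortest_metabolic_path(reactions, source, target):
    """BFS for a shortest substrate-to-product route in a metabolite graph
    built from (substrate, product) reaction pairs."""
    graph = {}
    for substrate, product in reactions:
        graph.setdefault(substrate, set()).add(product)
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

# Toy usage with placeholder metabolites:
# shortest_metabolic_path([("glucose", "G6P"), ("G6P", "F6P"), ("F6P", "pyruvate")],
#                         "glucose", "pyruvate")
```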

    ΔSCOPE: A New Method to Quantify 3D Biological Structures and Identify Differences in Zebrafish Forebrain Development

    Research in the life sciences has traditionally relied on the analysis of clear morphological phenotypes, which are often revealed using increasingly powerful microscopy techniques analyzed as maximum intensity projections (MIPs). However, as biology turns towards the analysis of more subtle phenotypes, MIPs and qualitative approaches are failing to adequately describe them. To address these limitations and quantitatively analyze the three-dimensional (3D) spatial relationships of biological structures, we developed the computational method and program called ∆SCOPE (Changes in Spatial Cylindrical Coordinate Orientation using PCA Examination). Our approach uses the fluorescent signal distribution within a 3D data set and reorients the fluorescent signal to a relative biological reference structure. This approach enables quantification and statistical analysis of spatial relationships and signal density in 3D multichannel signals that are positioned around a well-defined structure contained in a reference channel. We validated the application of ∆SCOPE by analyzing normal axon and glial cell guidance in the zebrafish forebrain and by quantifying the commissural phenotypes associated with abnormal Slit guidance cue expression in the forebrain. Even for commissural phenotypes that disrupt the reference structure, ∆SCOPE was able to detect subtle, previously uncharacterized changes in zebrafish forebrain midline-crossing axons and glia. This method has been developed as a user-friendly, open source program. We propose that ∆SCOPE is an innovative approach to advancing the state of image quantification in the field of high resolution microscopy, and that the techniques presented here are broadly applicable across the life sciences.
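    A minimal numerical sketch of the kind of reorientation the name ∆SCOPE describes: PCA aligns a 3D point cloud of signal coordinates to its principal axis, and the aligned points are then expressed in cylindrical coordinates around that axis. This is a generic illustration under those assumptions, not the program's actual pipeline.

```python
import numpy as np

def to_cylindrical_about_principal_axis(points):
    """points: (N, 3) array of signal coordinates.
    Returns (r, theta, z) with z measured along the first principal axis."""
    centered = points - points.mean(axis=0)
    # PCA via SVD: rows of vt are the principal directions, ordered by variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    aligned = centered @ vt.T            # rotate so axis 0 is the principal axis
    z = aligned[:, 0]                    # position along the reference axis
    r = np.hypot(aligned[:, 1], aligned[:, 2])      # radial distance from the axis
    theta = np.arctan2(aligned[:, 2], aligned[:, 1])  # angle around the axis
    return r, theta, z
```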

    Crowd simulation and visualization

    Large-scale simulation and visualization are essential topics in areas as different as sociology, physics, urbanism, training, and entertainment, among others. This kind of system requires the vast computational power and memory resources commonly available on High Performance Computing (HPC) platforms. Currently, the most powerful clusters have heterogeneous architectures with hundreds of thousands and even millions of cores, and industry trends suggest that exascale clusters will have billions. The technical challenges of the simulation and visualization process in the exascale era are intertwined with difficulties in other areas of research, including storage, communication, programming models, and hardware. For this reason, it is necessary to prototype, test, and deploy a variety of approaches to address the identified technical challenges and to evaluate the advantages and disadvantages of each proposed solution. The focus of this research is interactive large-scale crowd simulation and visualization, with the aim of exploiting the current HPC infrastructure to the fullest and being prepared to take advantage of the next generation. The project develops a new approach to scaling crowd simulation and visualization on heterogeneous computing clusters using a task-based technique. Its main characteristic is that it is hardware agnostic: it abstracts away the difficulties implied by the use of heterogeneous architectures, such as memory management, scheduling, communications, and synchronization, thereby facilitating development, maintenance, and scalability. With the goal of flexibility and of making the best possible use of computing resources, the project explores different configurations to connect the simulation with the visualization engine. This kind of system has an essential use in emergencies; therefore, urban scenes were implemented as realistically as possible so that users will be ready to face real events. Path planning for large-scale crowds is a challenging problem because of the inherent dynamism of the scenes and the vast search space. A new pathfinding algorithm was developed with a hierarchical approach that offers several advantages: it divides the search space, reducing the problem's complexity; it can return a partial path instead of waiting for the complete one, which allows a character to start moving while the rest is computed asynchronously; and it can reprocess only a part of the path if necessary, at different levels of abstraction. A case study is presented for a crowd simulation in urban scenarios. Geolocated data produced by mobile devices are used to predict individual and crowd behavior and to detect abnormal situations in the presence of specific events. The challenge of combining all these individual locations with a 3D rendering of the urban environment is also addressed. The data processing and simulation approach is computationally expensive and time-critical, so it relies on a hybrid Cloud-HPC architecture to produce an efficient solution. Within the project, new behavior models based on data analytics were developed, along with the infrastructure to query various data sources such as social networks, government agencies, and transport companies such as Uber. More and more geolocation data are becoming available, together with better computational resources that allow deeper analysis; this lays the foundations for improving current crowd simulation models. The use of simulations and their visualization makes it possible to observe and organize crowds in real time, and the analysis before, during, and after daily mass events can reduce the risks and associated logistics costs.
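    The partial-path idea described above can be sketched as follows: a coarse path is first planned over a small graph of regions, only the first leg is refined immediately so the character can start moving, and the remaining regions are handed back to be refined asynchronously. The region graph, heuristic, and `refine` callable are stand-in assumptions, not the project's implementation.

```python
import heapq
import itertools

def coarse_plan(region_graph, start, goal, h):
    """A* over a small graph of regions: region_graph[r] -> {neighbor: cost}."""
    counter = itertools.count()          # tie-breaker so heap entries stay comparable
    frontier = [(h(start), next(counter), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, _, g, region, path = heapq.heappop(frontier)
        if region == goal:
            return path
        for nxt, cost in region_graph[region].items():
            g2 = g + cost
            if g2 < best.get(nxt, float("inf")):
                best[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), next(counter), g2, nxt, path + [nxt]))
    return None

def plan_with_partial_refinement(region_graph, start, goal, h, refine):
    """Return a refined first leg immediately plus the remaining coarse regions;
    the caller refines those asynchronously while the character already moves."""
    regions = coarse_plan(region_graph, start, goal, h)
    if not regions:
        return None, []
    next_region = regions[1] if len(regions) > 1 else regions[0]
    first_leg = refine(regions[0], next_region)   # hypothetical fine-grained planner
    return first_leg, regions[1:]
```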