
    Hierarchical path-finding for Navigation Meshes (HNA*)

    Path-finding can become an important bottleneck as both the size of the virtual environments and the number of agents navigating them increase. It is important to develop techniques that can be efficiently applied to any environment independently of its abstract representation. In this paper we present a hierarchical NavMesh representation to speed up path-finding. Hierarchical path-finding (HPA*) has been successfully applied to regular grids, but there is a need to extend the benefits of this method to polygonal navigation meshes. As opposed to regular grids, navigation meshes offer representations with higher accuracy regarding the underlying geometry, while containing a smaller number of cells. Therefore, we present a bottom-up method to create a hierarchical representation based on a multilevel k-way partitioning algorithm (MLkP), annotated with sub-paths that can be accessed online by our Hierarchical NavMesh Path-finding algorithm (HNA*). The algorithm benefits from searching in graphs with a much smaller number of cells, thus performing up to 7.7 times faster than traditional A* over the initial NavMesh. We present results of HNA* over a variety of scenarios and discuss the benefits of the algorithm together with areas for improvement.
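    The core idea lends itself to a short sketch: once the NavMesh cells are grouped into clusters, a query only has to search a small graph of inter-cluster "portal" cells whose intra-cluster connections carry precomputed sub-path costs. The Python below is a minimal illustration of that pattern under our own assumptions, not the paper's implementation: the partition is taken as given rather than computed with MLkP, the sub-path costs are computed with plain A* at build time, and all names (build_high_level_graph, hierarchical_query, cluster_of) are ours.

    import heapq
    from collections import defaultdict

    def astar(graph, start, goal, heuristic=lambda n: 0.0):
        """Plain A* over a dict {node: [(neighbor, cost), ...]}; returns (cost, path)."""
        frontier = [(heuristic(start), 0.0, start, [start])]
        best = {start: 0.0}
        while frontier:
            _, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return g, path
            for nxt, cost in graph.get(node, []):
                ng = g + cost
                if ng < best.get(nxt, float("inf")):
                    best[nxt] = ng
                    heapq.heappush(frontier, (ng + heuristic(nxt), ng, nxt, path + [nxt]))
        return float("inf"), []

    def build_high_level_graph(cell_graph, cluster_of):
        """Build the cluster-level graph: nodes are portal cells (cells with a
        neighbor in another cluster); intra-cluster edges carry precomputed
        sub-path costs, inter-cluster edges are the original NavMesh edges."""
        portals = [c for c, nbrs in cell_graph.items()
                   if any(cluster_of[n] != cluster_of[c] for n, _ in nbrs)]
        high = defaultdict(list)
        for a in portals:
            for b in portals:
                if a != b and cluster_of[a] == cluster_of[b]:
                    cost, _ = astar(cell_graph, a, b)   # sub-path; stored offline in HNA*
                    if cost < float("inf"):
                        high[a].append((b, cost))
            for n, cost in cell_graph[a]:               # edges crossing cluster borders
                if cluster_of[n] != cluster_of[a]:
                    high[a].append((n, cost))
        return dict(high), portals

    def hierarchical_query(cell_graph, high, portals, cluster_of, start, goal):
        """Connect start/goal to the portals of their clusters, then search the
        small high-level graph instead of the full NavMesh graph."""
        q = {k: list(v) for k, v in high.items()}
        for endpoint in (start, goal):
            q.setdefault(endpoint, [])
            for p in portals:
                if cluster_of[p] == cluster_of[endpoint]:
                    cost, _ = astar(cell_graph, endpoint, p)
                    if cost < float("inf"):
                        q[endpoint].append((p, cost))
                        q.setdefault(p, []).append((endpoint, cost))
        return astar(q, start, goal)

    The payoff mirrors the abstract's claim: hierarchical_query expands only portal cells plus the two endpoints, rather than every NavMesh cell a flat A* over the full graph might touch.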

    A framework for realistic 3D tele-immersion

    Meeting, socializing and conversing online with a group of people using teleconferencing systems is still quite different from the experience of meeting face to face. We are abruptly aware that we are online and that the people we are engaging with are not in close proximity, analogous to how talking on the telephone does not replicate the experience of talking in person. Several causes for these differences have been identified, and we propose inspiring and innovative solutions to these hurdles in an attempt to provide a more realistic, believable and engaging online conversational experience. We present the distributed and scalable framework REVERIE that provides a balanced mix of these solutions. Applications built on top of the REVERIE framework will be able to provide interactive, immersive, photo-realistic experiences to a multitude of users that feel much closer to meeting face to face than the experience offered by conventional teleconferencing systems.

    FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation

    FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in motio by vehicle(s) in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s). While a vehicle is in flight in the FlightGoggles virtual reality environment, exteroceptive sensors are rendered synthetically in real time while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The FlightGoggles framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test for selecting nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest. Comment: Initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. The revision includes a description of new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API.
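    The vehicle-in-the-loop idea described above reduces to a simple control loop: the real vehicle flies in an empty motion-capture volume, its tracked pose drives a camera in the photorealistic virtual scene, and the rendered image is what the autonomy stack perceives. The sketch below illustrates that loop only; every class and function in it (MocapClient, SceneRenderer, AutonomyStack, send_to_vehicle) is a placeholder of ours, not part of the FlightGoggles API.

    import time

    class MocapClient:
        """Placeholder for a motion-capture stream reporting the real vehicle's pose."""
        def latest_pose(self):
            return {"position": (0.0, 0.0, 1.0), "orientation": (1.0, 0.0, 0.0, 0.0)}

    class SceneRenderer:
        """Placeholder for a photorealistic renderer built from photogrammetry assets."""
        def render_camera(self, pose):
            return b""  # RGB image bytes for a camera at the given pose

    class AutonomyStack:
        """Placeholder perception/planning/control pipeline under test."""
        def step(self, image, proprioception):
            return {"thrust": 0.5, "body_rates": (0.0, 0.0, 0.0)}

    def send_to_vehicle(command):
        """Placeholder link (radio, ROS topic, ...) to the physical vehicle."""
        pass

    def vehicle_in_the_loop(steps=600, rate_hz=60.0):
        mocap, renderer, autonomy = MocapClient(), SceneRenderer(), AutonomyStack()
        for _ in range(steps):
            pose = mocap.latest_pose()            # real dynamics: measured, not modeled
            image = renderer.render_camera(pose)  # exteroceptive sensing: rendered in silico
            command = autonomy.step(image, proprioception=pose)
            send_to_vehicle(command)              # the physical vehicle closes the loop
            time.sleep(1.0 / rate_hz)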

    Populating 3D Cities: a True Challenge

    In this paper, we describe how we can model crowds in real time using dynamic meshes, static meshes and impostors. Techniques to introduce variety in crowds, including colors, shapes, textures, individual animation, individualized path-planning, and simple and complex accessories, are explained. We also present a hybrid architecture to handle the path planning of thousands of pedestrians in real time, while ensuring dynamic collision avoidance. Several behavioral aspects are presented, such as gaze control and group behaviour, as well as the specific technique of crowd patches.
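    The split between dynamic meshes, static meshes and impostors is essentially a level-of-detail policy. The sketch below shows one plausible way to bucket a crowd by camera distance so that each representation can be drawn in a single batch; the thresholds and names are illustrative assumptions of ours, not the authors' values.

    import math

    # Illustrative thresholds; a real system would also consider screen-space size,
    # animation importance, and per-frame rendering budgets.
    DYNAMIC_MESH_RADIUS = 15.0   # metres: full skeletal animation
    STATIC_MESH_RADIUS = 60.0    # metres: pre-baked keyframed meshes

    def choose_representation(pedestrian_pos, camera_pos):
        d = math.dist(pedestrian_pos, camera_pos)
        if d < DYNAMIC_MESH_RADIUS:
            return "dynamic_mesh"
        if d < STATIC_MESH_RADIUS:
            return "static_mesh"
        return "impostor"        # distant: a camera-facing textured quad

    def assign_lods(pedestrian_positions, camera_pos):
        """Bucket the crowd by representation so each class can be drawn in one batch."""
        buckets = {"dynamic_mesh": [], "static_mesh": [], "impostor": []}
        for pos in pedestrian_positions:
            buckets[choose_representation(pos, camera_pos)].append(pos)
        return buckets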

    Feeling crowded yet?: Crowd simulations for VR

    With advances in virtual reality technology and its multiple applications, the need for believable, immersive virtual environments is increasing. Even though current computer graphics methods allow us to develop highly realistic virtual worlds, the main element failing to enhance presence is autonomous groups of human inhabitants. A great number of crowd simulation techniques have emerged in the last decade, but critical details in the crowd's movements and appearance do not meet the standards necessary to convince VR participants that they are present in a real crowd. In this paper, we review recent advances in the creation of immersive virtual crowds and discuss areas that require further work to turn these simulations into more fully immersive and believable experiences.

    A GROWTH-BASED APPROACH TO THE AUTOMATIC GENERATION OF NAVIGATION MESHES

    Providing an understanding of space in game and simulation environments is one of the major challenges associated with moving artificially intelligent characters through these environments. The usage of some form of navigation mesh has become the standard method to provide a representation of the walkable space in game environments to characters moving around in that environment. There is currently no standardized best method of producing a navigation mesh; in fact, producing an optimal navigation mesh has been shown to be an NP-Hard problem. Current approaches are a patchwork of divergent methods, all of which have issues: they are slow to create the navigation meshes (e.g., the best-looking navigation meshes have traditionally been produced by hand, which is time consuming), generate substandard-quality navigation meshes (e.g., many of the automatic mesh production algorithms result in highly triangulated meshes that pose problems for character navigation), or yield meshes that contain gaps in areas that should be included in the mesh but are not (e.g., existing growth-based methods are unable to adapt to non-axis-aligned geometry and as such tend to provide a poor representation of the walkable space in complex environments). We introduce the Planar Adaptive Space Filling Volumes (PASFV) algorithm, the Volumetric Adaptive Space Filling Volumes (VASFV) algorithm, and the Iterative Wavefront Edge Expansion Cell Decomposition (Wavefront) algorithm. These algorithms provide growth-based spatial decompositions for navigation mesh generation in either 2D (PASFV) or 3D (VASFV). They generate quick (on-demand) decompositions (Wavefront), use quad/cube-based spatial structures to provide more regular regions in the navigation mesh instead of triangles, and offer full-coverage decompositions that avoid gaps in the navigation mesh by adapting to non-axis-aligned geometry. We have shown experimentally that the decompositions offered by PASFV and VASFV are superior in character navigation ability, number of regions, and coverage in comparison to the existing and commonly used techniques of Space Filling Volumes, Hertel-Melhorn decomposition, Delaunay Triangulation, and Automatic Path Node Generation. Finally, we show that our Wavefront algorithm retains the superior performance of the PASFV and VASFV algorithms while providing faster decompositions that contain fewer degenerate and near-degenerate regions. Unlike traditional navigation mesh generation techniques, the PASFV and VASFV algorithms have a real-time extension (Dynamic Adaptive Space Filling Volumes, DASFV) which allows the navigation mesh to adapt to changes in the geometry of the environment at runtime. In addition, it is possible to use a navigation mesh for applications above and beyond character path planning and navigation; these multiple uses help to increase the return on the investment in creating a navigation mesh for a game or simulation environment. In particular, we will show how to use a navigation mesh for the acceleration of collision detection.
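    To make the "growth-based" idea concrete, the toy sketch below grows axis-aligned rectangles from seed cells on a boolean occupancy grid until they hit obstacles or each other. It is a deliberately simplified stand-in under our own assumptions: PASFV/VASFV grow quads/cubes against actual environment geometry (including non-axis-aligned geometry) rather than a grid, and the function names here are ours.

    def grow_regions(grid, seeds):
        """grid[y][x] is True where walkable; seeds are (x, y) start cells.
        Returns axis-aligned rectangles (x0, y0, x1, y1) claimed by each seed."""
        h, w = len(grid), len(grid[0])
        claimed = [[False] * w for _ in range(h)]

        def strip_free(x0, y0, x1, y1):
            return all(grid[y][x] and not claimed[y][x]
                       for y in range(y0, y1 + 1) for x in range(x0, x1 + 1))

        regions = []
        for sx, sy in seeds:
            if not grid[sy][sx] or claimed[sy][sx]:
                continue
            x0, y0, x1, y1 = sx, sy, sx, sy
            grew = True
            while grew:  # push each side outward while the new strip stays free
                grew = False
                if x1 + 1 < w and strip_free(x1 + 1, y0, x1 + 1, y1):
                    x1 += 1; grew = True
                if x0 - 1 >= 0 and strip_free(x0 - 1, y0, x0 - 1, y1):
                    x0 -= 1; grew = True
                if y1 + 1 < h and strip_free(x0, y1 + 1, x1, y1 + 1):
                    y1 += 1; grew = True
                if y0 - 1 >= 0 and strip_free(x0, y0 - 1, x1, y0 - 1):
                    y0 -= 1; grew = True
            for y in range(y0, y1 + 1):
                for x in range(x0, x1 + 1):
                    claimed[y][x] = True
            regions.append((x0, y0, x1, y1))
        return regions

    # Example usage on an 8x5 grid with one blocked cell and two seeds.
    walkable = [[True] * 8 for _ in range(5)]
    walkable[2][4] = False
    print(grow_regions(walkable, [(0, 0), (6, 3)]))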

    Multi-Domain Real-Time Planning in Dynamic Environments

    This paper presents a real-time planning framework for multi-character navigation that enables the use of multiple heterogeneous problem domains of differing complexities for navigation in large, complex, dynamic virtual environments. The original navigation problem is decomposed into a set of smaller problems that are distributed across planning tasks working in these different domains. An anytime dynamic planner is used to efficiently compute and repair plans for each of these tasks, while using plans in one domain to focus and accelerate searches in more complex domains. We demonstrate the benefits of our framework by solving many challenging multi-agent scenarios in complex dynamic environments requiring space-time precision and explicit coordination between interacting agents, accounting for dynamic information at all stages of the decision-making process.
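    One way to picture "using plans in one domain to focus and accelerate searches in more complex domains" is a tunnel bias: states in the fine, expensive domain are penalized when they stray far from the plan already found in a coarse, cheap domain. The sketch below is our own illustration of that general idea, not a claim about the paper's exact mechanism; it substitutes plain A* for the paper's anytime dynamic planner, and map_to_fine and fine_neighbors are hypothetical helpers.

    import heapq

    def astar(neighbors, start, goal, heuristic, extra_cost=lambda s: 0.0):
        """A* where neighbors(s) yields (successor, cost); extra_cost biases the search."""
        frontier = [(heuristic(start), 0.0, start, [start])]
        best = {start: 0.0}
        while frontier:
            _, g, s, path = heapq.heappop(frontier)
            if s == goal:
                return path
            for n, c in neighbors(s):
                ng = g + c + extra_cost(n)
                if ng < best.get(n, float("inf")):
                    best[n] = ng
                    heapq.heappush(frontier, (ng + heuristic(n), ng, n, path + [n]))
        return None

    def focused_fine_plan(coarse_plan, map_to_fine, fine_neighbors, start, goal,
                          heuristic, tunnel_radius=2, penalty=10.0):
        """Penalize fine-domain states outside a 'tunnel' around the coarse plan,
        so the expensive search stays near the solution already found cheaply."""
        tunnel = {f for c in coarse_plan for f in map_to_fine(c, tunnel_radius)}
        return astar(fine_neighbors, start, goal, heuristic,
                     extra_cost=lambda s: 0.0 if s in tunnel else penalty)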