
    LCrowdV: Generating Labeled Videos for Simulation-based Crowd Behavior Learning

    We present a novel procedural framework to generate an arbitrary number of labeled crowd videos (LCrowdV). The resulting crowd video datasets are used to design accurate algorithms or training models for crowded scene understanding. Our overall approach is composed of two components: a procedural simulation framework for generating crowd movements and behaviors, and a procedural rendering framework to generate different videos or images. Each video or image is automatically labeled based on the environment, number of pedestrians, density, behavior, flow, lighting conditions, viewpoint, noise, etc. Furthermore, we can increase the realism by combining synthetically-generated behaviors with real-world background videos. We demonstrate the benefits of LCrowdV over prior labeled crowd datasets by improving the accuracy of pedestrian detection and crowd behavior classification algorithms. LCrowdV will be released on the WWW.

    The matrix revisited: A critical assessment of virtual reality technologies for modeling, simulation, and training

    A convergence of affordable hardware, current events, and decades of research has advanced virtual reality (VR) from the research lab into the commercial marketplace. Since its inception in the 1960s, and over the next three decades, the technology was portrayed as a rarely used, high-end novelty for special applications. Despite the high cost, applications have expanded into defense, education, manufacturing, and medicine. The promise of VR for entertainment arose in the early 1990s, and by 2016 several consumer VR platforms were released. With VR now accessible in the home and the isolationist lifestyle adopted due to the COVID-19 global pandemic, VR is now viewed as a potential tool to enhance remote education. Drawing upon over 17 years of experience across numerous VR applications, this dissertation examines the optimal use of VR technologies in the areas of visualization, simulation, training, education, art, and entertainment. It will be demonstrated that VR is well suited for education and training applications, with modest advantages in simulation. Using this context, the case is made that VR can play a pivotal role in the future of education and training in a globally connected world.

    A mixed reality telepresence system for collaborative space operation

    This paper presents a Mixed Reality system that results from the integration of a telepresence system and an application to improve collaborative space exploration. The system combines free viewpoint video with immersive projection technology to support non-verbal communication, including eye gaze, inter-personal distance, and facial expression. Importantly, these can be interpreted together as people move around the simulation, maintaining natural social distance. The application is a simulation of Mars, within which the collaborators must come to agreement over, for example, where the Rover should land and go. The first contribution is the creation of a Mixed Reality system supporting contextualization of non-verbal communication. Two technological contributions are prototyping a technique to subtract a person from a background that may contain physical objects and/or moving images, and a lightweight texturing method for multi-view rendering which provides balance in terms of visual and temporal quality. A practical contribution is the demonstration of pragmatic approaches to sharing space between display systems of distinct levels of immersion. A research tool contribution is a system that allows comparison of conventional authored and video-based reconstructed avatars, within an environment that encourages exploration and social interaction. Aspects of system quality, including the communication of facial expression and end-to-end latency, are reported.
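    The person-subtraction step mentioned above can be illustrated in its simplest form. This sketch assumes a static background frame, whereas the paper's technique also handles backgrounds containing physical objects and moving images; the function name and threshold value are illustrative, not taken from the paper.

```python
import numpy as np

def subtract_person(frame, background, threshold=30):
    """Naive per-pixel background subtraction: mark a pixel as
    foreground (person) when any color channel differs from the
    stored background by more than `threshold`.
    Returns a boolean foreground mask of shape (H, W)."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff.max(axis=-1) > threshold
```

    In practice, a mask like this would be refined morphologically before being used to composite the extracted person into the shared Mars environment.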

    Physically Interacting With Four Dimensions

    Thesis (Ph.D.) - Indiana University, Computer Sciences, 2009
    People have long been fascinated with understanding the fourth dimension. While making pictures of 4D objects by projecting them to 3D can help reveal basic geometric features, 3D graphics images by themselves are of limited value. For example, just as 2D shadows of 3D curves may have lines crossing one another in the shadow, 3D graphics projections of smooth 4D topological surfaces can be interrupted where one surface intersects another. The research presented here creates physically realistic models for simple interactions with objects and materials in a virtual 4D world. We provide methods for the construction, multimodal exploration, and interactive manipulation of a wide variety of 4D objects. One basic achievement of this research is to exploit the free motion of a computer-based haptic probe to support a continuous motion that follows the local continuity of a 4D surface, allowing collision-free exploration in the 3D projection. In 3D, this interactive probe follows the full local continuity of the surface as though we were in fact physically touching the actual static 4D object. Our next contribution is to support dynamic 4D objects that can move, deform, and collide with other objects as well as with themselves. By combining graphics, haptics, and collision-sensing physical modeling, we can thus enhance our 4D visualization experience. Since we cannot actually place interaction devices in 4D, we develop fluid methods for interacting with a 4D object in its 3D shadow image using adapted reduced-dimension 3D tools for manipulating objects embedded in 4D. By physically modeling the correct properties of 4D surfaces, their bending forces, and their collisions in the 3D interactive or haptic controller interface, we can support full-featured physical exploration of 4D mathematical objects in a manner that is otherwise far beyond the real-world experience accessible to human beings.
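    The 3D-shadow idea underlying this work can be sketched as a one-line 4D-to-3D perspective projection, the direct analogue of casting a 2D shadow of a 3D curve. The function name and viewpoint distance `d` below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def project_4d_to_3d(points, d=4.0):
    """Perspective-project Nx4 points to Nx3 by dividing the x, y, z
    coordinates by the distance from a viewpoint on the w-axis.
    Assumes every point satisfies w < d (in front of the viewpoint)."""
    points = np.asarray(points, dtype=float)
    scale = d / (d - points[:, 3])        # points farther along w shrink
    return points[:, :3] * scale[:, None]

# A tesseract vertex at (1, 1, 1, 1) is scaled by d / (d - 1).
shadow = project_4d_to_3d([[1.0, 1.0, 1.0, 1.0]])
```

    Distinct 4D points can land on the same 3D shadow point, which is exactly why the thesis tracks the local continuity of the original 4D surface rather than relying on the projection alone.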

    A Testing and Experimenting Environment for Microscopic Traffic Simulation Utilizing Virtual Reality and Augmented Reality

    Microscopic traffic simulation (MTS) is the emulation of real-world traffic movements in a virtual environment with various traffic entities. Typically, the movements of the vehicles in MTS follow some predefined algorithms, e.g., car-following models, lane changing models, etc. Moreover, existing MTS models only provide a limited capability of two- and/or three-dimensional displays that often restrict the user’s viewpoint to a flat screen. Their downscaled scenes neither provide a realistic representation of the environment nor allow different users to simultaneously experience or interact with the simulation model from different perspectives. These limitations neither allow the traffic engineers to effectively disseminate their ideas to various stakeholders of different backgrounds nor allow the analysts to have realistic data about the vehicle or pedestrian movements. This dissertation intends to alleviate those issues by creating a framework and a prototype for a testing environment where MTS can have inputs from user-controlled vehicles and pedestrians to improve their traffic entity movement algorithms as well as have an immersive M3 (multi-mode, multi-perspective, multi-user) visualization of the simulation using Virtual Reality (VR) and Augmented Reality (AR) technologies. VR environments are created using highly realistic 3D models and environments. With modern game engines and hardware available on the market, these VR applications can provide a highly realistic and immersive experience for a user. Different experiments performed by real users in this study prove that utilizing VR technology for different traffic related experiments generated much more favorable results than the traditional displays. Moreover, using AR technologies for pedestrian studies is a novel approach that allows a user to walk in the real world and the simulation world at a one-to-one scale. This capability opens a whole new avenue of user experiment possibilities. 
On top of that, the in-environment communication chat system will allow researchers to perform different Advanced Driver Assistance System (ADAS) studies without ever needing to leave the simulation environment. Last but not least, the distributed nature of the framework enables users to participate from different geographic locations with their choice of display device (desktop, smartphone, VR, or AR). The prototype developed for this dissertation is readily available on a test webpage, and a user can easily download the prototype application without needing to install anything. The user also can run the remote MTS server and then connect their client application to the server.
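    The predefined car-following algorithms mentioned above are typified by the Intelligent Driver Model (IDM), a standard textbook model of the kind MTS engines implement. The sketch below uses generic illustrative parameter values, not the model or calibration used in this dissertation.

```python
import math

def idm_acceleration(v, gap, dv,
                     v0=30.0, T=1.5, a=1.0, b=2.0, s0=2.0):
    """Intelligent Driver Model acceleration (m/s^2).
    v:   own speed (m/s)
    gap: bumper-to-bumper gap to the leader (m), assumed > 0
    dv:  approach rate, own speed minus leader speed (m/s)
    v0 desired speed, T time headway, a max acceleration,
    b comfortable deceleration, s0 minimum standstill gap."""
    s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a * b)))
    return a * (1 - (v / v0) ** 4 - (s_star / gap) ** 2)
```

    A vehicle closing fast on a nearby leader gets a strong deceleration, while one on an open road accelerates toward its desired speed; replacing such a rule with trajectories from user-controlled vehicles is precisely what the proposed testing environment enables.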

    Interactive Visual Analytics for Large-scale Particle Simulations

    Particle-based model simulations are widely used in scientific visualization. In cosmology, particles are used to simulate the evolution of dark matter in the universe. Clusters of particles (that have special statistical properties) are called halos. From a visualization point of view, halos are clusters of particles, each having a position, mass, and velocity in three-dimensional space, and they can be represented as point clouds that contain various structures of geometric interest such as filaments, membranes, satellites of points, clusters, and clusters of clusters. The thesis investigates methods for interacting with large-scale datasets represented as point clouds. The work mostly aims at the interactive visualization of cosmological simulation based on large particle systems. The study consists of three components: a) two human-factors experiments into the perceptual factors that make it possible to see features in point clouds; b) the design and implementation of a user interface making it possible to rapidly navigate through and visualize features in the point cloud; and c) software development and integration to support visualization.
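    Halos of the kind described above are commonly extracted from particle positions with a friends-of-friends grouping: particles closer than a linking length belong to the same halo. The brute-force O(n²) sketch below is illustrative only (production pipelines use spatial trees), and the function name and linking length are assumptions, not details from the thesis.

```python
import numpy as np

def friends_of_friends(points, linking_length):
    """Label each particle with a halo id: two particles share a halo
    if they are connected by a chain of neighbors, each pair closer
    than `linking_length`. Returns an integer label per particle."""
    pts = np.asarray(points, dtype=float)
    labels = -np.ones(len(pts), dtype=int)   # -1 means unassigned
    halo = 0
    for i in range(len(pts)):
        if labels[i] != -1:
            continue
        labels[i] = halo
        stack = [i]
        while stack:                          # flood-fill one halo
            j = stack.pop()
            dist = np.linalg.norm(pts - pts[j], axis=1)
            for k in np.where((dist < linking_length) & (labels == -1))[0]:
                labels[k] = halo
                stack.append(k)
        halo += 1
    return labels
```

    Coloring a rendered point cloud by these labels is one simple way to make filaments and clusters of clusters perceptually separable, which connects to the human-factors experiments described above.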

    Travails in the third dimension: a critical evaluation of three-dimensional geographical visualization

    Several broad questions are posed about the role of the third dimension in data visualization. First, how far have we come in developing effective 3D displays for the analysis of spatial and other data? Second, when is it appropriate to use 3D techniques in visualising data, which 3D techniques are most appropriate for particular applications, and when might 2D approaches be more appropriate? (Indeed, is 3D always better than 2D?) Third, what can we learn from other communities in which 3D graphics and visualization technologies have been developed? And finally, what are the key R&D challenges in making effective use of the third dimension for visualising data across the spatial and related sciences? Answers to these questions will be based on several lines of evidence: the extensive literature on data and information visualization; visual perception research; computer games technology; and the author’s experiments with a prototype 3D data visualization system.

    Videogames: the new GIS?

    Videogames and GIS have more in common than might be expected. Indeed, it is suggested that videogame technology may not only be considered as a kind of GIS, but that in several important respects its world modelling capabilities out-perform those of most GIS. This chapter examines some of the key differences between videogames and GIS, explores a number of perhaps-surprising similarities between their technologies, and considers which ideas might profitably be borrowed from videogames to improve GIS functionality and usability.