Top-view Trajectories: A Pedestrian Dataset of Vehicle-Crowd Interaction from Controlled Experiments and Crowded Campus
Predicting the collective motion of a group of pedestrians (a crowd) under
vehicle influence is essential for the development of autonomous vehicles
to deal with mixed urban scenarios where interpersonal interaction and
vehicle-crowd interaction (VCI) are significant. This usually requires a model
that can describe individual pedestrian motion under the influence of nearby
pedestrians and the vehicle. This study proposes two pedestrian trajectory
datasets, the CITR dataset and the DUT dataset, so that pedestrian motion
models can be further calibrated and verified, especially when vehicle
influence on pedestrians plays an important role. The CITR dataset consists
of experimentally designed fundamental VCI scenarios (front, back, and
lateral VCIs) and provides a unique ID for each pedestrian, which makes it
suitable for exploring specific aspects of VCI. The DUT dataset offers two
ordinary, natural VCI scenarios on a crowded university campus and can be
used for more general-purpose VCI exploration. The trajectories of
pedestrians, as well as vehicles, were extracted from video frames recorded
by a down-facing camera mounted on a hovering drone. The final trajectories
of pedestrians and vehicles were refined by Kalman filters with a linear
point-mass model and a nonlinear bicycle model, respectively, which also
estimated the xy-velocity of pedestrians and the longitudinal speed and
orientation of vehicles. The statistics of the velocity magnitude
distribution demonstrate the validity of the proposed datasets. In total,
there are approximately 340 pedestrian trajectories in the CITR dataset and
1793 pedestrian trajectories in the DUT dataset. The datasets are available
on GitHub.

Comment: This paper was accepted into the 30th IEEE Intelligent Vehicles
Symposium. Personal use of this material is permitted. Permission from IEEE
must be obtained for all other uses.
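
The pedestrian refinement step is easy to illustrate. The following is a
minimal Python sketch of a Kalman filter with a linear point-mass
(constant-velocity) model, as described for the pedestrian tracks; the frame
rate and the noise covariances Q and R are illustrative assumptions, not
values from the paper. The vehicle side would instead use the nonlinear
bicycle model with an extended Kalman filter, which is omitted here.

    import numpy as np

    dt = 1.0 / 30.0  # assumed video frame rate; not stated in the abstract

    # State: [x, y, vx, vy]; measurement: [x, y].
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)  # constant-velocity transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # only position is observed
    Q = np.eye(4) * 1e-3  # process noise covariance (hypothetical value)
    R = np.eye(2) * 1e-2  # measurement noise covariance (hypothetical value)

    def kalman_refine(positions):
        """Filter raw (x, y) detections; return states [x, y, vx, vy]."""
        x = np.array([positions[0][0], positions[0][1], 0.0, 0.0])
        P = np.eye(4)
        states = []
        for z in positions:
            # Predict with the point-mass model.
            x = F @ x
            P = F @ P @ F.T + Q
            # Update with the detected position.
            innov = np.asarray(z, dtype=float) - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ innov
            P = (np.eye(4) - K @ H) @ P
            states.append(x.copy())
        return np.array(states)

Running kalman_refine over a track of raw detections yields both the refined
positions and the estimated xy-velocities in one pass, which matches the role
the abstract assigns to this filter.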
Blending Generative Adversarial Image Synthesis with Rendering for Computer Graphics
Conventional computer graphics pipelines require detailed 3D models, meshes,
textures, and rendering engines to generate 2D images from 3D scenes. These
processes are labor-intensive. We introduce Hybrid Neural Computer Graphics
(HNCG) as an alternative. The contribution is a novel image formation
strategy that reduces the 3D model and texture complexity of computer
graphics pipelines. Our main idea is straightforward: given a 3D scene,
render only the important objects of interest and use generative adversarial
processes to synthesize the rest of the image. To this end, we form 2D
semantic images from 3D scenery consisting of simple object models without
textures. These semantic images are then converted into photo-realistic RGB
images with a state-of-the-art conditional Generative Adversarial Network
(cGAN) based image synthesizer trained on real-world data. Meanwhile, objects
of interest are rendered using a physics-based graphics engine, which is
necessary because we want full control over their appearance. Finally, the
partially-rendered and cGAN-synthesized images are blended with a blending
GAN. Ablation and comparison studies show that the proposed framework
outperforms conventional rendering, with semantic retention and the Fréchet
Inception Distance (FID) as the main performance metrics.
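
For reference, FID, one of the two metrics named above, reduces to a
closed-form expression over Gaussian fits of Inception activations. The
sketch below assumes mu1/sigma1 and mu2/sigma2 are the activation means and
covariances of the real and synthesized image sets; extracting the
activations from an Inception network is omitted.

    import numpy as np
    from scipy.linalg import sqrtm

    def fid(mu1, sigma1, mu2, sigma2):
        # FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 * sqrtm(S1 @ S2))
        diff = mu1 - mu2
        covmean = sqrtm(sigma1 @ sigma2)
        if np.iscomplexobj(covmean):
            # sqrtm can return tiny imaginary parts from numerical noise.
            covmean = covmean.real
        return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

Lower FID indicates that the synthesized image distribution sits closer to
the real-image distribution in Inception feature space.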