
    From perspective restitution to mixed reality : reconstruction of San Nicolò dei Carmelitani church in Palermo

    Digital surveying and representation tools are widely used for the virtual reconstruction of historic buildings that have vanished or have been transformed. When changes or destruction occurred after the first half of the XIX century, the reconstruction process can be based on photographic images, if available. Photographs provide effective support for the reconstruction of lost buildings, especially when drawings or earlier surveys are unavailable. 3D reconstruction from archive images has become a relevant topic in recent years. In 2013 Migliari et al. proposed a method that reconstructs 3D models from a single image using perspective restitution. In the same period, computer engineers developed digital tools that supported the inner and outer orientation of archive images. From 2017 to 2019, several studies focused on the reconstruction of lost buildings with traditional photogrammetric and SfM techniques. In this study, an unassisted process is proposed for the reconstruction of a lost building from a single image: perspective restitution, developed with digital representation tools, allowed the retrieval of the inner and outer orientation of archive photos and the reconstruction of the case study, a church that no longer exists. Outer orientation and scaling were provided by the lidar survey of those buildings that are still in place. Finally, a commercial motion-tracking software package was tested for the contextualization of the 3D reconstruction model.
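    Recovering the inner orientation of an archive photo, as the abstract describes, typically starts from vanishing-point geometry: for a pinhole camera, two vanishing points of orthogonal building edges constrain the focal length. A minimal sketch of that standard result (the vanishing-point coordinates are hypothetical, and the principal point is assumed at the image centre):

    ```python
    import numpy as np

    def focal_from_vanishing_points(v1, v2, pp):
        """Focal length (pixels) from two vanishing points of orthogonal directions.

        For a pinhole camera with principal point `pp`, orthogonality of the
        two 3D directions implies f**2 = -(v1 - pp) . (v2 - pp).
        """
        d = np.dot(np.asarray(v1, float) - pp, np.asarray(v2, float) - pp)
        if d >= 0:
            raise ValueError("vanishing points inconsistent with orthogonal directions")
        return float(np.sqrt(-d))

    # Hypothetical vanishing points measured on a 1920x1080 archive photo,
    # principal point assumed at the image centre.
    pp = np.array([960.0, 540.0])
    f = focal_from_vanishing_points((3000.0, 600.0), (-400.0, 500.0), pp)
    ```

    With the focal length fixed, the outer orientation can then be anchored to surviving structures, as the study does with its lidar survey.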

    Vehicle shape approximation from motion for visual traffic surveillance

    In this paper, a vehicle shape approximation method based on vehicle motion in a typical traffic image sequence is proposed. Instead of using the 2D image data directly, the intrinsic 3D data is estimated from a monocular image sequence. Given the binary vehicle mask and the camera parameters, the vehicle shape is estimated by a four-stage shape approximation method. These stages are feature point extraction, feature point motion estimation between two consecutive frames, feature point height estimation from the motion vectors, and 3D shape estimation based on the feature point heights. We have tested our method on real-world traffic image sequences, and the estimated vehicle height profiles and dimensions are reasonably close to the actual dimensions.
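    The height-from-motion stage can be illustrated with a simplified model (not the paper's exact formulation): assuming a planar road, a vehicle translating parallel to it, and a camera at known height H, projecting a point at height h through the camera centre onto the ground magnifies its motion by H / (H - h), so the height follows from the ratio of true to apparent ground speed.

    ```python
    def height_from_motion(v_ground, v_apparent, cam_height):
        """Estimate a feature point's height above the road (simplified model).

        Assumes a planar road and a vehicle translating parallel to it.
        A point at height h projected through the camera centre (height H)
        onto the ground moves with apparent speed v_apparent = v * H / (H - h),
        hence h = H * (1 - v_ground / v_apparent).  Both speeds are
        ground-plane speeds, e.g. after a homography mapping to the road.
        """
        if v_apparent <= 0 or cam_height <= 0:
            raise ValueError("speeds and camera height must be positive")
        return cam_height * (1.0 - v_ground / v_apparent)

    # Hypothetical numbers: camera 8 m above the road, vehicle at 10 m/s,
    # a roof feature whose ground-mapped motion is 12.3 m/s -> roughly 1.5 m.
    h = height_from_motion(10.0, 12.3, 8.0)
    ```

    Points on the road surface (h = 0) move at exactly the vehicle's ground speed, which is what makes the parallax of elevated points measurable.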

    TSTTC: A Large-Scale Dataset for Time-to-Contact Estimation in Driving Scenarios

    Time-to-Contact (TTC) estimation is a critical task for assessing collision risk and is widely used in various driver assistance and autonomous driving systems. The past few decades have witnessed the development of related theories and algorithms. The prevalent learning-based methods call for a large-scale TTC dataset covering real-world scenarios. In this work, we present a large-scale object-oriented TTC dataset for driving scenes, to promote TTC estimation with a monocular camera. To collect valuable samples and keep data with different TTC values relatively balanced, we went through thousands of hours of driving data and selected over 200K sequences with a preset data distribution. To augment the number of small-TTC cases, we also generated clips using recent neural rendering methods. Additionally, we provide several simple yet effective TTC estimation baselines and evaluate them extensively on the proposed dataset to demonstrate their effectiveness. The dataset is publicly available at https://open-dataset.tusen.ai/TSTTC.
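    The classic monocular TTC baseline needs no depth at all: under a constant-velocity model, the projected width of an object is proportional to 1/Z, so the scale ratio s between two frames equals Z_prev / Z_curr and TTC = dt / (s - 1). A minimal sketch with hypothetical bounding-box widths:

    ```python
    def ttc_from_scale(w_prev, w_curr, dt):
        """Monocular TTC from bounding-box scale change (constant velocity).

        Projected width w is proportional to 1/Z, so the scale ratio
        s = w_curr / w_prev equals Z_prev / Z_curr, giving TTC = dt / (s - 1).
        Returns +inf for stationary or receding objects (s <= 1).
        """
        s = w_curr / w_prev
        if s <= 1.0:
            return float("inf")
        return dt / (s - 1.0)

    # Hypothetical boxes: a lead vehicle's box grows from 100 px to 104 px
    # over 0.1 s -> TTC = 0.1 / 0.04 = 2.5 s.
    ttc = ttc_from_scale(100.0, 104.0, 0.1)
    ```

    Learning-based methods aim to beat exactly this kind of baseline, which is sensitive to box jitter when the scale change per frame is small.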

    The Interstate-24 3D Dataset: a new benchmark for 3D multi-camera vehicle tracking

    This work presents a novel video dataset recorded from overlapping highway traffic cameras along an urban interstate, enabling multi-camera 3D object tracking in a traffic monitoring context. Data is released from 3 scenes containing video from at least 16 cameras each, totaling 57 minutes in length. 877,000 3D bounding boxes and corresponding object tracklets are fully and accurately annotated for each camera field of view and are combined into a spatially and temporally continuous set of vehicle trajectories for each scene. Lastly, existing algorithms are combined to benchmark a number of 3D multi-camera tracking pipelines on the dataset; the results indicate that the dataset is challenging due to the difficulty of matching objects traveling at high speed across cameras and heavy object occlusion, potentially lasting hundreds of frames, during congested traffic. This work aims to enable the development of accurate and automatic vehicle trajectory extraction algorithms, which will play a vital role in understanding the impacts of autonomous vehicle technologies on the safety and efficiency of traffic.

    Binocular interactions underlying the classic optomotor responses of flying flies.

    In response to imposed course deviations, the optomotor reactions of animals reduce motion blur and facilitate the maintenance of stable body posture. In flies, many anatomical and electrophysiological studies suggest that disparate motion cues stimulating the left and right eyes are not processed in isolation but rather are integrated in the brain to produce a cohesive panoramic percept. To investigate the strength of such inter-ocular interactions and their role in compensatory sensory-motor transformations, we utilize a virtual reality flight simulator to record wing and head optomotor reactions of tethered flying flies in response to imposed binocular rotation and monocular front-to-back and back-to-front motion. Within a narrow range of stimulus parameters that generates large, contrast-insensitive optomotor responses to binocular rotation, we find that responses to monocular front-to-back motion are larger than those to panoramic rotation, but are contrast sensitive. Conversely, responses to monocular back-to-front motion are slower than those to rotation and peak at the lowest tested contrast. Together, our results suggest that optomotor responses to binocular rotation result from the influence of non-additive contralateral inhibitory as well as excitatory circuit interactions that serve to confer contrast insensitivity to flight behaviors influenced by rotatory optic flow.

    Development of a Low-Cost 6 DOF Brick Tracking System for Use in Advanced Gas-Cooled Reactor Model Tests

    This paper presents the design of a low-cost, compact instrumentation system to enable six-degree-of-freedom motion tracking of acetal bricks within an experimental model of a cracked Advanced Gas-Cooled Reactor (AGR) core. The system comprises optical and inertial sensors and capitalises on the advantages offered by data fusion techniques. The optical system tracks LED indicators, allowing a brick to be accurately located even in cluttered images. The LED positions are identified using a geometrical correspondence algorithm, which was optimised to be computationally efficient for shallow movements, and complex camera distortions are corrected using a versatile Incident Ray-Tracking calibration. Then, a Perspective-Ray-based Scaled Orthographic projection with Iteration (PRSOI) algorithm is applied to each LED position to determine the six-degree-of-freedom pose. Results from experiments show that the system achieves a low Root Mean Squared (RMS) error of 0.2296 mm in x, 0.3943 mm in y, and 0.0703 mm in z. Although providing an accurate measurement solution, the optical tracking system has a low sample rate and requires the line of sight to be maintained throughout each test. To increase the robustness, accuracy, and sampling frequency of the system, the optical system can be augmented with an Inertial Measurement Unit (IMU). This paper presents a method to integrate the optical system and IMU data by accurately timestamping data from each set of sensors and aligning the two coordinate axes. Once miniaturised, the developed system will be used to track smaller components within the AGR models that cannot be tracked with current instrumentation, expanding reactor core modelling capabilities.
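    The optical/IMU fusion described above can be illustrated with a complementary filter, one of the simplest data-fusion schemes of this kind: high-rate inertial integration corrected by each slower, timestamp-aligned optical fix. A hedged single-axis sketch; the gain, rates, and data are illustrative and not taken from the paper.

    ```python
    def complementary_filter(gyro_rates, optical_angles, dt, alpha=0.98):
        """Fuse high-rate gyro rates with sparse optical angle fixes (one axis).

        gyro_rates: angular rate (rad/s) at each IMU step of duration dt.
        optical_angles: dict {imu_step_index: absolute angle from the optical
        tracker}, assumed already timestamp-aligned to IMU steps.
        The gyro propagates the angle between fixes; each optical fix pulls
        the estimate back with weight (1 - alpha), bounding gyro drift.
        """
        angle = 0.0
        history = []
        for k, rate in enumerate(gyro_rates):
            angle += rate * dt                       # inertial propagation
            if k in optical_angles:                  # sparse optical correction
                angle = alpha * angle + (1 - alpha) * optical_angles[k]
            history.append(angle)
        return history

    # Hypothetical data: constant 0.1 rad/s rotation sampled at 100 Hz,
    # with an optical fix every 10 steps reporting the true angle.
    true_angle = lambda k: 0.1 * (k + 1) * 0.01
    est = complementary_filter([0.1] * 100,
                               {k: true_angle(k) for k in range(9, 100, 10)},
                               dt=0.01)
    ```

    In practice the optical corrections also restore absolute scale and orientation, which pure inertial integration cannot provide.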