A high speed Tri-Vision system for automotive applications
Purpose: Cameras are an excellent way of non-invasively monitoring the interior and exterior of vehicles. In particular, high-speed stereovision and multivision systems are important for transport applications such as driver eye tracking or collision avoidance. This paper addresses the synchronisation problem which arises when multivision camera systems are used to capture the high-speed motion common in such applications.
Methods: An experimental, high-speed tri-vision camera system intended for real-time driver eye-blink and saccade measurement was designed, developed, implemented and tested using prototype, ultra-high dynamic range, automotive-grade image sensors specifically developed by E2V (formerly Atmel) Grenoble SA as part of the European FP6 project SENSATION (Advanced Sensor Development for Attention, Stress, Vigilance and Sleep/Wakefulness Monitoring).
Results: The developed system can sustain frame rates of 59.8 Hz at the full stereovision resolution of 1280 × 480, but this can reach 750 Hz when a 10 kpixel Region of Interest (ROI) is used, with a maximum global shutter speed of 1/48000 s and a shutter efficiency of 99.7%. The data can be reliably transmitted uncompressed over standard copper Camera-Link® cables over 5 metres. The synchronisation error between the left and right stereo images is less than 100 ps, and this has been verified both electrically and optically. Synchronisation is automatically established at boot-up and maintained during resolution changes. A third camera in the set can be configured independently. The dynamic range of the 10-bit sensors exceeds 123 dB, with a spectral sensitivity extending well into the infra-red range.
Conclusion: The system was subjected to a comprehensive testing protocol, which confirms that the salient requirements for the driver monitoring application are adequately met and in some respects exceeded. The synchronisation technique presented may also benefit several other automotive stereovision applications, including near- and far-field obstacle detection and collision avoidance, road condition monitoring and others. Partially funded by the EU FP6 through the IST-507231 SENSATION project.
Ultrafast imaging of light scattering dynamics using second-generation compressed ultrafast photography
We present single-shot real-time video recording of light scattering dynamics by second-generation compressed ultrafast photography (G2-CUP). Using G2-CUP at 100 billion frames per second, in a single camera exposure, we experimentally captured the evolution of the light intensity distribution in an engineered thin scattering plate assembly. G2-CUP, which implements a new reconstruction paradigm and a more efficient hardware design than its predecessors, markedly improves the reconstructed image quality. The ultrafast imaging reveals the instantaneous light scattering pattern as a photonic Mach cone. We envision that our technology will find a diverse range of applications in biomedical imaging, materials science, and physics.
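Compressed ultrafast photography belongs to the family of compressed-sensing imaging techniques: a scene is encoded into fewer measurements than unknowns, and the video is recovered by solving a sparsity-regularized inverse problem. The toy sketch below is not the G2-CUP reconstruction paradigm; it only illustrates the underlying recovery principle on a generic sparse signal, using the iterative shrinkage-thresholding algorithm (ISTA) with a random sensing matrix (all sizes and parameters are illustrative assumptions).

```python
import numpy as np

# Toy compressed-sensing recovery (illustrative only, not the authors'
# algorithm): recover a k-sparse signal x from m < n linear measurements
# y = A @ x by minimizing ||A x - y||^2 / 2 + lam * ||x||_1 with ISTA.

rng = np.random.default_rng(0)
n, m, k = 128, 48, 4                      # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                            # single "compressed snapshot"

def ista(A, y, lam=0.02, n_iter=500):
    """Iterative shrinkage-thresholding for the l1-regularized least squares."""
    L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)          # gradient of the data-fit term
        z = x - grad / L                  # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

With far fewer measurements than unknowns (48 vs. 128), the sparsity prior still pins down the signal, which is the same leverage that lets a single compressed exposure encode an entire ultrafast video.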
RELLIS-3D Dataset: Data, Benchmarks and Analysis
Semantic scene understanding is crucial for robust and safe autonomous
navigation, particularly so in off-road environments. Recent deep learning
advances for 3D semantic segmentation rely heavily on large sets of training
data, however existing autonomy datasets either represent urban environments or
lack multimodal off-road data. We fill this gap with RELLIS-3D, a multimodal
dataset collected in an off-road environment, which contains annotations for
13,556 LiDAR scans and 6,235 images. The data was collected on the RELLIS
Campus of Texas A&M University, and presents challenges to existing algorithms
related to class imbalance and environmental topography. Additionally, we
evaluate the current state of the art deep learning semantic segmentation
models on this dataset. Experimental results show that RELLIS-3D presents
challenges for algorithms designed for segmentation in urban environments. This
novel dataset provides the resources needed by researchers to continue to
develop more advanced algorithms and investigate new research directions to
enhance autonomous navigation in off-road environments. RELLIS-3D will be
published at https://github.com/unmannedlab/RELLIS-3D
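One way to quantify the class imbalance the abstract highlights is simply to count how often each semantic class appears across the annotated scans. The sketch below is illustrative (the class IDs and the dominance pattern are hypothetical, not the actual RELLIS-3D ontology or statistics):

```python
import numpy as np

# Illustrative sketch: measure per-class label frequency across a set of
# annotated scans. Class IDs and proportions are made up for the example;
# real off-road data is typically dominated by classes like grass/vegetation.

def class_frequencies(label_maps, num_classes):
    """Return the fraction of labeled points/pixels belonging to each class."""
    counts = np.zeros(num_classes, dtype=np.int64)
    for labels in label_maps:
        counts += np.bincount(labels.ravel(), minlength=num_classes)
    return counts / counts.sum()

# Three fake "scans" of 1000 points each, heavily skewed toward class 0.
rng = np.random.default_rng(1)
scans = [rng.choice(4, size=1000, p=[0.85, 0.10, 0.04, 0.01]) for _ in range(3)]
freq = class_frequencies(scans, num_classes=4)
```

Such frequency vectors are what motivate class-balanced losses or resampling when training segmentation models on long-tailed off-road data.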
A survey on human performance capture and animation
With the rapid development of computing technology, three-dimensional (3D) human body models and their dynamic motions are widely used in the digital entertainment industry. Human performance mainly involves human body shapes and motions. Key research problems include how to capture and analyze static geometric appearance and dynamic movement of human bodies, and how to simulate human body motions with physical effects. In this survey, according to main research directions of human body performance capture and animation, we summarize recent advances in key research topics, namely human body surface reconstruction, motion capture and synthesis, as well as physics-based motion simulation, and further discuss future research problems and directions. We hope this will be helpful for readers to have a comprehensive understanding of human performance capture and animation.
Seeing Through Fog Without Seeing Fog: Deep Multimodal Sensor Fusion in Unseen Adverse Weather
The fusion of multimodal sensor streams, such as camera, lidar, and radar
measurements, plays a critical role in object detection for autonomous
vehicles, which base their decision making on these inputs. While existing
methods exploit redundant information in good environmental conditions, they
fail in adverse weather where the sensory streams can be asymmetrically
distorted. These rare "edge-case" scenarios are not represented in available
datasets, and existing fusion architectures are not designed to handle them. To
address this challenge we present a novel multimodal dataset acquired in over
10,000km of driving in northern Europe. Although this dataset is the first
large multimodal dataset in adverse weather, with 100k labels for lidar,
camera, radar, and gated NIR sensors, it does not facilitate training as
extreme weather is rare. To this end, we present a deep fusion network for
robust fusion without a large corpus of labeled training data covering all
asymmetric distortions. Departing from proposal-level fusion, we propose a
single-shot model that adaptively fuses features, driven by measurement
entropy. We validate the proposed method, trained on clean data, on our
extensive validation dataset. Code and data are available here
https://github.com/princeton-computational-imaging/SeeingThroughFog
Implementation of an Ultra-Bright Thermographic Phosphor for Gas Turbine Engine Temperature Measurements
The overall goal of the Aeronautics Research Mission Directorate (ARMD) Seedling Phase II effort was to build on the promising temperature-sensing characteristics of the ultra-bright thermographic phosphor Cr-doped gadolinium aluminum perovskite (Cr:GAP) demonstrated in Phase I by transitioning towards an engine environment implementation. The strategy adopted was to take advantage of the unprecedented retention of ultra-bright luminescence from Cr:GAP at temperatures over 1000 °C to enable fast 2D temperature mapping of actual component surfaces as well as to utilize inexpensive low-power laser-diode excitation suitable for on-wing diagnostics. A special emphasis was placed on establishing Cr:GAP luminescence-based surface temperature mapping as a new tool for evaluating engine component surface cooling effectiveness.
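A common readout principle for thermographic phosphors is lifetime thermometry: the luminescence decay time shortens with temperature, so fitting an exponential decay to the measured signal and inverting a calibration curve yields surface temperature. The sketch below shows only the generic lifetime-fitting step on synthetic data (the time base, lifetime, and noise level are illustrative assumptions, not NASA's Cr:GAP calibration):

```python
import numpy as np

# Generic phosphor lifetime extraction (illustrative): fit
# I(t) = I0 * exp(-t / tau) by linear least squares on log(I).
# A calibration curve tau(T), measured separately, would then map the
# fitted lifetime to surface temperature.

def decay_lifetime(t, intensity):
    """Return tau from an exponential decay via a log-linear fit."""
    slope, _ = np.polyfit(t, np.log(intensity), 1)   # log I = log I0 - t/tau
    return -1.0 / slope

# Synthetic decay: tau = 25 microseconds with 1% multiplicative noise.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 100e-6, 200)
signal = np.exp(-t / 25e-6) * (1.0 + 0.01 * rng.standard_normal(t.size))
tau_hat = decay_lifetime(t, signal)
```

In practice intensity-ratio methods are an alternative readout when fast 2D mapping is needed, since they work from two spectrally filtered images instead of a time-resolved decay.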