A single-chip FPGA implementation of real-time adaptive background model
This paper demonstrates the use of a single-chip
FPGA for the extraction of highly accurate background
models in real-time. The models are based
on 24-bit RGB values and 8-bit grayscale intensity
values. Three background models are presented, all
using a camcorder, single FPGA chip, four blocks
of RAM and a display unit. The architectures have
been implemented and tested using a Panasonic NVDS60B
digital video camera connected to a Celoxica
RC300 Prototyping Platform with a Xilinx Virtex
II XC2v6000 FPGA and 4 banks of onboard RAM.
The novel FPGA architecture presented has the advantages of minimizing latency and the movement of large datasets by conducting time-critical processes on BlockRAM. The systems operate at clock rates ranging from 57 MHz to 65 MHz and are capable of performing pre-processing functions such as temporal low-pass filtering on a standard frame size of 640 × 480 pixels at up to 210 frames per second.
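The temporal low-pass filtering mentioned as a pre-processing step amounts, in software terms, to an exponential moving average over incoming frames. A minimal sketch of the idea follows; the update rate alpha, function name, and frame values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Temporal low-pass (exponential moving average) background update.

    B_t = (1 - alpha) * B_{t-1} + alpha * I_t
    A small alpha makes the background adapt slowly, suppressing
    transient foreground motion.
    """
    return (1.0 - alpha) * background + alpha * frame.astype(np.float64)

# Hypothetical 8-bit grayscale stream at the paper's 640 x 480 frame size
background = np.zeros((480, 640))
frame = np.full((480, 640), 200.0)  # a static bright scene
for _ in range(10):
    background = update_background(background, frame)
# background converges toward the static scene value as frames accumulate
```

In hardware, the same recurrence maps naturally onto a per-pixel multiply-accumulate, which is why it fits a streaming FPGA pipeline.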
Accelerated hardware video object segmentation: From foreground detection to connected components labelling
This is the preprint version of the article. Copyright © 2010 Elsevier.
This paper demonstrates the use of a single-chip FPGA for the segmentation of moving objects in a video sequence. The system maintains highly accurate background models, and integrates the detection of foreground pixels with the labelling of objects using a connected components algorithm. The background models are based on 24-bit RGB values and 8-bit grayscale intensity values. A multimodal background differencing algorithm is presented, using a single FPGA chip and four blocks of RAM. The real-time connected component labelling algorithm, also designed for FPGA implementation, run-length encodes the output of the background subtraction and performs connected component analysis on this representation. The run-length encoding, together with other parts of the algorithm, is performed in parallel; sequential operations are minimized because the number of run-lengths is typically less than the number of pixels. The two algorithms are pipelined together for maximum efficiency.
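The run-length-encoded connected components approach described above can be sketched in software: encode each row of the foreground mask as runs, then merge runs that overlap vertically with union-find. The function names and the 4-connectivity choice are illustrative assumptions, not the paper's exact hardware design:

```python
def run_length_encode(row):
    """Encode a binary row as (start, end) foreground runs, end exclusive."""
    runs, start = [], None
    for x, v in enumerate(row):
        if v and start is None:
            start = x
        elif not v and start is not None:
            runs.append((start, x))
            start = None
    if start is not None:
        runs.append((start, len(row)))
    return runs

def count_components(mask):
    """Connected-component count on run-length encoded rows (4-connectivity).

    Union-find merges runs that share a column with a run in the row
    above; the number of union operations scales with the number of
    runs, which is typically far smaller than the number of pixels.
    """
    parent = {}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    rows = [run_length_encode(r) for r in mask]
    run_ids, next_id = [], 0
    for y, runs in enumerate(rows):
        ids = []
        for _ in runs:
            parent[next_id] = next_id
            ids.append(next_id)
            next_id += 1
        run_ids.append(ids)
        if y > 0:
            for i, (s, e) in enumerate(runs):
                for j, (ps, pe) in enumerate(rows[y - 1]):
                    if s < pe and ps < e:  # runs share a column
                        union(run_ids[y - 1][j], ids[i])
    return len({find(i) for i in range(next_id)})

mask = [
    [0, 1, 1, 0, 0, 1],
    [0, 1, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
]
n = count_components(mask)  # three separate blobs in this mask
```

The savings the abstract points to are visible here: the merge loop touches runs, not pixels, so sparse foreground masks are labelled with very little sequential work.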
Robust Fusion of LiDAR and Wide-Angle Camera Data for Autonomous Mobile Robots
Autonomous robots that assist humans in day-to-day living tasks are becoming
increasingly popular. Autonomous mobile robots operate by sensing and
perceiving their surrounding environment to make accurate driving decisions. A
combination of several different sensors such as LiDAR, radar, ultrasound
sensors and cameras are utilized to sense the surrounding environment of
autonomous vehicles. These heterogeneous sensors simultaneously capture various
physical attributes of the environment. Such multimodality and redundancy of
sensing need to be positively utilized for reliable and consistent perception
of the environment through sensor data fusion. However, these multimodal sensor
data streams are different from each other in many ways, such as temporal and
spatial resolution, data format, and geometric alignment. For the subsequent
perception algorithms to utilize the diversity offered by multimodal sensing,
the data streams need to be spatially, geometrically and temporally aligned
with each other. In this paper, we address the problem of fusing the outputs of
a Light Detection and Ranging (LiDAR) scanner and a wide-angle monocular image
sensor for free space detection. The outputs of LiDAR scanner and the image
sensor are of different spatial resolutions and need to be aligned with each
other. A geometrical model is used to spatially align the two sensor outputs,
followed by a Gaussian Process (GP) regression-based resolution matching
algorithm to interpolate the missing data with quantifiable uncertainty. The
results indicate that the proposed sensor data fusion framework significantly
aids the subsequent perception steps, as illustrated by the performance
improvement of an uncertainty-aware free space detection algorithm.
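The GP regression-based resolution matching step can be illustrated in miniature: a Gaussian Process with a squared-exponential kernel interpolates sparse depth samples onto a denser grid and reports the posterior variance as the quantifiable uncertainty the abstract mentions. This 1-D sketch uses hypothetical values and hyperparameters throughout; the paper's actual formulation is not reproduced here:

```python
import numpy as np

def gp_interpolate(x_train, y_train, x_query,
                   length_scale=1.0, sigma_f=1.0, sigma_n=0.1):
    """GP regression with a squared-exponential kernel.

    Returns the posterior mean and variance at the query points, so
    gaps between sparse LiDAR returns are filled with an attached
    uncertainty estimate.
    """
    def kernel(a, b):
        d = a[:, None] - b[None, :]
        return sigma_f ** 2 * np.exp(-0.5 * (d / length_scale) ** 2)

    K = kernel(x_train, x_train) + sigma_n ** 2 * np.eye(len(x_train))
    K_s = kernel(x_train, x_query)
    K_ss = kernel(x_query, x_query)
    L = np.linalg.cholesky(K)
    coef = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ coef
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v ** 2, axis=0)
    return mean, var

# Hypothetical sparse depth samples (metres) along one scan line
x_train = np.array([0.0, 2.0, 4.0, 6.0])
y_train = np.array([5.0, 5.2, 4.8, 5.1])
x_query = np.linspace(0.0, 6.0, 13)  # denser image-aligned grid
mean, var = gp_interpolate(x_train, y_train, x_query)
```

Note how the variance grows between measurements and shrinks at them; a downstream free-space detector can weight each interpolated depth by that uncertainty.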
A Genre-aware Approach to Online Journalism Education
Revised paper; published or final version.
Enabling Data-Driven Transportation Safety Improvements in Rural Alaska
Safety improvements require funding, and a clear need must be demonstrated to secure it. For transportation safety, data, especially data about past crashes, is the usual means of demonstrating need. However, in rural locations such data is often not available, or is not in a form amenable to use in funding applications. This research helps rural entities, often federally recognized tribes and small villages, acquire the data needed for funding applications. Two products of this work are a traffic counting application for an iPad or similar device, and a review of the data requirements of the major transportation funding agencies. The traffic-counting app, UAF Traffic, demonstrated its ability to count traffic and turning movements for cars and trucks, as well as ATVs, snow machines, pedestrians, bicycles, and dog sleds. The review of the major agencies demonstrated that all the likely funders would accept qualitative data and Road Safety Audits; quantitative data, when available, was helpful.
Climate Change Impact Assessment for Surface Transportation in the Pacific Northwest and Alaska
WA-RD 772.