3D Indoor Scene Reconstruction using RGB-D Sensor
This research presents a new methodology for 3D scene reconstruction that can support effective robotic sensing and navigation in an indoor environment using only a low-cost RGB-D sensor. The 3D scene model can be used for many applications such as virtual reality visualization and robot navigation. Motivated by these applications, our goal is to create a system that takes a sequence of RGB and depth images captured with a hand-held camera as input and produces a globally consistent 3D probabilistic occupancy map as output. This research introduces a robust system that estimates the camera pose for multiple RGB video frames based on a key-frame selection strategy. To create the 3D scene in real time, a direct method that minimizes the photometric error is utilized. The camera pose is tracked using a ray-casting model, i.e., a frame-to-model method rather than frame-to-frame Iterative Closest Point (ICP) tracking. The point-to-plane ICP algorithm is used to establish geometric constraints between the point clouds as they are aligned. To fill in holes, the raw depth map is improved using a Truncated Signed Distance Function (TSDF) to voxelize the 3D space, accumulating depth maps from nearby frames using the camera poses obtained above. Finally, an efficient high-resolution probabilistic 3D mapping framework based on octrees (OctoMap) is used to store a wide range of indoor environments. The saved 3D occupancy map can help the robot avoid obstacles and display the robot's location in the 3D virtual scene in real time.
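The point-to-plane error mentioned in this abstract can be illustrated with a minimal sketch. The function below is a generic NumPy implementation of the point-to-plane residual (signed distance from each transformed source point to the tangent plane of its matched destination point), not the authors' code; the variable names and the toy correspondences are illustrative assumptions.

```python
import numpy as np

def point_to_plane_residuals(src, dst, dst_normals):
    """Point-to-plane ICP residuals: for each correspondence, the
    signed distance from the source point to the tangent plane of
    its matched destination point (dot product with the normal)."""
    # src, dst: (N, 3) matched point clouds; dst_normals: (N, 3) unit normals
    return np.einsum("ij,ij->i", src - dst, dst_normals)

# toy example: two source points offset 0.1 along the z-normal
src = np.array([[0.0, 0.0, 0.1], [1.0, 0.0, 0.1]])
dst = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
n   = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
print(point_to_plane_residuals(src, dst, n))  # → [0.1 0.1]
```

In a full pipeline these residuals would be minimized over a rigid-body transform of `src` at each ICP iteration; the sketch only shows the error term the abstract refers to.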
Hierarchical video summarisation in reference frame subspace
In this paper, a hierarchical video structure summarization approach using Laplacian Eigenmap is proposed, where a small set of reference frames is selected from the video sequence to form a reference subspace in which the dissimilarity between two arbitrary frames is measured. In the proposed summarization scheme, shot-level key frames are first detected from the continuity of inter-frame dissimilarity, and sub-shot-level and scene-level representative frames are then summarized using k-means clustering. Experiments are carried out on both test videos and movies, and the results show that, in comparison with a similar approach using latent semantic analysis, the proposed approach using Laplacian Eigenmap achieves a better recall rate in key-frame detection and gives an efficient hierarchical summarization at the sub-shot, shot, and scene levels.
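Two building blocks of this scheme can be sketched generically: representing each frame by its distances to a small set of reference frames, and clustering those representations with k-means. This is a simplified illustration, not the paper's method; it omits the Laplacian Eigenmap embedding entirely, and all names and the deterministic initialization are assumptions.

```python
import numpy as np

def reference_coordinates(frames, refs):
    """Represent each frame by its distances to a small set of
    reference frames; the dissimilarity between two frames is then
    measured between these reference-subspace coordinates."""
    # frames: (N, D) flattened frame features, refs: (R, D)
    return np.linalg.norm(frames[:, None, :] - refs[None, :, :], axis=2)  # (N, R)

def kmeans(X, k, iters=50):
    """Plain k-means (Lloyd's algorithm) for grouping detected key
    frames into sub-shot / scene level representatives."""
    # deterministic init: k points spaced evenly through X (an assumption)
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centers[None], axis=2)  # (N, k)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

The actual paper selects key frames from the continuity of inter-frame dissimilarity before clustering; this sketch only shows the two generic steps named in the abstract.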
Visualization and Correction of Automated Segmentation, Tracking and Lineaging from 5-D Stem Cell Image Sequences
Results: We present an application that enables the quantitative analysis of multichannel 5-D (x, y, z, t, channel) and large-montage confocal fluorescence microscopy images. The image sequences show stem cells together with blood vessels, enabling quantification of the dynamic behaviors of stem cells in relation to their vascular niche, with applications in developmental and cancer biology. Our application automatically segments, tracks, and lineages the image sequence data and then allows the user to view and edit the results of the automated algorithms in a stereoscopic 3-D window while simultaneously viewing the stem cell lineage tree in a 2-D window. Using the GPU to store and render the image sequence data enables a hybrid computational approach: an inference-based approach that uses user-provided edits to automatically correct related mistakes executes interactively on the system CPU while the GPU handles 3-D visualization tasks. Conclusions: By exploiting commodity computer gaming hardware, we have developed an application that can be run in the laboratory to facilitate rapid iteration through biological experiments. There is a pressing need for visualization and analysis tools for 5-D live-cell image data. We combine accurate unsupervised processes with an intuitive visualization of the results. Our validation interface allows each data set to be corrected to 100% accuracy, ensuring that downstream data analysis is accurate and verifiable. Our tool is the first to combine all of these aspects, leveraging the synergies obtained by using validation information from the stereo visualization to improve the low-level image processing tasks. Comment: BioVis 2014 conference
Flame Detection for Video-based Early Fire Warning Systems and 3D Visualization of Fire Propagation
Early and accurate detection and localization of flame is an essential requirement of modern early fire warning systems. Video-based systems can be used for this purpose; however, flame detection remains a challenging issue because many natural objects have characteristics similar to fire. In this paper, we present a new algorithm for video-based flame detection, which employs various spatio-temporal features such as colour probability, contour irregularity, spatial energy, flickering, and spatio-temporal energy. Various background subtraction algorithms are tested, and comparative results in terms of computational efficiency and accuracy are presented. Experimental results with two classification methods show that the proposed methodology provides high fire detection rates with a reasonable false alarm ratio. Finally, a 3D visualization tool for the estimation of fire propagation is outlined, and simulation results are presented and discussed. The original article was published by ACTAPRESS and is available here: http://www.actapress.com/Content_of_Proceeding.aspx?proceedingid=73
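One of the named features, flickering, can be sketched as a simple temporal-energy measure over a candidate region. This is a generic illustration under the assumption that flame pixels show higher frame-to-frame intensity variation than static background; it is not the paper's feature definition, and the function and variable names are hypothetical.

```python
import numpy as np

def temporal_flicker_energy(patch_stack):
    """Mean squared frame-to-frame intensity change inside a candidate
    region; flickering flame regions tend to score higher than static
    objects with fire-like colour."""
    # patch_stack: (T, H, W) grayscale intensities over T frames
    diffs = np.diff(patch_stack.astype(np.float64), axis=0)
    return float(np.mean(diffs ** 2))

static  = np.full((10, 8, 8), 0.5)                          # constant patch
flicker = np.random.default_rng(0).uniform(0, 1, (10, 8, 8))  # noisy patch
print(temporal_flicker_energy(static) < temporal_flicker_energy(flicker))  # True
```

In a detector, such a score would be one input among the colour, contour, and spatial-energy features listed above, combined by the classifier.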
Unwind: Interactive Fish Straightening
The ScanAllFish project is a large-scale effort to scan all of the world's 33,100 known species of fishes. It has already generated thousands of volumetric CT scans of fish species, which are available on open-access platforms such as the Open Science Framework. To achieve the scanning rate required for a project of this magnitude, many specimens are grouped together into a single tube and scanned all at once. The resulting data contain many fish which are often bent and twisted to fit into the scanner. Our system, Unwind, is a novel interactive visualization and processing tool which extracts, unbends, and untwists volumetric images of fish with minimal user interaction. Our approach enables scientists to interactively unwarp these volumes to remove the undesired torque and bending using a piecewise-linear skeleton extracted by averaging isosurfaces of a harmonic function connecting the head and tail of each fish. The result is a volumetric dataset of an individual, straight fish in a canonical pose defined by the expert marine biologist user. We have developed Unwind in collaboration with a team of marine biologists: our system has been deployed in their labs and is presently being used for dataset construction, biomechanical analysis, and the generation of figures for scientific publication.
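The skeleton-extraction idea, a harmonic function pinned to 0 at the head and 1 at the tail whose level sets are averaged into centerline points, can be sketched in 2-D. Unwind operates on segmented 3-D fish volumes; this simplified sketch instead relaxes the field over a whole 2-D grid by Jacobi iteration, and all names, grid sizes, and tolerances are assumptions.

```python
import numpy as np

def harmonic_field(shape, head, tail, iters=4000):
    """Discrete harmonic function on a 2-D grid: 0 at the head seed,
    1 at the tail seed, relaxed by Jacobi iteration (each interior
    cell becomes the average of its four neighbours)."""
    u = np.zeros(shape)
    u[tail] = 1.0
    for _ in range(iters):
        nxt = u.copy()
        nxt[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                  u[1:-1, :-2] + u[1:-1, 2:])
        nxt[head], nxt[tail] = 0.0, 1.0  # re-impose the two seeds
        u = nxt
    return u

def level_set_centers(u, n_levels=5, tol=0.05):
    """Average the coordinates of cells near each isovalue of u; the
    resulting points form a piecewise-linear skeleton between seeds."""
    ys, xs = np.indices(u.shape)
    pts = []
    for t in np.linspace(0.1, 0.9, n_levels):
        band = np.abs(u - t) < tol
        if band.any():
            pts.append((ys[band].mean(), xs[band].mean()))
    return pts
```

In the real system the analogous field is computed on the segmented fish volume, so its isosurfaces sweep from head to tail through the body and their averaged centers trace the bent midline to be straightened.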