Mosaiced-Based Panoramic Depth Imaging with a Single Standard Camera
In this article we present a panoramic depth imaging system. The system is mosaic-based, which means that we use a single rotating camera and assemble the captured images into a mosaic. Due to an offset of the camera's optical center from the rotational center of the system, we are able to capture the motion parallax effect, which enables stereo reconstruction. The camera rotates on a circular path with a step defined by an angle equivalent to one column of the captured image. The equation for depth estimation can be easily derived from the system geometry. To find the corresponding points on a stereo pair of panoramic images, the epipolar geometry needs to be determined. It can be shown that the epipolar geometry is very simple if the reconstruction is based on a symmetric pair of stereo panoramic images. We get a symmetric pair of stereo panoramic images when we take symmetric columns on the left and on the right side of the captured image's center column. Epipolar lines of the symmetric pair of panoramic images are image rows. We focus mainly on the system analysis. Results of the stereo reconstruction procedure and the quality evaluation of the generated depth images are quite promising. The system performs well in the reconstruction of small indoor spaces. Our final goal is to develop a system for the automatic navigation of a mobile robot in a room.
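The depth equation mentioned above comes from triangulating two viewing rays whose origins lie on the circle traced by the offset optical center. A minimal planar sketch of that triangulation (synthetic numbers for the offset, scene point and rotation angles; not the paper's exact parameterization):

```python
import math

def triangulate(p1, d1, p2, d2):
    """Intersect two planar rays p_i + t_i * d_i by solving a 2x2 linear system."""
    a, b = d1[0], -d2[0]
    c, e = d1[1], -d2[1]
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    det = a * e - b * c          # non-zero when the rays are not parallel
    t1 = (rx * e - b * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Hypothetical setup: optical-center offset r from the rotation axis, a scene
# point P, and the two rotation angles at which P appears in the chosen columns.
r = 0.3
P = (4.0, 1.5)
phi1, phi2 = 0.2, 0.9
p1 = (r * math.cos(phi1), r * math.sin(phi1))   # optical-center positions
p2 = (r * math.cos(phi2), r * math.sin(phi2))
d1 = (P[0] - p1[0], P[1] - p1[1])               # ray directions; in the real
d2 = (P[0] - p2[0], P[1] - p2[1])               # system these come from image columns
Q = triangulate(p1, d1, p2, d2)
depth = math.hypot(Q[0], Q[1])                  # depth from the rotational center
```

In the demo the ray directions are constructed from the known point so the triangulation is self-verifying; a closed-form depth equation follows from the same geometry.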
Panoramic Depth Imaging: Single Standard Camera Approach
In this paper we present a panoramic depth imaging system. The system is mosaic-based, which means that we use a single rotating camera and assemble the captured images into a mosaic. Due to an offset of the camera's optical center from the rotational center of the system, we are able to capture the motion parallax effect, which enables stereo reconstruction. The camera rotates on a circular path with a step defined by the angle equivalent to one pixel column of the captured image. The equation for depth estimation can be easily derived from the system geometry. To find the corresponding points on a stereo pair of panoramic images, the epipolar geometry needs to be determined. It can be shown that the epipolar geometry is very simple if the reconstruction is based on a symmetric pair of stereo panoramic images. We get a symmetric pair of stereo panoramic images when we take symmetric pixel columns on the left and on the right side of the captured image center column. Epipolar lines of the symmetric pair of panoramic images are image rows. The search space on the epipolar line can be additionally constrained. The focus of the paper is mainly on the system analysis. Results of the stereo reconstruction procedure and the quality evaluation of the generated depth images are quite promising. The system performs well for the reconstruction of small indoor spaces. Our final goal is to develop a system for the automatic navigation of a mobile robot in a room.
Minimalist and High-Quality Panoramic Imaging with PSF-aware Transformers
High-quality panoramic images with a Field of View (FoV) of 360 degrees are essential for contemporary panoramic computer vision tasks. However, conventional imaging systems come with sophisticated lens designs and heavy optical components. This disqualifies their usage in many mobile and wearable applications where thin and portable, minimalist imaging systems are desired. In this paper, we propose a Panoramic Computational Imaging Engine (PCIE) to address minimalist and high-quality panoramic imaging. With less than three spherical lenses, a Minimalist Panoramic Imaging Prototype (MPIP) is constructed based on the design of the Panoramic Annular Lens (PAL), but with low-quality imaging results due to aberrations and small image plane size. We propose two pipelines, i.e., Aberration Correction (AC) and Super-Resolution and Aberration Correction (SR&AC), to solve the image quality problems of MPIP, with imaging sensors of small and large pixel size, respectively. To provide a universal network for the two pipelines, we leverage the information from the Point Spread Function (PSF) of the optical system and design a PSF-aware Aberration-image Recovery Transformer (PART), in which the self-attention calculation and feature extraction are guided via PSF-aware mechanisms. We train PART on synthetic image pairs from simulation and put forward the PALHQ dataset to fill the gap of real-world high-quality PAL images for low-level vision. A comprehensive variety of experiments on synthetic and real-world benchmarks demonstrates the impressive imaging results of PCIE and the effectiveness of plug-and-play PSF-aware mechanisms. We further deliver heuristic experimental findings for minimalist and high-quality panoramic imaging. Our dataset and code will be available at
https://github.com/zju-jiangqi/PCIE-PART.
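The exact PART architecture is specified in the paper and repository; purely as a generic illustration of the idea of guiding self-attention with optical priors, one can bias attention scores with a precomputed PSF-derived matrix (all names and the bias encoding here are hypothetical, not the authors' design):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def psf_aware_attention(q, k, v, psf_bias):
    """Scaled dot-product attention with an additive PSF-derived score bias.

    q, k, v: lists of n vectors of dimension d; psf_bias: n x n matrix, e.g.
    encoding how strongly the optical PSF couples locations i and j
    (hypothetical encoding, for illustration only).
    """
    d = len(q[0])
    out = []
    for i in range(len(q)):
        scores = [
            sum(qi * kj for qi, kj in zip(q[i], k[j])) / math.sqrt(d) + psf_bias[i][j]
            for j in range(len(k))
        ]
        w = softmax(scores)                   # attention weights, rows sum to 1
        out.append([sum(wj * v[j][t] for j, wj in enumerate(w))
                    for t in range(len(v[0]))])
    return out

# Toy example: 3 tokens of dimension 2; a zero bias reduces to plain attention.
q = k = v = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
zero_bias = [[0.0] * 3 for _ in range(3)]
out = psf_aware_attention(q, k, v, zero_bias)
```

A strongly positive bias toward one key makes the output collapse onto that value vector, which is the mechanism by which a PSF prior can steer where the network attends.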
Capturing Panoramic Depth Images with a Single Standard Camera
In this paper we present a panoramic depth imaging system. The system is mosaic-based, which means that we use a single rotating camera and assemble the captured images into a mosaic. Due to an offset of the camera's optical center from the rotational center of the system, we are able to capture the motion parallax effect, which enables stereo reconstruction. The camera rotates on a circular path with a step defined by an angle equivalent to one column of the captured image. The equation for depth estimation can be easily derived from the system geometry. To find the corresponding points on a stereo pair of panoramic images, the epipolar geometry needs to be determined. It can be shown that the epipolar geometry is very simple if the reconstruction is based on a symmetric pair of stereo panoramic images. We get a symmetric pair of stereo panoramic images when we take symmetric columns on the left and on the right side of the captured image center column. Epipolar lines of the symmetric pair of panoramic images are image rows. We focus mainly on the system analysis. The system performs well in the reconstruction of small indoor spaces.
Under vehicle perception for high level safety measures using a catadioptric camera system
In recent years, under-vehicle surveillance and the classification of vehicles have become indispensable tasks that must be achieved for security measures in certain areas such as shopping centers, government buildings, army camps, etc. The main challenge in achieving this task is to monitor the underframes of the means of transportation. In this paper, we present a novel solution to achieve this aim. Our solution consists of three main parts: monitoring, detection and classification. In the first part we design a new catadioptric camera system in which the perspective camera points downwards to the catadioptric mirror mounted on the body of a mobile robot. Thanks to the catadioptric mirror, the scenes against the camera's optical axis direction can be viewed. In the second part we use speeded-up robust features (SURF) in an object recognition algorithm. The fast appearance-based mapping algorithm (FAB-MAP) is exploited for the classification of the means of transportation in the third part. The proposed technique was implemented in a laboratory environment.
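The SURF-based recognition step hinges on descriptor matching; SURF itself ships with OpenCV's contrib package (`cv2.xfeatures2d`), but the matching logic is independent of the detector. A minimal nearest-neighbour matcher with Lowe's ratio test on synthetic descriptors (illustrative, not the paper's pipeline):

```python
import math

def ratio_test_match(desc_a, desc_b, ratio=0.7):
    """Nearest-neighbour matching with Lowe's ratio test on Euclidean distances.

    A match (i, j) is accepted only when the best distance is clearly smaller
    than the second best, which discards ambiguous correspondences.
    """
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((math.dist(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

# Synthetic 4-D "descriptors": A[0] has one clear partner in B, while A[1] has
# two nearly identical candidates and is rejected by the ratio test.
A = [[1.0, 0.0, 0.0, 0.0],
     [0.5, 0.5, 0.0, 0.0]]
B = [[0.0, 1.0, 0.0, 0.0],
     [1.0, 0.05, 0.0, 0.0],
     [0.5, 0.55, 0.0, 0.0],
     [0.55, 0.5, 0.0, 0.0]]
matches = ratio_test_match(A, B)
```

Real SURF descriptors are 64- or 128-dimensional, but the acceptance logic is identical.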
Panoramic Annular Localizer: Tackling the Variation Challenges of Outdoor Localization Using Panoramic Annular Images and Active Deep Descriptors
Visual localization is an attractive problem that estimates the camera location from database images based on the query image. It is a crucial task for various applications, such as autonomous vehicles, assistive navigation and augmented reality. The challenging issues of the task lie in various appearance variations between query and database images, including illumination variations, dynamic object variations and viewpoint variations. In order to tackle these challenges, this paper proposes a Panoramic Annular Localizer, which incorporates a panoramic annular lens and robust deep image descriptors. The panoramic annular images captured by the single camera are processed and fed into the NetVLAD network to form the active deep descriptor, and sequential matching is utilized to generate the localization result. Experiments carried out on public datasets and in the field demonstrate the validity of the proposed system. Comment: Accepted by ITSC 201
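NetVLAD yields one global descriptor per frame; sequential matching then scores short runs of consecutive frames rather than single images, which suppresses spurious single-frame matches. A toy sketch of that aggregation over a distance matrix (a SeqSLAM-style simplification with made-up numbers, not the paper's exact scheme):

```python
def sequence_match(dist, seq_len=3):
    """Pick the database window best matching the last seq_len query frames.

    dist[q][d] is the descriptor distance between query frame q and database
    frame d; the score of a candidate window is the summed diagonal distance.
    """
    n_q, n_d = len(dist), len(dist[0])
    q0 = n_q - seq_len                        # align the most recent query frames
    best_start, best_score = 0, float("inf")
    for s in range(n_d - seq_len + 1):
        score = sum(dist[q0 + t][s + t] for t in range(seq_len))
        if score < best_score:
            best_score, best_start = score, s
    return best_start + seq_len - 1           # database frame for the latest query

# Toy distance matrix: 3 queries x 5 database frames; the true alignment runs
# down the low-distance diagonal at columns 1..3.
dist = [
    [0.9, 0.1, 0.8, 0.9, 0.7],
    [0.8, 0.9, 0.1, 0.9, 0.6],
    [0.7, 0.8, 0.9, 0.1, 0.2],
]
best = sequence_match(dist, seq_len=3)        # expected: database frame 3
```

Note how the final row alone would be ambiguous (0.1 vs 0.2), while the sequence score resolves it.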
Knowledge, attitude and perception on radiation imaging among children's caregivers in the pediatric dental clinic
OBJECTIVE: Nuclear medicine provides important clinical information for diagnostic and therapeutic purposes. The use of medical imaging has gradually increased in the United States, and this has raised health concerns about the potential future risks associated with radiation exposure in children. While studies have evaluated the adverse effects of imaging procedures, there is insufficient evidence about communicating radiation risks. The overall purpose of this paper is to review radiation risks in pediatric imaging using published evidence from the World Health Organization and to evaluate the knowledge and attitude of caregivers towards radiation risks in pediatric imaging. Specifically, we aim to determine whether an educational brochure improves parental knowledge of radiation and/or changes their attitude and perception regarding allowing their children to undergo dental radiographs.
METHODS: A prospective sample survey was performed of caregivers who presented with their child to the Boston University Pediatric Oral Healthcare Center. Parents or legal guardians (18 years or older) who accompanied a child were eligible for inclusion and were approached for enrollment. Pre- and post-survey questionnaires were used to evaluate parents' or guardians' level of knowledge and attitude about the risks and benefits of dental radiographs. Parents were also asked about their level of comfort with allowing their child to undergo dental radiographs. After completing the pre-survey questionnaire, parents were asked to read the English-language informational handout. Statistical analysis was performed with Microsoft Excel 2013. Descriptive analysis was conducted to summarize the survey responses.
RESULTS: Among the 30 parents who were surveyed, a small proportion (30%) were very comfortable with the dentist using dental radiographs on their child, versus 57% after reading the handout. The results showed that the informational handout improved parental knowledge of the risks and benefits of ionizing radiation. Most parents indicated that the handout was helpful, and they reported an increased level of comfort with and willingness to have their children receive radiation imaging during dental treatment procedures.
DISCUSSION: Educating parents or caregivers through an informational handout is a helpful resource for improving their knowledge and relieving their concerns. Informing parents about the risks of ionizing radiation does not change parental willingness for their children to undergo dental radiographs.
Face tracking using a hyperbolic catadioptric omnidirectional system
In the first part of this paper, we present a brief review of catadioptric omnidirectional systems. The special case of the hyperbolic omnidirectional system is analysed in depth. The literature shows that a hyperboloidal mirror has two clear advantages over alternative geometries. Firstly, a hyperboloidal mirror has a single projection centre [1]. Secondly, the image resolution is uniformly distributed along the mirror's radius [2].
In the second part of this paper we show empirical results for the detection and tracking of faces from the omnidirectional images using the Viola-Jones method. Both panoramic and perspective projections, extracted from the omnidirectional image, were used for that purpose. The omnidirectional image size was 480x480 pixels, in greyscale. The tracking method used regions of interest (ROIs) set as the result of the detections of faces from a panoramic projection of the image. In order to avoid losing or duplicating detections, the panoramic projection was extended horizontally. Duplications were eliminated based on the ROIs established by previous detections. After a confirmed detection, faces were tracked from perspective projections (called virtual cameras), each one associated with a particular face. The zoom, pan and tilt of each virtual camera were determined by the ROIs previously computed on the panoramic image.
The results show that, when using a careful combination of the two projections, good frame rates can be achieved in the task of tracking faces reliably.
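The panoramic projection used above amounts to a polar-to-Cartesian resampling of the omnidirectional image around the mirror centre. A minimal coordinate mapping, assuming the 480x480 image from the paper but with a hypothetical centre and annulus radii (the paper does not state these values):

```python
import math

def pano_to_omni(u, v, pano_w, pano_h, cx, cy, r_in, r_out):
    """Map a panoramic pixel (u, v) to coordinates in the omnidirectional image.

    Columns u span the full 360 degrees; rows v span the radial band
    [r_in, r_out] of the mirror image. Bilinear sampling at the returned
    (x, y) would produce the unwrapped panorama.
    """
    theta = 2.0 * math.pi * u / pano_w
    rad = r_in + (r_out - r_in) * v / (pano_h - 1)
    return (cx + rad * math.cos(theta), cy + rad * math.sin(theta))

# 480x480 omnidirectional image, assumed centre (240, 240) and annulus
# radii 60..230 (hypothetical values), unwrapped to a 720x120 panorama.
x0, y0 = pano_to_omni(0, 0, 720, 120, 240.0, 240.0, 60.0, 230.0)
x1, y1 = pano_to_omni(360, 119, 720, 120, 240.0, 240.0, 60.0, 230.0)
```

Extending the panorama horizontally, as the paper does to avoid split detections at the seam, just means letting `u` run past `pano_w` so `theta` wraps around.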
Dynamic Illumination for Augmented Reality with Real-Time Interaction
Current augmented and mixed reality systems suffer from a lack of correct illumination modeling, in which the virtual objects are rendered under the same lighting conditions as the real environment. While we are seeing astonishing results from the entertainment industry in multiple media forms, the procedure is mostly accomplished offline. The illumination information extracted from the physical scene is used to interactively render the virtual objects, which results in a more realistic output in real time. In this paper, we present a method that detects the physical illumination of a dynamic scene, then uses the extracted illumination to render the virtual objects added to the scene. The method has three steps that are assumed to work concurrently in real time. The first is the estimation of the direct illumination (incident light) from the physical scene using computer vision techniques through a 360° live-feed camera connected to the AR device. The second is the simulation of indirect illumination (light reflected from real-world surfaces onto virtual objects) in rendering, using region capture of a 2D texture from the AR camera view. The third is defining the virtual objects with proper lighting and shadowing characteristics using a shader language through multiple passes. Finally, we tested our work under multiple lighting conditions to evaluate the accuracy of the results, based on the shadows cast by the virtual objects, which should be consistent with the shadows cast by the real objects, at a reduced performance cost.
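In its simplest form, estimating direct illumination from a 360° feed can mean locating the brightest region of an equirectangular frame and converting its pixel position to a world-space light direction. A toy sketch of that conversion (illustrative only, not the paper's method; the luminance map here is synthetic):

```python
import math

def dominant_light_dir(env):
    """Return a unit light direction from the brightest pixel of an
    equirectangular luminance map env[row][col] (rows = latitude, cols = longitude)."""
    h, w = len(env), len(env[0])
    br, bc = max(((r, c) for r in range(h) for c in range(w)),
                 key=lambda rc: env[rc[0]][rc[1]])
    # Pixel centre -> spherical angles: azimuth in [0, 2*pi),
    # elevation in [-pi/2, pi/2] with row 0 at the top of the sphere.
    phi = 2.0 * math.pi * (bc + 0.5) / w
    theta = math.pi * (0.5 - (br + 0.5) / h)
    return (math.cos(theta) * math.cos(phi),
            math.sin(theta),                  # y is up in this convention
            math.cos(theta) * math.sin(phi))

# 4x8 toy luminance map with one bright sample near the top row, i.e. a light
# source high above the camera.
env = [[0.1] * 8 for _ in range(4)]
env[0][2] = 5.0
d = dominant_light_dir(env)
```

A real implementation would average over a bright region and track it per frame, but the pixel-to-direction mapping is the core of the incident-light step.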