Video-rate computational super-resolution and integral imaging at longwave-infrared wavelengths
We report the first computational super-resolved, multi-camera integral
imaging at long-wave infrared (LWIR) wavelengths. A synchronized array of FLIR
Lepton cameras was assembled, and computational super-resolution and
integral-imaging reconstruction were employed to generate video with light-field
imaging capabilities, such as 3D imaging and recognition of partially obscured
objects, while also providing a four-fold increase in effective pixel count.
This approach to high-resolution imaging enables a fundamental reduction in the
track length and volume of an imaging system, while also enabling use of
low-cost lens materials.
Comment: Supplementary multimedia material at
http://dx.doi.org/10.6084/m9.figshare.530302
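The abstract does not spell out the reconstruction pipeline; as a rough illustration of the computational super-resolution step, here is a minimal shift-and-add sketch in Python/NumPy. It assumes the subpixel shifts between the cameras are already known from registration; the function name and interface are hypothetical, not taken from the paper.

```python
import numpy as np

def shift_and_add_sr(frames, shifts, scale=2):
    """Naive shift-and-add super-resolution (illustrative only).

    frames: list of HxW low-resolution images (float arrays).
    shifts: per-frame (dy, dx) subpixel offsets relative to the first
            frame, in low-resolution pixel units.
    scale:  upsampling factor per axis; scale=2 corresponds to the
            four-fold increase in effective pixel count reported above.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for img, (dy, dx) in zip(frames, shifts):
        # Scatter each low-res sample onto its nearest high-res cell.
        ys = np.clip(np.round((np.arange(h) + dy) * scale).astype(int),
                     0, h * scale - 1)
        xs = np.clip(np.round((np.arange(w) + dx) * scale).astype(int),
                     0, w * scale - 1)
        np.add.at(acc, np.ix_(ys, xs), img)
        np.add.at(cnt, np.ix_(ys, xs), 1)
    return acc / np.maximum(cnt, 1)  # cells no frame hit stay zero
```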
Consistency and Standardization of Color in Medical Imaging: a Consensus Report
This article summarizes the consensus reached at the Summit on Color in Medical Imaging held at the Food and Drug Administration (FDA) on May 8–9, 2013, co-sponsored by the FDA and the ICC (International Color Consortium). The purpose of the meeting was to gather information on how color is currently handled by medical imaging systems, to identify areas where there is a need for improvement, to define objective requirements, and to facilitate consensus development of best practices. Participants were asked to identify areas of concern and unmet needs. This summary documents the topics that were discussed at the meeting and the recommendations that were made by the participants. Key areas identified where improvements in color would provide immediate tangible benefits were digital microscopy, telemedicine, medical photography (particularly ophthalmic and dental photography), and display calibration. Work in these and other related areas has been started within several professional groups, including the creation of the ICC Medical Imaging Working Group.
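The report is a policy document rather than a technical one, but one of its concrete concerns, consistent ICC-based color management for calibrated displays, is easy to make tangible. Below is a minimal Python sketch using Pillow's ImageCms bindings to LittleCMS; the image and profile file names are hypothetical stand-ins.

```python
from PIL import Image, ImageCms

# Convert an image from sRGB into a display's calibrated color space.
# "display.icc" stands in for a profile produced by display calibration.
srgb = ImageCms.createProfile("sRGB")
im = Image.open("slide.png").convert("RGB")  # hypothetical input image
calibrated = ImageCms.profileToProfile(im, srgb, "display.icc")
calibrated.save("slide_display.png")
```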
Comparing radial and tangential geometries for cylindrical panoramas
Cameras generally have a field of view only large enough to capture a portion of their surroundings. The goal of immersion is to replace many of the user's senses with virtual ones, so that the virtual environment feels as real as possible. Panoramic cameras are used to capture the entire 360° view, known as a panoramic image. Virtual reality makes use of these panoramic images to provide a more immersive experience than viewing images on a 2D screen. This thesis, in the field of computer vision, focuses on establishing a multi-camera geometry for generating cylindrical panorama images and on implementing it with the cheapest cameras possible. The specific goal of this project is to propose a camera geometry that reduces the parallax-related artifacts in the panorama image. We present a new approach to cylindrical panoramic imaging from multiple cameras placed evenly around a circle. Instead of looking outward, the traditional "radial" configuration, we propose to make the optical axes tangent to the camera circle, a "tangential" configuration. Besides an analysis and comparison of the radial and tangential geometries, we provide an experimental setup with real panoramas obtained in realistic conditions.
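Although the thesis concerns physical camera placement, the two geometries compare concisely in code. The sketch below (Python/NumPy; the function name is hypothetical) computes 2D camera centers and optical-axis directions for both configurations; the tangential case simply rotates each radial axis by 90 degrees.

```python
import numpy as np

def ring_poses(n_cams, radius, tangential=False):
    """Positions and optical-axis directions for cameras on a circle.

    Radial: each optical axis points outward along the radius.
    Tangential: each optical axis is rotated 90 degrees to lie
    tangent to the camera circle (the configuration proposed here).
    """
    poses = []
    for k in range(n_cams):
        theta = 2 * np.pi * k / n_cams
        radial_dir = np.array([np.cos(theta), np.sin(theta)])
        center = radius * radial_dir
        if tangential:
            axis = np.array([-radial_dir[1], radial_dir[0]])  # +90 deg
        else:
            axis = radial_dir
        poses.append((center, axis))
    return poses

# Example: 8 cameras on a 10 cm circle, tangential configuration.
for c, a in ring_poses(8, 0.10, tangential=True):
    print(np.round(c, 3), np.round(a, 3))
```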
Temporally Coherent General Dynamic Scene Reconstruction
Existing techniques for dynamic scene reconstruction from multiple
wide-baseline cameras primarily focus on reconstruction in controlled
environments, with fixed calibrated cameras and strong prior constraints. This
paper introduces a general approach to obtain a 4D representation of complex
dynamic scenes from multi-view wide-baseline static or moving cameras without
prior knowledge of the scene structure, appearance, or illumination.
Contributions of the work are: an automatic method for initial coarse
reconstruction to initialize joint estimation; sparse-to-dense temporal
correspondence integrated with joint multi-view segmentation and reconstruction
to introduce temporal coherence; and a general, robust approach for joint
segmentation refinement and dense reconstruction of dynamic scenes by
introducing a shape constraint. Comparison with state-of-the-art approaches on a
variety of complex indoor and outdoor scenes demonstrates improved accuracy in
both multi-view segmentation and dense reconstruction. This paper demonstrates
unsupervised reconstruction of complete temporally coherent 4D scene models
with improved non-rigid object segmentation and shape reconstruction, and its
application to free-viewpoint rendering and virtual reality.
Comment: Submitted to IJCV 2019. arXiv admin note: substantial text overlap
with arXiv:1603.0338
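The joint estimation pipeline is too involved to reproduce from an abstract, but the sparse wide-baseline correspondences that such an initialization could start from are a standard ingredient. Here is a minimal OpenCV sketch in Python, a generic sparse-matching step rather than the paper's actual method:

```python
import cv2

def sparse_matches(img_a, img_b, ratio=0.75):
    """Sparse wide-baseline correspondences via SIFT + Lowe's ratio test.

    Returns a list of matched point pairs ((xa, ya), (xb, yb)) that a
    coarse reconstruction stage could triangulate from.
    """
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for m, n in matcher.knnMatch(des_a, des_b, k=2):
        if m.distance < ratio * n.distance:  # keep unambiguous matches
            good.append((kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt))
    return good
```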
U-DiVE: Design and evaluation of a distributed photorealistic virtual reality environment
This dissertation presents a framework that allows low-cost devices to visualize and
interact with photorealistic scenes. To accomplish this, the framework uses
Unity's high-definition rendering pipeline, which provides a proprietary ray-tracing
algorithm, and Unity's streaming package, which allows an application to be streamed
from within the editor. The framework allows the composition of a realistic scene using
ray tracing, together with a virtual reality camera whose barrel shaders apply the
lens-distortion correction needed for use with an inexpensive cardboard viewer. It also
includes a method to collect the mobile device's spatial orientation through a web
browser to control the user's view, with the rendered images delivered via WebRTC. The
proposed framework can produce low-latency, realistic, and immersive environments
accessible through low-cost HMDs and mobile devices. To evaluate the framework, this
work verifies the frame rate achieved by the server and the mobile device, which should
exceed 30 FPS for a smooth experience. In addition, it discusses whether the overall
quality of experience is acceptable by evaluating the delay of image delivery from the
server to the mobile device in response to the user's movement. Our tests showed that
the framework reaches a mean latency of around 177 ms with household Wi-Fi equipment
and a maximum latency variation of 77.9 ms across the 8 scenes tested.
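The orientation-collection step is concrete enough to sketch. The browser's DeviceOrientationEvent reports alpha/beta/gamma as intrinsic Z-X'-Y'' Tait-Bryan angles (per the W3C specification); once shipped to the server, they must be turned into a rotation for the virtual camera. Below is a minimal NumPy version of that conversion; the function name is hypothetical and this is not the dissertation's actual code.

```python
import numpy as np

def device_orientation_to_matrix(alpha, beta, gamma):
    """Rotation matrix from DeviceOrientationEvent angles (degrees).

    The W3C spec defines alpha/beta/gamma as intrinsic Z-X'-Y''
    Tait-Bryan angles, so composing Rz(alpha) @ Rx(beta) @ Ry(gamma)
    yields the device-to-world rotation used to drive the view.
    """
    a, b, g = np.radians([alpha, beta, gamma])
    rz = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0,          0,         1]])
    rx = np.array([[1, 0,          0],
                   [0, np.cos(b), -np.sin(b)],
                   [0, np.sin(b),  np.cos(b)]])
    ry = np.array([[ np.cos(g), 0, np.sin(g)],
                   [ 0,         1, 0],
                   [-np.sin(g), 0, np.cos(g)]])
    return rz @ rx @ ry
```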
Focal Sweep Camera for Space-Time Refocusing
A conventional camera has a limited depth of field (DOF), which often results in defocus blur and loss of image detail. The technique of image refocusing allows a user to interactively change the plane of focus and DOF of an image after it is captured. One way to achieve refocusing is to capture the entire light field, but this requires a significant compromise of spatial resolution. This is because of the dimensionality gap: the captured information (a light field) is 4-D, while the information required for refocusing (a focal stack) is only 3-D. In this paper, we present an imaging system that directly captures a focal stack by physically sweeping the focal plane. We first describe how to sweep the focal plane so that the aggregate DOF of the focal stack covers the entire desired depth range without gaps or overlaps. Since the focal stack is captured over a duration of time in which scene objects can move, we refer to the captured focal stack as a duration focal stack. We then propose an algorithm for computing a space-time in-focus index map from the focal stack, which represents the time at which each pixel is best focused. The algorithm is designed to enable a seamless refocusing experience, even for textureless regions and at depth discontinuities. We have implemented two prototype focal-sweep cameras and captured several duration focal stacks. Results obtained using our method can be viewed at www.focalsweep.com.
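The paper's seam-aware index-map algorithm is not given in the abstract, but the basic idea of an in-focus index map can be sketched with a standard focus measure: score each slice of the focal stack by local Laplacian energy and take the per-pixel argmax over time. A minimal Python version follows; it is illustrative only, not the authors' algorithm.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def in_focus_index_map(stack, window=9):
    """Per-pixel index of the sharpest slice in a focal stack.

    stack: array of shape (T, H, W), one grayscale frame per focal
    setting. Uses locally averaged Laplacian energy as the focus
    measure, a common depth-from-focus heuristic.
    """
    measures = np.stack([
        uniform_filter(laplace(frame.astype(float)) ** 2, size=window)
        for frame in stack
    ])
    # (H, W) map: the time index at which each pixel is best focused.
    return np.argmax(measures, axis=0)
```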