Depth Estimation and Image Restoration by Deep Learning from Defocused Images
Monocular depth estimation and image deblurring are two fundamental tasks in
computer vision, given their crucial role in understanding 3D scenes.
Performing either of them from a single image is an ill-posed problem.
The recent advances in the field of Deep Neural Networks (DNNs)
have revolutionized many tasks in computer vision, including depth estimation
and image deblurring. When it comes to using defocused images, the depth
estimation and the recovery of the All-in-Focus (AiF) image become related
problems due to defocus physics. Despite this, most of the existing models
treat them separately. There are, however, recent models that solve these
problems simultaneously by concatenating two networks in a sequence to first
estimate the depth or defocus map and then reconstruct the focused image based
on it. We propose a DNN that solves the depth estimation and image deblurring
in parallel. Our Two-headed Depth Estimation and Deblurring Network (2HDED:NET)
extends a conventional Depth from Defocus (DFD) network with a deblurring
branch that shares the same encoder as the depth branch. The proposed method
has been successfully tested on two benchmarks, one for indoor and the other
for outdoor scenes: NYU-v2 and Make3D. Extensive experiments with 2HDED:NET on these benchmarks have demonstrated performance superior or close to that of state-of-the-art models for depth estimation and image deblurring.
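The parallel two-headed design described in this abstract lends itself to a compact sketch. The PyTorch snippet below is only an illustration of the general layout, with invented layer sizes rather than the paper's actual architecture: a single shared encoder feeds a depth decoder and a deblurring decoder side by side.

```python
import torch
import torch.nn as nn

class TwoHeadedDFDNet(nn.Module):
    """Illustrative 2HDED:NET-style layout: one shared encoder with two
    parallel decoders, one for depth and one for the all-in-focus image."""
    def __init__(self):
        super().__init__()
        # Shared encoder: defocused RGB image -> latent features.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Depth head: decodes a 1-channel depth map.
        self.depth_head = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),
        )
        # Deblurring head: decodes a 3-channel all-in-focus image.
        self.aif_head = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
        )

    def forward(self, defocused):
        feats = self.encoder(defocused)  # one representation serves both tasks
        return self.depth_head(feats), self.aif_head(feats)

depth, aif = TwoHeadedDFDNet()(torch.randn(1, 3, 256, 256))
```

Because both heads backpropagate into the same encoder, the defocus cues that tie depth to blur are learned once and shared, which is the point of solving the two tasks in parallel rather than in sequence.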
Towards Object-Centric Scene Understanding
Visual perception for autonomous agents continues to attract community attention due to the disruptive technologies and the wide applicability of such solutions. Autonomous Driving (AD), a major application in this domain, promises to revolutionize our approach to mobility while bringing critical advantages in limiting accident fatalities.
Fueled by recent advances in Deep Learning (DL), more computer vision tasks are being addressed using a learning paradigm. Deep Neural Networks (DNNs) have consistently pushed performance to unprecedented levels, demonstrating the ability of such approaches to generalize to an increasing number of difficult problems, such as 3D vision tasks.
In this thesis, we address two main challenges arising from current approaches: the computational complexity of multi-task pipelines, and the increasing need for manual annotations. On the one hand, AD systems need to perceive the surrounding environment on different levels of detail and, subsequently, take timely actions. This multitasking further limits the time available for each perception task. On the other hand, the need for such systems to generalize to massively diverse situations requires large-scale datasets covering long-tailed cases. Such a requirement renders traditional supervised approaches unsustainable in terms of annotation costs, despite the data readily available in the AD domain, especially for 3D tasks.
Driven by the nature of the AD environment, whose complexity (unlike that of indoor scenes) is dominated by the presence of other scene elements (mainly cars and pedestrians), we focus on the above-mentioned challenges in object-centric tasks. We then situate our contributions appropriately in a fast-paced literature, supporting our claims with extensive experimental analysis that leverages up-to-date state-of-the-art results and community-adopted benchmarks.
Adaptive-SpikeNet: Event-based Optical Flow Estimation using Spiking Neural Networks with Learnable Neuronal Dynamics
Event-based cameras have recently shown great potential for high-speed motion
estimation owing to their ability to capture temporally rich information
asynchronously. Spiking Neural Networks (SNNs), with their neuro-inspired
event-driven processing, can efficiently handle such asynchronous data, while
neuron models such as the leaky integrate-and-fire (LIF) can keep track of the
quintessential timing information contained in the inputs. SNNs achieve this by
maintaining a dynamic state in the neuron memory, retaining important
information while forgetting redundant data over time. Thus, we posit that SNNs
would allow for better performance on sequential regression tasks compared to
similarly sized Analog Neural Networks (ANNs). However, deep SNNs are difficult
to train due to vanishing spikes at later layers. To this end, we propose an
adaptive fully-spiking framework with learnable neuronal dynamics to alleviate
the spike vanishing problem. We utilize surrogate gradient-based
backpropagation through time (BPTT) to train our deep SNNs from scratch. We
validate our approach for the task of optical flow estimation on the
Multi-Vehicle Stereo Event-Camera (MVSEC) dataset and the DSEC-Flow dataset.
Our experiments on these datasets show an average reduction of 13% in average
endpoint error (AEE) compared to state-of-the-art ANNs. We also explore several
down-scaled models and observe that our SNN models consistently outperform
similarly sized ANNs, offering 10%-16% lower AEE. These results demonstrate the
importance of SNNs for smaller models and their suitability at the edge. In
terms of efficiency, our SNNs offer substantial savings in network parameters
(48.3x) and computational energy (10.2x) while attaining ~10% lower AEE compared to state-of-the-art ANN implementations.
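To make the mechanism described above concrete, the sketch below shows an LIF neuron with a learnable leak and a surrogate-gradient spike function, in the spirit of the learnable neuronal dynamics the abstract describes. The fast-sigmoid surrogate and all constants are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass; a smooth surrogate gradient
    in the backward pass so BPTT can train through the spikes."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate: steep near the threshold, flat elsewhere.
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2

class AdaptiveLIF(nn.Module):
    """LIF layer with a learnable leak, so each layer can tune how fast
    its membrane potential decays (one way to fight vanishing spikes)."""
    def __init__(self, features, threshold=1.0):
        super().__init__()
        self.leak = nn.Parameter(torch.full((features,), 0.9))
        self.threshold = threshold

    def forward(self, current, v):
        v = self.leak.clamp(0.0, 1.0) * v + current   # leaky integration
        spikes = SpikeFn.apply(v - self.threshold)    # fire where v > threshold
        v = v - spikes * self.threshold               # soft reset after a spike
        return spikes, v
```

Training unrolls this update over the time steps of the event sequence and backpropagates through time, which is exactly where the surrogate gradient is needed.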
Object Detection and Classification in the Visible and Infrared Spectrums
The overarching theme of this dissertation is the development of automated detection and/or classification systems for challenging infrared scenarios. The six works presented herein can be categorized into four problem scenarios. In the first scenario, long-distance detection and classification of vehicles in thermal imagery, a custom convolutional network architecture is proposed for small thermal target detection. For the second scenario, thermal face landmark detection and thermal cross-spectral face verification, a publicly available visible and thermal face dataset is introduced, along with benchmark results for several landmark detection and face verification algorithms. Furthermore, a novel visible-to-thermal transfer learning algorithm for face landmark detection is presented. The third scenario addresses near-infrared cross-spectral periocular recognition with a coupled conditional generative adversarial network guided by auxiliary synthetic loss functions. Finally, a deep sparse feature selection and fusion method is proposed to detect the presence of textured contact lenses prior to near-infrared iris recognition.
FSNet: Redesign Self-Supervised MonoDepth for Full-Scale Depth Prediction for Autonomous Driving
Predicting accurate depth with monocular images is important for low-cost
robotic applications and autonomous driving. This study proposes a
comprehensive self-supervised framework for accurate scale-aware depth
prediction on autonomous driving scenes utilizing inter-frame poses obtained
from inertial measurements. In particular, we introduce a Full-Scale depth
prediction network named FSNet. FSNet contains four important improvements over
existing self-supervised models: (1) a multichannel output representation for
stable training of depth prediction in driving scenarios, (2) an
optical-flow-based mask designed for dynamic object removal, (3) a
self-distillation training strategy to augment the training process, and (4) an
optimization-based post-processing algorithm at test time, fusing the results
from visual odometry. With this framework, robots and vehicles with only one
well-calibrated camera can collect sequences of training image frames and
camera poses, and infer accurate 3D depths of the environment without extra
labeling work or 3D data. Extensive experiments on the KITTI dataset, KITTI-360
dataset and the nuScenes dataset demonstrate the potential of FSNet. More
visualizations are presented at https://sites.google.com/view/fsnet/home.
Comment: 12 pages; conditionally accepted by IEEE T-AS
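The optical-flow-based mask listed as improvement (2) can be pictured with a short sketch. The function below is a hypothetical illustration rather than FSNet's implementation: it compares the rigid flow implied by camera motion and predicted depth against the measured optical flow, and masks out pixels where the two disagree, since such pixels likely belong to independently moving objects that violate the static-scene assumption of photometric self-supervision.

```python
import torch

def dynamic_object_mask(rigid_flow, optical_flow, rel_thresh=0.25):
    """Sketch of an optical-flow-based dynamic-object mask.

    rigid_flow:   (B, 2, H, W) flow implied by camera motion + depth
    optical_flow: (B, 2, H, W) flow estimated from the images
    Returns a (B, 1, H, W) mask: 1 = static pixel, 0 = dynamic pixel."""
    diff = (rigid_flow - optical_flow).norm(dim=1, keepdim=True)
    mag = optical_flow.norm(dim=1, keepdim=True).clamp(min=1e-3)
    return (diff / mag < rel_thresh).float()  # relative disagreement test
```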
Visual Guidance for Unmanned Aerial Vehicles with Deep Learning
Unmanned Aerial Vehicles (UAVs) have been widely applied in the military and civilian domains. In recent years, the operation mode of UAVs has been evolving from teleoperation to autonomous flight. In order to fulfill the goal of autonomous flight, a reliable guidance system is essential. Since the combination of the Global Positioning System (GPS) and an Inertial Navigation System (INS) cannot sustain autonomous flight in situations where GPS is degraded or unavailable, using computer vision as a primary method for UAV guidance has been widely explored. Moreover, GPS does not provide the robot with any information on the presence of obstacles.
Stereo cameras have a complex architecture and need a minimum baseline to generate a disparity map. By contrast, monocular cameras are simple and require fewer hardware resources. Benefiting from state-of-the-art Deep Learning (DL) techniques, especially Convolutional Neural Networks (CNNs), a monocular camera is sufficient to extract mid-level visual representations such as depth maps and optical flow (OF) maps from the environment. Therefore, the objective of this thesis is to develop a real-time visual guidance method for UAVs in cluttered environments using a monocular camera and DL.
The three major tasks performed in this thesis are investigating the development of DL techniques and monocular depth estimation (MDE), developing real-time CNNs for MDE, and developing visual guidance methods on the basis of the developed MDE system. A comprehensive survey is conducted, which covers Structure from Motion (SfM)-based methods, traditional handcrafted feature-based methods, and state-of-the-art DL-based methods. More importantly, it also investigates the application of MDE in robotics. Based on the survey, two CNNs for MDE are developed. In addition to promising accuracy, these two CNNs run at high frame rates (126 fps and 90 fps, respectively) on a single modest-power Graphics Processing Unit (GPU).
As regards the third task, the visual guidance for UAVs is first developed on top of the designed MDE networks. To improve the robustness of UAV guidance, OF maps are integrated into the developed visual guidance method. A cross-attention module is applied to fuse the features learned from the depth maps and OF maps. The fused features are then passed through a deep reinforcement learning (DRL) network to generate the policy for guiding the flight of the UAV. Additionally, a simulation framework is developed that integrates AirSim, Unreal Engine, and PyTorch. The effectiveness of the developed visual guidance method is validated through extensive experiments in the simulation framework.
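As a rough sketch of the cross-attention fusion step just described (dimensions and names are illustrative, not taken from the thesis), depth-feature tokens can query optical-flow tokens through a standard attention layer before the result is handed to the DRL policy network:

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Fuse depth-map features with optical-flow features: depth tokens
    act as queries over flow tokens, with a residual connection."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, depth_feats, flow_feats):
        # Both inputs: (B, N, dim) token sequences, e.g. flattened
        # CNN feature maps from the depth and OF branches.
        fused, _ = self.attn(depth_feats, flow_feats, flow_feats)
        return self.norm(depth_feats + fused)
```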
Computer Vision Strategies for Pose Estimation in the Context of Industrial Robotic Applications: Advances in the Use of Both Classical and Deep Learning Models on 2D Images
Computer vision is an enabling technology that allows robots and autonomous systems to perceive their environment. Within the context of Industry 4.0 and 5.0, computer vision is essential for the automation of industrial processes. Among computer vision techniques, object detection and 6D pose estimation are two of the most important for automating industrial processes. There are two main approaches to these challenges: classical methods and deep learning methods. Classical methods are robust and accurate, but require a great deal of expert knowledge to develop. Deep learning methods, on the other hand, are easy to develop, but require large amounts of data for training. This thesis presents a review of the literature on computer vision techniques for object detection and 6D pose estimation. In addition, the following challenges are addressed: (1) pose estimation using classical vision techniques, (2) transfer learning from 2D to 3D models, (3) the use of synthetic data to train deep learning models, and (4) the combination of classical and deep learning techniques. To this end, contributions addressing these challenges have been published in high-impact journals.
Neural Reflectance Decomposition
Creating relightable objects from images or collections is a fundamental challenge in computer vision and graphics. This problem is also known as inverse rendering. One of the main challenges in this task is the high ambiguity. The creation of images from 3D objects is well defined as rendering. However, multiple properties such as shape, illumination, and surface reflectiveness influence each other. Additionally, an integration of these influences is performed to form the final image. Reversing these integrated dependencies is highly ill-posed and ambiguous. However, solving the task is essential, as the automated creation of relightable objects has various applications in online shopping, augmented reality (AR), virtual reality (VR), games, or movies.
In this thesis, we propose two approaches to solve this task. First, a network architecture is discussed, which generalizes the decomposition of a two-shot capture of an object from large training datasets. The degree of novel view synthesis is limited as only a singular perspective is used in the decomposition. Therefore, the second set of approaches is proposed, which decomposes a set of 360-degree images. These multi-view images are optimized per object, and the result can be directly used in standard rendering software or games. We achieve this by extending recent research on Neural Fields, which can store information in a 3D neural volume. Leveraging volume rendering techniques, we can optimize a reflectance field from in-the-wild image collections without any ground truth (GT) supervision.
Our proposed methods achieve state-of-the-art decomposition quality and enable novel capture setups where objects can be under varying illumination or in different locations, which is typical for online image collections.
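The volume-rendering techniques the abstract refers to reduce, per camera ray, to the standard quadrature C = Σ_i T_i (1 − exp(−σ_i δ_i)) c_i with transmittance T_i = exp(−Σ_{j<i} σ_j δ_j). The sketch below implements this well-known compositing step as background; it is not the authors' code, and here the per-sample colors would come from the decomposed reflectance shaded under the estimated illumination.

```python
import torch

def composite_along_ray(densities, colors, deltas):
    """Standard volume-rendering quadrature used by neural fields.

    densities: (N,) non-negative sigma at each sample along the ray
    colors:    (N, 3) radiance at each sample
    deltas:    (N,) distances between consecutive samples"""
    alpha = 1.0 - torch.exp(-densities * deltas)       # per-sample opacity
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=0)  # survival probability
    trans = torch.cat([torch.ones(1), trans[:-1]])     # shift so T_1 = 1
    weights = trans * alpha
    return (weights.unsqueeze(-1) * colors).sum(dim=0)  # (3,) pixel color
```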
Event Fusion Photometric Stereo Network
We present a novel method to estimate the surface normal of an object in an
ambient light environment using RGB and event cameras. Modern photometric
stereo methods rely on an RGB camera, mainly in a dark room, to avoid ambient
illumination. To alleviate the limitations of the darkroom environment and to
use essential light information, we employ an event camera with a high dynamic
range and low latency. This is the first study that uses an event camera for
the photometric stereo task, one that works with continuous light sources in an ambient light environment. In this work, we also curate a novel photometric
stereo dataset that is constructed by capturing objects with event and RGB
cameras under numerous ambient light conditions. Additionally, we propose a
novel framework named Event Fusion Photometric Stereo Network~(EFPS-Net), which
estimates the surface normals of an object using both RGB frames and event
signals. Our proposed method interpolates the event observation maps, which encode light information from sparse event signals, to acquire dense light information. Subsequently, the event-interpolated observation maps are fused
with the RGB observation maps. Extensive experiments show that EFPS-Net outperforms state-of-the-art methods on a real-world dataset where ambient light is present. Consequently, we demonstrate that incorporating additional modalities with EFPS-Net alleviates the limitations arising from ambient illumination.
Comment: 33 pages, 11 figures
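The interpolation-and-fusion step can be pictured with a minimal sketch. The snippet below is a hypothetical stand-in assuming per-light observation maps stored as float tensors: it densifies the sparse event observation maps with a normalized box filter and then averages them with the RGB maps, whereas EFPS-Net itself performs the fusion with a learned network.

```python
import torch
import torch.nn.functional as F

def fuse_observation_maps(event_obs, rgb_obs, kernel=5):
    """event_obs, rgb_obs: (L, H, W) float tensors, one observation map
    per light direction; zeros in event_obs mark missing observations."""
    valid = (event_obs != 0).float()
    w = torch.ones(1, 1, kernel, kernel)
    # Normalized box filter: average only over valid (nonzero) pixels.
    num = F.conv2d(event_obs.unsqueeze(1), w, padding=kernel // 2)
    den = F.conv2d(valid.unsqueeze(1), w, padding=kernel // 2).clamp(min=1e-6)
    event_dense = (num / den).squeeze(1)   # interpolated event maps
    return 0.5 * (event_dense + rgb_obs)   # naive fusion for illustration
```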