How do neural networks see depth in single images?
Deep neural networks have led to a breakthrough in depth estimation from
single images. Recent work often focuses on the accuracy of the depth map,
where an evaluation on a publicly available test set such as the KITTI vision
benchmark is often the main result of the article. While such an evaluation
shows how well neural networks can estimate depth, it does not show how they do
this. To the best of our knowledge, no work currently exists that analyzes what
these networks have learned.
In this work we take the MonoDepth network by Godard et al. and investigate
what visual cues it exploits for depth estimation. We find that the network
ignores the apparent size of known obstacles in favor of their vertical
position in the image. Using the vertical position requires the camera pose to
be known; however, we find that MonoDepth only partially corrects for changes in
camera pitch and roll and that these influence the estimated depth towards
obstacles. We further show that MonoDepth's use of the vertical image position
allows it to estimate the distance towards arbitrary obstacles, even those not
appearing in the training set, but that it requires a strong edge at the ground
contact point of the object to do so. In future work we will investigate
whether these observations also apply to other neural networks for monocular
depth estimation.
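For intuition, the vertical-position cue follows from flat-ground projective geometry: for a pinhole camera at known height, the image row of an obstacle's ground contact point determines its depth. The sketch below illustrates this relation only; it is not MonoDepth's internal computation, and the focal length, principal point, and camera height are assumed values.

```python
import numpy as np

def depth_from_ground_contact(v, f=720.0, v0=240.0, cam_height=1.65, pitch=0.0):
    """Depth of a ground contact point at image row v (pixels from the top).

    Assumes a pinhole camera at height cam_height above a flat ground plane;
    f is the focal length in pixels and v0 the horizon row at zero pitch.
    A nonzero pitch (radians, positive = nose down) shifts the effective
    horizon, which is why uncorrected pitch changes bias the estimate.
    """
    ray_angle = np.arctan2(v - v0, f) + pitch  # viewing-ray angle below horizontal
    if ray_angle <= 0:
        return np.inf                          # ray at or above the horizon
    return cam_height / np.tan(ray_angle)

# Rows lower in the image (larger v) map to closer ground points.
for v in (260.0, 320.0, 440.0):
    print(f"row {v:.0f}: depth {depth_from_ground_contact(v):.1f} m")
```

The relation also makes explicit why the cue needs a visible ground contact edge and a known camera pose: without the contact row v, or with an uncorrected pitch offset, the recovered depth is undefined or biased.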
Controlling spacecraft landings with constantly and exponentially decreasing time-to-contact
Two bio-inspired landing strategies are studied. Both strategies enforce a constant ventral optic flow combined with, respectively, (1) a constantly decreasing time-to-contact or (2) an exponentially decreasing time-to-contact. Until now, these strategies have only been studied assuming the visual quantities to be known, i.e., without sensor noise and delay. In this study, the control laws executing the aforementioned landing strategies are studied both theoretically and empirically, taking into account the actual extraction of the visual cues from images.
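As a concrete illustration, the sketch below simulates only the vertical channel of strategy (2), tracking an exponentially decreasing time-to-contact reference with an assumed proportional control law and idealized, noise-free measurements; the ventral-optic-flow channel and the noise and delay effects studied here are omitted, and the dynamics, gains, and initial conditions are assumptions.

```python
import numpy as np

g, dt = 9.81, 0.01
tau0, T = 10.0, 4.0                   # initial time-to-contact [s], decay constant [s]
k = 2.0                               # proportional velocity-loop gain (assumed)

h, v, t = 100.0, -100.0 / tau0, 0.0   # start consistent with tau(0) = tau0
while h > 0.1:
    tau_ref = tau0 * np.exp(-t / T)   # exponentially decreasing reference
    v_des = -h / tau_ref              # descent speed that realizes tau = tau_ref
    u = k * (v_des - v) + g           # thrust acceleration, gravity compensated
    v += (u - g) * dt                 # vertical dynamics: v_dot = u - g
    h += v * dt
    t += dt

print(f"touchdown after {t:.1f} s at {abs(v):.3f} m/s")
```

Because the reference time-to-contact shrinks faster than the height decays, the commanded descent speed tends toward zero near the ground, yielding a soft touchdown in the idealized case.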
Evolving Spiking Neural Networks to Mimic PID Control for Autonomous Blimps
In recent years, Artificial Neural Networks (ANN) have become a standard in
robotic control. However, a significant drawback of large-scale ANNs is their
increased power consumption. This becomes a critical concern when designing
autonomous aerial vehicles, given the stringent constraints on power and
weight. Especially in the case of blimps, known for their extended endurance,
power-efficient control methods are essential. Spiking neural networks (SNN)
can provide a solution, facilitating energy-efficient and asynchronous
event-driven processing. In this paper, we have evolved SNNs for accurate
altitude control of a non-neutrally buoyant indoor blimp, relying solely on
onboard sensing and processing power. The blimp's altitude tracking performance
significantly improved compared to prior research, showing reduced oscillations
and a minimal steady-state error. The parameters of the SNNs were optimized via
an evolutionary algorithm, using a Proportional-Integral-Derivative (PID)
controller as the target signal. We developed two complementary SNN controllers
while examining various hidden layer structures. The first controller responds
swiftly to control errors, mitigating overshooting and oscillations, while the
second minimizes steady-state errors due to non-neutral buoyancy-induced drift.
Despite the blimp's drivetrain limitations, our SNN controllers ensured stable
altitude control, employing only 160 spiking neurons.
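As a rough illustration of the training setup described above, the sketch below evolves the parameters of a small leaky integrate-and-fire (LIF) network to reproduce a PID controller's output on a synthetic error signal. The PID gains, network size, spike encoding, and the simple (1+1) hill climb are all assumptions made for illustration; the paper's actual controllers and evolutionary algorithm are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.01, 10.0
t = np.arange(0.0, T, dt)
err = np.sin(0.5 * t) + 0.3 * rng.standard_normal(t.size)  # synthetic altitude error

# PID target signal (gains are assumed, not the paper's)
kp, ki, kd = 1.0, 0.2, 0.05
pid = kp * err + ki * np.cumsum(err) * dt + kd * np.gradient(err, dt)

N = 16  # hidden LIF neurons (the paper reports 160 in total across its controllers)

def snn_output(params, err):
    """Simulate a one-layer LIF network driven by the error signal."""
    w_in, w_out, tau_m = params["w_in"], params["w_out"], params["tau_m"]
    v = np.zeros(N)
    out = np.empty(err.size)
    for i, e in enumerate(err):
        v += dt / tau_m * (-v + w_in * e)  # leaky integration of the input current
        spikes = v > 1.0                   # threshold crossing
        v[spikes] = 0.0                    # reset after spiking
        out[i] = w_out @ spikes            # decode spikes to a control value
    return out

def mutate(p, sigma=0.05):
    return {k: val + sigma * rng.standard_normal(val.shape) for k, val in p.items()}

# (1+1) evolutionary hill climb toward the PID target (a stand-in for the
# paper's evolutionary algorithm, whose details are not reproduced here)
best = {"w_in": rng.standard_normal(N), "w_out": 0.1 * rng.standard_normal(N),
        "tau_m": np.full(N, 0.1)}
best_loss = np.mean((snn_output(best, err) - pid) ** 2)
for _ in range(200):
    cand = mutate(best)
    cand["tau_m"] = np.clip(cand["tau_m"], 1e-3, None)  # keep time constants valid
    loss = np.mean((snn_output(cand, err) - pid) ** 2)
    if loss < best_loss:
        best, best_loss = cand, loss
print(f"best MSE vs. PID target: {best_loss:.4f}")
```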
- …