Synthetic Aperture Anomaly Imaging
Previous research has shown that, in the presence of foliage occlusion, anomaly detection performs significantly better on integral images resulting from synthetic aperture imaging than on conventional aerial images. In this article, we hypothesize and demonstrate that integrating detected anomalies is even more effective than detecting anomalies in integrals. This results in enhanced occlusion removal, outlier suppression, and higher chances of visually as well as computationally detecting targets that are otherwise occluded. Our hypothesis was validated through both simulations and field experiments. We also present a real-time application that makes our findings practically available to blue-light organizations and others using commercial drone platforms. It is designed to address use cases that suffer from strong occlusion caused by vegetation, such as search and rescue, wildlife observation, early wildfire detection, and surveillance.
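The two pipelines compared in the abstract can be sketched in a few lines: build an integral image by averaging pre-registered single images and detect anomalies in it (pipeline A), versus detecting anomalies in each single image and averaging the resulting anomaly maps (pipeline B). This is a minimal illustrative sketch, not the authors' implementation: the occlusion model, the simple global z-score detector standing in for a real anomaly detector, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate N pre-registered aerial images of one ground patch: a bright
# "target" region plus random per-image foliage occlusion (values assumed).
N, H, W = 30, 64, 64
target = np.zeros((H, W))
target[30:34, 30:34] = 1.0                      # hypothetical hidden target
frames = []
for _ in range(N):
    occluded = rng.random((H, W)) < 0.8         # ~80% of pixels blocked per image
    scene = 0.1 + target                        # unoccluded ground radiance
    foliage = rng.normal(0.5, 0.1, (H, W))      # occluder radiance
    frames.append(np.where(occluded, foliage, scene))
frames = np.stack(frames)

def anomaly_map(img):
    """Global z-score detector: a crude stand-in for RX-style detectors."""
    return np.abs(img - img.mean()) / (img.std() + 1e-9)

# Pipeline A: integrate first (synthetic aperture), then detect anomalies.
integral = frames.mean(axis=0)
detect_in_integral = anomaly_map(integral)

# Pipeline B: detect anomalies per single image, then integrate the maps.
integrate_anomalies = np.stack([anomaly_map(f) for f in frames]).mean(axis=0)
```

With these toy numbers both pipelines raise the target region above the background; the article's hypothesis concerns pipeline B doing so more robustly under strong occlusion.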
Stereoscopic Depth Perception Through Foliage
Both humans and computational methods struggle to discriminate the depths of
objects hidden beneath foliage. However, such discrimination becomes feasible
when we combine computational optical synthetic aperture sensing with the human
ability to fuse stereoscopic images. For object identification tasks, as
required in search and rescue, wildlife observation, surveillance, and early
wildfire detection, depth assists in differentiating true from false findings,
such as people, animals, or vehicles vs. sun-heated patches at the ground level
or in the tree crowns, or ground fires vs. tree trunks. We used video captured
by a drone above dense woodland to test users' ability to discriminate depth.
We found that this is impossible when viewing monoscopic video and relying on
motion parallax. The same was true with stereoscopic video because of the
occlusions caused by foliage. However, when synthetic aperture sensing was used
to reduce occlusions and disparity-scaled stereoscopic video was presented,
human observers successfully discriminated depth, whereas computational
(stereoscopic matching) methods remained unsuccessful. This shows the potential
of systems which exploit the synergy between computational methods and human
vision to perform tasks that neither can perform alone.
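The role of disparity scaling can be illustrated with the standard stereo relation d = f·b/z: disparity is proportional to the baseline, so a wider synthetic stereo baseline magnifies small depth differences (ground level vs. tree crowns) into disparities that humans can readily fuse. The focal length, altitude, and baseline values below are invented for illustration and are not taken from the study.

```python
def disparity_px(focal_px: float, baseline_m: float, depth_m: float) -> float:
    """Stereo disparity in pixels: d = f * b / z."""
    return focal_px * baseline_m / depth_m

# Hypothetical drone scenario: camera with an 800 px focal length,
# ground at 35 m and tree crowns at 25 m below the camera.
f, ground, crown = 800.0, 35.0, 25.0

# Narrow (physical) baseline: ground/crown disparity difference is sub-pixel.
narrow = disparity_px(f, 0.1, crown) - disparity_px(f, 0.1, ground)

# A 10x wider synthetic baseline makes the same depth gap clearly fusible.
wide = disparity_px(f, 1.0, crown) - disparity_px(f, 1.0, ground)

print(round(narrow, 2), round(wide, 2))  # prints: 0.91 9.14
```

Since disparity is linear in the baseline, scaling the baseline by any factor scales all disparities by that same factor; the perceptual question the study addresses is whether the resulting stereoscopic video remains fusible once occlusions are reduced.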
Stereoscopic depth perception through foliage
Acknowledgements: This research was funded by the Austrian Science Fund (FWF) and German Research Foundation (DFG) under grant numbers P32185-NBL and I 6046-N, as well as by the State of Upper Austria and the Austrian Federal Ministry of Education, Science and Research via the LIT-Linz Institute of Technology under grant number LIT2019-8-SEE114.