Microarcsecond Radio Imaging using Earth Orbit Synthesis
The observed interstellar scintillation pattern of an intra-day variable
radio source is influenced by its source structure. If the velocity of the
interstellar medium responsible for the scattering is comparable to the
Earth's, the vector sum of these allows an observer to probe the scintillation
pattern of a source in two dimensions and, in turn, to probe two-dimensional
source structure on scales comparable to the angular scale of the scintillation
pattern, typically of order microarcseconds for weak scattering. We review the
theory of extracting an "image" from the scintillation properties of a source,
and show how Earth's orbital motion changes a source's observed scintillation
properties during the course of a year. The imaging process, which we call
Earth Orbit Synthesis, requires measurements of the statistical properties of
the scintillations at epochs spread throughout the course of a year.
Comment: ApJ in press, 25 pages, 7 figures
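The geometry behind Earth Orbit Synthesis can be sketched numerically: the effective scintillation velocity is the vector difference between the scattering screen's velocity and the observer's, and Earth's orbital motion rotates that vector over a year. The screen velocity below is an arbitrary illustrative value, not one from the paper, and the orbit is approximated as circular.

```python
import numpy as np

V_ORBIT = 29.8                    # Earth's mean orbital speed, km/s
v_ism = np.array([25.0, 10.0])    # assumed screen velocity (km/s), purely illustrative

def effective_velocity(day_of_year):
    """Effective scintillation velocity under a circular-orbit approximation."""
    phase = 2.0 * np.pi * day_of_year / 365.25
    v_earth = V_ORBIT * np.array([np.cos(phase), np.sin(phase)])
    return v_ism - v_earth

# The position angle of v_eff sweeps through the year, so the one-dimensional
# scan direction across the scintillation pattern changes, sampling it in 2-D.
for day in (0, 91, 182, 274):
    vx, vy = effective_velocity(day)
    print(f"day {day:3d}: |v_eff| = {np.hypot(vx, vy):5.1f} km/s, "
          f"PA = {np.degrees(np.arctan2(vy, vx)):6.1f} deg")
```

Measuring the scintillation timescale at epochs spread over the year then constrains the pattern, and hence the source structure, along many different directions.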
Event-based Simultaneous Localization and Mapping: A Comprehensive Survey
In recent decades, visual simultaneous localization and mapping (vSLAM) has
gained significant interest in both academia and industry. It estimates camera
motion and reconstructs the environment concurrently using visual sensors on a
moving robot. However, conventional cameras suffer from hardware limitations such
as motion blur and low dynamic range, which can degrade performance in
challenging scenarios like high-speed motion and high dynamic range
illumination. Recent studies have demonstrated that event cameras, a new type
of bio-inspired visual sensor, offer advantages such as high temporal
resolution, high dynamic range, low power consumption, and low latency. This paper
presents a timely and comprehensive review of event-based vSLAM algorithms that
exploit the benefits of asynchronous and irregular event streams for
localization and mapping tasks. The review covers the working principle of
event cameras and various event representations for preprocessing event data.
It also categorizes event-based vSLAM methods into four main categories:
feature-based, direct, motion-compensation, and deep learning methods, with
detailed discussions and practical guidance for each approach. Furthermore, the
paper evaluates the state-of-the-art methods on various benchmarks,
highlighting current challenges and future opportunities in this emerging
research area. A public repository will be maintained to keep track of the
rapid developments in this field at
https://github.com/kun150kun/ESLAM-survey
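The event representations mentioned above can be illustrated with a simple "time surface", one common way of turning an asynchronous stream of (x, y, t, polarity) events into a dense image for downstream vSLAM modules. This is a generic sketch of that representation, not code from the survey; the decay constant and event values are invented.

```python
import numpy as np

def time_surface(events, height, width, t_ref, tau=0.05):
    """Build a time surface at reference time t_ref.

    events: iterable of (x, y, t, polarity) tuples.
    Each pixel stores an exponentially decayed view of its most recent
    event timestamp; pixels with no event map to 0.
    """
    last_t = np.full((height, width), -np.inf)
    for x, y, t, _pol in events:
        if t <= t_ref:                       # ignore events after the reference time
            last_t[y, x] = max(last_t[y, x], t)
    surface = np.exp((last_t - t_ref) / tau)  # recent events -> values near 1
    surface[~np.isfinite(last_t)] = 0.0       # untouched pixels stay at 0
    return surface

# Toy stream: two events at pixel (2, 1), one at pixel (5, 3).
events = [(2, 1, 0.010, 1), (2, 1, 0.040, -1), (5, 3, 0.045, 1)]
s = time_surface(events, height=4, width=8, t_ref=0.050)
```

The resulting image is synchronous and dense, so conventional feature extractors or direct-alignment methods can consume it.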
Bioinspired symmetry detection on resource limited embedded platforms
This work is inspired by the vision of flying insects, which enables them to detect and locate a set of relevant objects with remarkable effectiveness despite very limited
brainpower. The bioinspired approach worked out here focuses on the detection of symmetric objects by resource-limited embedded platforms such as micro air vehicles. Symmetry detection is posed as a pattern-matching problem, which is solved by an approach based on composite correlation filters. Two variants of the approach are proposed, analysed and tested, in which symmetry detection is cast as 1) a static and 2) a dynamic pattern-matching problem. In the static variant, images of objects are input to two-dimensional spatial composite correlation filters. In the dynamic variant, a video (resulting from platform motion) is input to a composite correlation filter whose peak response is used to define symmetry. In both cases, a novel method is used to design the composite filter templates for symmetry detection. This method significantly reduces the level of detail that needs to be matched to achieve good detection performance. The resulting performance is systematically quantified using ROC analysis; it is demonstrated that the bioinspired detection approach outperforms, at lower computational cost, the best state-of-the-art solution hitherto available.
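The pattern-matching idea can be illustrated with a toy composite correlation filter: a single frequency-domain template built from several training views, whose correlation peak over a test image serves as the detection score. The simple average-of-spectra design below is a stand-in for the paper's (unspecified) filter-design method, and all sizes and data are invented for illustration.

```python
import numpy as np

def composite_filter(templates):
    """Average the conjugate spectra of equally sized training templates."""
    spectra = [np.conj(np.fft.fft2(t)) for t in templates]
    return np.mean(spectra, axis=0)

def correlation_peak(image, filt):
    """Peak of the circular cross-correlation between image and filter."""
    corr = np.real(np.fft.ifft2(np.fft.fft2(image) * filt))
    return corr.max()

rng = np.random.default_rng(0)
obj = rng.standard_normal((32, 32))              # stand-in "object" pattern
views = [obj, np.roll(obj, 3, axis=1)]           # two training views of it
filt = composite_filter(views)

score_match = correlation_peak(obj, filt)                          # high peak
score_noise = correlation_peak(rng.standard_normal((32, 32)), filt)  # low peak
```

A single FFT, multiply, and inverse FFT per frame is what makes this style of detector attractive on resource-limited embedded hardware.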
A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes
Animals avoid obstacles and approach goals in novel cluttered environments using visual information, notably optic flow, to compute heading, or direction of travel, with respect to objects in the environment. We present a neural model of how heading is computed that describes interactions among neurons in several visual areas of the primate magnocellular pathway, from retina through V1, MT+, and MSTd. The model produces outputs which are qualitatively and quantitatively similar to human heading estimation data in response to complex natural scenes. The model estimates heading to within 1.5° in random-dot or photo-realistically rendered scenes and within 3° in video streams from driving in real-world environments. Simulated rotations of less than 1 degree per second do not affect model performance, but faster simulated rotation rates degrade performance, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments.
National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016)
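As a much simpler point of comparison than the neural model, heading under pure translation can be recovered geometrically: optic-flow vectors radiate from the focus of expansion (FoE), which marks the heading direction in the image. Each flow vector (u, v) at pixel (x, y) constrains the FoE to the line through (x, y) with direction (u, v), so a least-squares intersection recovers it. The flow field below is synthetic and the sketch is not the paper's model.

```python
import numpy as np

def focus_of_expansion(xs, ys, us, vs):
    """Least-squares FoE from flow vectors (us, vs) at positions (xs, ys).

    Each sample contributes the line constraint v*ex - u*ey = v*x - u*y.
    """
    A = np.column_stack([vs, -us])
    b = vs * xs - us * ys
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe                      # (x, y) of the FoE in image coordinates

# Synthetic expanding flow about a known FoE at (12.0, -5.0).
rng = np.random.default_rng(1)
xs = rng.uniform(-50, 50, 200)
ys = rng.uniform(-50, 50, 200)
us = 0.1 * (xs - 12.0)
vs = 0.1 * (ys + 5.0)
foe = focus_of_expansion(xs, ys, us, vs)   # ≈ [12.0, -5.0]
```

Rotation breaks this pure-expansion geometry, which is one reason biological and neural-model solutions that tolerate modest rotation rates are of interest.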
Sparse variational regularization for visual motion estimation
The computation of visual motion is a key component in numerous computer vision tasks such as object detection, visual object tracking and activity recognition. Despite extensive research effort, efficient handling of motion discontinuities, occlusions and illumination changes still remains elusive in visual motion estimation. The work presented in this thesis utilizes variational methods to handle the aforementioned problems because these methods allow the integration of various mathematical concepts into a single energy minimization framework. This thesis applies the concepts from signal sparsity to the variational regularization for visual motion estimation. The regularization is designed in such a way that it handles motion discontinuities and can detect object occlusions.
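The energy-minimization framework can be illustrated on a one-dimensional analogue: a quadratic data term plus a sparsity-promoting regularizer on the signal's gradient, here the Charbonnier penalty (a smooth L1), which smooths noise while preserving discontinuities. This is a minimal illustration of the idea, not the thesis's actual motion-estimation energy; all parameter values are invented.

```python
import numpy as np

def charbonnier_denoise(g, lam=1.0, eps=0.1, step=0.01, iters=500):
    """Gradient descent on E(f) = sum (f-g)^2 + lam * sum sqrt((grad f)^2 + eps^2).

    The near-L1 penalty on gradients keeps step edges sharp (sparse gradients)
    while damping small oscillations.
    """
    f = g.copy()
    for _ in range(iters):
        d = np.diff(f)                                 # forward differences
        w = d / np.sqrt(d * d + eps * eps)             # Charbonnier derivative
        div = np.diff(np.concatenate([[0.0], w, [0.0]]))  # discrete divergence
        f -= step * (2.0 * (f - g) - lam * div)        # descend the energy
    return f

# Noisy step edge: the edge survives while the noise is smoothed away.
rng = np.random.default_rng(2)
g = np.concatenate([np.zeros(20), np.ones(20)]) + 0.05 * rng.standard_normal(40)
f = charbonnier_denoise(g)
```

Replacing the scalar signal with a flow field and the data term with a brightness-constancy residual turns this toy into the general shape of variational motion estimation with sparse regularization.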