Fly eyes are not still: a motion illusion in Drosophila flight supports parallel visual processing.
Most animals shift gaze by a 'fixate and saccade' strategy, where the fixation phase stabilizes background motion. A logical prerequisite for robust detection and tracking of moving foreground objects, therefore, is to suppress the perception of background motion. In a virtual reality magnetic tether system enabling free yaw movement, Drosophila implemented a fixate and saccade strategy in the presence of a static panorama. When the spatial wavelength of a vertical grating was below the Nyquist wavelength of the compound eyes, flies drifted continuously and gaze could not be maintained at a single location. Because the drift occurs from a motionless stimulus - thus any perceived motion stimuli are generated by the fly itself - it is illusory, driven by perceptual aliasing. Notably, the drift speed was significantly faster than under a uniform panorama, suggesting perceptual enhancement as a result of aliasing. Under the same visual conditions in a rigid-tether paradigm, wing steering responses to the unresolvable static panorama were not distinguishable from those to a resolvable static pattern, suggesting that visual aliasing is induced by ego motion. We hypothesized that obstructing the control of gaze fixation also disrupts the detection and tracking of objects. Using the illusory motion stimulus, we show that magnetically tethered Drosophila track objects robustly in flight even when gaze is not fixated and flies continuously drift. Taken together, our study provides further support for parallel visual motion processing and reveals the critical influence of body motion on visuomotor processing. Motion illusions can reveal important shared principles of information processing across taxa.
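The perceptual aliasing described in this abstract follows directly from sampling theory: a grating finer than the Nyquist wavelength of the sampling array is indistinguishable from a much coarser one. A minimal numerical sketch (illustrative values only, not the fly's actual ommatidial geometry):

```python
import numpy as np

# Illustrative sketch (values are NOT from the paper): sampling a static
# grating whose wavelength is below the Nyquist wavelength of the sampling
# array produces a spurious, much coarser pattern.

spacing = 1.0                        # photoreceptor spacing, arbitrary units
nyquist_wavelength = 2.0 * spacing   # shortest resolvable wavelength
n = 64
x = np.arange(n) * spacing

def dominant_wavelength(signal, spacing):
    """Wavelength of the strongest nonzero-frequency component."""
    spectrum = np.abs(np.fft.rfft(signal))
    spectrum[0] = 0.0                # ignore the DC component
    freqs = np.fft.rfftfreq(len(signal), d=spacing)
    return 1.0 / freqs[np.argmax(spectrum)]

# Resolvable grating: wavelength 8 > Nyquist wavelength, recovered faithfully.
resolvable = dominant_wavelength(np.sin(2 * np.pi * x / 8.0), spacing)

# Unresolvable grating: wavelength ~1.10 < Nyquist wavelength; the sampled
# signal is indistinguishable from a grating of wavelength ~10.7.
unresolvable = dominant_wavelength(np.sin(2 * np.pi * x * 58 / 64), spacing)

print(resolvable)    # 8.0: true wavelength recovered
print(unresolvable)  # ~10.67: an illusory coarse grating
```

The unresolvable grating's spatial frequency (58/64 cycles per sample) folds back across the Nyquist frequency to 6/64 cycles per sample, which is why a static fine pattern can be perceived as a coarse one.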
The Importance of Anti-Aliasing in Tiny Object Detection
Tiny object detection has gained considerable attention in the research
community owing to the frequent occurrence of tiny objects in numerous critical
real-world scenarios. However, convolutional neural networks (CNNs) used as the
backbone for object detection architectures typically neglect Nyquist's
sampling theorem during down-sampling operations, resulting in aliasing and
degraded performance. This is likely to be a particular issue for tiny objects
that occupy very few pixels and therefore have high spatial frequency features.
This paper applies an existing anti-aliasing approach, WaveCNet, to tiny
object detection. WaveCNet suppresses aliasing by replacing the standard
down-sampling operations in CNNs with Wavelet Pooling (WaveletPool)
layers. We modify the original WaveCNet to apply
WaveletPool in a consistent way in both pathways of the residual blocks in
ResNets. Additionally, we propose a bottom-heavy version of the backbone,
which further improves the performance of tiny object detection while also
reducing the required number of parameters by almost half. Experimental results
on the TinyPerson, WiderFace, and DOTA datasets demonstrate the importance of
anti-aliasing in tiny object detection and the effectiveness of the proposed
method, which achieves new state-of-the-art results on all three datasets. Code
and experimental results are released at
https://github.com/freshn/Anti-aliasing-Tiny-Object-Detection.git
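To make the idea concrete, here is a hedged sketch of the principle behind wavelet pooling in the Haar case: rather than strided subsampling, keep the low-low subband of a one-level Haar transform, which low-pass filters before decimating and therefore respects the sampling theorem. This is an illustration of the general technique, not the authors' implementation (the `haar_wavelet_pool` name is invented here).

```python
import numpy as np

# Hedged sketch of Haar wavelet pooling: 2x down-sampling via the
# low-low (LL) subband of a one-level 2D Haar transform. The LL subband
# is a local average, so high frequencies are attenuated before
# decimation instead of being folded back as aliasing artifacts.

def haar_wavelet_pool(feature_map):
    """Down-sample a (H, W) feature map by 2x using the Haar LL subband.

    H and W are assumed even. WaveletPool-style layers discard the
    high-frequency subbands (LH, HL, HH) during pooling.
    """
    a = feature_map[0::2, 0::2]      # top-left of each 2x2 block
    b = feature_map[0::2, 1::2]      # top-right
    c = feature_map[1::2, 0::2]      # bottom-left
    d = feature_map[1::2, 1::2]      # bottom-right
    # Orthonormal 2D Haar scaling coefficient: (a + b + c + d) / 2
    return (a + b + c + d) / 2.0

x = np.arange(16, dtype=float).reshape(4, 4)
print(haar_wavelet_pool(x))          # each output is half the 2x2 block sum
print(haar_wavelet_pool(x).shape)    # (2, 2)
```

In a residual block, applying the same pooling in both the main and shortcut pathways (as the modification described above suggests) keeps the two signals consistently filtered before they are summed.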
Auto-Mobiles: Optimised Message-Passing
Some message-passing concurrent systems, such as occam 2, prohibit aliasing of data objects. Communicated data must therefore be copied, which can be time-intensive for large data packets such as video frames. We introduce automatic mobility, a compiler optimisation that performs communications by reference and deduces when these communications can be performed without copying. We discuss bounds for speed-up and memory use, and benchmark the automatic mobility optimisation. We show that in the best case it can transform an operation from being linear in packet size to constant-time.
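The contrast between copying and mobile communication can be sketched as follows. This is a Python stand-in for the occam model described above, not the paper's compiler transformation; the class names and the one-element-list "holder" convention are inventions of this sketch.

```python
import copy
import queue

# Hedged sketch: a copying channel duplicates the packet (O(size)),
# while a "mobile" channel hands over the reference and invalidates the
# sender's copy (O(1)), preserving the no-aliasing guarantee because the
# sender can no longer reach the communicated data.

class CopyingChannel:
    def __init__(self):
        self._q = queue.Queue()

    def send(self, packet):
        self._q.put(copy.deepcopy(packet))   # O(len(packet)) copy

    def receive(self):
        return self._q.get()

class MobileChannel:
    def __init__(self):
        self._q = queue.Queue()

    def send(self, holder):
        # Transfer ownership: holder is a one-element list acting as a
        # mutable reference cell; clearing it revokes the sender's access.
        self._q.put(holder[0])               # O(1), no copy
        holder[0] = None

    def receive(self):
        return self._q.get()

frame = bytearray(1_000_000)   # e.g. a large video frame

mobile = MobileChannel()
holder = [frame]
mobile.send(holder)
received = mobile.receive()
print(received is frame)       # True: same object, nothing was copied
print(holder[0])               # None: the sender can no longer alias it
```

The best-case claim above corresponds to the `MobileChannel` path: the cost of `send` no longer depends on the packet size.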