
    Surface Flow from Visual Cues

    In this paper we study the estimation of dense, instantaneous 3D motion fields over a non-rigidly moving surface observed by a multi-camera system. The motivation arises from multi-camera applications that require motion information for arbitrary subjects in order to perform tasks such as surface tracking or segmentation. To this end, we present a novel framework that efficiently computes dense 3D displacement fields using low-level visual cues and geometric constraints. The main contribution is a unified framework that combines flow constraints for small displacements with temporal feature constraints for large displacements, and fuses them over the surface using local rigidity constraints. The resulting linear optimization problem admits variational solutions and fast implementations. Experiments on synthetic and real data demonstrate the respective merits of flow and feature constraints, as well as their effectiveness in providing robust surface motion cues when combined.
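    The combination described above — dense flow constraints, sparse feature matches, and a local rigidity term fused into one linear problem — can be sketched as a small least-squares system. This is an illustrative stand-in, not the paper's actual formulation: the function name, the weights, and the reduction of rigidity to a graph-smoothness penalty are all assumptions.

```python
import numpy as np

def fuse_displacements(n, flow, matches, edges,
                       w_flow=1.0, w_match=4.0, w_rigid=2.0):
    """Least-squares fusion of dense flow estimates and sparse feature
    matches over a surface graph, with a local-rigidity (smoothness)
    regularizer. All names and weights are illustrative.

    n       : number of surface vertices
    flow    : (n, 3) per-vertex flow-based displacement estimates
    matches : dict {vertex_index: (3,) displacement} from sparse features
    edges   : list of (i, j) surface neighbor pairs
    """
    A = w_flow * np.eye(n)           # flow data term: d_i ~ flow_i
    b = w_flow * flow.copy()
    for j, m in matches.items():     # sparse feature term: d_j ~ m_j
        A[j, j] += w_match
        b[j] += w_match * np.asarray(m)
    for i, j in edges:               # rigidity term: d_i ~ d_j (graph Laplacian)
        A[i, i] += w_rigid; A[j, j] += w_rigid
        A[i, j] -= w_rigid; A[j, i] -= w_rigid
    return np.linalg.solve(A, b)     # one linear solve per frame
```

Because every term is quadratic, the system is symmetric positive-definite and a single linear solve (or a fast iterative/variational scheme, as in the paper) yields the fused displacement field.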

    The perception of surface layout during low level flight

    Although it is fairly well established that information about surface layout can be gained from motion cues, it is less clear what information humans can actually use and what specific information they should be provided with. Theoretical analyses tell us that the information is present in the stimulus. Further experiments are needed to verify that humans can use this information to extract surface layout from the 2D velocity flow field. The visual motion factors that can affect a pilot's ability to control an aircraft and to infer the layout of the terrain ahead are discussed

    Optimal sensing for fish school identification

    Fish schooling implies that swimmers are aware of their companions. In flow-mediated environments, in addition to visual cues, pressure and shear sensors on the fish body provide quantitative information about proximity to other swimmers. Here we examine the distribution of sensors on the surface of an artificial swimmer so that it can optimally identify a leading group of swimmers. We employ Bayesian experimental design coupled with the two-dimensional Navier-Stokes equations for multiple self-propelled swimmers. The follower tracks the school using information from its own surface pressure and shear stress. We demonstrate that the optimal sensor distribution on the follower is qualitatively similar to the distribution of neuromasts on fish. Our results show that it is possible to accurately identify the center of mass, and even the number, of the leading swimmers using surface information alone
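    The core idea of choosing sensor sites by Bayesian experimental design can be illustrated with a much simpler proxy: given simulated sensor readings for several candidate school configurations, greedily pick the sites that best discriminate between them. This is a hedged sketch only — the function name, the variance-based utility, and the synthetic readings are assumptions standing in for the paper's expected-information-gain criterion and Navier-Stokes simulations.

```python
import numpy as np

def greedy_sensor_placement(readings, k):
    """Greedy proxy for Bayesian optimal sensor placement.

    readings : (n_hypotheses, n_candidate_sites) array of simulated
               pressure/shear signals, one row per candidate school
               configuration (synthetic here).
    k        : number of sensors to place.

    Picks sites one at a time, each maximizing residual variance across
    hypotheses after projecting out already-chosen sites -- a simple
    stand-in for expected-information-gain utilities.
    """
    chosen = []
    R = readings - readings.mean(axis=0)    # center across hypotheses
    for _ in range(k):
        var = (R ** 2).sum(axis=0)
        var[chosen] = -np.inf               # never re-pick a site
        best = int(np.argmax(var))
        chosen.append(best)
        # project remaining columns onto the orthogonal complement
        u = R[:, best] / (np.linalg.norm(R[:, best]) + 1e-12)
        R = R - np.outer(u, u @ R)
    return chosen
```

The orthogonal projection step discourages picking redundant sites whose signals co-vary with sensors already placed.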

    Self-motion and the perception of stationary objects

    One of the ways we perceive shape is through seeing motion. Visual motion may be actively generated (for example, in locomotion) or passively observed. In the study of how we perceive 3D structure from motion (SfM), the non-moving, passive observer in an environment of moving rigid objects has been used as a substitute for an active observer moving in an environment of stationary objects; the 'rigidity hypothesis' has played a central role in computational and experimental studies of SfM. Here we demonstrate that this substitution is not fully adequate, because active observers perceive 3D structure differently from passive observers despite experiencing the same visual stimulus: active observers' perception of 3D structure depends on extra-visual self-motion information. Moreover, the visual system, making use of self-motion information, treats objects that are stationary (in an allocentric, earth-fixed reference frame) differently from objects that are merely rigid. These results show that action plays a central role in depth perception, and argue for a revision of the rigidity hypothesis to incorporate the special case of stationary objects

    Female mating preferences in blind cave tetras Astyanax fasciatus (Characidae, Teleostei).

    The Mexican tetra Astyanax fasciatus has evolved a variety of more or less colorless and eyeless cave populations. Here we examined the evolution of the female preference for large male body size within different populations of this species, either surface- or cave-dwelling. Given the choice between visual cues from a large and a small male, females from the surface form, as well as females from an eyed cave form, showed a strong preference for large males. When only non-visual cues were presented in darkness, surface females showed no preference for either male. Among the six cave populations studied, females of the eyed cave form and females of one of the five eyeless cave populations showed a preference for large males. Apparently, not all cave populations of Astyanax have evolved non-visual mating preferences. We discuss the role of selection through the benefits of non-visual mate choice in the evolution of non-visual mating preferences

    How do neural networks see depth in single images?

    Deep neural networks have led to a breakthrough in depth estimation from single images. Recent work often focuses on the accuracy of the depth map, where an evaluation on a publicly available test set such as the KITTI vision benchmark is typically the main result of the article. While such an evaluation shows how well neural networks can estimate depth, it does not show how they do this. To the best of our knowledge, no work currently exists that analyzes what these networks have learned. In this work we take the MonoDepth network by Godard et al. and investigate what visual cues it exploits for depth estimation. We find that the network ignores the apparent size of known obstacles in favor of their vertical position in the image. Using the vertical position requires the camera pose to be known; however, we find that MonoDepth only partially corrects for changes in camera pitch and roll, and that these influence the estimated depth towards obstacles. We further show that MonoDepth's use of the vertical image position allows it to estimate the distance towards arbitrary obstacles, even those not appearing in the training set, but that it requires a strong edge at the ground contact point of the object to do so. In future work we will investigate whether these observations also apply to other neural networks for monocular depth estimation
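    The vertical-position cue the abstract describes follows from standard flat-ground geometry: for a level camera at height h, a ground contact point at depth Z projects to a pixel offset of f·h/Z below the horizon. A short sketch makes explicit why an uncorrected pitch change biases the depth estimate (this is textbook geometry, not the network's actual computation; the function name is ours).

```python
import math

def ground_plane_depth(y_pixel, y_horizon, focal_px, cam_height_m,
                       pitch_rad=0.0):
    """Depth of a ground contact point from its vertical image position,
    assuming a flat ground plane. Illustrative of the cue, not of
    MonoDepth's internals.

    For a level camera: y_pixel - y_horizon = focal_px * cam_height_m / Z,
    hence Z = focal_px * cam_height_m / (y_pixel - y_horizon).
    A pitch change shifts the effective horizon by about
    focal_px * tan(pitch), so ignoring pitch biases the estimate.
    """
    y_h = y_horizon + focal_px * math.tan(pitch_rad)
    dy = y_pixel - y_h
    if dy <= 0:
        raise ValueError("point is at or above the horizon")
    return focal_px * cam_height_m / dy
```

For example, with a 700 px focal length, a 1.5 m camera height, and a contact point 175 px below the horizon, the recovered depth is 6 m; tilting the camera without correcting the horizon changes that answer, which matches the reported sensitivity to pitch.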

    Multi-touch 3D Exploratory Analysis of Ocean Flow Models

    Modern ocean flow simulations are generating increasingly complex, multi-layer 3D ocean flow models. However, most researchers still use traditional 2D visualizations to examine these models one slice at a time. Properly designed 3D visualization tools can be highly effective at revealing the complex, dynamic flow patterns and structures present in these models. The transition from visualizing ocean flow patterns in 2D to 3D, however, presents many challenges, including occlusion and depth ambiguity, with further complications arising from the interaction methods required to navigate, explore, and interact with these 3D datasets. We present a system that combines stereoscopic rendering, to best reveal and illustrate 3D structures and patterns, with multi-touch interaction, to allow natural and efficient navigation and manipulation within the 3D environment. Exploratory visual analysis is facilitated by a highly interactive toolset that leverages a smart particle system. Multi-touch gestures allow users to quickly position dye-emitting tools within the 3D model. Finally, we illustrate the potential applications of our system through examples of real-world significance
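    The dye-emitting tools mentioned above rest on particle advection through a velocity field. A minimal 2D sketch is below; the actual system is 3D, time-varying, and interactive, and the function name and midpoint integrator are our assumptions, not details from the paper.

```python
import numpy as np

def advect_particles(pos, velocity, dt, steps):
    """Advect dye particles through a steady flow field using midpoint
    (RK2) integration -- a minimal sketch of particle tracing behind
    dye-emitter tools.

    pos      : (n, 2) particle positions
    velocity : callable mapping (n, 2) positions to (n, 2) velocities
    """
    pos = np.array(pos, dtype=float)
    for _ in range(steps):
        k1 = velocity(pos)
        k2 = velocity(pos + 0.5 * dt * k1)   # sample at the midpoint
        pos = pos + dt * k2
    return pos
```

In a rigid-rotation test field v(x, y) = (-y, x), particles traced this way stay close to their original circular streamlines, which is the qualitative behavior a dye tool needs.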