
    A Distributed Algorithm for Gathering Many Fat Mobile Robots in the Plane

    In this work we consider the problem of gathering autonomous robots in the plane. In particular, we consider non-transparent unit-disc robots (i.e., fat robots) in an asynchronous setting. Vision is the only means of coordination. Using a state-machine representation, we formulate the gathering problem and develop a distributed algorithm that solves it for any number of robots. The main idea behind our algorithm is for the robots to reach a configuration in which all of the following hold: (a) the robots' centers form a convex hull on which every robot lies, (b) each robot can see all other robots, and (c) the configuration is connected, that is, every robot touches another robot and all robots together form a connected formation. We show that, starting from any initial configuration, the robots, making only local decisions and coordinating by vision, eventually reach such a configuration and terminate, yielding a solution to the gathering problem. Comment: 39 pages, 5 figures
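    Condition (a) above can be checked mechanically: a configuration satisfies it when no robot center lies strictly inside the convex hull of all centers. A minimal sketch (not the paper's algorithm; function names are illustrative):

```python
# Sketch: verify condition (a) -- every robot center is a convex-hull vertex.
# Uses Andrew's monotone chain with a cross-product orientation test.

def cross(o, a, b):
    """2D cross product of OA and OB; > 0 means a strict left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return the strict hull vertices in counterclockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def all_on_hull(centers):
    """True iff every center is a hull vertex (none strictly inside)."""
    return len(convex_hull(centers)) == len(set(centers))

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(all_on_hull(square))             # True: all four corners are vertices
print(all_on_hull(square + [(1, 1)]))  # False: (1, 1) is interior
```

    In the paper's setting the robots of course cannot run such a global check; each must infer the configuration from what it sees locally.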

    Robots with Lights: Overcoming Obstructed Visibility Without Colliding

    Robots with lights is a model of autonomous mobile computational entities operating in the plane in Look-Compute-Move cycles: each agent has an externally visible light which can assume colors from a fixed set; the lights are persistent (i.e., the color is not erased at the end of a cycle), but otherwise the agents are oblivious. The investigation of computability in this model, initially suggested by Peleg, is under way, and several results have recently been established. In these investigations, however, an agent is assumed to be capable of seeing through another agent. In this paper we begin the study of computing when visibility is obstructable, and investigate the most basic problem for this setting, Complete Visibility: the agents must reach, within finite time, a configuration in which they can all see each other, and terminate. We make no assumptions about a priori knowledge of the number of agents, rigidity of movements, or chirality. The local coordinate system of an agent may change at each activation. Also, by definition of lights, an agent can communicate and remember only a constant number of bits in each cycle. In spite of these weak conditions, we prove that Complete Visibility is always solvable, even in the asynchronous setting, without collisions and using a small constant number of colors. The proof is constructive. We also show how to extend our protocol for Complete Visibility so that, with the same number of colors, the agents solve the (non-uniform) Circle Formation problem with obstructed visibility.

    Efficient moving point handling for incremental 3D manifold reconstruction

    As incremental Structure from Motion algorithms become effective, a good sparse point cloud representing the map of the scene becomes available frame-by-frame. From the 3D Delaunay triangulation of these points, state-of-the-art algorithms build a rough manifold model of the scene. These algorithms incrementally integrate new points into the 3D reconstruction only if their position estimates do not change. Indeed, whenever a point moves in a 3D Delaunay triangulation, for instance because its estimate gets refined, a set of tetrahedra has to be removed and replaced with new ones to maintain the Delaunay property; managing the manifold reconstruction thus becomes complex and entails potentially significant overhead. In this paper we investigate different approaches and propose an efficient policy to deal with moving points in the manifold estimation process. We tested our approach on four sequences of the KITTI dataset and show the effectiveness of our proposal in comparison with state-of-the-art approaches. Comment: Accepted at the International Conference on Image Analysis and Processing (ICIAP 2015)
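    The core tension described above, retriangulating every time a point estimate is refined versus deferring integration until the estimate stabilizes, can be sketched as a simple gating policy. This is an illustrative toy, not the paper's implementation; the threshold and class names are assumptions:

```python
# Toy policy: only hand a 3D point to the (expensive) Delaunay/manifold update
# once its position estimate has converged, since moving a vertex forces
# removing and rebuilding all incident tetrahedra.
import math

class PointPolicy:
    def __init__(self, stability_eps=0.01):
        self.eps = stability_eps
        self.last = {}           # point id -> last position estimate
        self.integrated = set()  # ids already inserted into the mesh

    def update(self, pid, pos):
        """Return True when the point is stable enough to triangulate."""
        prev = self.last.get(pid)
        self.last[pid] = pos
        if prev is None:
            return False         # first sighting: wait for a second estimate
        if math.dist(prev, pos) < self.eps and pid not in self.integrated:
            self.integrated.add(pid)
            return True          # converged: safe to insert
        return False             # still moving: defer the mesh update

policy = PointPolicy()
print(policy.update(7, (1.00, 2.00, 0.50)))    # False: first estimate
print(policy.update(7, (1.20, 2.10, 0.48)))    # False: moved too much
print(policy.update(7, (1.201, 2.101, 0.48)))  # True: estimate converged
```

    The paper's actual contribution is a more refined policy evaluated on KITTI; the sketch only shows why gating on stability avoids repeated remove-and-reinsert cycles.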

    Mesh-based 3D Textured Urban Mapping

    In the era of autonomous driving, urban mapping is a core step in letting vehicles interact with the urban context. Successful mapping algorithms have been proposed in the last decade that build the map by leveraging data from a single sensor. The focus of the system presented in this paper is twofold: the joint estimation of a 3D map from lidar data and images, based on a 3D mesh, and its texturing. Indeed, even though most surveying vehicles used for mapping are equipped with both cameras and lidar, existing mapping algorithms usually rely on either images or lidar data; moreover, both image-based and lidar-based systems often represent the map as a point cloud, while a continuous textured mesh representation would be useful for visualization and navigation purposes. In the proposed framework, we combine the accuracy of the 3D lidar data with the dense information and appearance carried by the images, estimating a visibility-consistent map from the lidar measurements and refining it photometrically through the acquired images. We evaluate the proposed framework on the KITTI dataset and show the performance improvement with respect to two state-of-the-art urban mapping algorithms and two widely used surface reconstruction algorithms from Computer Graphics. Comment: accepted at IROS 201

    Tracking Streamer Blobs Into the Heliosphere

    In this paper, we use coronal and heliospheric images from the STEREO spacecraft to track streamer blobs into the heliosphere and to observe them being swept up and compressed by the fast wind from low-latitude coronal holes. From an analysis of their elongation/time tracks, we discover a 'locus of enhanced visibility' where neighboring blobs pass each other along the line of sight and their corotating spiral is seen edge on. The detailed shape of this locus accounts for a variety of east-west asymmetries and allows us to recognize the spiral of blobs by its signatures in the STEREO images: In the eastern view from STEREO-A, the leading edge of the spiral is visible as a moving wavefront where foreground ejections overtake background ejections against the sky and then fade. In the western view from STEREO-B, the leading edge is only visible close to the Sun-spacecraft line where the radial path of ejections nearly coincides with the line of sight. In this case, we can track large-scale waves continuously back to the lower corona and see that they originate as face-on blobs. Comment: 15 pages plus 11 figures; figure 6 shows the 'locus of enhanced visibility', which we call 'the bean'. (accepted by ApJ 4/02/2010)

    Method and apparatus for predicting the direction of movement in machine vision

    A computer-simulated cortical network is presented. The network is capable of computing the visibility of shifts in the direction of movement. Additionally, the network can compute the following: (1) the magnitude of the position difference between the test and background patterns; (2) localized contrast differences at different spatial scales, analyzed by computing temporal gradients of the difference and sum of the outputs of paired even- and odd-symmetric bandpass filters convolved with the input pattern; and (3) the direction of a test pattern moved relative to a textured background. The direction of movement of an object in the field of view of a robotic vision system is detected in accordance with nonlinear Gabor function algorithms. The movement of objects relative to their background is used to infer the 3-dimensional structure and motion of object surfaces.
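    The quadrature-pair idea in item (2) can be illustrated in one dimension: an even-symmetric and an odd-symmetric Gabor filter together encode local phase, and the sign of the phase change between two frames gives the direction of motion. A minimal sketch (not the patented network; frequency and sigma are illustrative):

```python
# 1D sketch: direction of motion from the phase of paired even/odd
# (quadrature) Gabor responses at the signal center.
import math

def gabor_pair(signal, freq=0.2, sigma=4.0):
    """Return (even, odd) Gabor responses at the center of the signal."""
    c = len(signal) // 2
    even = odd = 0.0
    for i, v in enumerate(signal):
        g = math.exp(-((i - c) ** 2) / (2 * sigma ** 2))
        even += v * g * math.cos(2 * math.pi * freq * (i - c))
        odd += v * g * math.sin(2 * math.pi * freq * (i - c))
    return even, odd

def phase(signal):
    even, odd = gabor_pair(signal)
    return math.atan2(odd, even)

def motion_direction(frame_t0, frame_t1):
    """Sign of the wrapped phase change: +1 rightward, -1 leftward."""
    dp = (phase(frame_t1) - phase(frame_t0) + math.pi) % (2 * math.pi) - math.pi
    return 1 if dp > 0 else -1

wave = [math.sin(2 * math.pi * 0.2 * i) for i in range(32)]
shifted = [math.sin(2 * math.pi * 0.2 * (i - 1)) for i in range(32)]  # moved right
print(motion_direction(wave, shifted))
```

    The patent's network works on 2D patterns at multiple spatial scales; the sketch only shows the phase-based direction test at a single scale.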

    Performance of astrometric detection of a hotspot orbiting on the innermost stable circular orbit of the galactic centre black hole

    The galactic central black hole Sgr A* exhibits outbursts of radiation in the near infrared (so-called IR flares). One model of these events consists of a hotspot orbiting on the innermost stable circular orbit (ISCO) of the hole. These outbursts can be used as a probe of the central gravitational potential. One main scientific goal of the second-generation VLTI instrument GRAVITY is to observe these flares astrometrically. Here, the astrometric precision of GRAVITY is investigated in imaging mode, which consists of analysing the image computed from the interferometric data. The capability of the instrument to reveal the motion of a hotspot orbiting on the ISCO of our central black hole is then discussed. We find that GRAVITY's astrometric precision for a single star in imaging mode is smaller than the Schwarzschild radius of Sgr A*. The instrument can also demonstrate that a body orbiting on the last stable orbit of the black hole is indeed moving, and it yields a typical size for the orbit, provided the source is as bright as m_K = 14. These results show that GRAVITY allows one to study the close environment of Sgr A*. Having access to the ISCO of the central massive black hole would likely allow constraining general relativity in its strong regime. Moreover, if the hotspot model is appropriate, the black hole spin can be constrained. Comment: 13 pages, 11 figures; accepted by MNRAS
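    The scale being claimed is easy to sanity-check: with the usual round numbers for Sgr A* (mass ~ 4e6 solar masses, distance ~ 8 kpc, values assumed here, not taken from the paper), the Schwarzschild radius subtends roughly 10 microarcseconds, which is the regime GRAVITY's astrometry targets:

```python
# Back-of-the-envelope angular size of the Schwarzschild radius of Sgr A*.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
pc = 3.086e16      # parsec, m

M = 4e6 * M_sun    # assumed Sgr A* mass
D = 8e3 * pc       # assumed distance to the galactic centre

r_s = 2 * G * M / c**2                            # Schwarzschild radius, m
theta_uas = r_s / D * (180 / math.pi) * 3600e6    # angular size, microarcsec
print(f"r_s ~ {r_s:.2e} m, angular size ~ {theta_uas:.0f} uas")
```

    The ISCO of a non-spinning black hole sits at 3 r_s, so resolving hotspot motion there requires astrometry at the few-tens-of-microarcseconds level, consistent with the abstract's claim.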

    Occlusion Handling using Semantic Segmentation and Visibility-Based Rendering for Mixed Reality

    Real-time occlusion handling is a major problem in outdoor mixed reality systems because it incurs a high computational cost, mainly due to the complexity of the scene. Using segmentation alone, it is difficult to accurately render a virtual object occluded by complex objects such as trees, bushes, etc. In this paper, we propose a novel occlusion handling method for real-time, outdoor, omni-directional mixed reality systems using only the information from a monocular image sequence. We first present a semantic segmentation scheme for predicting the amount of visibility for different types of objects in the scene. We also simultaneously calculate a foreground probability map using depth estimation derived from optical flow. Finally, we combine the segmentation result and the probability map to render the computer-generated object and the real scene using a visibility-based rendering method. Our results show great improvement in handling occlusions compared to existing blending-based methods.
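    The final compositing step described above can be sketched as per-pixel alpha blending, where the virtual object's opacity is driven jointly by the class visibility and the foreground probability. This is an assumed toy pipeline, not the paper's renderer; the maps and weighting rule are illustrative:

```python
# Toy compositing: the virtual object shows through where its class is
# see-through (high visibility) AND the real scene is unlikely to be in
# front of it (low foreground probability).

def composite(real, virtual, visibility, fg_prob):
    out = []
    for rr, vv, sr, pr in zip(real, virtual, visibility, fg_prob):
        row = []
        for r, v, s, p in zip(rr, vv, sr, pr):
            alpha = s * (1.0 - p)                 # virtual-object opacity
            row.append(alpha * v + (1.0 - alpha) * r)
        out.append(row)
    return out

real       = [[0.2, 0.2], [0.2, 0.2]]   # camera frame intensities
virtual    = [[1.0, 1.0], [1.0, 1.0]]   # rendered object intensities
visibility = [[1.0, 0.5], [1.0, 0.0]]   # e.g. sky vs. semi-dense bush vs. wall
fg_prob    = [[0.0, 0.0], [1.0, 0.0]]   # flow-derived foreground likelihood
print(composite(real, virtual, visibility, fg_prob))
```

    Soft visibility values (like 0.5 for foliage) are what let this approach fade the virtual object through complex occluders instead of making a hard segmentation cut.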