
    The Random Bit Complexity of Mobile Robots Scattering

    We consider the problem of scattering $n$ robots in a two-dimensional continuous space. As this problem is impossible to solve deterministically, all solutions must be probabilistic. We investigate the amount of randomness (that is, the number of random bits used by the robots) required to achieve scattering. We first prove that $n \log n$ random bits are necessary to scatter $n$ robots in any setting. We also give a sufficient condition for a scattering algorithm to be random bit optimal. As previous scattering solutions satisfy our condition, they are hence proved random bit optimal for the scattering problem. Then, we investigate the time complexity of scattering when strong multiplicity detection is not available. We prove that such algorithms cannot converge in constant time in the general case, nor in $o(\log \log n)$ rounds for random bit optimal scattering algorithms. However, we present a family of scattering algorithms that converge as fast as needed without using multiplicity detection. We also put forward a specific protocol in this family that is both random bit optimal ($n \log n$ random bits are used) and time optimal ($\log \log n$ rounds are used). This improves the time complexity of previous results in the same setting by a $\log n$ factor. Aside from characterizing the random bit complexity of mobile robot scattering, our study also closes its time complexity gap with and without strong multiplicity detection (that is, $O(1)$ time complexity is only achievable when strong multiplicity detection is available, and it is possible to approach it as needed otherwise).
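
    To make the bit budget concrete, here is a toy Python simulation (our own sketch, not the paper's protocol; the cell-subdivision model and per-round bit count are assumptions): each robot still sharing a cell draws about log2(n) fresh random bits to pick a sub-cell, and the total number of bits consumed until every robot is alone comes out on the order of n log n.

    import random

    def scatter_bits(n, seed=0):
        """Toy model: co-located robots repeatedly draw ~log2(n) random
        bits to pick a sub-cell; returns the total bits used until all
        robots occupy distinct cells. Not the protocol from the paper."""
        rng = random.Random(seed)
        k = max(1, (n - 1).bit_length())     # ~log2(n) bits per draw
        cells = [()] * n                     # each robot's bit-string address
        total_bits = 0
        while len(set(cells)) < n:           # some cell is still shared
            occupancy = {}
            for c in cells:
                occupancy[c] = occupancy.get(c, 0) + 1
            for i, c in enumerate(cells):
                if occupancy[c] > 1:         # robot i is not alone yet
                    cells[i] = c + tuple(rng.getrandbits(1) for _ in range(k))
                    total_bits += k
        return total_bits

    for n in (4, 16, 64, 256):
        print(n, scatter_bits(n))            # grows roughly like n*log2(n)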

    Local descriptors for visual SLAM

    We present a comparison of several local image descriptors in the context of visual Simultaneous Localization and Mapping (SLAM). In visual SLAM, a set of points in the environment is extracted from images and used as landmarks. The points are represented by local descriptors, which are used to resolve the association between landmarks. In this paper, we study the class separability of several descriptors under changes in viewpoint and scale. Several experiments were carried out using sequences of images of 2D and 3D scenes.
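
    As a concrete illustration of descriptor-based landmark association (a minimal sketch using OpenCV's ORB as a stand-in; the paper evaluates several descriptors, and the image filenames here are placeholders):

    import cv2

    # Match binary descriptors between two views of the same scene, as a
    # stand-in for landmark data association in visual SLAM.
    img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Hamming distance suits binary descriptors; cross-checking keeps only
    # mutual nearest neighbours, a cheap proxy for unambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    print(len(matches), "mutual matches")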

    A comparative evaluation of interest point detectors and local descriptors for visual SLAM

    In this paper we compare the behavior of different interest point detectors and descriptors under the conditions needed to use them as landmarks in vision-based simultaneous localization and mapping (SLAM). We evaluate the repeatability of the detectors, as well as the invariance and distinctiveness of the descriptors, under different perceptual conditions, using sequences of images representing planar objects as well as 3D scenes. We believe that this information will be useful when selecting an appropriate detector and descriptor for visual SLAM.
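
    A common way to score detector repeatability under a known viewpoint change is to map the keypoints of one view into the other with the ground-truth homography and count how many land near a detected point. Below is a minimal sketch of that measure (our formulation, not necessarily the paper's exact protocol):

    import numpy as np

    def repeatability(pts_a, pts_b, H, eps=2.0):
        """Fraction of keypoints from view A that reappear in view B
        within eps pixels after mapping by the known homography H."""
        pts_a = np.asarray(pts_a, dtype=float)   # shape (N, 2)
        pts_b = np.asarray(pts_b, dtype=float)   # shape (M, 2)
        ones = np.ones((len(pts_a), 1))
        proj = (H @ np.hstack([pts_a, ones]).T).T
        proj = proj[:, :2] / proj[:, 2:3]        # de-homogenise
        d = np.linalg.norm(proj[:, None, :] - pts_b[None, :, :], axis=2)
        return float(np.mean(d.min(axis=1) < eps))

    Passing the keypoint coordinates detected in each view, together with the known homography between them, yields a score in [0, 1] per perceptual condition.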

    Hectospec, the MMT's 300 Optical Fiber-Fed Spectrograph

    The Hectospec is a 300 optical fiber-fed spectrograph commissioned at the MMT in the spring of 2004. A pair of high-speed six-axis robots moves the 300 fiber buttons between observing configurations within ~300 s and to an accuracy of ~25 microns. The optical fibers run for 26 m between the MMT's focal surface and the bench spectrograph, which operates at R~1000-2000. Another high-dispersion bench spectrograph offering R~5000, Hectochelle, is also available. The system throughput, including all losses in the telescope optics, fibers, and spectrograph, peaks at ~10% at the grating blaze in 1" FWHM seeing. Correcting for aperture losses at the 1.5" diameter fiber entrance aperture, the system throughput peaks at ~17%. Hectospec has proven to be a workhorse instrument at the MMT: Hectospec and Hectochelle together have been scheduled for a third of the available nights since commissioning, and Hectospec has returned ~60,000 reduced spectra for 16 scientific programs during its first year of operation. Comment: 68 pages, 28 figures, to appear in December 2005 PASP.
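
    As a back-of-envelope consistency check of the two quoted throughput figures (our arithmetic, not from the paper), the implied fiber aperture coupling fraction in 1" seeing is roughly 10/17:

    # On-sky peak throughput includes aperture losses at the 1.5" fiber
    # entrance; correcting for them recovers the instrument-only figure.
    t_on_sky = 0.10       # peak system throughput in 1" FWHM seeing
    t_corrected = 0.17    # after correcting for fiber aperture losses
    print(f"implied coupling fraction ~ {t_on_sky / t_corrected:.2f}")  # ~0.59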