
    Visual Cue based (vehicle to vehicle) cooperative positioning

    Formation flight helps multi-agent systems cooperate visually and accomplish missions effectively, but to achieve a good formation shape the agents forming the swarm need reliable inter-agent communication. Currently, the main communication channel between vehicles is radio frequency (RF); because of its various drawbacks, however, its use should be limited. This project therefore proposes an alternative: visual-cue-based communication. A set of autonomous vehicles is made to adopt a shape associated with a received visual marker; that is, a lead vehicle shows a marker to the followers, which then form the shape associated with that marker. First, a marker detection algorithm is developed. Then, depending on which marker has been identified, the corresponding formation is executed using a potential-function-based approach. The approach incorporates visual constraints to adapt it to the real scenario, and only relative distances and angles between vehicles are used in the potential functions. The whole thesis was developed in a simulation environment in Matlab and Python. The results show that, despite the visual constraints, the agents are able to position themselves in a formation with equal inter-agent distances.
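    A minimal sketch of such a potential-function controller, assuming a quadratic pairwise potential with its minimum at the desired inter-agent distance and a simple field-of-view gate standing in for the visual constraint (the function names, potential shape, and parameter values are illustrative assumptions, not the thesis code):

        import numpy as np

        def pairwise_gradient(xi, xj, d_des):
            """Gradient w.r.t. agent i of the potential V = (|xi - xj| - d_des)^2,
            minimized when the inter-agent distance equals d_des."""
            diff = xi - xj
            dist = np.linalg.norm(diff)
            if dist < 1e-9:                      # agents coincide: no defined direction
                return np.zeros_like(diff)
            return 2.0 * (dist - d_des) * diff / dist

        def formation_step(positions, d_des, fov_deg=360.0, gain=0.05):
            """One gradient-descent step using only relative positions and bearings.
            Narrowing fov_deg models a camera field of view: agent i ignores
            neighbours outside +/- fov_deg/2 of its heading (here the +x axis)."""
            new_positions = positions.copy()
            for i in range(len(positions)):
                grad = np.zeros(2)
                for j in range(len(positions)):
                    if i == j:
                        continue
                    dx, dy = positions[j] - positions[i]
                    bearing = np.degrees(np.arctan2(dy, dx))
                    if abs(bearing) <= fov_deg / 2.0:
                        grad += pairwise_gradient(positions[i], positions[j], d_des)
                new_positions[i] = positions[i] - gain * grad
            return new_positions

        # Toy run: three agents settle into an equilateral triangle of side 1.
        pts = np.random.rand(3, 2) * 2.0
        for _ in range(1000):
            pts = formation_step(pts, d_des=1.0)
        print([np.linalg.norm(pts[i] - pts[j]) for i, j in ((0, 1), (1, 2), (0, 2))])

    With fov_deg narrowed, each agent reacts only to neighbours it can see, which is the kind of visual constraint under which the thesis still reports convergence to equal inter-agent distances.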

    Plenoptic Signal Processing for Robust Vision in Field Robotics

    This thesis proposes the use of plenoptic cameras for improving the robustness and simplicity of machine vision in field robotics applications. Dust, rain, fog, snow, murky water and insufficient light can cause even the most sophisticated vision systems to fail. Plenoptic cameras offer an appealing alternative to conventional imagery by gathering significantly more light over a wider depth of field, and by capturing a rich 4D light field structure that encodes textural and geometric information. The key contributions of this work lie in exploring the properties of plenoptic signals and developing algorithms for exploiting them. It lays the groundwork for the deployment of plenoptic cameras in field robotics by establishing a decoding, calibration and rectification scheme appropriate to compact, lenslet-based devices. Next, the frequency-domain shape of plenoptic signals is elaborated and exploited by constructing a filter which focuses over a wide depth of field rather than at a single depth. This filter is shown to reject noise, improving contrast in low light and through attenuating media, while mitigating occluders such as snow, rain and underwater particulate matter. Next, a closed-form generalization of optical flow is presented which estimates camera motion directly from first-order derivatives. An adaptation of this "plenoptic flow" to lenslet-based imagery is demonstrated, as well as a simple, additive method for rendering novel views. Finally, the isolation of dynamic elements from a static background is considered, a task complicated by the non-uniform apparent motion caused by a mobile camera. Two closed-form solutions are presented, one for monocular time series and one for light field image pairs. This work emphasizes non-iterative, noise-tolerant, closed-form, linear methods with predictable, constant runtimes, making them suitable for real-time embedded implementation in field robotics applications.
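    Of the contributions listed, the depth-selective filtering is the easiest to illustrate compactly. The thesis constructs its filter in the frequency domain; the sketch below instead uses the standard spatial-domain shift-and-sum refocus on a 4D light field L[u, v, s, t], which shows the same underlying effect: samples from the chosen depth add coherently across the aperture, while noise and occluders visible in only a few views are averaged down. The array layout and the slope parameterization are my assumptions, not the thesis's implementation.

        import numpy as np

        def refocus(lightfield, slope):
            """Shift-and-sum refocus of a 4D light field L[u, v, s, t].

            Each sub-aperture image (fixed u, v) is translated in proportion to
            its position in the aperture plane, then all images are averaged.
            `slope` selects the depth plane: points at that depth align exactly
            and add coherently; everything else, including sensor noise and
            occluders such as rain or snow, is blurred away."""
            U, V, S, T = lightfield.shape
            out = np.zeros((S, T))
            for u in range(U):
                for v in range(V):
                    du = int(round(slope * (u - U // 2)))
                    dv = int(round(slope * (v - V // 2)))
                    out += np.roll(lightfield[u, v], shift=(du, dv), axis=(0, 1))
            return out / (U * V)

    Focusing over a range of depths rather than a single one, as the thesis's frequency-domain filter does, corresponds to accepting a fan of slopes rather than the single slope used here.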

    Depth estimation using the compound eye of dipteran flies (Biol Cybern, DOI 10.1007/s00422-006-0097-1)

    In the neural superposition eye of a dipteran fly, every ommatidium has eight photoreceptors, each associated with a rhabdomere, two central and six peripheral, which together form seven functional light guides. Groups of eight rhabdomeres in neighboring ommatidia have largely overlapping fields of view. Based on the hypothesis that the light signals collected by these rhabdomeres can be used individually, we investigated the feasibility of estimating 3D scene information. According to Pick (Biol Cybern 26:215–224, 1977), the visual axes of these rhabdomeres are not parallel but converge to a point 3–6 mm in front of the cornea. Such a structure could, in theory, estimate depth in a very simple way by assuming that locally the image intensity is well approximated by a linear function of the spatial coordinates. Using Pick's measurements, we performed simulation experiments to determine whether this is practically feasible. Our results indicate that depth estimation at small distances (up to about 1.5–2 cm) is reasonably accurate. This would allow the insect to obtain at least an ordinal spatial layout of its operational space when walking.
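    The depth recovery the paper simulates can be reduced to a one-dimensional toy: two rhabdomere axes cross at the convergence point and then spread apart, so under the locally linear intensity assumption I(x) ≈ a + b·x the intensity difference between the two signals encodes depth directly. The geometry and numbers below are illustrative simplifications, not Pick's measured axes:

        # Simplified 1-D geometry (illustrative values, not Pick's data).
        C = 4.5e-3   # visual axes converge ~4.5 mm in front of the cornea
        S = 30e-6    # lateral separation of the two rhabdomere apertures

        def axis_position(x0, z):
            """Lateral position at depth z of a visual axis starting at lateral
            offset x0 on the eye and passing through the convergence point (0, C)."""
            return x0 * (1.0 - z / C)

        def intensity(x, a=0.5, b=40.0):
            """The paper's key assumption: locally linear intensity I(x) = a + b*x."""
            return a + b * x

        def estimate_depth(delta_i, b):
            """Invert delta_i = b * S * (1 - z / C) for the depth z."""
            return C * (1.0 - delta_i / (b * S))

        # Two rhabdomeres view a surface at 1.5 cm; depth is recovered from
        # nothing but the intensity difference and the known local gradient b.
        z_true = 1.5e-2
        delta_i = (intensity(axis_position(+S / 2, z_true))
                   - intensity(axis_position(-S / 2, z_true)))
        print(estimate_depth(delta_i, b=40.0))   # -> 0.015

    Because the linear-intensity approximation holds only over small spatial offsets, and the offset between the two lines of sight grows with depth, the estimate degrades at larger distances, consistent with the paper's reported 1.5–2 cm working range.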