
    BatMobility: Towards Flying Without Seeing for Autonomous Drones

    Unmanned aerial vehicles (UAVs) rely on optical sensors such as cameras and lidar for autonomous operation. However, such optical sensors are error-prone in bad lighting, inclement weather conditions including fog and smoke, and around textureless or transparent surfaces. In this paper, we ask: is it possible to fly UAVs without relying on optical sensors, i.e., can UAVs fly without seeing? We present BatMobility, a lightweight mmWave radar-only perception system for UAVs that eliminates the need for optical sensors. BatMobility enables two core functionalities for UAVs -- radio flow estimation (a novel FMCW radar-based alternative to optical flow based on surface-parallel Doppler shift) and radar-based collision avoidance. We build BatMobility using commodity sensors and deploy it as a real-time system on a small off-the-shelf quadcopter running an unmodified flight controller. Our evaluation shows that BatMobility achieves comparable or better performance than commercial-grade optical sensors across a wide range of scenarios.
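
    For background on why Doppler measurements can substitute for optical flow, the textbook narrowband radar relation (not BatMobility's actual radio-flow estimator) ties the Doppler shift of a ground scatterer to the surface-parallel speed v and the look angle θ between the velocity and the line of sight:

```latex
% Textbook Doppler relation for a ground scatterer at look angle \theta
% from the direction of surface-parallel motion (illustrative only):
f_d = \frac{2\,v\cos\theta}{\lambda}
\qquad\Longrightarrow\qquad
v = \frac{f_d\,\lambda}{2\cos\theta}
```

    With the FMCW wavelength λ known, Doppler bins at known look angles therefore constrain the surface-parallel velocity in much the same way that optical flow constrains camera motion.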

    Visual Odometry and Control for an Omnidirectional Mobile Robot with a Downward-Facing Camera

    © 2010 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works. DOI: 10.1109/IROS.2010.5649749. Presented at the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 18-22 Oct. 2010, Taipei.
    An omnidirectional Mecanum base allows for more flexible mobile manipulation. However, slipping of the Mecanum wheels results in poor dead-reckoning estimates from wheel encoders, limiting the accuracy and overall utility of this type of base. We present a system with a downward-facing camera and light ring to provide robust visual odometry estimates. We mounted the system under the robot, which allows it to operate in conditions such as large crowds or low ambient lighting. We demonstrate that the visual odometry estimates are sufficient to generate closed-loop PID (Proportional Integral Derivative) and LQR (Linear Quadratic Regulator) controllers for motion control in three different scenarios: waypoint tracking, small disturbance rejection, and sideways motion. We report quantitative measurements that demonstrate superior control performance when using visual odometry compared to wheel encoders. Finally, we show that this system provides high-fidelity odometry estimates and is able to compensate for wheel slip on a four-wheeled omnidirectional mobile robot base.
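
    The control side of the paper closes a PID (or LQR) loop around the visual odometry estimate. A minimal discrete PID sketch for waypoint tracking on a holonomic base is shown below; the gains, control period, and the toy plant update are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

class PID:
    """Minimal discrete PID controller (illustrative gains, not the paper's tuning)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Closing the loop on visual-odometry position feedback toward a waypoint.
dt = 0.02                                   # control period [s]
ctrl_x, ctrl_y = PID(1.2, 0.0, 0.3, dt), PID(1.2, 0.0, 0.3, dt)
pose = np.array([0.0, 0.0])                 # (x, y) from visual odometry [m]
goal = np.array([1.0, 0.5])                 # waypoint [m]
for _ in range(500):
    vx = ctrl_x.step(goal[0] - pose[0])     # commanded body velocities
    vy = ctrl_y.step(goal[1] - pose[1])
    pose += np.array([vx, vy]) * dt         # stand-in for the real base + odometry
```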

    Positioning device for outdoor mobile robots using optical sensors and lasers

    We propose a novel method for positioning a mobile robot in an outdoor environment using lasers and optical sensors. Position estimation via a non-contact optical method is useful because the information from the wheel odometer and the global positioning system in a mobile robot is unreliable in some situations. Contact optical sensors, such as those in a computer mouse, are designed to be in contact with a surface and do not function well in strong ambient light. To mitigate the challenges of an outdoor environment, we developed an optical device with a bandpass filter and a pipe to restrict solar light and to detect translation. The use of two devices enables sensing of the mobile robot's position, including its posture. Furthermore, employing a collimated laser beam allows measurements against a surface to be invariant to the distance to the surface. In this paper, we describe motion estimation, device configurations, and several tests for performance evaluation. We also present the experimental positioning results from a vehicle equipped with our optical device on an outdoor path. Finally, we discuss an improvement in postural accuracy by combining an optical device with precise gyroscopes.
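
    The claim that two surface-facing displacement sensors suffice to recover planar pose, including posture, follows from rigid-body kinematics: each sensor sees the body translation plus a rotational term proportional to its mounting offset. A minimal sketch of that geometry is below; the function, frame conventions, and small-rotation assumption are illustrative and not the paper's actual estimator.

```python
import numpy as np

def planar_increment(d1, d2, r1, r2):
    """Per-step planar motion (tx, ty, dtheta) of a rigid body from the
    displacements d1, d2 measured by two surface-facing sensors mounted at
    body-frame positions r1, r2 (small-rotation approximation).

    Each sensor sees d_i = t + dtheta * R90 @ r_i, where R90 is a +90 degree
    rotation, so the differential displacement isolates the rotation and the
    remainder is the body translation."""
    d1, d2, r1, r2 = (np.asarray(v, dtype=float) for v in (d1, d2, r1, r2))
    b = r1 - r2                                  # sensor baseline in the body frame
    delta = d1 - d2                              # differential displacement
    perp = np.array([-b[1], b[0]])               # R90 @ b
    dtheta = float(delta @ perp) / float(b @ b)  # rotation increment [rad]
    t = d1 - dtheta * np.array([-r1[1], r1[0]])  # translation of the body origin
    return t[0], t[1], dtheta

# Example: sensors 0.3 m apart along the body x-axis; a pure 0.01 rad turn
# shows up as equal-and-opposite lateral displacements at the two sensors.
print(planar_increment((0.0, 0.0015), (0.0, -0.0015), (0.15, 0.0), (-0.15, 0.0)))
```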

    A Survey on Odometry for Autonomous Navigation Systems

    The development of a navigation system is one of the major challenges in building a fully autonomous platform. Full autonomy requires a dependable navigation capability not only in a perfect situation with clear GPS signals but also in situations where the GPS is unreliable. Therefore, self-contained odometry systems have attracted much attention recently. This paper provides a general and comprehensive overview of the state of the art in the field of self-contained, i.e., GPS-denied, odometry systems, and identifies open challenges that demand further research. Self-contained odometry methods are categorized into five main types, i.e., wheel, inertial, laser, radar, and visual, where the categorization is based on the type of sensor data used for the odometry. Most of the research in the field is focused on analyzing the sensor data exhaustively or partially to extract the vehicle pose. Different combinations and fusions of sensor data, in a tightly or loosely coupled manner and with filter-based or optimization-based fusion methods, have been investigated. We analyze the advantages and weaknesses of each approach in terms of different evaluation metrics, such as performance, response time, energy efficiency, and accuracy, which can be a useful guideline for researchers and engineers in the field. In the end, some future research challenges in the field are discussed.

    Unmanned Aerial Vehicle Navigation Using Wide-Field Optical Flow and Inertial Sensors

    This paper offers a set of novel navigation techniques that rely on the use of inertial sensors and wide-field optical flow information. The aircraft ground velocity and attitude states are estimated with an Unscented Information Filter (UIF) and are evaluated with respect to two sets of experimental flight data collected from an Unmanned Aerial Vehicle (UAV). Two different formulations are proposed: a full-state formulation including velocity and attitude, and a simplified formulation which assumes that the lateral and vertical velocities of the aircraft are negligible. An additional state is also considered within each formulation to recover the image distance, which can be measured using a laser rangefinder. The results demonstrate that the full-state formulation is able to estimate the aircraft ground velocity to within 1.3 m/s of a GPS receiver solution used as reference "truth" and to regulate attitude angles within a 1.4-degree standard deviation of error for both sets of flight data.
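
    As background on why an image-distance state is needed at all, a simplified flat-ground approximation (a textbook relation, not necessarily the paper's wide-field formulation, and with sign conventions left loose) couples the measured flow to velocity only through the unknown range d:

```latex
% Downward-looking camera over flat ground at range d, forward speed v_x,
% pitch rate q (small-angle approximation; illustrative only):
\dot{\varphi}_{\text{flow}} \;\approx\; \frac{v_x}{d} \;-\; q
```

    The translational part of the flow therefore fixes the velocity only up to the scale d, which is why both formulations augment the state with the image distance measured by the laser rangefinder.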

    Novel Camera Architectures for Localization and Mapping on Intelligent Mobile Platforms

    Self-localization and environment mapping play a very important role in many robotics applications such as autonomous driving and mixed-reality consumer products. Although the most powerful solutions rely on a multitude of sensors including lidars and cameras, the community maintains a high interest in developing cost-effective, purely vision-based localization and mapping approaches. The core problem of standard vision-only solutions is accuracy and robustness, especially in challenging visual conditions. This thesis aims to introduce new solutions to localization and mapping problems on intelligent mobile devices by taking advantage of novel camera architectures. It investigates surround-view multi-camera systems, which combine the benefits of omni-directional measurements with a sufficient baseline for producing measurements in metric scale, and event cameras, which perform well under challenging illumination conditions and have high temporal resolution.
    The thesis starts by looking into a motion estimation framework with multi-perspective camera systems. The framework can be divided into two sub-parts: a front-end module that initializes motion and estimates absolute pose after bootstrapping, and a back-end module that refines the estimate over a larger-scale sequence. First, the thesis proposes a complete real-time pipeline for visual odometry with non-overlapping, multi-perspective camera systems, and in particular presents a solution to the scale initialization problem in order to resolve the unobservability of metric scale under degenerate cases with such systems. Second, the thesis focuses on further improving front-end relative pose estimation for vehicle-mounted surround-view multi-camera systems. It presents a new, reliable solution able to handle all kinds of relative displacements in the plane despite possibly non-holonomic characteristics, and furthermore introduces a novel two-view optimization scheme that minimizes a geometrically relevant error without relying on 3D-point optimization variables. Third, the thesis explores continuous-time parametrization for exact modelling of non-holonomic ground-vehicle trajectories in the back-end optimization of a visual SLAM pipeline. It demonstrates the use of B-splines for the exact imposition of smooth, non-holonomic trajectories inside 6-DoF bundle adjustment, and shows that a significant improvement in robustness and accuracy can be achieved in degrading visual conditions.
    In order to deal with scenarios with high dynamics, low texture distinctiveness, or challenging illumination conditions, the thesis then addresses the localization and mapping problem on Autonomous Ground Vehicles (AGVs) using event cameras. Inspired by the time-continuous parametrizations of image warping functions introduced by previous works, the thesis proposes two new algorithms that tackle several motion estimation problems with a contrast maximization approach. It first looks at fronto-parallel motion estimation of an event camera; in stark contrast to the prior art, a globally optimal solution to this motion estimation problem is derived using a branch-and-bound optimization scheme. Then, the thesis introduces a new solution to the localization and mapping problem of a single event camera via continuous ray warping and volumetric contrast maximization, which can perform joint optimization over motion and structure for cameras exerting both translational and rotational displacements in an arbitrarily structured environment. The present thesis thus makes important contributions to both the front-end and the back-end of SLAM pipelines based on novel, promising camera architectures.
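
    The contrast maximization principle used in the event-camera part of the thesis can be illustrated compactly: warp events to a common reference time under a candidate motion and score the sharpness (variance) of the accumulated image. The sketch below uses a plain grid search purely for illustration, whereas the thesis derives a globally optimal branch-and-bound solver for the fronto-parallel case; the array layout and all parameter values are assumptions.

```python
import numpy as np

def warped_event_contrast(events, flow, shape):
    """Contrast (variance) of the image of warped events for a candidate
    fronto-parallel pixel flow (vx, vy). events is an (N, 3) array of
    (x, y, t) with t relative to the reference time."""
    x = events[:, 0] - flow[0] * events[:, 2]
    y = events[:, 1] - flow[1] * events[:, 2]
    ix = np.clip(np.round(x).astype(int), 0, shape[1] - 1)
    iy = np.clip(np.round(y).astype(int), 0, shape[0] - 1)
    img = np.zeros(shape)
    np.add.at(img, (iy, ix), 1.0)           # accumulate event counts
    return img.var()                        # sharper image <=> higher contrast

def estimate_flow(events, shape, v_max=50.0, step=2.0):
    """Exhaustive grid search over candidate flows; the thesis instead uses a
    branch-and-bound scheme to reach the global optimum efficiently."""
    grid = np.arange(-v_max, v_max + step, step)
    candidates = [(vx, vy) for vx in grid for vy in grid]
    return max(candidates, key=lambda f: warped_event_contrast(events, f, shape))

# Synthetic check: 100 scene points each fire 20 events while drifting with a
# constant flow; compensating that flow re-focuses them and maximizes contrast.
rng = np.random.default_rng(0)
pts = rng.uniform(10, 45, size=(100, 2))
ts = rng.uniform(0.0, 1.0, size=(100, 20, 1))
true_flow = np.array([12.0, -6.0])
ev_xy = pts[:, None, :] + true_flow * ts
events = np.concatenate([ev_xy, ts], axis=2).reshape(-1, 3)
print(estimate_flow(events, shape=(64, 64)))    # expect roughly (12.0, -6.0)
```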

    Localization in Low Luminance, Slippery Indoor Environment Using Afocal Optical Flow Sensor and Image Processing

    Doctoral dissertation, Department of Electrical and Computer Engineering, College of Engineering, Seoul National University, August 2017. Advisor: Dong-il Cho.
    Localization is an essential requirement for the autonomous driving of indoor service robots. Localization accuracy degrades in particular when slippage occurs in low-luminance indoor environments where camera-based localization is difficult. Slippage occurs mainly when traversing carpets or door sills, and wheel-encoder-based odometry is limited in how accurately it can measure the distance traveled. This thesis proposes a robust localization method for low-luminance, slippery environments in which camera-based simultaneous localization and mapping (SLAM) is difficult to operate, by fusing low-cost motion sensors, an afocal optical flow sensor (AOFS), and a VGA-class forward-facing monocular camera. The robot's position is estimated by accumulating incremental changes in travel distance and heading angle: for more accurate distance estimation even under slippage, displacement information from the wheel encoders and the AOFS is fused, and for heading estimation a gyroscope is combined with indoor spatial information extracted from the forward-facing image. An optical flow sensor estimates displacement robustly against wheel slippage, but when it is mounted on a mobile robot traversing uneven surfaces such as carpet, the changing height between the sensor and the floor becomes the main source of distance-estimation error. This thesis proposes mitigating this error source by applying the afocal-system principle to the optical flow sensor. In experiments with a robotic gantry system, in which the sensor height was varied from 30 mm to 50 mm while traveling 80 cm over carpet and three other floor materials (ten repetitions each), the proposed AOFS module showed a systematic error of 0.1% per 1 mm of height change, whereas a conventional fixed-focus optical flow sensor showed a systematic error of 14.7%. With the AOFS mounted on an indoor mobile service robot driving 1 m over carpet, the mean distance-estimation error was 0.02% with a variance of 17.6%, whereas the same experiment with a fixed-focus optical flow sensor gave a mean error of 4.09% and a variance of 25.7%. For cases where the surroundings are too dark to use images for position correction, i.e., where a low-luminance image can be brightened but robust feature points or feature lines for SLAM still cannot be extracted, the thesis also proposes using the low-luminance image to correct the robot's heading angle.
    Applying histogram equalization to a low-luminance image brightens it but also amplifies noise, so a rolling guidance filter (RGF), which removes image noise while sharpening image boundaries, is applied to enhance the image. From the enhanced image, the orthogonal straight-line components that make up the indoor space are extracted, a vanishing point (VP) is estimated, and the robot's heading relative to the vanishing point is obtained and used for heading correction. When the proposed method was applied to a robot driving over carpet in a 77 m² low-luminance (0.06-0.21 lx) indoor space, the robot's homing position error decreased from 401 cm to 21 cm.
    Table of contents: Chapter 1 Introduction (background, related work on slip detection for indoor mobile service robots and on low-luminance image enhancement, contributions, organization); Chapter 2 Afocal Optical Flow Sensor (AOFS) Module (afocal system, pinhole effect, prototype, experiment design, results); Chapter 3 Heading Correction Using Low-Luminance Images (image enhancement, single-image indoor structure estimation, robot heading estimation from vanishing-point landmarks, final odometry algorithm, experiment design, results); Chapter 4 Localization Experiments in Low-Luminance Environments (setup, simulation results, embedded results); Chapter 5 Conclusion.

    Visual control of multi-rotor UAVs

    Recent miniaturization of computer hardware, MEMS sensors, and high-energy-density batteries has enabled highly capable mobile robots to become available at low cost. This has driven the rapid expansion of interest in multi-rotor unmanned aerial vehicles. Another area that has expanded simultaneously is small, powerful computers in the form of smartphones, which nearly always have a camera attached and many of which now contain an OpenCL-compatible graphics processing unit. By combining the results of those two developments, a low-cost multi-rotor UAV can be produced with a low-power onboard computer capable of real-time computer vision. The system should also use general-purpose computer vision software to facilitate a variety of experiments. To demonstrate this I have built a quadrotor UAV based on control hardware from the Pixhawk project, and paired it with an ARM-based single-board computer similar to those in high-end smartphones. The quadrotor weighs 980 g and has a flight time of 10 minutes. The onboard computer is capable of running a pose estimation algorithm above the 10 Hz requirement for stable visual control of a quadrotor. A feature tracking algorithm was developed for efficient pose estimation, which relaxed the requirement for outlier rejection during matching. Compared with a RANSAC-only algorithm, the pose estimates were less variable, with a Z-axis standard deviation of 0.2 cm compared with 2.4 cm for RANSAC. Processing time per frame was also faster with tracking, with 95% confidence that tracking would process a frame within 50 ms, while for RANSAC the 95% confidence time was 73 ms. The onboard computer ran the algorithm with a total system load of less than 25%. All computer vision software uses the OpenCV library for common computer vision algorithms, fulfilling the requirement for running general-purpose software. The tracking algorithm was used to demonstrate the capability of the system by performing visual servoing of the quadrotor (after manual takeoff). Response to external perturbations was poor, however, requiring manual intervention to avoid crashing. This was due to poor visual controller tuning and to variations in image acquisition and attitude estimate timing caused by free-running image acquisition. The system and the tracking algorithm serve as proof of concept that visual control of a quadrotor is possible using small, low-power computers and general-purpose computer vision software.
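
    The thesis' specific tracker is not reproduced here, but the general pattern it compares against (feature tracking between frames followed by relative pose recovery, with RANSAC demoted to a cheap safeguard rather than the primary matching step) can be sketched with standard OpenCV calls; the intrinsics matrix K and all parameter values are assumed.

```python
import cv2
import numpy as np

# Assumed pinhole intrinsics; real values come from camera calibration.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def relative_pose(prev_gray, cur_gray):
    """Track features between two frames with pyramidal Lucas-Kanade and
    recover the relative camera rotation and (unit-scale) translation.
    This is a generic OpenCV pipeline, not the thesis' exact tracker."""
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                 qualityLevel=0.01, minDistance=7)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, p0, None)
    good = status.ravel() == 1
    pts0, pts1 = p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)
    # Tracking already constrains correspondences, so far fewer outliers
    # survive than with descriptor matching; RANSAC here is a cheap safeguard.
    E, inliers = cv2.findEssentialMat(pts0, pts1, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts0, pts1, K, mask=inliers)
    return R, t
```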

    Planetary micro-rover operations on Mars using a Bayesian framework for inference and control

    With the recent progress toward the application of commercially available hardware to small-scale space missions, it is now becoming feasible for groups of small, efficient robots based on low-power embedded hardware to perform simple tasks on other planets in place of large-scale, heavy and expensive robots. In this paper, we describe the design and programming of the Beaver micro-rover developed for Northern Light, a Canadian initiative to send a small lander and rover to Mars to study the Martian surface and subsurface. For a small, hardware-limited rover to handle an uncertain and mostly unknown environment without constant management by human operators, we use a Bayesian network of discrete random variables as an abstraction of expert knowledge about the rover and its environment, and inference operations for control. A framework for efficient construction of, and inference over, a Bayesian network using only the C language and fixed-point mathematics on embedded hardware has been developed for the Beaver to make intelligent decisions with minimal sensor data. We study the performance of the Beaver as it probabilistically maps a simple outdoor environment with sensor models that include uncertainty. Results indicate that the Beaver and other small, simple robotic platforms can make use of a Bayesian network to make intelligent decisions in uncertain planetary environments.
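
    The paper's framework is implemented in C with fixed-point arithmetic on embedded hardware; the sketch below only illustrates the kind of operation involved (exact inference by enumeration over discrete variables) on a tiny, entirely hypothetical rover network whose variables and probability tables are invented for illustration.

```python
# Exact inference by enumeration on a tiny, hypothetical rover network:
# Terrain -> Slip, Terrain -> RangeReading. All numbers are made up.
P_terrain = {"rock": 0.3, "sand": 0.7}
P_slip = {"rock": {True: 0.1, False: 0.9},
          "sand": {True: 0.6, False: 0.4}}
P_range = {"rock": {"near": 0.8, "far": 0.2},
           "sand": {"near": 0.4, "far": 0.6}}

def posterior_slip(range_reading):
    """P(Slip | RangeReading) by summing out the hidden Terrain variable."""
    unnorm = {}
    for slip in (True, False):
        total = 0.0
        for terrain, p_t in P_terrain.items():
            total += p_t * P_slip[terrain][slip] * P_range[terrain][range_reading]
        unnorm[slip] = total
    z = sum(unnorm.values())
    return {slip: p / z for slip, p in unnorm.items()}

# The rover would compare P(Slip=True | evidence) against a threshold
# before committing to a drive command.
print(posterior_slip("near"))
```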