
    Path Planning for Motion Dependent State Estimation on Micro Aerial Vehicles

    Abstract — With navigation algorithms reaching a certain maturity in the field of mobile robots, the community now focuses on more advanced tasks, such as path planning, towards increased autonomy. While the goal is to efficiently compute a path to a target destination, the uncertainty in the robot’s perception cannot be ignored if a realistic path is to be computed. Since most state-of-the-art navigation systems provide the uncertainty of their motion estimates, here we propose to exploit this information. This leads to a system that can plan safe obstacle avoidance and, more importantly, can actively aid navigation by choosing a path that minimizes the uncertainty in the monitored states. The proposed approach is applicable to systems that require certain excitations in order to render all their states observable, such as a MAV with visual-inertial localization. In this work, we propose an approach that takes this necessary motion into account during path planning: by employing Rapidly exploring Random Belief Trees (RRBT), the proposed approach chooses a path to the goal that allows for the best estimation of the robot’s states, while inherently avoiding motion in unobservable modes. We discuss our findings within the scenario of vision-based aerial navigation, one of the most challenging navigation problems, requiring sufficient excitation to reach full observability.
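The core idea of planning in belief space can be illustrated with a toy sketch (illustrative only, not the paper's implementation, and using a diagonal-covariance simplification): candidate paths are scored by propagating the state uncertainty along them, and the planner prefers paths whose measurements keep all states observable.

```python
def plan_best_path(paths, var0, q):
    """Toy belief-space planner in the spirit of RRBT.

    Each path is a list of steps; each step is (observed, r), where
    `observed` lists the state indices a measurement covers at that point
    and `r` is the measurement noise. We propagate a diagonal covariance
    (one variance per state) and pick the path with the smallest final
    total uncertainty: states that are never measured keep accumulating
    process noise, which penalizes motion in poorly observed modes.
    """
    def final_uncertainty(path):
        var = list(var0)
        for observed, r in path:
            # predict: process noise inflates every state's variance
            var = [v + q for v in var]
            # update: a scalar Kalman update shrinks each observed state
            for i in observed:
                k = var[i] / (var[i] + r)
                var[i] = (1.0 - k) * var[i]
        return sum(var)
    return min(range(len(paths)), key=lambda i: final_uncertainty(paths[i]))
```

With two candidate paths of equal length, one whose measurements observe both states and one that only ever observes state 0, the planner picks the former, since the unobserved state's variance grows without bound along the latter.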

    A ROBUST AND MODULAR MULTI-SENSOR FUSION APPROACH APPLIED TO MAV NAVIGATION

    Abstract — It has long been known that fusing information from multiple sensors for robot navigation results in increased robustness and accuracy. However, accurate calibration of the sensor ensemble prior to deployment in the field, as well as coping with sensor outages, different measurement rates, and delays, render multi-sensor fusion a challenge. As a result, most systems do not exploit all the sensor information available, in exchange for simplicity. For example, on a mission requiring transition of the robot from indoors to outdoors, it is the norm to ignore the Global Positioning System (GPS) signals which become freely available once outdoors and instead rely only on sensor feeds (e.g., vision and laser) continuously available throughout the mission. Naturally, this comes at the expense of robustness and accuracy in real deployment. This paper presents a generic framework, dubbed Multi-Sensor-Fusion Extended Kalman Filter (MSF-EKF), able to process delayed, relative, and absolute measurements from a theoretically unlimited number of different sensors and sensor types, allowing self-calibration of the sensor suite. The modularity of MSF-EKF allows seamless handling of additional/lost sensor signals online during operation, while a state buffering scheme augmented with Iterated EKF (IEKF) updates allows for efficient re-linearization of the propagation to obtain near-optimal linearization points for both absolute and relative state updates. We demonstrate our approach in outdoor navigation experiments using a Micro Aerial Vehicle (MAV) equipped with a GPS receiver as well as visual, inertial, and pressure sensors.
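The state-buffering idea for delayed measurements can be sketched as follows (a minimal illustration of the general technique; the class and method names are ours, not MSF-EKF's API): past stamped states are retained so that a measurement arriving late can be applied at its true timestamp, after which the newer states are re-propagated.

```python
import bisect
from collections import deque


class StateBuffer:
    """Ring buffer of (stamp, state) pairs, kept in time order.

    When a delayed measurement arrives, the state closest to the
    measurement's timestamp is corrected, and all later states are
    re-propagated from it, so the current estimate reflects the late
    information.
    """

    def __init__(self, maxlen=100):
        self.buf = deque(maxlen=maxlen)

    def push(self, stamp, state):
        self.buf.append((stamp, state))

    def apply_delayed(self, stamp, update, propagate):
        stamps = [t for t, _ in self.buf]
        i = bisect.bisect_left(stamps, stamp)  # first state at/after stamp
        if i == len(stamps):
            return None                        # measurement newer than buffer
        _, x = self.buf[i]
        x = update(x)                          # correct at measurement time
        for j in range(i, len(self.buf)):      # re-propagate forward in time
            self.buf[j] = (self.buf[j][0], x)
            if j + 1 < len(self.buf):
                x = propagate(x, self.buf[j + 1][0] - self.buf[j][0])
        return self.buf[-1][1]                 # refreshed current estimate
```

For instance, with a unit-velocity propagation model `x + dt`, correcting a past state also shifts every state after it by the same amount.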

    Dacarbazine and interferon α with or without interleukin 2 in metastatic melanoma: a randomized phase III multicentre trial of the Dermatologic Cooperative Oncology Group (DeCOG)

    In several phase II trials, encouraging tumour response rates in advanced metastatic melanoma (stage IV; AJCC classification) have been reported for the application of biochemotherapy containing interleukin 2. This study was designed to compare the efficacy of therapy with dacarbazine (DTIC) and interferon α (IFN-α) alone to that of therapy with DTIC and IFN-α plus interleukin 2 (IL-2), in terms of overall survival time and the rate of objective remissions, and to provide an elaborated toxicity profile for both types of therapy. 290 patients were randomized to receive either DTIC (850 mg/m² every 28 days) plus IFN-α2a/b (3 MIU/m², twice on day 1, once daily from days 2 to 5; 5 MIU/m² 3 times a week from weeks 2 to 4) with or without IL-2 (4.5 MIU/m² over 3 hours i.v. on day 3; 9.0 MIU/m² i.v. on days 3/4; 4.5 MIU/m² s.c. on days 4 to 7). The treatment plan required at least 2 treatment cycles (8 weeks of therapy) for every patient. Of the 290 randomized patients, 281 were eligible for an intention-to-treat analysis. There was no difference in survival time from treatment onset between the two arms (median 11.0 months each). In 273 patients treated according to protocol, tumour response was assessable. The response rates did not differ between the arms (P = 0.87), with 18.0% objective responses (9.7% PR; 8.3% CR) for DTIC plus IFN-α as compared to 16.1% (8.8% PR; 7.3% CR) for DTIC, IFN-α, and IL-2. Treatment cessation due to adverse reactions was significantly more common in patients receiving IL-2 (13.9%) than in patients receiving DTIC/IFN-α only (5.6%). In conclusion, there was neither a difference in survival time nor in tumour response rates when IL-2, applied according to the combined intravenous and subcutaneous schedule used for this study, was added to DTIC and IFN-α. However, toxicity was increased in melanoma patients treated with IL-2. Further phase III trials with continuous infusion and higher dosages must be performed before any final conclusions can be drawn on the potential usefulness of IL-2 in biochemotherapy of advanced melanoma. © 2001 Cancer Research Campaign http://www.bjcancer.co

    Monocular Vision for Long-term Micro Aerial Vehicle State Estimation: A Compendium

    The recent technological advances in Micro Aerial Vehicles (MAVs) have triggered great interest in the robotics community, as their deployability in missions of surveillance and reconnaissance has now become a realistic prospect. The state of the art, however, still lacks solutions that can work for a long duration in large, unknown, and GPS-denied environments. Here, we present our visual pipeline and MAV state-estimation framework, which uses feeds from a monocular camera and an Inertial Measurement Unit (IMU) to achieve real-time and onboard autonomous flight in general and realistic scenarios. The challenge lies in dealing with the power and weight restrictions onboard a MAV while providing the robustness necessary in real and long-term missions. This article provides a concise summary of our work on achieving the first onboard vision-based power-on-and-go system for autonomous MAV flights. We discuss our insights on the lessons learned throughout the different stages of this research, from the conception of the idea to the thorough theoretical analysis of the proposed framework and, finally, the real-world implementation and deployment. Looking into the onboard estimation of monocular visual odometry, the sensor fusion strategy, the state estimation and self-calibration of the system, and finally some implementation issues, the reader is guided through the different modules comprising our framework. The validity and power of this framework are illustrated via a comprehensive set of experiments in a large outdoor mission, demonstrating successful operation over a flight trajectory of more than 360 m with an altitude change of 70 m.
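A central difficulty the abstract alludes to is that monocular visual odometry recovers position only up to an unknown scale, which a metric reference such as IMU-derived displacement can resolve. A minimal sketch of this idea (our own illustration, not the framework's estimator, which fuses scale inside a filter) is a least-squares fit of the scale factor:

```python
def estimate_scale(visual_pos, metric_pos):
    """Least-squares scale between up-to-scale visual positions and a
    metric reference (e.g. IMU-integrated displacements).

    Minimizes sum((m_i - s * v_i)^2) over s, giving the closed form
    s = <v, m> / <v, v>.
    """
    num = sum(v * m for v, m in zip(visual_pos, metric_pos))
    den = sum(v * v for v in visual_pos)
    return num / den
```

Once the scale is known, multiplying the visual trajectory by it yields metric positions that can be fused with the other sensors.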
