3 research outputs found

    Three-Dimensional Integrated Guidance and Control Based on Small-Gain Theorem

    A three-dimensional (3D) integrated guidance and control (IGC) design approach based on the small-gain theorem is proposed in this paper. The 3D IGC model is formulated by combining the nonlinear pursuer dynamics with the nonlinear dynamics describing the pursuit-evasion motion. The small-gain theorem and input-to-state stability (ISS) theory are applied iteratively to design the desired angle of attack, sideslip angle, and attitude angular rates (virtual controls), from which an IGC law is derived. Theoretical analysis shows that the IGC approach drives the line-of-sight (LOS) rate into a small neighborhood of zero while guaranteeing the stability of the overall closed-loop system. Comment: 20 pages, 2 figures
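    For background, the ISS small-gain argument invoked above rests on a standard composition condition on the ISS gains of the two interconnected subsystems, stated below in textbook notation (the gains \gamma_1 and \gamma_2 are assumed notation, not taken from the paper):

\[
  \gamma_1 \circ \gamma_2(s) < s \qquad \text{for all } s > 0
\]

    If each subsystem is input-to-state stable with respect to the other's state and the composed gains satisfy this condition, the interconnection is itself ISS. Iterating such an argument across the guidance and attitude loops is the usual way a small-gain IGC design is closed, consistent with the iterative construction of the virtual controls described above.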

    Integrated guidance and control framework for the waypoint navigation of a miniature aircraft with highly coupled longitudinal and lateral dynamics

    A solution to the waypoint navigation problem for fixed-wing micro air vehicles (MAVs) is presented in this paper within the framework of integrated guidance and control (IGC). IGC yields a single-step solution to the waypoint navigation problem, unlike the conventional multiple-loop design. The pure proportional navigation (PPN) guidance law is integrated with the MAV dynamics, and a multivariable static output feedback (SOF) controller is designed for the linear state-space model formulated in the IGC framework. The waypoint navigation algorithm handles the minimum-turn-radius constraint of the MAV and also evaluates the feasibility of reaching a given waypoint. Extensive nonlinear simulations on a high-fidelity 150 mm wingspan MAV model demonstrate the potential advantages of the proposed waypoint navigation algorithm.
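    As a rough, illustrative sketch of the two ingredients named above, the code below pairs a pure proportional navigation command with a static output feedback law. The gain values, the output vector, and the function names are assumptions made for illustration and are not the paper's actual design.

import numpy as np

# Pure proportional navigation (PPN): lateral acceleration command
# proportional to the line-of-sight (LOS) rate.
N_GAIN = 3.0  # navigation constant, commonly chosen in the range 3-5

def ppn_lateral_accel(closing_speed, los_rate):
    """Commanded lateral acceleration a_c = N * Vc * LOS_rate."""
    return N_GAIN * closing_speed * los_rate

def sof_control(K_sof, y):
    """Static output feedback: u = -K @ y, with y the measured outputs."""
    return -K_sof @ y

# Illustrative numbers only (assumed, not from the paper)
y = np.array([0.02, -0.05, 0.10])        # e.g. [LOS rate, roll rate, yaw rate]
K_sof = np.array([[1.2, 0.4, 0.1],       # two control surfaces, three outputs
                  [0.3, 0.9, 0.2]])
print(ppn_lateral_accel(closing_speed=15.0, los_rate=0.02))   # commanded accel, m/s^2
print(sof_control(K_sof, y))                                  # surface deflection commands

    In the IGC framing described above, the PPN kinematics and the airframe dynamics are combined into a single linear state-space model, and the SOF gain is designed for that combined model in one step rather than through separate guidance and autopilot loops.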

    Deep Reinforcement Learning for Six Degree-of-Freedom Planetary Powered Descent and Landing

    Future Mars missions will require advanced guidance, navigation, and control algorithms for the powered descent phase to target specific surface locations and achieve pinpoint accuracy (landing error ellipse < 5 m radius). The latter requires both a navigation system capable of estimating the lander's state in real time and a guidance and control system that can map the estimated lander state to a commanded thrust for each lander engine. In this paper, we present a novel integrated guidance and control algorithm designed by applying the principles of reinforcement learning theory. Reinforcement learning is used to learn a policy that maps the lander's estimated state directly to a commanded thrust for each engine, yielding accurate and fuel-efficient trajectories. Specifically, we use proximal policy optimization (PPO), a policy gradient method, to learn the policy. Another contribution of this paper is the use of different discount rates for terminal and shaping rewards, which significantly enhances optimization performance. We present simulation results demonstrating the guidance and control system's performance in a 6-DOF simulation environment and its robustness to noise and system parameter uncertainty. Comment: 37 pages
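    The dual-discount idea mentioned above can be made concrete with a short sketch: per-step shaping rewards and the single terminal reward are discounted at separate rates when forming the return used by the policy-gradient update. The function name and the specific rates below are assumptions for illustration, not the paper's implementation.

import numpy as np

def dual_discount_returns(shaping_rewards, terminal_reward,
                          gamma_shaping=0.95, gamma_terminal=1.0):
    """Return-to-go in which per-step shaping rewards are discounted with
    gamma_shaping while the terminal reward uses its own rate gamma_terminal."""
    T = len(shaping_rewards)
    returns = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        # discounted sum of shaping rewards from step t onward
        running = shaping_rewards[t] + gamma_shaping * running
        # terminal reward credited with its own discount over the
        # steps remaining until the end of the episode
        returns[t] = running + gamma_terminal ** (T - 1 - t) * terminal_reward
    return returns

# Example: five steps of small shaping penalties plus a terminal landing bonus
print(dual_discount_returns(np.full(5, -0.1), terminal_reward=10.0))

    The defaults above, with the terminal discount kept near 1 and a lower shaping discount, are one plausible configuration under which the landing outcome keeps influencing early actions; the abstract itself only states that using distinct rates significantly improves optimization performance.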