Hybrid Satellite-Terrestrial Communication Networks for the Maritime Internet of Things: Key Technologies, Opportunities, and Challenges
With the rapid development of marine activities, there has been an increasing
number of maritime mobile terminals, as well as a growing demand for high-speed
and ultra-reliable maritime communications to keep them connected.
Traditionally, the maritime Internet of Things (IoT) is enabled by maritime
satellites. However, satellites are seriously restricted by their high latency
and relatively low data rate. As an alternative, shore & island-based base
stations (BSs) can be built to extend the coverage of terrestrial networks
using fourth-generation (4G), fifth-generation (5G), and beyond 5G services.
Unmanned aerial vehicles can also be exploited to serve as aerial maritime BSs.
Despite all these approaches, open issues remain for an efficient
maritime communication network (MCN). For example, due to the complicated
electromagnetic propagation environment, the limited geometrically available BS
sites, and rigorous service demands from mission-critical applications,
conventional communication and networking theories and methods should be
tailored for maritime scenarios. Towards this end, we provide a survey on the
demand for maritime communications, the state-of-the-art MCNs, and key
technologies for enhancing transmission efficiency, extending network coverage,
and provisioning maritime-specific services. Future challenges in developing an
environment-aware, service-driven, and integrated satellite-air-ground MCN to
be smart enough to utilize external auxiliary information, e.g., sea state and
atmospheric conditions, are also discussed.
LQG Control and Sensing Co-Design
We investigate a Linear-Quadratic-Gaussian (LQG) control and sensing
co-design problem, where one jointly designs sensing and control policies. We
focus on the realistic case where the sensing design is selected among a finite
set of available sensors, where each sensor is associated with a different cost
(e.g., power consumption). We consider two dual problem instances:
sensing-constrained LQG control, where one maximizes control performance
subject to a sensor cost budget, and minimum-sensing LQG control, where one
minimizes sensor cost subject to performance constraints. We prove that no
polynomial-time algorithm can guarantee a constant approximation factor from
the optimal across all problem instances. Nonetheless, we present the first
polynomial-time algorithms with per-instance suboptimality guarantees. To this
end, we leverage a separation principle that partially decouples the design of
sensing and control. Then, we frame LQG co-design as the optimization of
approximately supermodular set functions; we develop novel algorithms to solve
the problems; and we prove original results on the performance of the
algorithms, and establish connections between their suboptimality and
control-theoretic quantities. We conclude the paper by discussing two
applications, namely, sensing-constrained formation control and
resource-constrained robot navigation.
Comment: Accepted to IEEE TAC. Includes contributions to the submodular
function optimization literature, and extends conference paper arXiv:1709.0882
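The greedy, cost-benefit flavor of the sensing-constrained variant can be sketched as follows. This is an illustrative sketch, not the paper's exact algorithm: the system matrices, candidate sensors, their costs, and the use of the steady-state Kalman filter error trace as the objective are all assumptions made for the example.

```python
import numpy as np

def steady_state_error(A, W, C_rows, V_diag, iters=500):
    """Trace of the (approximate) steady-state Kalman prediction error
    covariance for a sensor subset, via iterated Riccati recursion."""
    n = A.shape[0]
    P = np.eye(n)
    for _ in range(iters):
        if C_rows:  # measurement update only if at least one sensor is active
            C = np.vstack(C_rows)
            S = C @ P @ C.T + np.diag(V_diag)
            K = P @ C.T @ np.linalg.inv(S)
            P = P - K @ C @ P
        P = A @ P @ A.T + W  # time update
    return float(np.trace(P))

def greedy_sensing(A, W, sensors, budget):
    """Greedy error-reduction-per-cost selection under a sensor budget
    (a sketch of the approximately-supermodular greedy idea)."""
    chosen, spent = [], 0.0
    while True:
        rows = [sensors[s][0] for s in chosen]
        noises = [sensors[s][1] for s in chosen]
        current = steady_state_error(A, W, rows, noises)
        best, best_gain = None, 0.0
        for name, (c_row, v, price) in sensors.items():
            if name in chosen or spent + price > budget:
                continue
            new = steady_state_error(A, W, rows + [c_row], noises + [v])
            gain = (current - new) / price  # error reduction per unit cost
            if gain > best_gain:
                best, best_gain = name, gain
        if best is None:
            return chosen, spent
        chosen.append(best)
        spent += sensors[best][2]

# Illustrative 2-state system; sensors map name -> (C row, noise variance, cost).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
W = 0.1 * np.eye(2)
sensors = {
    "precise_x0": (np.array([[1.0, 0.0]]), 0.01, 2.0),
    "imu_x1":     (np.array([[0.0, 1.0]]), 0.25, 1.0),
    "noisy_x0":   (np.array([[1.0, 0.0]]), 1.00, 1.0),
}
chosen, spent = greedy_sensing(A, W, sensors, budget=3.0)
print(chosen, spent)
```

The per-cost marginal gain is the hook to the abstract's supermodularity framing: when the error trace is approximately supermodular in the sensor set, this greedy loop inherits a per-instance suboptimality bound.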
Reinforcement Learning for UAV Attitude Control
Autopilot systems are typically composed of an "inner loop" providing
stability and control, while an "outer loop" is responsible for mission-level
objectives, e.g. way-point navigation. Autopilot systems for UAVs are
predominantly implemented using Proportional-Integral-Derivative (PID) control
systems, which have demonstrated exceptional performance in stable
environments. However, more sophisticated control is required to operate in
unpredictable and harsh environments. Intelligent flight control is an active
area of research addressing the limitations of PID control, most recently
through the use of reinforcement learning (RL), which has had success in other
applications such as robotics. However, previous work has focused primarily on
using RL for the mission-level controller. In this work, we investigate the
performance and accuracy of the inner control loop providing attitude control
when using intelligent flight control systems trained with the state-of-the-art
RL algorithms Deep Deterministic Policy Gradient (DDPG), Trust Region Policy
Optimization (TRPO), and Proximal Policy Optimization (PPO). To investigate
these unknowns, we first developed an open-source, high-fidelity simulation
environment to train a flight controller for attitude control of a quadrotor
through RL. We then use our environment to compare their performance to that of
a PID controller to identify whether using RL is appropriate in high-precision,
time-critical flight control.
Comment: 13 pages, 9 figures
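As context for that baseline, a minimal discrete PID attitude controller can be sketched as follows. The gains, time step, and single-integrator roll dynamics are illustrative assumptions for the sketch, not the paper's actual quadrotor model or tuning.

```python
class PID:
    """Classical discrete PID controller: the inner-loop baseline that
    RL-trained attitude controllers are typically compared against."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Track a 1-radian roll setpoint on a toy single-integrator attitude model.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
angle = 0.0
for _ in range(3000):  # 30 simulated seconds at dt = 0.01
    rate_cmd = pid.step(1.0, angle)
    angle += rate_cmd * 0.01  # crude dynamics: attitude rate follows the command
print(round(angle, 3))
```

An RL policy in the paper's setting replaces `pid.step` with a learned mapping from attitude state to actuator commands, trained against a higher-fidelity simulation than the toy model above.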
Intelligent flight control systems
The capabilities of flight control systems can be enhanced by designing them to emulate functions of natural intelligence. Intelligent control functions fall into three categories. Declarative actions involve decision-making, providing models for system monitoring, goal planning, and system/scenario identification. Procedural actions concern skilled behavior and have parallels in guidance, navigation, and adaptation. Reflexive actions are spontaneous, inner-loop responses for control and estimation. Intelligent flight control systems acquire knowledge of the aircraft and its mission and adapt to changes in the flight environment. Cognitive models form an efficient basis for integrating 'outer-loop/inner-loop' control functions and for developing robust parallel-processing algorithms.
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available