Data based identification and prediction of nonlinear and complex dynamical systems
We thank Dr. R. Yang (formerly at ASU), Dr. R.-Q. Su (formerly at ASU), and Mr. Zhesi Shen for their contributions to a number of original papers on which this Review is partly based. This work was supported by ARO under Grant No. W911NF-14-1-0504. W.-X. Wang was also supported by NSFC under Grants No. 61573064 and No. 61074116, as well as by the Fundamental Research Funds for the Central Universities and the Beijing Nova Programme.
Reinforcement Learning Based Minimum State-flipped Control for the Reachability of Boolean Control Networks
To realize reachability while reducing the control cost of Boolean Control Networks (BCNs) under state-flipped control, a reinforcement-learning-based method is proposed to obtain flip kernels and the optimal policy that achieves reachability with the fewest flipping actions. The method is model-free and has low computational complexity. In particular, Q-learning (QL), fast QL, and small-memory QL are proposed to find flip kernels. Fast QL and small-memory QL are two novel algorithms: fast QL, namely QL combined with transfer learning and specially chosen initial states, is more efficient, while small-memory QL is applicable to large-scale systems. A novel reward setting is also presented, under which the policy that achieves reachability with the fewest flipping actions is the one with the highest return. To obtain this optimal policy, QL is proposed, along with fast small-memory QL for large-scale systems; building on the small-memory QL above, fast small-memory QL uses a changeable reward setting to speed up learning while preserving the optimality of the policy. For parameter settings, several system properties are given for reference. Finally, two examples, a small-scale system and a large-scale one, verify the proposed method.
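The following is a minimal illustrative sketch, not the authors' algorithm: tabular Q-learning on a toy 3-node Boolean network with state-flipped control. The network update rule, the reward values (a bonus for reaching the target minus a cost per flip, echoing the "fewest flipping actions" idea), the target state, and all hyperparameters are assumptions chosen for illustration only.

```python
# Illustrative sketch (assumptions throughout): Q-learning for state-flipped
# control of a toy 3-node Boolean network. Not the paper's implementation.
import random

N = 3                                   # number of Boolean nodes
STATES = [tuple((s >> i) & 1 for i in range(N)) for s in range(2 ** N)]
ACTIONS = list(range(N + 1))            # action i < N flips node i; action N flips nothing
TARGET = (1, 1, 1)                      # reachability target (assumed)

def step(state, action):
    """Apply an optional flip, then one synchronous Boolean update; return (next_state, reward)."""
    x = list(state)
    flips = 0
    if action < N:                      # flip the chosen node before the network update
        x[action] ^= 1
        flips = 1
    # Assumed Boolean update rule: x0' = x1 AND x2, x1' = NOT x0, x2' = x0 OR x1
    nxt = (x[1] & x[2], 1 - x[0], x[0] | x[1])
    # Assumed reward setting: large bonus on reaching the target, small cost per flip
    reward = (100 if nxt == TARGET else 0) - flips
    return nxt, reward

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.95, 0.2      # assumed learning rate, discount, exploration rate

for episode in range(2000):
    s = random.choice(STATES)
    for _ in range(20):                 # bounded episode length
        a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda b: Q[(s, b)])
        s2, r = step(s, a)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2
        if s == TARGET:
            break

# Greedy policy: for each state, the flip (if any) with the highest learned value
policy = {s: max(ACTIONS, key=lambda b: Q[(s, b)]) for s in STATES}
print(policy)
```

Under these assumptions, the learned greedy policy indicates, for each state, whether a flip is worth its cost on the way to the target; the paper's fast and small-memory variants address efficiency and memory issues that this brute-force tabular sketch does not.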
- …