    Event-triggered resilient consensus control of multiple unmanned systems against periodic DoS attacks based on state predictor

    This paper develops an event-triggered resilient consensus control method for nonlinear multiple unmanned systems, using a data-based autoregressive integrated moving average (ARIMA) agent-state prediction mechanism to counter periodic denial-of-service (DoS) attacks. The state predictor estimates the states of neighboring agents during periodic DoS attacks so that consensus control of the multiple unmanned systems can be maintained. Because a prediction error exists between the actual and predicted states, this error is treated as an uncertain system disturbance and handled by a designed disturbance observer; the observer's estimate is then used in the consensus controller to compensate for the system's uncertainty error term. Furthermore, the paper investigates dynamic event-triggered consensus controllers that improve resilience and consensus under periodic DoS attacks while reducing the frequency of actuator output changes, and it is proved that Zeno behavior is excluded. Finally, numerical simulations demonstrate the resilience and consensus capability of the proposed controller and the benefit of introducing the state predictor.
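    The core idea — fitting an autoregressive model to a neighbor's past state broadcasts and rolling it forward while a DoS attack blocks communication — can be sketched as follows. This is a minimal illustration with a plain AR(p) predictor fit by least squares, not the paper's full ARIMA mechanism; the function names and the order p=2 are illustrative assumptions.

```python
import numpy as np

def fit_ar(history, p=2):
    """Fit AR(p) coefficients by least squares on a 1-D state history.

    Illustrative stand-in for the paper's ARIMA predictor."""
    history = np.asarray(history, dtype=float)
    # Each row of X holds p consecutive past states; y is the next state.
    X = np.column_stack([history[i:len(history) - p + i] for i in range(p)])
    y = history[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_during_dos(history, steps, p=2):
    """Roll the fitted AR model forward to stand in for a neighbor's
    state while its broadcasts are blocked by a periodic DoS attack."""
    coef = fit_ar(history, p)
    buf = list(history[-p:])
    preds = []
    for _ in range(steps):
        nxt = float(np.dot(coef, buf[-p:]))  # one-step prediction
        preds.append(nxt)
        buf.append(nxt)                      # feed prediction back in
    return preds

# During an attack window, the consensus controller would consume these
# predicted states in place of the missing neighbor measurements.
preds = predict_during_dos(list(range(10)), steps=3)  # -> [10.0, 11.0, 12.0]
```

    The gap between these predictions and the neighbor's true state is exactly the prediction error that the paper treats as an uncertain disturbance and compensates with a disturbance observer.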

    Learning a Dual-Mode Speech Recognition Model via Self-Pruning

    There is growing interest in unifying streaming and full-context automatic speech recognition (ASR) networks into a single end-to-end ASR model to simplify training and deployment for both use cases. In real-world ASR applications, however, streaming ASR models typically operate under tighter storage and computational constraints - e.g., on embedded devices - than server-side full-context models. Motivated by recent progress in Omni-sparsity supernet training, where multiple subnetworks are jointly optimized in a single model, this work aims to jointly learn a compact sparse on-device streaming ASR model and a large dense server non-streaming model in a single supernet. We further show that performing supernet training on both wav2vec 2.0 self-supervised learning and supervised ASR fine-tuning not only substantially improves the large non-streaming model, as shown in prior work, but also improves the compact sparse streaming model. Comment: 7 pages, 1 figure. Accepted for publication at IEEE Spoken Language Technology Workshop (SLT), 202
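    The weight-sharing idea behind such a supernet — the dense server model uses the full weight tensor while the sparse on-device model uses the same tensor under a binary mask — can be sketched with simple magnitude pruning. This is a hedged illustration of the general technique, not the paper's exact self-pruning procedure; the function name and the 50% sparsity level are assumptions.

```python
import numpy as np

def magnitude_mask(weight, sparsity):
    """Binary mask keeping the largest-magnitude entries of `weight`.

    In supernet training, the dense (server) path uses `weight` as-is,
    while the sparse (on-device) path uses `weight * mask`; both paths
    share and update the same underlying parameters."""
    k = int(round(weight.size * (1.0 - sparsity)))  # entries to keep
    if k <= 0:
        return np.zeros_like(weight)
    # Threshold at the k-th largest absolute value.
    thresh = np.sort(np.abs(weight).ravel())[-k]
    return (np.abs(weight) >= thresh).astype(weight.dtype)

w = np.array([[0.1, -0.5], [0.3, -0.2]])
mask = magnitude_mask(w, sparsity=0.5)  # keeps -0.5 and 0.3
sparse_w = w * mask                     # on-device subnetwork weights
```

    Jointly optimizing both paths is what lets one set of parameters serve the streaming (sparse, masked) and non-streaming (dense) modes at once.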