Quasi-optimum design of control systems for moving base simulators
Optimal control of six degree of freedom moving-base simulator
Quasi-optimum design of a six degree of freedom moving base simulator control system
The design of a washout control system for a moving base simulator is treated by a quasi-optimum control technique. The broad objective of the design is to reproduce the sensed motion of a six degree of freedom simulator as accurately as possible without causing the simulator excursions to exceed specified limits. A performance criterion is established that weights magnitude and direction errors in specific force and in angular velocity and attempts to maintain the excursion within set limits by penalizing excessive excursions. A FORTRAN routine for realizing the washout law was developed, and typical time histories using the washout routine were simulated for a range of parameters in the penalty and weighting functions. These time histories and the listing of the routine are included in the report.
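The abstract does not give the exact form of the performance criterion, but a criterion that weights motion errors and penalizes only excursions beyond set limits can be sketched as follows. The function name, the quadratic form, and the weights `q_f`, `q_w`, `q_x` are illustrative assumptions, not the report's FORTRAN routine:

```python
import numpy as np

def washout_cost(f_err, w_err, excursion, limits,
                 q_f=1.0, q_w=1.0, q_x=10.0):
    """Hypothetical sketch of a washout performance criterion:
    quadratic weighting of specific-force and angular-velocity
    errors, plus a penalty that activates only when a simulator
    excursion exceeds its specified limit."""
    # Penalty term is zero inside the limits, quadratic outside.
    over = np.maximum(np.abs(excursion) - limits, 0.0)
    return (q_f * np.dot(f_err, f_err)      # specific-force error
            + q_w * np.dot(w_err, w_err)    # angular-velocity error
            + q_x * np.dot(over, over))     # excessive-excursion penalty
```

With this shape, motion within the excursion limits is traded off purely on fidelity, and the penalty weight `q_x` controls how hard the washout pushes the platform back once a limit is breached.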
RSA: Byzantine-Robust Stochastic Aggregation Methods for Distributed Learning from Heterogeneous Datasets
In this paper, we propose a class of robust stochastic subgradient methods for distributed learning from heterogeneous datasets in the presence of an unknown number of Byzantine workers. The Byzantine workers may, during the learning process, send arbitrary incorrect messages to the master due to data corruption, communication failures, or malicious attacks, and consequently bias the learned model. The key to the proposed methods is a regularization term incorporated into the objective function so as to robustify the learning task and mitigate the negative effects of Byzantine attacks. The resultant subgradient-based algorithms are termed Byzantine-Robust Stochastic Aggregation methods, justifying the acronym RSA used henceforth. In contrast to most existing algorithms, RSA does not rely on the assumption that the data are independent and identically distributed (i.i.d.) across the workers, and hence suits a wider class of applications. Theoretically, we show that: i) RSA converges to a near-optimal solution, with the learning error dependent on the number of Byzantine workers; ii) the convergence rate of RSA under Byzantine attacks is the same as that of stochastic gradient descent free of Byzantine attacks. Numerically, experiments on real datasets corroborate the competitive performance of RSA and a complexity reduction compared to state-of-the-art alternatives.
Comment: To appear in AAAI 2019
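The robustness mechanism described above, an l1-style regularization term coupling each worker's model to the master's, leads to sign-based update rules whose per-worker influence on the master is bounded. The following is a minimal sketch of one such update pair; the function names, step size `lr`, and penalty weight `lam` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def rsa_master_step(x0, worker_msgs, grad_f0, lr=0.01, lam=0.1):
    """One master update (sketch). The subgradient of the l1
    proximity term lam * ||x_i - x0||_1 is lam * sign(x0 - x_i),
    so each worker's message shifts the master by at most
    lr * lam per coordinate, bounding Byzantine influence."""
    penalty = sum(np.sign(x0 - xi) for xi in worker_msgs)
    return x0 - lr * (grad_f0(x0) + lam * penalty)

def rsa_worker_step(xi, x0, grad_fi, lr=0.01, lam=0.1):
    """One honest-worker update (sketch): a stochastic subgradient
    of the local loss plus the l1 pull toward the master model."""
    return xi - lr * (grad_fi(xi) + lam * np.sign(xi - x0))
```

Because the master aggregates only the signs of the differences, a Byzantine worker sending an arbitrarily large vector perturbs the master no more than an honest worker does, which is the intuition behind the bounded learning error stated in the abstract.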