This paper develops a novel approach to the consensus problem of multi-agent
systems by minimizing a weighted state error with neighbor agents via linear
quadratic (LQ) optimal control theory. Existing consensus control algorithms
utilize only the current state of each agent, and the design of the distributed
controllers depends on the nonzero eigenvalues of the communication topology. The
presented optimal consensus controller is obtained by solving Riccati equations
and designing appropriate observers to account for agents' historical state
information. It is shown that the corresponding cost function under the
proposed controllers is asymptotically optimal. Simulation examples demonstrate
the effectiveness of the proposed scheme and its much faster convergence
compared with conventional consensus methods. Moreover, unlike traditional
consensus methods, the new approach avoids computing the nonzero eigenvalues
of the communication topology.
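To make the comparison concrete, the sketch below simulates the conventional baseline the abstract refers to: the standard Laplacian-based consensus iteration, whose admissible step size depends on the eigenvalues of the graph Laplacian. This is not the paper's LQ-optimal design; the 4-agent ring graph, initial states, and step size are illustrative assumptions.

```python
# Conventional discrete-time consensus protocol: x_{k+1} = x_k - eps * L x_k,
# where L is the graph Laplacian. Stability requires eps < 2 / lambda_max(L),
# which is why traditional designs need eigenvalue information about the topology.

def consensus_step(x, laplacian, eps):
    """One synchronous consensus update: x <- x - eps * (L @ x)."""
    n = len(x)
    return [x[i] - eps * sum(laplacian[i][j] * x[j] for j in range(n))
            for i in range(n)]

# Laplacian of a 4-agent ring graph: degree 2 on the diagonal, -1 per neighbor.
# Its eigenvalues are {0, 2, 2, 4}, so lambda_max = 4 and we need eps < 0.5.
L = [[ 2, -1,  0, -1],
     [-1,  2, -1,  0],
     [ 0, -1,  2, -1],
     [-1,  0, -1,  2]]

x = [1.0, 2.0, 3.0, 4.0]   # initial agent states; their average is 2.5
eps = 0.25                 # illustrative step size satisfying eps < 2 / lambda_max(L)

for _ in range(200):
    x = consensus_step(x, L, eps)

print(x)  # every agent's state is now close to 2.5, the average of the initial states
```

The convergence speed of this baseline is governed by the smallest nonzero Laplacian eigenvalue, which is exactly the topology-dependent quantity the proposed LQ-based design avoids computing.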