Parameter Optimization of Multi-Agent Formations based on LQR Design
In this paper we study the optimal formation control of multiple agents whose
interaction parameters are adjusted according to a cost function that combines the
control energy and the geometrical performance. By optimizing the interaction
parameters and the linear quadratic regulation (LQR) controllers, the upper
bound of the cost function is minimized. For systems with homogeneous agents
interconnected over sparse graphs, distributed controllers are proposed that
inherit the same underlying graph as the one among agents. For the more general
case, a relaxed optimization problem is considered so as to eliminate the
nonlinear constraints. Using the subgradient method, interaction parameters
among agents are optimized under the constraint of a sparse graph, and the
optimum of the cost function improves upon the one obtained when agents
interact only through the control channel. Numerical examples are provided to
validate the effectiveness of the method and to illustrate the geometrical
performance of the system.
Comment: Submitted
Control of Multi-Agent Formations with Only Shape Constraints
This paper considers a novel problem of how to choose an appropriate geometry
for a group of agents with only shape constraints but with a flexible scale.
Instead of assigning the formation system with a specific geometry, here the
only requirement on the desired geometry is a shape without any location,
rotation, and, most importantly, scale constraints. The optimal rigid transformation
between two different geometries is discussed with special focus on the
scaling operation, and the cooperative performance of the system is evaluated
by what we call the geometry's degree of similarity (DOS) with respect to the
desired shape during the entire convergence process. The design of the scale
when measuring the DOS is discussed from constant value and time-varying
function perspectives, respectively. Fixed-structure nonlinear control laws
that are functions of the scale are developed to guarantee exponential
convergence of the system to the assigned shape. Our research originates
from a three-agent formation system and is further extended to multiple (n > 3)
agents by defining a triangular complement graph. Simulations demonstrate that
the formation system with the time-varying scale function outperforms the one
with an arbitrary constant scale, and the relationship between the underlying
topology and the system performance is further discussed based on the
simulation observations. Moreover, the control scheme is applied to bearing-only
sensor-target localization to show its application potential.
Comment: Submitted
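The optimal rigid transformation with scaling that the abstract refers to can be sketched as a least-squares (Procrustes-style) alignment; the function names and the DOS normalization below are my own placeholders, not the paper's definitions:

```python
import numpy as np

def optimal_similarity(P, Q):
    """Optimal rotation R and scale s aligning centered shape P to Q
    in the least-squares (Procrustes) sense. Reflections are ignored."""
    Pc = P - P.mean(axis=0)
    Qc = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(Qc.T @ Pc)
    R = U @ Vt                       # optimal rotation
    s = S.sum() / np.sum(Pc ** 2)    # optimal scale
    return R, s

def degree_of_similarity(P, Q):
    """A shape-similarity score in [0, 1]; 1 means identical shapes."""
    R, s = optimal_similarity(P, Q)
    Pc = P - P.mean(axis=0)
    Qc = Q - Q.mean(axis=0)
    err = np.linalg.norm(s * (R @ Pc.T).T - Qc)
    return 1.0 - err / np.linalg.norm(Qc)

# A triangle and a scaled, rotated, translated copy have the same shape.
tri = np.array([[0., 0.], [1., 0.], [0.5, 1.]])
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
copy = 3.0 * (rot @ tri.T).T + np.array([5.0, -2.0])
```

Because location, rotation, and scale are optimized away, such a score depends only on the shape, which is exactly the kind of evaluation the abstract describes during the convergence process.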
Non-Chiral S-Matrix of N=4 Super Yang-Mills
We discuss the construction of the non-chiral S-matrix of four-dimensional N=4
super Yang-Mills using a non-chiral superspace. This construction utilizes the
non-chiral representation of dual superconformal symmetry, which is the natural
representation from the point of view of the six-dimensional parent theory. The
superspace under discussion is the projective superspace constructed by Hatsuda and
Siegel, and is based on a half coset U(2,2|4)/U(1,1|2)^2_+. We obtain the
non-chiral representation of the five-point and general n-point MHV and
anti-MHV amplitudes. The non-chiral formulation can be straightforwardly lifted
to six dimensions, which is equivalent to massive amplitudes in four
dimensions.
Comment: 30 pages, 2 figures
Exponential stability of nonhomogeneous matrix-valued Markovian chains
In this paper, we characterize the stability of matrix-valued Markovian
chains by periodic data.
Comment: 12 pages
CP and CPT Violating Parameters Determined from the Joint Decays of Entangled Neutral Pseudoscalar Mesons
Entangled pseudoscalar neutral meson pairs have been used in studying CP
violation and in searching for CPT violation, but almost all the previous works
concern the C = -1 entangled state. Here we consider the C = +1 entangled state of
pseudoscalar neutral mesons, which is quite different from the C = -1 entangled
state and provides complementary information on symmetry violating parameters.
After developing a general formalism, we consider three kinds of decay
processes, namely, semileptonic-semileptonic, hadronic-hadronic and
semileptonic-hadronic processes. For each kind of process, we calculate the
integrated rates of joint decays with a fixed time interval, as well as
asymmetries defined for these joint rates of different channels. In turn, these
asymmetries can be used to determine the four real numbers of the two indirect
symmetry violating parameters, based on a general relation between the symmetry
violating parameters and the decay asymmetries presented here. Various
discussions are made on indirect and direct violations and the violation of the
ΔF = ΔQ rule, with some results presented as theorems.
Comment: 22 pages, to appear in PR
Devaney's chaos revisited
In this note, we give several equivalent definitions of Devaney's chaos.
Chaos expansion of 2D parabolic Anderson model
We prove a chaos expansion for the 2D parabolic Anderson Model in small time,
with the expansion coefficients expressed in terms of the annealed density
function of the polymer in a white noise environment.
Comment: 11 pages, minor revision
Selection of the Regularization Parameter in the Ambrosio-Tortorelli Approximation of the Mumford-Shah Functional for Image Segmentation
The Ambrosio-Tortorelli functional is a phase-field approximation of the
Mumford-Shah functional that has been widely used for image segmentation. The
approximation has the advantages of being easy to implement, maintaining the
segmentation ability, and Γ-converging to the Mumford-Shah functional.
However, it has been observed in actual computation that the segmentation
ability of the Ambrosio-Tortorelli functional varies significantly with
different values of the parameter, and it even fails to Γ-converge to the
original functional for some cases. In this paper we present an asymptotic
analysis on the gradient flow equation of the Ambrosio-Tortorelli functional
and show that the functional can have different segmentation behavior for small
but finite values of the regularization parameter and eventually loses its
segmentation ability as the parameter goes to zero when the input image is
treated as a continuous function. This is consistent with the existing
observation as well as the numerical examples presented in this work. A
selection strategy for the regularization parameter and a scaling procedure for
the solution are devised based on the analysis. Numerical results show that
they lead to good segmentation of the Ambrosio-Tortorelli functional for real
images.
Comment: 22 pages
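For reference, the phase-field functional in question is commonly written as follows; this is the standard textbook form, and the paper's weights and symbols may differ:

```latex
AT_\varepsilon(u,v) = \beta \int_\Omega v^2 \,\lvert\nabla u\rvert^2 \,dx
  + \alpha \int_\Omega (u-g)^2 \,dx
  + \int_\Omega \Big( \varepsilon \,\lvert\nabla v\rvert^2
  + \frac{(1-v)^2}{4\varepsilon} \Big)\,dx
```

Here g is the input image, u the piecewise-smooth approximation, and v the edge indicator (v ≈ 0 near edges); ε is the regularization parameter whose selection the paper analyzes, and as ε → 0 the functional Γ-converges to the Mumford-Shah functional.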
Fast Stochastic Variance Reduced ADMM for Stochastic Composition Optimization
We consider the stochastic composition optimization problem proposed in
\cite{wang2017stochastic}, which has applications ranging from estimation to
statistical and machine learning. We propose the first ADMM-based algorithm
named com-SVR-ADMM, and show that com-SVR-ADMM converges linearly for strongly
convex and Lipschitz smooth objectives, and has a convergence rate of
$O(\log S/S)$, which improves upon the $O(S^{-4/9})$ rate in
\cite{wang2016accelerating} when the objective is convex and Lipschitz smooth.
Moreover, com-SVR-ADMM possesses a rate of $O(1/\sqrt{S})$ when the objective
is convex but without Lipschitz smoothness. We also conduct experiments and
show that it outperforms existing algorithms.
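To make the variance-reduction idea concrete, here is a minimal scalar sketch of an SVRG-style estimator for a composition f(g(x)); the toy problem, names, and step size are illustrative only, and the actual com-SVR-ADMM algorithm additionally carries ADMM variables:

```python
import numpy as np

# Toy composition problem: minimize f(g(x)) with inner expectation
# g(x) = mean_i (x - a_i) and outer function f(y) = y**2.
rng = np.random.default_rng(0)
a = rng.normal(size=100)

def f_grad(y):
    return 2.0 * y

x = 5.0
step = 0.1
for t in range(200):
    if t % 50 == 0:                  # periodic snapshot, as in SVRG
        snap = x
        g_snap = snap - a.mean()     # full inner value at the snapshot
    i = rng.integers(len(a))
    # Variance-reduced estimate of the inner value g(x): a sampled
    # difference around the snapshot plus the exact snapshot value.
    g_est = (x - a[i]) - (snap - a[i]) + g_snap
    # Chain rule with the estimated inner value (inner Jacobian is 1 here).
    x -= step * f_grad(g_est)

# x approaches the minimizer mean(a) of f(g(x)).
```

The key point is that the inner expectation g(x) is itself estimated with a control variate anchored at the snapshot, not just the gradient, which is what distinguishes composition optimization from ordinary stochastic optimization.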
AutoSlim: Towards One-Shot Architecture Search for Channel Numbers
We study how to set channel numbers in a neural network to achieve better
accuracy under constrained resources (e.g., FLOPs, latency, memory footprint or
model size). A simple and one-shot solution, named AutoSlim, is presented.
Instead of training many network samples and searching with reinforcement
learning, we train a single slimmable network to approximate the network
accuracy of different channel configurations. We then iteratively evaluate the
trained slimmable model and greedily slim the layer with minimal accuracy drop.
By this single pass, we can obtain the optimized channel configurations under
different resource constraints. We present experiments with MobileNet v1,
MobileNet v2, ResNet-50 and RL-searched MNasNet on ImageNet classification. We
show significant improvements over their default channel configurations. We
also achieve better accuracy than recent channel pruning methods and neural
architecture search methods.
Notably, by setting optimized channel numbers, our AutoSlim-MobileNet-v2 at
305M FLOPs achieves 74.2% top-1 accuracy, 2.4% better than default MobileNet-v2
(301M FLOPs), and even 0.2% better than RL-searched MNasNet (317M FLOPs). Our
AutoSlim-ResNet-50 at 570M FLOPs, without depthwise convolutions, achieves 1.3%
better accuracy than MobileNet-v1 (569M FLOPs). Code and models will be
available at: https://github.com/JiahuiYu/slimmable_networks
Comment: tech report
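The greedy slimming loop described above can be sketched as follows; the per-layer cost tables are hypothetical stand-ins for what would, in the actual method, come from evaluating the trained slimmable model:

```python
def greedy_slim(channels, flops_per_channel, acc_drop, budget):
    """Greedily shrink the layer whose channel reduction costs the least
    estimated accuracy until the total FLOPs budget is met.

    channels: current channel count per layer
    flops_per_channel: FLOPs contribution of one channel in each layer
    acc_drop: estimated accuracy drop for removing one channel from a layer
    budget: target total FLOPs
    """
    channels = list(channels)

    def total_flops():
        return sum(c * f for c, f in zip(channels, flops_per_channel))

    while total_flops() > budget:
        # Candidate layers that can still be slimmed.
        candidates = [i for i, c in enumerate(channels) if c > 1]
        # Slim where the estimated accuracy penalty is smallest.
        best = min(candidates, key=lambda i: acc_drop[i])
        channels[best] -= 1
    return channels

# Toy run: three layers; layer 1 is cheapest to slim, so it shrinks first.
cfg = greedy_slim(channels=[8, 8, 8],
                  flops_per_channel=[2, 1, 3],
                  acc_drop=[0.5, 0.1, 0.9],
                  budget=40)
```

Running the search once per budget yields one channel configuration per resource constraint, matching the "single pass" claim in the abstract.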