Towards time-varying proximal dynamics in Multi-Agent Network Games
Distributed decision making in multi-agent networks has recently attracted
significant research attention thanks to its wide applicability, e.g. in the
management and optimization of computer networks, power systems, robotic teams,
sensor networks and consumer markets. Distributed decision-making problems can
be modeled as inter-dependent optimization problems, i.e., multi-agent
game-equilibrium seeking problems, where noncooperative agents seek an
equilibrium by communicating over a network. To achieve a network equilibrium,
the agents may decide to update their decision variables via proximal dynamics,
driven by the decision variables of the neighboring agents. In this paper, we
provide an operator-theoretic characterization of convergence with a
time-invariant communication network. For the time-varying case, we consider
adjacency matrices that may switch subject to a dwell time. We illustrate our
investigations using a distributed robotic exploration example.
Comment: 6 pages, 3 figures
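The proximal dynamics described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's algorithm: it assumes hypothetical quadratic local costs f_i(x) = (x - c_i)^2 / 2, whose proximal operator has the closed form prox(v) = (v + c_i)/2, and a fixed row-stochastic adjacency matrix A; each agent repeatedly applies the prox to the network-weighted average of its neighbors' decisions.

```python
import numpy as np

# Illustrative sketch of proximal dynamics over a time-invariant network.
# Assumption: local cost f_i(x) = 0.5 * (x - c_i)^2, so the proximal
# operator is prox(v) = (v + c_i) / 2. A is row-stochastic, so the update
# map x -> (A x + c) / 2 is a contraction in the infinity norm.

def proximal_dynamics(A, c, steps=200):
    """Iterate the proximal update driven by neighbors' decisions."""
    x = np.zeros_like(c)
    for _ in range(steps):
        v = A @ x              # network-weighted average of neighbor decisions
        x = (v + c) / 2.0      # proximal step on the local quadratic cost
    return x

# Three agents on a line graph (hypothetical weights and targets)
A = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])
c = np.array([1.0, 2.0, 3.0])
x_eq = proximal_dynamics(A, c)
```

At the fixed point each agent's decision satisfies x = (A x + c)/2, i.e., a network equilibrium of the proximal best-response map.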
Worst-Case Robust Distributed Power Allocation in Shared Unlicensed Spectrum
This paper considers non-cooperative and fully-distributed power allocation
for selfish transmitter-receiver pairs in shared unlicensed spectrum when
normalized-interference to each receiver is uncertain. We model each uncertain
parameter by the sum of its nominal (estimated) value and a bounded additive
error in a convex set, and show that the allocated power always converges to
its equilibrium, called robust Nash equilibrium (RNE). In the case of a bounded
and symmetric uncertainty region, we show that the power allocation problem for
each user is simplified, and can be solved in a distributed manner. We derive
the conditions for RNE's uniqueness and for convergence of the distributed
algorithm; and show that the total throughput (social utility) is less than
that at NE when RNE is unique. We also show that for multiple RNEs, the social
utility may be higher at an RNE as compared to that at the corresponding NE, and
demonstrate that this is caused by users' orthogonal utilization of bandwidth
at RNE. Simulations confirm our analysis.
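A worst-case-robust distributed iteration of the general kind described above can be sketched as follows. This is a hedged illustration, not the paper's algorithm: it assumes target-SINR best-response dynamics (the paper analyses game-theoretic utilities), models each uncertain cross-gain as a nominal value g plus a bounded error eps, and has every user protect against the worst case by inflating the interference it assumes.

```python
import numpy as np

# Hypothetical sketch: each user repeatedly sets its power to meet a
# target SINR gamma_i against the worst-case interference implied by the
# bounded uncertainty (nominal gain + error bound), capped at p_max.

def robust_power_iteration(g, eps, gamma, noise, p_max, iters=500):
    p = np.zeros(len(gamma))
    worst = g + eps                 # worst-case cross gains in the error set
    np.fill_diagonal(worst, 0.0)   # no self-interference
    for _ in range(iters):
        interference = worst @ p + noise
        p = np.minimum(p_max, gamma * interference)  # robust best response
    return p

# Two transmitter-receiver pairs (illustrative numbers)
g = np.array([[1.0, 0.1], [0.2, 1.0]])
eps = np.full((2, 2), 0.05)
gamma = np.array([2.0, 1.5])
noise = np.array([0.1, 0.1])
p_star = robust_power_iteration(g, eps, gamma, noise, p_max=5.0)
```

When the spectral radius of the scaled worst-case interference matrix is below one, the iteration is a contraction and the powers converge to a unique fixed point, mirroring the uniqueness-and-convergence conditions the paper derives for its RNE.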
Physical demand but not dexterity is associated with motor flexibility during rapid reaching in healthy young adults
Healthy humans are able to place light and heavy objects in small and large target locations with remarkable accuracy. Here we examine how dexterity demand and physical demand affect flexibility in joint coordination and end-effector kinematics when healthy young adults perform an upper extremity reaching task. We manipulated dexterity demand by changing target size and physical demand by increasing external resistance to reaching. Uncontrolled manifold analysis was used to decompose variability in joint coordination patterns into variability stabilizing the end-effector and variability de-stabilizing the end-effector during reaching. Our results demonstrate a proportional increase in stabilizing and de-stabilizing variability without a change in the ratio of the two variability components as physical demands increase. We interpret this finding in the context of previous studies showing that sensorimotor noise increases with increasing physical demands. We propose that the larger de-stabilizing variability as a function of physical demand originated from larger sensorimotor noise in the neuromuscular system. The larger stabilizing variability with larger physical demands is a strategy employed by the neuromuscular system to counter the de-stabilizing variability so that performance stability is maintained. Our findings have practical implications for improving the effectiveness of movement therapy in a wide range of patient groups, for maintaining upper extremity function in older adults, and for maximizing athletic performance.
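The uncontrolled manifold (UCM) decomposition used above can be sketched in a few lines. This is a minimal illustration under a linearized forward model e = J q (the function and variable names are ours, not the paper's): joint-configuration deviations across trials are projected onto the null space of the task Jacobian J, where they leave the end-effector unchanged ("stabilizing" variability), and onto its orthogonal complement, where they perturb it ("de-stabilizing" variability).

```python
import numpy as np

# Sketch of a UCM variance decomposition, assuming a linear task map e = J q.
def ucm_decomposition(joint_angles, J):
    """joint_angles: trials x n_joints; J: task_dim x n_joints.
    Returns per-DOF variance within (stabilizing) and orthogonal to
    (de-stabilizing) the null space of J."""
    dq = joint_angles - joint_angles.mean(axis=0)
    # Orthonormal basis of the null space of J via SVD
    _, s, Vt = np.linalg.svd(J)
    rank = int(np.sum(s > 1e-10))
    null_basis = Vt[rank:].T            # n_joints x (n_joints - rank)
    proj_null = dq @ null_basis         # components that leave e unchanged
    proj_orth = dq - proj_null @ null_basis.T   # components that move e
    n_trials = dq.shape[0]
    dof_null = null_basis.shape[1]
    dof_orth = J.shape[1] - dof_null
    v_ucm = np.sum(proj_null**2) / (dof_null * n_trials)
    v_ort = np.sum(proj_orth**2) / (dof_orth * n_trials)
    return v_ucm, v_ort
```

A proportional increase of v_ucm and v_ort, as reported in the abstract, leaves their ratio, and hence the degree of end-effector stabilization, unchanged.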
On the meaning of feedback parameter, transient climate response, and the greenhouse effect: Basic considerations and the discussion of uncertainties
In this paper we discuss the meaning of feedback parameter, greenhouse effect
and transient climate response usually related to the globally averaged energy
balance model of Schneider and Mass. After scrutinizing this model and the
corresponding planetary radiation balance we state that (a) this globally
averaged energy balance model is flawed by unsuitable physical considerations,
(b) the planetary radiation balance for an Earth in the absence of an
atmosphere is fraught with the inappropriate assumption of a uniform surface
temperature, the so-called radiative equilibrium temperature of about 255 K,
and (c) the effect of the radiative anthropogenic forcing, considered as a
perturbation to the natural system, is much smaller than the uncertainty
involved in the solution of the model of Schneider and Mass. This uncertainty
is mainly related to the empirical constants suggested by various authors and
used for predicting the emission of infrared radiation by the Earth's skin.
Furthermore, after inserting the absorption of solar radiation by atmospheric
constituents and the exchange of sensible and latent heat between the Earth and
the atmosphere into the model of Schneider and Mass the surface temperatures
become appreciably lower than the radiative equilibrium temperature. Moreover,
neither the model of Schneider and Mass nor the Dines-type two-layer energy
balance model for the Earth-atmosphere system, both of which contain the
planetary radiation balance for an Earth in the absence of an atmosphere as an
asymptotic solution, provides evidence for the existence of the so-called
atmospheric greenhouse effect if realistic empirical data are used.
Comment: 69 pages, 3 tables and 16 figures
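A Schneider-and-Mass-type globally averaged energy balance model of the kind scrutinized above can be written as C dT/dt = (1 - alpha) S / 4 - (A + B T), where the outgoing infrared flux is the empirical linear fit A + B T (T in degrees Celsius). The sketch below uses illustrative values of A and B of the kind that appear in the literature, not the specific constant sets the paper compares; the abstract's point is precisely that the predicted temperature is sensitive to this choice.

```python
# Hedged sketch of a globally averaged energy balance model,
#   C dT/dt = (1 - alpha) * S / 4 - (A + B * T),
# with an empirical linear infrared emission law A + B*T (T in deg C).
# Constants are illustrative, not the paper's specific parameter sets.

def integrate_ebm(T0=0.0, S=1367.0, alpha=0.30, A=203.3, B=2.09,
                  C=2.0e8, dt=86400.0, steps=100 * 365):
    """Forward-Euler integration toward the equilibrium surface temperature."""
    T = T0
    for _ in range(steps):
        absorbed = (1.0 - alpha) * S / 4.0   # absorbed solar flux, W m^-2
        emitted = A + B * T                  # empirical infrared emission
        T += dt * (absorbed - emitted) / C
    return T

T_eq = integrate_ebm()
```

The equilibrium is T = ((1 - alpha) S / 4 - A) / B, so any change in the empirical constants A and B shifts the predicted temperature directly, which is the uncertainty the abstract emphasizes.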
LARES: a new satellite specifically designed for testing general relativity
It is estimated that today several hundred operational satellites are orbiting Earth, while many more have either already re-entered the atmosphere or are no longer operational. On 13 February 2012 one more satellite of the Italian Space Agency was successfully launched. The main difference with respect to all other satellites is its extremely high density, which makes LARES (LAser RElativity Satellite) not only the densest satellite but the densest known orbiting object in the solar system. This implies that non-gravitational perturbations acting on its surface have the smallest effects on its orbit of any artificial orbiting object. These design characteristics are required to perform an accurate test of frame dragging, and specifically a test of the Lense-Thirring effect predicted by General Relativity. The LARES satellite is passive and covered with 92 retroreflectors. Laser pulses, sent from several ground stations, allow an accurate orbit determination. Along with this last aspect and the special design mentioned above, one has to take into account the effects of the Earth's gravitational perturbations due to the deviation of the gravitational potential from spherical symmetry. To this aim, the latest determinations of the Earth's gravitational field, produced using gravitational data from several dedicated space missions including GRACE, combined with data from three laser-ranged satellites, are used in the LARES experiment. In spite of its simplicity, LARES was a real engineering challenge both in terms of manufacturing and testing. The launch was performed with the VEGA qualification flight provided by the European Space Agency. Data acquisition and processing are in progress. The paper describes the scientific objectives, the status of the experiment, the special features of the satellite and separation system, including some manufacturing issues, and the special tests performed on its retroreflectors.
A Parallel Monte Carlo Code for Simulating Collisional N-body Systems
We present a new parallel code for computing the dynamical evolution of
collisional N-body systems with up to N~10^7 particles. Our code is based on
the Henon Monte Carlo method for solving the Fokker-Planck equation, and
makes assumptions of spherical symmetry and dynamical equilibrium. The
principal algorithmic developments involve optimizing data structures and
introducing a parallel random number generation scheme, as well as a
parallel sorting algorithm, required to find nearest neighbors for interactions
and to compute the gravitational potential. The new algorithms we introduce
along with our choice of decomposition scheme minimize communication costs and
ensure optimal distribution of data and workload among the processing units.
The implementation uses the Message Passing Interface (MPI) library for
communication, which makes it portable to many different supercomputing
architectures. We validate the code by calculating the evolution of clusters
with initial Plummer distribution functions up to core collapse with the number
of stars, N, spanning three orders of magnitude, from 10^5 to 10^7. We find
that our results are in good agreement with self-similar core-collapse
solutions, and the core collapse times generally agree with expectations from
the literature. Also, we observe good total energy conservation, within less
than 0.04% throughout all simulations. We analyze the performance of the code,
and demonstrate near-linear scaling of the runtime with the number of
processors up to 64 processors for N=10^5, 128 for N=10^6 and 256 for N=10^7.
The runtime saturates as more processors are added beyond these limits,
which is a characteristic of the parallel sorting algorithm. The
resulting maximum speedups we achieve are approximately 60x, 100x, and 220x,
respectively.
Comment: 53 pages, 13 figures, accepted for publication in ApJ Supplement
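One reason a Henon-type Monte Carlo code needs radius-sorted stars, as the abstract notes for nearest-neighbor search and potential computation, is that under spherical symmetry the potential at star i depends only on the mass enclosed within r_i and on the exterior shells, Phi(r_i) = -G (M(<r_i)/r_i + sum over r_j > r_i of m_j/r_j), both of which become prefix/suffix sums over the sorted radii. The serial sketch below is illustrative (self-interaction excluded; conventions vary, and units are arbitrary); the paper's code evaluates the same quantity with a parallel sort and distributed data.

```python
import numpy as np

# Illustrative serial computation of the spherically symmetric potential
# at each star via prefix/suffix sums over radius-sorted stars.
def potentials(radii, masses, G=1.0):
    order = np.argsort(radii)
    r, m = radii[order], masses[order]
    # Mass strictly interior to each star (self-mass excluded)
    m_enclosed = np.concatenate(([0.0], np.cumsum(m)[:-1]))
    # Sum of m_j / r_j over stars strictly exterior to each star
    exterior = np.concatenate((np.cumsum((m / r)[::-1])[::-1][1:], [0.0]))
    phi_sorted = -G * (m_enclosed / r + exterior)
    phi = np.empty_like(phi_sorted)
    phi[order] = phi_sorted        # undo the sort
    return phi
```

Because both sums are computed in a single pass over the sorted array, the dominant cost is the sort itself, which is why the parallel sorting algorithm sets the scaling limits reported in the abstract.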