    Semiglobal leader-following consensus for generalized homogenous agents

    In the present paper, the leader-following consensus problem is investigated and sufficient conditions are given for its solvability, assuming that the agents are described by nonlinear dynamics that are incrementally homogeneous in the upper bound.
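
    As a hedged illustration of the flavor of leader-following consensus, here is a minimal sketch for single-integrator followers tracking a constant leader. The paper's setting is far more general (nonlinear agents, incrementally homogeneous in the upper bound); the communication graph, pinning pattern, and gain k below are assumptions made purely for illustration.

    ```python
    import numpy as np

    # Leader-following consensus sketch for single-integrator followers
    # x_i' = u_i tracking a constant leader state x0. Graph, pinning
    # pattern, and gain k are illustrative assumptions.

    A = np.array([[0., 1., 0.],     # adjacency among the three followers
                  [1., 0., 1.],
                  [0., 1., 0.]])
    b = np.array([1., 0., 0.])      # only follower 0 measures the leader
    k, dt = 2.0, 0.01

    x = np.array([1.0, -2.0, 0.5])  # follower states
    x0 = 0.0                        # leader state

    for _ in range(2000):
        u = np.array([-k * (A[i] @ (x[i] - x) + b[i] * (x[i] - x0))
                      for i in range(3)])
        x = x + dt * u              # Euler step of x_i' = u_i

    print(x)                        # all followers end up near the leader
    ```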

    Incremental Stochastic Subgradient Algorithms for Convex Optimization

    In this paper we study the effect of stochastic errors on two constrained incremental subgradient algorithms. We view the incremental subgradient algorithms as decentralized network optimization algorithms applied to minimize a sum of functions, where each component function is known only to a particular agent of a distributed network. We first study the standard cyclic incremental subgradient algorithm, in which the agents form a ring structure and pass the iterate around the cycle. We consider the method with stochastic errors in the subgradient evaluations and provide sufficient conditions on the moments of the stochastic errors that guarantee almost sure convergence when a diminishing step-size is used. We also obtain almost sure bounds on the algorithm's performance when a constant step-size is used. We then consider the Markov randomized incremental subgradient method, a non-cyclic version of the incremental algorithm in which the sequence of computing agents is modeled as a time non-homogeneous Markov chain. Such a model is appropriate for mobile networks, where the network topology changes over time. We establish convergence results and error bounds for the Markov randomized method in the presence of stochastic errors for diminishing and constant step-sizes, respectively.
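
    As a rough sketch of the cyclic variant described above, the following minimal loop passes the iterate around a ring of agents, each applying one noisy subgradient step with a diminishing step-size. The component functions f_i(x) = |x - a_i|, the Gaussian error model, and the schedule alpha_k = 1/k are illustrative assumptions, not the paper's exact setting.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Cyclic incremental subgradient sketch: m agents in a ring each hold one
    # component f_i(x) = |x - a_i| of the objective sum_i f_i(x). The iterate
    # is passed around the cycle; each agent takes one noisy subgradient step.
    # Objective, noise model, and step-size schedule are illustrative choices.

    a = np.array([1.0, 3.0, 4.0, 8.0])       # per-agent data; sum_i |x - a_i|
    m = len(a)
    x = 0.0

    for k in range(1, 5001):
        alpha = 1.0 / k                      # diminishing step-size
        for i in range(m):                   # one full pass around the ring
            g = np.sign(x - a[i])            # subgradient of |x - a_i| at x
            g += 0.1 * rng.standard_normal() # stochastic subgradient error
            x -= alpha * g

    print(x)   # approaches a minimizer (any median of a, here in [3, 4])
    ```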

    Expectations and Adaptive Behaviours: the Missing Trade-off in Models of Innovation

    We explore the modelling of firms' R&D investment levels. This means that we tackle neither the decision to become an innovator nor the adoption of a new technology. We exclude these decisions and focus on situations where firms invest in internal R&D in order to produce an innovation; the problem is then to determine the level of R&D investment. Our interest is to analyse how expectation and adaptation can be combined in the modelling of R&D investment rules. In the literature the two dimensions are generally split up: rational expectations are assumed in neoclassical models, whereas alternative (institutional and/or evolutionary) approaches generally adopt a purely adaptive representation.

    Keywords: bounded rationality, learning, expectations, innovation dynamics.
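
    One toy way to picture the combination the abstract calls for is a rule that mixes an adaptive (backward-looking) component with an expectation (forward-looking) component. The functional forms and the mixing weight w below are assumptions for illustration only, not the paper's model.

    ```python
    def rd_investment(past_rd, past_profit, expected_profit, w=0.5, step=0.1):
        """Blend an adaptive rule with an expectation-based rule (toy example)."""
        # Adaptive part: nudge last period's R&D up or down with realized profit.
        adaptive = past_rd * (1 + step * (1 if past_profit > 0 else -1))
        # Expectation part: invest a fixed share of anticipated profit.
        forward = 0.2 * expected_profit
        # w = 1 recovers a purely adaptive rule, w = 0 a purely forward rule.
        return w * adaptive + (1 - w) * forward

    print(rd_investment(past_rd=10.0, past_profit=2.0, expected_profit=50.0))
    ```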

    Output consensus of nonlinear multi-agent systems with unknown control directions

    In this paper, we consider an output consensus problem for a general class of nonlinear multi-agent systems without prior knowledge of the agents' control directions. Two distributed Nussbaum-type control laws are proposed to solve the leaderless and leader-following adaptive consensus problems for heterogeneous agents. Examples and simulations are given to verify their effectiveness.
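
    For intuition, here is a minimal single-agent sketch of the classical Nussbaum-gain idea that underlies such control laws: the gain N(k) = k^2 cos(k) swings in sign, so the controller eventually "finds" the unknown control direction. The scalar plant, the update law, and all constants are standard textbook choices assumed for illustration; the paper's distributed multi-agent laws are more involved.

    ```python
    import numpy as np

    # Single-agent Nussbaum-gain sketch for an unknown control direction:
    # scalar plant x' = x + b*u where sign(b) is unknown to the controller.
    # N(k) = k^2*cos(k), the update law k' = x^2, and the plant itself are
    # standard textbook choices, assumed here for illustration only.

    def N(k):
        return k**2 * np.cos(k)

    b = -2.0                     # true control gain; its sign is "unknown"
    x, k, dt = 1.0, 0.0, 1e-3

    for _ in range(50_000):      # forward-Euler simulation
        u = N(k) * x             # Nussbaum-type control law
        x += dt * (x + b * u)    # plant dynamics
        k += dt * x**2           # adaptation of the Nussbaum argument

    print(x, k)                  # x is driven to ~0 without knowing sign(b)
    ```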

    Connecting adaptive behaviour and expectations in models of innovation: The Potential Role of Artificial Neural Networks

    In this methodological work I explore the possibility of explicitly modelling the expectations that condition firms' R&D decisions. In order to isolate this problem from the controversies of cognitive science, I propose a black-box strategy based on the concept of an "internal model". The last part of the article uses artificial neural networks to model the expectations of firms in a model of industry dynamics based on Nelson & Winter (1982).
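
    A toy sketch of the black-box "internal model" idea: a small neural network is trained on the firm's own past (R&D level, realized profit) observations and then queried to form expected profits, which drive the R&D choice. The architecture, the synthetic data, and the candidate grid are assumptions for illustration, not the paper's specification.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # "Internal model" toy: one-hidden-layer network trained by batch gradient
    # descent on past (R&D -> profit) pairs, then queried over a grid of
    # candidate R&D levels. Data, sizes, and learning rate are illustrative.

    X = rng.uniform(0, 10, (200, 1))                       # past R&D levels
    y = 5 * X[:, 0] - 0.6 * X[:, 0] ** 2 + rng.normal(0, 1, 200)  # past profits

    W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)      # hidden layer
    W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)      # output layer
    lr, n = 5e-3, len(y)

    for _ in range(5000):
        H = np.tanh(X @ W1 + b1)
        err = (H @ W2 + b2)[:, 0] - y                      # prediction error
        dH = err[:, None] @ W2.T * (1 - H ** 2)            # backprop through tanh
        W2 -= lr * (H.T @ err[:, None]) / n; b2 -= lr * err.mean(keepdims=True)
        W1 -= lr * (X.T @ dH) / n;          b1 -= lr * dH.mean(axis=0)

    grid = np.linspace(0, 10, 101)[:, None]                # candidate R&D levels
    expected = np.tanh(grid @ W1 + b1) @ W2 + b2           # firm's expectations
    print(grid[expected.argmax(), 0])                      # chosen R&D level
    ```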

    Planning for Decentralized Control of Multiple Robots Under Uncertainty

    We describe a probabilistic framework for synthesizing control policies for general multi-robot systems, given environment and sensor models and a cost function. Decentralized, partially observable Markov decision processes (Dec-POMDPs) are a general model of decision processes in which a team of agents must cooperate to optimize some objective (specified by a shared reward or cost function) under uncertainty, but where communication limitations mean that the agents cannot share their state, so execution must proceed in a decentralized fashion. While Dec-POMDPs are typically intractable to solve for real-world problems, recent research on the use of macro-actions in Dec-POMDPs has significantly increased the size of problems that can be solved in practice. We describe this general model and show how, in contrast to most existing methods that are specialized to a particular problem class, it can synthesize control policies that exploit whatever opportunities for coordination are present in the problem, while balancing uncertainty in outcomes, sensor information, and information about other agents. We use three variations on a warehouse task to show that a single planner of this type can generate cooperative behavior using task allocation, direct communication, and signaling, as appropriate.
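
    As a hedged sketch of the Dec-POMDP structure described above, the snippet below encodes the model tuple and a decentralized execution loop in which each agent acts only on its own local observation history. The field names, signatures, and loop are illustrative assumptions, not the planner from the paper.

    ```python
    from dataclasses import dataclass
    from typing import Callable, Dict, List, Tuple

    # Minimal container for a Dec-POMDP tuple <I, S, {A_i}, T, R, {Z_i}>.
    # Names and signatures are an illustrative sketch, not the paper's planner.

    @dataclass
    class DecPOMDP:
        agents: List[str]
        states: List[str]
        actions: Dict[str, List[str]]                      # per-agent action sets
        transition: Callable[[str, Tuple[str, ...]], str]  # T(s, joint_a) -> s'
        reward: Callable[[str, Tuple[str, ...]], float]    # shared R(s, joint_a)
        observe: Callable[[str, str, str], str]            # Z_i(agent, s', a_i) -> o_i

    def run_episode(model: DecPOMDP, policies, s0: str, horizon: int) -> float:
        """Decentralized execution: each agent maps only its own local
        observation history to an action; no state or history is shared."""
        s, total = s0, 0.0
        histories = {i: () for i in model.agents}
        for _ in range(horizon):
            joint = tuple(policies[i](histories[i]) for i in model.agents)
            total += model.reward(s, joint)
            s = model.transition(s, joint)
            for idx, i in enumerate(model.agents):
                histories[i] += (model.observe(i, s, joint[idx]),)
        return total
    ```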