
    Evolution and Correlated Equilibrium

    We show that a set of outcomes outside the convex hull of Nash equilibria can be asymptotically stable with respect to convex monotonic evolutionary dynamics. Boundedly rational agents receive signals and condition their choice of strategies on the signals. A set of conditional strategies is asymptotically stable only if it represents a strict (correlated-)equilibrium set. There are correlated equilibria that cannot be represented by an asymptotically stable signal-contingent strategy. For generic games it is shown that if signals are endogenous but no player has an incentive to manipulate the signal-generating process, and if the signal-contingent strategy is asymptotically stable, then, and only then, the outcome must be a strict Nash equilibrium.

    Keywords: Dynamic Stability, Noncooperative Games, Correlated Equilibrium, Evolution
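The correlated-equilibrium condition the abstract relies on can be checked directly: a distribution over action profiles is a correlated equilibrium if every recommended action is a best response to the conditional distribution it induces over the opponent's recommendations. Below is a minimal sketch using Aumann's (1987) Chicken example; the specific payoff numbers are an illustrative assumption, not taken from this paper.

```python
import numpy as np

# Aumann's (1987) Chicken example, for illustration only.
# Actions: 0 = Dare, 1 = Chicken.
A = np.array([[0.0, 7.0],
              [2.0, 6.0]])   # row player's payoffs
B = A.T                      # symmetric game: column player's payoffs

# Candidate correlated equilibrium: mass 1/3 each on (D,C), (C,D), (C,C).
mu = np.array([[0.0, 1/3],
               [1/3, 1/3]])

def is_correlated_equilibrium(A, B, mu, tol=1e-9):
    """Obedience check: each recommended action must be a best response
    to the conditional distribution over the opponent's recommendations."""
    n, m = mu.shape
    for i in range(n):                       # row player told to play i
        for k in range(n):                   # possible deviation k
            if (A[k, :] - A[i, :]) @ mu[i, :] > tol:
                return False
    for j in range(m):                       # column player told to play j
        for k in range(m):
            if mu[:, j] @ (B[:, k] - B[:, j]) > tol:
                return False
    return True

print(is_correlated_equilibrium(A, B, mu))   # → True
```

This distribution yields each player an expected payoff of 5, outside the convex hull of the Nash payoffs, which is exactly the kind of outcome whose evolutionary stability the paper examines.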

    Voronoi languages: Equilibria in cheap-talk games with high-dimensional types and few signals

    We study a communication game of common interest in which the sender observes one of infinitely many types and sends one of finitely many messages, which is interpreted by the receiver. In equilibrium there is no full separation; instead, types are clustered into convex categories. We give a full characterization of the strict Nash equilibria of this game by representing these categories as Voronoi languages. As the strategy set is infinite, static stability concepts for finite games such as ESS are no longer sufficient for Lyapunov stability in the replicator dynamics. We give examples of unstable strict Nash equilibria and stable inefficient Voronoi languages. We derive efficient Voronoi languages with a large number of categories and numerically illustrate the stability of some Voronoi languages with large message spaces and non-uniformly distributed types.

    Keywords: Cheap Talk, Signaling Game, Communication Game, Dynamic Stability, Voronoi Tessellation
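The fixed-point structure described here, where sender categories are Voronoi cells of the receiver's interpretations and each interpretation is the conditional mean of its cell, can be sketched with a Lloyd-style best-response iteration. The setup below is an assumption for illustration (types uniform on the unit square, quadratic loss, three messages), not the paper's exact model.

```python
import numpy as np

# Illustrative assumptions: uniform types on [0,1]^2, quadratic loss, 3 messages.
rng = np.random.default_rng(0)
types = rng.random((5000, 2))          # sampled types in the unit square
prototypes = rng.random((3, 2))        # receiver's interpretation of each message

for _ in range(50):                    # alternate best responses (Lloyd iteration)
    # Sender: each type sends the message whose interpretation is nearest;
    # this induces convex Voronoi categories over the type space.
    d = np.linalg.norm(types[:, None, :] - prototypes[None, :, :], axis=2)
    msg = d.argmin(axis=1)
    # Receiver: interpret each message as the mean of the types that send it.
    for k in range(3):
        if (msg == k).any():
            prototypes[k] = types[msg == k].mean(axis=0)

# At a fixed point, the categories are the Voronoi cells of the interpretations:
# a (strict) Nash equilibrium of the cheap-talk game, i.e. a Voronoi language.
```

Depending on the initial prototypes, the iteration can settle on inefficient languages, mirroring the paper's observation that stable but inefficient Voronoi languages exist.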

    Robust object detection for video surveillance using stereo vision and Gaussian mixture model

    In this paper, a novel approach is presented for intrusion detection in wide-area outdoor surveillance, such as construction-site monitoring, using a rotatable stereo camera system combined with a multi-pose object segmentation process. Many current surveillance applications use monocular cameras, which are sensitive to illumination changes and cast shadows. Additionally, object classification, spatial measurement, and localization using the 2D projection of a 3D world are ambiguous. Hence, a stereo camera is used to calculate a 3D point cloud of the scene, which is nearly unaffected by illumination changes, enabling robust object detection and localization in 3D space. The limited viewing range of the stereo camera is expanded by mounting it on a rotatable tripod. To detect objects in different poses of the camera, pose-specific Gaussian Mixture Models (GMMs) are used. However, illumination changes outside the current field of view of the camera, or spontaneously changing lighting conditions caused by, e.g., lights controlled by motion sensors, would lead to false positives in the segmentation process if brightness values were used. Hence, segmentation is performed on the calculated point cloud, which is demonstrated to be robust against changing illumination and cast shadows by comparing the results of the proposed method with other state-of-the-art segmentation methods on a database of self-captured images of a public outdoor area.
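The core idea of modelling the background with a per-pixel Gaussian mixture over geometric rather than brightness values can be sketched as below. This is a simplified Stauffer-Grimson-style update on a single depth feature per pixel; all parameter values, the frame size, and the single-feature simplification are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Per-pixel GMM background model on depth values (illustrative sketch).
K, lr = 3, 0.05                              # modes per pixel, learning rate
H, W = 4, 4                                  # tiny frame for illustration
mu = np.zeros((H, W, K))                     # mode means (depth)
var = np.ones((H, W, K))                     # mode variances
w = np.full((H, W, K), 1 / K)                # mode weights

def update(depth):
    """One GMM update step on a depth frame; returns a foreground mask."""
    global mu, var, w
    d = np.abs(depth[..., None] - mu)                 # distance to each mode
    match = d < 2.5 * np.sqrt(var)                    # within 2.5 sigma?
    dm = np.where(match, d, np.inf)
    best = np.where(match.any(-1), dm.argmin(-1), -1) # nearest matching mode
    fg = best == -1                                   # no match => foreground
    for k in range(K):                                # update matched modes
        m = best == k
        w[..., k] = (1 - lr) * w[..., k] + lr * m
        mu[..., k][m] += lr * (depth[m] - mu[..., k][m])
        var[..., k][m] += lr * ((depth[m] - mu[..., k][m]) ** 2 - var[..., k][m])
    if fg.any():                                      # where nothing matched,
        weakest = w.argmin(-1)                        # replace the weakest mode
        r, c = np.nonzero(fg)
        mu[r, c, weakest[r, c]] = depth[fg]
        var[r, c, weakest[r, c]] = 1.0
        w[r, c, weakest[r, c]] = lr
    w /= w.sum(-1, keepdims=True)                     # renormalize weights
    return fg
```

Because the modelled feature is depth from the point cloud, a light switching on changes pixel brightness but not the modelled values, which is the robustness property the paper exploits.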

    Evolution in Structured Populations

    How does social and economic interaction of agents within large populations depend on their perception of the matching structure? When do evolutionary dynamics with limited information processing lead to stable outcomes prescribed by rational concepts? In the chapter "Anticipated Stability in Social and Economic Networks", I model agents who meet with non-uniform probabilities. As friends or colleagues are more likely to interact frequently, this deviation from uniform matching seems plausible. A convenient approach to modelling such conditional interaction is network formation. I transfer Jackson & Wolinsky (1996) and its dynamic interpretation, Jackson & Watts (2002), to a non-cooperative model of network formation. Unsurprisingly, a pairwise stable network results from a Nash equilibrium. My focus is rather on closed cycles. A closed cycle is a subset of networks all of whose members are visited periodically; such a set could also be interpreted as a random graph. Jackson & Watts (2002) show that the process of network formation eventually stops in a pairwise stable network or is stuck in a closed cycle. The point of my paper is that this result crucially depends on the assumption of myopic optimization. I propose that agents may hold beliefs that are consistent with actual behavior for networks that have distance less than κ to the current network, and optimize given these beliefs. The parameter κ captures the computational capabilities of the agents. If κ is large enough, small cycles can be excluded if agents anticipate such cycles. I define anticipated stability as the result of optimization given consistent beliefs around the current network. If κ is very large, agents are required to hold sophisticated beliefs for any network, as for example in Dutta, Ghosal and Ray (2005). For large populations this requirement seems implausible to me, since the number of dimensions of the strategy space grows fast with the population size. My concept can be flexibly adapted to small and large populations by fixing a large or small κ. It may seem promising to apply this concept to infinite game trees in which a distance function on the set of nodes is plausible.

    In the chapter "Evolution and Correlated Equilibrium" I define a game in strategic form in which players receive signals and choose strategies. According to Aumann (1987), rational play induces a correlated equilibrium distribution on the set of outcomes. Players are rational if they compute conditional probabilities of the signals received by other players and optimize given this information and the equilibrium strategies of their opponents. I analyze a setting in which players do not know the signal-generating process, are not able to apply Bayes' rule, and do not hold beliefs over the set of strategies chosen by their opponents. I approach the concept of correlated equilibrium with an evolutionary methodology and show that even if agents display extreme bounded rationality, some correlated equilibria remain plausible (they are stable with respect to imitation dynamics). The general formulation of the signal generation encompasses Lenzo & Sarver (2005) and Mailath, Samuelson & Shaked (1997). I apply the concept of strict equilibrium sets by Balkenborg (1994) and thereby characterize asymptotically stable sets of correlated equilibrium strategies with respect to the convex monotonic dynamics of Hofbauer & Weibull (1996). Balkenborg & Schlag (2007) and Cressman (2003) show similar characterizations with respect to distinct dynamics. With this framework at hand I turn to characterizing robust signals: in which situations would agents not influence the signal-generating process if they could? I show that if one requires asymptotically stable behavior given signals, only signals inducing strict Nash equilibria yield no incentives to influence the process of signals. If one only imposes the weaker requirement of Aumann's rational play, I show for the example of the Chicken Game that only those signals are robust to manipulation whose equilibrium play yields payoffs within the convex hull of the Nash payoffs. It remains to be studied whether this implication transfers to other games.

    The third chapter, "Persistent Ideologies in an Evolutionary Setting", was inspired by the discussion of religious topics partially initiated by Richard Dawkins. From my point of view, religion attaches a set of unverifiable consequences to the set of material consequences of interaction. I show that a religion that views these consequences as qualitatively different from the material consequences may face no disadvantage, even if agents adopt the religion more frequently when the recommended behavior yields relatively high material payoffs. I hereby criticize the approach of selecting certain preferences by evolutionary methods, as my approach allows for more general interpretations such as ideologies or preferences. I generalize Sandholm (2001), in which agents are biased toward one of two actions in symmetric games. In my model agents hold biases for outcomes of general asymmetric games in normal form. I assume that agents choose optimal actions given their bias and given their belief about the action choice of their opponents. Biases are heterogeneous within the population and unobserved by other players. I show that if one is willing to adopt the 'indirect evolutionary approach' of faster growth of preferences that induce relatively successful behavior, then, in a general model, situations in which no agents hold preferences equivalent to the material payoffs are stable for (almost all) games in strategic form. This is contrary to Ok & Vega-Redondo's (2001) result in a different setup.
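The stability notion running through these chapters, asymptotic stability of strict equilibria under payoff-monotonic dynamics, can be illustrated with the replicator dynamics in a symmetric coordination game. The payoff numbers below are an illustrative assumption, not taken from the thesis.

```python
import numpy as np

# A 2x2 symmetric coordination game (illustrative payoffs): both diagonal
# profiles are strict Nash equilibria, hence asymptotically stable under
# the replicator dynamics.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])

def replicator_step(x, A, dt=0.01):
    """One Euler step of dx_i = x_i (f_i - mean fitness)."""
    f = A @ x                       # fitness of each pure strategy
    return x + dt * x * (f - x @ f)

x = np.array([0.6, 0.4])            # start in the basin of the first equilibrium
for _ in range(5000):
    x = replicator_step(x, A)

print(np.round(x, 3))               # population converges toward (1, 0)
```

Starting the population on the other side of the mixed-equilibrium threshold would instead drive it to the second strict equilibrium, which is the sense in which such sets are locally, not globally, stable.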

    Rethinking the Role of Information in Chemicals Policy: Implications for TSCA and REACH

    This article analyses the role of different kinds of information in minimizing or eliminating the risks arising from the production, use, and disposal of chemical substances, and contrasts it with present and planned (informational) regulation in the United States and the European Union, respectively. Some commentators who are disillusioned with regulatory approaches have argued that informational tools should supplant mandatory regulatory measures unflatteringly described as "command and control." Critics of this reformist view are concerned with the lack of technology-innovation forcing that results from informational policies alone. We argue that informational tools can be made more technology-inducing, and thus more oriented towards environmental innovations, than they are under current practices, with or without complementary regulatory mechanisms, although a combination of approaches may yield the best results. The conventional approach to chemicals policy envisions a sequential process of three steps: (1) producing or collecting risk-relevant information, (2) performing a risk assessment or characterization, followed by (3) risk management practices, often driven by regulation. We argue that such a sequential process is too static, or linear, and spends too many resources on searching for, or generating, information about present hazards, compared with searching for, and generating, information related to safer alternatives, which include input substitution, final product reformulation, and/or process changes. These pollution prevention or cleaner technology approaches are generally acknowledged to be superior to pollution control. We argue that the production of risk information necessary for risk assessment, on the one hand, and the search for safer alternatives, on the other, should be approached simultaneously in two parallel quests.
Overcoming deficits in hazard-related information and in knowledge about risk-reduction alternatives must take place in a more synchronized manner than is currently practiced. This parallel approach blurs the alleged bright line between risk assessment and risk management, but reflects more closely how regulatory agencies actually approach the regulation of chemicals. These theoretical considerations are interpreted in the context of existing and planned informational tools in the United States and the European Union, respectively. The current political debate in the European Union concerned with reforming chemicals policy and implementing the REACH (Registration, Evaluation and Authorization of Chemicals) system is focused on improving the production and assessment of risk information with regard to existing chemicals, although it also contains some interesting risk-management elements. To some extent, REACH mirrors the approach taken in the United States under the Toxic Substances Control Act (TSCA) of 1976. TSCA turned out not to be effectively implemented and provides lessons that should be relevant to REACH. In this context, we discuss the opportunities and limits of existing and planned informational tools for achieving risk reduction.
