183,971 research outputs found

    How big is too big? Critical Shocks for Systemic Failure Cascades

    Full text link
    External or internal shocks may lead to the collapse of a system consisting of many agents. If the shock initially hits only one agent and causes it to fail, this can induce a cascade of failures among neighboring agents. Several critical constellations determine whether this cascade remains finite or reaches the size of the system, i.e. leads to systemic risk. We investigate the critical parameters for such cascades in a simple model, where agents are characterized by an individual threshold \theta_i determining their capacity to handle a load \alpha\theta_i, with 1-\alpha being their safety margin. If agents fail, they redistribute their load equally to K neighboring agents in a regular network. For three different threshold distributions P(\theta), we derive analytical results for the size of the cascade, X(t), which is regarded as a measure of systemic risk, and for the time when it stops. We focus on two different regimes: (i) EEE, an external extreme event, where the size of the shock is of the order of the total capacity of the network, and (ii) RIE, a random internal event, where the size of the shock is of the order of the capacity of a single agent. We find that finite cascades are still possible even for large extreme events that exceed the capacity of the network, if a power-law threshold distribution is assumed. On the other hand, even small random fluctuations may lead to full cascades if critical conditions are met. Most importantly, we demonstrate that the size of the "big" shock is not the problem: the systemic risk varies only slightly when the external shock changes by 10 to 50 percent. Systemic risk depends much more on ingredients such as the network topology, the safety margin and the threshold distribution, which gives hints on how to reduce it.
    Comment: 23 pages, 7 figures
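    The load-redistribution rule described in this abstract is easy to prototype. Below is a minimal simulation sketch, assuming a ring lattice where each agent has K neighbors (K even) and a uniform threshold distribution; both are our choices for illustration, since the paper treats three distributions P(\theta) and general regular networks.

```python
import numpy as np

# Minimal sketch of the load-redistribution cascade from the abstract.
# Assumptions (ours, not the paper's): ring lattice with k neighbors per
# agent (k even), uniform thresholds, one initially shocked agent, and
# load shares sent to already-failed neighbors are simply lost.
def simulate_cascade(n=1000, k=4, alpha=0.8, shock=5.0, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.5, 1.5, n)   # individual thresholds theta_i
    load = alpha * theta               # initial loads; 1 - alpha is the safety margin
    failed = np.zeros(n, dtype=bool)

    load[0] += shock                   # the shock hits a single agent first
    frontier = [0]
    cascade_size = []                  # X(t): number of failed agents after each step
    while frontier:
        next_frontier = []
        for i in frontier:
            if failed[i] or load[i] <= theta[i]:
                continue
            failed[i] = True
            # a failed agent redistributes its load equally to its k neighbors
            for d in range(1, k // 2 + 1):
                for j in ((i - d) % n, (i + d) % n):
                    if not failed[j]:
                        load[j] += load[i] / k
                        next_frontier.append(j)
        frontier = next_frontier
        cascade_size.append(int(failed.sum()))
    return cascade_size

X = simulate_cascade()
print(f"cascade stopped after {len(X)} steps, final size {X[-1] if X else 0}")
```

    Varying alpha, k, shock, and the threshold distribution in such a sketch reproduces the qualitative point of the abstract: the cascade size is far more sensitive to the safety margin and the topology than to the magnitude of the initial shock.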

    Cognition as management of meaningful information. Proposal for an evolutionary approach.

    Get PDF
    Humans are cognitive entities. Our behaviors and ongoing interactions with the environment are threaded with creations and usages of meaningful information, be they conscious or unconscious. Animal life is also populated with meaningful information related to the survival of the individual and of the species. The meaningfulness of information managed by artificial agents can also be considered a reality, once we accept that the meanings managed by an artificial agent are derived from what we, the cognitive designers, have built the agent for. This rapid overview suggests that cognition, in terms of management of meaningful information, can be regarded as a reality for animals, humans and robots. But it is clear that the corresponding meanings will be very different in nature and content. Free will and self-consciousness are key drivers in the management of human meanings, but they do not exist for animals or robots. Also, staying alive is a constraint that we share with animals; robots do not carry that constraint.
    Such differences in meaningful information and cognition for animals, humans and robots could lead us to believe that the analysis of cognition for these three types of agents has to be done separately. But if we agree that humans are the result of the evolution of life, and that robots are a product of human activities, we can then explore the possibility of an evolutionary approach to cognition based on the management of meaningful information. A bottom-up path would begin with meaning management within basic living entities, then climb the ladder of evolution up to humans, and continue with artificial agents. This is what we propose to present here: an evolutionary approach to cognition, based on meaning management and a simple systemic tool.
    We use for that an existing systemic approach to meaning generation, in which a system submitted to a constraint generates meaningful information (a meaning) that initiates an action in order to satisfy the constraint [1, 2]. The action can be physical, mental or other. This systemic approach defines a Meaning Generator System (MGS). The simplicity of the MGS makes it available as a building block for meaning management in animals, humans and robots. Contrary to approaches to meaning generation in psychology or linguistics, the MGS approach is not based on the human mind. To avoid circularity, an evolutionary approach has to be careful not to include components of the human mind in its starting point. The MGS receives information from its environment and compares it with its constraint. The generated meaning is the connection existing between the received information and the constraint. The generated meaning triggers an action aimed at satisfying the constraint. The action modifies the environment, and so the generated meaning. Meaning generation links agents to their environments in a dynamic mode. The MGS approach is triadic, of a Peircean type. The systemic approach allows wide usage of the MGS: a system is a set of elements linked by a set of relations, and any system submitted to a constraint and capable of receiving information from its environment can lead to an MGS. Meaning generation can thus be applied to many cases, provided we identify the systems and the constraints clearly enough. Animals, humans and robots are then agents containing MGSs. Similar MGSs carrying different constraints will generate different meanings: cognition is system-dependent.
    We first apply the MGS approach to animals with "stay alive" and "group life" constraints. Such constraints can model many cases of meaning generation and action in the organic world. However, it is to be highlighted that even if the functions and characteristics of life are well known, the nature of life is not really understood. Final causes are difficult to integrate into today's science, so analyzing meaning and cognition in living entities will have to take into account our limited understanding of the nature of life. Ongoing research on concepts like autopoiesis could bring a better understanding of the nature of life [3].
    We next address meaning generation for humans. This is the most difficult case, as the nature of the human mind is a mystery for today's science and philosophy. The natures of our feelings, free will or self-consciousness are unknown, and human constraints, meanings and cognition are correspondingly difficult to define. Any usage of the MGS approach for humans will have to take into account the limitations that result from the unknown nature of the human mind. We will however present some possible approaches to identifying human constraints for which the MGS brings some openings in an evolutionary approach [4, 5]. It is clear that the better the human mind is understood, the better positioned we will be to address meaning management and cognition for humans. Ongoing research activities relative to the nature of the human mind cover many scientific and philosophical domains [6].
    The case of meaning management and cognition in artificial agents is rather straightforward with the MGS approach, as we, the designers, know the agents and the constraints. In addition, our evolutionary approach positions notions like artificial constraints, meaning and autonomy as derived from their animal or human source.
    We next highlight that cognition as management of meaningful information by agents goes beyond information and needs to address representations, which belong to the central hypothesis of cognitive science. We define the meaningful representation of an item for an agent as the network of meanings relative to the item for that agent, together with the action scenarios involving the item. Such meaningful representations embed the agents in their environments and are far from GOFAI-type representations [4]. Meanings, representations and cognition exist by and for the agents. We finish by summarizing the points presented and highlighting some possible continuations.
    [1] C. Menant, "Information and Meaning". http://cogprints.org/3694/
    [2] C. Menant, "Introduction to a Systemic Theory of Meaning" (short paper). http://crmenant.free.fr/ResUK/MGS.pdf
    [3] A. Weber and F. Varela, "Life after Kant: Natural purposes and the autopoietic foundations of biological individuality". Phenomenology and the Cognitive Sciences 1: 97–125, 2002.
    [4] C. Menant, "Computation on Information, Meaning and Representations. An Evolutionary Approach". http://www.idt.mdh.se/ECAP-2005/INFOCOMPBOOK/CHAPTERS/10-Menant.pdf and http://crmenant.free.fr/2009BookChapter/C.Menant.211009
    [5] C. Menant, "Proposal for a shared evolutionary nature of language and consciousness". http://cogprints.org/7067/
    [6] PhilPapers, "Philosophy of Mind". http://philpapers.org/browse/philosophy-of-min
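    The MGS cycle described in this abstract (received information is compared with a constraint, the generated meaning triggers an action, the action modifies the environment) is simple enough to sketch in code. Below is a minimal illustration; the numeric readings, the "stay warm" constraint and the action are our hypothetical placeholders, not part of the author's proposal.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Minimal sketch of a Meaning Generator System (MGS). Only the cycle is
# taken from the abstract; the concrete constraint, gap test and action
# below are invented placeholders for illustration.
@dataclass
class MeaningGeneratorSystem:
    satisfied: Callable[[float], bool]   # is the constraint satisfied?
    act: Callable[[float], float]        # action aimed at satisfying it

    def step(self, received_info: float) -> Optional[float]:
        # The generated meaning is the connection between the received
        # information and the constraint; here it is reduced to a flag.
        if self.satisfied(received_info):
            return None                  # constraint satisfied, no action
        return self.act(received_info)   # action modifies the environment

# Toy usage: an artificial agent with a designer-given "stay warm" constraint.
mgs = MeaningGeneratorSystem(
    satisfied=lambda temp: temp >= 18.0,
    act=lambda temp: temp + 1.0,         # e.g. switch a heater on
)
temperature = 15.0
while (result := mgs.step(temperature)) is not None:
    temperature = result                 # the environment changes, loop repeats
print(f"constraint satisfied at {temperature} degrees")
```

    The same skeleton can carry different constraints ("stay alive", "group life", a designer-given goal), which is the point of the abstract: similar MGSs with different constraints generate different meanings.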

    Value Sinks: A Process Theory of Corruption Risk during Complex Organizing

    Get PDF
    Theories and studies of corruption typically focus on individual ethics and agency problems in organizations. In this paper, we use concepts from complexity science to propose a process theory that describes how corruption risk emerges from conditions of uncertainty that are intrinsic to social systems and social interactions. We posit that our theory is valid across multiple levels of scale in social systems. We theorize that corruption involves dynamics that emerge when agents in a system take actions that exploit disequilibrium conditions of uncertainty and ethical ambiguity. Further, systemic corruption emerges when agent interactions are amplified locally in ways that create a hidden value sink, which we define as a structure that extracts, or ‘drains’, resources from the system for the exclusive use of certain agents. For those participating in corruption, the presence of a value sink reduces local uncertainties about access to resources. This dynamic can attract others to join the value sink, allowing it to persist and grow as a dynamical system attractor, eventually challenging broader norms. We close by identifying four distinct types of corruption risk and suggesting policy interventions to manage them. Finally, we discuss ways in which our theoretical approach could motivate future research.
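    The attractor dynamic sketched in this abstract can be caricatured in a few lines of simulation. The payoff rule and parameters below are entirely our invention for illustration, not the authors' model; they show only the qualitative feedback loop in which a growing sink attracts further participants.

```python
import random

# Toy sketch (ours, not the authors' model): agents decide each round
# whether to join a hidden "value sink" that drains a shared resource
# pool. Joining reduces an agent's local uncertainty about resource
# access, so once a few agents participate the sink grows like an attractor.
def simulate_value_sink(n_agents=100, rounds=50, drain=0.02, seed=1):
    random.seed(seed)
    pool = 100.0                            # resources held by the wider system
    members = set()                         # agents participating in the sink
    history = []
    for _ in range(rounds):
        for agent in range(n_agents):
            if agent in members:
                continue
            # perceived payoff of joining grows with the sink's current size
            attraction = drain * (1 + len(members))
            if attraction > random.random():   # local uncertainty / ambiguity
                members.add(agent)
        pool -= drain * len(members)        # the sink 'drains' the system
        history.append(len(members))
    return history, pool

history, pool = simulate_value_sink()
print(f"sink membership: {history[0]} -> {history[-1]}, remaining pool: {pool:.1f}")
```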

    Systemic intervention for computer-supported collaborative learning

    Get PDF
    This paper presents a systemic intervention approach as a means to overcome the methodological challenges involved in research into computer-supported collaborative learning applied to the promotion of mathematical problem-solving (CSCL-MPS) skills in schools. These challenges include how to develop an integrated analysis of several aspects of the learning process, and how to reflect on learning purposes, the context of application and participants' identities. The focus of systemic intervention is on processes for thinking through whose views, and what issues and values, should be considered pertinent in an analysis. Systemic intervention also advocates mixing methods from different traditions to address the purposes of multiple stakeholders. Consequently, a design for CSCL-MPS research is presented that includes several methods. This methodological design is used to analyse and reflect upon both a CSCL-MPS project with Colombian schools and the identities of the participants in that project.