467 research outputs found

    Can Foreign Aid Accelerate Stabilization?

    This paper studies the effect of foreign aid on economic stabilization. Following Alesina and Drazen (1991), we model the delay in stabilizing as the result of a distributional struggle: reforms are postponed because they are costly and each distributional faction hopes to reduce its share of the cost by outlasting its opponents in obstructing the required policies. Since the delay is used to signal each faction's strength, the effect of the transfer depends on the role it plays in the release of information. We show that this role depends on the timing of the transfer: foreign aid decided and transferred sufficiently early into the game leads to earlier stabilization; but aid decided or transferred too late is destabilizing and encourages further postponement of reforms.

    Probabilistic Bisimulation: Naturally on Distributions

    In contrast to the usual understanding of probabilistic systems as stochastic processes, recently these systems have also been regarded as transformers of probabilities. In this paper, we give a natural definition of strong bisimulation for probabilistic systems corresponding to this view that treats probability distributions as first-class citizens. Our definition applies in the same way to discrete systems as well as to systems with uncountable state and action spaces. Several examples demonstrate that our definition refines the understanding of behavioural equivalences of probabilistic systems. In particular, it solves a long-standing open problem concerning the representation of memoryless continuous time by memory-full continuous time. Finally, we give algorithms for computing this bisimulation not only for finite but also for classes of uncountably infinite systems.
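    For the finite, discrete case, the classical state-based notion that the paper's distribution-based definition refines can be sketched as a partition refinement. This is a minimal illustration only; the function names and the toy system below are invented for the example, not taken from the paper:

    ```python
    # Sketch: classical probabilistic bisimulation for a finite, discrete
    # system via partition refinement. trans[s][a] is a distribution
    # (dict) over successor states.
    def bisimulation_classes(states, actions, trans):
        # Start from one block holding all states, then split until stable.
        blocks = [set(states)]
        while True:
            index = {s: i for i, b in enumerate(blocks) for s in b}

            def signature(s):
                # For each action, the total probability mass sent to each block.
                sig = []
                for a in actions:
                    dist = trans.get(s, {}).get(a, {})
                    mass = {}
                    for t, p in dist.items():
                        mass[index[t]] = mass.get(index[t], 0.0) + p
                    sig.append((a, tuple(sorted(mass.items()))))
                return tuple(sig)

            new_blocks = []
            for b in blocks:
                groups = {}
                for s in b:
                    groups.setdefault(signature(s), set()).add(s)
                new_blocks.extend(groups.values())
            if len(new_blocks) == len(blocks):  # no block split: fixed point
                return new_blocks
            blocks = new_blocks

    # s splits its mass over t1 and t2, u sends everything to t1; since t1
    # and t2 are equivalent (both deadlocked), s and u end up bisimilar.
    trans = {
        "s": {"a": {"t1": 0.5, "t2": 0.5}},
        "u": {"a": {"t1": 1.0}},
        "t1": {}, "t2": {},
    }
    classes = bisimulation_classes(["s", "u", "t1", "t2"], ["a"], trans)
    ```

    The paper's contribution is precisely that this state-based picture generalises naturally once distributions, rather than states, are compared directly.
    
    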

    Common Knowledge and Game Theory

    Perhaps the most important area in which common knowledge problems arise is in the study of rational expectations equilibria in the trading of risky securities. How can there be trade if everybody's willingness to trade means that everybody knows that everybody expects to be a winner? (see Milgrom/Stokey [1982] and Geanakoplos [1988].) Since risky securities are traded on the basis of private information, there must presumably be some "agreeing to disagree" in the real world. But to assess its extent and its implications, one needs to have a precise theory of the norm from which "agreeing to disagree" is seen as a deviation. The beginnings of such a theory are presented here. Some formalism is necessary in such a presentation because the English language is not geared up to express the appropriate ideas compactly. Without some formalism, it is therefore very easy to get confused. However, nothing requiring any mathematical expertise is to be described.
    Center for Research on Economic and Social Theory, Department of Economics, University of Michigan
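    The partition formalism behind such common-knowledge arguments is compact enough to compute directly. A minimal sketch (the two agents and the four-state space are invented for illustration, not drawn from the paper): the meet of the agents' information partitions, whose cells are exactly the events that can be common knowledge in Aumann's sense.

    ```python
    # Sketch: compute the meet of two information partitions with a small
    # union-find. Two states are linked whenever some agent cannot tell
    # them apart; the resulting components are the common-knowledge cells.
    def common_knowledge_partition(states, partition_a, partition_b):
        parent = {s: s for s in states}

        def find(s):
            while parent[s] != s:
                parent[s] = parent[parent[s]]  # path halving
                s = parent[s]
            return s

        def union(x, y):
            parent[find(x)] = find(y)

        for cell in list(partition_a) + list(partition_b):
            cell = list(cell)
            for s in cell[1:]:
                union(cell[0], s)

        groups = {}
        for s in states:
            groups.setdefault(find(s), set()).add(s)
        return list(groups.values())

    states = [1, 2, 3, 4]
    alice = [{1, 2}, {3, 4}]    # Alice cannot distinguish 1/2 or 3/4
    bob = [{1}, {2, 3}, {4}]    # Bob cannot distinguish 2/3
    # Indistinguishability chains link every state, so the only event that
    # is common knowledge at any state is the whole space.
    ck = common_knowledge_partition(states, alice, bob)
    ```

    This is why "agreeing to disagree" is so restrictive: posteriors that are common knowledge must be constant on these (often very coarse) cells.
    
    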

    New Horizons in Studying the Cellular Mechanisms of Alzheimer's Disease

    Following an analysis of the state of investigations and clinical outcomes in the Alzheimer’s research field, I argue that the widely accepted ‘amyloid cascade’ mechanistic explanation of Alzheimer’s disease appears to be fundamentally incomplete. In this context, I propose that a framework termed ‘principled mechanism’ (PM) can help remedy this problem. First, using a series of five ‘tests’, PM systematically compares different components of a given mechanistic explanation against a paradigmatic set of criteria and hints at various ways of making the mechanistic explanation more ‘complete’. I will demonstrate these steps using the amyloid explanation, highlighting its missing or problematic mechanistic elements. Second, PM makes an appeal for the discovery and application of ‘biological principles’ that approximate ceteris paribus generalisations or laws and are operative at the level of a biological cell. Although thermodynamic, evolutionary, ecological and other laws or principles from chemistry and the broader life sciences could inform them, biological principles should be considered ontologically unique. These principles could augment different facets of the mechanistic explanation but also allow further independent nomological explanation of the phenomenon. Whilst this overall strategy can be complementary to certain ‘new mechanist’ approaches, an important distinction of the PM framework is its equal attention to the explanatory utility of biological principles. Lastly, I detail two hypothetical biological principles and show how they could each inform and improve the potentially incomplete mechanistic aspects of the amyloid explanation and how they could provide independent explanations for the cellular features associated with Alzheimer’s disease.

    Is Ethereum Proof of Stake Sustainable? – Considering from the Perspective of Competition Among Smart Contract Platforms –

    Since the Merge update upon which Ethereum transitioned to Proof of Stake, it has been touted that it resulted in lower power consumption and increased security. However, even if that is the case, can this state be sustained? In this paper, we focus on the potential impact of competition with other smart contract platforms on the price of Ethereum's native currency, Ether (ETH), thereby raising questions about the safety and sustainability purportedly brought about by the design of Proof of Stake.
    Comment: 30 pages, 1 figure

    Second Generation General System Theory: Perspectives in Philosophy and Approaches in Complex Systems

    Following the classical work of Norbert Wiener, Ross Ashby, Ludwig von Bertalanffy and many others, the concept of System has been elaborated in different disciplinary fields, allowing interdisciplinary approaches in areas such as Physics, Biology, Chemistry, Cognitive Science, Economics, Engineering, Social Sciences, Mathematics, Medicine, Artificial Intelligence, and Philosophy. The new challenge of Complexity and Emergence has made the concept of System even more relevant to the study of problems with high contextuality. This Special Issue focuses on the nature of new problems arising from the study and modelling of complexity, their eventual common aspects, properties and approaches—already partially considered by different disciplines—as well as focusing on new, possibly unitary, theoretical frameworks. This Special Issue aims to introduce fresh impetus into systems research when the possible detection and correction of mistakes require the development of new knowledge. This book contains contributions presenting new approaches and results, problems and proposals. The context is an interdisciplinary framework dealing, in order, with electronic engineering problems; the problem of the observer; transdisciplinarity; problems of organised complexity; theoretical incompleteness; design of digital systems in a user-centred way; reaction networks as a framework for systems modelling; emergence of a stable system in reaction networks; emergence at the fundamental systems level; and behavioural realization of memoryless functions.

    Principled Mechanistic Explanations in Biology: A Case Study of Alzheimer's Disease

    Following an analysis of the state of investigations and clinical outcomes in the Alzheimer's research field, I argue that the widely accepted 'amyloid cascade' mechanistic explanation of Alzheimer's disease appears to be fundamentally incomplete. In this context, I propose that a framework termed 'principled mechanism' (PM) can help with remedying this problem. First, using a series of five 'tests', PM systematically compares different components of a given mechanistic explanation against a paradigmatic set of criteria, and hints at various ways of making the mechanistic explanation more 'complete'. These steps will be demonstrated using the amyloid explanation, and its missing or problematic mechanistic elements will be highlighted. Second, PM makes an appeal for the discovery and application of 'biological principles' (BPs), which approximate ceteris paribus laws and are operative at the level of a biological cell. As such, although thermodynamic, evolutionary, ecological and other laws or principles from chemistry and the broader life sciences could inform them, BPs should be considered ontologically unique. BPs could augment different facets of the mechanistic explanation but also allow further independent nomological explanation of the phenomenon. Whilst this overall strategy can be complementary to certain 'New Mechanist' approaches, an important distinction of the PM framework is its equal attention to the explanatory utility of biological principles. Lastly, I detail two hypothetical BPs, and show how they could each inform and improve the potentially incomplete mechanistic aspects of the amyloid explanation and also how they could provide independent explanations of the cellular features associated with Alzheimer's disease.

    Lossy Compression applied to the Worst Case Execution Time Problem

    Abstract Interpretation and Symbolic Model Checking are powerful techniques in the field of testing. These techniques can verify the correctness of systems by exploring the state space that the systems occupy. As this would normally be intractable for even moderately complicated systems, both techniques employ a system of using approximations in order to reduce the size of the state space considered without compromising on the reliability of the results. When applied to Real-time Systems, and in particular Worst Case Execution Time Estimation, Abstract Interpretation and Symbolic Model Checking are primarily used to verify the temporal properties of a system. This results in a large number of applications for the techniques, from verifying the properties of components to the values given variables may take. In turn, this results in a large problem area for researchers in devising the approximations required to reduce the size of the state space whilst ensuring the analysis remains safe. This thesis examines the use of Abstract Interpretation and Symbolic Model Checking, in particular focusing on the methods used to create approximations. To this end, this thesis introduces the ideas of Information Theory and Lossy Compression. Information Theory gives a structured framework which allows quantifying or valuing information. In other domains, Lossy Compression utilises this framework to achieve reasonably accurate approximations. However, unlike Abstract Interpretation or Symbolic Model Checking, lossy compression provides ideas on how one can find information to remove with minimal consequences. Having introduced lossy compression applications, this thesis introduces a generic approach to applying lossy compression to problems encountered in Worst Case Execution Time estimation. To test that the generic approach works, two distinct problems in Worst Case Execution Time estimation are considered. 
The first of these is providing a Must/May analysis for the PLRU cache; whilst common in usage, the logical complexity of a PLRU cache renders it difficult to analyse. The second problem is that of loop bound analysis, with a particular focus on removing the need for information supplied by annotations, due to the inherent unverifiability of annotations.
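As a flavour of the kind of abstract domain involved, here is a toy Must analysis for a small fully associative LRU cache, deliberately simpler than the PLRU analysis developed in the thesis. All names, the associativity, and the access trace are illustrative:

```python
# Toy Must analysis sketch: the abstract state maps each memory block to
# an upper bound on its LRU age; an access is a guaranteed hit iff the
# block's bound is below the number of ways. A bound equal to WAYS means
# "possibly not cached".
WAYS = 2

def must_update(state, block):
    old_age = state.get(block, WAYS)
    new_state = {}
    for b, age in state.items():
        if b == block:
            continue
        if age < old_age:
            # Blocks younger than the accessed one age by at most one.
            new_state[b] = min(age + 1, WAYS)
        else:
            new_state[b] = age
    new_state[block] = 0  # accessed block becomes most recently used
    return new_state

def must_join(s1, s2):
    # At control-flow merges keep only blocks guaranteed in both paths,
    # at their worst (largest) age bound.
    return {b: max(s1[b], s2[b]) for b in s1.keys() & s2.keys()}

state = {}
hits = []
for access in ["a", "b", "a"]:
    hits.append(state.get(access, WAYS) < WAYS)  # guaranteed hit?
    state = must_update(state, access)
```

The thesis's point is that choosing *which* precision to give up in such a domain (here, collapsing ages to coarse bounds) is exactly where a lossy-compression view of the state space becomes useful.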