    Temporal Aspects of Smart Contracts for Financial Derivatives

    Implementing smart contracts to automate the performance of high-value over-the-counter (OTC) financial derivatives is a formidable challenge. Due to the regulatory framework and the scale of financial risk if a contract were to go wrong, the performance of these contracts must be enforceable in law, and there is an absolute requirement that the smart contract will be faithful to the intentions of the parties as expressed in the original legal documentation. Formal methods provide an attractive route for validation and assurance, and here we present early results from an investigation of the semantics of industry-standard legal documentation for OTC derivatives. We explain the need for a formal representation that combines temporal, deontic and operational aspects, and focus on the requirements for the temporal aspects as derived from the legal text. The relevance of this work extends beyond OTC derivatives and is applicable to understanding the temporal semantics of a wide range of legal documentation.
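
    The call for a representation combining temporal, deontic and operational aspects can be pictured with a small sketch. The Python fragment below is not the authors' formalism; the class names, fields and the example clause are hypothetical, and it only shows how a single obligation with a deadline might be encoded and checked.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum


class Modality(Enum):
    """Deontic status of an action under the contract."""
    OBLIGATION = "obligation"
    PERMISSION = "permission"
    PROHIBITION = "prohibition"


@dataclass
class Deadline:
    """Temporal constraint: the action must occur on or before `due` (plus grace)."""
    due: date
    grace: timedelta = timedelta(days=0)

    def satisfied_by(self, performed_on: date) -> bool:
        return performed_on <= self.due + self.grace


@dataclass
class Clause:
    """A single deontic statement tied to a temporal constraint."""
    party: str
    action: str
    modality: Modality
    deadline: Deadline


# Hypothetical clause: Party A must deliver a payment within three days
# of the effective date (business-day conventions ignored for brevity).
effective = date(2024, 1, 15)
clause = Clause(
    party="Party A",
    action="deliver floating-rate payment",
    modality=Modality.OBLIGATION,
    deadline=Deadline(due=effective + timedelta(days=3)),
)

print(clause.deadline.satisfied_by(date(2024, 1, 17)))  # True
print(clause.deadline.satisfied_by(date(2024, 1, 20)))  # False
```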

    Physical Logic

    In R.D. Sorkin's framework for logic in physics a clear separation is made between the collection of unasserted propositions about the physical world and the affirmation or denial of these propositions by the physical world. The unasserted propositions form a Boolean algebra because they correspond to subsets of an underlying set of spacetime histories. Physical rules of inference apply not to the propositions in themselves but to the affirmation and denial of these propositions by the actual world. This physical logic may or may not respect the propositions' underlying Boolean structure. We prove that this logic is Boolean if and only if the following three axioms hold: (i) The world is affirmed, (ii) Modus Ponens, and (iii) If a proposition is denied then its negation, or complement, is affirmed. When a physical system is governed by a dynamical law in the form of a quantum measure with the rule that events of zero measure are denied, the axioms (i)-(iii) prove to be too rigid and need to be modified. One promising scheme for quantum mechanics as quantum measure theory corresponds to replacing axiom (iii) with axiom (iv): Nature is as fine-grained as the dynamics allows.
    Comment: 14 pages, v2 published version with a change in the title and other minor changes.
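
    Read directly off the abstract, and in notation assumed here purely for illustration (writing A(p) for "the world affirms p", D(p) for "the world denies p", and \top for the trivially true proposition), the three axioms can be stated roughly as follows.

```latex
% Notation assumed for illustration: A(p) = "the world affirms p",
% D(p) = "the world denies p", \top = the trivially true proposition.
\begin{align*}
\text{(i)}\quad   & A(\top) && \text{(the world is affirmed)}\\
\text{(ii)}\quad  & A(p) \wedge A(p \rightarrow q) \;\Rightarrow\; A(q) && \text{(modus ponens)}\\
\text{(iii)}\quad & D(p) \;\Rightarrow\; A(\neg p) && \text{(a denied proposition has its negation affirmed)}
\end{align*}
```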

    Can processes make relationships work? The Triple Helix between structure and action

    This contribution seeks to explore how complex adaptive theory can be applied at the conceptual level to unpack Triple Helix models. We use two cases to examine this issue – the Finnish Strategic Centres for Science, Technology & Innovation (SHOKs) and the Canadian Business-led Networks of Centres of Excellence (BL-NCE). Both types of centres are organisational structures that aspire to be business-led, with a considerable portion of their activities driven by (industrial) users’ interests and requirements. Reflecting on the centres’ activities along three dimensions – knowledge generation, consensus building and innovation – we contend that conceptualising the Triple Helix from a process perspective will improve the dialogue between stakeholders and shareholders.

    Randomisation and Derandomisation in Descriptive Complexity Theory

    We study probabilistic complexity classes and questions of derandomisation from a logical point of view. For each logic L we introduce a new logic BPL, bounded error probabilistic L, which is defined from L in a similar way as the complexity class BPP, bounded error probabilistic polynomial time, is defined from PTIME. Our main focus lies on questions of derandomisation, and we prove that there is a query which is definable in BPFO, the probabilistic version of first-order logic, but not in Cinf, finite variable infinitary logic with counting. This implies that many of the standard logics of finite model theory, like transitive closure logic and fixed-point logic, both with and without counting, cannot be derandomised. Similarly, we present a query on ordered structures which is definable in BPFO but not in monadic second-order logic, and a query on additive structures which is definable in BPFO but not in FO. The latter of these queries shows that certain uniform variants of AC0 (bounded-depth polynomial-size circuits) cannot be derandomised. These results are in contrast to the general belief that most standard complexity classes can be derandomised. Finally, we note that BPIFP+C, the probabilistic version of fixed-point logic with counting, captures the complexity class BPP, even on unordered structures.
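
    By analogy with the way BPP is defined from PTIME, the acceptance condition behind BPL can be pictured roughly as below. This is an illustrative reading rather than the paper's exact definition, which may differ in how the random relations X are drawn and in the chosen error thresholds.

```latex
% Illustrative reading of "bounded error probabilistic L".
% A Boolean query Q would be definable in BPL if there is an L-formula \varphi
% over the vocabulary extended by a relation symbol X such that, for every
% finite structure \mathfrak{A} with X interpreted uniformly at random,
\begin{align*}
\mathfrak{A} \in Q \;&\Longrightarrow\; \Pr_X\bigl[(\mathfrak{A},X) \models \varphi\bigr] \ge \tfrac{2}{3},\\
\mathfrak{A} \notin Q \;&\Longrightarrow\; \Pr_X\bigl[(\mathfrak{A},X) \models \varphi\bigr] \le \tfrac{1}{3}.
\end{align*}
```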

    The art of being human : a project for general philosophy of science

    Throughout the medieval and modern periods, in various sacred and secular guises, the unification of all forms of knowledge under the rubric of ‘science’ has been taken as the prerogative of humanity as a species. However, as our sense of species privilege has been called increasingly into question, so too has the very salience of ‘humanity’ and ‘science’ as general categories, let alone ones that might bear some essential relationship to each other. After showing how the ascendant Stanford School in the philosophy of science has contributed to this joint demystification of ‘humanity’ and ‘science’, I proceed on a more positive note to a conceptual framework for making sense of science as the art of being human. My understanding of ‘science’ is indebted to the red thread that runs from Christian theology through the Scientific Revolution and Enlightenment to the Humboldtian revival of the university as the site for the synthesis of knowledge as the culmination of self-development. Especially salient to this idea is science’s epistemic capacity to manage modality (i.e. to determine the conditions under which possibilities can be actualised) and its political capacity to organise humanity into projects of universal concern. However, the challenge facing such an ideal in the twenty-first century is that the predicate ‘human’ may be projected in three quite distinct ways, governed by what I call ‘ecological’, ‘biomedical’ and ‘cybernetic’ interests. Which of these future humanities would claim today’s humans as proper ancestors, and whether these futures could co-habit the same world, thus become two important questions that general philosophy of science will need to address in the coming years.

    The ‘Galilean Style in Science’ and the Inconsistency of Linguistic Theorising

    Chomsky’s principle of epistemological tolerance says that in theoretical linguistics contradictions between the data and the hypotheses may be temporarily tolerated in order to protect the explanatory power of the theory. The paper raises the following problem: What kinds of contradictions may be tolerated between the data and the hypotheses in theoretical linguistics? First, a model of paraconsistent logic is introduced which differentiates between weak and strong contradiction. As a second step, a case study is carried out which exemplifies that the principle of epistemological tolerance may be interpreted as the tolerance of weak contradiction. The third step of the argumentation focuses on another case study which exemplifies that the principle of epistemological tolerance must not be interpreted as the tolerance of strong contradiction. The reason for the latter insight is the unreliability and the uncertainty of introspective data. From this finding the author draws the conclusion that it is the integration of different data types that may lead to the improvement of current theoretical linguistics, and that the integration of different data types requires a novel methodology which, for the time being, is not available.
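
    The abstract does not spell out what separates the two kinds of contradiction. Purely as an illustration, and not as the paper's own definition, one way to draw such a distinction in a plausibility-valued (paraconsistent) setting is sketched below, where v is an assumed assignment of plausibility values to statements.

```latex
% Illustrative sketch only; the paper's model may define the notions differently.
% v(p) \in [0,1] is a plausibility value, with v(p) = 1 meaning p is accepted
% as certainly true.
\begin{align*}
\text{strong contradiction between } p \text{ and } \neg p:\quad & v(p) = 1 \;\text{and}\; v(\neg p) = 1,\\
\text{weak contradiction between } p \text{ and } \neg p:\quad   & 0 < v(p) < 1 \;\text{and}\; 0 < v(\neg p) < 1.
\end{align*}
```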

    Computational and Biological Analogies for Understanding Fine-Tuned Parameters in Physics

    In this philosophical paper, we explore computational and biological analogies to address the fine-tuning problem in cosmology. We first clarify what it means for physical constants or initial conditions to be fine-tuned. We review important distinctions, such as that between dimensionless and dimensional physical constants, and the classification of constants proposed by Levy-Leblond. Then we explore how two great analogies, computational and biological, can give new insights into our problem. This paper includes a preliminary study to examine the two analogies. Importantly, analogies are both useful and fundamental cognitive tools, but can also be misused or misinterpreted. The idea that our universe might be modelled as a computational entity is analysed, and we discuss the distinction between physical laws and initial conditions using algorithmic information theory. Smolin introduced the theory of "Cosmological Natural Selection" with a biological analogy in mind. We examine an extension of this analogy involving intelligent life, and discuss if and how this extension could be legitimated.
    Keywords: origin of the universe, fine-tuning, physical constants, initial conditions, computational universe, biological universe, role of intelligent life, cosmological natural selection, cosmological artificial selection, artificial cosmogenesis.
    Comment: 25 pages, Foundations of Science, in press.
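
    The abstract's appeal to algorithmic information theory for separating physical laws from initial conditions can be phrased in one minimal, merely illustrative form: if a universal machine U produces the universe's history h from a program l encoding the laws and an input c encoding the initial conditions, so that U(l, c) = h, then prefix Kolmogorov complexity gives the bound below, and a "simple laws, complex initial conditions" universe is one where K(l) is much smaller than K(c). This is an assumed reading, not necessarily the formulation used in the paper.

```latex
% K(\cdot) denotes prefix Kolmogorov complexity; U(l, c) = h as described above.
\[
K(h) \;\le\; K(l) + K(c) + O(1),
\qquad \text{simple laws, complex initial conditions: } K(l) \ll K(c).
\]
```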

    Toward a General Framework for Information Fusion

    Depending on the representation setting, different combination rules have been proposed for fusing information from distinct sources. Moreover, in each setting, different sets of axioms that combination rules should satisfy have been advocated, thus justifying the existence of alternative rules (usually motivated by situations where the behavior of other rules was found unsatisfactory). These sets of axioms are usually considered purely within their own settings, without in-depth analysis of the common properties essential to all the settings. This paper introduces core properties that, once properly instantiated, are meaningful in different representation settings ranging from logic to imprecise probabilities. The following representation settings are especially considered: classical set representation, possibility theory, and evidence theory, the latter encompassing the other two as special cases. This unified discussion of combination rules across different settings is expected to provide a fresh look at some old but basic issues in information fusion.
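
    As a concrete point of reference for the evidence-theory setting mentioned above (and not the unified framework the paper proposes), the sketch below implements the standard Dempster rule of combination for two mass functions over a finite frame; the example masses are made up.

```python
def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions (dicts mapping frozenset -> mass) with
    Dempster's rule: intersect focal elements, pool the products of masses,
    and renormalise by the total non-conflicting mass."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("Total conflict: Dempster's rule is undefined.")
    return {focal: mass / (1.0 - conflict) for focal, mass in combined.items()}


# Hypothetical example over the frame {'a', 'b', 'c'}.
m1 = {frozenset({'a'}): 0.6, frozenset({'a', 'b', 'c'}): 0.4}
m2 = {frozenset({'b'}): 0.3, frozenset({'a', 'c'}): 0.7}
print(dempster_combine(m1, m2))
# {'a'}: ~0.512, {'b'}: ~0.146, {'a', 'c'}: ~0.341 after renormalisation
```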

    On the Dialectics of Global Governance in the Twenty-first Century : A Polanyian Double Movement?

    Following decades of economic globalisation and market-oriented reforms across the world, Karl Polanyi’s double movement has been invoked not only to explain what is happening but also to give reasons for being hopeful about a different future. Some have suggested a pendulum model of history: a swing from markets to society leading, in the next phase, to a swing from society to markets, and so on. The double movement can also be understood dialectically as a description of an irreversible historical development following its own inner laws or schemes of development. Going beyond a thesis – antithesis – synthesis pattern, I maintain that conceptions and schemes drawn from dialectics, and especially dialectical critical realism, can provide better geo-historical hypotheses for explaining past changes and for building scenarios about possible future changes. I analyse political economy contradictions and tendencies, and focus on normative rationality, to assess substantial claims about the rational tendential directionality of world history. I argue that democratic global Keynesianism would enable processes of decommodification and new syntheses concerning the market/social nexus. A learning process towards qualitatively higher levels of reflexivity can help develop global transformative agency. Existing contradictions can be resolved by means of rational collective actions and building more adequate common institutions. These collective actions are likely to involve new forms of political agency such as world political parties.