Temporal Aspects of Smart Contracts for Financial Derivatives
Implementing smart contracts to automate the performance of high-value
over-the-counter (OTC) financial derivatives is a formidable challenge. Due to
the regulatory framework and the scale of financial risk if a contract were to
go wrong, the performance of these contracts must be enforceable in law and
there is an absolute requirement that the smart contract will be faithful to
the intentions of the parties as expressed in the original legal documentation.
Formal methods provide an attractive route for validation and assurance, and
here we present early results from an investigation of the semantics of
industry-standard legal documentation for OTC derivatives. We explain the need
for a formal representation that combines temporal, deontic and operational
aspects, and focus on the requirements for the temporal aspects as derived from
the legal text. The relevance of this work extends beyond OTC derivatives and
is applicable to understanding the temporal semantics of a wide range of legal
documentation.
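The interplay of temporal and deontic aspects described above can be illustrated with a minimal sketch. All names and fields below are hypothetical simplifications for illustration, not the authors' formal representation or the actual structure of ISDA documentation: an obligation carries a deadline, and its deontic status depends on when, if ever, it is performed.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Obligation:
    # Hypothetical, simplified fields; real OTC legal documentation
    # (e.g. an ISDA Master Agreement) is far richer than this.
    party: str
    action: str
    due: date                         # temporal aspect: the deadline
    performed: Optional[date] = None  # when the action was carried out

def discharged_on_time(ob: Obligation) -> bool:
    """Deontic status check: the obligation is discharged only if
    performance happened on or before the due date."""
    return ob.performed is not None and ob.performed <= ob.due

payment = Obligation("Party A", "pay the floating amount",
                     due=date(2024, 3, 15), performed=date(2024, 3, 14))
late = Obligation("Party B", "deliver collateral",
                  due=date(2024, 3, 15), performed=date(2024, 3, 18))
```

Even this toy model shows why the two aspects cannot be separated: whether a duty is fulfilled (deontic) is decided by comparing dates (temporal).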
Why is there anything at all? What does it mean to be a person? Rescher on metaphysics
In this essay, I set out key aspects of Nicholas Rescher's Metaphysical Perspectives. I illustrate the tenor and value of the text based on extended analysis of Chapter 1 on fundamental issues of what there is and Chapter 2 on personhood. Rescher is one of the most important philosophers of the twentieth century and early twenty-first century. His works are intrinsically interesting, but also important as resources for realists working in the social sciences.
Physical Logic
In R.D. Sorkin's framework for logic in physics a clear separation is made
between the collection of unasserted propositions about the physical world and
the affirmation or denial of these propositions by the physical world. The
unasserted propositions form a Boolean algebra because they correspond to
subsets of an underlying set of spacetime histories. Physical rules of
inference apply not to the propositions in themselves but to the affirmation
and denial of these propositions by the actual world. This physical logic may
or may not respect the propositions' underlying Boolean structure. We prove
that this logic is Boolean if and only if the following three axioms hold: (i)
The world is affirmed, (ii) Modus Ponens and (iii) If a proposition is denied
then its negation, or complement, is affirmed. When a physical system is
governed by a dynamical law in the form of a quantum measure with the rule that
events of zero measure are denied, the axioms (i) - (iii) prove to be too rigid
and need to be modified. One promising scheme for quantum mechanics as quantum
measure theory corresponds to replacing axiom (iii) with axiom (iv) Nature is
as fine grained as the dynamics allows.
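The three axioms can be checked exhaustively on a toy model, assuming the classical affirmation rule in which an event (a subset of histories) is affirmed exactly when it contains the actual history. The names below are illustrative, not Sorkin's notation:

```python
from itertools import combinations

histories = frozenset({0, 1, 2})   # toy set of spacetime histories
actual = 1                         # the history the world realises

# The Boolean algebra of propositions: all subsets of the history space.
events = [frozenset(c) for r in range(len(histories) + 1)
          for c in combinations(sorted(histories), r)]

def affirmed(event):
    # Classical rule: the world affirms an event iff it contains
    # the actual history.
    return actual in event

def denied(event):
    return actual not in event

# (i) The world (the full event) is affirmed.
axiom_i = affirmed(histories)

# (ii) Modus ponens: if E and (E -> F), i.e. (not-E) union F,
# are both affirmed, then F is affirmed.
axiom_ii = all(affirmed(F) for E in events for F in events
               if affirmed(E) and affirmed((histories - E) | F))

# (iii) If an event is denied, its complement is affirmed.
axiom_iii = all(affirmed(histories - E) for E in events if denied(E))
```

All three axioms hold under this classical rule, in line with the paper's Boolean case; the quantum-measure rule denying all zero-measure events is precisely where (iii) is claimed to break down.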
The art of being human: a project for general philosophy of science
Throughout the medieval and modern periods, in various sacred and secular guises, the unification of all forms of knowledge under the rubric of 'science' has been taken as the prerogative of humanity as a species. However, as our sense of species privilege has been called increasingly into question, so too has the very salience of 'humanity' and 'science' as general categories, let alone ones that might bear some essential relationship to each other. After showing how the ascendant Stanford School in the philosophy of science has contributed to this joint demystification of 'humanity' and 'science', I proceed on a more positive note to a conceptual framework for making sense of science as the art of being human. My understanding of 'science' is indebted to the red thread that runs from Christian theology through the Scientific Revolution and Enlightenment to the Humboldtian revival of the university as the site for the synthesis of knowledge as the culmination of self-development. Especially salient to this idea is science's epistemic capacity to manage modality (i.e. to determine the conditions under which possibilities can be actualised) and its political capacity to organize humanity into projects of universal concern. However, the challenge facing such an ideal in the twenty-first century is that the predicate 'human' may be projected in three quite distinct ways, governed by what I call 'ecological', 'biomedical' and 'cybernetic' interests. Which of these future humanities would claim today's humans as proper ancestors, and whether these futures could co-habit the same world, thus become two important questions that general philosophy of science will need to address in the coming years.
Why the fair innings argument is not persuasive
The fair innings argument (FIA) is frequently put forward as a justification for denying elderly patients treatment when they are in competition with younger patients and resources are scarce. In this paper I will examine some arguments that are used to support the FIA. My conclusion will be that they do not stand up to scrutiny and, therefore, the FIA should not be used to justify the denial of treatment to elderly patients, or to support rationing of health care by age. There are six issues arising out of the FIA which are to be addressed. First, the implication that there is such a thing as a fair share of life. Second, whether it makes sense to talk of a fair share of resources in the context of health care and the FIA. Third, that 'fairness' is usually only mentioned with regard to the length of a person's life, and not to any other aspect of it. Fourth, if it is sensible to discuss the merits of the FIA without taking account of the 'all other things being equal' argument. Fifth, the difference between what is unfair and what is unfortunate. Finally, that it is tragic if a young person dies, but only unfortunate if an elderly person does.
Can processes make relationships work? The Triple Helix between structure and action
This contribution seeks to explore how complex adaptive theory can be applied at the conceptual level to unpack Triple Helix models. We use two cases to examine this issue – the Finnish Strategic Centres for Science, Technology & Innovation (SHOKs) and the Canadian Business-led Networks of Centres of Excellence (BL-NCE). Both types of centres are organisational structures that aspire to be business-led, with a considerable portion of their activities driven by (industrial) users' interests and requirements. Reflecting on the centres' activities along three dimensions – knowledge generation, consensus building and innovation – we contend that conceptualising the Triple Helix from a process perspective will improve the dialogue between stakeholders and shareholders.
Randomisation and Derandomisation in Descriptive Complexity Theory
We study probabilistic complexity classes and questions of derandomisation
from a logical point of view. For each logic L we introduce a new logic BPL,
bounded error probabilistic L, which is defined from L in a similar way as the
complexity class BPP, bounded error probabilistic polynomial time, is defined
from PTIME. Our main focus lies on questions of derandomisation, and we prove
that there is a query which is definable in BPFO, the probabilistic version of
first-order logic, but not in Cinf, finite variable infinitary logic with
counting. This implies that many of the standard logics of finite model theory,
like transitive closure logic and fixed-point logic, both with and without
counting, cannot be derandomised. Similarly, we present a query on ordered
structures which is definable in BPFO but not in monadic second-order logic,
and a query on additive structures which is definable in BPFO but not in FO.
The latter of these queries shows that certain uniform variants of AC0
(bounded-depth, polynomial-size circuits) cannot be derandomised. These results
are in contrast to the general belief that most standard complexity classes can
be derandomised. Finally, we note that BPIFP+C, the probabilistic version of
fixed-point logic with counting, captures the complexity class BPP, even on
unordered structures.
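The BPP-style acceptance criterion behind BPL can be made concrete with a toy computation. The 2/3 threshold and all names here are illustrative assumptions in the spirit of the abstract, not the paper's definitions: a formula over the vocabulary extended by a random relation accepts a structure if it holds for at least 2/3 of the random interpretations, and rejects if it holds for at most 1/3.

```python
from itertools import product

def phi(universe, R):
    # A first-order sentence over the extended vocabulary:
    # "there exists an x with R(x)", where R is the random unary relation.
    return any(R[x] for x in universe)

def acceptance_probability(n, sentence):
    """Exact probability that `sentence` holds on a universe of size n
    when the random unary relation R is drawn uniformly from all
    2^n possible interpretations."""
    universe = range(n)
    hits = sum(1 for bits in product([False, True], repeat=n)
               if sentence(universe, dict(zip(universe, bits))))
    return hits / 2 ** n

p = acceptance_probability(3, phi)   # 7/8: only the empty R falsifies phi
```

On a 3-element universe, 7 of the 8 interpretations of R satisfy phi, so the probability 7/8 clears the assumed 2/3 threshold and the structure is accepted; a BPL formula must land outside the (1/3, 2/3) gap on every structure to define a query with bounded error.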
The 'Galilean Style in Science' and the Inconsistency of Linguistic Theorising
Chomsky's principle of epistemological tolerance says that in theoretical linguistics contradictions between the data and the hypotheses may be temporarily tolerated in order to protect the explanatory power of the theory. The paper raises the following problem: What kinds of contradictions may be tolerated between the data and the hypotheses in theoretical linguistics? First, a model of paraconsistent logic is introduced which differentiates between weak and strong contradiction. As a second step, a case study is carried out which exemplifies that the principle of epistemological tolerance may be interpreted as the tolerance of weak contradiction. The third step of the argumentation focuses on another case study which exemplifies that the principle of epistemological tolerance must not be interpreted as the tolerance of strong contradiction. The reason for the latter insight is the unreliability and the uncertainty of introspective data. From this finding the author draws the conclusion that it is the integration of different data types that may lead to the improvement of current theoretical linguistics, and that the integration of different data types requires a novel methodology which, for the time being, is not available.
- …