Gödel's Cantorianism
Gödel's philosophical conceptions bear striking similarities to Cantor's. Although there is no conclusive evidence that Gödel deliberately used or adhered to Cantor's views, one can successfully reconstruct and see his "Cantorianism" at work in many parts of his thought. In this paper, I aim to describe the most prominent conceptual intersections between Cantor's and Gödel's thought, particularly on such matters as the nature and existence of mathematical entities (sets), concepts, Platonism, the Absolute Infinite, and the progress and inexhaustibility of mathematics.
Localization and the interface between quantum mechanics, quantum field theory and quantum gravity I (The two antagonistic localizations and their asymptotic compatibility)
It is shown that there are significant conceptual differences between QM and
QFT which make it difficult to view the latter as just a relativistic extension
of the principles of QM. At the root of this is a fundamental distinction
between Born-localization in QM (which in the relativistic context changes its
name to Newton-Wigner localization) and modular localization which is the
localization underlying QFT, after one separates it from its standard
presentation in terms of field coordinates. The first comes with a probability
notion and projection operators, whereas the latter describes causal
propagation in QFT and leads to thermal aspects of locally reduced finite
energy states. The Born-Newton-Wigner localization in QFT is only applicable
asymptotically and the covariant correlation between asymptotic in and out
localization projectors is the basis of the existence of an invariant
scattering matrix. In this first part of a two-part essay, the modular
localization (the intrinsic content of field localization) and its
philosophical consequences take the center stage. Important physical
consequences of vacuum polarization will be the main topic of part II. Both
parts together form a rather comprehensive presentation of known consequences
of the two antagonistic localization concepts, including those of their
misunderstanding in string theory.
On causality as the fundamental concept of Gödel's philosophy
This paper proposes a possible reconstruction and philosophical-logical clarification of Gödel's idea of causality as the fundamental concept of philosophy. The results are based on Gödel's published and unpublished texts (including the Max Phil notebooks), and are established on the ground of interconnections among Gödel's dispersed remarks on causality, as well as on the ground of his general philosophical views. The paper is logically informal but is connected with results already achieved in the formalization of a causal account of Gödel's onto-theological theory. Gödel's main causal concepts are analysed (will, force, enjoyment, God, time and space, life, form, matter). Special attention is paid to a possible causal account of some of Gödel's logical concepts (assertion, privation, affirmation, negation, whole, part, general, particular, subject, predicate, necessary, possible, implication), as well as of the logical antinomies. The problem of mechanical and non-mechanical procedures in working with and on concepts is addressed in terms of Gödel's causal view.
The idea of evolution in digital architecture: Toward united ontologies?
Humans have always sought to grasp nature's working principles and apply the acquired intelligence to artefacts, since nature has always been the source of inspiration, solution and creativity. For this reason, there is a comprehensive interrelationship between the philosophy of nature and architecture. After Charles Darwin's revolutionary work, living beings began to be comprehended as changing, evolving and developing dynamic entities. Evolutionary theory came to be accepted as the interpretive power of biology after several discussions and objections among scientists. In time, the working principles of evolutionary mechanisms began to be explained from the genetic code to the organism and environmental levels. Afterwards, simulating nature's evolutionary logic in a digital interface became achievable with advancements in computational systems. Ultimately, architects have utilised evolutionary understanding in design theories and methodologies through computational procedures since the 1990s. Although there are several studies on the technical and pragmatic elements of evolutionary tools in design, there is still little research on the historical, theoretical and philosophical foundations of evolutionary understanding in digital architecture. This paper fills this literature gap by critically reviewing the evolutionary understanding embedded in digital architecture theories and designs since the beginning of the 1990s. The original contribution is the proposed intellectual framework seeking to understand and conceptualise how evolutionary processes were defined in biology and philosophy, then represented through computational procedures, to be finally utilised by architectural designers. The network of references and concepts is deeply connected with the communication between natural processes and their computational simulations.
For this reason, another original contribution is the utilisation of the theoretical limits and operative principles of computational procedures to shed light on the limitations, shortcomings and potentials of design theories regarding their speculations on the relationship between natural and computational ontologies.
Australia's national health programs: An ontological mapping
Australia has a large number of health program initiatives whose comprehensive assessment will help refine and redefine priorities by highlighting areas of emphasis, under-emphasis, and non-emphasis. The objectives of our research are to: (a) systematically map all the programs onto an ontological framework, and (b) systemically analyse their relative emphases at different levels of granularity. We mapped all the health program initiatives onto an ontology with five dimensions, namely: (a) Policy-scope, (b) Policy-focus, (c) Outcomes, (d) Type of care, and (e) Population served. Each dimension is expanded into a taxonomy of its constituent elements. Each combination of elements from the five dimensions is a possible policy initiative component. There are 30,030 possible components encapsulated in the ontology. It includes, for example: (a) National financial policies on accessibility of preventive care for family, and (b) Local-urban regulatory policies on cost of palliative care for individual-aged. Four of the authors mapped all of Australia's health programs and initiatives onto the ontology. Visualizations of the data are used to highlight the relative emphases in the program initiatives. The dominant emphasis of the program initiatives is: [National] [educational, personnel-physician, information] policies on [accessibility, quality] of [preventive, wellness] care for the [community]. However, although (a) information is emphasized, technology is not, and (b) accessibility and quality are emphasized, while cost and satisfaction are not. The ontology and the results of the mapping can help systematically reassess and redirect the relative emphases of the programs and initiatives from a systemic perspective.
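The 30,030 figure follows directly from taking one element per dimension: the total is the product of the five taxonomy sizes. A minimal Python sketch; the element counts used here are hypothetical, since the abstract reports only the total, and any five sizes whose product is 30,030 would give the same count:

```python
from itertools import product
from math import prod

# Hypothetical element counts per dimension (illustrative only; the
# abstract does not list the actual taxonomy sizes).
sizes = {"Policy-scope": 6, "Policy-focus": 13, "Outcomes": 7,
         "Type of care": 11, "Population served": 5}
total = prod(sizes.values())  # 6 * 13 * 7 * 11 * 5 = 30,030

# A component is one element drawn from each dimension; with two toy
# dimensions the enumeration looks like this:
scope = ["National", "Local-urban"]
care = ["preventive", "palliative"]
components = list(product(scope, care))  # 2 * 2 = 4 combinations
print(total, len(components))
```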
Formally justified and modular Bayesian inference for probabilistic programs
Probabilistic modelling offers a simple and coherent framework to describe the
real world in the face of uncertainty. Furthermore, by applying Bayes' rule
it is possible to use probabilistic models to make inferences about the state of
the world from partial observations. While traditionally probabilistic models
were constructed on paper, more recently the approach of probabilistic
programming enables users to write the models in executable languages resembling
computer programs and to freely mix them with deterministic code.
It has long been recognised that the semantics of programming languages is
complicated and the intuitive understanding that programmers have is often
inaccurate, resulting in difficult-to-understand bugs and unexpected program
behaviours. Programming languages are therefore studied in a rigorous way using
formal languages with mathematically defined semantics. Traditionally formal
semantics of probabilistic programs are defined using exact inference results,
but in practice exact Bayesian inference is not tractable and approximate
methods are used instead, posing a question of how the results of these
algorithms relate to the exact results. Correctness of such approximate methods
is usually argued somewhat less rigorously, without reference to a formal
semantics.
In this dissertation we formally develop denotational semantics for
probabilistic programs that correspond to popular sampling algorithms often used
in practice. The semantics is defined for an expressive typed lambda calculus
with higher-order functions and inductive types, extended with probabilistic
effects for sampling and conditioning, allowing continuous distributions and
unbounded likelihoods. It makes crucial use of the recently developed formalism
of quasi-Borel spaces to bring all these elements together. We provide semantics
corresponding to several variants of Markov chain Monte Carlo and Sequential
Monte Carlo methods and formally prove a notion of correctness for these
algorithms in the context of probabilistic programming.
We also show that the semantic construction can be directly mapped to an
implementation using established functional programming abstractions called
monad transformers. We develop a compact Haskell library for probabilistic
programming closely corresponding to the semantic construction, giving users a
high level of assurance in the correctness of the implementation. We also
demonstrate on a collection of benchmarks that the library offers performance
competitive with existing systems of similar scope.
An important property of our construction, both the semantics and the
implementation, is the high degree of modularity it offers. All the inference
algorithms are constructed by combining small building blocks in a setup where
the type system ensures correctness of compositions. We show that with basic
building blocks corresponding to vanilla Metropolis-Hastings and Sequential
Monte Carlo we can implement more advanced algorithms known in the literature,
such as Resample-Move Sequential Monte Carlo, Particle Marginal
Metropolis-Hastings, and Sequential Monte Carlo squared. These implementations
are very concise, reducing the effort required to produce them and the scope for
bugs. On top of that, our modular construction enables in some cases
deterministic testing of randomised inference algorithms, further increasing
reliability of the implementation.
Funding: Engineering and Physical Sciences Research Council, Cambridge Trust, Cambridge-Tuebingen programme.
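As an illustration of the probabilistic-programming style the abstract describes, and not code from the dissertation's Haskell library, here is a minimal Python sketch of importance sampling: the model mixes random choices with deterministic code, and approximate inference weights prior samples by the likelihood of the observations (Bayes' rule up to a normalizing constant):

```python
import random

def likelihood(p, flips):
    # Bernoulli likelihood of a sequence of coin flips given bias p
    l = 1.0
    for heads in flips:
        l *= p if heads else (1.0 - p)
    return l

def posterior_mean(flips, n_samples=50_000, seed=0):
    # Importance sampling: draw the bias from a uniform prior,
    # weight each draw by the likelihood of the data, and return
    # the weighted average as the posterior-mean estimate.
    rng = random.Random(seed)
    w_sum = wp_sum = 0.0
    for _ in range(n_samples):
        p = rng.random()          # sample from the prior
        w = likelihood(p, flips)  # condition on the observations
        w_sum += w
        wp_sum += w * p
    return wp_sum / w_sum

flips = [True, True, True, False]  # three heads, one tail
estimate = posterior_mean(flips)   # exact posterior is Beta(4, 2), mean 2/3
```

The exact posterior here has a closed form, which is what makes such a toy model useful for checking an approximate sampler against the semantics.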
Generalized labelled Markov processes, coalgebraically
Coalgebras of measurable spaces are of interest in probability theory as a formalization of Labelled Markov Processes (LMPs). We discuss some general facts related to the notions of bisimulation and cocongruence on these systems, providing a faithful characterization of bisimulation on LMPs on generic measurable
spaces. This has been used to prove that bisimilarity on single LMPs is an equivalence, without assuming the state space to be analytic. As the second main contribution, we introduce the first specification rule format to define well-behaved composition operators for LMPs. This allows one to define process description languages on LMPs which are always guaranteed to have a fully-abstract semantics.
Developing and Measuring Parallel Rule-Based Systems in a Functional Programming Environment
This thesis investigates the suitability of using functional programming for building parallel rule-based systems. A functional version of the well-known rule-based system OPS5 was implemented, and there is a discussion on the suitability of functional languages for both building compilers and manipulating state. Functional languages can be used to build compilers that reflect the structure of the original grammar of a language and are, therefore, very suitable. Particular attention is paid to the state requirements and the state manipulation structures of applications such as a rule-based system because, traditionally, functional languages have been considered unable to manipulate state. From the implementation work, issues have arisen that are important for functional programming as a whole. They are in the areas of algorithms and data structures and development environments. There is a more general discussion of state and state manipulation in functional programs and how theoretical work, such as monads, can be used. Techniques for how descriptions of graph algorithms may be interpreted more abstractly to build functional graph algorithms are presented. Beyond the scope of programming, there are issues relating both to the functional language's interaction with the operating system and to tools, such as debugging and measurement tools, which help programmers write efficient programs. In both of these areas functional systems are lacking. To address the complete lack of measurement tools for functional languages, a profiling technique was designed which can accurately measure the number of calls to a function, the time spent in a function, and the amount of heap space used by a function. From this design, a profiler was developed for higher-order, lazy, functional languages which allows the programmer to measure and verify the behaviour of a program.
This profiling technique is designed primarily for application programmers rather than functional language implementors, and the results presented by the profiler directly reflect the lexical scope of the original program rather than some run-time representation. Finally, there is a discussion of generally available techniques for parallelizing functional programs in order that they may execute on a parallel machine. The techniques which are easier for the parallel systems builder to implement are shown to be least suitable for large functional applications. Those techniques that best suit functional programmers are not yet generally available and usable.
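The profiler described above targets higher-order lazy functional languages; as a rough analogue only (not the thesis's technique), per-function call counts and wall-clock time can be sketched in Python with a decorator:

```python
import time
from collections import defaultdict
from functools import wraps

call_counts = defaultdict(int)    # function name -> number of calls
total_time = defaultdict(float)   # function name -> cumulative seconds

def profiled(fn):
    # Wrap fn so every call is counted and timed under its own name,
    # mirroring the "calls to a function / time in a function" measures.
    @wraps(fn)
    def wrapper(*args, **kwargs):
        call_counts[fn.__name__] += 1
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            total_time[fn.__name__] += time.perf_counter() - start
    return wrapper

@profiled
def fib(n):
    # naive recursion, so the recursive calls go through the wrapper too
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(10)
print(call_counts["fib"])  # 177 calls for fib(10)
```

Unlike the thesis's design, this records a run-time view rather than the lexical scope of the source program, and it does not measure heap use.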
Seeing all things in space: Kant and the reality of space in the context of early modern philosophy
One of the basic concepts of the metaphysics of the pre-critical Kant is the early modern, Leibnizian concept of the world as a synthetic whole of simple substances. Space is the order according to which these simple substances coexist, in the presence of God. Kant's turn to critical philosophy contained a re-evaluation of Leibnizian metaphysics. Space is an ideal form of sensibility, not a real order of coexisting simple substances. This dissertation argues that Kant's critical turn inspired him to outline a new science of subjective space: the Transcendental Aesthetic.
Leibnizians argued that our knowledge of space is innate, but still abstracted from the common-sense idea of extension. In the Critique of Pure Reason Kant responded that this misrepresents our knowledge of subjective space, which is the topic of the Transcendental Aesthetic. An exposition of the marks of the concept of subjective space shows not only that space is a form of sensibility, but that it is a continuous, actually infinite whole, which precedes its potentially infinite parts. In Kant's terminology space is an analytic whole, which, according to this dissertation, gives a key to the ideality of space.
One important topic in the literature concerns Kant's awareness of the Leibnizian alternative that space might be both a form of sensibility and an order of coexistence. This dissertation claims that Kant could not rule out this alternative completely. However, in one aspect Kant was successful: Leibnizians had to admit that continuity belongs to space not as an order of coexistence but as a form of sensibility. We see all things in continuous space, not in God. However, seeing things in space is analogous to seeing them in God.
Relations and Folds in Leibniz: Monadological Intimacy
My goal is to provide a clear explanation of Leibniz's notoriously difficult system of relations. Relations among "windowless" substances that exert no causal power over one another seem like a pipe dream that should be abandoned. However, I demonstrate that each substance expresses its relations only through its unique representation of all other substances. That is, any relation a substance expresses is due to this unique, perspectival, non-causal representation of others. Because this is the case for all substances, this relation of representation is an ongoing process of interconnection among all substances. This representation is not merely a cognitive copy of the universe (i.e. all other substances); it is the expression of all other substances from a distinct perspective. By taking a holistic approach, I show that Leibniz is not contradicting himself when he claims that substances are windowless, relations are ideal, all substances are interconnected, and there are no such things as purely extrinsic denominations.