25 research outputs found
An investigation of nondeterminism in functional programming languages
This thesis investigates nondeterminism in functional programming languages. To establish a precise understanding of nondeterministic language properties, Søndergaard and Sestoft's analysis and definitions of functional language properties are adopted, as are their characterizations of weak and strong nondeterminism. This groundwork is followed by a denotational semantic description of a nondeterministic language (suggested by Søndergaard and Sestoft). In this manner, a precise characterization of the effects of strong nondeterminism is developed. Methods used to hide nondeterminism in order to overcome or sidestep the problem of strong nondeterminism in pure functional languages are defined. These techniques ensure that functional languages remain pure while retaining some of the advantages of nondeterminism. Lastly, this discussion of nondeterminism is applied to functional parallel language implementation to show that the problem and its possible solutions are not purely academic. This application gives rise to an interesting discussion of the optimization of list parallelism, a technique that relies on the ability to decide when a bag may be used instead of a list.
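The bag-versus-list criterion mentioned above can be illustrated with a short sketch (illustrative Python, not from the thesis): a reduction over a collection is order-independent exactly when the combining operator is associative and commutative, so in that case a bag may safely replace the list and the elements may be combined in any order, e.g. in parallel.

```python
from functools import reduce

def reduce_in_order(op, xs):
    # The sequential, list-ordered reduction.
    return reduce(op, xs)

def reduce_any_order(op, xs):
    # Simulate a nondeterministic (parallel) schedule by reversing the list.
    return reduce(op, list(reversed(xs)))

add = lambda a, b: a + b   # associative and commutative: a bag suffices
sub = lambda a, b: a - b   # neither: the list order is observable

xs = [1, 2, 3, 4]
assert reduce_in_order(add, xs) == reduce_any_order(add, xs)   # 10 == 10
assert reduce_in_order(sub, xs) != reduce_any_order(sub, xs)   # -8 != -2
```

The asserts make the point concrete: for `add`, any evaluation order yields the same result, so the "list" was really a bag; for `sub`, reordering changes the answer, so the list structure must be preserved.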
On the organization of the lexicon.
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Linguistics and Philosophy, 1980. Microfiche copy available in Archives and Humanities. Vita. Bibliography: leaves 321-326.
A powerdomain semantics for indeterminism
A denotational semantics is presented for a language that includes multiple-valued functions (essentially Lisp S-expressions), which map from ground values into the power domain of ground values. The domain equations are reflexive, and fixed points of all functions are defined. Thus, it is possible to specify an operating system as a function whose codomain is a set of possible behaviors of the system, only one of which is realized under an operational semantics. Such a system can be specified using "pure" applicative programming (recursion equations without side effects) over primitive functions like amb, frons, arbiter, or arbit, all of which are formally defined.
Tempered by 'environmental transparency,' we consider a power domain semantics in which the power domain may only occur within the codomain in the equation that defines function space. The problem addressed is how to define the analog of 'fixed point' for a function from the ground domain to that power domain, in a semantics that uses natural extension as the axiom for function application. Denotational and equivalent operational semantics are presented for the Smyth-upside-down power domain; a similar denotational semantics is presented for the Plotkin power domain (Egli-Milner order), along with a likely operational semantics.
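A minimal sketch of the set-valued reading of amb and of natural extension, in illustrative Python (the names `amb` and `extend` are assumptions for this sketch, not the paper's notation): a nondeterministic expression denotes the set of its possible values, and a function is applied by natural extension, i.e. pointwise over that set.

```python
def amb(*alternatives):
    """Denote a nondeterministic choice as the set of its alternatives."""
    return frozenset(alternatives)

def extend(f):
    """Natural extension: lift f to apply pointwise over a set of values."""
    return lambda s: frozenset(f(v) for v in s)

# amb(1, 2, 3) denotes {1, 2, 3}; doubling each possible value
# yields the set of possible results {2, 4, 6}.
double = extend(lambda x: 2 * x)
print(sorted(double(amb(1, 2, 3))))   # [2, 4, 6]
```

This captures only the "set of possible outcomes" intuition; the paper's actual contribution, ordering these sets as power domains so that recursive definitions have fixed points, is not modeled here.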
A formal model of non-determinate dataflow computation
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1983. Microfiche copy available in Archives and Engineering. Vita. Bibliography: leaves 75-78. By Jarvis Dean Brock.
A metrical theory of stress rules
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Linguistics and Philosophy, 1980. Microfiche copy available in Archives and Humanities. Vita. Bibliography: leaves 333-339. By Bruce Philip Hayes.
Active Logics: A Unified Formal Approach to Episodic Reasoning
Artificial intelligence research falls roughly into two categories:
formal and implementational. This division is not completely firm:
there are implementational studies based on (formal or informal)
theories (e.g., CYC, SOAR, OSCAR), and there are theories framed with
an eye toward implementability (e.g., predicate circumscription).
Nevertheless, formal/theoretical work tends to focus on very narrow
problems (and even on very special cases of very narrow problems) while
trying to get them "right" in a very strict sense, while
implementational work tends to aim at fairly broad ranges of behavior
but often at the expense of any kind of overall conceptually unifying
framework that informs understanding. It is sometimes urged that this
gap is intrinsic to the topic: intelligence is not a unitary thing for
which there will be a unifying theory, but rather a "society" of
subintelligences whose overall behavior cannot be reduced to useful
characterizing and predictive principles.
Here we describe a formal architecture that is more closely tied to
implementational constraints than is usual for formalisms, and which
has been used to solve a number of commonsense problems in a unified
manner. In particular, we address the issue of formal, integrated, and
longitudinal reasoning: inferentially-modeled behavior that
incorporates a fairly wide variety of types of commonsense reasoning
within the context of a single extended episode of activity requiring
keeping track of ongoing progress, and altering plans and beliefs
accordingly. Instead of aiming at optimal solutions to isolated,
well-specified and temporally narrow problems, we focus on satisficing
solutions to under-specified and temporally-extended problems, much
closer to real-world needs. We believe that such a focus is required
for AI to arrive at truly intelligent mechanisms with the ability to
behave effectively over considerably longer time periods and wider
ranges of circumstances than are common in AI today. While this will surely lead
to less elegant formalisms, it also surely is requisite if AI is to get
fully out of the blocks-world and into the real world.
(Also cross-referenced as UMIACS-TR-99-65.)
Natively probabilistic computation
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2009. Includes bibliographical references (leaves 129-135). I introduce a new set of natively probabilistic computing abstractions, including probabilistic generalizations of Boolean circuits, backtracking search and pure Lisp. I show how these tools let one compactly specify probabilistic generative models, generalize and parallelize widely used sampling algorithms like rejection sampling and Markov chain Monte Carlo, and solve difficult Bayesian inference problems. I first introduce Church, a probabilistic programming language for describing probabilistic generative processes that induce distributions, which generalizes Lisp, a language for describing deterministic procedures that induce functions. I highlight the ways randomness meshes with the reflectiveness of Lisp to support the representation of structured, uncertain knowledge, including nonparametric Bayesian models from the current literature, programs for decision making under uncertainty, and programs that learn very simple programs from data. I then introduce systematic stochastic search, a recursive algorithm for exact and approximate sampling that generalizes a popular form of backtracking search to the broader setting of stochastic simulation and recovers widely used particle filters as a special case. I use it to solve probabilistic reasoning problems from statistical physics, causal reasoning and stereo vision. Finally, I introduce stochastic digital circuits that model the probability algebra just as traditional Boolean circuits model the Boolean algebra. I show how these circuits can be used to build massively parallel, fault-tolerant machines for sampling and allow one to efficiently run Markov chain Monte Carlo methods on models with hundreds of thousands of variables in real time.
I emphasize the ways in which these ideas fit together into a coherent software and hardware stack for natively probabilistic computing, organized around distributions and samplers rather than deterministic functions. I argue that by building uncertainty and randomness into the foundations of our programming languages and computing machines, we may arrive at ones that are more powerful, flexible and efficient than deterministic designs, and are in better alignment with the needs of computational science, statistics and artificial intelligence. By Vikash Kumar Mansinghka.
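The rejection-sampling idea that the thesis generalizes can be sketched in a few lines of Python (the names `flip`, `model`, and `rejection_sample` are illustrative, not the Church API): run a generative model forward and keep only the runs that satisfy the observed condition; the accepted runs are exact samples from the conditional distribution.

```python
import random

def flip(p=0.5):
    """A fair (or biased) coin flip, the basic random primitive."""
    return random.random() < p

def model():
    """A tiny generative model: two independent coin flips."""
    a = flip()
    b = flip()
    return a, b

def rejection_sample(model, condition):
    """Run the model until a sample satisfies the observed condition."""
    while True:
        sample = model()
        if condition(sample):
            return sample

random.seed(0)
# Condition on observing "a or b"; every accepted sample satisfies it.
samples = [rejection_sample(model, lambda s: s[0] or s[1]) for _ in range(1000)]
both = sum(1 for a, b in samples if a and b) / len(samples)
# P(a and b | a or b) = 1/3, so `both` should be near 0.33.
```

The thesis's systematic stochastic search and stochastic circuits can be read as making this brute-force loop tractable: reusing partial work across rejected runs and executing many such random choices in parallel hardware.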