180 research outputs found
Taming Numbers and Durations in the Model Checking Integrated Planning System
The Model Checking Integrated Planning System (MIPS) is a temporal least
commitment heuristic search planner based on a flexible object-oriented
workbench architecture. Its design clearly separates explicit and symbolic
directed exploration algorithms from the set of on-line and off-line computed
estimates and associated data structures. MIPS has shown distinguished
performance in the last two international planning competitions. In the last
event the description language was extended from pure propositional planning to
include numerical state variables, action durations, and plan quality objective
functions. Plans were no longer sequences of actions but time-stamped
schedules. As a participant of the fully automated track of the competition,
MIPS has proven to be a general system; in each track and every benchmark
domain it efficiently computed plans of remarkable quality. This article
introduces and analyzes the most important algorithmic novelties that were
necessary to tackle the new layers of expressiveness in the benchmark problems
and to achieve a high level of performance. The extensions include critical
path analysis of sequentially generated plans to generate corresponding optimal
parallel plans. The linear-time algorithm to compute the parallel plan bypasses
known NP-hardness results for partial ordering by scheduling plans with respect
to the set of actions and the imposed precedence relations. The efficiency of
this algorithm also allows us to improve the exploration guidance: for each
encountered planning state the corresponding approximate sequential plan is
scheduled. One major strength of MIPS is its static analysis phase that grounds
and simplifies parameterized predicates, functions and operators, that infers
knowledge to minimize the state description length, and that detects domain
object symmetries. The latter aspect is analyzed in detail. MIPS has been
developed to serve as a complete and optimal state space planner, with
admissible estimates, exploration engines and branching cuts. In the
competition version, however, certain performance compromises had to be made,
including floating point arithmetic, weighted heuristic search exploration
according to an inadmissible estimate, and parameterized optimization.
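The critical-path construction described above can be sketched as follows. This is an illustrative reconstruction rather than MIPS code, and the function and data-structure names are hypothetical: each action starts as soon as all of its predecessors in the precedence relation have finished, so a single pass over the sequential plan yields the parallel schedule in time linear in the number of actions and precedence constraints.

```python
# Hedged sketch of critical-path scheduling of a sequential plan.
# plan: actions in their sequential order; duration: action -> float;
# preds: action -> iterable of earlier actions it must wait for.
# (Names and representation are illustrative, not MIPS's actual API.)

def schedule(plan, duration, preds):
    start = {}
    for a in plan:  # actions visited in sequential-plan order
        # earliest start = latest finish time among predecessors
        start[a] = max((start[p] + duration[p] for p in preds.get(a, ())),
                       default=0.0)
    makespan = max(start[a] + duration[a] for a in plan)
    return start, makespan
```

For a toy plan a, b, c with durations 2, 3, 1 and c depending on a and b, the independent actions a and b start together, and the makespan drops from 6 (sequential) to 4 (parallel).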
Evolutionary dynamics, topological disease structures, and genetic machine learning
Topological evolution is a new dynamical systems model of biological evolution occurring within a genomic state space. It can be modeled equivalently as a stochastic dynamical system, a stochastic differential equation, or a partial differential equation drift-diffusion model. An application of this approach is a model of disease evolution tracing diseases in ways similar to standard functional traits (e.g., organ evolution). Genetically embedded diseases become evolving functional components of species-level genomes. The competition between species-level evolution (which tends to maintain diseases) and individual evolution (which acts to eliminate them) yields a novel structural topology for the stochastic dynamics involved. In particular, an unlimited set of dynamical time scales emerges as a means of timing different levels of evolution: from individual to group to species and larger units. These scales exhibit a dynamical tension between individual and group evolutions, which are modeled on very different (fast and slow, respectively) time scales.
This is analyzed in the context of a potentially major constraint on evolution: the species-level enforcement of lifespan via (topological) barriers to genomic longevity. This species-enforced behavior is analogous to certain types of evolutionary altruism, but it is denoted here as extreme altruism based on its potential shaping through mass extinctions. We give examples of biological mechanisms implementing some of the topological barriers discussed and provide mathematical models for them. This picture also introduces an explicit basis for lifespan-limiting evolutionary pressures. This involves a species-level need to maintain flux in its genome via a paced turnover of its biomass. This is necessitated by the need for phenomic characteristics to keep pace with genomic changes through evolution. Put briefly, the phenome must keep up with the genome, which occurs with an optimized limited lifespan.
An important consequence of this model is a new role for diseases in evolution. Rather than their commonly recognized role as accidental side-effects, they play a central functional role in the shaping of an optimal lifespan for a species implemented through the topology of their embedding into the genome state space. This includes cancers, which are known to be embedded into the genome in complex and sometimes hair-triggered ways arising from DNA damage. Such cancers are known also to act in engineered and teleological ways that have been difficult to explain using currently very popular theories of intra-organismic cancer evolution. This alternative inter-organismic picture presents cancer evolution as occurring over much longer (evolutionary) time scales rather than very shortened organic evolutions that occur in individual cancers. This in turn may explain some evolved, intricate, and seemingly engineered properties of cancer.
This dynamical evolutionary model is framed in a multiscaled picture in which different time scales are almost independently active in the evolutionary process acting on semi-independent parts of the genome.
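The stochastic-differential-equation formulation mentioned above can be illustrated with a generic Euler-Maruyama simulation of dX = mu(X) dt + sigma dW. This is only a stand-in sketch: the actual model's drift, noise structure, and genomic state space are not specified here, and all names are hypothetical.

```python
# Hedged sketch: Euler-Maruyama integration of a scalar SDE
# dX = mu(X) dt + sigma dW, as a generic stand-in for the
# drift-diffusion dynamics described in the abstract.
import numpy as np

def euler_maruyama(mu, sigma, x0, dt, steps, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(steps + 1)
    x[0] = x0
    for t in range(steps):
        # deterministic drift plus Brownian increment scaled by sqrt(dt)
        x[t + 1] = x[t] + mu(x[t]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x
```

With sigma set to 0 the scheme reduces to explicit Euler on the drift alone, which is a convenient sanity check before adding noise.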
We additionally move from natural evolution to artificial implementations of evolutionary algorithms. We study genetic programming for the structured construction of machine learning features in a new structural risk minimization environment. While genetic programming in feature engineering is not new, we propose a Lagrangian optimization criterion for defining new feature sets inspired by structural risk minimization in statistical learning.
We bifurcate the optimization of this Lagrangian into two exhaustive categories involving local and global search. The former is accomplished through local descent within given basins of attraction, while the latter is done through a combinatorial search for new basins via an evolutionary algorithm.
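The two-level scheme just described can be sketched on a toy one-dimensional objective. This is not the article's feature-construction Lagrangian: the objective, operators, and parameters below are placeholder assumptions, and the point is only the alternation between local descent inside a basin and evolutionary jumps to new basins.

```python
# Hedged sketch of local descent (within a basin) alternating with an
# evolutionary global step (mutations that can jump to new basins).
import random

def local_descent(f, x, step=0.1, iters=100):
    # derivative-free hill climbing: accept a nearby candidate if better
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        if f(cand) < f(x):
            x = cand
    return x

def evolve(f, pop_size=8, gens=20):
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(gens):
        pop = [local_descent(f, x) for x in pop]      # local search phase
        pop.sort(key=f)
        survivors = pop[:pop_size // 2]               # selection
        pop = survivors + [x + random.gauss(0, 2.0)   # global jumps
                           for x in survivors]
    return min(pop, key=f)
```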
Clustering in massive data sets
We review the time and storage costs of search and clustering algorithms. We illustrate these with case studies in astronomy, information retrieval, visual user interfaces, chemical databases, and other areas. Theoretical results developed as far back as the 1960s often remain topical. More recent work is also covered in this article, including a solution to the statistical question of how many clusters there are in a dataset. We also look at one line of inquiry in the use of clustering for human-computer user interfaces. Finally, the visualization of data leads to the consideration of data arrays as images, and we speculate on future results to be expected here.
Conversing with a devil’s advocate: Interpersonal coordination in deception and disagreement
This study investigates the presence of dynamical patterns of interpersonal coordination in extended deceptive conversations across multimodal channels of behavior. Using a novel "devil’s advocate" paradigm, we experimentally elicited deception and truth across topics in which conversational partners either agreed or disagreed, and where one partner was surreptitiously asked to argue an opinion opposite of what he or she really believed. We focus on interpersonal coordination as an emergent behavioral signal that captures interdependencies between conversational partners, both as the coupling of head movements over the span of milliseconds, measured via a windowed lagged cross correlation (WLCC) technique, and more global temporal dependencies across speech rate, using cross recurrence quantification analysis (CRQA). Moreover, we considered how interpersonal coordination might be shaped by strategic, adaptive conversational goals associated with deception. We found that deceptive conversations displayed more structured speech rate and higher head movement coordination, the latter with a peak in deceptive disagreement conversations. Together the results allow us to posit an adaptive account, whereby interpersonal coordination is not beholden to any single functional explanation, but can strategically adapt to diverse conversational demands. The article is published at http://journals.plos.org/plosone/article?id=10.1371/journal.pone.017814
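The windowed lagged cross-correlation measure used for head movements can be sketched as follows. This is not the study's analysis code: the window length, lag range, and use of Pearson correlation are assumptions, and the function name is hypothetical. For each sliding window, one signal is correlated against lagged copies of the other, producing a window-by-lag matrix of coordination strength.

```python
# Hedged sketch of windowed lagged cross-correlation (WLCC):
# returns an (n_windows, 2*max_lag + 1) matrix of Pearson correlations
# between windows of x and lag-shifted windows of y.
import numpy as np

def wlcc(x, y, win, max_lag, step=1):
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    # start positions chosen so every lagged window stays in bounds
    starts = list(range(max_lag, len(x) - win - max_lag + 1, step))
    out = np.empty((len(starts), 2 * max_lag + 1))
    for i, s in enumerate(starts):
        xw = x[s:s + win]
        for j, lag in enumerate(range(-max_lag, max_lag + 1)):
            yw = y[s + lag:s + lag + win]
            out[i, j] = np.corrcoef(xw, yw)[0, 1]
    return out
```

A column-wise peak at a nonzero lag indicates that one partner's movement consistently leads the other's by that delay.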
Objects in Oz
The programming language Oz integrates the paradigms of imperative, functional
and concurrent constraint programming in a computational framework of unprecedented
breadth, featuring stateful programming through cells, lexically scoped
higher-order programming, and explicit concurrency synchronized by logic variables.
Object-oriented programming is another paradigm that provides a set of concepts
useful in software practice. In this thesis we address the question of how
object-oriented programming can be suitably supported in Oz. As a lexically
scoped higher-order language, Oz can express a wide range of object-oriented
concepts. We present a simple yet expressive object system, demonstrate its usability
and outline an efficient implementation. A central aspect of Oz is its support
for concurrent computation. We examine the impact of concurrency on the
design of an object system and explore the use of objects in concurrent programming.
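The claim that a lexically scoped higher-order language can express object-oriented concepts can be illustrated with the classic closure encoding of objects. Python stands in for Oz here purely for readability; the sketch shows only the encoding principle (state in a captured cell, methods as a record of procedures), not Oz's actual object system.

```python
# Hedged sketch: encoding a stateful object with lexical closures.
# The mutable dict plays the role of a stateful cell; the returned
# record of procedures plays the role of the object.
def make_counter(init=0):
    state = {"count": init}          # state captured by the closures below

    def inc(n=1):
        state["count"] += n
        return state["count"]

    def get():
        return state["count"]

    return {"inc": inc, "get": get}  # the "object": a record of methods

c = make_counter()
c["inc"]()
c["inc"](5)
```

Because the state is reachable only through the closures, the encoding also gives encapsulation for free.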
Essays on Statistical Decision Theory and Econometrics
This dissertation studies statistical decision making in various guises. I start by providing a general decision theoretic model of statistical behavior, and then analyze two particular instances which fit in that framework.
Chapter 1 studies statistical decision theory (SDT), a class of models pioneered by Abraham Wald to analyze how agents use data when making decisions under uncertainty. Despite its prominence in information economics and econometrics, SDT has not been given formal choice-theoretic or behavioral foundations. This chapter axiomatizes preferences over decision rules and experiments for a broad class of SDT models. The axioms show how certain seemingly natural decision rules are incompatible with this broad class of SDT models. Using this representation result, I then develop a methodology to translate axioms from classical decision theory, à la Anscombe and Aumann (1963), to the SDT framework. The usefulness of this toolkit is then illustrated by translating various classical axioms, which serve to refine my baseline framework into more specific statistical decision theoretic models, some of which are novel to SDT. I also discuss foundations for SDT under other kinds of choice data.
Chapter 2 studies statistical identifiability of finite mixture models. If a model is not identifiable, multiple combinations of its parameters can lead to the same observed distribution of the data, which greatly complicates, if not invalidates, causal inference based on the model. High-dimensional latent parameter models, which include finite mixtures, are widely used in economics, but are only guaranteed to be identifiable under specific conditions. Since these conditions are usually stated in terms of the hidden parameters of the model, they are seldom testable using noisy data. This chapter provides a condition which, when imposed on the directly observable mixture distribution, guarantees that a finite mixture model is non-parametrically identifiable. Since the condition relates to an observable quantity, it can be used to devise a statistical test of identification for the model. Thus I propose a Bayesian test of whether the model is close to being identified, which the econometrician may apply before estimating the parameters of the model. I also show that, when the model is identifiable, approximate non-negative matrix factorization provides a consistent, likelihood-free estimator of mixture weights.
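The likelihood-free estimator mentioned at the end of the chapter summary can be illustrated with a standard approximate NMF. This is not the chapter's estimator: the multiplicative-update rule (Lee and Seung), the seed, and the iteration count are assumptions. Observed mixture distributions are stacked as rows of V, factored as V ≈ WH with H holding the component distributions and the rows of W recovering the mixture weights.

```python
# Hedged sketch: mixture weights via approximate non-negative matrix
# factorization, V ≈ W H, using standard multiplicative updates.
import numpy as np

def nmf(V, k, iters=2000, eps=1e-9):
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, k))
    H = rng.random((k, m))
    for _ in range(iters):
        # Lee-Seung multiplicative updates for the Frobenius objective
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        # rescale so rows of H are distributions; absorb scale into W,
        # which leaves the product W @ H unchanged
        s = H.sum(axis=1, keepdims=True)
        H /= s
        W *= s.T
    return W, H
```

When the rows of V are themselves probability distributions and the fit is good, the rows of W sum to approximately one and can be read as estimated mixture weights (up to a permutation of the components).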
Chapter 3 studies the robustness of pricing strategies when a firm is uncertain about the distribution of consumers' willingness-to-pay. When the firm has access to data to estimate this distribution, a simple strategy is to implement the mechanism that is optimal for the estimated distribution. We find that such an empirically optimal mechanism boasts strong profit and regret guarantees. Moreover, we provide a toolkit to evaluate the robustness properties of different mechanisms, showing how to consistently estimate and conduct valid inference on the profit generated by any one mechanism, which enables one to evaluate and compare their probabilistic revenue guarantees.
LIPIcs, Volume 261, ICALP 2023, Complete Volume
LIPIcs, Volume 248, ISAAC 2022, Complete Volume