35,682 research outputs found
Multi-level Contextual Type Theory
Contextual type theory distinguishes between bound variables and
meta-variables to write potentially incomplete terms in the presence of
binders. It has found good use as a framework for concisely explaining
higher-order unification, for characterizing holes in proofs, and for
developing a foundation for programming with higher-order abstract syntax, as
embodied by the programming and reasoning environment Beluga. However, to reason about
these applications, we need to introduce meta^2-variables to characterize the
dependency on meta-variables and bound variables. In other words, we must go
beyond a two-level system granting only bound variables and meta-variables.
In this paper we generalize contextual type theory to n levels for arbitrary
n, so as to obtain a formal system offering bound variables, meta-variables and
so on all the way to meta^n-variables. We obtain a uniform account by
collapsing all these different kinds of variables into a single notion of
variable indexed by some level k. We give a decidable bi-directional type
system which characterizes beta-eta-normal forms together with a generalized
substitution operation.

Comment: In Proceedings LFMTP 2011, arXiv:1110.668
Reconciling positional and nominal binding
We define an extension of the simply-typed lambda calculus where two
different binding mechanisms, by position and by name, nicely coexist. In the
former, as in standard lambda calculus, the matching between parameter and
argument is done on a positional basis, hence alpha-equivalence holds, whereas
in the latter it is done on a nominal basis. The two mechanisms also
respectively correspond to static binding, where the existence and type
compatibility of the argument are checked at compile-time, and dynamic binding,
where they are checked at run-time.

Comment: In Proceedings ITRS 2012, arXiv:1307.784
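The contrast between the two binding mechanisms can be sketched in a few lines. This is an invented Python illustration, not the paper's calculus: positional binding matches argument to parameter by position, so the bound name is irrelevant (alpha-equivalence holds), while nominal binding resolves the name in an environment only when the term is evaluated, failing at run time if it is absent.

```python
# Hypothetical encoding of the two mechanisms, for illustration only.

def positional(body):
    """lambda x. body -- the bound name is internal; callers supply a position.
    Renaming the parameter gives an alpha-equivalent function."""
    return lambda arg: body(arg)

def nominal(name, body, env):
    """Binding by name: the argument is looked up in env when evaluated,
    so existence is a run-time (dynamic) check."""
    if name not in env:
        raise NameError(f"unbound nominal parameter {name!r}")
    return body(env[name])

inc = positional(lambda x: x + 1)
print(inc(41))

env = {"rate": 0.5}
print(nominal("rate", lambda r: r * 10, env))
```

Calling `nominal("missing", ..., env)` raises `NameError`, mirroring the dynamic-binding failure mode the abstract describes, whereas a positional mismatch would be caught statically in a typed setting.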
Analysis and Synthesis of Metadata Goals for Scientific Data
The proliferation of discipline-specific metadata schemes contributes to artificial barriers that can impede interdisciplinary and transdisciplinary research. The authors considered this problem by examining the domains, objectives, and architectures of nine metadata schemes used to document scientific data in the physical, life, and social sciences. They used a mixed-methods content analysis and Greenberg’s (2005) metadata objectives, principles, domains, and architectural layout (MODAL) framework, and derived 22 metadata-related goals from textual content describing each metadata scheme. Relationships are identified between the domains (e.g., scientific discipline and type of data) and the categories of scheme objectives. For each strong correlation (> 0.6), a Fisher’s exact test for nonparametric data was used to determine significance (p < .05).
Significant relationships were found between the domains and objectives of the schemes. Schemes describing observational data are more likely to have “scheme harmonization” (compatibility and interoperability with related schemes) as an objective; schemes with the objective “abstraction” (a conceptual model exists separate from the technical implementation) also have the objective “sufficiency” (the scheme defines a minimal amount of information to meet the needs of the community); and schemes with the objective “data publication” do not have the objective “element refinement.” The analysis indicates that many metadata-driven goals expressed by communities are independent of scientific discipline or the type of data, although they are constrained by historical community practices and workflows as well as the technological environment at the time of scheme creation. The analysis reveals 11 fundamental goals for metadata documenting scientific data in support of sharing research data across disciplines and domains. The authors report these results and highlight the need for more metadata-related research, particularly in the context of recent funding agency policy changes.
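The significance test the study relies on, Fisher's exact test on a 2x2 contingency table, can be sketched from first principles with exact binomial coefficients. The counts below are invented for illustration and do not come from the study's data.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact p-value for the table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of every table with the same
    margins that is no more likely than the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):
        # probability of x in the top-left cell, with all margins fixed
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo = max(0, col1 - (n - row1))
    hi = min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Invented example: 8 of 9 schemes in one group list an objective
# versus 1 of 9 in the other group.
p = fisher_exact_2x2(8, 1, 1, 8)
print(f"p = {p:.4f}")
```

For such an unbalanced invented table the p-value falls well below the study's .05 threshold; a perfectly balanced table such as `(5, 5, 5, 5)` yields p = 1.0.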
Explicit Substitutions for Contextual Type Theory
In this paper, we present an explicit substitution calculus which
distinguishes between ordinary bound variables and meta-variables. Its typing
discipline is derived from contextual modal type theory. We first present a
dependently typed lambda calculus with explicit substitutions for ordinary
variables and explicit meta-substitutions for meta-variables. We then present a
weak head normalization procedure which performs both substitutions lazily and
in a single pass, thereby combining substitution walks for the two different
classes of variables. Finally, we describe a bidirectional type checking
algorithm which uses weak head normalization, and prove its soundness.

Comment: In Proceedings LFMTP 2010, arXiv:1009.218
Do academics doubt their own research?
When do experts doubt or question their own previously published research, and why? An online survey was designed and distributed to academic staff and postgraduate research students at different universities in Great Britain. Respondents (n = 202 - 244) identified the likelihoods of six different (quasi-)hypothetical occurrences causing them to doubt or question work they have published in peer-reviewed journals. These are: two objective and two semi-objective citation-based metrics, plus two semi-objective metrics based on verbalised reactions. Only limited support is found from this study to suggest that the authors of primary research would agree with any judgements made by others about their research based on these metrics. The occurrence most likely to cause respondents to doubt or question their previously published research was where the majority of citing studies suggested mistakes in their work. In a multivariate context, only age and nationality are significant determinants of doubt beyond average likelihoods. Understanding and acknowledging what makes authors of primary research doubt their own work could increase the validity of judgements passed on it by others.
A Computational Approach to Reflective Meta-Reasoning about Languages with Bindings
We present a foundation for a computational meta-theory of languages with bindings implemented in a computer-aided formal reasoning environment. Our theory provides the ability to reason abstractly about operators, languages, open-ended languages, classes of languages, etc. The theory is based on the ideas of higher-order abstract syntax, with an appropriate induction principle parameterized over the language (i.e. a set of operators) being used. In our approach, both the bound and free variables are treated uniformly, and this uniform treatment extends naturally to variable-length bindings. The implementation is reflective, namely there is a natural mapping between the meta-language of the theorem prover and the object language of our theory. The object-language substitution operation is mapped to the meta-language substitution and does not need to be defined recursively. Our approach does not require designing a custom type theory; in this paper we describe the implementation of this foundational theory within a general-purpose type theory. This work is fully implemented in the MetaPRL theorem prover, using the pre-existing NuPRL-like Martin-Löf-style computational type theory. Based on this implementation, we lay out an outline for a framework for programming language experimentation and exploration as well as a general reflective reasoning framework. This paper also includes a short survey of the existing approaches to syntactic reflection.
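The core higher-order abstract syntax idea, that object-language substitution is inherited from the meta-language rather than defined recursively, can be sketched generically. This is an invented Python illustration, not MetaPRL's actual implementation: an object-language binder is represented as a meta-language function, so substituting into the binder is just function application.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class App:
    fn: object
    arg: object

@dataclass
class Lam:
    body: Callable  # the binder: a host-language function from terms to terms

def subst_into_binder(lam: Lam, arg):
    """Object-level substitution = meta-level function application.
    No recursive traversal of the term is ever written down."""
    return lam.body(arg)

# (lambda x. x x) with "c" substituted for x:
self_app = Lam(lambda x: App(x, x))
print(subst_into_binder(self_app, "c"))   # App(fn='c', arg='c')
```

The trade-off, as the surveyed reflection literature discusses, is that the binder's body is now opaque to the host language, which is why the paper pairs this representation with an induction principle parameterized over the operator set.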
A Foundational View on Integration Problems
The integration of reasoning and computation services across system and
language boundaries is a challenging problem of computer science. In this
paper, we consider the scenario where we integrate two systems by moving
problems and solutions between them. While this scenario is
often approached from an engineering perspective, we take a foundational view.
Based on the generic declarative language MMT, we develop a theoretical
framework for system integration using theories and partial theory morphisms.
Because MMT permits representations of the meta-logical foundations themselves,
this includes integration across logics. We discuss safe and unsafe integration
schemes and devise a general form of safe integration.
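The role of a partial theory morphism in a safe integration scheme can be sketched with a toy model. This is an invented illustration, not MMT itself: a theory is modeled as a set of symbol names, a partial morphism as a symbol map defined on only part of the source theory, and the safe scheme moves a problem across only when every symbol it uses has an image.

```python
# Toy model: theories as symbol sets, a partial morphism as a dict.
source = {"nat", "plus", "times", "choice"}
target = {"int", "add", "mul"}

# Partial morphism: "choice" has no counterpart in the target system.
morphism = {"nat": "int", "plus": "add", "times": "mul"}

def translate(problem_symbols, morphism, target_theory):
    """Safe scheme: translate totally or refuse, never guess an image."""
    missing = [s for s in problem_symbols if s not in morphism]
    if missing:
        raise ValueError(f"unsafe: no image for {missing}")
    image = [morphism[s] for s in problem_symbols]
    assert all(s in target_theory for s in image), "morphism leaves the theory"
    return image

print(translate(["plus", "nat"], morphism, target))   # ['add', 'int']
# translate(["choice"], morphism, target) raises: the safe scheme refuses.
```

An unsafe scheme, in this toy picture, would translate the mapped symbols and hope the unmapped ones never matter; the refusal above is the behavioral difference the abstract's safe/unsafe distinction points at.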