A specification language for Lexical Functional Grammars
This paper defines a language L for specifying LFG grammars. This enables constraints on LFG's composite ontology (c-structures synchronised with f-structures) to be stated directly; no appeal to the LFG construction algorithm is needed. We use L to specify schemata-annotated rules and the LFG uniqueness, completeness and coherence principles. Broader issues raised by this work are noted and discussed.
Comment: 6 pages, LaTeX, uses eaclap.sty; Procs of Euro ACL-9
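The three well-formedness principles mentioned in the abstract are standard in LFG and easy to state over attribute-value structures. As a rough illustration only (this is not the paper's language L, whose constraints are stated in a dedicated specification logic), the sketch below checks uniqueness, completeness and coherence for a toy f-structure encoded as a Python dict; the attribute names and the set of governable functions are illustrative assumptions.

```python
# Toy illustration of LFG's three well-formedness principles over
# f-structures encoded as nested dicts. This is NOT the paper's
# specification language L; attribute names are illustrative.

GOVERNABLE = {"SUBJ", "OBJ", "OBJ2", "OBL", "COMP", "XCOMP"}

def is_unique(f):
    """Uniqueness: each attribute has exactly one value. A dict gives us
    this for free, so we only recurse into sub-structures."""
    return all(is_unique(v) for v in f.values() if isinstance(v, dict))

def is_complete(f):
    """Completeness: every function the PRED subcategorises for is present."""
    subcat = f.get("PRED", {}).get("ARGS", [])
    return all(gf in f for gf in subcat)

def is_coherent(f):
    """Coherence: every governable function present is subcategorised for."""
    subcat = set(f.get("PRED", {}).get("ARGS", []))
    return all(gf in subcat for gf in f if gf in GOVERNABLE)

f = {
    "PRED": {"REL": "devour", "ARGS": ["SUBJ", "OBJ"]},
    "SUBJ": {"PRED": {"REL": "cat", "ARGS": []}},
    "OBJ":  {"PRED": {"REL": "mouse", "ARGS": []}},
}
print(is_unique(f), is_complete(f), is_coherent(f))  # True True True
```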
The role of Comprehension in Requirements and Implications for Use Case Descriptions
Within requirements engineering it is generally accepted that in writing specifications (or indeed any requirements phase document), one attempts to produce an artefact which will be simple for the user to comprehend. That is, whether the document is intended for customers to validate requirements, or for engineers to understand what the design must deliver, comprehension is an important goal for the author. Indeed, advice on producing ‘readable’ or ‘understandable’ documents is often included in courses on requirements engineering. However, few researchers, particularly within the software engineering domain, have attempted either to define or to understand the nature of comprehension and its implications for guidance on the production of quality requirements.
Therefore, this paper thoroughly examines the nature of textual comprehension, drawing heavily on research in discourse process, and suggests some implications for requirements (and other) software documentation. In essence, we find that the guidance on writing requirements often prevalent within software engineering may be based upon assumptions which oversimplify the nature of comprehension. Hence, the paper examines guidelines which have been proposed, in this case for use case descriptions, and the extent to which they agree with discourse process theory, before suggesting refinements to the guidelines which attempt to utilise lessons learned from our richer understanding of the underlying discourse process theory. For example, we suggest subtly different sets of writing guidelines for the different tasks of requirements, specification and design.
Why Coherence Matters
Explicating the concept of coherence and establishing a measure for assessing the coherence of an information set are two of the most important tasks of coherentist epistemology. To this end, several principles have been proposed to guide the specification of a measure of coherence. We depart from this prevailing path by challenging two well-established and prima facie plausible principles: Agreement and Dependence. Instead, we propose a new probabilistic measure of coherence that combines basic intuitions of both principles, but without strictly satisfying either of them. It is then shown that the new measure outperforms alternative measures in terms of its truth-tracking properties. We consider this feature to be central and argue that coherence matters because it is likely to be our best available guide to truth, at least when more direct evidence is unavailable.
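For readers unfamiliar with probabilistic coherence measures, the sketch below computes two classic proposals from the literature, Shogenji's ratio measure and the Olsson–Glass overlap measure, for propositions represented as sets of possible worlds under a toy distribution. These are background illustrations only; the measure proposed in the paper is a different construction and is not reproduced here.

```python
# Two classic probabilistic coherence measures, shown for background only;
# the paper proposes a different measure that is not reproduced here.
# Propositions are sets of possible worlds; P assigns mass to each world.
from functools import reduce

P = {"w1": 0.3, "w2": 0.2, "w3": 0.4, "w4": 0.1}  # toy distribution (assumed)

def prob(event):
    return sum(P[w] for w in event)

def shogenji(props):
    """Shogenji (1999): P(A1 & ... & An) / (P(A1) * ... * P(An)).
    Values > 1 indicate mutual support, < 1 mutual undermining."""
    joint = prob(reduce(set.intersection, props))
    indep = 1.0
    for A in props:
        indep *= prob(A)
    return joint / indep

def olsson_glass(props):
    """Olsson (2002) / Glass (2002): P(A1 & ... & An) / P(A1 v ... v An),
    i.e. the relative overlap of the propositions."""
    return prob(reduce(set.intersection, props)) / prob(reduce(set.union, props))

A = {"w1", "w2"}  # e.g. "the suspect was in town"
B = {"w2", "w3"}  # e.g. "a witness saw the suspect"
print(shogenji([A, B]), olsson_glass([A, B]))
```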
Extrapolation of Airborne Polarimetric and Interferometric SAR Data for Validation of Bio-Geo-Retrieval Algorithms for Future Spaceborne SAR Missions
Spaceborne SAR system concepts and mission designs are often based on algorithms developed for, and experience gathered from, airborne SAR experiments and associated dedicated campaigns. However, airborne SAR systems have better performance parameters than their future spaceborne counterparts, as their design is not impacted by mass, power, and storage constraints.
This paper describes a methodology to extrapolate spaceborne-quality SAR image products from long-wavelength airborne polarimetric SAR data which were acquired especially for the development and validation of bio/geo-retrieval algorithms in forested regions. For this purpose, not only system (sensor) related parameters are altered, but also those relating to the propagation path (ionosphere) and to temporal decorrelation.
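Two of the degradations involved can be illustrated compactly: at long wavelengths the ionosphere rotates the polarisation plane (Faraday rotation), and a spaceborne sensor typically has a worse noise floor (NESZ). The sketch below applies both effects to an airborne quad-pol scattering matrix; the rotation angle and NESZ value are illustrative assumptions, not mission parameters from the paper.

```python
# Illustrative degradation of airborne quad-pol SAR data toward spaceborne
# quality: Faraday rotation (one-way angle omega) plus additive noise to
# emulate a worse noise floor. Parameter values are assumptions, not taken
# from the paper.
import numpy as np

rng = np.random.default_rng(0)

def faraday_rotate(S, omega):
    """Monostatic observed matrix M = R(omega) @ S @ R(omega): the one-way
    Faraday rotation acts on both the transmit and the receive path."""
    c, s = np.cos(omega), np.sin(omega)
    R = np.array([[c, s], [-s, c]])
    return R @ S @ R

def add_noise(S, nesz_db, sigma0_db=0.0):
    """Add circular Gaussian noise so the noise floor sits nesz_db below
    a reference backscatter level sigma0_db (both in dB)."""
    npow = 10 ** ((sigma0_db + nesz_db) / 10)
    noise = (rng.normal(scale=np.sqrt(npow / 2), size=(2, 2))
             + 1j * rng.normal(scale=np.sqrt(npow / 2), size=(2, 2)))
    return S + noise

S_air = np.array([[1.0 + 0.1j, 0.05j], [0.05j, 0.8 - 0.2j]])  # toy pixel
S_space = add_noise(faraday_rotate(S_air, np.deg2rad(5)), nesz_db=-25)
```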
A counter abstraction technique for the verification of robot swarms
We study parameterised verification of robot swarms against temporal-epistemic specifications. We relax some of the significant restrictions assumed in the literature and present a counter abstraction approach that enables us to verify a potentially much smaller abstract model when checking a formula on a swarm of any size. We present an implementation and discuss experimental results obtained for the alpha algorithm for robot swarms.
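The core idea of counter abstraction is that identical agents need not be tracked individually: a global state records only how many agents occupy each local state, collapsing the exponential concrete state space into a polynomial abstract one. The sketch below builds such an abstract state space for n copies of a small finite-state agent; the three-state protocol is a made-up stand-in, not the alpha algorithm treated in the paper.

```python
# Counter abstraction for n identical finite-state agents: a global state
# is a tuple of counters (one per local state) rather than a vector of n
# local states. The toy protocol is illustrative, not the paper's alpha
# algorithm.

LOCAL_STATES = ["search", "signal", "wait"]
# Local transition relation: state -> set of successor states.
STEP = {"search": {"search", "signal"},
        "signal": {"wait"},
        "wait":   {"search"}}

def successors(counters):
    """One abstract step: some agent in a nonempty local state s moves to
    a successor state t, so the counter of s drops and that of t rises."""
    out = set()
    for i, s in enumerate(LOCAL_STATES):
        if counters[i] == 0:
            continue
        for t in STEP[s]:
            j = LOCAL_STATES.index(t)
            nxt = list(counters)
            nxt[i] -= 1
            nxt[j] += 1
            out.add(tuple(nxt))
    return out

def reachable(n_agents):
    """All counter states reachable from 'everyone searching'."""
    init = (n_agents, 0, 0)
    seen, frontier = {init}, [init]
    while frontier:
        for nxt in successors(frontier.pop()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# The abstract state count grows polynomially in n, not as 3**n.
for n in (3, 10, 50):
    print(n, len(reachable(n)))
```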
Signatures and Induction Principles for Higher Inductive-Inductive Types
Higher inductive-inductive types (HIITs) generalize inductive types of dependent type theories in two ways. On the one hand, they allow the simultaneous definition of multiple sorts that can be indexed over each other. On the other hand, they support equality constructors, thus generalizing higher inductive types of homotopy type theory. Examples that make use of both features are the Cauchy real numbers and the well-typed syntax of type theory where conversion rules are given as equality constructors. In this paper we propose a general definition of HIITs using a small type theory, named the theory of signatures. A context in this theory encodes a HIIT by listing the constructors. We also compute notions of induction and recursion for HIITs by using variants of syntactic logical relation translations. Building full categorical semantics and constructing initial algebras is left for future work. The theory of HIIT signatures was formalised in Agda together with the syntactic translations. We also provide a Haskell implementation, which takes signatures as input and outputs translation results as valid Agda code.
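To make the "sorts indexed over each other" point concrete, here is the textbook inductive-inductive example of contexts and types in informal type-theoretic notation; it is consistent with the paper's motivating examples but is not an excerpt from its Agda formalisation. A higher inductive-inductive signature would additionally admit equality constructors among the entries.

```latex
% Contexts and types as an inductive-inductive signature: the sort Ty is
% indexed over the sort Con being defined simultaneously. A standard
% example, not an excerpt from the paper's formalisation.
\begin{align*}
  &\mathsf{Con} : \mathsf{Set} \qquad \mathsf{Ty} : \mathsf{Con} \to \mathsf{Set}\\
  &\mathsf{nil} : \mathsf{Con}\\
  &\mathsf{ext} : (\Gamma : \mathsf{Con}) \to \mathsf{Ty}\,\Gamma \to \mathsf{Con}\\
  &\iota : (\Gamma : \mathsf{Con}) \to \mathsf{Ty}\,\Gamma\\
  &\Pi : (\Gamma : \mathsf{Con})(A : \mathsf{Ty}\,\Gamma) \to \mathsf{Ty}\,(\mathsf{ext}\,\Gamma\,A) \to \mathsf{Ty}\,\Gamma
\end{align*}
```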