79,542 research outputs found
Determining the preferred representation of temporal constraints in conceptual models
The need for expressing temporal constraints in conceptual models is well-known, but it is unclear which representation is preferred and what would be easier to understand by modellers.
We assessed five modes of representing temporal constraints: formal semantics, Description Logics notation, a coding-style notation, temporal EER diagrams, and (pseudo-)natural language sentences. The same information was presented to 15 participants in an experimental evaluation. The evaluation showed that 1) there was a clear preference for diagrams and natural language, and a dislike of the other representations; 2) diagrams were preferred for simple constraints, but the natural language rendering was preferred for more complex temporal constraints; and 3) a multi-modal modelling tool will be needed for the data analysis stage to be effective.
An MPEG-7 scheme for semantic content modelling and filtering of digital video
Part 5 of the MPEG-7 standard specifies Multimedia Description Schemes (MDS); that is, the format multimedia content models should conform to in order to ensure interoperability across multiple platforms and applications. However, the standard does not specify how the content or the associated model may be filtered. This paper proposes an MPEG-7 scheme which can be deployed for digital video content modelling and filtering. The proposed scheme, COSMOS-7, produces rich and multi-faceted semantic content models and supports a content-based filtering approach that only analyses content relating directly to the preferred content requirements of the user. We present details of the scheme, front-end systems used for content modelling and filtering, and experiences with a number of users.
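The filtering idea described above can be sketched in a few lines. This is a hypothetical stand-in, not the COSMOS-7 data model: the abstract does not expose its structures, so `segments` and `preferences` below are invented placeholders showing the general pattern of matching semantic annotations against user requirements.

```python
# Hypothetical sketch of preference-based semantic filtering; the facet names
# ("objects", "events") and segment layout are illustrative, not from COSMOS-7.

def filter_segments(segments, preferences):
    """Keep only video segments whose semantic annotations
    satisfy every (facet, value) pair the user asked for."""
    return [
        seg for seg in segments
        if all(value in seg.get(facet, set())
               for facet, value in preferences.items())
    ]

segments = [
    {"id": 1, "objects": {"ball", "player"}, "events": {"goal"}},
    {"id": 2, "objects": {"referee"}, "events": {"foul"}},
]
preferences = {"events": "goal"}
matches = filter_segments(segments, preferences)  # keeps only segment 1
```

The point of the pattern is the one the abstract emphasises: only annotations relevant to the user's stated preferences are examined, rather than the whole content model.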
Theory-driven learning: using intra-example relationships to constrain learning
We describe an incremental learning algorithm, called theory-driven learning, that creates rules to predict the effect of actions. Theory-driven learning exploits knowledge of regularities among rules to constrain the learning problem. We demonstrate that this knowledge enables the learning system to rapidly converge on accurate predictive rules and to tolerate more complex training data. An algorithm for incrementally learning these regularities is described, and we provide evidence that the resulting regularities are sufficiently general to facilitate learning in new domains.
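The core idea above can be illustrated with a toy example. This is not the authors' algorithm: the feature names and the particular "regularity" below are invented for illustration, and the real system learns the regularities rather than being handed them.

```python
# Minimal, hypothetical rendering of the idea: a rule for an action predicts
# which state features change, and a known regularity restricts which features
# a rule may ever mention, so the learner ignores coincidental changes.

def update_rule(rule, before, after, relevant):
    """Incrementally refine `rule` (feature -> predicted new value) from one
    training example, considering only features the regularity deems relevant."""
    for feature in relevant:
        if before.get(feature) != after.get(feature):
            rule[feature] = after[feature]
    return rule

rule = {}
relevant = {"location"}          # illustrative regularity: moving changes location only
update_rule(rule,
            {"location": "a", "color": "red"},
            {"location": "b", "color": "blue"},   # color change is noise
            relevant)
# rule is now {"location": "b"}; the noisy color change was never considered
```

Constraining the hypothesis space this way is what lets such a learner converge quickly and tolerate noisier training data, as the abstract claims.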
Ephemeral point-events: is there a last remnant of physical objectivity?
For the past two decades, Einstein's Hole Argument (which deals with the apparent indeterminateness of general relativity due to the general covariance of the field equations) and its resolution in terms of Leibniz equivalence (the statement that Riemannian geometries related by active diffeomorphisms represent the same physical solution) have been the starting point for a lively philosophical debate on the objectivity of the point-events of space-time. It seems that Leibniz equivalence makes it impossible to consider the points of the space-time manifold as physically individuated without recourse to dynamical individuating fields. Various authors have posited that the metric field itself can be used in this way, but nobody so far has considered the problem of explicitly distilling the metrical fingerprint of point-events from the gauge-dependent components of the metric field. Working in the Hamiltonian formulation of general relativity, and building on the results of Lusanna and Pauri (2002), we show how Bergmann and Komar's intrinsic pseudo-coordinates (based on the value of curvature invariants) can be used to provide a physical individuation of point-events in terms of the true degrees of freedom (the Dirac observables) of the gravitational field, and we suggest how this conceptual individuation could in principle be implemented with a well-defined empirical procedure. We argue from these results that point-events retain a significant kind of physical objectivity.
Flight crew aiding for recovery from subsystem failures
Some of the conceptual issues associated with pilot aiding systems are discussed and an implementation of one component of such an aiding system is described. It is essential that the format and content of the information the aiding system presents to the crew be compatible with the crew's mental models of the task. It is proposed that in order to cooperate effectively, both the aiding system and the flight crew should have consistent information processing models, especially at the point of interface. A general information processing strategy, developed by Rasmussen, was selected to serve as the bridge between the human and aiding system's information processes. The development and implementation of a model-based situation assessment and response generation system for commercial transport aircraft are described. The current implementation is a prototype which concentrates on engine and control surface failure situations and consequent flight emergencies. The aiding system, termed Recovery Recommendation System (RECORS), uses a causal model of the relevant subset of the flight domain to simulate the effects of these failures and to generate appropriate responses, given the current aircraft state and the constraints of the current flight phase. Since detailed information about the aircraft state may not always be available, the model represents the domain at varying levels of abstraction and uses the less detailed abstraction levels to make inferences when exact information is not available. The structure of this model is described in detail.
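The abstraction-level fallback described above can be sketched as follows. This is an illustrative toy, not RECORS itself: the level names, required state variables, and advice strings are all invented for the example.

```python
# Sketch of reasoning at varying abstraction levels (names are hypothetical,
# not from RECORS): answer from the most detailed level whose required state
# variables are actually known, falling back to coarser levels otherwise.

LEVELS = [  # ordered most detailed first
    ("component", {"n1_rpm", "egt"}, "inspect the failed engine component"),
    ("subsystem", {"engine_status"}, "run the engine-failure checklist"),
    ("aircraft",  set(),             "fly the aircraft; maintain safe speed"),
]

def recommend(known_state):
    """Return (level, advice) from the finest level supported by known data."""
    for level, required, advice in LEVELS:
        if required <= set(known_state):
            return level, advice

recommend({"engine_status": "failed"})
# falls back to the "subsystem" level, since no component-level data is known
```

The coarsest level requires nothing, so some (less specific) recommendation is always available, which mirrors the abstract's point about inference under incomplete state information.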
Concurrent Lexicalized Dependency Parsing: The ParseTalk Model
A grammar model for concurrent, object-oriented natural language parsing is introduced. Complete lexical distribution of grammatical knowledge is achieved building upon the head-oriented notions of valency and dependency, while inheritance mechanisms are used to capture lexical generalizations. The underlying concurrent computation model relies upon the actor paradigm. We consider message passing protocols for establishing dependency relations and ambiguity handling.
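The lexicalized, message-passing style of parsing can be sketched in miniature. This is a sequential toy, not the ParseTalk system: real actors run concurrently and the actual protocols (and ambiguity handling) are far richer; the word classes and valencies below are invented for the example.

```python
# Toy sketch of lexicalized dependency parsing via attachment "messages":
# each word is an object advertising its open valencies, and as words arrive
# they exchange requests to fill those slots (hypothetical protocol).

class WordActor:
    def __init__(self, form, category, valencies):
        self.form, self.category = form, category
        self.valencies = list(valencies)   # categories of dependents sought
        self.dependents = []

    def receive_attach_request(self, other):
        """Attach `other` as a dependent if it fills an open valency."""
        if other.category in self.valencies:
            self.valencies.remove(other.category)
            self.dependents.append(other)
            return True
        return False

def parse(words):
    actors, attached = [], set()
    for form, cat, val in words:
        new = WordActor(form, cat, val)
        for old in reversed(actors):       # try nearest neighbours first
            if old not in attached and new.receive_attach_request(old):
                attached.add(old)
            elif new not in attached and old.receive_attach_request(new):
                attached.add(new)
        actors.append(new)
    return actors

sentence = [("the", "det", []), ("dog", "noun", ["det"]), ("barks", "verb", ["noun"])]
actors = parse(sentence)   # "barks" heads "dog", which heads "the"
```

All grammatical knowledge lives in the lexical entries (the valency lists); there is no separate global grammar, which is the "complete lexical distribution" the abstract refers to.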
Metaphoric coherence: Distinguishing verbal metaphor from 'anomaly'
Theories and computational models of metaphor comprehension generally circumvent the question of metaphor versus “anomaly” in favor of a treatment of metaphor versus literal language. Making the distinction between metaphoric and “anomalous” expressions is subject to wide variation in judgment, yet humans agree that some potentially metaphoric expressions are much more comprehensible than others. In the context of a program which interprets simple isolated sentences that are potential instances of cross-modal and other verbal metaphor, I consider some possible coherence criteria which must be satisfied for an expression to be “conceivable” metaphorically. Metaphoric constraints on object nominals are represented as abstracted or extended along with the invariant structural components of the verb meaning in a metaphor. This approach distinguishes what is preserved in metaphoric extension from that which is “violated”, thus referring to both “similarity” and “dissimilarity” views of metaphor. The role and potential limits of represented abstracted properties and constraints are discussed as they relate to the recognition of incoherent semantic combinations and the rejection or adjustment of metaphoric interpretations.
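One way to picture the metaphor/anomaly distinction drawn above is via abstraction of a verb's selectional constraint up a type hierarchy. The following is a hypothetical illustration only: the hierarchy, type names, and one-step abstraction limit are invented and do not come from the paper's program.

```python
# Hypothetical sketch: a verb's selectional constraint on its argument can be
# "abstracted" one step up a type hierarchy; an expression whose literal
# constraint fails but whose abstracted constraint holds is a candidate
# metaphor, while one failing both readings is anomalous.

HIERARCHY = {  # child type -> parent type (illustrative)
    "animal": "physical_entity",
    "machine": "physical_entity",
    "idea": "abstraction",
    "physical_entity": "entity",
    "abstraction": "entity",
}

def ancestors(t):
    while t in HIERARCHY:
        t = HIERARCHY[t]
        yield t

def classify(subject_type, required_type, max_lift=1):
    """Classify subject-verb combination as literal, metaphoric, or anomalous."""
    if subject_type == required_type or required_type in ancestors(subject_type):
        return "literal"
    lifted = required_type
    for _ in range(max_lift):              # abstract the constraint upward
        lifted = HIERARCHY.get(lifted, lifted)
        if subject_type == lifted or lifted in ancestors(subject_type):
            return "metaphoric"
    return "anomalous"

# "the car drank gasoline": machine fails "animal" literally, but satisfies
# the abstracted constraint "physical_entity" -> candidate metaphor.
```

Capping the abstraction depth is what keeps the check discriminating: without a limit every constraint eventually lifts to the top type and nothing would count as anomalous, which echoes the abstract's concern with the "potential limits" of abstracted constraints.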