
    An information assistant system for the prevention of tunnel vision in crisis management

    In the crisis management environment, tunnel vision is a set of biases in decision makers’ cognitive processes that often leads to an incorrect understanding of the real crisis situation, biased perception of information, and improper decisions. The tunnel vision phenomenon is a consequence both of the challenges inherent in the task and of natural limitations in human cognition. An information assistant system is proposed with the purpose of preventing tunnel vision. The system serves as a platform for monitoring the on-going crisis event: all information passes through the system before it arrives at the user. The system enhances data quality, reduces data quantity, and presents the crisis information in a manner that prevents or repairs the user’s cognitive overload. While working with such a system, the users (crisis managers) are expected to be more likely to stay aware of the actual situation, stay open-minded to possibilities, and make proper decisions.

    High-level feature detection from video in TRECVid: a 5-year retrospective of achievements

    Successful and effective content-based access to digital video requires fast, accurate and scalable methods to determine the video content automatically. A variety of contemporary approaches to this rely on text taken from speech within the video, or on matching one video frame against others using low-level characteristics like colour, texture, or shapes, or on determining and matching objects appearing within the video. Possibly the most important technique, however, is one which determines the presence or absence of a high-level or semantic feature within a video clip or shot. By utilizing dozens, hundreds or even thousands of such semantic features we can support many kinds of content-based video navigation. Critically, however, this depends on being able to determine whether each feature is or is not present in a video clip. The last 5 years have seen much progress in the development of techniques to determine the presence of semantic features within video. This progress can be tracked in the annual TRECVid benchmarking activity, where dozens of research groups measure the effectiveness of their techniques on common data and using an open, metrics-based approach. In this chapter we summarise the work done on the TRECVid high-level feature task, showing the progress made year-on-year. This provides a fairly comprehensive statement on where the state of the art is regarding this important task, not just for one research group or for one approach, but across the spectrum. We then use this past and on-going work as a basis for highlighting the trends that are emerging in this area, and the questions which remain to be addressed before we can achieve large-scale, fast and reliable high-level feature detection on video.
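    Deciding whether a semantic feature is present in a shot amounts to binary classification over detector confidence scores. A minimal sketch of ranking shots by one feature's score (the detector and all names here are illustrative assumptions, not TRECVid code; real systems train classifiers on visual features extracted from keyframes):

    ```python
    # Sketch: ranking shots by the score of a single semantic-feature
    # detector, as in the high-level feature task. The detector is a
    # toy stand-in, not a real TRECVid system component.

    def rank_shots(shots, detector, threshold=0.5):
        """Return ids of shots whose detector score exceeds the
        threshold, sorted by descending confidence."""
        scored = [(shot_id, detector(frame)) for shot_id, frame in shots]
        hits = [(sid, s) for sid, s in scored if s >= threshold]
        return [sid for sid, _ in sorted(hits, key=lambda p: -p[1])]

    # Toy detector: mean brightness of a keyframe's pixel values
    # proxies for a feature like "outdoor" (illustrative only).
    def toy_outdoor_detector(frame):
        return sum(frame) / (255.0 * len(frame))

    shots = [("shot_1", [200, 220, 210]),
             ("shot_2", [30, 40, 35]),
             ("shot_3", [180, 190, 185])]
    print(rank_shots(shots, toy_outdoor_detector))
    # → ['shot_1', 'shot_3']  (most confident first)
    ```

    With hundreds of such per-feature rankings, content-based navigation queries can be answered by combining the features a query mentions.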

    The Ontology of Command and Control

    The goal of the Department of Defense Net-Centric Data Strategy is to improve data sharing throughout the DoD. Data sharing is a critical element of interoperability in the emerging system-of-systems. Achieving interoperability requires the elimination of two types of data heterogeneity: differences of syntax and differences of semantics. This paper builds a path toward semantic uniformity through application of a disciplined approach to ontology. An ontology is a consensus framework representing the types of entities within a given domain and the relations between them. The construction of an ontology begins when a Community of Interest (COI) identifies its authoritative data sources (ADS), which are usually manifest in relevant doctrinal publications, glossaries, data dictionaries, and logical data models. The identified terms are then defined in relation to a common logical framework that has been designed to ensure interoperability with other ontologies created on the basis of the same strategy. As will be described, the Command and Control (C2) Ontology will include representations of a substantial number of entities within the C2 domain. If domain ontologies (e.g. for Strike and Counterinsurgency) semantically align with the C2 Ontology, then a substantial barrier to systems interoperability is thereby overcome.
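    At its simplest, an ontology of this kind is a set of typed terms linked by subsumption (is_a) relations to a common upper-level framework. A minimal sketch, assuming hypothetical term names (these are illustrative, not drawn from the actual C2 Ontology):

    ```python
    # Minimal sketch of an ontology as terms plus is_a relations,
    # showing how COI terms might be aligned under a common upper-level
    # root. All term names are illustrative assumptions.

    class Ontology:
        def __init__(self):
            self.parents = {}  # term -> parent term (is_a relation)

        def add(self, term, parent=None):
            self.parents[term] = parent

        def is_a(self, term, ancestor):
            """True if `term` is a (transitive) subtype of `ancestor`."""
            while term is not None:
                if term == ancestor:
                    return True
                term = self.parents.get(term)
            return False

    c2 = Ontology()
    c2.add("Entity")                      # shared upper-level root
    c2.add("Act", "Entity")
    c2.add("Act of Command", "Act")
    c2.add("Act of Control", "Act")

    print(c2.is_a("Act of Command", "Entity"))  # → True
    ```

    Because every domain ontology chains its terms up to the same root, two systems that each align with the common framework can exchange data without pairwise semantic mappings.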

    Query Rewriting and Optimization for Ontological Databases

    Ontological queries are evaluated against a knowledge base consisting of an extensional database and an ontology (i.e., a set of logical assertions and constraints which derive new intensional knowledge from the extensional database), rather than directly on the extensional database. The evaluation and optimization of such queries is an intriguing new problem for database research. In this paper, we discuss two important aspects of this problem: query rewriting and query optimization. Query rewriting consists of the compilation of an ontological query into an equivalent first-order query against the underlying extensional database. We present a novel query rewriting algorithm for rather general types of ontological constraints which is well suited for practical implementations. In particular, we show how a conjunctive query against a knowledge base, expressed using linear and sticky existential rules, that is, members of the recently introduced Datalog+/- family of ontology languages, can be compiled into a union of conjunctive queries (UCQ) against the underlying database. Ontological query optimization, in this context, attempts to improve this rewriting process so as to produce small and cost-effective UCQ rewritings for an input query.
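    The core idea of UCQ rewriting can be illustrated on a single linear rule: each query atom that matches a rule head is resolved with the rule body, and the accumulated alternatives form the union. A heavily simplified sketch (string-level matching only; the real algorithm handles unification, sticky rules, and factorisation, and the predicate names here are hypothetical):

    ```python
    # Sketch of first-order query rewriting under linear existential
    # rules, in the spirit of Datalog+/- UCQ rewriting. Simplified to
    # exact atom matching for illustration.

    # Linear rule: supervises(X, Y) -> manager(X)
    # i.e. anyone who supervises someone is (intensionally) a manager.

    def rewrite(query_atom, rules):
        """Expand an atomic query into a union of conjunctive queries
        (here: a set of alternative atoms) using the given rules."""
        ucq = {query_atom}
        changed = True
        while changed:
            changed = False
            for head, body in rules:
                for atom in list(ucq):
                    if atom == head and body not in ucq:
                        ucq.add(body)  # resolve the atom with the rule
                        changed = True
        return ucq

    rules = [("manager(X)", "supervises(X, Y)")]
    print(sorted(rewrite("manager(X)", rules)))
    # → ['manager(X)', 'supervises(X, Y)']
    ```

    Each disjunct of the resulting UCQ can then be evaluated directly on the extensional database, so no intensional reasoning is needed at query time; optimization aims to keep this union small.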

    Joint Doctrine Ontology: A Benchmark for Military Information Systems Interoperability

    When the U.S. conducts warfare, elements of a force are drawn from different services and work together as a single team to accomplish an assigned mission. To achieve such unified action, it is necessary that the doctrines governing the actions of members of specific services be both consistent with and subservient to joint doctrine. Because warfighting today increasingly involves not only live forces but also automated systems, unified action requires that information technology used in joint warfare be aligned with joint doctrine. It requires also that the separate information systems used by the different elements of a joint force be interoperable, in the sense that data and information generated by each element must be usable (understandable, processable) by all the other elements that need them. Currently, such interoperability is impeded by multiple inconsistencies among the different data and software standards used by warfighters. We describe here the on-going project of creating a Joint Doctrine Ontology (JDO), which uses joint doctrine to provide shared computer-accessible content valid for any field of military endeavor, organization, and information system. JDO addresses the two previously mentioned requirements of unified action by providing a widely applicable benchmark for use by developers of information systems that will both guarantee alignment with joint doctrine and support interoperability.