
    Toward a Unified Timestamp with explicit precision

    Demographic and health surveillance (DS) systems monitor and document individual- and group-level processes in well-defined populations over long periods of time. The resulting data are complex and inherently temporal. Established methods of storing and manipulating temporal data are unable to adequately address the challenges posed by these data. Building on existing standards, a temporal framework and notation are presented that are able to faithfully record all of the time-related information (or partial lack thereof) produced by surveillance systems. The Unified Timestamp isolates all of the inherent complexity of temporal data into a single data type and provides the foundation on which a Unified Timestamp class can be built. The Unified Timestamp accommodates both point- and interval-based time measures with arbitrary precision, including temporal sets. Arbitrary granularities and calendars are supported, and the Unified Timestamp is hierarchically organized, allowing it to represent an unlimited array of temporal entities.
    Keywords: demographic surveillance, standardization, temporal databases, temporal integrity, timestamp, valid time
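
    As a rough illustration of the central idea, the sketch below models a timestamp that covers both points and intervals, with explicit precision bounds and a granularity. The class and field names are hypothetical, not the paper's notation, and the sketch omits the paper's support for arbitrary calendars and temporal sets.

```python
from dataclasses import dataclass
from enum import Enum

class Granularity(Enum):
    # A fixed Gregorian ladder for this sketch; the paper supports
    # arbitrary, user-defined granularities and calendars.
    YEAR = 1
    MONTH = 2
    DAY = 3
    HOUR = 4
    MINUTE = 5
    SECOND = 6

@dataclass(frozen=True)
class UnifiedTimestamp:
    """One value type covering both points and intervals.

    `earliest` and `latest` bound the instant in seconds since the
    epoch. A fully known point has earliest == latest; a partially
    known time ("sometime in March 1998") widens the bounds instead
    of being silently coerced to a fake point.
    """
    earliest: float
    latest: float
    granularity: Granularity

    @property
    def is_point(self) -> bool:
        return self.earliest == self.latest

    def may_equal(self, other: "UnifiedTimestamp") -> bool:
        # Two imprecise times could denote the same instant exactly
        # when their bounding intervals intersect.
        return self.earliest <= other.latest and other.earliest <= self.latest
```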

    A Process for Engineer Domain Ontology: An Experience in Developing Business Analysis Ontology

    In recent years, several works have aimed at improving technological aspects of ontologies, such as representation languages and inference mechanisms. This paper presents a discussion of the process and product of an experience in developing an ontology for a public-sector organization that requires strong knowledge management. This process is applied to engineer and develop an ontology for the business analysis domain.
    Keywords: Ontology, Ontology Engineering, Methodology, Protégé, Business Analysis
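
    For readers unfamiliar with what such an ontology pins down, here is a minimal sketch of a concept hierarchy with subsumption checking, the basic inference a class hierarchy supports. The business-analysis concepts and properties are invented for illustration and are not taken from the paper's ontology.

```python
# Toy slice of a business-analysis concept hierarchy:
# concept -> (parent concept, defining properties). All names invented.
ONTOLOGY = {
    "Stakeholder": ("Agent", {"has_interest_in": "BusinessProcess"}),
    "Analyst":     ("Stakeholder", {"produces": "Requirement"}),
    "Requirement": ("Artifact", {"constrains": "BusinessProcess"}),
}

def is_a(concept, ancestor):
    """Subsumption check by walking the parent chain upward."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = ONTOLOGY.get(concept, (None, None))[0]
    return False

print(is_a("Analyst", "Agent"))  # True: Analyst -> Stakeholder -> Agent
```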

    Developing conceptual glossaries for the Latin Vulgate Bible.

    A conceptual glossary is a textual reference work that combines the features of a thesaurus and an index verborum. In it, the word occurrences within a given text are classified, disambiguated, and indexed according to their membership of a set of conceptual (i.e. semantic) fields. Since 1994, we have been working towards building a set of conceptual glossaries for the Latin Vulgate Bible. So far, we have published a conceptual glossary to the Gospel according to John and are at present completing the analysis of the Gospel according to Mark and the minor epistles. This paper describes the background to our project and outlines the steps by which the glossaries are developed within a relational database framework.
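
    A minimal sketch of the kind of relational schema such a project implies, assuming a simple three-table design (conceptual fields, lemmas, and disambiguated occurrences). The table and column names are hypothetical, not the project's actual schema.

```python
import sqlite3

# Hypothetical minimal schema: every word occurrence is linked, after
# disambiguation, to exactly one conceptual (semantic) field.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE conceptual_field (
    field_id  INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES conceptual_field(field_id),
    label     TEXT NOT NULL        -- e.g. 'kinship', 'agriculture'
);
CREATE TABLE lemma (
    lemma_id  INTEGER PRIMARY KEY,
    form      TEXT NOT NULL        -- dictionary form, e.g. 'pater'
);
CREATE TABLE occurrence (
    occ_id    INTEGER PRIMARY KEY,
    lemma_id  INTEGER NOT NULL REFERENCES lemma(lemma_id),
    verse_ref TEXT NOT NULL,       -- e.g. 'John 1:14'
    field_id  INTEGER NOT NULL REFERENCES conceptual_field(field_id)
);
""")

# The glossary itself is then just a query: occurrences grouped by
# conceptual field and indexed back to their verses.
rows = conn.execute("""
SELECT f.label, l.form, o.verse_ref
FROM occurrence o
JOIN lemma l ON l.lemma_id = o.lemma_id
JOIN conceptual_field f ON f.field_id = o.field_id
ORDER BY f.label, l.form
""").fetchall()
print(rows)
```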

    Thematic Annotation: extracting concepts out of documents

    Contrary to standard approaches to topic annotation, the technique used in this work does not centrally rely on some form of (possibly statistical) keyword extraction. Instead, the proposed annotation algorithm uses a large-scale semantic database, the EDR Electronic Dictionary, that provides a concept hierarchy based on hyponym and hypernym relations. This concept hierarchy is used to generate a synthetic representation of the document by aggregating the words present in topically homogeneous document segments into a set of concepts that best preserves the document's content. This new extraction technique takes an unexplored approach to topic selection: instead of using semantic similarity measures based on a semantic resource, the resource itself is processed to extract the part of the conceptual hierarchy relevant to the document content. This conceptual hierarchy is then searched to extract the most relevant set of concepts to represent the topics discussed in the document. Notably, the algorithm is able to extract generic concepts that are not directly present in the document.
    Comment: Technical report EPFL/LIA. 81 pages, 16 figures.
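
    The sketch below illustrates the aggregation idea on a toy hypernym hierarchy standing in for the EDR dictionary. The greedy selection step is a deliberately simplified stand-in for the paper's search over the extracted sub-hierarchy, and all concept names are invented.

```python
from collections import Counter

# Toy hypernym hierarchy (child -> parent) standing in for EDR.
PARENT = {
    "spaniel": "dog", "terrier": "dog", "cat": "feline",
    "dog": "mammal", "feline": "mammal",
    "mammal": "animal", "sparrow": "bird", "bird": "animal",
}

def ancestors(concept):
    """Yield the concept and all of its hypernyms, most specific first."""
    while concept is not None:
        yield concept
        concept = PARENT.get(concept)

def aggregate(words, max_concepts=2):
    """Aggregate document words into a small set of covering concepts.

    Each word votes for all of its hypernyms; concepts covering many
    words win, with ties broken toward the more specific concept. Note
    that the winners need not occur in the document themselves.
    """
    votes = Counter()
    for w in words:
        for a in ancestors(w):
            votes[a] += 1
    def depth(c):
        return len(list(ancestors(c)))
    ranked = sorted(votes, key=lambda c: (-votes[c], -depth(c)))
    return ranked[:max_concepts]

print(aggregate(["spaniel", "terrier", "cat"]))  # ['mammal', 'animal']
```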

    Processing count queries over event streams at multiple time granularities

    Management and analysis of streaming data has become crucial with its applications in the web, sensor data, network traffic data, and the stock market. Data streams consist of mostly numeric data, but what is more interesting is the events derived from the numerical data that need to be monitored. The events obtained from streaming data form event streams. Event streams have similar properties to data streams, i.e., they are seen only once in a fixed order as a continuous stream. Events appearing in the event stream have time stamps associated with them at a certain time granularity, such as second, minute, or hour. One type of frequently asked query over event streams is the count query, i.e., the frequency of an event occurrence over time. Count queries can be answered over event streams easily; however, users may ask queries over different time granularities as well. For example, a broker may ask how many times a stock increased in the same time frame, where the time frames specified could be hour, day, or both. This is crucial especially in the case of event streams, where only a window of an event stream is available at a certain time instead of the whole stream. In this paper, we propose a technique for predicting the frequencies of event occurrences in event streams at multiple time granularities. The proposed approximation method efficiently estimates the count of events with high accuracy in an event stream at any time granularity by examining the distance distributions of event occurrences. The proposed method has been implemented and tested on different real data sets, and the results obtained are presented to show its effectiveness.
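
    To make the granularity question concrete, here is a toy sketch that rolls exact per-second event timestamps up to per-minute and per-hour counts. It shows what the queries ask, not the paper's distance-distribution estimator; the point of that estimator is precisely that the raw stream below is not fully available.

```python
from collections import Counter
from datetime import datetime

# Hypothetical event stream: timestamps at second granularity for one
# event type (e.g. "stock X ticked up"). Values are made up.
events = [
    datetime(2024, 1, 1, 9, 30, 5),
    datetime(2024, 1, 1, 9, 30, 42),
    datetime(2024, 1, 1, 9, 31, 7),
    datetime(2024, 1, 1, 10, 15, 0),
]

def counts_at(events, granularity):
    """Exact counts of event occurrences at a coarser time granularity.

    `granularity` is a strftime pattern naming the window, so
    '%Y-%m-%d %H:%M' counts per minute and '%Y-%m-%d %H' per hour.
    """
    return Counter(t.strftime(granularity) for t in events)

print(counts_at(events, "%Y-%m-%d %H:%M"))  # per-minute counts
print(counts_at(events, "%Y-%m-%d %H"))     # per-hour counts
```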

    A timeband framework for modelling real-time systems

    Complex real-time systems must integrate physical processes with digital control, human operation and organisational structures. New scientific foundations are required for specifying, designing and implementing these systems. One key challenge is to cope with the wide range of time scales and dynamics inherent in such systems. To exploit the unique properties of time, with the aim of producing more dependable computer-based systems, it is desirable to explicitly identify distinct time bands in which the system is situated. Such a framework enables the temporal properties and associated dynamic behaviour of existing systems to be described and the requirements for new or modified systems to be specified. A system model based on a finite set of distinct time bands is motivated and developed in this paper.
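
    A minimal sketch of what a finite set of time bands might look like in code; the band names, granularities, and the instantaneity convention are illustrative, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TimeBand:
    """One band of a hypothetical timeband model.

    `granularity_s` is the duration of the band's basic time unit in
    seconds; an activity is described within one band and collapses to
    an instantaneous event when viewed from a sufficiently coarser band.
    """
    name: str
    granularity_s: float

# A finite, ordered set of bands spanning circuit dynamics to
# organisational processes (values are made up).
BANDS = [
    TimeBand("hardware", 1e-9),
    TimeBand("control-loop", 1e-3),
    TimeBand("operator", 1.0),
    TimeBand("organisation", 86400.0),
]

def appears_instantaneous(duration_s, band):
    # Behaviour much shorter than a band's unit is treated as an
    # event at that band's level of abstraction.
    return duration_s < band.granularity_s

# A 2 ms control action is an "event" from the operator band's view.
print(appears_instantaneous(0.002, BANDS[2]))
```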

    The Size Conundrum: Why Online Knowledge Markets Can Fail at Scale

    In this paper, we interpret the community question answering websites on the StackExchange platform as knowledge markets, and analyze how and why these markets can fail at scale. A knowledge-market framing allows site operators to reason about market failures and to design policies to prevent them. Our goal is to provide insights into large-scale knowledge market failures through an interpretable model. We explore a set of interpretable economic production models on a large empirical dataset to analyze the dynamics of content generation in knowledge markets. Amongst these, the Cobb-Douglas model best explains the empirical data and provides an intuitive explanation for content generation through the concepts of elasticity and diminishing returns. Content generation depends on user participation and also on how specific types of content (e.g. answers) depend on other types (e.g. questions). We show that these factors of content generation have constant elasticity: a percentage increase in any of the inputs leads to a constant percentage increase in the output. Furthermore, markets exhibit diminishing returns: the marginal output decreases as the input is incrementally increased. Knowledge markets also vary in their returns to scale, that is, the increase in output resulting from a proportionate increase in all inputs. Importantly, many knowledge markets exhibit diseconomies of scale: measures of market health (e.g., the percentage of questions with an accepted answer) decrease as a function of the number of participants. The implications of our work are twofold: site operators ought to design incentives as a function of system size (number of participants), and the market lens should shed insight into complex dependencies amongst different content types and participant actions in general social networks.
    Comment: The 27th International Conference on World Wide Web (WWW), 2018.
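
    As a concrete reading of the model: a Cobb-Douglas market with inputs x1 and x2 produces output Q = A * x1^alpha * x2^beta, where alpha and beta are the constant elasticities and alpha + beta gives the returns to scale. The sketch below recovers these exponents by ordinary least squares in log space; the data are synthetic, made up for illustration, not StackExchange measurements.

```python
import numpy as np

# Cobb-Douglas production: answers = A * users^alpha * questions^beta.
# Taking logs makes the model linear, so the elasticities fall out of
# a least-squares fit. All numbers below are synthetic.
rng = np.random.default_rng(0)
users = rng.uniform(100, 10_000, size=200)
questions = rng.uniform(50, 5_000, size=200)
answers = 0.8 * users**0.4 * questions**0.7 * rng.lognormal(0, 0.1, 200)

X = np.column_stack([np.ones_like(users), np.log(users), np.log(questions)])
coef, *_ = np.linalg.lstsq(X, np.log(answers), rcond=None)
logA, alpha, beta = coef

print(f"alpha={alpha:.2f}, beta={beta:.2f}")   # recovered elasticities
print(f"returns to scale={alpha + beta:.2f}")  # >1 increasing, <1 decreasing
```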

    The Galileo PPS expert monitoring and diagnostic prototype

    The Galileo PPS Expert Monitoring Module (EMM) is a prototype system implemented on a Sun workstation that will demonstrate a knowledge-based approach to monitoring and diagnosis for the Galileo spacecraft Power/Pyro Subsystem. The prototype will simulate an analysis module functioning within the SFOC Engineering Analysis Subsystem Environment (EASE). This document describes the implementation of a prototype EMM for the Galileo spacecraft Power/Pyro Subsystem. Section 2 of this document provides an overview of the issues in monitoring and diagnosis and a comparison between traditional and knowledge-based solutions to this problem. Section 3 describes various tradeoffs that must be considered when designing a knowledge-based approach to monitoring and diagnosis, and Section 4 discusses how these issues were resolved in constructing the prototype. Section 5 presents conclusions and recommendations for constructing a full-scale demonstration of the EMM. A glossary provides definitions of terms used in this text.
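
    To illustrate the contrast such reports draw between fixed-limit alarm checking and knowledge-based diagnosis, here is a toy rule-based monitor. The telemetry channels, thresholds, and rules are invented for illustration and are not taken from the Galileo PPS.

```python
# Hypothetical telemetry snapshot for one monitoring cycle.
TELEMETRY = {"bus_voltage": 27.1, "bus_current": 9.8, "pyro_armed": False}

RULES = [
    # (name, condition over a telemetry snapshot, diagnosis)
    ("undervoltage",
     lambda t: t["bus_voltage"] < 28.0 and t["bus_current"] > 9.0,
     "load exceeds supply: suspect a shorted load or degraded source"),
    ("phantom current",
     lambda t: t["bus_current"] > 0.5 and not t["pyro_armed"],
     "current drawn while pyro bus disarmed: suspect relay failure"),
]

def diagnose(telemetry):
    """Fire every rule whose condition holds and collect its diagnosis.

    Unlike a bare limit check, each rule relates several channels and
    carries an explanation, which is the essence of the knowledge-based
    style of monitoring such a prototype demonstrates.
    """
    return [(name, why) for name, cond, why in RULES if cond(telemetry)]

for name, why in diagnose(TELEMETRY):
    print(f"{name}: {why}")
```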