
    A query description model based on basic semantic unit composite Petri-Net for soccer video

    Digital video networks are making available increasing amounts of sports video data. The volume of material on offer means that sports fans often rely on prepared summaries of game highlights to follow the progress of their favourite teams. A significant application area for automated video analysis technology is the generation of personalized highlights of sports events. Soccer is one of the most popular sports around the world. A soccer game is composed of a range of significant events, such as goal scoring, fouls, and substitutions. Automatically detecting these events in a soccer video can enable users to interactively design their own highlights programmes. From an analysis of broadcast soccer video, we propose a query description model based on Basic Semantic Unit Composite Petri-Nets (BSUCPN) to automatically detect significant events within soccer video. Firstly, we define a Basic Semantic Unit (BSU) set for soccer videos based on identifiable feature elements within a soccer video. Secondly, we design Composite Petri-Net (CPN) models for semantic queries and use these to describe BSUCPNs for semantic events in soccer videos. A particular strength of this approach is that users are able to design their own semantic event queries based on BSUCPNs to search interactively within soccer videos. Experimental results based on recorded soccer broadcasts are used to illustrate the potential of this approach.
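
    To make the mechanism concrete, the following is a minimal sketch of how a composite-Petri-net style query could detect a semantic event once Basic Semantic Units have been extracted from the video. It is not the authors' implementation: the BSU names ("whistle", "goal_area_closeup", "replay") and the single-transition net are illustrative assumptions.

        class CompositeEventNet:
            """One transition with one input place per required BSU (illustrative)."""

            def __init__(self, event_name, required_bsus):
                self.event_name = event_name
                self.places = {bsu: 0 for bsu in required_bsus}  # tokens per BSU place

            def add_token(self, bsu):
                # Mark a place when its BSU is detected in the video stream.
                if bsu in self.places:
                    self.places[bsu] += 1

            def fire(self):
                # The transition fires only when every input place holds a token.
                if all(count > 0 for count in self.places.values()):
                    for bsu in self.places:
                        self.places[bsu] -= 1
                    return self.event_name
                return None

        # Hypothetical user-defined query: a "goal" event needs all three BSUs.
        goal_query = CompositeEventNet("goal", ["whistle", "goal_area_closeup", "replay"])
        for detected in ["goal_area_closeup", "whistle", "replay"]:
            goal_query.add_token(detected)
        print(goal_query.fire())  # -> "goal"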

    Big continuous data: dealing with velocity by composing event streams

    The rate at which we produce data is growing steadily, creating ever larger streams of continuously evolving data. Online news, micro-blogs, and search queries are just a few examples of these continuous streams of user activities. The value of these streams lies in their freshness and their relatedness to on-going events. Modern applications consuming these streams need to extract behaviour patterns that can be obtained by aggregating and mining, both statically and dynamically, huge event histories. An event is the notification that a happening of interest has occurred. Event streams must be combined or aggregated to produce more meaningful information. By combining and aggregating them, either from multiple producers or from a single one over a given period of time, a limited set of events describing meaningful situations can be notified to consumers. Event streams, with their volume and continuous production, relate mainly to two of the characteristics attributed to Big Data by the 5V's model: volume and velocity. Techniques such as complex pattern detection, event correlation, event aggregation, event mining and stream processing have been used for composing events. Nevertheless, to the best of our knowledge, few approaches integrate different composition techniques (online and post-mortem) for dealing with Big Data velocity. This chapter gives an analytical overview of event stream processing and composition approaches: complex event languages, services and event querying systems on distributed logs. Our analysis underlines the challenges introduced by Big Data velocity and volume and uses them as a reference for identifying the scope and limitations of results stemming from different disciplines: networks, distributed systems, stream databases, event composition services, and data mining on traces.
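
    As an illustration of the kind of composition the chapter surveys, the sketch below merges events from two producers and aggregates them over a tumbling time window, so consumers receive one summary notification per window instead of the raw streams. The event fields (producer, kind, ts) and the window size are assumptions made for the example, not part of the chapter.

        from collections import Counter
        from heapq import merge

        def compose(streams, window_size):
            # Merge time-ordered producer streams and emit one summary per window.
            merged = merge(*streams, key=lambda e: e["ts"])
            window_start, counts = None, Counter()
            for event in merged:
                if window_start is None:
                    window_start = event["ts"]
                if event["ts"] - window_start >= window_size:
                    yield {"window_start": window_start, "summary": dict(counts)}
                    window_start, counts = event["ts"], Counter()
                counts[event["kind"]] += 1
            if counts:
                yield {"window_start": window_start, "summary": dict(counts)}

        # Two producers, e.g. micro-blog posts and search queries.
        s1 = [{"producer": "blog", "kind": "post", "ts": t} for t in (1, 3, 7)]
        s2 = [{"producer": "search", "kind": "query", "ts": t} for t in (2, 4, 9)]
        for notification in compose([s1, s2], window_size=5):
            print(notification)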

    Integration of a failure monitoring within a hybrid dynamic simulation environment

    The complexity and size of industrial chemical processes require the monitoring of a growing number of process variables. Knowledge of these variables is generally based on measurements of system variables and on physico-chemical models of the process. Nevertheless, this information is imprecise because of process and measurement noise, so research efforts aim at developing new and more powerful techniques for process fault detection. In this work, we present a fault detection method based on the comparison between the real system and the evolution of a reference model generated by an extended Kalman filter. The reference model is simulated by the dynamic hybrid simulator PrODHyS, a general object-oriented environment which provides common and reusable components designed for the development and management of dynamic simulations of industrial systems. The use of this method is illustrated through a didactic example from the field of Chemical Process System Engineering.
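
    The sketch below shows the general principle of such residual-based detection, with a scalar linear model and a plain Kalman filter standing in for PrODHyS's hybrid reference model and the extended Kalman filter; the noise values and the detection threshold are illustrative assumptions.

        def detect_faults(measurements, a=1.0, q=0.01, r=0.1, threshold=3.0):
            # Flag samples whose innovation (measurement minus model prediction)
            # exceeds `threshold` standard deviations.
            x, p = measurements[0], 1.0            # state estimate and its variance
            faults = []
            for k, z in enumerate(measurements[1:], start=1):
                x_pred, p_pred = a * x, a * p * a + q          # prediction step
                innovation = z - x_pred                        # real system vs reference model
                s = p_pred + r                                 # innovation variance
                if abs(innovation) > threshold * s ** 0.5:
                    faults.append(k)                           # residual too large: fault
                gain = p_pred / s
                x, p = x_pred + gain * innovation, (1 - gain) * p_pred  # update step
            return faults

        # A sudden drift after the sixth sample stands in for a process fault.
        data = [1.0, 1.02, 0.98, 1.01, 0.99, 1.0, 2.5, 2.6, 2.55]
        print(detect_faults(data))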

    A Parameterized Algebra for Event Notification Services

    Event notification services are used in various applications such as digital libraries, stock tickers, traffic control, and facility management. However, to our knowledge, a common semantics of events in event notification services has not been defined so far. In this paper, we propose a parameterized event algebra which describes the semantics of composite events for event notification systems. The parameters serve as a basis for flexible handling of duplicates in both primitive and composite events.
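
    A small sketch of the idea: a sequence operator A;B whose handling of duplicate A-events is itself a parameter. The policy names ("first", "last", "each") are assumptions for illustration, not the parameter values defined by the algebra.

        def sequence(stream, first_type, second_type, policy="first"):
            # Emit a composite event whenever a `second_type` event follows a
            # `first_type` event; `policy` decides how duplicate A-events are paired.
            pending = []                                  # buffered candidate A-events
            for event in stream:
                if event["type"] == first_type:
                    pending.append(event)
                elif event["type"] == second_type and pending:
                    if policy == "first":                 # oldest A only, drop the rest
                        yield (pending[0], event)
                    elif policy == "last":                # most recent A only
                        yield (pending[-1], event)
                    elif policy == "each":                # every buffered A
                        for a in pending:
                            yield (a, event)
                    pending = []

        events = [{"type": "A", "id": 1}, {"type": "A", "id": 2}, {"type": "B", "id": 3}]
        print(list(sequence(events, "A", "B", policy="each")))  # two composite events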

    Managing Supply Chain Events to Build Sense-and-Respond Capability

    As supply chains become more dynamic, there is a need for a sense-and-respond capability to react to events in real time. In this paper, we propose Petri nets extended with time and color (for case data) as a formalism for doing so. We describe seven basic patterns that capture modeling concepts arising commonly in supply chains. These basic patterns may be used by themselves and may also be combined to create new patterns. Next, we show how to use the patterns as building blocks to model a complete supply chain and analyze it using dependency graphs and simulation. Dependency graphs can be used to analyze the various events and their causes. Simulation was used, in addition, to analyze various performance indicators (e.g. fill rates, replenishment times, and lead times) under different supply chain strategies. We performed sensitivity analysis to study the effect of changing parameter values on the performance indicators. In the experiments, by cutting the resolution time for production delays in half (strategy 1), we were able to increase the order fill rate from 89% to 95%. Similarly, upon raising the probability of successful alternative sourcing (strategy 2) from 0.5 to 0.7, the order fill rate again increased from 89% to 95%. We show that by modeling timing and causality issues accurately, it is possible to improve supply chain performance.
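
    To illustrate the formalism, the sketch below keeps tokens that carry case data (the "color") together with a timestamp, and fires a transition only for tokens whose time has been reached, adding a firing delay. The places, delay and order data are illustrative assumptions, not one of the paper's seven patterns.

        import heapq
        from itertools import count

        class TimedColoredNet:
            def __init__(self):
                self._tie = count()                            # tie-breaker so colors are never compared
                self.places = {"ordered": [], "shipped": []}   # place -> heap of (time, _, color)

            def add_token(self, place, time, color):
                heapq.heappush(self.places[place], (time, next(self._tie), color))

            def fire_ship(self, now, delay):
                # Move ripe tokens from 'ordered' to 'shipped', adding a transit delay.
                fired = []
                while self.places["ordered"] and self.places["ordered"][0][0] <= now:
                    t, _, color = heapq.heappop(self.places["ordered"])
                    self.add_token("shipped", t + delay, color)
                    fired.append(color)
                return fired

        net = TimedColoredNet()
        net.add_token("ordered", time=0, color={"order": 1, "qty": 10})
        net.add_token("ordered", time=5, color={"order": 2, "qty": 4})
        print(net.fire_ship(now=3, delay=2))   # only order 1 is ripe and ships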

    Temporal Stream Algebra

    Data stream management systems (DSMS) so far focus on event queries and hardly consider queries that combine data from event streams with data from a database. However, applications like emergency management require such combined data stream and database queries. Further requirements are the simultaneous use of multiple timestamps with different time lines and semantics, expressive temporal relations between multiple timestamps, and flexible negation, grouping and aggregation which can be controlled, i.e. started and stopped, by events and are not limited to fixed-size time windows. Current DSMS hardly address these requirements. This article proposes Temporal Stream Algebra (TSA) to meet the aforementioned requirements. Temporal streams are a common abstraction of data streams and database relations; the operators of TSA are generalizations of the usual operators of Relational Algebra. An in-depth analysis of temporal relations guarantees that valid TSA expressions are non-blocking, i.e. they can be evaluated incrementally. In this respect TSA differs significantly from previous algebraic approaches, which use specialized operators to prevent blocking expressions on a "syntactical" level.
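
    The sketch below illustrates the non-blocking, incremental evaluation the article aims at: a timestamped stream is joined with a finite database relation tuple by tuple, so results are emitted as events arrive rather than after the stream ends. The operator and field names are assumptions made for the example, not TSA's actual operators.

        def temporal_join(stream, relation, key):
            # Incrementally join each stream tuple with matching database rows.
            index = {}
            for row in relation:                  # one-time index over the finite relation
                index.setdefault(row[key], []).append(row)
            for event in stream:                  # unbounded side, consumed incrementally
                for row in index.get(event[key], []):
                    yield {**event, **row}        # emit immediately: no blocking

        sensors = [{"sensor": "s1", "area": "east"}, {"sensor": "s2", "area": "west"}]
        readings = iter([{"sensor": "s1", "ts": 1, "value": 7},
                         {"sensor": "s2", "ts": 2, "value": 9}])
        for result in temporal_join(readings, sensors, key="sensor"):
            print(result)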