
    Programming in logic without logic programming

    In previous work, we proposed a logic-based framework in which computation is the execution of actions in an attempt to make reactive rules of the form "if antecedent then consequent" true in a canonical model of a logic program determined by an initial state, a sequence of events, and the resulting sequence of subsequent states. In this model-theoretic semantics, reactive rules are the driving force, and logic programs play only a supporting role. In the canonical model, states, actions, and other events are represented with timestamps. But in the operational semantics, for the sake of efficiency, timestamps are omitted and only the current state is maintained. State transitions are performed reactively by executing actions to make the consequents of rules true whenever the antecedents become true. This operational semantics is sound but incomplete: it cannot make reactive rules true by preventing their antecedents from becoming true, or by proactively making their consequents true before their antecedents become true. In this paper, we characterize the notion of a reactive model and prove that the operational semantics can generate all and only such models. In order to focus on the main issues, we omit the logic programming component of the framework.
    Comment: Under consideration in Theory and Practice of Logic Programming (TPLP)
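    To make the reactive cycle concrete, here is a minimal sketch in Python of the destructive-update operational semantics the abstract describes: only the current state is kept (no timestamps), and whenever a rule's antecedent becomes true, actions are executed to make its consequent true. The state representation, rule encoding, and fire-alarm example are our own illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of the reactive operational semantics: only the
# current state is maintained, and rules fire whenever their
# antecedents become true. Rule contents are illustrative only.

State = set  # the current state, represented as a set of facts

def reactive_cycle(state: State, events: set, rules) -> State:
    """One state transition: assimilate events, then fire rules reactively."""
    state = state | events                 # events update the current state
    actions = set()
    for antecedent, consequent in rules:
        if antecedent <= state and not consequent <= state:
            actions |= consequent          # act to make the consequent true
    return (state | actions) - events     # events are transient; facts persist

# Hypothetical example: "if fire is detected then the alarm must sound".
rules = [({"fire_detected"}, {"alarm_on"})]
state = reactive_cycle(set(), {"fire_detected"}, rules)
print(state)  # {'alarm_on'} -- the rule was made true reactively
```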

    STV-based Video Feature Processing for Action Recognition

    In comparison to still-image-based processes, video features can provide rich and intuitive information about dynamic events occurring over a period of time, such as human actions, crowd behaviours, and other subject pattern changes. Although substantial progress has been made in the last decade on image processing, with successful applications in face matching and object recognition, video-based event detection remains one of the most difficult challenges in computer vision research due to its complex continuous or discrete input signals, arbitrary dynamic feature definitions, and often ambiguous analytical methods. In this paper, a Spatio-Temporal Volume (STV) and region intersection (RI) based 3D shape-matching method is proposed to facilitate the definition and recognition of human actions recorded in videos. The distinctive characteristics and performance gain of the devised approach stem from a coefficient factor-boosted 3D region intersection and matching mechanism developed in this research. This paper also reports an investigation into techniques for efficient STV data filtering to reduce the number of voxels (volumetric pixels) that must be processed in each operational cycle of the implemented system. The encouraging features and improvements in operational performance registered in the experiments are discussed at the end.
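    As an illustration of the kind of matching the abstract describes, the following sketch scores two boolean spatio-temporal volumes by a coefficient-weighted region intersection and filters out temporally static voxels before matching. The weighting and filtering rules here are simplified stand-ins for the paper's mechanisms, not its actual algorithm.

```python
import numpy as np

def weighted_region_intersection(template: np.ndarray,
                                 observed: np.ndarray,
                                 coeff: float = 1.0) -> float:
    """Coefficient-weighted overlap score between two boolean STVs
    of shape (time, height, width)."""
    inter = np.logical_and(template, observed).sum()
    union = np.logical_or(template, observed).sum()
    return coeff * inter / union if union else 0.0

def filter_voxels(stv: np.ndarray) -> np.ndarray:
    """Drop voxels that never change across time, reducing the volume
    each matching cycle must process."""
    moving = stv.any(axis=0) & ~stv.all(axis=0)  # on at some frames, not all
    return stv & moving[None, :, :]

# Hypothetical usage with random binary volumes (20 frames of 64x64 masks).
rng = np.random.default_rng(0)
a = rng.random((20, 64, 64)) > 0.7
b = rng.random((20, 64, 64)) > 0.7
print(weighted_region_intersection(filter_voxels(a), filter_voxels(b)))
```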

    Evaluating the robustness of an active network management function in an operational environment

    This paper presents the integration of a distribution network Active Network Management (ANM) function within an operational environment in the form of a Micro-Grid Laboratory. This enables emulation of a real power network and investigation into the effects of data uncertainty on the control decisions of an online, automatic ANM algorithm. The algorithm implemented within the operational environment is a Power Flow Management (PFM) approach based on the Constraint Satisfaction Problem (CSP). This paper shows the impact of increasing uncertainty in the input data available to an ANM scheme in terms of the variation in control actions. The inclusion of a State Estimator (SE) with known tolerances is shown to improve ANM performance.
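    For readers unfamiliar with casting power flow management as a CSP, the toy sketch below chooses discrete curtailment levels for two generators so that every monitored line stays within its thermal limit. The network data, sensitivities, and limits are invented for illustration and do not come from the paper.

```python
from itertools import product

# Toy PFM-as-CSP: pick a curtailment level per generator such that no
# monitored line exceeds its thermal limit. All figures are invented.
generators = {"G1": 10.0, "G2": 8.0}          # available output, MW
levels = [1.0, 0.75, 0.5, 0.25, 0.0]          # discrete control actions
sensitivity = {"L1": {"G1": 0.6, "G2": 0.3},  # assumed flow sensitivities
               "L2": {"G1": 0.2, "G2": 0.7}}
limit = {"L1": 7.0, "L2": 6.0}                # thermal limits, MW

def feasible(assignment: dict) -> bool:
    """Check the thermal constraint on every line."""
    return all(
        sum(sensitivity[line][g] * generators[g] * assignment[g]
            for g in generators) <= limit[line]
        for line in limit)

# Enumerate assignments, preferring the least total curtailment.
best = max((dict(zip(generators, combo))
            for combo in product(levels, repeat=len(generators))
            if feasible(dict(zip(generators, combo)))),
           key=lambda a: sum(a.values()), default=None)
print(best)  # e.g. {'G1': 0.75, 'G2': 0.75} under these invented numbers
```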

    Apollo experience report: The cryogenic storage system

    A review of the design, development, and flight history of the Apollo cryogenic storage system and of selected components within the system is presented. Discussions cover the development history of the pressure vessels, heaters, insulation, and selected components. Flight experience and operational difficulties are reported in detail to define the problems and the applicable corrective actions.

    Total data quality management: a study of bridging rigor and relevance

    Ensuring data quality is of crucial importance to organizations. The Total Data Quality Management (TDQM) theory provides a methodology for ensuring data quality. Although well researched, the TDQM methodology is not easy to apply. In the case of Honeywell Emmen, we found that applying the methodology requires considerable contextual redesign, flexibility in use, and the provision of practical tools. We identified team composition, toolsets, the development of obvious actions, the design of phases, steps, and actions, and sessions as vital elements in making an academically rooted methodology applicable. We call such an applicable methodology "well articulated", because it incorporates existing academic theory and has been made operational. This enables the methodology to be systematically beta-tested and made useful under different organizational conditions.

    Operational strategies for offshore wind turbines to mitigate failure rate uncertainty on operational costs and revenue

    Several operational strategies for offshore wind farms have been established and explored in order to improve understanding of operational costs, with a focus on heavy lift vessel strategies. Additionally, an investigation into the uncertainty surrounding failure behaviour has been performed, identifying the robustness of different strategies. Four operational strategies were considered: fix on fail, batch repair, annual charter, and purchase. A range of failure rates has been explored, identifying the key cost drivers and the circumstances under which an operator would choose each strategy. When failure rates are low, the fix-on-fail and batch strategies perform best and allow flexibility of operating strategy. When failure rates are high, purchase becomes optimal and is least sensitive to increasing failure rate. Late-life failure distributions based on the behaviour of mechanical and electrical components have been explored, and the increased operating costs caused by wear-out failures have been quantified. An increase in minor failures principally increases lost-revenue costs and can be mitigated by deploying additional maintenance resources. An increase in larger failures primarily increases vessel and repair costs. Adopting a purchase strategy can negate the vessel cost increase; however, significant cost increases are still observed. Maintenance actions requiring the use of heavy lift vessels (currently those for drive train components and blades) are identified as critical targets for proactive maintenance to minimise overall maintenance costs.
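    A toy cost model illustrates why the optimal strategy shifts with failure rate, as the abstract reports. All cost figures below are invented placeholders rather than the paper's data, and the real analysis is far more detailed.

```python
# Toy expected-cost comparison of the four heavy-lift-vessel strategies
# named above, as a function of major failure rate. Figures are invented.

def annual_cost(strategy: str, failures_per_year: float) -> float:
    MOBILISATION = 2.0e6   # one-off vessel mobilisation per campaign
    DAY_RATE     = 1.5e5   # charter day rate
    REPAIR_DAYS  = 5       # vessel days per major repair
    PURCHASE     = 5.0e6   # annualised cost of owning a vessel
    if strategy == "fix_on_fail":    # mobilise separately for each failure
        return failures_per_year * (MOBILISATION + REPAIR_DAYS * DAY_RATE)
    if strategy == "batch_repair":   # one campaign per year covers all failures
        return MOBILISATION + failures_per_year * REPAIR_DAYS * DAY_RATE
    if strategy == "annual_charter": # vessel on charter all year
        return 365 * DAY_RATE
    if strategy == "purchase":       # flat ownership cost, insensitive to rate
        return PURCHASE
    raise ValueError(strategy)

for rate in (0.5, 2.0, 8.0):
    costs = {s: annual_cost(s, rate) for s in
             ("fix_on_fail", "batch_repair", "annual_charter", "purchase")}
    # Low rates favour fix-on-fail; high rates favour purchase.
    print(rate, min(costs, key=costs.get))
```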

    An Operational Petri Net Semantics for the Join-Calculus

    We present a concurrent operational Petri net semantics for the join-calculus, a process calculus for specifying concurrent and distributed systems. There is often a gap between system specifications and actual implementations, caused by synchrony assumptions on the specification side and asynchronously interacting components in implementations. The join-calculus promises to reduce this gap by providing an abstract specification language that is asynchronously distributable. Classical process semantics establish an implicit order between actually independent actions by means of an interleaving, and the semantics of the join-calculus is no exception. To capture such independent actions, step-based semantics, e.g., as defined on Petri nets, are employed. Our Petri net semantics for the join-calculus induces step behaviour in a natural way. We prove our semantics behaviourally equivalent to the original join-calculus semantics by means of a bisimulation. We discuss how join-specific assumptions influence an existing notion of distributability based on Petri nets.
    Comment: In Proceedings EXPRESS/SOS 2012, arXiv:1208.244
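    The step-based behaviour referred to above can be illustrated with a toy Petri net in which two independent transitions fire together in a single step instead of being interleaved. The net below is an invented example, not the paper's translation of the join-calculus.

```python
from collections import Counter

# Toy marked Petri net: a step is a collection of transitions that are
# concurrently enabled, i.e. the marking covers their combined preset.
marking = Counter({"p1": 1, "p2": 1})
transitions = {
    "t1": {"in": Counter({"p1": 1}), "out": Counter({"p3": 1})},
    "t2": {"in": Counter({"p2": 1}), "out": Counter({"p4": 1})},
}

def step_enabled(marking: Counter, step: list) -> bool:
    """True iff the marking covers the combined preset of the step."""
    need = Counter()
    for t in step:
        need += transitions[t]["in"]
    return all(marking[p] >= n for p, n in need.items())

def fire_step(marking: Counter, step: list) -> Counter:
    """Fire all transitions of the step at once."""
    m = Counter(marking)
    for t in step:
        m -= transitions[t]["in"]
        m += transitions[t]["out"]
    return +m  # drop zero entries

assert step_enabled(marking, ["t1", "t2"])  # independent: one step, no interleaving
print(fire_step(marking, ["t1", "t2"]))     # Counter({'p3': 1, 'p4': 1})
```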

    Using Indexed and Synchronous Events to Model and Validate Cyber-Physical Systems

    Timed Transition Models (TTMs) are event-based descriptions for modelling, specifying, and verifying discrete real-time systems. An event can be spontaneous, fair, or timed with specified bounds. TTMs have a textual syntax, an operational semantics, and an automated tool supporting linear-time temporal logic. We extend TTMs and their tool with two novel modelling features for writing high-level specifications: indexed events and synchronous events. Indexed events allow for concise description of behaviour common to a set of actors; the indexing construct allows us to select a specific actor and to specify a temporal property for that actor. We use indexed events to validate the requirements of a train control system. Synchronous events allow developers to decompose simultaneous state updates into actions of separate events. To specify the intended data flow among synchronized actions, we use primed variables to reference the post-state (i.e., the one resulting from taking the synchronized actions). The TTM tool automatically infers the data flow from synchronous events and reports errors on inconsistencies due to circular data flow. We use synchronous events to validate part of the requirements of a nuclear shutdown system. In both case studies, we show how the new notation facilitates the formal validation of system requirements, and we use the TTM tool to verify safety, liveness, and real-time properties.
    Comment: In Proceedings ESSS 2015, arXiv:1506.0325
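    The circular-data-flow check mentioned above amounts to cycle detection in the dependency graph induced by primed-variable reads. The sketch below shows one way such a check might look; the action sets are invented examples, not TTM tool syntax.

```python
# Each synchronized action assigns one variable and may read the
# post-state (primed value) of others; circular data flow is an error.

def circular(actions: dict) -> bool:
    """actions maps an assigned variable -> set of primed variables it reads.
    Returns True iff the induced data-flow graph has a cycle (DFS colouring)."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {v: WHITE for v in actions}

    def dfs(v):
        colour[v] = GREY
        for w in actions.get(v, ()):
            if colour.get(w) == GREY or (colour.get(w) == WHITE and dfs(w)):
                return True
        colour[v] = BLACK
        return False

    return any(colour[v] == WHITE and dfs(v) for v in actions)

# x' := y' + 1 ; y' := 0   -> acyclic, a valid data flow
print(circular({"x": {"y"}, "y": set()}))  # False
# x' := y' ; y' := x'      -> circular, must be reported as an error
print(circular({"x": {"y"}, "y": {"x"}}))  # True
```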