2 research outputs found

    Extending Event-Driven Architecture for Proactive Systems

    ABSTRACT Proactive Event-Driven Computing is a new paradigm in which a decision is made neither in response to explicit user requests nor as a reaction to past events; instead, it is triggered autonomously by forecasting future states. Proactive event-driven computing requires a departure from current event-driven architectures toward ones capable of handling uncertainty, future events, and real-time decision making. We present a proactive event-driven architecture for Scalable Proactive Event-Driven Decision-making (SPEEDD), which combines these capabilities. The proposed architecture is composed of three main components: complex event processing, real-time decision making, and visualization. The architecture is instantiated with a real use case from the traffic management domain. In the future, the results of actual implementations of the use case will help us revise and refine the proposed architecture.
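
    The abstract describes a pipeline in which probabilistic forecasts of future states, rather than observed events alone, trigger decisions. The following is a minimal, hypothetical sketch of that control flow in Python; the component and function names (Event, Forecast, complex_event_processing, decide), the toy congestion model, and the threshold are illustrative assumptions and are not taken from the SPEEDD system.

    ```python
    # Minimal, hypothetical sketch of a proactive event-driven pipeline:
    # complex event processing turns raw events into probabilistic forecasts
    # of future states, and a decision component acts on them in advance.
    # Names, the toy model, and the threshold are illustrative only.
    from dataclasses import dataclass
    from typing import Iterable, Optional


    @dataclass
    class Event:
        kind: str           # e.g. "sensor_reading"
        value: float
        timestamp: float


    @dataclass
    class Forecast:
        state: str          # predicted future state, e.g. "congestion"
        probability: float  # uncertainty attached to the prediction
        horizon_s: float    # how far ahead the state is expected


    def complex_event_processing(window: Iterable[Event]) -> Optional[Forecast]:
        """Derive a probabilistic forecast from a window of raw events."""
        readings = [e.value for e in window if e.kind == "sensor_reading"]
        if not readings:
            return None
        load = sum(readings) / len(readings)
        # Toy model: the higher the average load, the more likely congestion is.
        return Forecast(state="congestion",
                        probability=min(load / 100.0, 1.0),
                        horizon_s=300.0)


    def decide(forecast: Forecast, threshold: float = 0.7) -> Optional[str]:
        """Real-time decision making: act proactively if the forecast is likely enough."""
        if forecast.probability >= threshold:
            return (f"reroute traffic ({forecast.state} expected "
                    f"within {forecast.horizon_s:.0f}s)")
        return None


    if __name__ == "__main__":
        window = [Event("sensor_reading", v, t) for t, v in enumerate([60.0, 75.0, 90.0])]
        forecast = complex_event_processing(window)
        if forecast is not None:
            print(decide(forecast) or "no proactive action needed")
    ```

    The point of the sketch is the ordering: the decision is driven by the forecast's probability and horizon, not by the arrival of a specific triggering event, which is what separates the proactive style from conventional event-driven processing.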

    RFID Authentication Efficient Proactive Information Security within Computational Security

    Abstract. We consider repeated communication sessions between an RFID (Radio Frequency Identification) Tag and an RFID Verifier. A proactive information-theoretic security scheme is proposed. The scheme is based on the assumption that the information exchanged during at least one of every n successive communication sessions is not exposed to an adversary. The Tag and the Verifier maintain a vector of n entries that is repeatedly refreshed by pairwise XORing its entries with a new vector of n entries that is randomly chosen by the Tag and sent to the Verifier as part of each communication session. The general case in which the adversary does not listen in k ≥ 1 sessions among any n successive communication sessions is also considered. A lower bound of n·(k+1) on the number of random numbers used during any n successive communication sessions is proven; in other words, we prove that any algorithm must use at least n·(k+1) new random numbers during any n successive communication sessions. A randomized scheme that uses only O(n log n) new random numbers is then presented. A computationally secure scheme, built on the information-theoretically secure scheme, ensures that even when the adversary listens to all information exchanges, the communication between the Tag and the Verifier remains secure.
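
    The core refresh step described in the abstract keeps the Tag and the Verifier synchronized by XORing a fresh random vector into their shared state on every session. Below is a minimal sketch of that refresh mechanism, assuming n fixed-size entries; authentication, the transport of the update, and the handling of the k missed sessions are omitted, and all names and sizes are illustrative assumptions rather than the paper's construction.

    ```python
    # Minimal sketch of the vector-refresh idea: Tag and Verifier each hold a
    # vector of n secret entries that is refreshed every session by XORing it,
    # entry by entry, with a fresh random vector chosen by the Tag.
    # Authentication and the k-missed-sessions case are omitted; names and
    # sizes are illustrative assumptions.
    import secrets

    N_ENTRIES = 8        # n entries in the shared vector
    ENTRY_BITS = 64      # size of each entry


    def fresh_vector() -> list[int]:
        """Random vector of n entries, chosen by the Tag each session."""
        return [secrets.randbits(ENTRY_BITS) for _ in range(N_ENTRIES)]


    def refresh(state: list[int], update: list[int]) -> list[int]:
        """Pairwise XOR of the stored vector with the newly received vector."""
        return [s ^ u for s, u in zip(state, update)]


    # Shared initial state (established out of band).
    tag_state = fresh_vector()
    verifier_state = list(tag_state)

    # One communication session: the Tag picks a new vector, sends it to the
    # Verifier, and both sides fold it into their stored state.
    update = fresh_vector()
    tag_state = refresh(tag_state, update)
    verifier_state = refresh(verifier_state, update)

    assert tag_state == verifier_state  # both sides stay synchronized
    ```

    Because every entry is XORed with fresh randomness each session, an adversary who misses even one of the n sessions loses track of the current vector, which is the intuition behind the proactive security guarantee stated in the abstract.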