160 research outputs found

    From Manifest V2 to V3: A Study on the Discoverability of Chrome Extensions

    Peer reviewed · Postprint

    User Access Privacy in OAuth 2.0 and OpenID Connect


    Industry Herding in Crypto Assets

    Peer reviewed · Postprint

    Tackling Non-Stationarity in Reinforcement Learning via Causal-Origin Representation

    In real-world scenarios, the application of reinforcement learning is significantly challenged by complex non-stationarity. Most existing methods attempt to model changes in the environment explicitly, often requiring impractical prior knowledge. In this paper, we propose a new perspective, positing that non-stationarity can propagate and accumulate through complex causal relationships during state transitions, thereby compounding its complexity and impairing policy learning. We believe that this challenge can be more effectively addressed by tracing the causal origin of non-stationarity. To this end, we introduce the Causal-Origin REPresentation (COREP) algorithm. COREP primarily employs a guided updating mechanism to learn a stable graph representation for states, termed the causal-origin representation. By leveraging this representation, the learned policy exhibits impressive resilience to non-stationarity. We supplement our approach with a theoretical analysis grounded in a causal interpretation of non-stationary reinforcement learning, supporting the validity of the causal-origin representation. Experimental results further demonstrate the superior performance of COREP over existing methods in tackling non-stationarity.
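
    The abstract does not describe COREP's architecture, so the following is only a rough Python/PyTorch sketch of the general idea: state features are treated as nodes of a learned graph, and a slowly updated target copy of the encoder stands in for a "guided updating mechanism" that keeps the representation stable under non-stationarity. All names (GraphStateEncoder, guided_update) and the Polyak-averaging choice are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch of a causal-origin-style state encoder; not the COREP code.
    import torch
    import torch.nn as nn


    class GraphStateEncoder(nn.Module):
        """Encodes a state vector as a graph over its feature dimensions."""

        def __init__(self, state_dim: int, embed_dim: int = 32):
            super().__init__()
            self.node_embed = nn.Linear(1, embed_dim)   # each scalar feature -> node embedding
            # Learned soft adjacency over state features (assumed graph structure).
            self.adj_logits = nn.Parameter(torch.zeros(state_dim, state_dim))
            self.out = nn.Linear(state_dim * embed_dim, embed_dim)

        def forward(self, state: torch.Tensor) -> torch.Tensor:
            # state: (batch, state_dim) -> node features: (batch, state_dim, embed_dim)
            nodes = self.node_embed(state.unsqueeze(-1))
            adj = torch.softmax(self.adj_logits, dim=-1)      # row-normalized adjacency
            mixed = torch.einsum("ij,bjd->bid", adj, nodes)   # one round of message passing
            return self.out(mixed.flatten(start_dim=1))       # (batch, embed_dim)


    def guided_update(online: nn.Module, target: nn.Module, tau: float = 0.01) -> None:
        """Polyak-average the online encoder into a slow target copy (stability proxy)."""
        with torch.no_grad():
            for p_t, p_o in zip(target.parameters(), online.parameters()):
                p_t.mul_(1.0 - tau).add_(tau * p_o)


    # Usage: the policy reads the slow target representation, which drifts gradually
    # even while the online encoder adapts to non-stationary transitions.
    state_dim = 8
    online_enc = GraphStateEncoder(state_dim)
    target_enc = GraphStateEncoder(state_dim)
    target_enc.load_state_dict(online_enc.state_dict())

    batch = torch.randn(4, state_dim)
    representation = target_enc(batch)       # (4, 32) causal-origin-style representation
    guided_update(online_enc, target_enc)    # call after each gradient step on online_enc

    In an actual training loop the online encoder would be optimized jointly with the policy loss, and guided_update would be applied after each gradient step so the policy consumes a representation that changes slowly even when the environment's transition dynamics shift.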