4 research outputs found

    Reinforcement Learning with Policy Mixture Model for Temporal Point Processes Clustering

    Temporal point processes are an expressive tool for modeling event sequences over time. In this paper, we take a reinforcement learning view whereby the observed sequences are assumed to be generated from a mixture of latent policies. The goal is to cluster sequences with different temporal patterns into the underlying policies while learning each policy model. The flexibility of our model lies in: i) all the components are neural networks, including the policy network that models the intensity function of the temporal point process; ii) to handle varying-length event sequences, we resort to inverse reinforcement learning, decomposing the observed sequence into states (the RNN hidden embedding of the history) and actions (the time interval to the next event) in order to learn the reward function, thus achieving better performance or efficiency than existing methods that use rewards over the entire sequence, such as log-likelihood or Wasserstein distance. We adopt an expectation-maximization framework, with the E-step estimating the cluster label for each sequence and the M-step learning the respective policy. Extensive experiments show the efficacy of our method against the state of the art.
    Comment: 8 pages, 3 figures, 4 tables
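The expectation-maximization loop for clustering sequences can be sketched in a few lines. This is a minimal illustration under strong simplifying assumptions, not the paper's method: each "policy" is replaced by a simple Gaussian model over inter-event intervals so that both E- and M-steps stay in closed form, and `em_cluster` and all other names are hypothetical.

```python
import numpy as np

def log_lik(seq, mu, sigma):
    # Log-likelihood of a sequence of intervals under one "policy"
    # (here a stand-in Gaussian, not the paper's policy network).
    return np.sum(-0.5 * ((seq - mu) / sigma) ** 2
                  - np.log(sigma * np.sqrt(2 * np.pi)))

def em_cluster(seqs, k=2, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    mus = rng.normal(1.0, 0.5, k)
    sigmas = np.ones(k)
    pis = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior responsibility of each policy for each sequence.
        ll = np.array([[np.log(pis[j]) + log_lik(s, mus[j], sigmas[j])
                        for j in range(k)] for s in seqs])
        ll -= ll.max(axis=1, keepdims=True)        # numerical stability
        resp = np.exp(ll)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted refit of each policy.
        x = np.concatenate(seqs)
        for j in range(k):
            w = np.concatenate([np.full(len(s), resp[i, j])
                                for i, s in enumerate(seqs)])
            mus[j] = np.average(x, weights=w)
            sigmas[j] = max(np.sqrt(np.average((x - mus[j]) ** 2, weights=w)), 1e-3)
        pis = resp.mean(axis=0)
    return resp.argmax(axis=1)

# Two temporal patterns: short intervals vs. long intervals.
rng = np.random.default_rng(1)
fast = [np.abs(rng.normal(0.5, 0.1, 20)) for _ in range(10)]
slow = [np.abs(rng.normal(3.0, 0.3, 20)) for _ in range(10)]
labels = em_cluster(fast + slow, k=2)
```

In the paper both steps would instead use the learned reward over (state, action) pairs; the EM skeleton itself is unchanged.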

    Insider Threat Detection via Hierarchical Neural Temporal Point Processes

    Insiders usually cause significant losses to organizations and are hard to detect. Various approaches have been proposed to detect insider threats by analyzing audit data that record the type and time of each employee's activities. However, existing approaches usually focus on modeling the users' activity types and do not consider the activity time information. In this paper, we propose a hierarchical neural temporal point process model that combines temporal point processes with recurrent neural networks for insider threat detection. Our model captures a general nonlinear dependency over the history of all activities through a two-level structure that effectively models activity times, activity types, session durations, and session intervals. Experimental results on two datasets demonstrate that our model outperforms models that consider only the activity types or only the activity times.
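The two-level idea can be sketched with plain tanh RNN cells: a lower RNN summarizes the activities inside one session, and an upper RNN consumes those session summaries together with session duration and inter-session interval features. All dimensions, weights, and names here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def rnn(inputs, h_dim, W, U, b):
    # Minimal tanh RNN: fold a sequence of vectors into one hidden state.
    h = np.zeros(h_dim)
    for x in inputs:
        h = np.tanh(W @ x + U @ h + b)
    return h  # final hidden state summarizes the sequence

rng = np.random.default_rng(0)
act_dim, h_lo, h_hi = 4, 8, 6  # illustrative sizes

# Lower-level (within-session) weights.
W_lo = rng.normal(0, 0.3, (h_lo, act_dim))
U_lo = rng.normal(0, 0.3, (h_lo, h_lo))
b_lo = np.zeros(h_lo)

# Upper-level input = session summary + [duration, gap to next session].
W_hi = rng.normal(0, 0.3, (h_hi, h_lo + 2))
U_hi = rng.normal(0, 0.3, (h_hi, h_hi))
b_hi = np.zeros(h_hi)

sessions = []
for dur, gap in [(1.5, 8.0), (0.7, 2.0), (2.2, 12.0)]:
    acts = rng.normal(0, 1, (5, act_dim))        # activity-type features
    summary = rnn(acts, h_lo, W_lo, U_lo, b_lo)  # activity-level pass
    sessions.append(np.concatenate([summary, [dur, gap]]))

user_state = rnn(sessions, h_hi, W_hi, U_hi, b_hi)  # session-level pass
```

In the full model, `user_state` would parameterize the temporal point process used to score how anomalous the next activity and its timing are.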

    Intensity-Free Learning of Temporal Point Processes

    Temporal point processes are the dominant paradigm for modeling sequences of events happening at irregular intervals. The standard way of learning in such models is by estimating the conditional intensity function. However, parameterizing the intensity function usually incurs several trade-offs. We show how to overcome the limitations of intensity-based approaches by directly modeling the conditional distribution of inter-event times. We draw on the literature on normalizing flows to design models that are flexible and efficient. We additionally propose a simple mixture model that matches the flexibility of flow-based models but also permits sampling and computing moments in closed form. The proposed models achieve state-of-the-art performance in standard prediction tasks and are suitable for novel applications, such as learning sequence embeddings and imputing missing data.
    Comment: International Conference on Learning Representations (ICLR) 2020
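The closed-form mixture idea can be illustrated with a mixture of log-normal distributions over inter-event times (the component family and all names here are our illustrative choice): the density, exact sampling, and the mean are all available without any intensity function or numerical integration.

```python
import numpy as np

def lognorm_mix_pdf(tau, w, mu, sigma):
    # Closed-form density of a log-normal mixture over an inter-event time tau.
    comp = (np.exp(-0.5 * ((np.log(tau) - mu) / sigma) ** 2)
            / (tau * sigma * np.sqrt(2 * np.pi)))
    return np.sum(w * comp)

def lognorm_mix_sample(n, w, mu, sigma, rng):
    # Exact sampling: pick a component, then draw from it.
    ks = rng.choice(len(w), size=n, p=w)
    return np.exp(rng.normal(mu[ks], sigma[ks]))

w = np.array([0.3, 0.7])
mu = np.array([-1.0, 1.0])
sigma = np.array([0.5, 0.5])

rng = np.random.default_rng(0)
taus = lognorm_mix_sample(10_000, w, mu, sigma, rng)

# Closed-form first moment: sum_k w_k * exp(mu_k + sigma_k^2 / 2).
mean_cf = np.sum(w * np.exp(mu + sigma ** 2 / 2))
```

A flow-based model would be more flexible than this fixed family, but as the abstract notes, only the mixture gives moments and samples in closed form.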

    Hawkes Processes Modeling, Inference and Control: An Overview

    Hawkes processes are a type of point process that models self-excitation among events in time. They have been used in a myriad of applications, ranging from finance and earthquakes to crime rates and social network activity analysis. Recently, a surge of new tools and algorithms has made its way into top-tier machine learning conferences. This work aims to give a newcomer to the field a broad view of recent advances in Hawkes process modeling and inference.
    Comment: Fixed typos. Included pseudocode for simulation algorithms. Improved figures. Included tables with complexity and performance comparisons. Included new sections on Current Challenges and an Application Example.
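Self-excitation can be made concrete with the classic exponential-kernel intensity, λ(t) = μ + Σ_{t_i < t} α·exp(−β(t − t_i)), simulated by Ogata's thinning algorithm. The parameter values below are illustrative, and the conservative upper bound used for thinning is one simple choice among several.

```python
import numpy as np

def intensity(t, events, mu, alpha, beta):
    # Exponential-kernel Hawkes intensity: baseline plus decaying
    # contributions from all past events.
    past = events[events < t]
    return mu + np.sum(alpha * np.exp(-beta * (t - past)))

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    # Ogata's thinning: propose candidates from a constant upper bound,
    # then accept each with probability lambda(t) / lambda_bar.
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while t < T:
        # Intensity decays between events, so current value + alpha
        # (the jump size of one event) is a valid upper bound.
        lam_bar = intensity(t, np.array(events), mu, alpha, beta) + alpha
        t += rng.exponential(1.0 / lam_bar)
        if t < T and rng.uniform() <= intensity(
                t, np.array(events), mu, alpha, beta) / lam_bar:
            events.append(t)
    return np.array(events)

# Branching ratio alpha/beta = 2/3 < 1, so the process is stationary with
# expected long-run rate mu / (1 - alpha/beta) = 1.5 events per unit time.
ev = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, T=200.0)
```

Each recomputation of the intensity scans all past events, giving O(n^2) simulation cost overall; the surveys discussed above cover the recursive O(n) trick for exponential kernels.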