13 research outputs found

    DAEMON: dynamic auto-encoders for contextualised anomaly detection applied to security monitoring

    No full text
    The slow adoption of machine-learning-based methods for novel attack detection by Security Operation Center (SOC) analysts can be partly explained by their lack of data-science expertise and by the insufficient explainability of the results these approaches provide. In this paper, we present an anomaly-based detection method that fuses events coming from heterogeneous sources into sets describing the same phenomena and relies on a deep auto-encoder model to highlight anomalies and their context. To involve security analysts and benefit from their expertise, we focus on limiting the need for data-science knowledge during the configuration phase. Results on a lab environment, monitored using off-the-shelf tools, show good detection performance on several attack scenarios (F1 score ≈ 0.9), and the method eases the investigation of anomalies by quickly finding similar ones through clustering.
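    As a rough illustration of the underlying technique (a generic auto-encoder anomaly detector, not the authors' DAEMON implementation), the sketch below trains on benign feature vectors only and scores new samples by reconstruction error; the layer sizes, and the assumption that fused event sets are already encoded as fixed-size numeric vectors, are illustrative.

        # Minimal sketch of auto-encoder anomaly scoring (PyTorch).
        # Generic technique only, not the authors' DAEMON model.
        import torch
        import torch.nn as nn

        class AutoEncoder(nn.Module):
            def __init__(self, n_features: int, latent_dim: int = 8):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Linear(n_features, 32), nn.ReLU(),
                    nn.Linear(32, latent_dim),
                )
                self.decoder = nn.Sequential(
                    nn.Linear(latent_dim, 32), nn.ReLU(),
                    nn.Linear(32, n_features),
                )

            def forward(self, x):
                return self.decoder(self.encoder(x))

        def train(model, benign, epochs=50, lr=1e-3):
            # Train on benign data only: the model learns to reconstruct
            # normal behaviour, so anomalous samples reconstruct poorly.
            opt = torch.optim.Adam(model.parameters(), lr=lr)
            for _ in range(epochs):
                opt.zero_grad()
                loss = nn.functional.mse_loss(model(benign), benign)
                loss.backward()
                opt.step()

        def anomaly_scores(model, batch):
            # Per-sample reconstruction error acts as the anomaly score.
            with torch.no_grad():
                return ((model(batch) - batch) ** 2).mean(dim=1)

    Similar anomalies could then be grouped for investigation by clustering, e.g. running K-means over the latent vectors produced by model.encoder.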

    Simulation réaliste d'utilisateurs pour les systèmes d'information en Cyber Range (Realistic user simulation for information systems in a Cyber Range)

    No full text
    Generating user activity is a key capability both for evaluating security monitoring tools and for improving the credibility of attacker analysis platforms (e.g., honeynets). In this paper, to generate this activity, we instrument each machine by means of an external agent. This agent combines deterministic and deep-learning-based methods to adapt to different environments (e.g., multiple OSes, software versions, etc.) while maintaining high performance. We also propose conditional text generation models that facilitate the creation of credible conversations and documents, accelerating the definition of coherent, system-wide life scenarios.
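    As an illustrative sketch of conditional text generation (using an off-the-shelf GPT-2 model from the Hugging Face transformers library rather than the models trained for this work; the prompt and sampling parameters are assumptions), generation can be conditioned on a scenario-specific prompt such as an e-mail opener for a simulated employee:

        # Hedged sketch: conditional text generation with stock GPT-2,
        # not the paper's models. Prompt and parameters are illustrative.
        from transformers import GPT2LMHeadModel, GPT2Tokenizer

        tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
        model = GPT2LMHeadModel.from_pretrained("gpt2")

        # The prompt conditions the output on the life-scenario context.
        prompt = "Subject: Quarterly report\n\nHi team,"
        inputs = tokenizer(prompt, return_tensors="pt")
        outputs = model.generate(
            **inputs,
            max_new_tokens=60,
            do_sample=True,   # sampling keeps the generated activity varied
            top_p=0.9,
            pad_token_id=tokenizer.eos_token_id,
        )
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))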

    Federated Learning as enabler for Collaborative Security between not Fully-Trusting Distributed Parties

    No full text
    The literature shows that trust typically relies on knowledge about the communication partner. Federated learning is an approach for collaboratively improving machine learning models: it allows collaborators to share models without revealing secrets, since only the abstract models, and not the data used to create them, are shared. Federated learning thereby provides a mechanism to build trust without revealing secrets such as the specificities of local industrial systems. A fundamental challenge, however, is determining how much trust is justified for each contributor when collaboratively optimizing the joint models. By assigning equal trust to each contribution, a model can easily diverge from its optimum, whether caused by errors, bad observations, or cyberattacks. Trust also depends on how much an aggregated model contributes to the objectives of a party: for example, a model trained for an OT system is typically useless for monitoring IT systems. This paper shows first directions for integrating heterogeneous distributed data sources using federated learning methods. As an extended abstract, it presents current research directions and open issues from a cyber-analyst's perspective.
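    The aggregation step this implies can be sketched as a trust-weighted variant of federated averaging; how trust weights should actually be derived is one of the open issues the paper raises, so the weights below are purely illustrative assumptions:

        # Hedged sketch: trust-weighted federated averaging (NumPy).
        # Deriving the trust weights is an open question; the values
        # used here are illustrative only.
        import numpy as np

        def weighted_fed_avg(client_updates, trust_weights):
            """Average client model parameters, weighting each
            contributor by a trust score instead of treating all
            contributions equally."""
            w = np.asarray(trust_weights, dtype=float)
            w /= w.sum()                        # normalise to sum to 1
            stacked = np.stack(client_updates)  # (clients, params)
            return np.tensordot(w, stacked, axes=1)

        # Three contributors; the third (e.g. a suspected noisy or
        # compromised source) is down-weighted.
        updates = [np.array([1.0, 2.0]),
                   np.array([1.1, 1.9]),
                   np.array([9.0, -5.0])]
        print(weighted_fed_avg(updates, [1.0, 1.0, 0.1]))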
