3,379 research outputs found

    A consensus based network intrusion detection system

    Network intrusion detection is the process of identifying malicious behaviors that target a network and its resources. Current systems implementing intrusion detection observe traffic at several data collecting points in the network, but analysis is often centralized or partly centralized. These systems do not scale well and suffer from a single point of failure: attackers only need to target the central node to compromise the whole system. This paper proposes an anomaly-based, fully distributed network intrusion detection system in which analysis is run at each data collecting point using a naive Bayes classifier. Probability values computed by each classifier are shared among nodes using an iterative average consensus protocol. The final analysis is performed redundantly and in parallel at each data collecting point, thus avoiding the single-point-of-failure issue. We run simulations focusing on DDoS attacks with several network configurations, comparing the accuracy of our fully distributed system with that of a hierarchical one. We also analyze communication costs and convergence speed during consensus phases. Comment: Presented at the 5th International Conference on IT Convergence and Security 2015 in Kuala Lumpur, Malaysia.
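The iterative average consensus protocol mentioned in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: each node nudges its local anomaly probability toward its neighbours' values, and all nodes converge to the network-wide mean. The graph, step size, and probability values below are hypothetical.

```python
import numpy as np

def average_consensus(values, adjacency, epsilon, iters):
    # Iterative average consensus: x_i <- x_i + eps * sum_j A_ij * (x_j - x_i).
    # For a connected graph and eps < 1/max_degree, every node converges
    # to the mean of the initial values -- no central aggregator needed.
    x = np.asarray(values, dtype=float)
    A = np.asarray(adjacency, dtype=float)
    for _ in range(iters):
        x = x + epsilon * (A @ x - A.sum(axis=1) * x)
    return x

# Ring of 4 nodes sharing locally computed anomaly probabilities
A = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]
p = [0.9, 0.1, 0.2, 0.4]
print(average_consensus(p, A, epsilon=0.2, iters=100))  # -> each node ~0.4
```

Because every node ends up holding the same averaged probability, each can apply the detection threshold independently, which is what makes the final analysis redundant and parallel.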

    Increasing resilience of ATM networks using traffic monitoring and automated anomaly analysis

    Systematic network monitoring can be the cornerstone of the dependable operation of safety-critical distributed systems. In this paper, we present our vision for informed anomaly detection through network monitoring and resilience measurements to increase operators' visibility into ATM communication networks. We raise the question of how to determine the optimal level of automation in this safety-critical context, and we present a novel passive network monitoring system that can reveal network utilisation trends and traffic patterns at diverse timescales. Using network measurements, we derive resilience metrics and visualisations to enhance operators' knowledge of network and traffic behaviour, and to allow for network planning and provisioning based on informed what-if analysis.

    Variational Autoencoders for New Physics Mining at the Large Hadron Collider

    Using variational autoencoders trained on known physics processes, we develop a one-sided threshold test to isolate previously unseen processes as outlier events. Since the autoencoder training does not depend on any specific new-physics signature, the proposed procedure makes no specific assumptions about the nature of new physics. An event selection based on this algorithm would be complementary to classic LHC searches, which are typically based on model-dependent hypothesis testing. Such an algorithm would deliver a list of anomalous events that the experimental collaborations could further scrutinize and even release as a catalog, similarly to what is typically done in other scientific domains. Event topologies recurring in this dataset could inspire new-physics model building and new experimental searches. Running in the trigger system of the LHC experiments, such an application could identify anomalous events that would otherwise be lost, extending the scientific reach of the LHC. Comment: 29 pages, 12 figures, 5 tables.
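The one-sided threshold test described here can be sketched with numpy. This is an illustration under simplified assumptions, not the paper's actual pipeline: the exponential distributions stand in for autoencoder reconstruction losses, where events from known processes score low and unseen processes score high, and the threshold is set to a fixed quantile of the known-physics losses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reconstruction losses from an autoencoder trained on
# known physics: standard-model-like events score low; events from an
# unseen process reconstruct poorly and score high.
sm_losses = rng.exponential(scale=1.0, size=10_000)        # known processes
new_losses = rng.exponential(scale=1.0, size=100) + 12.0   # outlier events

# One-sided threshold test: fix the false-positive rate on known physics
# by cutting at a high quantile of the standard-model loss distribution.
threshold = np.quantile(sm_losses, 0.999)  # accept ~0.1% of SM events

flagged_sm = int(np.sum(sm_losses > threshold))
flagged_new = int(np.sum(new_losses > threshold))
print(flagged_sm, flagged_new)
```

The key property is that the threshold is chosen without any reference to the unseen process, which is what makes the selection model-independent.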

    Decentralized Anomaly Identification in Cyber-Physical DC Microgrids


    Inferring statistics of planet populations by means of automated microlensing searches

    (abridged) The study of other worlds is key to understanding our own, and not only provides clues to the origin of our civilization but also looks into its future. Rather than identifying nearby systems and learning about their individual properties, the main value of the gravitational microlensing technique lies in obtaining the statistics of planetary populations within the Milky Way and beyond. Only the complementarity of the different techniques currently employed promises to yield a complete picture of planet formation with sufficient predictive power to let us understand how habitable worlds like ours evolve, and how abundant such systems are in the Universe. A cooperative three-step strategy of survey, follow-up, and anomaly monitoring of microlensing targets, realized by means of an automated expert system and a network of ground-based telescopes, is ready right now to obtain a first census of cool planets with masses reaching even below that of Earth, orbiting K and M dwarfs in two distinct stellar populations, namely the Galactic bulge and disk. The hunt for extra-solar planets acts as a principal science driver for time-domain astronomy with robotic-telescope networks adopting fully automated strategies. Several initiatives, both in facilities and in advanced software and strategies, are expected to increase the capabilities of gravitational microlensing programmes step-wise over the next 10 years. New opportunities will arise as high-precision astrometry becomes available and studying the abundance of planets around stars in neighbouring galaxies becomes possible. Finally, we should not miss out on sharing the vision with the general public, so that its realization profits not only scientists but wider society. Comment: 10 pages in PDF format. White paper submitted to ESA's Exo-Planet Roadmap Advisory Team (EPR-AT); typos corrected. The embedded figures are available from the author on request. See also "Towards A Census of Earth-mass Exo-planets with Gravitational Microlensing" by J.P. Beaulieu, E. Kerins, S. Mao et al. (arXiv:0808.0005).

    Crossing Roads of Federated Learning and Smart Grids: Overview, Challenges, and Perspectives

    Consumers' privacy is a main concern in Smart Grids (SGs) due to the sensitivity of energy data, particularly when used to train machine learning models for different services. These data-driven models often require huge amounts of data to achieve acceptable performance, leading in most cases to risks of privacy leakage. By pushing training to the edge, Federated Learning (FL) offers a good compromise between privacy preservation and the predictive performance of these models. The current paper presents an overview of FL applications in SGs while discussing their advantages and drawbacks, mainly in load forecasting, electric vehicles, fault diagnosis, load disaggregation, and renewable energies. In addition, an analysis of the main design trends and possible taxonomies is provided, considering data partitioning, communication topology, and security mechanisms. Towards the end, an overview of the main challenges facing this technology and potential future directions is presented.
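The "pushing training to the edge" idea in this abstract can be illustrated with the standard federated averaging (FedAvg) aggregation step, sketched below in numpy. The function name, client weights, and dataset sizes are hypothetical; this is the generic FL aggregation rule, not a method from the surveyed paper.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    # Federated averaging: the server combines locally trained model
    # weights, weighted by each client's dataset size, so raw consumer
    # energy data never leaves the edge device (e.g. a smart meter).
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack([np.asarray(w, dtype=float) for w in client_weights])
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Three hypothetical households with locally trained forecasting weights
local = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(fed_avg(local, client_sizes=[100, 100, 200]))  # -> [3.5 4.5]
```

Only model parameters are exchanged in each round, which is the privacy/performance compromise the abstract refers to.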