
    A Multilayer Approach for Intrusion Detection with Lightweight Multilayer Perceptron and LSTM Deep Learning Models

    Intrusion detection is essential in cybersecurity for protecting networks and computer systems from malicious activity. We propose a novel multilayer strategy that combines a lightweight Multilayer Perceptron (MLP) and a Long Short-Term Memory (LSTM) deep learning model to improve the precision and effectiveness of intrusion detection. The lightweight MLP serves as the first layer, performing feature extraction and representation; its streamlined architecture allows fast processing of network data while maintaining competitive performance. The extracted features are then passed to the LSTM model, which excels at identifying temporal correlations and patterns in sequential data. By merging these two models, our multilayer technique effectively handles the high-dimensional and dynamic nature of network data. Extensive experiments on benchmark datasets show that our strategy outperforms conventional single-model intrusion detection techniques. The proposed multilayer method is also highly efficient, making it particularly suitable for real-time intrusion detection in large network environments. As cyber threats continue to advance, our multilayer approach offers a strong and dependable solution for identifying and mitigating intrusions, strengthening the security posture of computer systems and networks.
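    The two-stage pipeline the abstract describes can be sketched in plain NumPy. This is an illustration only: the layer sizes, window length, and the random (untrained) weights are assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_features(x, W1, b1, W2, b2):
    """Stage 1 -- lightweight MLP: two dense layers with ReLU, used as a
    per-record feature extractor."""
    h = np.maximum(x @ W1 + b1, 0.0)
    return np.maximum(h @ W2 + b2, 0.0)

def lstm_last_hidden(seq, Wx, Wh, b):
    """Stage 2 -- a single LSTM layer over the feature sequence; returns the
    final hidden state. Gates are stacked in the order input/forget/cell/output."""
    d = Wh.shape[0]
    h, c = np.zeros(d), np.zeros(d)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x in seq:
        z = x @ Wx + h @ Wh + b
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h

# Toy dimensions (assumed): 20 raw flow features per record, 8 MLP features,
# a window of 5 records, 16 LSTM units, one sigmoid output unit.
n_raw, n_feat, T, n_hid = 20, 8, 5, 16
W1, b1 = rng.normal(size=(n_raw, 32)) * 0.1, np.zeros(32)
W2, b2 = rng.normal(size=(32, n_feat)) * 0.1, np.zeros(n_feat)
Wx = rng.normal(size=(n_feat, 4 * n_hid)) * 0.1
Wh = rng.normal(size=(n_hid, 4 * n_hid)) * 0.1
bg = np.zeros(4 * n_hid)
Wo, bo = rng.normal(size=(n_hid, 1)) * 0.1, 0.0

window = rng.normal(size=(T, n_raw))           # one window of network records
feats = mlp_features(window, W1, b1, W2, b2)   # layer 1: per-record features
h = lstm_last_hidden(feats, Wx, Wh, bg)        # layer 2: temporal pattern
p_attack = 1.0 / (1.0 + np.exp(-(h @ Wo + bo)))  # sigmoid intrusion score
print(float(p_attack[0]))
```

    In a trained system the weights would be fit on labeled traffic and the score thresholded to raise an alert; here the point is only the data flow from MLP features into the LSTM.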

    Mobility and Congestion in Dynamical Multilayer Networks with Finite Storage Capacity

    Multilayer networks describe well many real interconnected communication and transportation systems, ranging from computer networks to multimodal mobility infrastructures. Here, we introduce a model in which the nodes have a limited capacity for storing and processing the agents moving over a multilayer network, and their congestion triggers temporary faults which, in turn, dynamically affect the routing of agents seeking uncongested paths. The study of the network performance under different layer velocities and node maximum capacities reveals the existence of delicate trade-offs between the number of served agents and their time to travel to destination. We provide analytical estimates of the optimal buffer size at which the travel time is minimum and of its dependence on the velocity and number of links at the different layers. Phenomena reminiscent of the Slower Is Faster (SIF) effect and of Braess' paradox are observed in our dynamical multilayer set-up. Comment: 5 pages, 3 figures
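    The congestion mechanism the abstract describes, finite node buffers whose saturation temporarily blocks routing, can be illustrated with a deliberately minimal single-ring toy. The dynamics below (FIFO buffers, clockwise hops, a full downstream node forcing the head agent to wait) are an assumed simplification for illustration, not the paper's actual multilayer model.

```python
import random
from collections import deque

def simulate(n_nodes=10, B=3, n_agents=25, steps=300, seed=7):
    """Agents hop clockwise around a ring; each node buffers at most B
    agents (FIFO). A full downstream node refuses arrivals, so the head
    agent waits one step -- a temporary congestion-induced 'fault'."""
    random.seed(seed)
    n_agents = min(n_agents, n_nodes * B - 1)   # leave one free slot
    buffers = [deque() for _ in range(n_nodes)]
    placed = 0
    while placed < n_agents:                    # inject agents where room exists
        src = random.randrange(n_nodes)
        if len(buffers[src]) < B:
            buffers[src].append({"dest": random.randrange(n_nodes),
                                 "born": 0, "moved": -1})
            placed += 1
    delivered, travel_times = 0, []
    for t in range(steps):
        for node in range(n_nodes):
            if not buffers[node]:
                continue
            head = buffers[node][0]
            if head["moved"] == t:              # already hopped this step
                continue
            if head["dest"] == node:            # arrived: leave the network
                buffers[node].popleft()
                delivered += 1
                travel_times.append(t - head["born"])
                continue
            nxt = (node + 1) % n_nodes
            if len(buffers[nxt]) < B:           # room downstream: hop
                head["moved"] = t
                buffers[nxt].append(buffers[node].popleft())
            # else: downstream node congested, head agent waits this step
    mean_tt = sum(travel_times) / len(travel_times) if travel_times else None
    return delivered, mean_tt

for B in (1, 3, 10):                            # scan the buffer size
    print(B, simulate(B=B))
```

    Scanning `B` shows the kind of trade-off the abstract analyzes: small buffers throttle injection, while large ones let queues build up and lengthen travel times.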

    A Novel Algorithm to Train Multilayer Hardlimit Neural Networks Based on a Mixed Integer Linear Program Model

    In a previous work we showed that hardlimit multilayer neural networks have more computational power than sigmoidal multilayer neural networks [1]. In 1962, Minsky and Papert showed the limitations of a single perceptron, which can only solve linearly separable classification problems; since at that time there was no algorithm to find the weights of a multilayer hardlimit perceptron, research on neural networks stagnated until the early eighties, when the Backpropagation algorithm was invented [2]. Nevertheless, since the sixties there have been proposals of algorithms to implement logical functions with threshold elements, or hardlimit neurons, that could have been adapted to classification problems with multilayer hardlimit perceptrons; in this way the stagnation of research on neural networks could have been avoided. Although the problem of training a hardlimit neural network is NP-complete, our algorithm, based on mathematical programming with a mixed integer linear programming (MILP) model, takes a few seconds to train the two-input XOR function and a simple logical function of three variables with two minterms. Since any linearly separable logical function can be implemented by a perceptron with integer weights, by varying them between -1 and 1 we found all 10 possible solutions for the implementation of the two-input XOR function, and all 14 and 18 possible solutions, respectively, for the implementation of two logical functions of three variables, with a two-layer architecture having two neurons in the first layer. We describe our MILP model and show why it consumes a lot of computational resources: even a small hardlimit neural network translates into a MILP model larger than 1 GB, implying the use of a more powerful computer than a common 32-bit PC.
    We consider the reduction of computational resources the main objective of near-future work to improve our novel MILP model, and we will also try a nonlinear version of our algorithm based on a MINLP model that should consume less memory.
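    Because the weights are bounded integers, the search space the abstract describes is small enough to enumerate directly. The sketch below brute-forces a 2-2-1 hardlimit network for XOR with all weights and biases in {-1, 0, 1}; it is not the paper's MILP formulation, and the count it reports depends on conventions the abstract does not pin down (the value of the step function at 0, whether biases are bounded, whether permuted hidden neurons count once).

```python
from itertools import product

# Assumed strict hardlimit activation: 1 if the net input is positive.
step = lambda s: 1 if s > 0 else 0

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def net(x, p):
    """2-2-1 hardlimit network; p packs all 9 integer parameters:
    two hidden neurons (weights + bias each) and one output neuron."""
    w11, w12, b1, w21, w22, b2, v1, v2, c = p
    h1 = step(w11 * x[0] + w12 * x[1] + b1)
    h2 = step(w21 * x[0] + w22 * x[1] + b2)
    return step(v1 * h1 + v2 * h2 + c)

# Enumerate all 3^9 parameter settings and keep those realizing XOR.
solutions = [p for p in product((-1, 0, 1), repeat=9)
             if all(net(x, p) == y for x, y in XOR.items())]
print(len(solutions))
```

    One valid setting is hidden OR `(1,1,0)`, hidden AND `(1,1,-1)`, and output `(1,-1,0)`, i.e. XOR as "OR and not AND"; the enumeration finds it along with its variants.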

    Spreading processes in Multilayer Networks

    Several systems can be modeled as sets of interconnected networks or networks with multiple types of connections, here generally called multilayer networks. Spreading processes, such as information propagation among users of an online social network or the diffusion of pathogens among individuals through their contact network, are fundamental phenomena occurring in these networks. However, while information diffusion in single networks has received considerable attention from various disciplines for over a decade, spreading processes in multilayer networks are still a young research area presenting many challenging research issues. In this paper we review the main models, results and applications of multilayer spreading processes and discuss some promising research directions. Comment: 21 pages, 3 figures, 4 tables
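    A minimal example of the class of processes this survey covers is SIR spreading on a two-layer multiplex, where every node exists in both layers and an infected node can transmit along its edges in either layer. The specific dynamics and toy layers below are generic illustrative assumptions, not a model taken from the paper.

```python
import random

def sir_multiplex(layers, n, beta=0.3, seed=3, patient_zero=0):
    """Discrete-time SIR on a multiplex: each step, every infected node
    tries to infect each susceptible neighbor in *every* layer with
    probability beta, then recovers. Returns the final outbreak size."""
    random.seed(seed)
    state = ["S"] * n
    state[patient_zero] = "I"
    while "I" in state:
        infected = [u for u in range(n) if state[u] == "I"]
        newly = set()
        for u in infected:
            for edges in layers:                 # union of the layers' edges
                for v in edges.get(u, []):
                    if state[v] == "S" and random.random() < beta:
                        newly.add(v)
        for u in infected:
            state[u] = "R"                       # recover after one step
        for v in newly:
            state[v] = "I"
    return state.count("R")

# Two toy layers on 6 nodes: a ring (e.g. physical contacts) plus a sparse
# shortcut layer (e.g. online contacts), as dicts of neighbor lists.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
shortcuts = {0: [3], 3: [0], 1: [4], 4: [1]}
print(sir_multiplex([ring, shortcuts], 6))
```

    Comparing runs with and without the shortcut layer illustrates the survey's central point: coupling layers changes the reach and speed of spreading relative to any single layer in isolation.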

    Multilayer Networks

    In most natural and engineered systems, a set of entities interact with each other in complicated patterns that can encompass multiple types of relationships, change in time, and include other types of complications. Such systems include multiple subsystems and layers of connectivity, and it is important to take such "multilayer" features into account to try to improve our understanding of complex systems. Consequently, it is necessary to generalize "traditional" network theory by developing (and validating) a framework and associated tools to study multilayer systems in a comprehensive fashion. The origins of such efforts date back several decades and arose in multiple disciplines, and now the study of multilayer networks has become one of the most important directions in network science. In this paper, we discuss the history of multilayer networks (and related concepts) and review the exploding body of work on such networks. To unify the disparate terminology in the large body of recent work, we discuss a general framework for multilayer networks, construct a dictionary of terminology to relate the numerous existing concepts to each other, and provide a thorough discussion that compares, contrasts, and translates between related notions such as multilayer networks, multiplex networks, interdependent networks, networks of networks, and many others. We also survey and discuss existing data sets that can be represented as multilayer networks. We review attempts to generalize single-layer-network diagnostics to multilayer networks. We also discuss the rapidly expanding research on multilayer-network models and notions like community structure, connected components, tensor decompositions, and various types of dynamical processes on multilayer networks. We conclude with a summary and an outlook. Comment: Working paper; 59 pages, 8 figures

    Multilayer Network of Language: a Unified Framework for Structural Analysis of Linguistic Subsystems

    Recently, the focus of complex networks research has shifted from the analysis of isolated properties of a system toward a more realistic modeling of multiple phenomena: multilayer networks. Motivated by the success of the multilayer approach in social, transport and trade systems, we propose the introduction of multilayer networks for language. The multilayer network of language is a unified framework for modeling linguistic subsystems and their structural properties, enabling the exploration of their mutual interactions. Various aspects of natural language systems can be represented as complex networks, whose vertices depict linguistic units, while links model their relations. The multilayer network of language is defined by three aspects: the network construction principle, the linguistic subsystem and the language of interest. More precisely, we construct word-level (syntax, co-occurrence and its shuffled counterpart) and subword-level (syllables and graphemes) network layers from five variations of the original text (in the modeled language). The obtained results suggest that there are substantial differences between the network structures of different language subsystems, which are hidden during the exploration of an isolated layer. The word-level layers share structural properties regardless of the language (e.g. Croatian or English), while the syllabic subword level exhibits more language-dependent structural properties. The preserved weighted overlap quantifies the similarity of word-level layers in weighted and directed networks. Moreover, the analysis of motifs reveals a close topological structure of the syntactic and syllabic layers for both languages. The findings corroborate that the multilayer network framework is a powerful, consistent and systematic approach to model several linguistic subsystems simultaneously and hence to provide a more unified view of language.
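    The construction principle, linking adjacent units at the word level and at the subword level, can be sketched on a toy sentence. The example below builds a word co-occurrence layer and a grapheme subword layer; the paper's syllable layer would require a language-specific syllabifier, so graphemes stand in here, and the sentence itself is an arbitrary illustration.

```python
from collections import Counter

text = "the quick brown fox jumps over the lazy dog"
words = text.split()

# Word-level co-occurrence layer: a directed, weighted edge between each
# pair of neighbouring words; edge weight = number of co-occurrences.
word_layer = Counter(zip(words, words[1:]))

# Grapheme (subword-level) layer: a directed, weighted edge between each
# pair of neighbouring characters within a word.
grapheme_layer = Counter(p for w in words for p in zip(w, w[1:]))

print(word_layer.most_common(3))
print(grapheme_layer.most_common(3))
```

    Each `Counter` is a weighted edge list over a different vertex set (words vs. graphemes); stacking such layers for several construction principles and texts gives exactly the kind of multilayer object the abstract analyzes.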