864 research outputs found

    A Systematic Approach to Constructing Incremental Topology Control Algorithms Using Graph Transformation

    Full text link
    Communication networks form the backbone of our society. Topology control algorithms optimize the topology of such communication networks. Due to the importance of communication networks, a topology control algorithm should guarantee certain required consistency properties (e.g., connectivity of the topology), while achieving desired optimization properties (e.g., a bounded number of neighbors). Real-world topologies are dynamic (e.g., because nodes join, leave, or move within the network), which requires topology control algorithms to operate in an incremental way, i.e., based on the recently introduced modifications of a topology. Visual programming and specification languages are a proven means for specifying the structure as well as consistency and optimization properties of topologies. In this paper, we present a novel methodology, based on a visual graph transformation and graph constraint language, for developing incremental topology control algorithms that are guaranteed to fulfill a set of specified consistency and optimization constraints. More specifically, we model the possible modifications of a topology control algorithm and the environment using graph transformation rules, and we describe consistency and optimization properties using graph constraints. On this basis, we apply and extend a well-known constructive approach to derive refined graph transformation rules that preserve these graph constraints. We apply our methodology to re-engineer an established topology control algorithm, kTC, and evaluate it in a network simulation study to show the practical applicability of our approach. Comment: This document corresponds to the accepted manuscript of the referenced journal article.
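    The abstract names kTC but does not spell out its rule. As a rough, non-authoritative illustration of the kind of constraint-preserving edge classification such an algorithm performs, the following Python sketch marks an edge inactive when it is the longest edge of a triangle and at least k times longer than that triangle's shortest edge; the predicate, the parameter k, and the use of networkx are assumptions for illustration, and the sketch classifies the whole topology at once rather than incrementally.

```python
# Sketch of a kTC-like edge-classification rule over a weighted topology.
# The exact kTC predicate is an assumption, not taken from the abstract;
# networkx is used only for brevity.
import networkx as nx

def ktc_classify(g: nx.Graph, k: float = 1.2) -> None:
    """Mark every edge 'active' or 'inactive' in place.

    An edge (u, v) is marked inactive if it is the longest edge of some
    triangle (u, v, w) and at least k times longer than that triangle's
    shortest edge; the intent (as in kTC) is that the two remaining,
    shorter triangle edges bridge the removed one.
    """
    for u, v, data in g.edges(data=True):
        w_uv = data["weight"]
        data["state"] = "active"
        for w in set(g[u]) & set(g[v]):          # common neighbours form triangles
            w_uw, w_wv = g[u][w]["weight"], g[v][w]["weight"]
            if w_uv == max(w_uv, w_uw, w_wv) and w_uv >= k * min(w_uv, w_uw, w_wv):
                data["state"] = "inactive"
                break

# Usage: classify the edges of a small three-node topology.
g = nx.Graph()
g.add_weighted_edges_from([("a", "b", 1.0), ("b", "c", 1.1), ("a", "c", 2.0)])
ktc_classify(g, k=1.5)
print(nx.get_edge_attributes(g, "state"))   # ('a', 'c') ends up inactive
```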

    Network Intrusion Detection Using Autoencoder Neural Networks

    Get PDF
    In today's interconnected digital landscape, safeguarding computer networks against unauthorized access and cyber threats is of paramount importance. Network Intrusion Detection Systems (NIDS) play a crucial role in identifying and mitigating potential security breaches. This research paper explores the application of autoencoder neural networks, a subset of deep learning techniques, in the realm of network intrusion detection. Autoencoder neural networks are known for their ability to learn and represent data in a compressed, low-dimensional form. This study investigates their potential in modeling network traffic patterns and identifying anomalous activities. By training autoencoder networks on both normal and malicious network traffic data, we aim to create effective intrusion detection models that can distinguish between benign and malicious network behavior. The paper provides an in-depth analysis of the architecture and training methodologies of autoencoder neural networks for intrusion detection. It also explores various data preprocessing techniques and feature engineering approaches to enhance the model's performance. Additionally, the research evaluates the robustness and scalability of autoencoder-based NIDS in real-world network environments. Furthermore, ethical considerations in network intrusion detection, including privacy concerns and false positive rates, are discussed, addressing the need for a balanced approach that ensures network security while respecting user privacy and minimizing disruptions to operation. This approach compresses the majority-class samples and increases the count of minority-class samples among hard examples so that the IDS can achieve greater classification accuracy.
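    The abstract does not fix an architecture or training setup, so the sketch below shows only the standard reconstruction-error recipe: train a small bottleneck autoencoder on benign traffic and flag flows whose reconstruction error exceeds a threshold. The layer sizes, the 41-feature input, the benign-only training (the paper trains on both normal and malicious data), and the thresholding are illustrative assumptions, not the paper's configuration.

```python
# Sketch of a reconstruction-error autoencoder for intrusion detection.
# All sizes and the thresholding strategy are assumptions for illustration.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_features: int = 41, bottleneck: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, bottleneck), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(bottleneck, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def fit(model, benign: torch.Tensor, epochs: int = 20, lr: float = 1e-3):
    """Train on benign traffic only, so malicious flows reconstruct poorly."""
    opt, loss_fn = torch.optim.Adam(model.parameters(), lr=lr), nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(benign), benign)
        loss.backward()
        opt.step()

def flag_anomalies(model, flows: torch.Tensor, threshold: float) -> torch.Tensor:
    """Return a boolean mask: True where reconstruction error exceeds the threshold."""
    with torch.no_grad():
        err = ((model(flows) - flows) ** 2).mean(dim=1)
    return err > threshold
```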

    A Submodular Optimization Framework for Imbalanced Text Classification with Data Augmentation

    Get PDF
    In the domain of text classification, imbalanced datasets are a common occurrence. The skewed label distribution of these datasets poses a great challenge to the performance of text classifiers. One popular way to mitigate this challenge is to augment underrepresented labels with synthesized items. The synthesized items are generated by data augmentation methods that can typically generate an unbounded number of items. To select the synthesized items that maximize the performance of text classifiers, we introduce a novel method that selects items that jointly maximize the likelihood of the items belonging to their respective labels and the diversity of the selected items. Our proposed method formulates the joint maximization as a monotone submodular objective function, whose solution can be approximated by a tractable and efficient greedy algorithm. We evaluated our method on multiple real-world datasets with different data augmentation techniques and text classifiers, and compared the results with several baselines. The experimental results demonstrate the effectiveness and efficiency of our method.
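    The abstract names the ingredients (label likelihood plus diversity, a monotone submodular objective, a greedy approximation) but not the exact formula. The sketch below assumes one standard monotone submodular choice, a modular likelihood term plus a facility-location coverage term, and runs the plain greedy algorithm; the objective, the trade-off weight lam, and all names are assumptions, not the paper's formulation.

```python
# Sketch: greedy maximisation of an assumed monotone submodular objective that
# trades per-item label likelihood against diversity (facility-location coverage).
import numpy as np

def greedy_select(likelihood: np.ndarray, similarity: np.ndarray,
                  budget: int, lam: float = 1.0) -> list[int]:
    """likelihood[i]: classifier confidence that item i matches its target label.
    similarity[i, j]: pairwise similarity between candidates (symmetric, >= 0).
    f(S) = sum_{i in S} likelihood[i] + lam * sum_j max_{i in S} similarity[j, i]
    is monotone submodular, so greedy attains a (1 - 1/e) approximation.
    """
    n = likelihood.shape[0]
    selected: list[int] = []
    coverage = np.zeros(n)                 # current max similarity to the selected set
    for _ in range(min(budget, n)):
        best_gain, best_i = -np.inf, -1
        for i in range(n):
            if i in selected:
                continue
            gain = (likelihood[i]
                    + lam * np.maximum(coverage, similarity[:, i]).sum()
                    - lam * coverage.sum())
            if gain > best_gain:
                best_gain, best_i = gain, i
        selected.append(best_i)
        coverage = np.maximum(coverage, similarity[:, best_i])
    return selected

# Usage with random toy data: pick 5 of 100 synthesized items.
rng = np.random.default_rng(0)
sim = rng.random((100, 100)); sim = (sim + sim.T) / 2
print(greedy_select(rng.random(100), sim, budget=5))
```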

    Comparing parameter tuning methods for evolutionary algorithms

    Get PDF
    Abstract — Tuning the parameters of an evolutionary algorithm (EA) to the problem at hand is essential for good algorithm performance. Optimizing parameter values is, however, a non-trivial problem, beyond the limits of human problem solving. In this light it is odd that no parameter tuning algorithms are used widely in evolutionary computing. This paper is meant to be a stepping stone towards a better practice by discussing the most important issues related to tuning EA parameters, describing a number of existing tuning methods, and presenting a modest experimental comparison among them. The paper concludes with suggestions for future research, hopefully inspiring fellow researchers to pursue further work. Index Terms — evolutionary algorithms, parameter tuning. I. BACKGROUND AND OBJECTIVES: Evolutionary Algorithms (EA) form a rich class of stochastic…
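    As a minimal, concrete reference point for what a parameter tuner does, the sketch below implements plain random search with repeated EA runs per configuration to average out the EA's stochasticity. It is not one of the tuners compared in the paper; the parameter ranges, budget, and the run_ea callback are illustrative assumptions.

```python
# Sketch: random-search parameter tuning for a stochastic EA.
# Parameter ranges and the run_ea interface are assumptions for illustration.
import random

PARAM_RANGES = {
    "population_size": (10, 200),
    "mutation_rate": (0.001, 0.3),
    "crossover_rate": (0.1, 1.0),
}

def sample_config(rng: random.Random) -> dict:
    return {"population_size": rng.randint(*PARAM_RANGES["population_size"]),
            "mutation_rate": rng.uniform(*PARAM_RANGES["mutation_rate"]),
            "crossover_rate": rng.uniform(*PARAM_RANGES["crossover_rate"])}

def tune(run_ea, budget: int = 50, repeats: int = 5, seed: int = 1):
    """run_ea(config) -> best fitness of one EA run (higher is better).
    Returns the best configuration found within the tuning budget."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(budget):
        cfg = sample_config(rng)
        score = sum(run_ea(cfg) for _ in range(repeats)) / repeats   # mean utility
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Usage with a stand-in EA: score configurations by a toy, noisy utility function.
dummy_ea = lambda cfg: -abs(cfg["mutation_rate"] - 0.05) + random.gauss(0, 0.01)
print(tune(dummy_ea, budget=20))
```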

    An Enhanced Hardware Description Language Implementation for Improved Design-Space Exploration in High-Energy Physics Hardware Design

    Get PDF
    Detectors in High-Energy Physics (HEP) have increased tremendously in accuracy, speed and integration. Consequently, HEP experiments are confronted with an immense amount of data to be read out, processed and stored. Originally, low-level processing was accomplished in hardware, while more elaborate algorithms were executed on large computing farms. Field-Programmable Gate Arrays (FPGAs) meet HEP's need for ever higher real-time processing performance by providing programmable yet fast digital logic resources. With the fast move of HEP Digital Signal Processing (DSPing) applications into the domain of FPGAs, related design tools are crucial to realise the potential performance gains. This work reviews Hardware Description Languages (HDLs) with respect to the special needs of the HEP digital hardware design process. It is especially concerned with the question of how features outside the scope of mainstream digital hardware design can be implemented efficiently in HDLs. It argues that functional languages are especially suitable for the implementation of domain-specific languages, including HDLs. Case studies examining the implementation complexity of HEP-specific language extensions to the functional HDCaml HDL demonstrate the viability of the suggested approach.
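    HDCaml embeds circuit descriptions in OCaml. As a language-agnostic illustration of why embedding an HDL in a general-purpose host makes domain-specific extensions cheap, the sketch below builds circuits as plain expression trees in Python, so a generator such as a ripple-carry adder is just another library function rather than a change to the language. Everything here (the tiny signal algebra, all names) is an assumption for illustration and is not HDCaml's API.

```python
# Sketch of an embedded HDL: circuits as expression trees built by host-language code.
from dataclasses import dataclass

@dataclass(frozen=True)
class Sig:                                  # a node in the circuit expression tree
    op: str
    args: tuple = ()

def wire(name):  return Sig("wire", (name,))
def band(a, b):  return Sig("and", (a, b))
def bxor(a, b):  return Sig("xor", (a, b))
def bor(a, b):   return Sig("or", (a, b))

def full_adder(a, b, cin):
    s = bxor(bxor(a, b), cin)
    cout = bor(band(a, b), band(cin, bxor(a, b)))
    return s, cout

def ripple_adder(xs, ys, cin):
    """A domain-specific 'extension': an n-bit adder generator written as plain code."""
    sums = []
    for a, b in zip(xs, ys):
        s, cin = full_adder(a, b, cin)
        sums.append(s)
    return sums, cin

# Usage: describe a 4-bit adder; the resulting trees could be walked to emit a netlist.
a_bits = [wire(f"a{i}") for i in range(4)]
b_bits = [wire(f"b{i}") for i in range(4)]
sum_bits, carry = ripple_adder(a_bits, b_bits, wire("cin"))
```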