
    Preserving Both Privacy and Utility in Network Trace Anonymization

    Full text link
    As network security monitoring grows more sophisticated, there is an increasing need to outsource such tasks to third-party analysts. However, organizations are usually reluctant to share their network traces due to privacy concerns over sensitive information, e.g., network and system configuration, which may potentially be exploited for attacks. In cases where data owners are convinced to share their network traces, the data are typically subjected to certain anonymization techniques, e.g., CryptoPAn, which replaces real IP addresses with prefix-preserving pseudonyms. However, most such techniques either are vulnerable to adversaries with prior knowledge about some network flows in the traces, or require heavy data sanitization or perturbation, both of which may result in a significant loss of data utility. In this paper, we aim to preserve both privacy and utility by shifting the trade-off from one between privacy and utility to one between privacy and computational cost. The key idea is for the analysts to generate and analyze multiple anonymized views of the original network traces; the views are designed to be sufficiently indistinguishable even to adversaries armed with prior knowledge, which preserves privacy, whereas one of the views yields true analysis results that are privately retrieved by the data owner, which preserves utility. We present the general approach and instantiate it based on CryptoPAn. We formally analyze the privacy of our solution and experimentally evaluate it using real network traces provided by a major ISP. The results show that our approach can significantly reduce the level of information leakage (e.g., less than 1% of the information leaked by CryptoPAn) with comparable utility.
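To illustrate the prefix-preserving property that the abstract attributes to CryptoPAn, here is a minimal Python sketch of prefix-preserving IP pseudonymization. It is not the actual CryptoPAn construction (which uses a keyed AES-based pseudorandom function and a specific key schedule); the HMAC-based bit function and the demo key are assumptions made purely for illustration.

```python
# Illustrative sketch of prefix-preserving IP pseudonymization (NOT the real
# CryptoPAn algorithm, which uses a keyed AES-based PRF). Property shown:
# if two addresses share their first k bits, so do their pseudonyms.
import hmac, hashlib, ipaddress

KEY = b"demo-key"  # hypothetical key for this sketch only

def _bit(prefix_bits: str) -> int:
    """Pseudorandom bit derived from the key and the already-processed prefix."""
    digest = hmac.new(KEY, prefix_bits.encode(), hashlib.sha256).digest()
    return digest[0] & 1

def anonymize(ip: str) -> str:
    bits = format(int(ipaddress.IPv4Address(ip)), "032b")
    out = []
    for i, b in enumerate(bits):
        # Flip bit i according to a PRF of the original i-bit prefix; this is
        # exactly what makes the mapping prefix-preserving.
        out.append(str(int(b) ^ _bit(bits[:i])))
    return str(ipaddress.IPv4Address(int("".join(out), 2)))

if __name__ == "__main__":
    a, b = anonymize("192.168.1.10"), anonymize("192.168.1.77")
    print(a, b)  # inputs share a 24-bit prefix, so the pseudonyms do too
```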

    Complex railway systems: capacity and utilisation of interconnected networks

    Get PDF
    Introduction: Worldwide, the transport sector faces several issues related to rising traffic demand, such as congestion, energy consumption, noise, pollution, and safety. To stem the problem, the European Commission is encouraging a modal shift towards railway, considered one of the key factors for the development of a more sustainable European transport system. The desired increase in railway's share of transport demand over the next decades, and the attempt to open up the rail market (for freight, international and, recently, also local services), strengthen the attention to capacity usage of the system. This contribution proposes a synthetic methodology for the capacity and utilisation analysis of complex interconnected rail networks; the procedure has a dual scope, since it allows both a theoretically robust examination of suburban rail systems and a solid approach that can be applied, with few additional and consistent assumptions, to feasibility or strategic analyses of wide networks (by efficiently exploiting Big Data and/or available Open Databases).

    Method: The approach proposes a schematisation of the typical elements of a rail network (stations and line segments) to be applied when more detailed data are lacking; in the authors' opinion, the strengths of the presented procedure stem from the flexibility of the applied synthetic methods and from the joint analysis of nodes and lines. After building a quasi-automatic model to carry out several analyses by changing the boundary conditions or assumptions, the article also presents some general reference charts (abacuses) showing the variability of capacity/utilisation of the network's elements as a function of basic parameters.

    Results: This has helped in both of the presented case studies: one focuses on a detailed analysis of the Naples suburban node, while the other broadens the horizon by examining the whole European rail network, with a more specific zoom on the Belgian area. The first application shows how the procedure can be applied when fine-grained data are available and for metropolitan/regional analysis, allowing a precise detection of possible bottlenecks in the system and the identification of possible interventions to relieve the high usage rate of these elements. The second application represents an ongoing attempt to provide a broad analysis of capacity and related parameters for the entire European railway system. It explores the potential of the approach and the possible exploitation of different 'Open and Big Data' sources, but the outcomes underline the necessity of relying on proper and adequate information; the accuracy of the results depends significantly on the design and precision of the input database.

    Conclusion: The proposed methodology aims to evaluate capacity and utilisation rates of rail systems at different geographical scales and according to data availability; the outcomes might provide valuable information to allow efficient exploitation and deployment of railway infrastructure, better supporting policy (e.g. investment prioritisation, rail infrastructure access charges) and helping to minimise costs for users. The presented case studies show that the method allows indicative evaluations of the use of the system and comparative analyses between different elementary components, providing a first identification of 'weak' links or nodes for which specific and detailed analyses should then be carried out, taking into account in more depth their actual configuration, technical characteristics and the real composition of the traffic (i.e. other elements influencing rail capacity, such as the adopted operating systems, the station traffic/route control & safety systems, the elastic release of routes, the overlap of block sections, etc.).
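As a rough illustration of the kind of synthetic utilisation figure such a methodology produces for a line segment, the sketch below divides the total occupancy time of scheduled trains by the analysis window. The train categories, volumes and minimum headways are invented, and this is not the authors' exact formulation.

```python
# Minimal sketch of a synthetic line-segment utilisation estimate (hypothetical
# parameters; not the exact formulation used by the authors). Utilisation is
# taken as total occupancy time of scheduled trains over the analysis window.
def segment_utilisation(trains_per_hour: dict, min_headway_s: dict,
                        window_s: float = 3600.0) -> float:
    """trains_per_hour / min_headway_s map train categories (e.g. 'suburban',
    'freight') to their hourly volume and minimum headway in seconds."""
    occupied = sum(n * min_headway_s[cat] for cat, n in trains_per_hour.items())
    return occupied / window_s

# Example: 8 suburban and 2 freight trains per hour on a shared segment.
u = segment_utilisation({"suburban": 8, "freight": 2},
                        {"suburban": 180, "freight": 300})
print(f"utilisation rate: {u:.0%}")  # about 57% of the hourly window occupied
```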

    RIS-Assisted Over-the-Air Adaptive Federated Learning with Noisy Downlink

    Full text link
    Over-the-air federated learning (OTA-FL) exploits the inherent superposition property of wireless channels to integrate communication and model aggregation. Though a naturally promising framework for wireless federated learning, it requires care to mitigate physical-layer impairments. In this work, we consider a heterogeneous edge-intelligent network with different edge device resources and non-i.i.d. user dataset distributions, under a general non-convex learning objective. We leverage Reconfigurable Intelligent Surface (RIS) technology to augment the OTA-FL system over simultaneously time-varying, noisy uplink and downlink communication channels under an imperfect CSI scenario. We propose a cross-layer algorithm that jointly optimizes the RIS configuration and the communication and computation resources in this general, realistic setting. Specifically, we design dynamic local update steps in conjunction with RIS phase shifts and transmission power to boost learning performance. We present a convergence analysis of the proposed algorithm, and show in numerical results that it outperforms the existing unified approach under system heterogeneity and imperfect CSI. Comment: Appeared in 2023 IEEE ICC Workshop on Edge Learning over 5G Mobile Networks and Beyond
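The core over-the-air aggregation idea can be shown in a few lines of numpy: clients pre-scale their updates, the channel superposes them, and the server de-scales the noisy sum. The channel model, scaling and noise level below are assumptions for a toy example; the paper's RIS phase optimization, power control and dynamic local steps are not modeled.

```python
# Toy numpy sketch of over-the-air model aggregation: K clients transmit scaled
# model updates that superpose on the uplink, and the server de-scales the
# noisy sum. RIS optimization and imperfect CSI from the paper are omitted.
import numpy as np

rng = np.random.default_rng(0)
K, d = 10, 32                      # clients, model dimension
updates = rng.normal(size=(K, d))  # stand-ins for local model updates

h = rng.normal(size=K) + 1j * rng.normal(size=K)   # uplink channel coefficients
eta = 1.0                                          # common scaling factor

# Each client pre-equalizes its channel (assuming perfect knowledge of h_k),
# so the transmitted signals add up coherently over the air.
tx = np.array([np.sqrt(eta) / h[k] * updates[k] for k in range(K)])
noise = 0.05 * (rng.normal(size=d) + 1j * rng.normal(size=d))
rx = (h[:, None] * tx).sum(axis=0) + noise

global_update = np.real(rx) / (np.sqrt(eta) * K)   # noisy estimate of the mean
print(np.linalg.norm(global_update - updates.mean(axis=0)))
```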

    Combining a Dispersal Model with Network Theory to Assess Habitat Connectivity

    Get PDF
    Assessing the potential for threatened species to persist and spread within fragmented landscapes requires the identification of core areas that can sustain resident populations and dispersal corridors that can link these core areas with isolated patches of remnant habitat. We developed a set of GIS tools, simulation methods, and network analysis procedures to assess potential landscape connectivity for the Delmarva fox squirrel (DFS; Sciurus niger cinereus), an endangered species inhabiting forested areas on the Delmarva Peninsula, USA. Information on the DFS's life history and dispersal characteristics, together with data on the composition and configuration of land cover on the peninsula, was used as input for an individual-based model to simulate dispersal patterns of millions of squirrels. Simulation results were then assessed using methods from graph theory, which quantify habitat attributes associated with local and global connectivity. Several bottlenecks to dispersal were identified that were not apparent from simple distance-based metrics, highlighting specific locations for landscape conservation, restoration, and/or squirrel translocations. Our approach links simulation models, network analysis, and available field data in an efficient and general manner, making these methods useful and appropriate for assessing the movement dynamics of threatened species within landscapes being altered by human and natural disturbances.
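The graph-theory step described here can be sketched as follows: patch-to-patch dispersal success is converted into a weighted network, which is then screened for bottlenecks with betweenness centrality. The patch identifiers and dispersal probabilities are invented, and the study's GIS tools and individual-based dispersal model are not reproduced.

```python
# Sketch of the graph-theory step: turn simulated patch-to-patch dispersal
# success into a network and flag potential bottlenecks via betweenness.
# Patch IDs and probabilities below are invented for illustration; in the study
# they come from an individual-based dispersal model driven by GIS land cover.
import math
import networkx as nx

dispersal = [  # (source patch, target patch, simulated dispersal probability)
    ("A", "B", 0.30), ("B", "C", 0.25), ("C", "D", 0.05),
    ("D", "E", 0.28), ("B", "E", 0.02), ("E", "F", 0.22),
]

G = nx.Graph()
for u, v, p in dispersal:
    # Use -log(p) as an effective "resistance": likely links become short edges.
    G.add_edge(u, v, weight=-math.log(p))

# Patches that carry many of the most probable dispersal routes are candidate
# bottlenecks that simple distance-based metrics can miss.
bottlenecks = nx.betweenness_centrality(G, weight="weight")
for patch, score in sorted(bottlenecks.items(), key=lambda kv: -kv[1]):
    print(patch, round(score, 2))
```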

    Constrained Network Modularity

    Get PDF
    Static representations of protein interaction networks (PIN) reflect measurements referring to a variety of conditions, including time. To partially bypass this limitation, gene expression information is usually integrated in the network to measure its "activity level." In general, the entire PIN modular organization (complexes, pathways) can reveal changes of configuration whose functional significance depends on biological annotation. However, since network dynamics are based on the presence of different conditions, leading to comparisons between normal and disease states or between networks observed sequentially in time, our working hypothesis refers to the analysis of differential networks based on varying modularity and uncertainty. Two popular methods, k-core and Q-modularity, were applied and evaluated over a reference yeast dataset comprising a PIN of literature-curated data obtained from the fusion of heterogeneous measurement sources. While the functional aspect of interest is the cell cycle and the corresponding interactions were isolated, the PIN dynamics were externally induced by time-course measured gene expression values, which we consider one of the "modularity drivers." Notably, due to the nature of such expression values, referred to the "just-in-time method," we could specialize our approach according to three constrained modular configurations, then comparatively assessed through local entropy measures.
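The two decompositions named in the abstract are readily available in networkx; the sketch below applies k-core filtering and modularity (Q) optimization to a toy graph and summarizes the resulting partition with a Shannon entropy of community sizes. The toy graph and the entropy summary are stand-ins, not the paper's yeast PIN or its exact local entropy measures.

```python
# Sketch of the two modular decompositions named above, on a toy graph:
# k-core filtering and modularity (Q) optimization, with a Shannon entropy of
# community sizes as a crude stand-in for the paper's local entropy measures.
import math
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()          # placeholder for a literature-curated PIN

core3 = nx.k_core(G, k=3)           # densely connected "core" of the network
communities = greedy_modularity_communities(G)
Q = nx.algorithms.community.modularity(G, communities)

sizes = [len(c) for c in communities]
total = sum(sizes)
entropy = -sum((s / total) * math.log2(s / total) for s in sizes)

print(f"3-core nodes: {core3.number_of_nodes()}, Q = {Q:.2f}, "
      f"module-size entropy = {entropy:.2f} bits")
```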

    Development of Neural Network Model for Predicting Crucial Product Properties or Yield for Optimisation of Refinery Operation

    Get PDF
    Refinery optimisation requires accurate prediction of crucial product properties and the yield of desired products. Neural network modeling is an alternative to prediction using mathematical correlations. The project is an extension of previous research conducted by the university on product yield and property prediction using a non-linear regression method. The objectives of this project are to develop a framework for the application of neural network modeling in predicting refinery product yield and properties, to develop neural network models for three case studies (predicting crude distillation yield, diesel pour point and hydrocracker total gasoline yield), and to evaluate the suitability of using neural network modeling for predicting refinery product yield and properties. The project methodologies used are literature research and computer modeling using the MATLAB neural network toolbox. The framework developed for neural network modeling includes aspects such as process understanding, data collection and division, input element selection, data preprocessing, network type selection, design of network architecture, learning algorithm selection, network training, and network simulation using a new data set. Various configurations of the neural network model were tested to choose the best model to represent each case study. The selected model has the smallest mean squared error when simulated using test data. The results are presented in the form of the network configuration that gives the smallest MSE, plots comparing the actual output with the output predicted by the neural network, as well as residual analysis results to determine the range of deviation between the actual and predicted output. Although the accuracy of the output predicted by the neural network model requires further improvement, in general the study has shown the tremendous potential of using neural networks for predicting refinery product yield and properties. Suggestions for future study in the area include improvement of model accuracy using advanced methods such as cross-training and stacked networks, integration of the neural network with the plant's Advanced Process Control as an inferential property predictor, and study of an inverted network for use in a neural network-based controller.
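The model-selection workflow described here (try several network configurations, keep the one with the smallest test-set MSE) can be sketched with scikit-learn in place of the MATLAB Neural Network Toolbox. The synthetic data, candidate architectures and hyperparameters are assumptions for illustration only.

```python
# Sketch of the model-selection workflow described above, using scikit-learn in
# place of the MATLAB Neural Network Toolbox: try several hidden-layer
# configurations and keep the one with the smallest MSE on held-out test data.
# X/y are synthetic placeholders for process variables and a product property.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 6))                      # 6 process variables
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

best = None
for hidden in [(5,), (10,), (10, 5)]:              # candidate architectures
    model = MLPRegressor(hidden_layer_sizes=hidden, max_iter=2000,
                         random_state=0).fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, model.predict(X_te))
    if best is None or mse < best[1]:
        best = (hidden, mse)

print(f"selected architecture {best[0]} with test MSE {best[1]:.4f}")
```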

    Distance distribution in configuration model networks

    Get PDF
    We present analytical results for the distribution of shortest path lengths between random pairs of nodes in configuration model networks. The results, which are based on recursion equations, are shown to be in good agreement with numerical simulations for networks with degenerate, binomial and power-law degree distributions. The mean, mode and variance of the distribution of shortest path lengths are also evaluated. These results provide expressions for central measures and dispersion measures of the distribution of shortest path lengths in terms of moments of the degree distribution, illuminating the connection between the two distributions. Comment: 28 pages, 7 figures. Accepted for publication in Phys. Rev.
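A numerical companion to such analytical results is easy to set up with networkx: generate a configuration model network for a chosen degree sequence and tabulate the empirical shortest-path-length distribution, its mean, mode and variance. The Poisson degree sequence and network size below are arbitrary choices for illustration.

```python
# Build a configuration model network for a chosen degree sequence and tabulate
# the empirical shortest-path-length distribution (mean, mode, variance), the
# quantities that the recursion equations predict analytically.
from collections import Counter
import networkx as nx
import numpy as np

rng = np.random.default_rng(1)
degrees = [int(d) for d in rng.poisson(3, size=1000)]  # binomial-like degrees
if sum(degrees) % 2:                                   # degree sum must be even
    degrees[0] += 1

G = nx.configuration_model(degrees, seed=1)
G = nx.Graph(G)                                        # drop parallel edges
G.remove_edges_from(list(nx.selfloop_edges(G)))        # drop self-loops
G = G.subgraph(max(nx.connected_components(G), key=len))

lengths = [d for _, targets in nx.all_pairs_shortest_path_length(G)
           for d in targets.values() if d > 0]
counts = Counter(lengths)
print("mean:", np.mean(lengths), "mode:", counts.most_common(1)[0][0],
      "variance:", np.var(lengths))
```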

    SDN Architecture and Southbound APIs for IPv6 Segment Routing Enabled Wide Area Networks

    Full text link
    The SRv6 architecture (Segment Routing based on the IPv6 data plane) is a promising solution to support services like Traffic Engineering, Service Function Chaining and Virtual Private Networks in IPv6 backbones and datacenters. The SRv6 architecture has interesting scalability properties, as it reduces the amount of state information that needs to be configured in the nodes to support the network services. In this paper, we describe the advantages of complementing the SRv6 technology with an SDN-based approach in backbone networks. We discuss the architecture of an SRv6-enabled network based on Linux nodes. In addition, we present the design and implementation of the Southbound API between the SDN controller and the SRv6 device. We have defined a data model and four different implementations of the API, based respectively on gRPC, REST, NETCONF and a remote Command Line Interface (CLI). Since it is important to support both development and testing, we have realized an Intent-based emulation system to build realistic and reproducible experiments. This collection of tools automates most of the configuration aspects, relieving the experimenter of significant effort. Finally, we have evaluated some performance aspects of our architecture and of the different variants of the Southbound APIs, and we have analyzed the effects of configuration updates in the SRv6-enabled nodes.
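To give a feel for what a REST-style southbound exchange of this kind looks like, the sketch below has a controller ask a Linux SRv6 node agent to install an IPv6 route with a segment list. The endpoint URL, port and JSON schema are hypothetical and do not reflect the paper's actual data model or API.

```python
# Hypothetical illustration of a REST-style southbound call: the controller asks
# a Linux SRv6 node agent to install an IPv6 route with an SRH encapsulation
# (a list of segments). The endpoint and JSON fields are invented for this
# sketch and are not the paper's actual API or data model.
import json
import urllib.request

route = {
    "destination": "fd00:83::/64",           # traffic to steer
    "encapmode": "encap",
    "segments": ["fcff:2::1", "fcff:3::1"],  # SID list to traverse
    "device": "eth0",
}

req = urllib.request.Request(
    url="http://192.0.2.10:8080/srv6/route",    # hypothetical node agent
    data=json.dumps(route).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:       # the agent would then apply the
    print(resp.status, resp.read().decode())    # route on the Linux data plane
```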

    Predicting epidemic risk from past temporal contact data

    Full text link
    Understanding how epidemics spread in a system is a crucial step to prevent and control outbreaks, with broad implications for the system's functioning, health, and associated costs. This can be achieved by identifying the elements at higher risk of infection and implementing targeted surveillance and control measures. One important ingredient to consider is the pattern of disease-transmission contacts among the elements; however, a lack of data or delays in providing updated records may hinder its use, especially for time-varying patterns. Here we explore to what extent it is possible to use past temporal data of a system's pattern of contacts to predict the risk of infection of its elements during an emerging outbreak, in the absence of updated data. We focus on two real-world temporal systems: a trade network of livestock displacements among animal holdings, and a network of sexual encounters in high-end prostitution. We define a node's loyalty as a local measure of its tendency to maintain contacts with the same elements over time, and uncover important non-trivial correlations with the node's epidemic risk. We show that a risk assessment analysis incorporating this knowledge and based on past structural and temporal pattern properties provides accurate predictions for both systems. Its generalizability is tested by introducing a theoretical model for generating synthetic temporal networks. High accuracy of our predictions is recovered across different settings, while the amount of possible predictions is system-specific. The proposed method can provide crucial information for the setup of targeted intervention strategies. Comment: 24 pages, 5 figures + SI (18 pages, 15 figures)
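One natural way to implement the "loyalty" measure described in the abstract is the Jaccard index of a node's contact sets in two consecutive time windows, which is what the sketch below computes; the toy snapshots stand in for the livestock-trade or sexual-contact data, and the sketch follows the abstract's description rather than the paper's exact code.

```python
# Sketch of a node "loyalty" measure: the overlap (Jaccard index) between a
# node's contact sets in two consecutive time windows. The toy snapshots are
# placeholders for the temporal contact data analysed in the paper.
def loyalty(neighbors_t0: set, neighbors_t1: set) -> float:
    union = neighbors_t0 | neighbors_t1
    return len(neighbors_t0 & neighbors_t1) / len(union) if union else 0.0

snapshot_t0 = {"farm_A": {"farm_B", "farm_C"}, "farm_D": {"farm_E"}}
snapshot_t1 = {"farm_A": {"farm_B", "farm_F"}, "farm_D": {"farm_G"}}

for node in snapshot_t0:
    print(node, loyalty(snapshot_t0[node], snapshot_t1.get(node, set())))
# farm_A keeps one of its two partners (loyalty 1/3); farm_D keeps none
# (loyalty 0). The paper correlates such scores with each node's epidemic risk.
```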