
    A lightweight intrusion alert fusion system

    Full text link
    In this paper, we present practical experience from our project in implementing an alert fusion mechanism. After investigating most existing alert fusion systems, we found that the current body of work is either weighed down by insecure design or rarely deployed because of its complexity. As confirmed by our experimental analysis, unsuitable mechanisms can easily be submerged by an abundance of useless alerts, and even methods that achieve a high fusion rate with few false positives remain open to attack. To find a solution, we analyzed series of alerts generated from well-known datasets as well as realistic alerts from the Australian Honey-Pot. One important finding is that an alert has a more than 85% chance of being fused within the following 5 alerts. Of particular importance is our design of a novel lightweight Cache-based Alert Fusion Scheme, called CAFS. CAFS not only reduces the quantity of useless alerts generated by an IDS (Intrusion Detection System), but also improves the accuracy of alerts, thereby greatly reducing the cost of fusion processing. We also present reasonable and practical specifications for a target-oriented fusion policy that guarantees the quality of alert fusion and thus seamlessly supports subsequent correlation. Our experimental results show that CAFS easily attains the desired survivable, inescapable alert fusion design. Furthermore, as a lightweight scheme, CAFS is easy to deploy and handles large volumes of alert fusion, improving the utilization of system resources. To the best of our knowledge, our work is a novel exploration of these problems from a survivable, inescapable and deployable point of view.
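    The finding that an alert has a more than 85% chance of being fused within the next 5 alerts motivates a very small fusion cache. A minimal sketch of such a cache-based filter (the signature fields and window size are illustrative assumptions, not the authors' CAFS implementation):

```python
from collections import deque

def fuse_alerts(alerts, window=5):
    """Fuse alerts that repeat a recently seen signature.

    `alerts` is an iterable of (src, dst, alert_type) tuples; an alert
    matching any of the last `window` distinct signatures is dropped
    (fused), all others pass through. Hypothetical simplification of
    a cache-based scheme, not the paper's CAFS code.
    """
    cache = deque(maxlen=window)   # most recent distinct signatures
    kept = []
    for alert in alerts:
        if alert in cache:
            continue               # fused with a cached alert
        cache.append(alert)
        kept.append(alert)
    return kept

alerts = [
    ("10.0.0.1", "10.0.0.9", "portscan"),
    ("10.0.0.1", "10.0.0.9", "portscan"),   # fused (same signature)
    ("10.0.0.2", "10.0.0.9", "bruteforce"),
    ("10.0.0.1", "10.0.0.9", "portscan"),   # fused (still in the window)
]
print(fuse_alerts(alerts))
```

    Because the cache holds only a handful of signatures, both the memory footprint and the per-alert cost stay constant, which is what makes such a scheme lightweight.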

    Failure propagation in GMPLS optical rings: CTMC model and performance analysis

    Get PDF
    Network reliability and resilience have become key design parameters for network operators and Internet service providers, who often seek ways to keep their networks fully operational at least 99.999% of the time, regardless of the number and type of failures that may occur. This article presents a continuous-time Markov chain (CTMC) model to characterise the propagation of failures in optical GMPLS rings. Two types of failures are considered, depending on whether they affect only the control plane, or both the control and data planes of a node. Additionally, it is assumed that control failures propagate along the ring, infecting neighbouring nodes, as stated by the Susceptible-Infected-Disabled (SID) propagation model taken from epidemic-based propagation models. A few numerical examples demonstrate that the CTMC model provides a set of guidelines for selecting appropriate repair rates in order to attain specific availability requirements, both in the control plane and the data plane. This work is partially supported by the Spanish Ministry of Science and Innovation project TEC 2009-10724 and by the Generalitat de Catalunya research support programme (SGR-1202). Additionally, the authors would like to thank the support of the T2C2 Spanish project (under code TIN2008-06739-C04-01) and the CAM-UC3M Greencom research grant (under code CCG10-UC3M/TIC-5624) in the development of this work.
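    The SID dynamics on a ring can be illustrated with a small Gillespie-style simulation. The rates and transition set below are assumptions chosen for illustration (infection of a ring neighbour at rate beta, repair at rate mu, data-plane loss at rate gamma), not the article's exact CTMC:

```python
import random

def time_to_recovery(n=6, beta=1.0, mu=2.0, gamma=0.5, seed=1):
    """Simulate an SID epidemic on an n-node ring and return the time
    until every node is Susceptible again.

    States: S (healthy), I (control plane failed, infectious to ring
    neighbours), D (control and data planes failed). Illustrative
    sketch only; rates and transitions are assumptions.
    """
    rng = random.Random(seed)
    state = ["S"] * n
    state[0] = "I"                          # one initial control failure
    t = 0.0
    while any(s != "S" for s in state):
        events = []                         # (rate, (new_state, node))
        for i, s in enumerate(state):
            if s == "I":
                events.append((mu, ("S", i)))       # repair
                events.append((gamma, ("D", i)))    # data plane fails too
                for j in ((i - 1) % n, (i + 1) % n):
                    if state[j] == "S":
                        events.append((beta, ("I", j)))  # ring propagation
            elif s == "D":
                events.append((mu, ("S", i)))       # repair
        total = sum(rate for rate, _ in events)
        t += rng.expovariate(total)         # exponential holding time
        x = rng.uniform(0.0, total)         # pick an event by its rate
        for rate, (new_state, idx) in events:
            x -= rate
            if x <= 0:
                state[idx] = new_state
                break
        else:                               # float round-off fallback
            new_state, idx = events[-1][1]
            state[idx] = new_state
    return t

print(round(time_to_recovery(), 3))
```

    Averaging this recovery time over many seeds, and sweeping mu, would give a crude empirical counterpart to the availability guidelines the CTMC model provides analytically.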

    New results about multi-band uncertainty in Robust Optimization

    Full text link
    "The Price of Robustness" by Bertsimas and Sim represented a breakthrough in the development of a tractable robust counterpart of Linear Programming problems. However, the central modeling assumption that each uncertain parameter has a single deviation band may be too limiting in practice: experience suggests that deviations are also distributed within the band, so partitioning it into multiple sub-bands to obtain a higher resolution seems advisable. The aim of our work is to close the knowledge gap on the adoption of multi-band uncertainty sets in Robust Optimization: a general definition and an intensive theoretical study of a multi-band model are still missing. Our developments have also been strongly inspired and encouraged by our industrial partners, who are interested in better modeling of arbitrary distributions built on historical data of the uncertainty affecting the considered real-world problems. In this paper, we study the robust counterpart of a Linear Programming problem with an uncertain coefficient matrix under a multi-band uncertainty set. We first show that the robust counterpart corresponds to a compact LP formulation. Then we investigate the problem of separating cuts that impose robustness, and we show that the separation can be carried out efficiently by solving a min-cost flow problem. Finally, we test the performance of our new approach on realistic instances of a Wireless Network Design Problem subject to uncertainty. Comment: 15 pages. The present paper is a revised version of the one that appeared in the Proceedings of SEA 201
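    For context, the single-band worst case of Bertsimas and Sim, which the multi-band model refines, can be evaluated directly: the adversary deviates the Gamma coefficients whose full deviation hurts most. A minimal sketch for one constraint, restricted to integer Gamma (the fractional case and the multi-band generalisation are omitted):

```python
def gamma_worst_case(a, d, x, gamma):
    """Worst-case value of a^T x when at most `gamma` coefficients may
    deviate by their full half-band d_j (single-band Bertsimas-Sim
    model, integer gamma). The adversary picks the gamma coefficients
    with the largest deviation impact d_j * |x_j|.
    """
    nominal = sum(aj * xj for aj, xj in zip(a, x))
    hits = sorted((dj * abs(xj) for dj, xj in zip(d, x)), reverse=True)
    return nominal + sum(hits[:gamma])

# three coefficients, adversary may perturb at most two of them
print(gamma_worst_case(a=[1.0, 2.0, 3.0], d=[0.5, 0.5, 1.0],
                       x=[1.0, 1.0, 1.0], gamma=2))  # 6.0 nominal + 1.5
```

    The multi-band model replaces the single cap `gamma` with per-band cardinality bounds, which is why its separation problem becomes a min-cost flow rather than a simple sort.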

    Cross-layer modeling and optimization of next-generation internet networks

    Get PDF
    Scaling traditional telecommunication networks so that they are able to cope with the volume of future traffic demands and the stringent European Commission (EC) regulations on emissions would entail unaffordable investments. For this very reason, the design of an innovative ultra-high-bandwidth, power-efficient network architecture is nowadays a bold topic within the research community. So far, the independent evolution of network layers has resulted in isolated, and hence far-from-optimal, contributions, which have eventually led to the issues today's networks are facing, such as an inefficient energy strategy, limited network scalability and flexibility, reduced network manageability and increased overall network and customer service costs. Consequently, there is currently large consensus among network operators and the research community that cross-layer interaction and coordination are fundamental for the proper architectural design of next-generation Internet networks. This thesis actively contributes to this goal by addressing the modeling, optimization and performance analysis of a set of potential technologies to be deployed in future cross-layer network architectures. By applying a transversal design approach (i.e., joint consideration of several network layers), we aim to maximize the integration of the different network layers involved in each specific problem. To this end, Part I provides a comprehensive evaluation of optical transport networks (OTNs) based on layer 2 (L2) sub-wavelength switching (SWS) technologies, also taking into consideration the impact of physical layer impairments (PLIs) (L0 phenomena). Indeed, the recent and relevant advances in optical technologies have dramatically increased the impact that PLIs have on the optical signal quality, particularly in the context of SWS networks.
    Then, in Part II of the thesis, we present a set of case studies showing that the application of operations research (OR) methodologies in the design/planning stage of future cross-layer Internet network architectures leads to the successful joint optimization of key network performance indicators (KPIs) such as cost (i.e., CAPEX/OPEX), resource usage and energy consumption. OR can definitely play an important role by allowing network designers/architects to obtain good near-optimal solutions to real-sized problems within practical running times.

    Control Plane Strategies for Elastic Optical Networks

    Get PDF

    A Flexible, Natural Formulation for the Network Design Problem with Vulnerability Constraints

    Get PDF
    Given a graph, a set of origin-destination (OD) pairs with communication requirements, and an integer k ≥ 2, the network design problem with vulnerability constraints (NDPVC) is to identify a subgraph with the minimum total edge costs such that, between each OD pair, there exist a hop-constrained primary path and a hop-constrained backup path after any k − 1 edges of the graph fail. Formulations exist for single-edge failures (i.e., k = 2). To solve the NDPVC for an arbitrary number of edge failures, we develop two natural formulations based on the notion of length-bounded cuts. We compare their strengths and flexibilities in solving the problem for k ≥ 3. We study different methods to separate infeasible solutions by computing length-bounded cuts of a given size. Experimental results show that, for single-edge failures, our formulation increases the number of solved benchmark instances from 61% (obtained within a two-hour limit by the best published algorithm) to more than 95%, thus increasing the number of solved instances by 1,065. Our formulation also accelerates the solution process for larger hop limits and efficiently solves the NDPVC for general k. We test our best algorithm for two to five simultaneous edge failures and investigate the impact of multiple failures on the network design.
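    The feasibility condition behind the NDPVC can be checked by brute force on toy instances: enumerate every set of k − 1 edge failures and test hop-bounded connectivity for each OD pair. A sketch (function names are hypothetical; the paper separates length-bounded cuts precisely to avoid this exponential enumeration):

```python
from itertools import combinations
from collections import deque

def hop_connected(edges, s, t, hop_limit):
    """BFS: is there an s-t path using at most hop_limit edges?"""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if dist[u] == hop_limit:
            continue                       # cannot extend further
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist.get(t, hop_limit + 1) <= hop_limit

def survives(edges, od_pairs, hop_limit, k):
    """NDPVC feasibility check on a candidate subgraph: after the
    removal of ANY k-1 edges, every OD pair must still be joined by a
    path of at most hop_limit hops. Brute force; fine for toy graphs.
    """
    for failed in combinations(edges, k - 1):
        surviving = [e for e in edges if e not in failed]
        for s, t in od_pairs:
            if not hop_connected(surviving, s, t, hop_limit):
                return False
    return True

# 4-node ring; OD pair (0, 1) has a 1-hop primary and a 3-hop backup
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(survives(ring, [(0, 1)], hop_limit=3, k=2))  # True
print(survives(ring, [(0, 1)], hop_limit=2, k=2))  # False: backup needs 3 hops
```

    The second call fails exactly because losing edge (0, 1) leaves only the 3-hop detour, illustrating how the hop limit, not mere connectivity, drives the design.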

    A novel approach to emergency management of wireless telecommunication system

    Get PDF
    Survivability concerns service continuity when components of a system are damaged. This concept is especially useful in the emergency management of a system, as emergencies often involve accidents or disasters that damage the system to some degree. The overall objective of this thesis is to develop a quantitative approach to the emergency management of a wireless cellular telecommunication system in light of its service continuity in emergency situations – namely the survivability of the system. A particular wireless cellular telecommunication system, WCDMA, is taken as an example to ground this research. The thesis proposes an ontology-based paradigm for service management in which the management system contains three models: (1) the work domain model, (2) the dynamic model, and (3) the reconfiguration model. A powerful work domain modeling tool called Function-Behavior-Structure (FBS) is employed to develop the work domain model of the WCDMA system. Petri-net theory, as well as its formalization, is applied to develop the dynamic model of the WCDMA system. A concept from engineering design, the general and specific function concept, is applied to develop a new approach to system reconfiguration for high survivability. These models are implemented along with a user interface that can be used by emergency management personnel. A demonstration of the effectiveness of the approach is included. This thesis study makes several contributions. First, the proposed approach can be added to contemporary telecommunication management systems. Second, the Petri-net model of the WCDMA system is more comprehensive than any dynamic model of telecommunication systems in the literature; furthermore, this model can be extended to any other telecommunication system.
    Third, the proposed system reconfiguration approach, based on the general and specific function concept, offers a unique way to ensure the survivability of any service-provider system. In conclusion, the ontology-based paradigm for service system management provides a total solution to service continuity as well as its emergency management. This paradigm makes the complex mathematical modeling of the system transparent to managerial personnel and provides a feasible scenario for human-in-the-loop management.
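    The Petri-net dynamic model rests on the standard place/transition enabling and firing rule, which a few lines can sketch. The places and transitions below are a hypothetical toy service-continuity net, not the thesis's WCDMA model:

```python
def enabled(marking, pre):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume `pre` tokens, produce `post` tokens."""
    if not enabled(marking, pre):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Toy net: a cell fails, then a repair crew restores service.
m0 = {"cell_up": 1, "cell_down": 0, "repair_crew": 1}
m1 = fire(m0, pre={"cell_up": 1}, post={"cell_down": 1})        # failure
m2 = fire(m1, pre={"cell_down": 1, "repair_crew": 1},
              post={"cell_up": 1, "repair_crew": 1})            # repair
print(m2)
```

    Reachability analysis over such markings is what lets a dynamic model answer survivability questions, e.g. whether service can always be restored from a given damage state.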