
    Heuristic Approach to the Chance Constrained Minimum Spanning K-core Problem

    This thesis presents metaheuristic approaches to solving a novel network design problem under uncertainty. The problem is an extension of a classical k-core based network model known as the minimum spanning k-core problem, which balances the network design objectives of robustness, reachability, and cost effectiveness. The problem is further extended to a probabilistic version, the chance constrained minimum spanning k-core problem. The minimum spanning k-core problem can be used to design robust underlying transportation networks, telecommunication networks, and electrical power distribution networks. In this thesis, a Greedy Randomized Adaptive Search Procedure (GRASP), a metaheuristic approach, is developed to solve both versions of the minimum spanning k-core problem. Computational experiments are performed to study the effectiveness of GRASP on specially designed test instances. The computational results show that GRASP provides good-quality feasible solutions and efficiently solves both versions of the minimum spanning k-core problem.
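The abstract does not reproduce the actual minimum spanning k-core formulation, so as a hedged illustration only, the sketch below applies GRASP's two phases, greedy randomized construction from a restricted candidate list (RCL) and a local search that removes redundant choices, to an invented toy covering instance; all instance data, names, and the `alpha` parameter are assumptions for the example, not the thesis's implementation.

```python
import random

# Toy covering instance standing in for the network design problem:
# a universe of demand points and candidate "links", each with a cost.
UNIVERSE = set(range(10))
CANDIDATES = {            # name -> (points covered, cost)
    "a": ({0, 1, 2, 3}, 4.0),
    "b": ({2, 3, 4, 5}, 3.0),
    "c": ({5, 6, 7}, 2.5),
    "d": ({7, 8, 9}, 3.5),
    "e": ({0, 4, 8}, 2.0),
    "f": ({1, 6, 9}, 2.0),
}

def construct(alpha, rng):
    """Greedy randomized construction: build a feasible solution by
    repeatedly drawing from an RCL of the cheapest effective moves."""
    uncovered, solution = set(UNIVERSE), set()
    while uncovered:
        scores = {}
        for name, (pts, cost) in CANDIDATES.items():
            gain = len(pts & uncovered)
            if name not in solution and gain > 0:
                scores[name] = cost / gain   # cost per newly covered point
        lo, hi = min(scores.values()), max(scores.values())
        rcl = [n for n, s in scores.items() if s <= lo + alpha * (hi - lo)]
        pick = rng.choice(rcl)
        solution.add(pick)
        uncovered -= CANDIDATES[pick][0]
    return solution

def local_search(solution):
    """Drop any candidate whose points are covered by the rest."""
    for name in sorted(solution):
        rest = set().union(*(CANDIDATES[n][0] for n in solution - {name}))
        if CANDIDATES[name][0] <= rest:
            solution = solution - {name}
    return solution

def grasp(iterations=50, alpha=0.3, seed=1):
    """Multi-start loop: construct, improve, keep the best solution."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iterations):
        sol = local_search(construct(alpha, rng))
        cost = sum(CANDIDATES[n][1] for n in sol)
        if cost < best_cost:
            best, best_cost = sol, cost
    return best, best_cost

sol, cost = grasp()
print(sorted(sol), cost)
```

The `alpha` knob interpolates between pure greedy (0) and pure random (1) construction; tuning it per instance family is the usual GRASP design choice.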

    Low Cost Quality of Service Multicast Routing in High Speed Networks

    Many of the services envisaged for high speed networks, such as B-ISDN/ATM, will support real-time applications with large numbers of users. These applications range from those used by closed groups, such as private video meetings or conferences, where all participants must be known to the sender, to those used by open groups, such as video lectures, where participants need not be known by the sender. Such applications will require high volumes of network resources in addition to real-time delay constraints on data delivery. For these reasons, several multicast routing heuristics have been proposed to support both interactive and distribution multimedia services in high speed networks. The objective of such heuristics is to minimise the multicast tree cost while maintaining a real-time bound on delay. Previous evaluation work has compared the relative average performance of some of these heuristics and concluded that they are generally efficient, although some perform better for small multicast groups and others perform better for larger groups. Firstly, we present a detailed analysis and evaluation of some of these heuristics which illustrates that in some situations their average performance is reversed: a heuristic that in general produces efficient solutions for small multicasts may sometimes produce a more efficient solution for a particular large multicast, in a specific network. Also, in a limited number of cases, using Dijkstra's algorithm produces the best result. We conclude that the efficiency of a heuristic solution depends on the topology of both the network and the multicast, and that it is difficult to predict. Because of this unpredictability, we propose the integration of two heuristics with Dijkstra's shortest path tree algorithm to produce a hybrid that consistently generates efficient multicast solutions for all possible multicast groups in any network.
    These heuristics are based on Dijkstra's algorithm, which maintains acceptable time complexity for the hybrid, and they rarely produce inefficient solutions for the same network/multicast. The resulting performance is generally good and, in the rare worst cases, is that of the shortest path tree. The performance of our hybrid is supported by our evaluation results. Secondly, we examine the stability of multicast trees where multicast group membership is dynamic. We conclude that, in general, the more efficient a heuristic's solution is, the less stable the multicast tree will be as group membership changes. For this reason, while the hybrid solution we propose might be suitable for use with closed user group multicasts, which are likely to be stable, we need a different approach for open user group multicasting, where group membership may be highly volatile. We propose an extension to an existing heuristic that ensures multicast tree stability where multicast group membership is dynamic. Although this extension decreases the efficiency of the heuristic's solutions, its performance is significantly better than that of the worst case, a shortest path tree. Finally, we consider how we might apply the hybrid and the extended heuristic in current and future multicast routing protocols for the Internet and for ATM networks.
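The shortest path tree that anchors the hybrid (and serves as its worst-case fallback) can be built with Dijkstra's algorithm and then pruned to the multicast members. A minimal sketch with an invented toy topology; the thesis's actual delay-constrained heuristics are not reproduced here:

```python
import heapq

def dijkstra_spt(adj, source):
    """Shortest-path tree (parent pointers) from `source` via Dijkstra.
    `adj` maps node -> list of (neighbour, edge_cost)."""
    dist = {source: 0.0}
    parent = {source: None}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                      # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, parent

def multicast_tree(adj, source, members):
    """Prune the SPT to the edges actually used to reach `members` --
    the shortest-path-tree multicast baseline."""
    _, parent = dijkstra_spt(adj, source)
    edges = set()
    for m in members:
        node = m
        while parent[node] is not None:
            edges.add((parent[node], node))
            node = parent[node]
    return edges

# Hypothetical 4-node topology, costs chosen for illustration.
adj = {
    "s": [("a", 1), ("b", 4)],
    "a": [("s", 1), ("b", 2), ("c", 5)],
    "b": [("s", 4), ("a", 2), ("c", 1)],
    "c": [("a", 5), ("b", 1)],
}
tree = multicast_tree(adj, "s", ["c"])
```

A cost-minimising heuristic would instead approximate a Steiner tree over the members; the hybrid in the abstract keeps this SPT as the guaranteed-acceptable alternative.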

    Link power coordination for energy conservation in complex communication networks

    Communication networks consume a huge, and rapidly growing, amount of energy. However, much of this consumption is wasted due to the lack of global link power coordination in these complex systems. This paper proposes several link power coordination schemes that achieve energy-efficient routing by progressively putting links into energy-saving mode, thereby aggregating traffic during periods of low traffic load. We show that the achievable energy savings depend not only on the link power coordination schemes but also on the network topologies. In the random network, no scheme significantly outperforms the others. In the scale-free network, when the largest betweenness first (LBF) scheme is used, a phase transition of the network's transmission capacity is observed during the traffic cooling-down phase. Motivated by this, a hybrid link power coordination scheme is proposed that significantly reduces energy consumption in the scale-free network. In a real Internet Service Provider (ISP)'s router-level Internet topology, however, the smallest betweenness first (SBF) scheme significantly outperforms the other schemes.
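As a rough sketch of the largest betweenness first idea, assuming the scheme considers links for energy-saving mode in descending betweenness order while keeping all node pairs reachable (the paper's exact scheme, traffic model, and capacity constraints are not reproduced here); the brute-force betweenness estimate and the toy topology are illustrative assumptions only:

```python
from collections import deque
from itertools import combinations

def bfs_path(adj, src, dst, disabled):
    """One shortest path from src to dst, avoiding `disabled` edges."""
    prev, q = {src: None}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            break
        for v in adj[u]:
            if v not in prev and frozenset((u, v)) not in disabled:
                prev[v] = u
                q.append(v)
    if dst not in prev:
        return None
    path, node = [], dst
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]

def edge_betweenness(adj):
    """Crude edge-betweenness estimate: count how often each edge lies
    on one shortest path per node pair (a stand-in for Brandes)."""
    count = {}
    for s, t in combinations(adj, 2):
        path = bfs_path(adj, s, t, set())
        for u, v in zip(path, path[1:]):
            e = frozenset((u, v))
            count[e] = count.get(e, 0) + 1
    return count

def lbf_sleep(adj):
    """Largest betweenness first: put links into energy-saving mode in
    descending betweenness order while connectivity is preserved."""
    bc = edge_betweenness(adj)
    sleeping = set()
    for e in sorted(bc, key=bc.get, reverse=True):
        trial = sleeping | {e}
        if all(bfs_path(adj, s, t, trial) is not None
               for s, t in combinations(adj, 2)):
            sleeping = trial   # this link can sleep safely
    return sleeping

# Hypothetical undirected topology (both directions listed).
adj = {
    "a": ["b", "c"],
    "b": ["a", "c", "d"],
    "c": ["a", "b", "d"],
    "d": ["b", "c"],
}
sleeping = lbf_sleep(adj)
```

On this toy cycle-rich graph the greedy pass sleeps links until only a spanning tree remains; SBF would simply reverse the sort order.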

    Minimum Cuts in Near-Linear Time

    We significantly improve known time bounds for solving the minimum cut problem on undirected graphs. We use a "semi-duality" between minimum cuts and maximum spanning tree packings combined with our previously developed random sampling techniques. We give a randomized algorithm that finds a minimum cut in an m-edge, n-vertex graph with high probability in O(m log^3 n) time. We also give a simpler randomized algorithm that finds all minimum cuts with high probability in O(n^2 log n) time. This variant has an optimal RNC parallelization. Both variants improve on the previous best time bound of O(n^2 log^3 n). Other applications of the tree-packing approach are new, nearly tight bounds on the number of near-minimum cuts a graph may have and a new data structure for representing them in a space-efficient manner.
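The near-linear algorithm itself is involved; as a hedged illustration of the randomized approach to minimum cuts, here is the basic random contraction algorithm (Karger's earlier, much slower method), not the O(m log^3 n) tree-packing algorithm of this paper:

```python
import random

def contract_min_cut(edges, n_nodes, rng):
    """One run of random contraction: repeatedly merge the endpoints of
    a uniformly random edge until two super-nodes remain, then return
    the number of crossing edges (a candidate cut value)."""
    parent = list(range(n_nodes))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    components = n_nodes
    while components > 2:
        u, v = rng.choice(edges)
        ru, rv = find(u), find(v)
        if ru != rv:                        # skip self-loops
            parent[ru] = rv
            components -= 1
    return sum(1 for u, v in edges if find(u) != find(v))

def min_cut(edges, n_nodes, trials=200, seed=0):
    """Repeat contraction; with enough independent trials the minimum
    cut is returned with high probability."""
    rng = random.Random(seed)
    return min(contract_min_cut(edges, n_nodes, rng)
               for _ in range(trials))

# Two triangles joined by a single bridge edge: the minimum cut is 1.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(min_cut(edges, 6))
```

A single contraction run succeeds with probability at least 2/(n(n-1)), which is why many independent trials are needed; the paper's contribution is avoiding that repetition cost.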

    On the study of jamming percolation

    We investigate kinetically constrained models of glassy transitions, and determine which model characteristics are crucial in allowing a rigorous proof that such models have discontinuous transitions with faster than power law diverging length and time scales. The models we investigate have constraints similar to those of the knights model, introduced by Toninelli, Biroli, and Fisher (TBF), but different neighbor relations. We find that such knights-like models, otherwise known as models of jamming percolation, need a "No Parallel Crossing" rule for the TBF proof of a glassy transition to be valid. Furthermore, most knights-like models fail a "No Perpendicular Crossing" requirement, and thus need modification to be made rigorous. We also show how the "No Parallel Crossing" requirement can be used to evaluate the provable glassiness of other correlated percolation models, by looking at models with more stable directions than the knights model. Finally, we show that the TBF proof does not generalize in any straightforward fashion for three-dimensional versions of the knights-like models.
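The knights model's directional constraints are intricate; as a much simpler, hedged illustration of the culling dynamics underlying such correlated percolation models, here is standard bootstrap-percolation-style culling on a square lattice. This is emphatically not the knights model (it has no directional rules and no discontinuous glassy transition in this simple form); it only shows the generic "iteratively remove insufficiently supported sites" mechanics:

```python
import random

def cull(occupied, m=2):
    """Repeatedly empty any occupied site with fewer than m occupied
    nearest neighbours until the configuration is stable (the classic
    bootstrap/k-core culling rule on the square lattice)."""
    occ = set(occupied)
    changed = True
    while changed:
        changed = False
        for x, y in list(occ):
            nbrs = sum((nx, ny) in occ for nx, ny in
                       ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)))
            if nbrs < m:
                occ.discard((x, y))
                changed = True
    return occ

# Random initial occupation with density p (parameters illustrative).
rng = random.Random(7)
size, p = 20, 0.7
lattice = {(x, y) for x in range(size) for y in range(size)
           if rng.random() < p}
stable = cull(lattice)
print(len(lattice), len(stable))
```

In the knights-like models the support condition involves pairs of next-nearest ("knight-move") neighbours in specific directions, which is what makes the crossing rules in the abstract matter.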

    Measuring Industry Relatedness and Corporate Coherence

    Since the seminal work of Teece et al. (1994), firm diversification has been found to be a non-random process. The hidden deterministic nature of diversification patterns is usually detected by comparing expected (under a null hypothesis) and actual values of some statistic. Nevertheless, the standard approach has two big drawbacks, leaving several issues unanswered. First, using the observed value of a statistic provides noisy and non-homogeneous estimates; second, the expected values are computed under a specific and privileged null hypothesis that introduces spurious random effects. We show that using Monte Carlo p-scores as a measure of relatedness provides cleaner and homogeneous estimates. Using the NBER database on corporate patents, we investigate the effect of assuming different null hypotheses, from the least constrained to the fully constrained, revealing that new features in firm diversification patterns can be captured if random artifacts are ruled out.
    Keywords: corporate coherence; relatedness; null model analysis; patent data
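A minimal sketch of the Monte Carlo p-score idea, under one simple assumed null hypothesis that preserves only each firm's diversification count (the paper compares several progressively constrained nulls, not reproduced here); the firm-by-class data and all names are invented for illustration:

```python
import random
from itertools import combinations

# Toy firm -> technology-class activity data (invented).
FIRMS = {
    "f1": {"A", "B"},
    "f2": {"A", "B", "C"},
    "f3": {"B", "C"},
    "f4": {"A", "D"},
    "f5": {"C", "D"},
}
CLASSES = sorted(set().union(*FIRMS.values()))

def cooccurrence(firms):
    """J[(i, j)]: number of firms active in both classes i and j."""
    J = {pair: 0 for pair in combinations(CLASSES, 2)}
    for held in firms.values():
        for pair in combinations(sorted(held), 2):
            J[pair] += 1
    return J

def p_scores(firms, samples=2000, seed=0):
    """Monte Carlo p-score per class pair: the fraction of null samples
    whose co-occurrence reaches or exceeds the observed value.  The
    null used here reassigns each firm a random class set of the same
    size, preserving diversification counts only."""
    rng = random.Random(seed)
    observed = cooccurrence(firms)
    exceed = {pair: 0 for pair in observed}
    for _ in range(samples):
        shuffled = {f: set(rng.sample(CLASSES, len(held)))
                    for f, held in firms.items()}
        null_J = cooccurrence(shuffled)
        for pair in observed:
            if null_J[pair] >= observed[pair]:
                exceed[pair] += 1
    return {pair: exceed[pair] / samples for pair in observed}

scores = p_scores(FIRMS)
```

A low p-score flags a class pair co-occurring more often than the null predicts, i.e. a related pair; using the p-score rather than the raw count is what homogenises the estimates across pairs of different sizes.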