
    Minimum-cost multicast over coded packet networks

    We consider the problem of establishing minimum-cost multicast connections over coded packet networks, i.e., packet networks where the contents of outgoing packets are arbitrary, causal functions of the contents of received packets. We consider both wireline and wireless packet networks, as well as both static multicast (where membership of the multicast group remains constant for the duration of the connection) and dynamic multicast (where membership of the multicast group changes over time, with nodes joining and leaving the group). For static multicast, we reduce the problem to a polynomial-time solvable optimization problem, and we present decentralized algorithms for solving it. These algorithms, when coupled with existing decentralized schemes for constructing network codes, yield a fully decentralized approach for achieving minimum-cost multicast. By contrast, establishing minimum-cost static multicast connections over routed packet networks is a very difficult problem even with centralized computation, except in the special cases of unicast and broadcast connections. For dynamic multicast, we reduce the problem to a dynamic programming problem and apply the theory of dynamic programming to suggest how it may be solved.
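    As a rough illustration of the static-multicast reduction described above, the following sketch sets up the standard linear program: minimize total edge cost subject to a separate rate-R flow to each sink, with shared variables z_e upper-bounding every sink's flow on each edge. The network, costs, and capacities here are illustrative (the classic unit-cost butterfly), not taken from the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical example: butterfly network with unit edge costs and unit
# capacities, multicast rate R = 2 from s to sinks t1 and t2.
# Variables: z_e (coded subgraph usage of edge e) and, per sink t,
# a flow f^t of rate R that must fit under z.
edges = [("s", "u"), ("s", "v"), ("u", "t1"), ("v", "t2"),
         ("u", "w"), ("v", "w"), ("w", "x"), ("x", "t1"), ("x", "t2")]
nodes = ["s", "u", "v", "w", "x", "t1", "t2"]
sinks = ["t1", "t2"]
R = 2
E, T = len(edges), len(sinks)
nvar = E * (1 + T)                      # layout: [z | f^t1 | f^t2]

c = np.zeros(nvar)
c[:E] = 1.0                             # minimize sum_e cost_e * z_e (unit costs)

# Inequalities f^t_e - z_e <= 0: each sink's flow is bounded by shared z_e.
A_ub = np.zeros((T * E, nvar))
for t in range(T):
    for e in range(E):
        A_ub[t * E + e, E + t * E + e] = 1.0
        A_ub[t * E + e, e] = -1.0
b_ub = np.zeros(T * E)

# Flow conservation for each sink's flow at every node.
A_eq, b_eq = [], []
for t, sink in enumerate(sinks):
    for n in nodes:
        row = np.zeros(nvar)
        for e, (tail, head) in enumerate(edges):
            if tail == n:
                row[E + t * E + e] += 1.0
            if head == n:
                row[E + t * E + e] -= 1.0
        A_eq.append(row)
        b_eq.append(R if n == "s" else (-R if n == sink else 0.0))

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0.0, 1.0)] * nvar, method="highs")
print(res.fun)  # optimal coded multicast cost: all 9 edges used at rate 1
```

At the optimum both sinks' flows share edge w->x at full rate, which is exactly where coding is needed; under these unit capacities no routed solution achieves rate 2, yet the LP above is solvable in polynomial time.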

    Accuracy, Efficiency, and Parallelism in Network Target Coordination Optimization

    The optimal design task of complex engineering systems requires knowledge in various domains. It is thus often split into smaller parts and assigned to different design teams with specialized backgrounds. Decomposition-based optimization is a multidisciplinary design optimization (MDO) technique that models and improves this process by partitioning the whole design optimization task into many manageable sub-problems. These sub-problems can be treated separately, and a coordination strategy is employed to coordinate their couplings and drive their individual solutions to a consistent overall optimum. Many methods have been proposed in the literature, applying mathematical theories in nonlinear programming to decomposition-based optimization and testing them on engineering problems. These methods include Analytical Target Cascading (ATC) using quadratic penalty methods and Augmented Lagrangian Coordination (ALC) using augmented Lagrangian relaxation. The decomposition structure has also been expanded from the special hierarchical structure to the general network structure. However, accuracy, efficiency, and parallelism remain the focus of decomposition-based optimization research when dealing with complex problems, and more work is needed both to improve existing methods and to develop new ones. In this research, a hybrid network partition is proposed, in which additional sub-problems can be either disciplines added to a component network or components added to a discipline network, and two hybrid test problems are formulated. The newly developed consensus optimization method is applied to these test problems and shows good performance. For the ALC method, when the problem partition is given, various alternative structures are analyzed and compared through numerical tests. A new theory of dual residuals based on the Karush-Kuhn-Tucker (KKT) conditions is developed, which leads to a new flexible weight-update strategy for both centralized and distributed ALC. Numerical tests show that optimization accuracy is greatly improved by considering the dual residual in the iteration process. Furthermore, the ALC using the new update is able to converge to a good solution starting from various initial weights, while the traditional update fails to guide the optimization to a reasonable solution when the initial weight lies outside a narrow range. Finally, a new coordination method is developed in this research by utilizing both the ordinary Lagrangian duality theorem and the alternating direction method of multipliers (ADMM). Unlike methods in the literature that employ duality theorems only once, the proposed method uses duality theorems twice, and the resulting algorithm can optimize all sub-problems in parallel while requiring the fewest copies of linking variables. Numerical tests show that the new method consistently reaches more accurate solutions and consumes fewer computational resources than another popular parallel method, the centralized ALC.
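    To give a flavor of the ADMM-style coordination the abstract refers to, here is a minimal consensus sketch with made-up quadratic sub-problems (not the dissertation's actual formulation): two sub-problems each keep a local copy of a single linking variable, their updates run independently and hence in parallel, and a consensus step plus dual update drives the copies together:

```python
import numpy as np

# Illustrative consensus ADMM: two "design teams" each minimize a local
# objective f_i(x) = (x - a_i)^2 over their own copy x_i of one linking
# variable; a consensus variable z coordinates the copies.
a = np.array([1.0, 3.0])    # assumed local targets (made-up data)
rho = 1.0                   # penalty weight
x = np.zeros(2)             # local copies of the linking variable
u = np.zeros(2)             # scaled dual variables (multiplier estimates)
z = 0.0                     # consensus value

for _ in range(200):
    # Local updates: argmin_x (x - a_i)^2 + (rho/2)(x - z + u_i)^2,
    # in closed form for quadratics; independent, hence parallelizable.
    x = (2.0 * a + rho * (z - u)) / (2.0 + rho)
    z = float(np.mean(x + u))   # consensus (coordination) step
    u = u + x - z               # dual update on the constraints x_i = z

print(z)  # converges to 2.0, the minimizer of (x-1)^2 + (x-3)^2
```

Each local update touches only that sub-problem's data, which is the parallelism the abstract emphasizes; the single consensus variable z is the only linking information the sub-problems must share.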