156,243 research outputs found

    Parallel Computation of Large-Scale Nonlinear Network Problems in the Social and Economic Sciences

    In this paper we focus on the parallel computation of large-scale equilibrium and optimization problems arising in the social and economic sciences. In particular, we consider problems which can be visualized and conceptualized as nonlinear network flow problems. The underlying network structure is then exploited in the development of parallel decomposition algorithms. We first consider market equilibrium problems, both dynamic and static, which are formulated as variational inequality problems, and for which we propose parallel decomposition algorithms by time period and by commodity, respectively. We then turn to the parallel computation of large-scale constrained matrix problems, which are formulated as optimization problems, and discuss the results of parallel decomposition by row/column.
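The abstract does not give the algorithmic details, but variational inequality formulations of market equilibria are classically solved by projection-type iterations, where subproblems (e.g., one per commodity) decouple and can run in parallel. The sketch below is a minimal, hypothetical illustration of such a projection method on a toy box-constrained VI; the operator `F`, the feasible set, and the step size are all illustrative assumptions, not the paper's model.

```python
import numpy as np

def solve_vi_projection(F, project, x0, gamma=0.1, tol=1e-8, max_iter=10000):
    """Basic projection method for VI(F, K):
    find x* in K such that F(x*) . (y - x*) >= 0 for all y in K.
    Iterates x <- Proj_K(x - gamma * F(x)) until a fixed point is reached."""
    x = x0.copy()
    for _ in range(max_iter):
        x_new = project(x - gamma * F(x))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy example: two decoupled "commodity" coordinates (illustrative only),
# mimicking how decomposition by commodity yields independent subproblems.
a = np.array([0.3, 0.8])
F = lambda x: x - a                       # strongly monotone operator
project = lambda x: np.clip(x, 0.0, 1.0)  # feasible set K = [0, 1]^2
x_star = solve_vi_projection(F, project, np.zeros(2))
```

Because the operator here is separable, each coordinate update could be dispatched to a different worker, which is the structural property the decomposition schemes in the paper exploit.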

    Optimal Control of Transient Flow in Natural Gas Networks

    We outline a new control system model for the distributed dynamics of compressible gas flow through large-scale pipeline networks with time-varying injections, withdrawals, and control actions of compressors and regulators. The gas dynamics PDEs over the pipelines, together with boundary conditions at junctions, are reduced using lumped elements to a sparse nonlinear ODE system expressed in vector-matrix form using graph-theoretic notation. This system, which we call the reduced network flow (RNF) model, is a consistent discretization of the PDEs for gas flow. The RNF forms the dynamic constraints for optimal control problems for pipeline systems with known time-varying withdrawals and injections and gas pressure limits throughout the network. The objectives include economic transient compression (ETC) and minimum load shedding (MLS), which minimize compression costs or, if that is infeasible, the unfulfilled deliveries, respectively. These continuous functional optimization problems are approximated using the Legendre-Gauss-Lobatto (LGL) pseudospectral collocation scheme to yield a family of nonlinear programs, whose solutions approach the optima with finer discretization. Simulation and optimization of time-varying scenarios on an example natural gas transmission network demonstrate the gains in security and efficiency over methods that assume steady-state behavior.
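As a concrete anchor for the LGL pseudospectral scheme mentioned above, the collocation nodes on [-1, 1] are the endpoints together with the roots of the derivative of the degree-(n-1) Legendre polynomial. A minimal sketch of computing these nodes with NumPy (this is standard pseudospectral machinery, not code from the paper):

```python
import numpy as np

def lgl_nodes(n):
    """Legendre-Gauss-Lobatto collocation nodes on [-1, 1]:
    the endpoints +/-1 plus the roots of P'_{n-1}(x), where
    P_{n-1} is the Legendre polynomial of degree n-1."""
    if n < 2:
        raise ValueError("need at least 2 nodes")
    interior = np.polynomial.legendre.Legendre.basis(n - 1).deriv().roots()
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))
```

Discretizing the ETC/MLS functionals at such nodes is what turns the continuous optimal control problem into the finite-dimensional nonlinear programs the abstract refers to.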

    Modeling of Nonlinear Signal Distortion in Fiber-Optic Networks

    A low-complexity model for signal quality prediction in a nonlinear fiber-optic network is developed. The model, which builds on the Gaussian noise model, takes into account the signal degradation caused by a combination of chromatic dispersion, nonlinear signal distortion, and amplifier noise. The center frequencies, bandwidths, and transmit powers can be chosen independently for each channel, which makes the model suitable for analysis and optimization of resource allocation and routing in large-scale optical networks that apply flexible-grid wavelength-division multiplexing.
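In Gaussian-noise-style models, nonlinear interference power is commonly approximated as growing with the cube of the per-channel launch power, so the SNR is launch power divided by the sum of amplifier (ASE) noise and the cubic nonlinear term. The sketch below illustrates that generic structure; the single coefficient `eta_nl` lumping all nonlinear effects is an illustrative simplification, not the paper's full model.

```python
def gn_model_snr(p_tx_w, p_ase_w, eta_nl):
    """Per-channel SNR under a simplified Gaussian-noise model:
    nonlinear interference power P_NLI = eta_nl * P_tx^3, so
    SNR = P_tx / (P_ASE + eta_nl * P_tx^3)."""
    p_nli = eta_nl * p_tx_w ** 3
    return p_tx_w / (p_ase_w + p_nli)

def optimal_launch_power(p_ase_w, eta_nl):
    """Setting d(SNR)/dP = 0 gives P_ASE - 2 * eta_nl * P^3 = 0,
    i.e. the SNR-optimal launch power where P_NLI = P_ASE / 2."""
    return (p_ase_w / (2.0 * eta_nl)) ** (1.0 / 3.0)
```

This cubic trade-off is why simply raising transmit power does not help indefinitely in nonlinear fiber links, and why per-channel power is a natural optimization variable in the routing and resource-allocation problems the abstract mentions.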

    Scalable Pareto set generation for multiobjective co-design problems in water distribution networks: a continuous relaxation approach

    In this paper, we study the multiobjective co-design problem of optimal valve placement and operation in water distribution networks, addressing the minimization of average pressure and pressure variability indices. The presented formulation considers nodal pressures, pipe flows, and valve locations as decision variables, where binary variables are used to model the placement of control valves. The resulting optimization problem is a multiobjective mixed integer nonlinear optimization problem. As conflicting objectives, average zone pressure and pressure variability cannot be simultaneously optimized. Therefore, we present the concept of Pareto optimal sets to investigate the trade-offs between the two conflicting objectives and evaluate the best compromise. We focus on the approximation of the Pareto front, the image of the Pareto optimal set through the objective functions, using the weighted sum, normal boundary intersection, and normalized normal constraint scalarization techniques. Each of the three methods relies on the solution of a series of single-objective optimization problems, which are mixed integer nonlinear programs (MINLPs) in our case. For the solution of each single-objective optimization problem, we implement a relaxation method that solves a sequence of nonlinear programs (NLPs) whose stationary points converge to a stationary point of the original MINLP. The relaxed NLPs have a sparse structure that comes from the sparse water network graph constraints. In solving the large number of relaxed NLPs, sparsity is exploited by tailored techniques to improve the performance of the algorithms further and render the approaches scalable for large-scale networks. The features of the proposed scalarization approaches are evaluated using a published benchmarking network model.
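The simplest of the three scalarization techniques named above, the weighted sum, sweeps a weight w over [0, 1] and minimizes w*f1 + (1-w)*f2 for each w, collecting one Pareto point per solve. A minimal sketch on a toy biobjective problem with a closed-form scalarized minimizer (the objectives here are illustrative stand-ins, not the pressure indices from the paper):

```python
import numpy as np

def weighted_sum_front(f1, f2, argmin_scalarized, weights):
    """Approximate a Pareto front by sweeping weights: for each w,
    minimize w*f1(x) + (1-w)*f2(x) and record the objective pair."""
    return [(f1(argmin_scalarized(w)), f2(argmin_scalarized(w)))
            for w in weights]

# Toy conflicting objectives: f1 = (x-1)^2, f2 = (x+1)^2.
# Setting the derivative of w*f1 + (1-w)*f2 to zero gives x* = 2w - 1.
f1 = lambda x: (x - 1.0) ** 2
f2 = lambda x: (x + 1.0) ** 2
argmin = lambda w: 2.0 * w - 1.0
front = weighted_sum_front(f1, f2, argmin, np.linspace(0.0, 1.0, 5))
```

In the paper's setting each such single-objective solve is itself an MINLP handled by the continuous relaxation scheme; a known limitation of the weighted sum (which motivates normal boundary intersection and normalized normal constraint) is that it cannot reach non-convex parts of the front.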

    Optimization Algorithms for Machine Learning Designed for Parallel and Distributed Environments

    Get PDF
    This thesis proposes several optimization methods that utilize parallel algorithms for large-scale machine learning problems. The overall theme is network-based machine learning algorithms; in particular, we consider two machine learning models: graphical models and neural networks. Graphical models are unsupervised machine learning methods aiming at recovering conditional dependencies among random variables from observed samples of a multivariable distribution. Neural networks, on the other hand, are methods that learn an implicit approximation to the underlying true nonlinear function from sample data and use that information to generalize to validation data. Training such models amounts to solving an optimization problem. Improvements in current methods of solving the optimization problem for graphical models are obtained by parallelization and the use of a new update and a new step-size selection rule in coordinate descent algorithms designed for large-scale problems. For training deep neural networks, we consider second-order optimization algorithms within trust-region-like optimization frameworks. Deep networks are represented by large-scale weight vectors and are trained on very large datasets, so obtaining second-order information is very expensive for these networks. In this thesis, we undertake an extensive exploration of algorithms that use a small number of curvature evaluations and are hence faster than other existing methods.
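The abstract's coordinate descent theme can be illustrated on the simplest possible instance: minimizing a strongly convex quadratic by exact per-coordinate minimization. The sketch below uses a cyclic update order and a toy 2x2 system; the thesis's actual update and step-size rules for graphical models are not specified in the abstract, so this is a generic baseline, not its method.

```python
import numpy as np

def coordinate_descent(A, b, x0, n_iter=200):
    """Cyclic coordinate descent on f(x) = 0.5 x^T A x - b^T x,
    A symmetric positive definite. Each step sets coordinate i to its
    exact minimizer given the other coordinates fixed:
        x_i <- (b_i - sum_{j != i} A_ij x_j) / A_ii."""
    x = x0.copy()
    for _ in range(n_iter):
        for i in range(len(b)):
            # A[i] @ x includes A[i,i]*x[i]; add it back before dividing
            x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = coordinate_descent(A, b, np.zeros(2))
```

Updates of coordinates that do not interact (zero off-diagonal entries, i.e. sparse dependency graphs) can be performed simultaneously, which is the structural opening for the parallelization the thesis pursues.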