
    Optimization for L1-Norm Error Fitting via Data Aggregation

    We propose a data aggregation-based algorithm with monotonic convergence to a global optimum for a generalized version of the L1-norm error fitting model, under an assumption on the fitting function. The proposed algorithm generalizes a recent algorithm from the literature, aggregate and iterative disaggregate (AID), which selectively solves three specific L1-norm error fitting problems. With the proposed algorithm, any L1-norm error fitting model can be solved to optimality if it follows the form of the L1-norm error fitting problem and its fitting function satisfies the assumption. The algorithm can also solve multi-dimensional fitting problems with arbitrary constraints on the matrix of fitting coefficients. The generalized problem includes popular models such as regression and the orthogonal Procrustes problem. Computational experiments show that the proposed algorithm is faster than state-of-the-art benchmarks for L1-norm regression subset selection and L1-norm regression over a sphere. Further, its relative performance improves as data size increases.
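
    As an illustration of the aggregate-and-disaggregate idea, here is a minimal sketch for the simplest case: L1-norm fitting of a single slope (y ≈ a·x, all x_i nonzero). The data are partitioned into clusters, each cluster is replaced by the sum of its points (which lower-bounds the objective), and any cluster whose members disagree in residual sign at the aggregated optimum is split. This is our own reconstruction of the AID pattern, with hypothetical function names, not the paper's general formulation.

```python
def weighted_median(values, weights):
    # exact minimizer of sum_i w_i * |v_i - a| over a
    pairs = sorted(zip(values, weights))
    half = sum(weights) / 2.0
    acc = 0.0
    for v, w in pairs:
        acc += w
        if acc >= half:
            return v

def l1_slope(xs, ys):
    # min_a sum_i |y_i - a*x_i| is the weighted median of y_i/x_i
    # with weights |x_i| (assumes all x_i != 0)
    return weighted_median([y / x for x, y in zip(xs, ys)],
                           [abs(x) for x in xs])

def aid_l1_slope(xs, ys, k0=2):
    """Aggregate points into clusters (by summing), solve the small aggregated
    problem, and disaggregate any cluster whose residuals mix signs.  The
    aggregated objective lower-bounds the true one and becomes tight (hence
    globally optimal) once every cluster is sign-pure."""
    n = len(xs)
    clusters = [list(range(i, n, k0)) for i in range(k0)]
    while True:
        a = l1_slope([sum(xs[i] for i in C) for C in clusters],
                     [sum(ys[i] for i in C) for C in clusters])
        nxt, clean = [], True
        for C in clusters:
            pos = [i for i in C if ys[i] - a * xs[i] >= 0]
            neg = [i for i in C if ys[i] - a * xs[i] < 0]
            if pos and neg:
                clean = False
                nxt += [pos, neg]      # split the mixed-sign cluster
            else:
                nxt.append(C)
        if clean:
            return a
        clusters = nxt
```

    Each aggregated problem is much smaller than the original, and clusters only split when the current solution proves them inconsistent, which is where the speedup on large data comes from.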

    An Analytical Approximation of the Joint Distribution of Aggregate Queue-Lengths in an Urban Network

    Traditional queueing network models assume infinite queue capacities due to the complexity of capturing interactions between finite capacity queues. Accounting for this correlation can help explain how congestion propagates through a network. The joint queue-length distribution can be accurately estimated through simulation. Nonetheless, simulation is a computationally intensive technique, and its use for optimization purposes is challenging. By modeling the system analytically, we lose accuracy but gain efficiency and adaptability, and can contribute novel information to a variety of congestion-related problems, such as traffic signal optimization. We formulate an analytical technique that combines queueing theory with aggregation-disaggregation techniques to approximate the joint network distribution from an aggregate description of the network. We propose a stationary formulation and consider a tandem network with three queues. The model is validated by comparing the aggregate joint distribution of the three-queue system with exact results determined by simulation over several scenarios; it yields a good approximation of the aggregate joint distribution.
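
    For intuition, the joint distribution that such an approximation targets can be computed exactly for a tiny instance. The sketch below (our own toy construction, not the paper's model) builds a two-queue tandem with finite capacity K as a continuous-time Markov chain, uniformizes it, and power-iterates to the stationary joint distribution; an aggregate description, here the distribution of the total number of jobs, then follows by summing states.

```python
import itertools

def tandem_joint_stationary(lam, mu1, mu2, K, iters=5000):
    """Joint stationary distribution of two finite-capacity queues in tandem
    (arrivals lost when queue 1 is full; queue 1 blocks when queue 2 is full),
    via uniformization and power iteration."""
    states = list(itertools.product(range(K + 1), repeat=2))
    idx = {s: i for i, s in enumerate(states)}
    m = len(states)
    Lam = lam + mu1 + mu2                       # uniformization rate
    P = [[0.0] * m for _ in range(m)]
    for (n1, n2), i in idx.items():
        stay = 1.0
        if n1 < K:                              # external arrival joins queue 1
            P[i][idx[(n1 + 1, n2)]] += lam / Lam; stay -= lam / Lam
        if n1 > 0 and n2 < K:                   # queue-1 completion feeds queue 2
            P[i][idx[(n1 - 1, n2 + 1)]] += mu1 / Lam; stay -= mu1 / Lam
        if n2 > 0:                              # queue-2 completion, job leaves
            P[i][idx[(n1, n2 - 1)]] += mu2 / Lam; stay -= mu2 / Lam
        P[i][i] += stay
    pi = [1.0 / m] * m
    for _ in range(iters):                      # power iteration to stationarity
        pi = [sum(pi[i] * P[i][j] for i in range(m)) for j in range(m)]
    return {s: pi[idx[s]] for s in states}

def aggregate_total_jobs(pi):
    # the kind of aggregate description an analytical network model reasons about
    agg = {}
    for (n1, n2), p in pi.items():
        agg[n1 + n2] = agg.get(n1 + n2, 0.0) + p
    return agg
```

    The state space here grows as (K+1)^n with the number of queues n, which is exactly why the paper approximates the joint distribution rather than enumerating it.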

    Identifying Sparse Low-Dimensional Structures in Markov Chains: A Nonnegative Matrix Factorization Approach

    We consider the problem of learning low-dimensional representations for large-scale Markov chains. We formulate the task of representation learning as that of mapping the state space of the model to a low-dimensional state space, called the kernel space. The kernel space contains a set of meta states, each of which is desired to be representative of only a small subset of the original states. To promote this structural property, we constrain the number of nonzero entries of the mappings between the state space and the kernel space. By imposing the desired characteristics of the representation, we cast the problem as a constrained nonnegative matrix factorization. To compute the solution, we propose an efficient block coordinate gradient descent method and theoretically analyze its convergence properties. Comment: Accepted for publication in American Control Conference (ACC) Proceedings, 202
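
    A minimal sketch of the ingredients, with hypothetical function names and a plain projected-gradient step in place of the paper's exact update rules: alternate gradient steps on the two factors of P ≈ W·H, clip to nonnegativity, and hard-threshold each column of W so that each meta state is supported on at most s original states (assumes s ≤ n).

```python
import random

def matmul(A, B):
    Bt = list(zip(*B))          # materialize columns of B
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def sparse_nmf(P, r, s, steps=300, lr=0.01):
    """Approximate the n x n matrix P as W @ H with W, H >= 0 and at most s
    nonzeros per column of W: alternating projected gradient steps on
    0.5 * ||P - W H||_F^2 with hard-thresholding of W's columns."""
    rng = random.Random(0)
    n = len(P)
    W = [[rng.random() for _ in range(r)] for _ in range(n)]
    H = [[rng.random() for _ in range(n)] for _ in range(r)]
    for _ in range(steps):
        # W block: grad_W = (W H - P) H^T, then clip at 0 and hard-threshold
        R = [[x - p for x, p in zip(rw, rp)] for rw, rp in zip(matmul(W, H), P)]
        G = matmul(R, transpose(H))
        W = [[max(0.0, w - lr * g) for w, g in zip(rw, rg)] for rw, rg in zip(W, G)]
        for j in range(r):
            cut = sorted((W[i][j] for i in range(n)), reverse=True)[s - 1]
            for i in range(n):
                if W[i][j] < cut:   # keep only the s largest entries
                    W[i][j] = 0.0
        # H block: grad_H = W^T (W H - P), then clip at 0
        R = [[x - p for x, p in zip(rw, rp)] for rw, rp in zip(matmul(W, H), P)]
        G = matmul(transpose(W), R)
        H = [[max(0.0, h - lr * g) for h, g in zip(rh, rg)] for rh, rg in zip(H, G)]
    return W, H
```

    The cardinality projection is what keeps each meta state tied to a small subset of original states; without it the factorization would generally mix all states into every meta state.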

    Building GIS Platforms for Spatial Business: A Focus on the Science of Maximizing Location Intelligence Benefits Through Risk/Cost Management

    An ensemble model for aggregating weighted risks and costs is tested in a Monte Carlo simulation with Tomlinson's 22 lower-order risk factors for GIS implementations. The model's basic assumption is that practitioners incorrectly manipulate and transpose risk and cost factors, contributing to suboptimal implementation results. Examples include: (1) violation of Lusser's probability product law, (2) non-use of Galton's 50th-percentile/median as the "wisdom of the crowd" estimate, (3) incorrect use of weighting (if any), (4) dubious ranking of lower-order risk factor importance, and (5) the inability to automatically produce a Bayesian posterior-adjusted cost projection. The ensemble model corrects for these and other errors. Life data analysis and reliability functions from reliability engineering are built into the model to further enhance the results.
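
    To make the first two corrections concrete, here is a hedged sketch with illustrative function names (not Tomlinson's model): Lusser's product law for serial reliability, Galton's median as the crowd estimate, and a Monte Carlo check that overall implementation success is the product of avoiding each independent risk.

```python
import random
import statistics

def system_reliability(component_reliabilities):
    # Lusser's product law: a serial system's reliability is the
    # product of its components' reliabilities
    r = 1.0
    for p in component_reliabilities:
        r *= p
    return r

def crowd_estimate(expert_estimates):
    # Galton's "wisdom of the crowd": the median resists extreme outliers
    return statistics.median(expert_estimates)

def simulate_implementation(risk_probs, trials=10_000, seed=1):
    # Monte Carlo: a run succeeds only when every independent risk is avoided,
    # so the success rate should match Lusser's product of (1 - p_i)
    rng = random.Random(seed)
    hits = sum(all(rng.random() > p for p in risk_probs) for _ in range(trials))
    return hits / trials
```

    Using the mean instead of the median in `crowd_estimate`, or summing instead of multiplying in `system_reliability`, reproduces two of the practitioner errors the abstract lists.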

    An Aggregation Procedure for Simulating Manufacturing Flow Line Models

    We develop a formal method for specifying an aggregate discrete-event simulation model of a production flow line manufacturing system. The methodology operates by aggregating production stations or resources of a flow line. Determining the specifications for representing the aggregated resources in a simulation model is the focus of our presentation. We test the methodology on a set of flow lines with exponentially distributed arrival and service times. Comparisons between analytical and simulation results indicate that the aggregation approach is quite accurate for estimating average part cycle time.
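
    For a flow line with infinite buffers, the simulated cycle time can be checked against the product-form (Jackson network) value, which is the kind of analytical-versus-simulation comparison the abstract reports. A minimal sketch using our own Lindley-style recursion, not the paper's aggregation procedure:

```python
import random

def simulate_flow_line(lam, mus, n_parts=200_000, seed=7):
    """Tandem flow line: Poisson arrivals (rate lam), exponential service
    (rates mus), infinite buffers, FCFS; returns average part cycle time."""
    rng = random.Random(seed)
    arrive = 0.0
    last_done = [0.0] * len(mus)   # previous part's departure time per station
    total = 0.0
    for _ in range(n_parts):
        arrive += rng.expovariate(lam)
        done = arrive
        for k, mu in enumerate(mus):
            start = max(done, last_done[k])      # wait behind the previous part
            done = start + rng.expovariate(mu)
            last_done[k] = done
        total += done - arrive                   # this part's cycle time
    return total / n_parts

def analytical_cycle_time(lam, mus):
    # product-form (Jackson) network: each station behaves as an independent
    # M/M/1 queue, so expected cycle time is the sum of 1 / (mu_k - lam)
    return sum(1.0 / (mu - lam) for mu in mus)
```

    An aggregation procedure like the paper's would replace several stations with one aggregate resource whose parameters are chosen so that this cycle-time estimate is preserved.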

    Spare Parts Logistics and Installed Base Information

    Many of the challenges in spare parts logistics emerge from the combination of large service networks and sporadic, slow-moving demand. Customer heterogeneity and stringent service deadlines entail further challenges. Meanwhile, high revenue rates in service operations motivate companies to invest in and optimize the service logistics function. An important aspect of the spare parts logistics function is its ability to support customer-specific requirements with respect to service deadlines. To support customer-specific operations, many companies actively maintain and utilize installed base data during the forecasting, planning, and execution stages. In this paper, we highlight the potential economic value of installed base data for spare parts logistics. We also discuss various data quality issues associated with the use of installed base data and show that planning performance depends on these quality dimensions.
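
    As a toy illustration of why installed base accuracy matters for planning (our own numbers and function names, not the paper's model): stock is sized from the recorded installed base, demand is driven by the true one, and an undercount directly erodes the achieved fill rate.

```python
import math
import random

def base_stock_level(installed_base, fail_rate, target_fill=0.95):
    # smallest S with P(Poisson(installed_base * fail_rate) <= S) >= target_fill
    mean = installed_base * fail_rate
    s, term = 0, math.exp(-mean)
    cdf = term
    while cdf < target_fill:
        s += 1
        term *= mean / s          # next Poisson pmf term
        cdf += term
    return s

def achieved_fill(true_base, recorded_base, fail_rate, trials=20_000, seed=3):
    """Stock is set from the (possibly wrong) installed base record,
    but period demand comes from the true installed base."""
    rng = random.Random(seed)
    S = base_stock_level(recorded_base, fail_rate)
    mean = true_base * fail_rate
    def poisson():                # count unit-rate arrivals before time `mean`
        t, k = rng.expovariate(1.0), 0
        while t < mean:
            k += 1
            t += rng.expovariate(1.0)
        return k
    return sum(poisson() <= S for _ in range(trials)) / trials
```

    With an accurate record the target fill rate is met; recording only half the true installed base cuts the achieved fill rate roughly in half in this toy setting.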

    A Two-Level Approach to Large Mixed-Integer Programs with Application to Cogeneration in Energy-Efficient Buildings

    We study a two-stage mixed-integer linear program (MILP) with more than 1 million binary variables in the second stage. We develop a two-level approach by constructing a semi-coarse model (coarsened with respect to variables) and a coarse model (coarsened with respect to both variables and constraints). We coarsen binary variables by selecting a small number of pre-specified daily on/off profiles. We aggregate constraints by partitioning them into groups and summing over each group. With an appropriate choice of coarsened profiles, the semi-coarse model is guaranteed to find a feasible solution of the original problem and hence provides an upper bound on the optimal solution. We show that solving a sequence of coarse models converges to the same upper bound in a provably finite number of steps. This is achieved by adding violated constraints to coarse models until all constraints in the semi-coarse model are satisfied. We demonstrate the effectiveness of our approach in cogeneration for buildings. The coarsened models allow us to obtain good approximate solutions at a fraction of the time required by solving the original problem. Extensive numerical experiments show that the two-level approach scales to large problems that are beyond the capacity of state-of-the-art commercial MILP solvers.
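
    The constraint-aggregation step has a simple invariant worth seeing in code: summing the constraints in each group produces a relaxation, so a coarse-feasible point may violate the original model, while every originally feasible point remains coarse-feasible. That is why violated original constraints can be added back until the coarse model agrees with the semi-coarse one. A toy sketch with hypothetical names:

```python
def feasible(A, b, x):
    # componentwise check of A x <= b
    return all(sum(a * xi for a, xi in zip(row, x)) <= bi
               for row, bi in zip(A, b))

def aggregate_constraints(A, b, groups):
    # sum the rows of A (and entries of b) inside each group: fewer, coarser
    # constraints.  Every point feasible for (A, b) stays feasible for the
    # aggregate, but not conversely, so the coarse model is a relaxation.
    Ag = [[sum(A[i][j] for i in g) for j in range(len(A[0]))] for g in groups]
    bg = [sum(b[i] for i in g) for g in groups]
    return Ag, bg
```

    In the paper's scheme, a point returned by the coarse model is checked against the finer constraints, and any violated group is refined before re-solving; since there are finitely many constraints to disaggregate, the loop terminates.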

    Multi-Fidelity Space Mission Planning and Infrastructure Design Framework for Space Resource Logistics

    To build a sustainable and affordable space transportation system for human space exploration, the design and deployment of space infrastructures are critical; one attractive and promising infrastructure system is the in-situ resource utilization (ISRU) system. The design analysis and trade studies for ISRU systems require the consideration of not only the design of the ISRU plant itself but also other infrastructure systems (e.g., storage, power) and various ISRU architecture options (e.g., resource, location, technology). This paper proposes a system-level space infrastructure and logistics design optimization framework to perform architecture trade studies. A new space infrastructure logistics optimization problem formulation is proposed that simultaneously considers infrastructure subsystems' internal interactions and their external synergistic effects with space logistics. Since the full-size version of this formulation can be computationally prohibitive, a new multi-fidelity optimization formulation is developed by varying the granularity of the commodity type definition over the network graph; this multi-fidelity formulation can find an approximate solution to the full-size problem in a computationally efficient manner with little sacrifice in solution quality. The proposed problem formulation and method are applied to a multi-mission lunar exploration campaign to demonstrate their value. Comment: 34 pages, 3 figures, presented at the AIAA Propulsion and Energy Forum 2019, submitted to the Journal of Spacecraft and Rocket
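
    The fidelity knob in such a formulation is the commodity-type granularity. A minimal sketch of that single step, with hypothetical names and commodity classes of our own choosing: fine-grained demands are pooled into coarse classes, shrinking the commodity dimension the network flow model must carry.

```python
def aggregate_commodities(demands, granularity):
    """demands: {(node, commodity): amount}; granularity maps each fine
    commodity type to a coarse class, reducing the number of commodities
    (and hence flow variables) in the logistics network model."""
    coarse = {}
    for (node, commodity), amount in demands.items():
        key = (node, granularity[commodity])
        coarse[key] = coarse.get(key, 0.0) + amount
    return coarse
```

    Applying a fine map on some parts of the network graph and a coarse map elsewhere is what makes the formulation multi-fidelity rather than uniformly coarse.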

    Scheduling Perfectly Periodic Services Quickly with Aggregation

    The problem of scheduling periodic services that have different period lengths seeks a schedule in which the workload is nearly the same in every time unit. A time unit's workload is the sum of the workloads of the services scheduled for that time unit. A level workload minimizes the variability in the resources required and simplifies capacity and production planning. This paper considers the problem in which the schedule for each service must be perfectly periodic and the schedule length is a multiple of the services' period lengths. The objective is to minimize the maximum workload. The problem is strongly NP-hard, but heuristics exist that perform well when the number of services is large. Because many services will have the same period length, we developed a new aggregation approach that separates the problem into subproblems for each period length, uses the subproblem solutions to form aggregate services, schedules these, and then creates a solution to the original instance. We also developed an approach that separates the problem into subproblems based on a partition of the period lengths. Computational experiments show that using aggregation generates high-quality solutions and reduces computational effort. The quality of the partition approach depends on the partition used.
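
    To fix ideas, here is a minimal sketch of the underlying objects, using our own simple greedy heuristic rather than the paper's aggregation scheme: a perfectly periodic schedule assigns each service an offset within its period, the schedule length is the lcm of the periods, and the objective is the maximum per-time-unit workload.

```python
from functools import reduce
from math import lcm

def workload_profile(services, offsets):
    # services: list of (period, workload); slot t serves service j
    # iff t % period_j == offsets[j]; schedule length = lcm of periods
    T = reduce(lcm, (p for p, _ in services))
    load = [0.0] * T
    for (p, w), o in zip(services, offsets):
        for t in range(o, T, p):
            load[t] += w
    return load

def greedy_offsets(services):
    # heaviest-first greedy: give each service the offset that keeps the
    # peak load over its own slots as low as possible
    T = reduce(lcm, (p for p, _ in services))
    load = [0.0] * T
    offsets = [0] * len(services)
    for j in sorted(range(len(services)), key=lambda j: -services[j][1]):
        p, w = services[j]
        best = min(range(p),
                   key=lambda o: max(load[t] + w for t in range(o, T, p)))
        offsets[j] = best
        for t in range(best, T, p):
            load[t] += w
    return offsets
```

    An aggregation approach in the spirit of the paper would first level the services within each period length, then treat each leveled group as a single aggregate service when choosing offsets across groups.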