
    The Vehicle Routing Problem with Service Level Constraints

    We consider a vehicle routing problem that seeks to minimize cost subject to service level constraints on several groups of deliveries. This problem captures essential challenges faced by a logistics provider that operates transportation services for a limited number of partners and must respect contractual obligations on service levels. The problem also generalizes several important classes of vehicle routing problems with profits. To solve it, we propose a compact mathematical formulation, a branch-and-price algorithm, and a hybrid genetic algorithm with population management, which relies on a problem-tailored solution representation, crossover and local search operators, and an adaptive penalization mechanism that establishes a good balance between service levels and costs. Our computational experiments show that the proposed heuristic returns very high-quality solutions for this difficult problem, matches all optimal solutions found for small- and medium-scale benchmark instances, and improves upon existing algorithms for two important special cases: the vehicle routing problem with private fleet and common carrier, and the capacitated profitable tour problem. The branch-and-price algorithm also produces new optimal solutions for all three problems.
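    The adaptive penalization the abstract mentions can be pictured with a minimal sketch: a candidate solution is scored by its routing cost plus a penalty per unit of service-level shortfall in each delivery group. All function names and numbers below are hypothetical illustrations, not the paper's actual operators.

    ```python
    def penalized_cost(routing_cost, served, group_size, required_level, penalty):
        """Score a candidate solution: routing cost plus, for each delivery
        group, a penalty proportional to the shortfall against its
        contractual service level (fraction of deliveries served)."""
        total = routing_cost
        for s, n, r, p in zip(served, group_size, required_level, penalty):
            shortfall = max(0.0, r - s / n)  # zero if the level is met
            total += p * shortfall
        return total

    # Two groups: group 0 meets its 90% level, group 1 misses its 80% level.
    cost = penalized_cost(
        routing_cost=100.0,
        served=[9, 6],
        group_size=[10, 10],
        required_level=[0.9, 0.8],
        penalty=[50.0, 50.0],
    )
    # group 1 shortfall = 0.8 - 0.6 = 0.2, adding 10.0 to the cost
    ```

    In the paper's genetic algorithm the penalty weights are adapted during the search; here they are fixed constants for simplicity.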

    Multilevel Weighted Support Vector Machine for Classification on Healthcare Data with Missing Values

    This work is motivated by the needs of predictive analytics on healthcare data as represented by Electronic Medical Records. Such data is invariably problematic: noisy, with missing entries, and with imbalance in the classes of interest, leading to serious bias in predictive modeling. Since standard data mining methods often produce poor performance measures, we argue for the development of specialized techniques of data preprocessing and classification. In this paper, we propose a new method to simultaneously classify large datasets and reduce the effects of missing values. It is based on a multilevel framework of the cost-sensitive SVM and the expectation-maximization (EM) imputation method for missing values, which relies on iterated regression analyses. We compare classification results of multilevel SVM-based algorithms on public benchmark datasets with imbalanced classes and missing values, as well as on real data in health applications, and show that our multilevel SVM-based method produces fast, accurate, and robust classification results. Comment: arXiv admin note: substantial text overlap with arXiv:1503.0625
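    A minimal sketch of imputation by iterated regression, in the spirit the abstract describes: missing entries start at column means and are repeatedly re-estimated by least-squares regression on the other columns. This is an illustrative stand-in, not the authors' implementation, and the toy data is invented.

    ```python
    import numpy as np

    def iterated_regression_impute(X, mask, n_iter=10):
        """Fill entries where mask is True by repeatedly regressing each
        incomplete column on the remaining columns (ordinary least squares),
        starting from the column means of the observed values."""
        X = X.astype(float).copy()
        for j in range(X.shape[1]):
            m = mask[:, j]
            if m.any():
                X[m, j] = X[~m, j].mean()  # mean initialization
        for _ in range(n_iter):
            for j in range(X.shape[1]):
                m = mask[:, j]
                if not m.any():
                    continue
                # design matrix: all other columns plus an intercept term
                A = np.column_stack([np.delete(X, j, axis=1), np.ones(len(X))])
                beta, *_ = np.linalg.lstsq(A[~m], X[~m, j], rcond=None)
                X[m, j] = A[m] @ beta  # re-impute from the current fit
        return X

    # Toy data: the second column is exactly twice the first; hide one entry.
    X = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0], [4.0, 0.0]])
    mask = np.zeros_like(X, dtype=bool)
    mask[3, 1] = True
    X_imp = iterated_regression_impute(X, mask)  # recovers the value 8.0
    ```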

    Satisficing in multi-armed bandit problems

    Satisficing is a relaxation of maximizing that allows for less risky decision making in the face of uncertainty. We propose two sets of satisficing objectives for the multi-armed bandit problem, where the objective is to achieve reward-based decision-making performance above a given threshold. We show that these new problems are equivalent to various standard multi-armed bandit problems with maximizing objectives and use the equivalence to find bounds on performance. The different objectives can result in qualitatively different behavior; for example, agents explore their options continually in one case and only a finite number of times in another. For the case of Gaussian rewards we show an additional equivalence between the two sets of satisficing objectives that allows algorithms developed for one set to be applied to the other. We then develop variants of the Upper Credible Limit (UCL) algorithm that solve the problems with satisficing objectives and show that these modified UCL algorithms achieve efficient satisficing performance. Comment: To appear in IEEE Transactions on Automatic Control
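    One way to picture a satisficing bandit objective is the sketch below, which uses a UCB1-style confidence bound as a hypothetical stand-in for the paper's UCL algorithm: the agent explores as usual, but commits to any arm whose lower confidence bound clears the reward threshold, so exploration stops once a "good enough" arm is identified.

    ```python
    import math

    def satisficing_ucb(arms, threshold, horizon):
        """Play the arm with the highest upper confidence bound (UCB1-style),
        but once some arm's lower confidence bound clears the satisficing
        threshold, keep playing that arm instead of exploring further."""
        k = len(arms)
        n, mean, choices = [0] * k, [0.0] * k, []
        for t in range(1, horizon + 1):
            bonus = lambda i: math.sqrt(2 * math.log(t) / n[i])
            satisfied = [i for i in range(k)
                         if n[i] > 0 and mean[i] - bonus(i) >= threshold]
            if satisfied:
                i = satisfied[0]          # good enough: stop exploring
            elif t <= k:
                i = t - 1                 # play each arm once to initialize
            else:
                i = max(range(k), key=lambda j: mean[j] + bonus(j))
            r = arms[i]()
            n[i] += 1
            mean[i] += (r - mean[i]) / n[i]  # running average of rewards
            choices.append(i)
        return choices

    # Two deterministic arms; only the second clears the 0.5 threshold.
    choices = satisficing_ucb([lambda: 0.2, lambda: 0.9],
                              threshold=0.5, horizon=200)
    ```

    With deterministic rewards the agent eventually locks onto the satisficing arm, matching the abstract's point that satisficing agents may explore only a finite number of times.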

    Sample- and segment-size specific Model Selection in Mixture Regression Analysis

    As mixture regression models increasingly receive attention from both theory and practice, the question of selecting the correct number of segments gains urgency. A misspecification can lead to under- or oversegmentation, resulting in flawed management decisions on customer targeting or product positioning. This paper presents the results of an extensive simulation study that examines the performance of commonly used information criteria in a mixture regression context with normal data. Unlike previous studies, the performance is evaluated over a broad range of sample/segment size combinations, which are the most critical factors for the effectiveness of the criteria from both a theoretical and a practical point of view. In order to assess the absolute performance of each criterion with respect to chance, the performance is reviewed against so-called chance criteria, derived from discriminant analysis. The results yield recommendations on criterion selection when a certain sample size is given and help to judge what sample size is needed to guarantee an accurate decision based on a given criterion.
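    The criterion-based segment selection the study evaluates can be sketched as follows; the log-likelihoods and parameter counts below are invented toy numbers, chosen only to show how BIC's stronger sample-size-dependent penalty can favor fewer segments than AIC.

    ```python
    import math

    def pick_segments(loglik, n_params, n_obs):
        """Given per-k log-likelihoods and parameter counts, return the
        1-based k minimizing AIC and the k minimizing BIC. BIC's penalty
        grows with log(n_obs), so it punishes extra segments harder."""
        aic = [2 * p - 2 * ll for ll, p in zip(loglik, n_params)]
        bic = [math.log(n_obs) * p - 2 * ll for ll, p in zip(loglik, n_params)]
        return aic.index(min(aic)) + 1, bic.index(min(bic)) + 1

    # Toy numbers: fit improves with k, but with diminishing returns.
    k_aic, k_bic = pick_segments(
        loglik=[-500.0, -460.0, -455.0],   # k = 1, 2, 3 segments
        n_params=[3, 7, 11],               # hypothetical parameter counts
        n_obs=200,
    )
    # AIC keeps the third segment; BIC stops at two.
    ```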

    Recognition and classification of power quality disturbances by DWT-MRA and SVM classifier

    The electrical power system is a large and complex network in which power quality disturbances (PQDs) must be monitored, analyzed, and mitigated continuously in order to preserve and re-establish normal power supply without even slight interruption. In practice, the huge volume of disturbance data is difficult to manage, and its analysis and monitoring demand a high level of accuracy and considerable time. Automatic, intelligent algorithm-based methodologies are therefore used for the detection, recognition, and classification of power quality events. Such an approach can help operators take preventive measures against abnormal operation, and sudden fluctuations in supply can be handled accordingly. Disturbance types and causes, proper extraction of features from single and multiple disturbances, the choice of classification model, and classifier performance remain the main concerns and challenges. This paper presents an approach for the recognition of PQDs based on synthetic, model-generated disturbances that are frequent in power system operation, together with a proposed unique feature vector. Disturbances are generated in the Matlab workspace environment, and distinctive features of the events are extracted through the discrete wavelet transform (DWT). A machine-learning-based support vector machine (SVM) classifier is implemented for the classification and recognition of the disturbances. The results show that the proposed methodology recognizes PQDs with high accuracy, sensitivity, and specificity, illustrating that the approach is valid, efficient, and applicable.
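    As a rough illustration of DWT-based feature extraction for disturbance recognition, the sketch below computes per-level detail-energy features with a hand-rolled Haar wavelet transform (a hypothetical simplification of the paper's DWT-MRA pipeline) and shows that a transient spike raises the fine-scale detail energy a classifier such as an SVM could exploit. The signal and spike are invented test data.

    ```python
    import math

    def haar_dwt_level(signal):
        """One level of the Haar DWT: (approximation, detail) coefficients."""
        a = [(signal[2*i] + signal[2*i+1]) / math.sqrt(2)
             for i in range(len(signal) // 2)]
        d = [(signal[2*i] - signal[2*i+1]) / math.sqrt(2)
             for i in range(len(signal) // 2)]
        return a, d

    def energy_features(signal, levels=3):
        """Detail energy per decomposition level: a compact feature vector."""
        feats, a = [], list(signal)
        for _ in range(levels):
            a, d = haar_dwt_level(a)
            feats.append(sum(x * x for x in d))
        return feats

    # A clean sinusoid vs. the same sinusoid with an added transient spike.
    N = 64
    clean = [math.sin(2 * math.pi * 4 * i / N) for i in range(N)]
    spiky = list(clean)
    spiky[20] += 2.0  # the spike concentrates energy at the finest scale
    ```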

    An Exploratory Study on the Strategic Use of Information Technology in the Source Selection Decision-Making Process

    The strategic use of Information Technology in the acquisition field can be very useful in the decision-making process of evaluating alternative solutions during a Government source selection. Current implementations of information technology provide a more tactical approach to systems development. The use of Electronic Commerce/Electronic Data Interchange and the internet to electronically transfer information is only the beginning of the shift towards a more strategic design process for information systems within Government procurement agencies. A schematic model was designed to demonstrate how information technology, such as Decision Support Systems, Expert Systems, and Shared Data Warehousing, could assist the SSA in selecting the optimal, or best-value, solution. In addition, three source selection evaluation models using management science techniques were designed and developed using Microsoft Excel software. The Sealed Bidding (FAR Part 14) and Competitive Proposal (FAR Part 15) models implemented Integer Linear Programming through Microsoft Excel's SOLVER option. The AFFARS Appendix AA/BB model implemented the multi-criteria Analytical Hierarchy Process.
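    The Analytical Hierarchy Process step can be sketched as deriving criterion weights from a pairwise-comparison matrix via power iteration toward its principal eigenvector; the comparison values below are invented for illustration and are not from the study's models.

    ```python
    def ahp_weights(M, iters=50):
        """Approximate AHP priority weights: power iteration on the
        pairwise-comparison matrix M, renormalizing to sum to 1."""
        n = len(M)
        w = [1.0 / n] * n
        for _ in range(iters):
            v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
            s = sum(v)
            w = [x / s for x in v]  # normalized principal eigenvector
        return w

    # Hypothetical judgments: cost is 3x as important as schedule and
    # 5x as important as risk; schedule is 2x as important as risk.
    M = [[1,   3,   5],
         [1/3, 1,   2],
         [1/5, 1/2, 1]]
    weights = ahp_weights(M)  # cost gets the largest weight
    ```

    In a source selection model, such weights would then score each offeror's evaluated criteria to rank best-value alternatives.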

    Scenario reduction heuristics for a rolling stochastic programming simulation of bulk energy flows with uncertain fuel costs

    Stochastic programming is employed regularly to solve energy planning problems with uncertainties in costs, demands, and other parameters. We formulated a stochastic program to quantify the impact of uncertain fuel costs in an aggregated U.S. bulk energy transportation network model. A rolling two-stage approach with discrete scenarios is implemented to mimic the decision process as realizations of the uncertain elements become known and forecasts of their values in future periods are updated. Compared to the expected value solution from the deterministic model, the recourse solution found from the stochastic model has higher total cost, lower natural gas consumption, and less subregional power trade, but a fuel mix that is closer to what actually occurred. The worth of solving the stochastic program lies in its capacity to better simulate the actual energy flows. Strategies including decomposition, aggregation, and scenario reduction are adopted to reduce the computational burden of the large-scale program caused by the huge number of scenarios. We devised two heuristic algorithms aiming to improve scenario reduction algorithms, which select a subset of scenarios from the original set in order to reduce the problem size. The accelerated forward selection (AFS) algorithm is a heuristic based on the existing forward selection (FS) method. AFS's selection of scenarios is very close to FS's, while AFS greatly outperforms FS in efficiency. We also proposed the TCFS method of forward selection within clusters of transferred scenarios. TCFS clusters scenarios into groups according to their distinct impact on the key first-stage decisions before selecting a representative scenario from each group. In contrast to the problem-independent selection process of FS, by making use of problem information, TCFS achieves excellent accuracy while greatly mitigating the computational burden.
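    The forward selection (FS) idea that AFS and TCFS build on can be sketched greedily in one dimension: repeatedly add the scenario that most reduces the probability-weighted distance from the unselected scenarios to the selected set. This toy version (with invented fuel-cost numbers) is not the authors' algorithm, only the textbook scheme it accelerates.

    ```python
    def forward_selection(scenarios, probs, k):
        """Greedy forward selection for scenario reduction: at each step,
        add the scenario that minimizes the probability-weighted distance
        (here plain 1-D absolute distance) from the scenarios left out
        to their nearest selected representative."""
        idx = list(range(len(scenarios)))
        selected = []
        for _ in range(k):
            best, best_cost = None, float("inf")
            for c in idx:
                if c in selected:
                    continue
                trial = selected + [c]
                cost = sum(
                    probs[j] * min(abs(scenarios[j] - scenarios[s]) for s in trial)
                    for j in idx if j not in trial
                )
                if cost < best_cost:
                    best, best_cost = c, cost
            selected.append(best)
        return sorted(selected)

    # Five equiprobable fuel-cost scenarios in two clusters; keep two.
    scen = [1.0, 1.1, 1.2, 5.0, 5.1]
    sel = forward_selection(scen, [0.2] * 5, 2)
    # one representative is chosen from each cluster
    ```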

    Enhancement and evaluation of Skylab photography for potential land use inventories, part 1

    The author has identified the following significant results. Three sites were evaluated for land use inventory: Finger Lakes - Tompkins County, Lower Hudson Valley - Newburgh, and Suffolk County - Long Island. Special photo enhancement processes were developed to standardize the density range and contrast among S190A negatives. Enhanced black and white enlargements were converted to color by contact printing onto diazo film. A color prediction model related the density values on each spectral band for each category of land use to the spectral properties of the various diazo dyes. The S190A multispectral system proved to be almost as effective as the S190B high resolution camera for inventorying land use. Aggregate error for Level 1 averaged about 12% while Level 2 aggregate error averaged about 25%. The S190A system proved to be much superior to LANDSAT in inventorying land use, primarily because of increased resolution