
    Neur2RO: Neural Two-Stage Robust Optimization

    Robust optimization provides a mathematical framework for modeling and solving decision-making problems under worst-case uncertainty. This work addresses two-stage robust optimization (2RO) problems (also called adjustable robust optimization), wherein first-stage and second-stage decisions are made before and after uncertainty is realized, respectively. This results in a nested min-max-min optimization problem which is extremely challenging computationally, especially when the decisions are discrete. We propose Neur2RO, an efficient machine-learning-driven instantiation of column-and-constraint generation (CCG), a classical iterative algorithm for 2RO. Specifically, we learn to estimate the value function of the second-stage problem via a novel neural network architecture that is easy to optimize over by design. Embedding our neural network into CCG yields high-quality solutions quickly, as evidenced by experiments on two 2RO benchmarks, knapsack and capital budgeting. For knapsack, Neur2RO finds solutions within roughly 2% of the best-known values in a few seconds, compared to the three hours required by the state-of-the-art exact branch-and-price algorithm; on larger and more complex instances, Neur2RO finds even better solutions. For capital budgeting, Neur2RO outperforms three variants of the k-adaptability algorithm, particularly on the largest instances, with a 5- to 10-fold reduction in solution time. Our code and data are available at https://github.com/khalil-research/Neur2RO
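
    The CCG-with-learned-value-function idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: `first_stage_candidates`, `scenarios`, and `value_net(x, s)` are hypothetical stand-ins for the main-problem MILP, the uncertainty set, and the trained neural surrogate of the second-stage value.

```python
# Hedged sketch of column-and-constraint generation (CCG) with a learned
# second-stage value estimator. All inputs are hypothetical stand-ins.

def ccg_with_surrogate(first_stage_candidates, scenarios, value_net,
                       max_iters=20, tol=1e-6):
    """Alternate between choosing x against the scenarios found so far and
    asking the surrogate for the scenario that hurts that x the most."""
    active = [scenarios[0]]              # initial scenario pool
    x, worst_all = None, float("inf")
    for _ in range(max_iters):
        # Main step: first-stage decision minimizing the estimated worst case
        # over the current scenario pool (a MILP in the real algorithm).
        x = min(first_stage_candidates,
                key=lambda xc: max(value_net(xc, s) for s in active))
        worst_pool = max(value_net(x, s) for s in active)
        # Adversarial step: scenario maximizing the estimated recourse cost.
        s_star = max(scenarios, key=lambda s: value_net(x, s))
        worst_all = value_net(x, s_star)
        if worst_all <= worst_pool + tol:    # no violated scenario remains
            break
        active.append(s_star)                # add it and re-solve the main step
    return x, worst_all

# Toy usage: two binary items with uncertain second-stage penalties.
cands = [(0, 1), (1, 0), (1, 1)]
scens = [0.2, 0.8, 1.5]
est = lambda xc, s: s * (2 - sum(xc))        # stand-in for the trained network
print(ccg_with_surrogate(cands, scens, est))
```

    In Neur2RO both steps are solved as optimization problems with the network embedded in the formulation rather than by enumeration as in this toy loop.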

    Large-scale unit commitment under uncertainty: an updated literature survey

    The Unit Commitment problem in energy management aims at finding the optimal production schedule of a set of generation units while meeting various system-wide constraints. It has always been a large-scale, non-convex, difficult problem, especially since, due to operational requirements, it has to be solved in an unreasonably small time for its size. Recently, growing renewable energy shares have strongly increased the level of uncertainty in the system, making the (ideal) Unit Commitment model a large-scale, non-convex, and uncertain (stochastic, robust, chance-constrained) program. We provide a survey of the literature on methods for the Uncertain Unit Commitment problem, in all its variants. We start with a review of the main contributions on solution methods for the deterministic versions of the problem, focusing on those based on mathematical programming techniques that are most relevant for the uncertain versions. We then present and categorize the approaches to the latter, while providing entry points to the relevant literature on optimization under uncertainty. This is an updated version of the paper "Large-scale Unit Commitment under uncertainty: a literature survey" that appeared in 4OR 13(2), 115--171 (2015); this version has over 170 more citations, most of which appeared in the last three years, showing how fast the literature on uncertain Unit Commitment evolves and how strong the interest in this subject remains
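
    As a point of reference for the deterministic core of the problem surveyed above, here is a hedged toy Unit Commitment model with invented numbers, using the PuLP solver; a realistic model adds ramping limits, minimum up/down times, reserves, and the uncertainty handling that the survey focuses on.

```python
# Toy deterministic Unit Commitment: commit units (binary u) and set their
# output (continuous p) to meet demand at minimum cost. Illustrative data only.
# Requires the PuLP package (pip install pulp).
import pulp

demand = [150.0, 220.0, 180.0]                  # MW per period (made up)
units = {                                       # (p_min, p_max, cost/MWh, fixed cost)
    "coal": (50.0, 200.0, 20.0, 500.0),
    "gas":  (20.0, 100.0, 35.0, 150.0),
    "peak": (10.0,  60.0, 60.0,  50.0),
}

prob = pulp.LpProblem("toy_unit_commitment", pulp.LpMinimize)
u = {(g, t): pulp.LpVariable(f"u_{g}_{t}", cat="Binary")
     for g in units for t in range(len(demand))}
p = {(g, t): pulp.LpVariable(f"p_{g}_{t}", lowBound=0)
     for g in units for t in range(len(demand))}

# Objective: fuel cost plus fixed commitment cost.
prob += pulp.lpSum(units[g][2] * p[g, t] + units[g][3] * u[g, t]
                   for g in units for t in range(len(demand)))

for t, d in enumerate(demand):
    prob += pulp.lpSum(p[g, t] for g in units) >= d        # meet demand
    for g, (pmin, pmax, _, _) in units.items():
        prob += p[g, t] >= pmin * u[g, t]                   # min output if on
        prob += p[g, t] <= pmax * u[g, t]                   # max output if on

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```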

    Related-Key Differential Attack on Round Reduced RECTANGLE-80

    RECTANGLE is a newly proposed lightweight block cipher which allows fast implementations on multiple platforms by using bit-slice techniques. It is an iterative 25-round SPN block cipher with a 64-bit block size and an 80-bit or 128-bit key size. To date, few cryptanalytic results on the cipher have been published; these include an attack on an 18-round reduced version proposed by the designers themselves. In this paper, we find all 15-round differential characteristics with 26--30 active S-boxes for given input, output, and round-subkey differences, which have a total probability of 2^{-60.5}. Based on these differential characteristics, we extend the corresponding distinguisher by 2 rounds backward and forward, respectively, and propose an attack on the 19-round reduced RECTANGLE-80 with a data complexity of 2^{62} plaintexts, a time complexity of about 2^{67.42} encryptions, and a memory complexity of 2^{72}. These data and time complexities are much lower than those of the designers' attack on the 18-round reduced RECTANGLE-80

    MACHS: Mitigating the Achilles Heel of the Cloud through High Availability and Performance-aware Solutions

    Cloud computing is continuously growing as a business model for hosting information and communication technology applications. However, many concerns arise regarding the quality of service (QoS) offered by the cloud. One major challenge is the high availability (HA) of cloud-based applications. The key to achieving availability requirements is to develop an approach that is immune to cloud failures while minimizing service level agreement (SLA) violations. To this end, this thesis addresses the HA of cloud-based applications from different perspectives. First, the thesis proposes a component HA-aware scheduler (CHASE) to manage the deployments of carrier-grade cloud applications while maximizing their HA and satisfying the QoS requirements. Second, a Stochastic Petri Net (SPN) model is proposed to capture the stochastic characteristics of cloud services and quantify the expected availability offered by an application deployment. The SPN model is then associated with an extensible policy-driven cloud scoring system that integrates other cloud challenges (i.e., green and cost concerns) with HA objectives. The proposed HA-aware solutions are extended to include a live virtual machine migration model that provides a trade-off between migration time and downtime while maintaining the HA objective. Furthermore, the thesis proposes a generic input template for cloud simulators, GITS, to facilitate the creation of cloud scenarios while ensuring reusability, simplicity, and portability. Finally, an availability-aware CloudSim extension, ACE, is proposed. ACE extends the CloudSim simulator with failure injection, computational paths, repair, failover, load balancing, and other availability-based modules
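
    As a hedged back-of-the-envelope illustration of the availability quantification goal mentioned above: the thesis uses a Stochastic Petri Net model, which this simple steady-state formula does not attempt to reproduce; the numbers are invented.

```python
# Steady-state availability of a single component and of a redundant pair.
def steady_state_availability(mttf_hours, mttr_hours):
    """Long-run fraction of time a single component is up."""
    return mttf_hours / (mttf_hours + mttr_hours)

# Two redundant VMs hosting the same component (illustrative numbers):
a_vm = steady_state_availability(mttf_hours=2000, mttr_hours=4)
a_redundant_pair = 1 - (1 - a_vm) ** 2           # fails only if both are down
print(f"single VM: {a_vm:.5f}, redundant pair: {a_redundant_pair:.7f}")
```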

    Differential Analysis on Simeck and SIMON with Dynamic Key-guessing Techniques

    The Simeck family of lightweight block ciphers was proposed at CHES 2015; it combines good design components from the NSA-designed ciphers SIMON and SPECK. Dynamic key-guessing techniques were proposed by Wang et al. to greatly reduce the key space guessed in differential cryptanalysis, and they work well on SIMON. In this paper, we implement the dynamic key-guessing techniques in a program that automatically generates the data needed in the dynamic key-guessing procedure, and thus simplify the security evaluation of SIMON- and Simeck-like block ciphers against differential attacks. We use the differentials from Kölbl et al.'s work, as well as a differential with lower Hamming weight that we find using a Mixed Integer Linear Programming method, to attack 22-round Simeck32, 28-round Simeck48 and 35-round Simeck64. Besides, we launch the same attack procedure on four members of the SIMON family using the differentials newly proposed at CRYPTO 2015 and obtain new attack results on 22-round SIMON32/64, 24-round SIMON48/96, 28- and 29-round SIMON64/96, and 29- and 30-round SIMON64/128. To the best of our knowledge, our results on SIMON64 are currently the best

    Improved self-management of datacenter systems applying machine learning

    Autonomic Computing is a Computer Science research area, originated in the mid-2000s, that focuses on the optimization and improvement of complex distributed computing systems through self-control and self-management. As distributed computing systems grow in complexity, like multi-datacenter systems in cloud computing, system operators and architects need more help to understand, design, and optimize these systems manually, all the more when the systems are distributed around the world and belong to different entities and authorities. Self-management lets these distributed computing systems improve their resource and energy management, an important issue when resources carry a cost to obtain, run, or maintain. Here we propose to improve Autonomic Computing techniques for resource management by applying modeling and prediction methods from Machine Learning and Artificial Intelligence. Machine Learning methods can find accurate models of system behavior, often with intelligible explanations, and can predict and infer system states and values. Models obtained from automatic learning have the advantage of being easily updated to workload or configuration changes by re-collecting examples and re-training the predictors. So, employing automatic modeling and prediction, we can find new methods for making "intelligent" decisions and discovering new information and knowledge from systems. This thesis moves from the current state of the art, where management is based on administrator expertise, well-known data, ad-hoc algorithms and models, and elements studied from the point of view of a single computing machine, to a novel state of the art where management is driven by models learned from the system itself, providing useful feedback and making up for incomplete, missing, or uncertain data, from the point of view of a global network of datacenters.
    - First, we cover the scenario where the decision maker knows all pieces of information about the system: how much each job will consume, what the desired quality of service is and will be, what the deadlines for the workload are, etc., focusing on each component and policy of each element involved in executing these jobs.
    - Then we focus on the scenario where, instead of fixed oracles that provide information from an expert formula or set of conditions, machine learning is used to create these oracles. Here we look at components and specific details while some part of the information is unknown and must be learned and predicted.
    - We reduce the problem of optimizing resource allocations and requirements for virtualized web services to a mathematical problem, indicating each factor, variable, and element involved, as well as all the constraints the scheduling process must respect. The scheduling problem can be modeled as a Mixed Integer Linear Program. Here we face the scenario of a full datacenter, and we further introduce some information prediction.
    - We complement the model by expanding the predicted elements, studying the main resources (CPU, memory, and IO), which can suffer from noise, inaccuracy, or unavailability. Once learned predictors for certain components improve decision making, the system can become more "expert-knowledge independent", and research can focus on a scenario where all the elements provide noisy, uncertain, or private information. We also introduce new factors into the management optimization, since context and costs may differ for each datacenter, turning the model into a "multi-datacenter" one.
    - Finally, we review the cost of placing datacenters depending on green energy sources and distribute the load according to green energy availability
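
    A minimal sketch of the "learned oracle" idea described above (not the thesis code; data and feature names are invented): replace a fixed expert formula for per-job CPU demand with a model trained on past observations and feed its predictions to the scheduler.

```python
# Hedged sketch: train a regressor on past resource-usage observations and use
# its predictions in place of a hand-tuned estimate. Toy data, for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy history: [requests_per_sec, avg_payload_kb] -> observed CPU share.
X_hist = np.array([[100, 4], [250, 4], [400, 8], [600, 8], [900, 16]])
y_cpu = np.array([0.08, 0.18, 0.35, 0.52, 0.85])

oracle = LinearRegression().fit(X_hist, y_cpu)   # retrain as new samples arrive

# Predicted demand for incoming jobs replaces the fixed expert formula that a
# consolidation MILP (or heuristic packer) would otherwise rely on.
incoming = np.array([[300, 8], [700, 16]])
predicted_cpu = oracle.predict(incoming)
print(dict(zip(["job_a", "job_b"], predicted_cpu.round(3))))
```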

    Speededness in Achievement Testing: Relevance, Consequences, and Control

    As examinations and assessments are often used to control access to educational programs and to assess successful participation in an educational program, their fairness and validity are of great importance. A controversially discussed aspect of standardized tests is setting time limits on tests and how this practice can result in test speededness. Regardless of whether a test should be speeded or not, being able to deliberately control the speededness of tests is desirable. For this purpose, van der Linden (2011a, 2011b) proposed an approach to control the speededness of tests in automated test assembly (ATA) using mixed integer linear programming and a lognormal response time model. However, the approach by van der Linden (2011a, 2011b) has an important limitation, in that it is restricted to the two-parameter lognormal response time model, which assumes equal speed sensitivities (i.e., factor loadings) across items. This thesis demonstrates that otherwise parallel test forms with differential speed sensitivities are indeed unfair for specific test-takers. Furthermore, an extension of the van der Linden approach is introduced which incorporates speed sensitivities in ATA. Additionally, test speededness can undermine the fairness of a test if identical but differently ordered test forms are used. To prevent the score of test-takers from depending on whether easy or difficult items are located at the end of a test form, it is proposed that the same, most time-intensive items should be placed at the end of all test forms.
The thesis also provides introductions and tutorials on using the R package eatATA for ATA and using Stan and rstan for Bayesian hierarchical response time modeling. Finally, the thesis discusses alternatives, practical implications, and limitations of the proposed approaches and provides an outlook on future related research topics
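
    For readers unfamiliar with the response time model referenced above, here is a hedged notational sketch (the symbols are mine, not necessarily the thesis') of the lognormal model with item-specific speed sensitivities and of the expected-time constraint used to control speededness in ATA.

```latex
% Lognormal response time model with item-specific speed sensitivity \varphi_i:
% T_{ij} is the time person j with speed \tau_j spends on item i with time
% intensity \beta_i and precision \alpha_i.
\[
  \ln T_{ij} \sim N\bigl(\beta_i - \varphi_i \tau_j,\ \alpha_i^{-2}\bigr),
  \qquad
  \mathrm{E}[T_{ij}] = \exp\Bigl(\beta_i - \varphi_i \tau_j + \frac{1}{2\alpha_i^{2}}\Bigr).
\]
% Speededness control in ATA: the items selected into a form (x_i = 1) must
% fit the time limit t_max at a reference speed level \tau_j.
\[
  \sum_{i} \mathrm{E}[T_{ij}]\, x_i \le t_{\max}, \qquad x_i \in \{0,1\}.
\]
```

    The two-parameter model criticized above corresponds to the special case with all speed sensitivities equal to one.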

    Proceedings of the 2010 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    At the annual Joint Workshop of the Fraunhofer IOSB and the Karlsruhe Institute of Technology (KIT), Vision and Fusion Laboratory, the students of both institutions present their latest research findings on image processing, visual inspection, pattern recognition, tracking, SLAM, information fusion, non-myopic planning, world modeling, security in surveillance, interoperability, and human-computer interaction. This book is a collection of 16 reviewed technical reports from the 2010 Joint Workshop

    Data-Intensive Computing in Smart Microgrids

    Microgrids have recently emerged as the building block of a smart grid, combining distributed renewable energy sources, energy storage devices, and load management in order to improve power system reliability, enhance sustainable development, and reduce carbon emissions. At the same time, rapid advancements in sensor and metering technologies, wireless and network communication, as well as cloud and fog computing are leading to the collection and accumulation of large amounts of data (e.g., device status data, energy generation data, consumption data). The application of big data analysis techniques (e.g., forecasting, classification, clustering) on such data can optimize the power generation and operation in real time by accurately predicting electricity demands, discovering electricity consumption patterns, and developing dynamic pricing mechanisms. An efficient and intelligent analysis of the data will enable smart microgrids to detect and recover from failures quickly, respond to electricity demand swiftly, supply more reliable and economical energy, and enable customers to have more control over their energy use. Overall, data-intensive analytics can provide effective and efficient decision support for all of the producers, operators, customers, and regulators in smart microgrids, in order to achieve holistic smart energy management, including energy generation, transmission, distribution, and demand-side management. This book contains an assortment of relevant novel research contributions that provide real-world applications of data-intensive analytics in smart grids and contribute to the dissemination of new ideas in this area