    Data-driven scenario generation for two-stage stochastic programming

    Optimisation under uncertainty has always been a focal point within the Process Systems Engineering (PSE) research agenda. In particular, the efficient manipulation of large amounts of data for the uncertain parameters is a crucial condition for effectively tackling stochastic programming problems. In this context, this work proposes a new data-driven Mixed-Integer Linear Programming (MILP) model for the Distribution & Moment Matching Problem (DMP). For cases with multiple uncertain parameters, a copula-based simulation of initial scenarios is employed as a preliminary step. Moreover, integrating clustering methods with the DMP in the proposed model is shown to enhance computational performance. Finally, we compare the proposed approach with state-of-the-art scenario generation methodologies. Through a number of case studies we highlight the benefits regarding the quality of the generated scenario trees by evaluating the corresponding stochastic solutions.
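
    The paper's MILP formulation is not given in the abstract; as a rough sketch of the moment-matching idea it builds on, the snippet below chooses scenario values and probabilities so that the resulting discrete distribution reproduces a set of target moments, using smooth least squares (scipy) as a deliberately simple stand-in for the MILP. The target moments and scenario count are illustrative assumptions.

        # Simplified moment matching: pick scenario values and probabilities
        # whose discrete distribution reproduces target mean/variance/skewness.
        # (Least-squares stand-in for the paper's MILP formulation.)
        import numpy as np
        from scipy.optimize import minimize

        def generate_scenarios(targets, n_scenarios=5, seed=0):
            mean, var, skew = targets
            rng = np.random.default_rng(seed)

            def unpack(x):
                values = x[:n_scenarios]
                logits = x[n_scenarios:]
                probs = np.exp(logits) / np.exp(logits).sum()  # softmax keeps probs on the simplex
                return values, probs

            def loss(x):
                v, p = unpack(x)
                m = p @ v                                      # scenario mean
                s2 = p @ (v - m) ** 2                          # scenario variance
                g = p @ (v - m) ** 3 / max(s2, 1e-12) ** 1.5   # scenario skewness
                return (m - mean) ** 2 + (s2 - var) ** 2 + (g - skew) ** 2

            x0 = np.concatenate([mean + np.sqrt(var) * rng.standard_normal(n_scenarios),
                                 np.zeros(n_scenarios)])
            res = minimize(loss, x0, method="Nelder-Mead",
                           options={"maxiter": 20000, "fatol": 1e-12})
            return unpack(res.x)

        values, probs = generate_scenarios((10.0, 4.0, 0.5))   # illustrative targets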

    Generating realistic scaled complex networks

    Research on generative models is a central project in the emerging field of network science; it studies how statistical patterns found in real networks could be generated by formal rules. Output from these generative models is then the basis for designing and evaluating computational methods on networks, and for verification and simulation studies. During the last two decades, a variety of models have been proposed with the ultimate goal of achieving comprehensive realism for the generated networks. In this study, we (a) introduce a new generator, termed ReCoN; (b) explore how ReCoN and some existing models can be fitted to an original network to produce a structurally similar replica; (c) use ReCoN to produce networks much larger than the original exemplar; and finally (d) discuss open problems and promising research directions. In a comparative experimental study, we find that ReCoN is often superior to many other state-of-the-art network generation methods. We argue that ReCoN is a scalable and effective tool for modeling a given network while preserving important properties at both micro- and macroscopic scales, and for scaling the exemplar data by orders of magnitude in size. (Comment: 26 pages, 13 figures, extended version; a preliminary version of the paper was presented at the 5th International Workshop on Complex Networks and their Applications.)
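
    ReCoN itself is not specified in the abstract, so the sketch below only illustrates the general fit-and-scale workflow the study evaluates: estimate a structural property of the exemplar (here, just its degree sequence), replicate it k times, and sample a graph of k-fold size. A networkx configuration model serves as a deliberately simple stand-in for ReCoN.

        # Fit-and-scale sketch (not the ReCoN algorithm): replicate the
        # exemplar's degree sequence k times and sample a larger graph.
        import networkx as nx

        def scaled_replica(g, scale=4, seed=42):
            seq = [d for _, d in g.degree()] * scale   # k copies of the degree sequence
            if sum(seq) % 2:                           # configuration model needs an even stub count
                seq[0] += 1
            h = nx.configuration_model(seq, seed=seed)
            h = nx.Graph(h)                            # collapse parallel edges
            h.remove_edges_from(nx.selfloop_edges(h))  # drop self-loops
            return h

        original = nx.karate_club_graph()
        replica = scaled_replica(original, scale=4)
        print(original.number_of_nodes(), replica.number_of_nodes())  # 34 136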

    Bayesian Approaches for Modelling Flood Damage Processes

    Flood damage processes are influenced by the three components of flood risk: hazard, exposure, and vulnerability. In comparison to hazard and exposure, the vulnerability component, though equally important, is often generalised in many flood risk assessments by a simple depth-damage curve. Hence, this thesis developed a robust statistical method to quantify the role of private precaution in reducing the flood vulnerability of households. In Germany, the role of private precaution was found to be very significant in reducing flood damage (11 to 15 thousand euros per household). Also, flood loss models whose structure, parameterization, and choice of explanatory variables are based on expert knowledge and data-driven methods were successful in capturing changes in vulnerability, which makes them suitable for future risk assessments. Due to significant uncertainty in the underlying data and model assumptions, flood loss models always carry uncertainty around their predictions. This thesis develops Bayesian approaches for flood loss modelling, using assumptions regarding damage processes as priors and available empirical data as evidence for updating. Thus, these models provide flood loss predictions as a distribution that potentially accounts for variability in the damage processes and uncertainty in the model assumptions. The models presented in this thesis improve on state-of-the-art flood loss models in terms of prediction capability and model applicability. In particular, the choice of a Beta response distribution improved the reliability of loss predictions compared with the popular Gaussian or non-parametric distributions, and the hierarchical Bayesian approach resulted in an improved parameterization of the common stage-damage functions that replaces empirical data requirements with region- and event-specific expert knowledge, thereby enhancing its predictive capabilities during spatiotemporal transfer.
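
    As a minimal sketch of the Beta-response idea (not the thesis's full hierarchical model), the following PyMC snippet regresses the mean relative loss on water depth through a logit link and uses a Beta likelihood, so predictions come out as full distributions on (0, 1). The data are synthetic and all priors are illustrative assumptions.

        # Beta-regression sketch of a stage-damage function with PyMC.
        import numpy as np
        import pymc as pm

        rng = np.random.default_rng(1)
        depth = rng.uniform(0.1, 3.0, 200)                 # synthetic water depths (m)
        mu_true = 1 / (1 + np.exp(-(-2.0 + 1.2 * depth)))  # synthetic mean relative loss
        rloss = rng.beta(mu_true * 8, (1 - mu_true) * 8)   # observed relative loss in (0, 1)

        with pm.Model():
            a = pm.Normal("a", 0.0, 2.0)                   # intercept prior (assumed)
            b = pm.Normal("b", 0.0, 2.0)                   # depth-effect prior (assumed)
            phi = pm.HalfNormal("phi", 10.0)               # Beta precision
            mu = pm.math.invlogit(a + b * depth)           # mean relative loss given depth
            pm.Beta("loss", alpha=mu * phi, beta=(1 - mu) * phi, observed=rloss)
            idata = pm.sample(1000, tune=1000, progressbar=False)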

    Dynamic Procurement of New Products with Covariate Information: The Residual Tree Method

    Problem definition: We study the practice-motivated problem of dynamically procuring a new, short life-cycle product under demand uncertainty. The firm does not know the demand for the new product but has data on similar products sold in the past, including demand histories and covariate information such as product characteristics. Academic/practical relevance: The dynamic procurement problem has long attracted academic and practitioner interest, and we solve it in an innovative data-driven way with proven theoretical guarantees. This work is also the first to leverage the power of covariate data in solving this problem. Methodology: We propose a new, combined forecasting and optimization algorithm called the Residual Tree method, and analyze its performance via epi-convergence theory and computations. Our method generalizes the classical Scenario Tree method by using covariates to link historical data on similar products to construct demand forecasts for the new product. Results: We prove, under fairly mild conditions, that the Residual Tree method is asymptotically optimal as the size of the data set grows. We also numerically validate the method for problem instances derived using data from the global fashion retailer Zara. We find that ignoring covariate information leads to systematic bias in the optimal solution, translating to a 6–15% increase in the total cost for the problem instances under study. We also find that solutions based on trees using just 2–3 branches per node, which is common in the existing literature, are inadequate, resulting in 30–66% higher total costs compared with our best solution. Managerial implications: The Residual Tree is a new and generalizable approach that uses past data on similar products to manage new product inventories. We also quantify the value of covariate information and of granular demand modeling.
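
    The abstract suggests the core mechanics: regress historical demand on covariates, then reuse the empirical residuals to turn the new product's point forecast into demand scenarios. The single-stage sketch below shows that idea; variable names and data are invented, and the actual method builds multi-stage trees rather than a flat scenario set.

        # Forecast-plus-residuals sketch behind the Residual Tree idea.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        X_past = rng.normal(size=(50, 3))                    # covariates of 50 past products
        demand_past = 100 + X_past @ np.array([20., -5., 8.]) + rng.normal(0, 10, 50)

        model = LinearRegression().fit(X_past, demand_past)
        residuals = demand_past - model.predict(X_past)      # empirical forecast errors

        x_new = rng.normal(size=(1, 3))                      # covariates of the new product
        scenarios = model.predict(x_new)[0] + residuals      # one demand scenario per residual
        probs = np.full(len(scenarios), 1 / len(scenarios))  # equally weighted scenarios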

    STCP: Receiver-agnostic Communication Enabled by Space-Time Cloud Pointers

    During the last decade, mobile communication technologies have rapidly evolved, and ubiquitous network connectivity is nearly achieved. However, we observe that there are critical situations in which none of the existing mobile communication technologies is usable. Such situations are often found when messages need to be delivered to arbitrary persons or devices that are located in a specific space at a specific time. For instance, at a disaster scene, current communication methods are incapable of delivering a rescuer's messages to the group of people in a specific area, even when their cellular connections are alive, because the rescuer cannot specify the receivers of the messages. We name this the receiver-unknown problem and propose a viable solution called SpaceMessaging. SpaceMessaging adopts the idea of Post-it notes, by which we casually deliver a message to a person who happens to visit a location at a random moment. To enable SpaceMessaging, we realize the concept of posting messages to a space by implementing cloud pointers at a cloud server, to which messages can be posted and from which messages can be fetched by arbitrary mobile devices located in that space. Our Android-based prototype of SpaceMessaging, which maps a cloud pointer to a WiFi signal fingerprint captured from mobile devices, demonstrates that it allows mobile devices to deliver messages to a specific space and to listen to the messages of a specific space in a highly accurate manner (with recall above 90%).
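
    The paper's implementation details are not in the abstract; the toy store below only illustrates the cloud-pointer idea: messages are posted under a WiFi fingerprint and fetched by any device whose current fingerprint is similar enough. The similarity metric (Jaccard over visible access points) and the threshold are assumptions made for this sketch.

        # Toy cloud-pointer store keyed by WiFi fingerprints (BSSID -> RSSI).
        def similarity(fp_a, fp_b):
            a, b = set(fp_a), set(fp_b)        # compare sets of visible access points
            return len(a & b) / len(a | b) if a | b else 0.0

        class CloudPointerStore:
            def __init__(self, threshold=0.6):
                self.threshold = threshold
                self.pointers = []             # list of (fingerprint, messages)

            def post(self, fingerprint, message):
                for fp, msgs in self.pointers:             # reuse a nearby pointer if any
                    if similarity(fp, fingerprint) >= self.threshold:
                        msgs.append(message)
                        return
                self.pointers.append((fingerprint, [message]))

            def fetch(self, fingerprint):
                return [m for fp, msgs in self.pointers
                        if similarity(fp, fingerprint) >= self.threshold
                        for m in msgs]

        store = CloudPointerStore()
        store.post({"ap1": -40, "ap2": -60}, "evacuate via the north exit")
        print(store.fetch({"ap1": -42, "ap2": -65, "ap3": -80}))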

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining, and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by correlating individual, temporally distributed events within a multiple-data-stream environment is explored, and a range of techniques is surveyed, covering model-based approaches, 'programmed' AI, and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base and the lack of ability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to learn the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detects when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together to update each other to increase detection rates and lower false-positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation, and adaptation are more readily facilitated.
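
    To make the rule-based correlation idea concrete, here is a toy correlator in the spirit the report describes: a rule fires when enough related events from one source fall within a sliding time window. The event fields, the rule, and the thresholds are invented for illustration.

        # Toy rule-based event correlation over a time-ordered event stream.
        from collections import defaultdict, deque

        WINDOW = 60.0      # seconds of history a rule may correlate over
        THRESHOLD = 3      # events needed within the window to infer misuse

        def correlate(events):
            """events: iterable of (timestamp, source, event_type), time-ordered."""
            history = defaultdict(deque)       # source -> recent matching timestamps
            alerts = []
            for ts, src, etype in events:
                if etype != "auth_failure":    # the (invented) rule's trigger event
                    continue
                q = history[src]
                q.append(ts)
                while q and ts - q[0] > WINDOW:  # evict events outside the window
                    q.popleft()
                if len(q) >= THRESHOLD:
                    alerts.append((src, ts))   # misuse inferred for this source
            return alerts

        stream = [(0, "A", "auth_failure"), (10, "A", "auth_failure"),
                  (20, "B", "login"), (30, "A", "auth_failure")]
        print(correlate(stream))               # [('A', 30)]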