Scheduling Conservation Designs for Maximum Flexibility via Network Cascade Optimization
One approach to conserving endangered species is to purchase and protect a set of land parcels in a way that maximizes the expected future population spread. Unfortunately, an ideal set of parcels may have a cost that is beyond the immediate budget constraints and must thus be purchased incrementally. This raises the challenge of deciding how to schedule the parcel purchases in a way that maximizes the flexibility of budget usage while keeping the loss in population spread under control. In this paper, we introduce a formulation of this scheduling problem that does not rely on knowing the future budgets of an organization. In particular, we consider scheduling purchases in a way that achieves a population spread no less than desired but delays purchases as long as possible. Such schedules offer conservation planners maximum flexibility and use available budgets in the most efficient way. We develop the problem formally as a stochastic optimization problem over a network cascade model describing a commonly used model of population spread. Our solution approach is based on reducing the stochastic problem to a novel variant of the directed Steiner tree problem, which we call the set-weighted directed Steiner graph problem. We show that this problem is computationally hard, motivating the development of a primal-dual algorithm for the problem that computes both a feasible solution and a bound on the quality of an optimal solution. We evaluate the approach on both real and synthetic conservation data with a standard population spread model. The algorithm is shown to produce near-optimal results and is much more scalable than more generic off-the-shelf optimizers. Finally, we evaluate a variant of the algorithm to explore the trade-offs between budget savings and population growth.
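The network cascade model underlying this formulation can be made concrete with a small Monte Carlo sketch. The function below is illustrative only (the function name and data layout are assumptions, not the paper's implementation): it estimates the expected spread from a set of protected source parcels under an independent-cascade-style model with per-edge activation probabilities.

```python
import random

def expected_spread(edges, sources, trials=1000, seed=0):
    """Monte Carlo estimate of expected cascade size.

    edges: dict mapping (u, v) -> probability that occupation of u
           spreads to v (an independent-cascade-style model).
    sources: initially occupied nodes (e.g. protected parcels).
    """
    rng = random.Random(seed)
    # Precompute adjacency lists for efficient frontier expansion.
    adj = {}
    for (u, v), p in edges.items():
        adj.setdefault(u, []).append((v, p))
    total = 0
    for _ in range(trials):
        active = set(sources)
        frontier = list(sources)
        while frontier:
            u = frontier.pop()
            for v, p in adj.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / trials
```

Choosing which parcels to protect (and when) then amounts to optimizing this expectation subject to budget constraints, which is what the paper's primal-dual algorithm addresses.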
Scheduling and Online Planning in Stochastic Diffusion Networks
Diffusion processes in networks are common models for many domains, including species colonization, information/idea cascades, disease propagation and fire spreading. In diffusion networks, a diffusion event occurs when a behavior spreads from one node to another following a probabilistic model, where the behavior could be a species, an idea, a virus, fire, etc. In the real world, in addition to observing diffusion processes, people are usually able to control the influence of diffusion by conducting operations on individual nodes or node groups. The diffusion network control problem is then to decide how to perform the available controls in order to maximize or minimize the range of diffusion, especially when the resources for control are limited.
Diffusion network control problems are challenging for most AI planning techniques. The complexity comes from highly stochastic exogenous events, a large action branching factor (the number of combinations of individual operations), a long time horizon, and the need to reason about numeric resource limits. In this thesis, we explore approaches that produce high-quality policies for controlling diffusion processes in large-scale networks.
We first propose a non-adaptive policy for conservation planning, where the goal is to encourage species spread over the long term. Given a set of control operations of interest, this policy specifies a deadline for taking each operation, so that resources are used with maximum flexibility while keeping the loss of diffusion influence within a desired ratio. This is particularly applicable in cases where a domain expert can develop a set of control operations that captures their own objectives. Our approach then provides a way of trading off diffusion influence and resource usage.
We further propose a fully adaptive approach for this conservation planning problem by computing a Hindsight Optimization (HOP) solution at every time step. Instead of computing a HOP action in the traditional way which is linear in the number of actions, we take advantage of its separable structure and develop an effective algorithm that scales for exponentially large, factored action spaces. From experiments on both synthetic and real data sets, we show that our algorithm returns near-optimal HOP solutions while scaling to large problems.
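Hindsight Optimization itself is simple to state. The sketch below shows the traditional form that is linear in the number of actions (the very loop the thesis's factored-action algorithm avoids); all names are illustrative assumptions: sample a set of fully revealed futures, evaluate each candidate first action against every future, and pick the action with the best average hindsight value.

```python
import random

def hop_action(actions, sample_future, solve_deterministic, n_samples=30, seed=0):
    """Traditional Hindsight Optimization: O(|actions|) evaluations.

    sample_future(rng) -> one fully revealed scenario, with all
        stochastic outcomes fixed in advance.
    solve_deterministic(action, scenario) -> value of taking `action`
        first and then acting optimally in that known scenario.
    """
    rng = random.Random(seed)
    scenarios = [sample_future(rng) for _ in range(n_samples)]
    best, best_val = None, float('-inf')
    for a in actions:
        # Average the deterministic (hindsight) value over all sampled futures.
        val = sum(solve_deterministic(a, s) for s in scenarios) / n_samples
        if val > best_val:
            best, best_val = a, val
    return best
```

For factored action spaces the outer loop is exponential in the number of action variables, which is why exploiting separable structure, as the thesis does, matters.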
Moreover, we extend our implementation of the HOP policy to a general framework of online planning for diffusion network control problems. In particular, we give a general and formal representation of diffusion network problems. Our framework proposes a schema for effectively computing multiple lookahead policies, some of which have been successfully applied to various probabilistic planning problems. We evaluate our approach on diffusion network control problems in conservation planning, epidemic control and firefighting. The experimental results demonstrate the behaviors of these lookahead policies and the advantage of each in different domains.
AI and OR in management of operations: history and trends
The last decade has seen considerable growth in the use of Artificial Intelligence (AI) for operations management, with the aim of finding solutions to problems that are increasing in complexity and scale. This paper begins by setting the context for the survey through a historical perspective of OR and AI. An extensive survey of applications of AI techniques for operations management, covering over 1200 papers published from 1995 to 2004, is then presented. The survey uses Elsevier's ScienceDirect database as its source; hence, it may not cover all relevant journals, but it includes a sufficiently wide range of publications to be representative of research in the field. The papers are categorized into four areas of operations management: (a) design, (b) scheduling, (c) process planning and control and (d) quality, maintenance and fault diagnosis. Each of the four areas is categorized in terms of the AI techniques used: genetic algorithms, case-based reasoning, knowledge-based systems, fuzzy logic and hybrid techniques. The trends over the last decade are identified and discussed with respect to expected trends, and directions for future work are suggested.
Development of systematic technique for energy and property integration in batch processes
The increasing consumption of energy, the generation of waste, and the rising cost of fresh resources and waste treatment systems are important driving forces for developing efficient, environmentally friendly and economic resource conservation techniques in the process industries. Process integration is recognized as a useful systematic strategy for resource conservation and waste minimization. To date, little research has investigated combined heat and property integration, and the existing work focuses only on continuous processes. Since batch processes are increasingly popular due to the development of technology-intensive industries such as pharmaceuticals, fine chemicals and foods, it is necessary to consider heat and property integration in batch processes simultaneously. In this thesis, a new mixed integer nonlinear programming (MINLP) mathematical model is introduced to synthesize property-based heat integrated resource conservation networks (HIRCNs) for batch processes. A source-HEN-sink superstructure is constructed to embed all possible network configurations. Then, an MINLP model consisting of property-based resource conservation network (RCN) and heat exchanger network (HEN) models is developed. In the proposed model, the property-based RCN model is formulated based on a supertargeting approach, while the HEN model is formulated via the automated targeting method (ATM). The optimization objective is to minimize the total annualized cost (TAC) of a batch process system, which includes the operating cost of fresh resources and of hot and cold utilities as well as the capital cost of storage tanks. To demonstrate the proposed approach, three case studies were solved. Based on the optimized results, the proposed simultaneous targeting approach for property-based HIRCNs is more effective in terms of TAC than the previously presented sequential targeting approach.
Consumer Flexibility in Power Markets (original title: Forbrukerfleksibilitet i kraftmarkeder)
Demand flexibility integration is an important measure for the decarbonization of energy systems and a more efficient use of resources. Demand flexibility can provide multiple benefits to the power system and reduce system costs. Adjusting electricity demand to match variable production supports the integration of larger shares of variable renewable energy (VRE). Using demand response for system services provided by network operators can contribute to a more cost-efficient use of infrastructure and resources.
Demand flexibility is a large and complex field of study which includes different markets, different grid voltage levels and different actors. The aim of this PhD project is to study how demand flexibility can be optimally integrated into electricity markets, taking account of the benefits to the power system as a whole and the interplay between different markets. Demand flexibility is studied from the perspective of the whole system, as well as from the private economic perspective of aggregators and electricity consumers.
The thesis includes separate studies that go in depth on specific topics. The whole-system perspective is studied in Paper I, which focuses on the value of demand flexibility in spot and reserve markets in power systems with high shares of VRE. The perspective of the TSO and DSO is studied in Paper II, which proposes a marketplace for the procurement of transmission and distribution system services from demand flexibility. The perspective of a demand flexibility aggregator is studied in Paper III, which develops an optimization framework for an aggregator participating in the wholesale and regulation capacity markets. The perspective of private electricity consumers is studied in Paper IV, which studies price-based demand response and investments in load control in an energy system.
The results of these studies offer various useful insights. Firstly, demand flexibility was found to significantly decrease the system cost when large shares of VRE are integrated into the system. This happens primarily by replacing reserve provision from coal and gas plants but also by reducing peak load generation due to price response on the wholesale market. Optimal allocation of demand flexibility between reserve and wholesale markets maximizes the system benefits. The results suggest that in systems with large shares of VRE and small shares of base load, more demand flexibility should be placed in the reserve market than in the wholesale power market.
Demand flexibility also benefits the distribution system, and it was also found that new market designs and better coordination between the transmission and distribution levels are important for efficiently integrating demand flexibility and minimizing the total procurement costs. New market designs can ensure that demand flexibility is used to maximize the value for the whole system and not only for single actors.
Next, the results of the studies illustrate that demand flexibility access to many markets is beneficial, from both the system and private economic perspectives. It increases the value of demand flexibility, gives incentives to aggregators’ business and ensures that demand flexibility is optimally allocated between markets based on price. However, market interplay can also have negative effects, as when demand flexibility providers favour one particular market with higher profitability and flee from other markets. New market designs for demand flexibility should consider the interplay between different markets.
Finally, modelling demand response to electricity prices shows that private investments in demand flexibility are governed by the cost of load control, the daily electricity price variability and the price flattening effect. The price flattening effect implies that demand response to price reduces price volatility in the market, until at some point no further demand response is worthwhile. To achieve this optimal demand response level in the wholesale market, it is important to have correct feedback between the market and consumers so that they do not respond more than is optimal from the system perspective.
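The price flattening effect can be seen in a toy loop (purely illustrative; the function names and the assumption that price is a function of hourly load are mine, not the thesis's model): each round, consumers shift a fraction of load from the priciest hour to the cheapest, which compresses the price spread, and with it the remaining incentive to respond.

```python
def flatten_prices(load, price_fn, shift=0.1, rounds=5):
    """Illustrative demand-response loop.

    load: list of per-hour loads; price_fn(load) -> price in that hour.
    Each round moves a fraction `shift` of load from the highest-price
    hour to the lowest-price hour, then prices are recomputed.
    """
    load = list(load)
    for _ in range(rounds):
        prices = [price_fn(x) for x in load]
        hi = prices.index(max(prices))
        lo = prices.index(min(prices))
        moved = load[hi] * shift
        load[hi] -= moved
        load[lo] += moved
    return [price_fn(x) for x in load]
```

Running this with any increasing price function shows the price spread shrinking round by round, which is the flattening effect that eventually makes further response unprofitable.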
To sum up, the results of this PhD research suggest that efficient integration of demand flexibility into electricity markets implies giving it access to many markets, strengthening the role of aggregators, improving coordination between the distribution and transmission system levels and promoting market designs that optimize demand flexibility use and system value. This thesis illustrates the importance of studying demand response from a holistic perspective, including different markets, actors and system levels.
Norwegian Research Council; Enfo; Sysco; NV
Energy Demand Response for High-Performance Computing Systems
The growing computational demand of scientific applications has greatly motivated the development of large-scale high-performance computing (HPC) systems in the past decade. To accommodate the increasing demand of applications, HPC systems have been going through dramatic architectural changes (e.g., introduction of many-core and multi-core systems, rapid growth of complex interconnection networks for efficient communication between thousands of nodes), as well as significant increases in size (e.g., modern supercomputers consist of hundreds of thousands of nodes). With such changes in architecture and size, the energy consumption of these systems has increased significantly. With the advent of exascale supercomputers in the next few years, the power consumption of HPC systems will surely increase; some systems may even consume hundreds of megawatts of electricity. Demand response programs are designed to help energy service providers stabilize the power system by reducing the energy consumption of participating systems during periods of high power demand or temporary shortages in power supply.
This dissertation focuses on developing energy-efficient demand-response models and algorithms to enable HPC systems' demand response participation. In the first part, we present interconnection network models for performance prediction of large-scale HPC applications. They are based on interconnect topologies widely used in HPC systems: dragonfly, torus, and fat-tree. Our interconnect models are fully integrated with an implementation of the message-passing interface (MPI) that can mimic most of its functions with packet-level accuracy. Extensive experiments show that our integrated models provide good accuracy for predicting network behavior, while at the same time allowing for good parallel scaling performance. In the second part, we present an energy-efficient demand-response model to reduce HPC systems' energy consumption during demand response periods. We propose HPC job scheduling and resource provisioning schemes to enable HPC systems' emergency demand response participation. In the final part, we propose an economic demand-response model to allow both the HPC operator and HPC users to jointly reduce the HPC system's energy cost. Our proposed model allows the participation of HPC systems in economic demand-response programs through a contract-based rewarding scheme that can incentivize HPC users to participate in demand response.
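One simple way to frame the emergency demand-response decision (a hedged sketch, not the dissertation's actual scheduling and provisioning scheme) is as a 0/1 knapsack: during a demand-response window, keep running the subset of jobs with the highest total value whose combined power draw fits under the reduced power cap.

```python
def select_jobs(jobs, power_cap):
    """0/1 knapsack over jobs during a demand-response window.

    jobs: list of (value, power) pairs with integer power draws.
    power_cap: reduced power budget (integer units) for the window.
    Returns the best achievable total value under the cap.
    """
    # dp[p] = best total value using at most p units of power.
    dp = [0] * (power_cap + 1)
    for value, power in jobs:
        # Iterate downward so each job is used at most once.
        for p in range(power_cap, power - 1, -1):
            dp[p] = max(dp[p], dp[p - power] + value)
    return dp[power_cap]
```

Real HPC schedulers must also handle job preemption costs and deadlines, which this sketch ignores.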
Robust decision making for stochastic network design
We address the problem of robust decision making for stochastic network design. Our work is motivated by spatial conservation planning, where the goal is to take management decisions within a fixed budget to maximize the expected spread of a population of species over a network of land parcels. Most previous work on this problem assumes that accurate estimates of the different network parameters (edge activation probabilities, habitat suitability scores) are available, which is an unrealistic assumption. To address this shortcoming, we assume that network parameters are only partially known, specified via interval bounds. We then develop a decision making approach that computes the solution with minimax regret. We provide new theoretical results regarding the structure of the minimax regret solution, which help develop a computationally efficient approach. Empirically, we show that previous approaches that work on point estimates of network parameters result in high regret on several standard benchmarks, while our approach provides significantly more robust solutions.
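The minimax regret criterion itself is easy to state in code. The generic sketch below (not the paper's algorithm, which exploits the structure of the solution rather than enumerating scenarios) scores each candidate decision by its worst-case regret over a set of parameter scenarios, e.g. drawn from the interval bounds, and returns the decision minimizing that worst case.

```python
def minimax_regret(decisions, scenarios, value):
    """Pick the decision whose worst-case regret is smallest.

    value(decision, scenario) -> payoff of the decision if the
    true network parameters were those of the scenario.
    """
    # Best achievable payoff in each scenario (the hindsight benchmark).
    best_in = [max(value(d, s) for d in decisions) for s in scenarios]

    def regret(d):
        # Worst shortfall of d versus the scenario-optimal decision.
        return max(b - value(d, s) for b, s in zip(best_in, scenarios))

    return min(decisions, key=regret)
```

A decision tuned to a single point estimate can look great in that scenario and terrible elsewhere; the minimax regret decision trades a little peak payoff for bounded shortfall everywhere.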
Theoretical Analysis and Evaluation of NoCs with Weighted Round-Robin Arbitration
Fast and accurate performance analysis techniques are essential in early design space exploration and pre-silicon evaluations, including software eco-system development. In particular, on-chip communication continues to play an increasingly important role as many-core processors scale up. This paper presents the first performance analysis technique that targets networks-on-chip (NoCs) employing weighted round-robin (WRR) arbitration. Besides fairness, WRR arbitration provides flexibility in allocating bandwidth proportionally to the importance of the traffic classes, unlike basic round-robin and priority-based arbitration. The proposed approach first estimates the effective service time of the packets in the queue due to WRR arbitration. Then, it uses the effective service time to compute the average waiting time of the packets. Next, we incorporate a decomposition technique to extend the analytical model to handle NoCs of any size. The proposed approach achieves less than 5% error while executing real applications and 10% error under challenging synthetic traffic with different burstiness levels.
Comment: This paper is accepted in the International Conference on Computer Aided Design (ICCAD), 202
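The first step of such an analysis, the effective service time under WRR, has a simple closed form when all queues are backlogged; pairing it with a textbook M/M/1 waiting-time formula gives a rough first-order picture. This is only a sketch under those assumptions; the paper's actual model is more detailed and does not rely on Poisson arrivals.

```python
def effective_service_time(weights, base_service_time, cls):
    """Under backlogged weighted round-robin, class `cls` is served for
    weights[cls] slots out of every sum(weights), so its packets see a
    service time stretched by the inverse of that bandwidth share."""
    share = weights[cls] / sum(weights)
    return base_service_time / share

def mm1_wait(arrival_rate, service_time):
    """M/M/1 mean waiting time in queue: Wq = rho / (mu - lambda)."""
    mu = 1.0 / service_time
    rho = arrival_rate / mu
    assert rho < 1, "queue must be stable (utilization < 1)"
    return rho / (mu - arrival_rate)
```

For example, a class holding 3 of 4 total weight units sees its service time stretched by only 4/3, while a weight-1 class sees it quadrupled, which is exactly the proportional bandwidth allocation WRR is chosen for.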
Pump Scheduling for Optimised Energy Cost and Water Quality in Water Distribution Networks
Delivering water to customers in sufficient quantity and quality and at low cost is the main driver for many water utilities around the world. One way of working toward this goal is to optimize the operation of a water distribution system, that is, to schedule the operation of pumps in a way that results in the minimal cost of energy used. This is not an easy process due to the nonlinearity of the hydraulic system's response to different schedules and the complexity of water networks in general. This thesis reviewed over 250 papers about pump scheduling published in the last 5 decades. The review revealed that, despite a lot of good work done in the past, the existing pump scheduling methods have several drawbacks, revolving mainly around the ability to find globally optimal pump schedules in a computationally efficient manner whilst dealing with water quality and other complexities of large pipe networks. A new pump scheduling method, entitled the iterative Extended Lexicographic Goal Programming (iELGP) method, is developed and presented in this thesis with the aim of overcoming the above drawbacks. The pump scheduling problem is formulated and solved as an optimisation problem with the objectives being the electricity cost and the water age (used as a surrogate for water quality). The developed pump scheduling method is general and can be applied to any water distribution network configuration. Moreover, the new method can optimize the operation of fixed and variable speed pumps. The new method was tested on three different case studies, each with different topography, demand patterns, number of pumps and number of tanks. The objective in the first and second case studies is to minimise energy cost only, whereas in the third case study, energy cost and water age are minimized simultaneously. The results obtained by using the new method are compared with results obtained from other pump scheduling methods applied to the same case studies.
The results obtained demonstrate that the iELGP method is capable of determining optimal, low-cost pump schedules whilst trading off energy costs and water quality. The optimal schedules can be generated in a computationally very efficient manner. Given this, the iELGP method has the potential to be applied to real-time scheduling of pumps in larger water distribution networks without the need to simplify the respective hydraulic models or replace them with surrogate models.
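The lexicographic idea behind goal programming of this kind can be illustrated with a drastically simplified stand-in (function names and the tolerance parameter are assumptions, not the iELGP formulation): first find the cheapest schedules by energy cost, then break ties on water age among schedules within a small cost tolerance.

```python
def lexicographic_pick(schedules, energy_cost, water_age, tol=0.01):
    """Two-stage lexicographic selection over candidate pump schedules.

    energy_cost(s) and water_age(s) score a schedule; the primary goal
    (cost) is optimized first, then the secondary goal (water age) is
    minimized among schedules within a relative tolerance of the best cost.
    """
    c_min = min(energy_cost(s) for s in schedules)
    # Keep near-optimal schedules on the primary objective...
    near = [s for s in schedules if energy_cost(s) <= c_min * (1 + tol)]
    # ...and pick the best on the secondary objective among them.
    return min(near, key=water_age)
```

The real method iterates this idea against a full hydraulic simulation rather than a precomputed candidate list, but the ordering of objectives is the same.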
The "MIND" Scalable PIM Architecture
MIND (Memory, Intelligence, and Network Device) is an advanced parallel computer architecture for high performance computing and scalable embedded processing. It is a Processor-in-Memory (PIM) architecture integrating both DRAM bit cells and CMOS logic devices on the same silicon die. MIND is multicore, with multiple memory/processor nodes on each chip, and supports global shared memory across systems of MIND components. MIND is distinguished from other PIM architectures in that it incorporates mechanisms for efficient support of a global parallel execution model based on the semantics of message-driven multithreaded split-transaction processing. MIND is designed to operate either in conjunction with other conventional microprocessors or in standalone arrays of like devices. It also incorporates mechanisms for fault tolerance, real time execution, and active power management. This paper describes the major elements and operational methods of the MIND architecture.