
    A review of the identification of market power in the liberalized electricity markets

    The liberalization of the electricity market aimed to promote competition, innovation, and fair pricing for consumers. However, as with any imperfect system, certain loopholes exist, and some major players in the electricity market have exploited them to benefit from their market power. This research examines various methods for detecting market power in the liberalized electricity market and proposes a combination of detection methods that effectively addresses the issue of market power abuse. Two approaches to market power detection were identified and analyzed. The first involves structural indices and analysis, including the Concentration Ratio (CRn), Herfindahl-Hirschman Index (HHI), Pivotal Supplier Indicator (PSI), Residual Supply Index (RSI), the Structure-Conduct-Performance model, and residual demand analysis. The second utilizes simulation models such as linear optimization, supply function equilibrium, the Cournot-Nash framework, agent-based models, and the New Empirical Industrial Organization. The research findings indicate that combining market simulation approaches, such as the linear optimization model, with other methods like residual demand analysis, concentration ratios, and agent-based models provides a comprehensive approach to market power detection. The linear optimization model can identify potential discrepancies by comparing marginal costs and prices, thereby indicating possible market power abuse. Incorporating residual demand analysis yields a deeper understanding of the demand side of the market. Additionally, considering concentration ratios and employing agent-based models to capture the strategic choices and behaviors of market participants can enhance the accuracy of market power detection.
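    The structural indices named above can be computed from market shares alone. As a minimal illustration (the four-firm shares below are invented for this sketch, not taken from the review), the Herfindahl-Hirschman Index and the concentration ratio CRn reduce to a few lines:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared percentage market shares.
    Ranges from near 0 (perfect competition) to 10,000 (monopoly)."""
    return sum((100.0 * s) ** 2 for s in shares)

def crn(shares, n):
    """Concentration ratio CRn: combined share of the n largest firms."""
    return sum(sorted(shares, reverse=True)[:n])

# Hypothetical generation shares for a four-firm electricity market.
shares = [0.40, 0.30, 0.20, 0.10]
print(hhi(shares))     # 3000.0: "highly concentrated" by the common 2500 threshold
print(crn(shares, 2))  # combined share of the two largest firms
```

    Indices like these are cheap screening tools; as the abstract notes, they are best combined with simulation-based methods rather than used alone.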

    Agent-based modeling for environmental management. Case study: virus dynamics affecting Norwegian fish farming in fjords

    Background: The Norwegian fish-farming industry is important and rapidly growing, and it faces significant challenges such as the spread of pathogens and trade-offs between locations, fish production, and health. There is a need for research, i.e. the development of theories (models), methods, techniques, and tools for analysis, prediction, and management, i.e. strategy development, policy design, and decision making, to facilitate a sustainable industry. Losses due to disease outbreaks in aquaculture systems pose a large risk to a sustainable fish industry and to the coastal and fjord ecosystems as a whole. Norwegian marine aquaculture systems are located in open areas (i.e. fjords) where they overlap and interact with other systems (e.g. transport, wildlife, tourism). For instance, viruses shed from aquaculture sites affect the wild fish in the whole fjord system. Fish disease spread and pathogen transmission in such complex systems is a process that is difficult to predict, analyze, and control. Several time-variant factors, such as fish density, environmental conditions, and other biological factors, affect the spread process. In this thesis, we developed methods to examine the effects of these factors on fish disease spread in fish populations and on pathogen spread in the time-space domain. We then developed methods to control and manage the aquaculture system by finding optimal system settings that yield a minimum infection risk and a high production capacity. Aim: The overall objective of the thesis is to develop agent-based models, methods, and tools to facilitate the management of aquaculture production in Norwegian fjords by predicting pathogen dynamics, distribution, and transmission in marine aquaculture systems.
Specifically, the objectives are to assess agent-based modeling as an approach to understanding fish disease spread processes, to develop agent-based models that help us predict, analyze, and understand disease dynamics in various scenarios, and to develop a framework to optimize the location and load of aquaculture systems so as to minimize the infection risk in a growing fish industry. Methods: We use agent-based modeling to simulate disease dynamics in fish populations and pathogen transmission between several aquaculture sites in a Norwegian fjord. We also use a particle swarm optimization algorithm to identify agent-based model parameters that optimize the dynamics of the system model. In this context, we present a framework that uses particle swarm optimization to identify the parameter values of an agent-based model of an aquaculture system that are expected to yield the optimal fish densities and farm locations while avoiding the risk of spreading disease. The particle swarm optimization algorithm identifies optimal input parameters for the agent-based models based on feedback from the models' outputs. Results: As the thesis is built on three main studies, the results can be divided into three components. In the first study, we developed several agent-based models to simulate fish disease spread in stand-alone fish populations. We tested the models in different scenarios by varying the agent parameters (i.e. fish and pathogens), environment parameters (i.e. seawater temperature and currents), and interaction parameters (agent-agent and agent-environment interactions). We used sensitivity analysis on key input parameters such as fish density, fish swimming behavior, seawater temperature, and sea currents to show their effects on the disease spread process.
Exploring the sensitivity of fish disease dynamics to these key parameters helps in combating fish disease spread. In the second study, we built infection risk maps in a space-time domain by developing agent-based models to identify pathogen transmission patterns. The agent-based method advances our understanding of pathogen transmission and yields risk maps that help reduce the spread of infectious fish diseases. Using this method, we can study the spatial and dynamic aspects of the spread of infections and address the stochastic nature of the infection process. In the third study, we developed a framework for the optimization of aquaculture systems. The framework uses a particle swarm optimization algorithm to tune agent-based model parameters so as to optimize the objective function. The framework was tested by developing a model to find optimal fish densities and farm locations in a marine aquaculture system in a Norwegian fjord. Results show rapid convergence of the presented particle swarm optimization algorithm to the optimal solution: the algorithm requires at most 18 iterations to find the best solution, which can triple the fish density while keeping the risk of infection at an acceptable level. Conclusion: This research work makes several contributions. First, we assessed agent-based modeling as a method to simulate and analyze fish disease spread dynamics as a foundation for managing aquaculture systems. Results from this study demonstrate how effective the agent-based method is in the simulation of infectious diseases. Using this method, we are able to study spatial aspects of the spread of fish diseases and address the stochastic nature of the infection process. Agent-based models are flexible and can include many external factors that affect fish disease dynamics, such as interactions with wild fish and ship traffic.
Agent-based models help us overcome the lack of data on fish disease transmission and contribute to our understanding of different cause-effect relationships in the dynamics of fish diseases. Second, we developed methods to build infection risk maps in a space-time domain, conditioned upon the identification of pathogen transmission patterns in that domain, so as to help prevent and, if needed, combat infectious fish diseases by informing the management of the fish industry in Norway. Finally, we developed a method to optimize the fish densities and farm locations of aquaculture systems so as to ensure a sustainable fish industry with a minimum risk of infection and a high production capacity. This PhD study offers new research-based approaches, models, and tools for analysis, prediction, and management that can facilitate the sustainable development of the marine aquaculture industry with maximal economic outcome and minimal environmental impact.
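    The coupling of particle swarm optimization with an agent-based model can be sketched in miniature. The snippet below is a generic PSO, not the thesis code; the `risk` function is a stand-in for running the agent-based simulation and scoring infection risk, with an invented optimum at density 0.5 and spacing 2.0:

```python
import random

def pso(objective, bounds, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization: each particle remembers its own
    best position, the swarm tracks a global best, and velocities blend
    inertia, personal memory, and social attraction."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (g[d] - pos[i][d]))
                # Keep each coordinate inside its feasible bounds.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < objective(g):
                    g = pos[i][:]
    return g

random.seed(1)
# Toy stand-in for "run the agent-based model and score infection risk":
# a smooth bowl whose minimum sits at density=0.5, farm spacing=2.0 (made up).
risk = lambda x: (x[0] - 0.5) ** 2 + (x[1] - 2.0) ** 2
best = pso(risk, bounds=[(0.0, 1.0), (0.0, 5.0)])
print(best)  # close to [0.5, 2.0]
```

    In the thesis setting, each call to `objective` would be a full agent-based simulation run, which is exactly why a fast-converging optimizer matters.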

    AGENT-BASED DISCRETE EVENT SIMULATION MODELING AND EVOLUTIONARY REAL-TIME DECISION MAKING FOR LARGE-SCALE SYSTEMS

    Computer simulations are routines programmed to imitate detailed system operations. They are used to evaluate system performance and/or predict future behavior under certain settings. In complex cases where system operations cannot be formulated explicitly by analytical models, simulation becomes the dominant mode of analysis, as it can model systems without relying on unrealistic or limiting assumptions and represent actual systems more faithfully. Two main streams exist in current simulation research and practice: discrete event simulation and agent-based simulation. This dissertation facilitates the marriage of the two. By integrating agent-based modeling concepts into the discrete event simulation framework, we can take advantage of the strengths of both methods while eliminating their disadvantages. Although simulation can represent complex systems realistically, it is a descriptive tool without the capability of making decisions. It can, however, be complemented by incorporating optimization routines. The most challenging problem is that large-scale simulation models normally take a considerable amount of computer time to execute, so the number of solution evaluations needed by most optimization algorithms is not feasible within a reasonable time frame. This research develops a highly efficient evolutionary simulation-based decision-making procedure that can be applied in real-time management situations. It divides the entire process time horizon into a series of small time intervals and runs simulation optimization algorithms for those small intervals separately and iteratively. This method improves computational tractability by decomposing long simulation runs; it also enhances system dynamics by incorporating changing information/data as the event unfolds.
With respect to simulation optimization, this procedure solves efficient analytical models that approximate the simulation and guide the search procedure toward near-optimality quickly. The methods of agent-based discrete event simulation modeling and evolutionary simulation-based decision making developed in this dissertation are implemented to solve a set of disaster response planning problems. This research also investigates a unique approach to validating low-probability, high-impact simulation systems based on a concrete example problem. The experimental results demonstrate the feasibility and effectiveness of our model compared to other existing systems.
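    The interval-by-interval decomposition can be illustrated with a deliberately tiny example (the dynamics, costs, and names below are invented here, not taken from the dissertation): optimize each short interval against a cheap model, then carry the resulting end state forward as the start state of the next interval:

```python
def simulate_interval(state, decision):
    """Stand-in for one short simulation run: fixed demand arrives, the
    decision replenishes stock, and unmet demand is penalized heavily."""
    demand = 5
    stock = state + decision - demand
    cost = decision * 1.0 + max(-stock, 0) * 10.0  # order cost + shortage penalty
    return max(stock, 0), cost

def optimize_interval(state, candidates=range(0, 11)):
    """Exhaustive search over a small decision set plays the role of the
    cheap analytical model that guides each interval's search."""
    return min(candidates, key=lambda d: simulate_interval(state, d)[1])

state, total = 0, 0.0
plan = []
for _ in range(4):  # four small intervals instead of one long simulation run
    d = optimize_interval(state)
    state, cost = simulate_interval(state, d)
    plan.append(d)
    total += cost
print(plan, total)  # [5, 5, 5, 5] 20.0
```

    The real procedure replaces the exhaustive search with simulation optimization and refreshes data between intervals, but the carry-the-state-forward loop is the same shape.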

    Agent-based models to couple natural and human systems for watershed management analysis

    This dissertation expands conventional physically-based environmental models with human factors for watershed management analysis. Using an agent-based modeling framework, two approaches, one based on optimization and the other on data mining, are applied to modeling farmers' pumping decision-making processes in the High Plains aquifer within the hydrological observatory area. The resulting agent-based models (ABMs) are coupled with a physically-based groundwater model to investigate the interactions between farmers and the underlying groundwater system. With the optimization-based approach, computational intensity arises from executing the resulting coupled ABM and groundwater model. This dissertation develops a computational framework that uses multithreaded programming and Hadoop-based cloud computing to address these computational issues. The framework allows multiple users to access and execute the web-based application of the coupled models simultaneously without an increase in network latency. In addition, another computational framework, combining Hadoop-based cloud computing with a Polynomial Chaos Expansion (PCE) based variance decomposition approach, is developed to conduct global sensitivity analysis with the coupled models and to identify the influential behavioral parameters used to simulate agents' behavior. Unlike the optimization-based approach, which assumes all agents are rational, the data-driven approach attempts to account for the influence of agents' bounded rationality on their behavior. A directed information graph (DIG) algorithm is used to exploit the causal relationships between agents' decisions (i.e., groundwater irrigation depth) and time series of environmental, socio-economic, and institutional variables, and a machine learning technique, boosted regression trees (BRT), is applied to convert these causal relationships into agents' behavioral rules.
It is found that, in comparison with the optimization-based approach, crop profits and water tables resulting from agents' pumping behavior derived using the data-driven approach better mimic actual observations. We therefore conclude that the data-driven approach using DIG and BRT outperforms the optimization-based approach in capturing the uncertainty in agents' pumping behavior that results from bounded rationality, and in simulating agents' real-world behavior.

    Machine Learning Based Applications for Data Visualization, Modeling, Control, and Optimization for Chemical and Biological Systems

    This dissertation covers Yan Ma's Ph.D. research on applied studies of machine learning in manufacturing and biological systems. The work focuses on reaction modeling, optimization, and control using deep learning-based approaches, and concentrates on deep reinforcement learning (DRL). Yan Ma's research also involves data mining in bioinformatics. Large-scale RNA-seq data are analyzed using non-linear dimensionality reduction with Principal Component Analysis (PCA), t-Distributed Stochastic Neighbor Embedding (t-SNE), and Uniform Manifold Approximation and Projection (UMAP), followed by clustering analysis using k-Means and Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN). The report presents three case studies of DRL-based optimization and control: polymerization reaction control with deep reinforcement learning, bioreactor optimization, and fed-batch reaction optimization for a reactor at Dow Inc. In the first study, a data-driven controller based on DRL is developed for a fed-batch polymerization reaction with multiple continuously manipulated variables and continuous control. The second case study is the modeling and optimization of a bioreactor. In this study, a data-driven reaction model is developed using an Artificial Neural Network (ANN) to simulate the growth curve and bio-product accumulation of the cyanobacterium Plectonema. A DRL control agent then optimizes the daily nutrient input to maximize the yield of the valuable bio-product C-phycocyanin; in experimental validation, C-phycocyanin yield increased by 52.1% compared to a control group with the same total nutrient content. The third case study applies the data-driven control scheme to the optimization of a reactor from Dow Inc., where a DRL-based optimization framework with reaction surrogate modeling is established for a Multi-Input, Multi-Output (MIMO) reaction system.
Overall, Yan Ma's research shows promising directions for employing emerging data-driven and deep learning methods in manufacturing and biological systems. DRL is demonstrated to be an efficient algorithm in the study of three different reaction systems, with both stochastic and deterministic policies. The use of data-driven models in reaction simulation also shows promising results, owing to the non-linear expressiveness and fast computational speed of neural network models.
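    The clustering step on reduced RNA-seq embeddings can be sketched with a bare-bones k-means (Lloyd's algorithm). The points, cluster count, and naive initialization below are invented for illustration; in practice a library implementation (e.g. scikit-learn's KMeans, or HDBSCAN for the density-based variant) would be used:

```python
def kmeans(points, k, iters=20):
    """Plain k-means (Lloyd's algorithm) on 2-D points: assign each point to
    its nearest centroid, then move each centroid to its cluster's mean."""
    centroids = points[:k]  # naive deterministic init; k-means++ is better in practice
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda j: (p[0] - centroids[j][0]) ** 2
                                + (p[1] - centroids[j][1]) ** 2)
            clusters[j].append(p)
        # Recompute means; keep an empty cluster's centroid where it was.
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[j]
            for j, c in enumerate(clusters)
        ]
    return centroids

# Two well-separated toy point clouds stand in for reduced expression profiles.
pts = [(0.1, 0.2), (0.0, 0.0), (0.2, 0.1), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centers = sorted(kmeans(pts, 2))
print(centers)  # roughly [(0.1, 0.1), (5.03, 5.0)]
```

    Unlike k-means, HDBSCAN does not require choosing k and can label low-density points as noise, which is why the two are often tried side by side on the same embedding.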

    Bridging granularity gaps to decarbonize large-scale energy systems - The case of power system planning

    The comprehensive evaluation of strategies for decarbonizing large-scale energy systems requires insights from many different perspectives. In energy systems analysis, optimization models are widely used for this purpose. However, they are limited in their ability to incorporate all the crucial aspects of such a complex system undergoing sustainable transformation. Hence, they differ in their spatial, temporal, technological, and economic perspectives and either have a narrow focus with high resolution or a broad scope with little detail. Against this background, we introduce the so-called granularity gaps and discuss two possibilities to address them: increasing the resolution of the established optimization models, and different kinds of model coupling. After laying out open challenges, we propose a novel framework for designing power systems in particular. Our exemplary concept exploits the capabilities of power system optimization, transmission network simulation, distribution grid planning, and agent-based simulation. This integrated framework can serve to study the energy transition more comprehensively and may be a blueprint for similar multi-model analyses.

    Secrets of RLHF in Large Language Models Part I: PPO

    Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Their primary objective is to function as human-centric (helpful, honest, and harmless) assistants. Alignment with humans is of paramount significance, and reinforcement learning with human feedback (RLHF) has emerged as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include reward models to measure human preferences, Proximal Policy Optimization (PPO) to optimize policy model outputs, and process supervision to improve step-by-step reasoning capabilities. However, the challenges of reward design, environment interaction, and agent training, coupled with the huge trial-and-error cost of large language models, pose a significant barrier for AI researchers pursuing technical alignment and the safe deployment of LLMs. The stable training of RLHF remains a puzzle. In this first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising the PPO algorithm impact policy agent training. We identify policy constraints as the key factor in the effective implementation of the PPO algorithm. We therefore explore PPO-max, an advanced version of the PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of the abilities of RLHF models compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLM alignment; we therefore release technical reports, reward models, and PPO code.
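    The policy constraint at the heart of PPO is its clipped probability ratio. A minimal, generic rendering of the clipped surrogate loss (standard PPO as commonly described, not the paper's PPO-max variant) looks like:

```python
import math

def ppo_clip_loss(logp_new, logp_old, advantage, eps=0.2):
    """Clipped surrogate loss (negative of the PPO objective): the ratio of
    new to old action probabilities is clipped to [1-eps, 1+eps] so a single
    update cannot move the policy far from the one that collected the data."""
    ratio = math.exp(logp_new - logp_old)
    unclipped = ratio * advantage
    clipped = max(min(ratio, 1 + eps), 1 - eps) * advantage
    return -min(unclipped, clipped)

# With a positive advantage, gains from pushing the ratio above 1+eps are cut off:
loss = ppo_clip_loss(logp_new=math.log(2.0), logp_old=0.0, advantage=1.0)
print(loss)  # -1.2: the ratio of 2.0 is clipped down to 1.2
```

    In practice this per-sample term is averaged over a batch and combined with value and entropy losses; the clipping is what the report's "policy constraint" refers to in its simplest form.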

    Exemplar-AMMs: Recognizing Crowd Movements From Pedestrian Trajectories

    In this paper, we present a novel method to recognize types of crowd movement from crowd trajectories using agent-based motion models (AMMs). Our idea is to apply a number of AMMs, referred to as exemplar-AMMs, to describe the crowd movement. Specifically, we propose an optimization framework that filters out the unknown noise in the crowd trajectories and measures their similarity to the exemplar-AMMs to produce a crowd motion feature. We then formulate the real-world crowd movement recognition task as a multi-label classification problem. Our experiments show that the proposed feature outperforms state-of-the-art methods in recognizing both simulated and real-world crowd movements from their trajectories. Finally, we have created a synthetic dataset, SynCrowd, which contains 2D crowd trajectories in various scenarios generated by various crowd simulators. This dataset can serve as a training set or benchmark for crowd analysis work.

    Convex Optimization and Online Learning: Their Applications in Discrete Choice Modeling and Pricing

    University of Minnesota Ph.D. dissertation. May 2018. Major: Industrial and Systems Engineering. Advisors: Shuzhong Zhang, Zizhuo Wang. 1 computer file (PDF); ix, 129 pages. The discrete choice model has been an important tool for modeling customers' demand when they face a set of substitutable choices. The random utility model, the most commonly used discrete choice framework, assumes that the utility of each alternative is random and follows a prescribed distribution. Owing to the popularity of the random utility model, the probabilistic approach has been the major method for constructing and analyzing choice models. In recent years, several choice frameworks based on convex optimization have been studied; among them, the most widely used are the representative agent model and the semi-parametric choice model. In this dissertation, we first study a special class of the semi-parametric choice model, the cross moment model (CMM), and reformulate it as a representative agent model. We also propose an efficient algorithm to calculate the choice probabilities in the CMM. Then, motivated by this reformulation, we propose a new choice framework, the welfare-based choice model, and establish its equivalence to the other two frameworks: the representative agent model and the semi-parametric choice model. Lastly, motivated by the multi-product pricing problem, an important application of discrete choice models, we develop an online learning framework whose learning problem shares similarities with multi-product pricing. We propose efficient online learning algorithms and establish convergence rate results for them. The main techniques underlying our studies are continuous optimization and convex analysis.
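    For the random utility model mentioned above, the workhorse special case is the multinomial logit: with i.i.d. Gumbel noise on the utilities, the choice probabilities reduce to a softmax over the deterministic utilities. A small self-contained sketch (the utilities below are invented for illustration):

```python
import math

def mnl_probabilities(utilities):
    """Multinomial logit choice probabilities: under the random utility model
    with i.i.d. Gumbel noise, the probability of choosing alternative i is
    exp(u_i) / sum_j exp(u_j)."""
    m = max(utilities)  # subtract the max for numerical stability
    weights = [math.exp(u - m) for u in utilities]
    z = sum(weights)
    return [w / z for w in weights]

# Three substitutable products with assumed deterministic utilities:
probs = mnl_probabilities([1.0, 1.0, 0.0])
print([round(p, 3) for p in probs])  # [0.422, 0.422, 0.155]
```

    The convex-optimization frameworks the dissertation studies generalize exactly this map from utilities to choice probabilities beyond the Gumbel assumption.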