
    Polyhedral Complementarity Approach to Equilibrium Problem in Linear Exchange Models

    A new development of an original approach to the equilibrium problem in a linear exchange model and its variations is presented. The conceptual basis of this approach is the scheme of polyhedral complementarity. The idea is fundamentally different from the well-known reduction to a linear complementarity problem; it may be treated as carrying the main idea of linear and quadratic programming methods over to the equilibrium problem. In this way, finite algorithms for finding the equilibrium prices are obtained. The whole process is a successive consideration of different structures of a possible solution, analogous to the basic sets of the simplex method. The approach reveals a decreasing property of the associated mapping whose fixed point yields the equilibrium of the model. The basic methods were generalized to several variations of the linear exchange model.
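
    The abstract's central object is a price mapping whose fixed point is the market equilibrium. As a loose illustration of that structure only (not the paper's polyhedral complementarity algorithm), the sketch below runs a plain tâtonnement iteration on a toy linear exchange economy; the function name and parameterisation are my own assumptions.

```python
import numpy as np

def linear_exchange_tatonnement(A, W, steps=5000, lr=0.01):
    """Toy tatonnement for a linear exchange economy.

    A[i, j] is agent i's utility coefficient for good j, and W[i, j] is
    agent i's endowment of good j.  Illustrative only; this is NOT the
    paper's polyhedral complementarity method.
    """
    n_agents, n_goods = A.shape
    p = np.full(n_goods, 1.0 / n_goods)      # start from uniform prices
    supply = W.sum(axis=0)
    for _ in range(steps):
        budgets = W @ p                      # b_i = p . w_i
        # An agent with linear utility spends its whole budget on the
        # good with the best utility-per-price ratio a_ij / p_j.
        best = (A / p).argmax(axis=1)
        demand = np.zeros(n_goods)
        for i in range(n_agents):
            demand[best[i]] += budgets[i] / p[best[i]]
        p = np.maximum(p + lr * (demand - supply), 1e-9)
        p /= p.sum()                         # renormalise onto the price simplex
    return p
```

    Tâtonnement need not converge for linear utilities (demand is discontinuous at ties), which is one motivation for the finite, simplex-like methods the paper develops instead.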

    A Game-Theoretic Approach to Energy Trading in the Smart Grid

    Electric storage units constitute a key element in the emerging smart grid system. In this paper, the interactions and energy trading decisions of a number of geographically distributed storage units are studied using a novel framework based on game theory. In particular, a noncooperative game is formulated between storage units, such as PHEVs or arrays of batteries, that are trading their stored energy. Here, each storage unit's owner can decide on the maximum amount of energy to sell in a local market so as to maximize a utility that reflects the tradeoff between the revenues from energy trading and the accompanying costs. Then, in this energy exchange market between the storage units and the smart grid elements, the price at which energy is traded is determined via an auction mechanism. The game is shown to admit at least one Nash equilibrium, and a novel algorithm that is guaranteed to reach such an equilibrium point is proposed. Simulation results show that the proposed approach yields significant performance improvements in terms of the average utility per storage unit, reaching up to 130.2% compared to a conventional greedy approach.
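
    To make the game-theoretic setup concrete, here is a minimal best-response iteration for a Cournot-style market in which a linear price function stands in for the paper's auction mechanism; all names and parameters below are illustrative assumptions, not the authors' model.

```python
import numpy as np

def best_response_dynamics(caps, costs, D=10.0, iters=200, tol=1e-8):
    """Best-response iteration for a simplified energy-selling game.

    Unit i sells a_i in [0, caps[i]]; the market price is p = D - sum(a)
    and unit i's utility is u_i = p * a_i - costs[i] * a_i**2.
    A sketch only; the paper prices energy via an auction instead.
    """
    n = len(caps)
    a = np.zeros(n)
    for _ in range(iters):
        a_old = a.copy()
        for i in range(n):
            others = a.sum() - a[i]
            # Maximiser of (D - a_i - others) * a_i - c_i * a_i**2:
            br = (D - others) / (2.0 * (1.0 + costs[i]))
            a[i] = np.clip(br, 0.0, caps[i])
        if np.abs(a - a_old).max() < tol:
            break            # no unit wants to deviate any further
    return a
```

    When the loop stops changing, no unit can improve its utility unilaterally, which is exactly the Nash equilibrium condition the paper establishes for its game.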

    A comparative study between the cubic spline and B-spline interpolation methods in free energy calculations

    Numerical methods are essential in computational science, as analytic calculations for large datasets are impractical. Using numerical methods, one can approximate a problem so that it can be solved with basic arithmetic operations. Interpolation is a commonly used method for, among other things, constructing the values of new data points within an interval of known data points. Furthermore, polynomial interpolation of sufficiently high degree can make the data set differentiable. One consequence of using high-degree polynomials is oscillatory behaviour towards the endpoints, also known as Runge's phenomenon. Spline interpolation overcomes this obstacle by connecting the data points in a piecewise fashion. However, its complex formulation requires nested iterations in higher dimensions, which is time-consuming. In addition, the calculations have to be repeated to compute each partial derivative at a data point, leading to further slowdown. B-spline interpolation is an alternative representation of the cubic spline method, in which the interpolant at a point is expressed as a linear combination of piecewise basis functions. It has been proposed that implementing this formulation can accelerate many scientific computing operations involving interpolation. Nevertheless, there is a lack of detailed comparison to back up this hypothesis, especially when it comes to computing the partial derivatives.

    Among many scientific research fields, free energy calculations particularly stand out for their use of interpolation methods. Numerical interpolation has been employed in free energy methods for many purposes, from calculating intermediate energy states to deriving forces from free energy surfaces. The results of these calculations can provide insight into reaction mechanisms and their thermodynamic properties. The free energy methods include biased flat histogram methods, which are especially promising due to their ability to accurately construct free energy profiles at the rarely visited regions of reaction spaces. Free Energies from Adaptive Reaction Coordinates (FEARCF), developed by Professor Kevin J. Naidoo, has many advantages over the other flat histogram methods. Because of its treatment of the atoms in reactions, FEARCF makes it easier to apply interpolation methods. It implements cubic spline interpolation to derive biasing forces from the free energy surface, driving the reaction towards regions with higher energy. A major drawback of the method is the slowdown experienced in higher dimensions due to the complicated nature of the cubic spline routine. If that routine is replaced by the more straightforward B-spline interpolation, sampling and generating free energy surfaces can be accelerated.

    This dissertation aims to perform a comparative study between the cubic spline and B-spline interpolation methods. First, data sets from analytic functions were used instead of numerical data to compare the accuracy of both methods and to compute their percentage errors, taking the functions themselves as reference. These functions were used to evaluate the performance of the two methods at the endpoints, at inflection points, and in regions with a steep gradient. Both interpolation methods generated identical approximated values, with percentage errors below the 1% threshold, although both performed poorly at the endpoints and the points of inflection. Increasing the number of interpolation knots reduced these errors; however, it caused overfitting in other regions. Although no significant speed-up was observed for univariate interpolation, the cubic spline suffered a drastic slowdown in higher dimensions, by factors of up to 10³ for 3D and 10⁵ for 4D interpolations. The same results applied to classical molecular dynamics simulations with FEARCF, with a speed-up of up to 10³ when B-spline interpolation was implemented. In conclusion, the B-spline interpolation method can enhance the efficiency of free energy calculations where cubic spline interpolation has been the method in use.
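
    The comparison at the heart of the dissertation is easy to reproduce in miniature with SciPy, which exposes both representations; the reference function and knot count below are illustrative choices of mine, not those of the study.

```python
import numpy as np
from scipy.interpolate import CubicSpline, make_interp_spline

# Knots sampled from an analytic reference function, mirroring the setup
# of taking the function itself as the accuracy reference.
x = np.linspace(-1.0, 1.0, 25)
y = 1.0 / (1.0 + 25.0 * x**2)                # Runge's function

cs = CubicSpline(x, y)                       # classical cubic spline
bs = make_interp_spline(x, y, k=3)           # same cubic fit, B-spline basis

xq = np.linspace(-1.0, 1.0, 1001)
print("max |cubic - B-spline|  :", np.abs(cs(xq) - bs(xq)).max())
print("max |derivative diff.|  :",
      np.abs(cs(xq, 1) - bs.derivative()(xq)).max())
```

    Both objects interpolate the same data with a cubic, so their values and derivatives should agree to near machine precision (both default to not-a-knot boundaries); the practical difference the dissertation measures is the cost of evaluation, especially of partial derivatives in higher dimensions.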

    Scheduling for a Processor Sharing System with Linear Slowdown

    We consider the problem of scheduling arrivals to a congestion system with a finite number of users having identical deterministic demand sizes. The congestion is of the processor sharing type in the sense that all users in the system at any given time are served simultaneously. However, in contrast to classical processor sharing congestion models, the processing slowdown is proportional to the number of users in the system at any time; that is, the rate of service experienced by all users decreases linearly with the number of users. For each user there is an ideal departure time (due date). A centralized scheduling goal is then to select arrival times so as to minimize the total penalty due to deviations from ideal times, weighted with sojourn times. Each deviation is assumed quadratic, or more generally convex. But due to the dynamics of the system, the scheduling objective function is non-convex; specifically, it is a non-smooth piecewise convex function. Nevertheless, we are able to leverage the structure of the problem to derive an algorithm that finds the global optimum in a (large but) finite number of steps, each involving the solution of a constrained convex program. Further, we put forward several heuristics. The first is the traversal of neighbouring constrained convex programs, which is guaranteed to reach a local minimum of the centralized problem; this is a form of "local search" in which we use the problem structure in a novel manner. The second is a one-coordinate "global search" used in coordinate pivot iteration. We then merge these two heuristics into a unified "local-global" heuristic and numerically illustrate its effectiveness.
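
    A short fluid simulation clarifies the linear-slowdown dynamics: between arrivals and departures, every user present drains work at a common rate that decreases with the crowd size. The parameterisation rate(n) = alpha - beta * n and all names below are my assumptions, not the paper's notation.

```python
import numpy as np

def departure_times(arrivals, demand=1.0, alpha=1.0, beta=0.05):
    """Fluid simulation of processor sharing with linear slowdown.

    Every user in the system is served simultaneously at rate
    max(alpha - beta * n, eps), where n is the current number of users.
    A sketch of the model class only, under assumed parameter names.
    """
    eps = 1e-9
    arrivals = np.sort(np.asarray(arrivals, dtype=float))
    remaining, done = {}, {}       # user -> residual demand / departure time
    t, i = 0.0, 0                  # clock and next-arrival index
    while i < len(arrivals) or remaining:
        n = len(remaining)
        rate = max(alpha - beta * n, eps) if n else 0.0
        t_dep = t + min(remaining.values()) / rate if n else np.inf
        t_arr = arrivals[i] if i < len(arrivals) else np.inf
        t_next = min(t_arr, t_dep)
        for u in remaining:                  # drain work at the shared rate
            remaining[u] -= rate * (t_next - t)
        t = t_next
        if t_arr <= t_dep:
            remaining[i] = demand            # a new user joins, slowing everyone
            i += 1
        else:
            u = min(remaining, key=remaining.get)
            done[u] = t                      # lowest-residual user departs
            del remaining[u]
    return np.array([done[u] for u in sorted(done)])
```

    Given due dates d, the centralized objective described in the abstract can then be evaluated along the lines of ((departure_times(arrivals) - d)**2).sum() (the sojourn-time weighting is omitted in this sketch), and it is this non-convex function of the arrival times that the paper's exact algorithm and heuristics optimize.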

    Cover Tree Bayesian Reinforcement Learning

    This paper proposes an online tree-based Bayesian approach for reinforcement learning. For inference, we employ a generalised context tree model. This defines a distribution over multivariate Gaussian piecewise-linear models, which can be updated in closed form. The tree structure itself is constructed using the cover tree method, which remains efficient in high-dimensional spaces. We combine the model with Thompson sampling and approximate dynamic programming to obtain effective exploration policies in unknown environments. The flexibility and computational simplicity of the model render it suitable for many reinforcement learning problems in continuous state spaces. We demonstrate this in an experimental comparison with least squares policy iteration.
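
    The paper's cover-tree model is too involved for a short sketch, but the Thompson-sampling exploration it relies on can be shown self-contained on a toy Gaussian bandit; everything below is a stand-in of mine, not the paper's continuous-state construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def thompson_gaussian_bandit(true_means, steps=2000, obs_var=1.0):
    """Minimal Thompson sampling on a Gaussian bandit.

    Posterior over each arm's mean reward is N(mu[k], var[k]) with a
    near-flat prior; a toy stand-in for the exploration mechanism the
    paper couples with its cover-tree model.
    """
    k = len(true_means)
    mu, var = np.zeros(k), np.full(k, 1e6)   # prior mean / variance per arm
    for _ in range(steps):
        theta = rng.normal(mu, np.sqrt(var)) # one posterior sample per arm
        a = int(theta.argmax())              # act greedily on the sample
        r = rng.normal(true_means[a], np.sqrt(obs_var))
        # Conjugate Gaussian update for the chosen arm:
        precision = 1.0 / var[a] + 1.0 / obs_var
        mu[a] = (mu[a] / var[a] + r / obs_var) / precision
        var[a] = 1.0 / precision
    return mu

print(thompson_gaussian_bandit([0.1, 0.5, 0.9]))
```

    The same principle drives the paper's method: sample one model from the posterior, act greedily with respect to it (via approximate dynamic programming in the continuous-state case), and update the posterior in closed form.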