Consensus-based approach to peer-to-peer electricity markets with product differentiation
With the sustained deployment of distributed generation capacities and the
more proactive role of consumers, power systems and their operation are
drifting away from a conventional top-down hierarchical structure. Electricity
market structures, however, have not yet embraced that evolution. Respecting
the high-dimensional, distributed and dynamic nature of modern power systems
would translate to designing peer-to-peer markets or, at least, to using such
an underlying decentralized structure to enable a bottom-up approach to future
electricity markets. A peer-to-peer market structure based on a Multi-Bilateral
Economic Dispatch (MBED) formulation is introduced, allowing for
multi-bilateral trading with product differentiation, for instance based on
consumer preferences. A Relaxed Consensus+Innovation (RCI) approach is
described to solve the MBED in a fully decentralized manner. A set of realistic
case studies and their analysis allow us to show that such peer-to-peer market
structures can effectively yield market outcomes that differ from those of
centralized market structures and that are optimal in terms of respecting
consumer preferences while maximizing social welfare. Additionally, the RCI
solving approach allows for a fully decentralized market clearing which converges
with a negligible optimality gap, with a limited amount of information being shared.
Comment: Accepted for publication in IEEE Transactions on Power Systems
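The paper's RCI algorithm is not reproduced here, but the flavour of decentralized bilateral clearing can be sketched as a toy price negotiation between one producer and one consumer, each best-responding to the current price while the price moves to shrink the supply/demand mismatch. All cost and utility coefficients below are hypothetical, chosen only for illustration:

```python
# Toy decentralized bilateral clearing via iterative price updates.
# NOT the paper's Relaxed Consensus+Innovation method; a minimal
# dual-ascent sketch with hypothetical quadratic cost/utility terms.

def clear_bilateral(a_prod=0.5, b_prod=0.0, a_cons=0.5, b_cons=10.0,
                    step=0.1, iters=2000):
    """Producer cost: a_prod*q^2 + b_prod*q.
    Consumer utility: b_cons*q - a_cons*q^2.
    Each peer reacts only to the posted price; no central operator."""
    price = 0.0
    for _ in range(iters):
        # Producer's profit-maximizing quantity at the current price.
        supply = max(0.0, (price - b_prod) / (2 * a_prod))
        # Consumer's utility-maximizing quantity at the current price.
        demand = max(0.0, (b_cons - price) / (2 * a_cons))
        # "Innovation" step: nudge the price toward balance.
        price += step * (demand - supply)
    return price, supply, demand
```

With the coefficients above the iteration contracts toward the price at which supply equals demand, mimicking how the decentralized clearing converges with only price/quantity signals exchanged.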
Consecutive Sequential Probability Ratio Tests of Multiple Statistical Hypotheses
In this paper, we develop a simple approach for testing multiple statistical
hypotheses based on the observations of a number of probability ratios
enumerated consecutively with respect to the index of hypotheses. Explicit and
tight bounds for the probability of making wrong decisions are obtained for
choosing appropriate parameters for the proposed tests. In the special case of
testing two hypotheses, our tests reduce to Wald's sequential probability ratio
tests.
Comment: 29 pages, no figure; the main results of this paper have appeared in
Proceedings of SPIE Conferences, Baltimore, Maryland, April 24-27, 201
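In the two-hypothesis special case the procedure reduces to Wald's sequential probability ratio test; a minimal sketch for Bernoulli observations follows, with the decision thresholds taken from the standard Wald approximations. The parameter values are illustrative, not from the paper:

```python
import math

def sprt(samples, p0=0.5, p1=0.7, alpha=0.05, beta=0.05):
    """Wald's SPRT for Bernoulli data, H0: p = p0 vs H1: p = p1.
    alpha/beta are the target type-I/type-II error probabilities.
    Returns 'H0', 'H1', or 'undecided' if the data run out first."""
    lower = math.log(beta / (1 - alpha))        # accept-H0 threshold
    upper = math.log((1 - beta) / alpha)        # accept-H1 threshold
    llr = 0.0
    for x in samples:
        # Accumulate the log-likelihood ratio of this observation.
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "H1"
        if llr <= lower:
            return "H0"
    return "undecided"
```

A run of successes drives the log-likelihood ratio up toward the H1 boundary, a run of failures drives it down toward H0, and ambiguous data leave the test undecided until more observations arrive.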
Parameter Tuning Using Gaussian Processes
Most machine learning algorithms require us to set up their parameter values before applying these algorithms to solve problems. Appropriate parameter settings will bring good performance while inappropriate parameter settings generally result in poor modelling. Hence, it is necessary to acquire the "best" parameter values for a particular algorithm before building the model. The "best" model not only reflects the "real" function and is well fitted to existing points, but also gives good performance when making predictions for new points with previously unseen values.
A number of methods have been proposed to optimize parameter values. The basic idea of all such methods is a trial-and-error process, whereas the work presented in this thesis employs Gaussian process (GP) regression to optimize the parameter values of a given machine learning algorithm. In this thesis, we consider the optimization of only two-parameter learning algorithms. All the possible parameter values are specified in a 2-dimensional grid in this work. To avoid brute-force search, Gaussian Process Optimization (GPO) makes use of "expected improvement" to pick useful points rather than validating every point of the grid step by step. The point with the highest expected improvement is evaluated using cross-validation and the resulting data point is added to the training set for the Gaussian process model. This process is repeated until a stopping criterion is met. The final model is built using the learning algorithm based on the best parameter values identified in this process.
In order to test the effectiveness of this optimization method on regression and classification problems, we use it to optimize parameters of some well-known machine learning algorithms, such as decision tree learning, support vector machines and boosting with trees. Through the analysis of experimental results obtained on datasets from the UCI repository, we find that the GPO algorithm yields competitive performance compared with a brute-force approach, while exhibiting a distinct advantage in terms of training time and number of cross-validation runs. Overall, GPO is a promising method for the optimization of parameter values in machine learning.
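The loop described above — fit a GP to the evaluated parameter points, score the rest of the grid by expected improvement, evaluate the winner, repeat — can be sketched with a small NumPy-only implementation. The kernel length-scale, grid, and the quadratic stand-in for cross-validated error used in the test are all hypothetical choices, not the thesis's setup:

```python
import math
import numpy as np

def rbf(A, B, length=0.3):
    """Squared-exponential kernel between two point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP posterior mean and variance at candidate points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = 1.0 - np.sum(Ks * (Kinv @ Ks), axis=0)   # prior variance is 1
    return mu, np.clip(var, 1e-12, None)

_Phi = np.vectorize(lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

def expected_improvement(mu, var, best):
    """EI for minimization: expected margin by which a point beats 'best'."""
    s = np.sqrt(var)
    z = (best - mu) / s
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (best - mu) * _Phi(z) + s * pdf

def gpo(objective, grid, seed_idx, iters=15):
    """Evaluate seed points, then repeatedly evaluate the max-EI grid point."""
    chosen = list(seed_idx)
    y = np.array([objective(grid[i]) for i in chosen])
    for _ in range(iters):
        mu, var = gp_posterior(grid[chosen], y, grid)
        ei = expected_improvement(mu, var, y.min())
        ei[chosen] = -1.0                 # never re-evaluate a grid point
        nxt = int(np.argmax(ei))
        chosen.append(nxt)
        y = np.append(y, objective(grid[nxt]))
    best = int(np.argmin(y))
    return grid[chosen[best]], y[best]
```

Each `objective` call stands in for one cross-validation run, so the number of calls — seeds plus `iters` — is exactly the quantity GPO tries to keep far below the size of the full grid.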
Basics of Feature Selection and Statistical Learning for High Energy Physics
This document introduces basics in data preparation, feature selection and
learning basics for high energy physics tasks. The emphasis is on feature
selection by principal component analysis, information gain and significance
measures for features. As examples for basic statistical learning algorithms,
the maximum a posteriori and maximum likelihood classifiers are shown.
Furthermore, a simple rule based classification as a means for automated cut
finding is introduced. Finally two toolboxes for the application of statistical
learning techniques are introduced.
Comment: 12 pages, 8 figures. Part of the proceedings of the Track
'Computational Intelligence for HEP Data Analysis' at iCSC 200
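As a concrete instance of the feature-selection measures mentioned above, the information gain of a discrete feature can be computed directly from entropies. This is a generic sketch, not tied to any particular HEP dataset:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(labels) in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG = H(labels) - sum over values v of p(v) * H(labels | feature = v).
    High IG means knowing the feature value removes much label uncertainty."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == v]
        gain -= len(subset) / n * entropy(subset)
    return gain
```

A feature that perfectly separates the classes attains the full label entropy as its gain, while a feature independent of the labels scores zero — exactly the ranking criterion used for cut finding.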
Rule Induction by EDA with Instance-Subpopulations
In this paper, a new rule induction method using an EDA with instance-subpopulations is proposed. The proposed method introduces the notion of an instance-subpopulation: the set of individuals matching a given training instance. An EDA procedure is then carried out separately for each instance-subpopulation, and the individuals generated by each EDA procedure are merged to constitute the population of the next generation. We examined the proposed method on the Wisconsin Breast Cancer and Chess End-Game datasets. Comparisons with other algorithms show the effectiveness of the proposed method.
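The paper's exact EDA and its instance-subpopulation split are not specified here; a minimal PBIL-style EDA over binary strings illustrates the underlying estimate-and-sample loop, with the probability vector playing the role of the estimated distribution:

```python
import random

def pbil(fitness, n_bits, pop=50, lr=0.1, gens=60, seed=0):
    """Minimal PBIL-style EDA (a generic sketch, not the paper's method):
    keep a probability vector over bits, sample a population from it,
    and shift the probabilities toward the best sample each generation."""
    rng = random.Random(seed)
    p = [0.5] * n_bits                      # initial uniform distribution
    for _ in range(gens):
        samples = [[1 if rng.random() < pi else 0 for pi in p]
                   for _ in range(pop)]
        best = max(samples, key=fitness)    # fittest individual this generation
        p = [(1 - lr) * pi + lr * b for pi, b in zip(p, best)]
    return [round(pi) for pi in p]          # most probable string
```

In the rule-induction setting each bit would encode part of a candidate rule, and the paper's variant would run one such loop per instance-subpopulation before merging the sampled individuals.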
STRATEGY MANAGEMENT IN A MULTI-AGENT SYSTEM USING NEURAL NETWORKS FOR INDUCTIVE AND EXPERIENCE-BASED LEARNING
Intelligent agents and multi-agent systems prove to be a promising paradigm for solving problems in a distributed, cooperative way. Neural networks are a classical solution for ensuring the learning ability of agents. In this paper, we analyse a multi-agent system where agents use different training algorithms and different topologies for their neural networks, which they use to solve classification and regression problems provided by a user. Out of the three training algorithms under investigation, Backpropagation, Quickprop and Rprop, the first demonstrates inferior performance to the other two when considered in isolation. However, by optimizing the strategy of accepting or rejecting tasks, Backpropagation agents succeed in outperforming the other types of agents in terms of the total utility gained. This strategy is itself learned with a neural network, by processing the results of past experiences. Therefore, we show a way in which agents can use neural network models for both external purposes and internal ones.
Keywords: agents, learning, neural networks, strategy management, multi-agent system.
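The experience-based accept/reject strategy can be sketched with a linear least-squares model standing in for the strategy network: the agent fits past (task features, realized utility) pairs and accepts only tasks with positive predicted utility. The feature encoding and utility values below are hypothetical:

```python
import numpy as np

def fit_utility_model(features, utilities):
    """Least-squares stand-in for the experience-trained strategy network:
    predict a task's utility from its feature vector (plus a bias term)."""
    X = np.hstack([features, np.ones((len(features), 1))])
    w, *_ = np.linalg.lstsq(X, utilities, rcond=None)
    return w

def accept_task(w, task, threshold=0.0):
    """Accept the task only if its predicted utility clears the threshold."""
    x = np.append(task, 1.0)
    return float(x @ w) > threshold
```

The same interface works unchanged if the linear fit is swapped for an actual neural network, which is the design the paper investigates.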