Policy Improvement in Cribbage
Cribbage is a card game involving multiple methods of scoring, each of which receives varying emphasis over the course of a typical game. Reinforcement learning is a machine learning paradigm in which an agent learns to accomplish a task through direct experience, collecting rewards based on performance. In this thesis, reinforcement learning is applied to the game of cribbage to improve an agent's policy for combining multiple basic strategies according to the dynamic state of the game. By inspection, the agent learns a reasonable policy over the course of a million games, but no increase in performance was demonstrated.
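The setup the abstract describes, an agent choosing among basic strategies from the game state, can be illustrated with a minimal tabular Q-learning sketch. The strategy names, state encoding, and constants below are illustrative assumptions, not the thesis's actual implementation.

```python
import random
from collections import defaultdict

# Hypothetical strategy menu; the thesis's actual basic strategies,
# state features, and rewards are not reproduced here.
STRATEGIES = ["maximize_hand", "maximize_crib", "defensive_pegging"]

Q = defaultdict(float)                  # Q[(state, strategy)] -> value estimate
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # assumed hyperparameters

def choose(state):
    """Epsilon-greedy choice among the basic strategies."""
    if random.random() < EPSILON:
        return random.choice(STRATEGIES)
    return max(STRATEGIES, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One tabular Q-learning backup from an observed transition."""
    best_next = max(Q[(next_state, a)] for a in STRATEGIES)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

In the thesis's setting the reward would come from game outcomes; here it is left abstract.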
Cup Stacking in Graphs
Here we introduce a new game on graphs, called cup stacking, following a line
of what can be considered as $0$-, $1$-, or $2$-person games such as chip
firing, percolation, graph burning, zero forcing, cops and robbers, graph
pebbling, and graph pegging, among others. It can be more general, but the most
basic scenario begins with a single cup on each vertex of a graph. For a vertex
with $t$ cups on it, we can move all of its cups to a vertex at distance $t$ from
it, provided the second vertex already has at least one cup on it. The object
is to stack all cups onto some prescribed target vertex. We say that a graph
is stackable if this can be accomplished for all possible target vertices.
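As a quick illustration of the move just defined, here is a brute-force stackability check for small graphs; it is a sketch of the definition above, not an algorithm from the paper.

```python
def stackable_to(dist, target, cups):
    """Exhaustive search: can all cups be stacked onto `target`?
    `dist[u][v]` is the graph distance between u and v, and `cups`
    maps each vertex to its cup count (initially 1 everywhere).
    Legal move: a vertex holding t cups sends all of them to a vertex
    at distance exactly t that already holds at least one cup."""
    total = sum(cups.values())
    seen, todo = set(), [tuple(sorted(cups.items()))]
    while todo:
        state = todo.pop()
        if state in seen:
            continue
        seen.add(state)
        d = dict(state)
        if d.get(target, 0) == total:
            return True
        for u, t in d.items():
            for v in d:                      # v already holds a cup
                if v != u and dist[u][v] == t:
                    nxt = dict(d)
                    del nxt[u]
                    nxt[v] += t
                    todo.append(tuple(sorted(nxt.items())))
    return False

# Path 0-1-2: move vertex 1's cup to 0, then 0's two cups jump distance 2 onto 2.
dist = {0: {0: 0, 1: 1, 2: 2}, 1: {0: 1, 1: 0, 2: 1}, 2: {0: 2, 1: 1, 2: 0}}
print(stackable_to(dist, 2, {0: 1, 1: 1, 2: 1}))  # True
```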
In this paper we study cup stacking on many families of graphs, developing a
characterization of stackability in graphs and using it to prove the
stackability of complete graphs, paths, cycles, grids, the Petersen graph, many
Kneser graphs, some trees, cubes of dimension up to 20, "somewhat balanced"
complete $k$-partite graphs, and Hamiltonian diameter-two graphs. Additionally,
we use the Gallai-Edmonds Structure Theorem, the Edmonds Blossom Algorithm, and
the Hungarian algorithm to devise a polynomial-time algorithm to decide whether a
diameter-two graph is stackable.
Our proof that cubes up to dimension 20 are stackable uses Kleitman's
Symmetric Chain Decomposition and the new result of Merino, M\"utze, and
Namrata that all generalized Johnson graphs (excluding the Petersen graph) are
Hamiltonian. We conjecture that all cubes and higher-dimensional grids are
stackable, and leave the reader with several open problems, questions, and
generalizations.
Pion-Nucleon Scattering in a Large-N Sigma Model
We review the large-N_c approach to meson-baryon scattering, including recent
interesting developments. We then study pion-nucleon scattering in a particular
variant of the linear sigma-model, in which the couplings of the sigma and pi
mesons to the nucleon are echoed by couplings to the entire tower of I=J
baryons (including the Delta) as dictated by large-N_c group theory. We sum the
complete set of multi-loop meson-exchange
\pi N --> \pi N and \pi N --> \sigma N Feynman diagrams, to leading order in
1/N_c. The key idea, reviewed in detail, is that large-N_c allows the
approximation of LOOP graphs by TREE graphs, so long as the loops contain at
least one baryon leg; trees, in turn, can be summed by solving classical
equations of motion. We exhibit the resulting partial-wave S-matrix and the
rich nucleon and Delta resonance spectrum of this simple model, comparing not
only to experiment but also to pion-nucleon scattering in the Skyrme model. The
moral is that much of the detailed structure of the meson-baryon S-matrix which
hitherto has been uncovered only with skyrmion methods, can also be described
by models with explicit baryon fields, thanks to the 1/N_c expansion.Comment: This LaTeX file inputs the ReVTeX macropackage; figures accompany i
Generation and properties of random graphs and analysis of randomized algorithms
We study a new method of generating random $d$-regular graphs by
repeatedly applying an operation called pegging. The pegging
algorithm, which applies the pegging operation in each step, is a
method of generating large random regular graphs beginning with
small ones. We prove that the limiting joint distribution of the
numbers of short cycles in the resulting graph is independent
Poisson. We use the coupling method to bound the total variation
distance between the joint distribution of short cycle counts and
its limit, and thereby obtain an upper bound on the $\eps$-mixing
time. The coupling involves two different, though quite similar,
Markov chains that are not time-homogeneous. We also prove a lower
bound on the $\eps$-mixing time of matching order, which demonstrates
that the upper bound is essentially tight. We also study the
connectivity of random $d$-regular graphs generated by the pegging
algorithm, showing that these graphs are asymptotically almost
surely $d$-connected for any even constant $d$.
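For concreteness, here is a sketch of the pegging operation in its basic 4-regular form; the thesis treats the general even-degree version and the precise random choices, so the details below are an illustrative reading, not the thesis's algorithm.

```python
import random

def peg(edges, rng=random):
    """One pegging step on a 4-regular graph, given as a set of
    frozenset edges with integer vertices: pick two disjoint edges,
    delete them, and join a brand-new vertex to their four endpoints.
    Every old vertex keeps degree 4 and the new vertex gets degree 4."""
    while True:
        e1, e2 = rng.sample(list(edges), 2)
        if not (e1 & e2):          # the two edges must share no endpoint
            break
    new = max(max(e) for e in edges) + 1
    edges.discard(e1)
    edges.discard(e2)
    for v in e1 | e2:
        edges.add(frozenset((new, v)))
    return edges

# Starting from K5 (4-regular), repeated pegging grows larger 4-regular graphs:
g = {frozenset((i, j)) for i in range(5) for j in range(i + 1, 5)}
for _ in range(3):
    g = peg(g)
```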
The problem of orientation of random hypergraphs is motivated by the
classical load balancing problem. Let $h > w > 0$ be two fixed integers.
Let \orH be a hypergraph whose hyperedges are uniformly of size $h$.
To {\em $w$-orient} a hyperedge, we assign exactly $w$ of its
vertices positive signs with respect to this hyperedge, and the rest
negative. A $(w,k)$-orientation of \orH consists of a
$w$-orientation of all hyperedges of \orH, such that each vertex
receives at most $k$ positive signs from its incident hyperedges.
When $k$ is large enough, we determine the threshold of the
existence of a $(w,k)$-orientation of a random hypergraph. The
$(w,k)$-orientation of hypergraphs is strongly related to a general
version of the off-line load balancing problem.
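With the parameter names fixed as above, a small verifier makes the definition concrete; this is a definition-checker under those assumed names, not the threshold analysis itself.

```python
from collections import Counter

def is_wk_orientation(hyperedges, signs, w, k):
    """Check a (w, k)-orientation: signs[i] is the set of vertices of
    hyperedges[i] that receive a positive sign from that hyperedge.
    Valid iff every hyperedge assigns exactly w positive signs to its
    own vertices and no vertex collects more than k positives overall."""
    load = Counter()
    for edge, pos in zip(hyperedges, signs):
        if len(pos) != w or not pos <= set(edge):
            return False
        load.update(pos)
    return all(c <= k for c in load.values())

# Two size-3 hyperedges (h = 3), w = 2 positives each, capacity k = 1:
edges = [(1, 2, 3), (2, 3, 4)]
print(is_wk_orientation(edges, [{1, 2}, {3, 4}], w=2, k=1))  # True
```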
The other topic we discuss is computing the probability of induced
subgraphs in a random regular graph. Let $0 < s \le n$ and let $H$ be a graph
on $s$ vertices. For any $S \subseteq [n]$ with $|S| = s$, we compute the
probability that the subgraph of the random $d$-regular graph $\mathcal{G}(n,d)$
induced by $S$ is $H$. The result holds for a wide range of $d$ and is further
extended to $\mathcal{G}(n,\mathbf{d})$, the probability space of
random graphs with given degree sequence $\mathbf{d}$. This result
provides a basic tool for studying properties, for instance the
existence or the counts, of certain types of induced subgraphs.
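The probability in question can at least be estimated empirically. The sketch below samples random $d$-regular graphs with networkx and estimates the chance that a fixed vertex set induces a given graph $H$; it is a Monte Carlo illustration of the quantity (networkx's generator is only asymptotically uniform), not the thesis's formula.

```python
import networkx as nx

def induced_probability(n, d, S, H, trials=2000, seed=0):
    """Monte Carlo estimate of the probability that the subgraph of a
    random d-regular graph on n vertices induced by the fixed set S
    is (isomorphic to) H."""
    hits = 0
    for t in range(trials):
        G = nx.random_regular_graph(d, n, seed=seed + t)
        if nx.is_isomorphic(G.subgraph(S), H):
            hits += 1
    return hits / trials

# e.g., the chance that three fixed vertices of G(10, 3) induce exactly one edge:
H = nx.Graph([(0, 1)]); H.add_node(2)
print(induced_probability(10, 3, [0, 1, 2], H))
```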
An analysis of occupational information in three selected American geography textbooks.
Thesis (Ed.M.)--Boston University.
A Digital Twin Framework for Production Planning Optimization: Applications for Make-To-Order Manufacturers
In this dissertation, we develop a Digital Twin framework for manufacturing systems and apply it to various production planning and scheduling problems faced by Make-To-Order (MTO) firms. While this framework can be used to digitally represent a particular manufacturing environment with high fidelity, our focus is on using it to generate realistic settings in which to test production planning and scheduling algorithms. These algorithms have traditionally been tested either by translating a practical situation into the necessary modeling constructs, without discussion of the assumptions and inaccuracies underlying this translation, or by generating random instances of the modeling constructs, without assessing how accurately those instances represent production environments. The consequence has been a serious gap between theoretical advances and industry practice. The major goal of this dissertation is to develop a framework that allows for practical testing, evaluation, and implementation of new approaches for seamless industry adoption. We develop this framework as a modular software package, emphasizing practicality and configurability so that minimal modeling effort is required to apply it to a multitude of optimization problems and manufacturing systems. Throughout the dissertation, we emphasize the underlying scheduling problems, which provide the basis for additional operational decision making, and we focus on the computational evaluation and comparison of various modeling choices within the framework, with the objective of identifying models that are both effective and computationally efficient. In Part 1 of this dissertation, we consider a class of Production Planning and Execution problems faced by job-shop manufacturing systems. In Part 2, we consider a class of scheduling problems faced by manufacturers whose production system is dominated by a single operation.
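The "random instances of the modeling constructs" that the abstract contrasts with the Digital Twin approach typically look like the generator below; this is a generic illustration of that traditional testing practice, not part of the dissertation's framework, and all names are assumptions.

```python
import random

def random_job_shop(n_jobs, n_machines, p_range=(1, 10), seed=None):
    """Classic synthetic job-shop instance: each job visits every machine
    once in a random order, with integer processing times drawn uniformly
    from p_range. Instances of this style may misrepresent real MTO
    production environments, which is the gap the dissertation targets."""
    rng = random.Random(seed)
    jobs = []
    for _ in range(n_jobs):
        route = rng.sample(range(n_machines), n_machines)
        jobs.append([(m, rng.randint(*p_range)) for m in route])
    return jobs  # jobs[j] = list of (machine, processing_time) operations

instance = random_job_shop(n_jobs=5, n_machines=3, seed=42)
```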