Tego - A framework for adversarial planning
This study establishes a framework called β-Tego for a situation in which two agents are each given a set of players for a competitive game. Each agent places its players in an order. Players on each side at the same position in the order play one another, with an agent's score being the sum of its players' scores. The planning agents are permitted to simultaneously reorder their players in each of several stages; this reordering is termed competitive replanning. The resulting framework is scalable by changing the number of players and the complexity of the replanning process. The framework is demonstrated using iterated prisoner's dilemma on a set of twenty players. The system is first tested with one agent unable to change the order of its players, yielding an optimization problem, and then tested in a competitive co-evolution of planning agents. The optimization form of the system makes globally sensible assignments of players. The co-evolutionary version concentrates on matching particular high-payoff pairs of players, with the agents repeatedly reversing one another's assignments, while the majority of players, whose payoffs are smaller, are largely ignored.
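The matchup mechanism described above can be sketched in a few lines: each agent orders its players, players at the same index play one another, and an agent's score is the sum of its players' payoffs. The payoff table and fixed one-move "players" below are illustrative assumptions, not taken from the paper.

```python
# Sketch (assumed, not the paper's code) of the position-wise matchup
# scoring described in the abstract.

def score_ordering(order_a, order_b, payoff):
    """Pair players at matching positions and sum each side's payoffs."""
    total_a = total_b = 0
    for pa, pb in zip(order_a, order_b):
        sa, sb = payoff(pa, pb)
        total_a += sa
        total_b += sb
    return total_a, total_b

# Toy stand-in for a game: one round of prisoner's dilemma, where a
# "player" is just a fixed move, 'C' (cooperate) or 'D' (defect), and
# payoffs follow the standard matrix.
PD = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
      ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def payoff(a, b):
    return PD[(a, b)]

# Competitive replanning would amount to each agent permuting its
# order between stages to improve its total.
print(score_ordering(['C', 'D', 'C'], ['D', 'D', 'C'], payoff))  # (4, 9)
```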
Autogenerative Networks
Artificial intelligence powered by deep neural networks has seen tremendous improvements in the last decade, achieving superhuman performance on a diverse range of tasks. Many worry that it may one day develop the ability to recursively self-improve, leading to an intelligence explosion known as the Singularity. Autogenerative networks, or neural networks that generate neural networks, are one plausible pathway towards realizing this possibility. The object of this thesis is to study various challenges and applications of small-scale autogenerative networks in domains such as artificial life, reinforcement learning, neural network initialization and optimization, gradient-based meta-learning, and logical networks. Chapters 2 and 3 describe novel mechanisms for generating neural network weights and embeddings. Chapters 4 and 5 identify problems and propose solutions to optimization difficulties in differentiable mechanisms of neural network generation known as Hypernetworks. Chapters 6 and 7 study implicit models of network generation, such as backpropagating through gradient descent itself and integrating discrete solvers into continuous functions. Together, the chapters in this thesis contribute novel proposals for non-differentiable neural network generation mechanisms, significant improvements to existing differentiable network generation mechanisms, and an assimilation of different learning paradigms in autogenerative networks.
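The core idea of a hypernetwork mentioned above — one network producing the weights of another — can be illustrated with a minimal sketch. All shapes, names, and the linear generator are illustrative assumptions, not the thesis's actual architecture.

```python
import numpy as np

# Minimal illustration (assumed, not from the thesis) of a hypernetwork:
# a small generator maps a learned layer embedding to the flattened
# weight matrix of a target layer.

rng = np.random.default_rng(0)

emb_dim, in_dim, out_dim = 8, 4, 3

# Generator parameters: a linear map from embedding space to the
# flattened weight matrix of the target layer.
G = rng.normal(scale=0.1, size=(emb_dim, in_dim * out_dim))
z = rng.normal(size=emb_dim)          # layer embedding

# Generated weights for the target layer.
W = (z @ G).reshape(out_dim, in_dim)

def target_layer(x):
    """Target layer whose weights W were produced by the generator."""
    return np.tanh(W @ x)

y = target_layer(rng.normal(size=in_dim))
print(y.shape)  # (3,)
```

Because W is a differentiable function of G and z, gradients from the target network's loss can flow back into the generator; the optimization difficulties of exactly this setup are what Chapters 4 and 5 address.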