Assessing the Impact of Game Day Schedule and Opponents on Travel Patterns and Route Choice using Big Data Analytics
The transportation system is crucial for transferring people and goods from point A to point B. However, its reliability can be decreased by unanticipated congestion resulting from planned special events. For example, sporting events collect large crowds of people at specific venues on game days and disrupt normal traffic patterns.
The goal of this study was to understand issues related to road traffic management during major sporting events by using widely available INRIX data to compare travel patterns and behaviors on game days against those on normal days. A comprehensive analysis was conducted on the impact of all Nebraska Cornhuskers football games over five years on traffic congestion on five major routes in Nebraska. We attempted to identify hotspots: unusually high-risk zones in a spatiotemporal space where traffic congestion occurs on almost all game days. For hotspot detection, we utilized a method called Multi-EigenSpot, which is able to detect multiple hotspots in a spatiotemporal space. With this algorithm, we were able to detect traffic hotspot clusters on the five chosen routes in Nebraska. After detecting the hotspots, we identified the factors affecting the sizes of hotspots and other parameters. The start time of the game and the Cornhuskers’ opponent for a given game are two important factors affecting the number of people coming to Lincoln, Nebraska, on game days. Finally, the Dynamic Bayesian Networks (DBN) approach was applied to forecast the start times and locations of hotspot clusters in 2018 with a weighted mean absolute percentage error (WMAPE) of 13.8%.
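The WMAPE metric used above weights each forecast error by the magnitude of the actual values, which avoids the division-by-near-zero instability of plain MAPE. A minimal sketch (the numbers below are hypothetical, not the paper's data):

```python
def wmape(actual, forecast):
    """Weighted mean absolute percentage error: the sum of absolute
    forecast errors divided by the sum of absolute actual values."""
    num = sum(abs(a - f) for a, f in zip(actual, forecast))
    den = sum(abs(a) for a in actual)
    return num / den

# Hypothetical hotspot start times (minutes after kickoff) vs. forecasts.
actual = [60, 90, 120, 45]
forecast = [66, 80, 130, 50]
print(round(wmape(actual, forecast), 3))
```

Unlike MAPE, a single small actual value cannot dominate the score, since errors are pooled before normalizing.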
Assessing the Potential of Classical Q-learning in General Game Playing
After the recent groundbreaking results of AlphaGo and AlphaZero, we have
seen strong interests in deep reinforcement learning and artificial general
intelligence (AGI) in game playing. However, deep learning is
resource-intensive and the theory is not yet well developed. For small games,
simple classical table-based Q-learning might still be the algorithm of choice.
General Game Playing (GGP) provides a good testbed for reinforcement learning
to research AGI. Q-learning is one of the canonical reinforcement learning
methods, and has been used by (Banerjee & Stone, IJCAI 2007) in GGP. In this
paper we implement Q-learning in GGP for three small-board games (Tic-Tac-Toe,
Connect Four, Hex)\footnote{source code: https://github.com/wh1992v/ggp-rl}, to
allow comparison to Banerjee et al. We find that Q-learning converges to a
high win rate in GGP. For the ε-greedy strategy, we propose a first
enhancement, the dynamic ε algorithm. In addition, inspired by (Gelly &
Silver, ICML 2007), we combine online search (Monte Carlo Search) to
enhance offline learning, and propose QM-learning for GGP. Both enhancements
improve the performance of classical Q-learning. In this work, GGP allows us to
show, if augmented by appropriate enhancements, that classical table-based
Q-learning can perform well in small games.
Comment: arXiv admin note: substantial text overlap with arXiv:1802.0594
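The classical table-based setup the abstract refers to can be sketched as a Q-learning update paired with a decaying ("dynamic") ε-greedy policy. The class name, hyperparameters, and decay schedule below are illustrative assumptions, not the paper's exact algorithm:

```python
import random
from collections import defaultdict

class TabularQ:
    """Sketch of tabular Q-learning with a decaying epsilon-greedy
    policy; hyperparameter values are assumptions for illustration."""

    def __init__(self, alpha=0.1, gamma=0.99,
                 eps_start=1.0, eps_min=0.05, decay=0.999):
        self.q = defaultdict(float)            # (state, action) -> value
        self.alpha, self.gamma = alpha, gamma
        self.eps, self.eps_min, self.decay = eps_start, eps_min, decay

    def act(self, state, actions):
        if random.random() < self.eps:
            return random.choice(actions)      # explore
        return max(actions, key=lambda a: self.q[(state, a)])  # exploit

    def update(self, s, a, reward, s_next, next_actions):
        # Standard one-step Q-learning backup toward the TD target.
        best_next = max((self.q[(s_next, a2)] for a2 in next_actions),
                        default=0.0)
        td_target = reward + self.gamma * best_next
        self.q[(s, a)] += self.alpha * (td_target - self.q[(s, a)])
        # "Dynamic" epsilon: shrink exploration as learning progresses.
        self.eps = max(self.eps_min, self.eps * self.decay)
```

The dynamic-ε idea is simply that exploration is reduced over time rather than held fixed; the QM-learning enhancement would additionally replace random exploration moves with Monte Carlo Search playouts.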
Learning the Designer's Preferences to Drive Evolution
This paper presents the Designer Preference Model, a data-driven solution
that seeks to learn from user-generated data in a Quality-Diversity
Mixed-Initiative Co-Creativity (QD MI-CC) tool, with the aims of modelling the
user's design style to better assess the tool's procedurally generated content
with respect to that user's preferences. Through this approach, we aim to
increase the user's agency over the generated content in a way that neither
stalls the user-tool reciprocal stimuli loop nor fatigues the user with
periodical suggestion handpicking. We describe the details of this novel
solution, as well as its implementation in the MI-CC tool the Evolutionary
Dungeon Designer. We present and discuss our findings from the initial tests,
highlighting the open challenges for this combined line of research
that integrates MI-CC with Procedural Content Generation through Machine
Learning.
Comment: 16 pages, accepted and to appear in proceedings of the 23rd European
Conference on the Applications of Evolutionary and bio-inspired Computation,
EvoApplications 202
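One simple way to realize the core idea, scoring procedurally generated content against a learned model of the user's style, is to rank candidates by similarity to designs the user previously selected. This is a hypothetical stand-in for the paper's Designer Preference Model, not its actual architecture:

```python
import math

def preference_score(candidate, liked_designs):
    """Hypothetical preference model: score a generated candidate
    (a feature vector) by its maximum cosine similarity to designs
    the user previously handpicked."""
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0
    return max(cosine(candidate, d) for d in liked_designs)

# Rank generated candidates by the user's inferred style.
liked = [[1.0, 0.0], [0.8, 0.2]]
candidates = [[0.9, 0.1], [0.0, 1.0]]
ranked = sorted(candidates,
                key=lambda c: preference_score(c, liked), reverse=True)
```

Scoring candidates automatically like this is what lets the tool bias its suggestions without asking the user to handpick at every generation step.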
Hedonic Coalition Formation for Distributed Task Allocation among Wireless Agents
Autonomous wireless agents such as unmanned aerial vehicles or mobile base
stations present a great potential for deployment in next-generation wireless
networks. While current literature has been mainly focused on the use of agents
within robotics or software applications, we propose a novel usage model for
self-organizing agents suited to wireless networks. In the proposed model, a
number of agents are required to collect data from several arbitrarily located
tasks. Each task represents a queue of packets that require collection and
subsequent wireless transmission by the agents to a central receiver. The
problem is modeled as a hedonic coalition formation game between the agents and
the tasks that interact in order to form disjoint coalitions. Each formed
coalition is modeled as a polling system consisting of a number of agents which
move between the different tasks present in the coalition, collect and transmit
the packets. Within each coalition, some agents can also take the role of a
relay for improving the packet success rate of the transmission. The proposed
algorithm allows the tasks and the agents to take distributed decisions to join
or leave a coalition, based on the achieved benefit in terms of effective
throughput, and the cost in terms of delay. As a result of these decisions, the
agents and tasks structure themselves into independent disjoint coalitions
which constitute a Nash-stable network partition. Moreover, the proposed
algorithm allows the agents and tasks to adapt the topology to environmental
changes such as the arrival/removal of tasks or the mobility of the tasks.
Simulation results show that the proposed algorithm improves performance, in
terms of average player (agent or task) payoff, by at least 30.26% (for a
network of 5 agents with up to 25 tasks) relative to a scheme that allocates
nearby tasks equally among agents.
Comment: to appear, IEEE Transactions on Mobile Computing
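The distributed join/leave dynamics described above can be sketched as a switch rule: each player repeatedly deviates to the coalition that maximizes its own payoff, and when no player benefits from moving, the partition is Nash-stable. The payoff function below is a hypothetical placeholder (the paper derives utilities from throughput and delay), and convergence is assumed for this toy utility:

```python
def find_nash_stable(players, n_coalitions, payoff):
    """Switch-rule sketch for hedonic coalition formation.

    payoff(player, members) is a user-supplied hedonic utility. Each
    player greedily switches to its strictly best coalition; the loop
    ends when no player wants to deviate (a Nash-stable partition).
    """
    assign = {p: p % n_coalitions for p in players}
    while True:
        moved = False
        for p in players:
            def value(c):
                # Payoff p would receive if it joined coalition c.
                members = {q for q in players if assign[q] == c} | {p}
                return payoff(p, frozenset(members))
            best = max(range(n_coalitions), key=value)
            if value(best) > value(assign[p]):
                assign[p] = best
                moved = True
        if not moved:
            return assign

# Toy hedonic utility: every player prefers smaller coalitions.
partition = find_nash_stable([0, 1, 2], 3,
                             lambda p, members: 1.0 / len(members))
```

With this toy utility the players settle into singleton coalitions; in the paper's setting, the utility trades collected throughput against polling delay, so non-trivial agent-task coalitions form instead.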