1,148 research outputs found
Deep Reinforcement Learning with Weighted Q-Learning
Overestimation of the maximum action-value is a well-known problem that
hinders Q-Learning performance, leading to suboptimal policies and unstable
learning. Among several Q-Learning variants proposed to address this issue,
Weighted Q-Learning (WQL) effectively reduces the bias and shows remarkable
results in stochastic environments. WQL uses a weighted sum of the estimated
action-values, where the weights correspond to the probability of each
action-value being the maximum; however, the computation of these probabilities
is only practical in the tabular setting. In this work, we provide the
methodological advances needed to benefit from the WQL properties in Deep
Reinforcement Learning (DRL), using neural networks with Dropout Variational
Inference as an effective approximation of deep Gaussian processes. In
particular, we adopt the Concrete Dropout variant to obtain calibrated
estimates of epistemic uncertainty in DRL. We show that model uncertainty in
DRL can be useful not only for action selection, but also for action evaluation. We
analyze how the novel Weighted Deep Q-Learning algorithm reduces the bias
w.r.t. relevant baselines and provide empirical evidence of its advantages on
several representative benchmarks.
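The WQL estimator described above can be sketched numerically: given posterior samples of the action-values (e.g., from several dropout forward passes), each action's weight is the fraction of samples in which that action attains the maximum. The sampling setup below is synthetic and purely illustrative, not the paper's network:

```python
import numpy as np

def weighted_q_estimate(q_samples):
    """Weighted Q-Learning target: a weighted sum of mean action-values,
    where each weight is the probability that the action is the argmax.

    q_samples: array of shape (n_samples, n_actions) -- posterior samples
    of the action-values (e.g., from dropout forward passes).
    """
    n_samples, n_actions = q_samples.shape
    # w[a] = fraction of samples in which action a attains the maximum
    argmax_counts = np.bincount(q_samples.argmax(axis=1), minlength=n_actions)
    weights = argmax_counts / n_samples
    # Weighted sum of the mean action-value estimates
    return np.dot(weights, q_samples.mean(axis=0))

rng = np.random.default_rng(0)
samples = rng.normal(loc=[1.0, 1.2, 0.8], scale=0.5, size=(1000, 3))
print(weighted_q_estimate(samples))
```

Because the weights form a convex combination, the estimate lies between the smallest and largest mean action-value, which is how WQL avoids the systematic overestimation of taking a plain maximum.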
OddAssist - An eSports betting recommendation system
It is globally accepted that sports betting has been around for as long as sport itself. Back in
the 1st century, circuses hosted chariot races and fans would bet on who they thought would
emerge victorious. With the evolution of technology, sports evolved and, mainly, the
bookmakers evolved. Due to the mass digitization, these houses are now available online, from
anywhere, which makes this market inherently more tempting. In fact, this transition has
propelled the sports betting industry into a multi-billion-dollar industry that can rival the sports
industry.
Similarly, younger generations are increasingly attached to the digital world, including
electronic sports – eSports. In fact, young men are more likely to follow eSports than traditional
sports. Counter-Strike: Global Offensive, the video game on which this dissertation focuses, is
one of the pillars of this industry: during 2022, 15 million dollars were distributed in
tournament prizes and viewership peaked at 2 million concurrent viewers. This factor, combined
with the digitization of bookmakers, makes the eSports betting market extremely appealing for
exploring machine learning techniques, since young people who follow this type of sport also
find it easy to bet online.
In this dissertation, a betting recommendation system is proposed, implemented, tested, and
validated that considers the match history of each team, the odds of several bookmakers, and
the general sentiment of fans in a discussion forum.
The individual machine learning models achieved strong results on their own. More specifically,
the match history model reached an accuracy of 66.66% with an expected calibration error of
2.10%, and the bookmaker odds model reached an accuracy of 65.05% with an expected calibration
error of 2.53%.
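The expected calibration error reported for each model can be computed by binning predictions by confidence and averaging the gap between accuracy and confidence per bin. A minimal sketch, with hypothetical inputs rather than the dissertation's data:

```python
import numpy as np

def expected_calibration_error(probs, outcomes, n_bins=10):
    """Expected calibration error: bin predictions by confidence and
    average the |accuracy - confidence| gap, weighted by bin size.

    probs: predicted probability of the favoured outcome
    outcomes: 1 if that outcome happened, else 0
    """
    probs, outcomes = np.asarray(probs, float), np.asarray(outcomes, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (probs > lo) & (probs <= hi)
        if mask.any():
            gap = abs(outcomes[mask].mean() - probs[mask].mean())
            ece += mask.mean() * gap  # weight each bin by its share
    return ece

# Perfectly calibrated toy case: 0.5 confidence, 50% hit rate -> ECE 0
print(expected_calibration_error([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 0]))
```

A low ECE matters here because the betting simulation compares model probabilities directly against bookmaker odds, so miscalibrated probabilities translate into mispriced bets.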
Combining the models through stacking increased the accuracy to 67.62% but worsened the
expected calibration error to 5.19%. On the other hand, merging the datasets and training a
new, stronger model on that data improved the accuracy to 66.81% and had an expected
calibration error of 2.67%.
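The stacking step described above can be sketched as follows. The labels and base-model probabilities here are synthetic stand-ins for the three real systems (history, odds, sentiment), and scikit-learn's LogisticRegression plays the meta-model; the dissertation's actual features are not reproduced:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)  # synthetic match outcomes

# Hypothetical base-model win probabilities, loosely informative of y
base_probs = np.clip(y[:, None] * 0.3 + rng.random((n, 3)) * 0.7, 0, 1)

# Meta-model trained on the base models' outputs (stacking)
meta = LogisticRegression().fit(base_probs, y)
stacked = meta.predict_proba(base_probs)[:, 1]
```

In practice the meta-model should be trained on out-of-fold base predictions to avoid leakage; overfitting at this stage is one plausible reason stacking improved accuracy while worsening calibration.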
The solution is thoroughly tested in a betting simulation spanning 2,500 matches. The
system’s final odd is compared with the bookmakers’ odds and the expected long-term
return is computed; a bet is placed whenever that return exceeds a certain threshold. This
strategy, called positive expected value betting, was applied at multiple thresholds and the
results were compared.
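The positive expected value rule reduces to a one-line computation: with decimal odds, the expected return per unit staked is the model probability times the odds, minus one. A minimal sketch (threshold value illustrative):

```python
def expected_value(model_prob, bookmaker_odds):
    """Expected long-term return per unit staked, decimal odds: p * odds - 1."""
    return model_prob * bookmaker_odds - 1.0

def should_bet(model_prob, bookmaker_odds, threshold=0.05):
    """Bet only when the expected value clears the minimum threshold."""
    return expected_value(model_prob, bookmaker_odds) > threshold

print(should_bet(0.60, 1.90))  # EV = 0.14 -> True
print(should_bet(0.50, 1.90))  # EV = -0.05 -> False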
While the stacking solution did not perform well in a betting environment, the match history
model prevailed with profits from 8% to 90%; the odds model had profits ranging from 13% to
211%; and the dataset merging solution profited from 11% to 77%, all depending on the minimum
expected value thresholds.
Therefore, this work produced several machine learning approaches capable of profiting
long-term from Counter-Strike: Global Offensive bets.
It is globally accepted that sports betting has existed for as long as sport itself. As early
as the first century, circuses hosted chariot races and fans would bet on who they thought
would emerge victorious, much like today's horse races. As technology evolved, sports evolved
and, above all, so did the bookmakers. With the wave of mass digitization, these houses became
available online, from anywhere, which makes this market inherently more tempting. In fact,
this transition has propelled the sports betting industry into a multi-billion-dollar industry
that can now rival the sports industry itself.
Similarly, younger generations are increasingly attached to the digital world, including
electronic sports (eSports). Counter-Strike: Global Offensive, the video game on which this
dissertation focuses, is one of the main drivers of this industry: during 2022, 15 million
dollars were distributed in tournament prizes and viewership peaked at 2 million concurrent
viewers. Although this reality is less pronounced in Portugal, in several countries young
adult men are more likely to follow eSports than traditional sports. This factor, combined
with the digitization of bookmakers, makes the eSports betting market very appealing for
exploring machine learning techniques, since young people who follow this type of sport also
find it easy to bet online.
This dissertation proposes, implements, tests, and validates a betting recommendation system
that considers each team's match history, the odds of several bookmakers, and the general
sentiment of fans in a discussion forum (HLTV). To this end, three machine learning systems
were initially developed.
To evaluate these systems, the period from October 2020 to March 2023 was considered,
corresponding to 2,500 matches. Over such an extensive test period, however, team
competitiveness varies considerably. To keep the models from becoming outdated during
testing, they were retrained at least once a month throughout the test period.
The first machine learning system predicts from previous results, i.e., the match history
between the teams. The best solution incorporated the players in the prediction, together
with the team ranking, and gave more weight to the most recent matches. This approach, using
logistic regression, achieved an accuracy of 66.66% with an expected calibration error of
2.10%.
The second system compiles the odds of the various bookmakers and makes predictions based on
patterns in their movements. Incorporating the individual bookmakers achieved an accuracy of
65.88% using logistic regression; however, that model was worse calibrated than the one using
the average of the odds with a gradient boosting machine, which showed an accuracy of 65.06%
but better calibration metrics, with an expected calibration error of 2.53%.
The third system is based on fan sentiment in the HLTV forum. First, GPT 3.5 is used to
extract the sentiment of each comment, with an overall accuracy of 84.28%; considering only
the comments classified as conclusive, the accuracy rises to 91.46%. Once classified, the
comments are passed to a support vector machine model that incorporates the commenter and
their accuracy in previous matches. This solution correctly predicted only 59.26% of cases,
with an expected calibration error of 3.22%.
To aggregate the predictions of these three models, two approaches were tested. First,
training a new model on the predictions of the others (stacking) obtained an accuracy of
67.62%, but with an expected calibration error of 5.19%. In the second approach, the data
used to train the three individual models are merged, and a new model is trained on this
more complex dataset. This approach, using a support vector machine, obtained a lower
accuracy of 66.81% but also a lower expected calibration error of 2.67%.
Finally, the approaches are put to the test in a betting simulator, where each system makes
a prediction and compares it with the odds offered by the bookmakers. The simulation is run
for several minimum expected-return thresholds, and the systems only bet when the expected
rate of return implied by the odds exceeds the threshold. The system's final odd is compared
with the bookmakers' odds and, if some bookmaker offers a higher odd, a bet is placed. This
strategy is called positive expected value betting: betting on odds that are too high given
the probability of the outcome, which generates profit in the long run. In this simulation,
the best results for a minimum threshold of 5% came from the models built on the bookmakers'
odds, with profits between 13% and 211%; the match history model profited between 8% and
90%; and finally, the combined model profited between 11% and 77%.
Thus, this work produced several machine-learning-based systems capable of long-term profit
from betting on Counter-Strike: Global Offensive.
A data mining approach to predict probabilities of football matches
With the increasing growth of the amounts wagered on sports competitions, it is important to
verify how far machine learning techniques can bring value to this area. The performance of
state-of-the-art algorithms is evaluated according to several metrics, within the CRISP-DM
methodology, which is followed from data acquisition via web scraping through feature
generation and selection. The universe of ensemble techniques is also explored in an attempt
to improve the models from the point of view of the bias-variance trade-off, with a special
focus on neural network ensembles.
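The variance-reducing idea behind the neural network ensembles mentioned above can be sketched in a few lines: average the class-probability outputs of independently trained members. The member probabilities below are hypothetical, not the dissertation's models:

```python
import numpy as np

def ensemble_probabilities(member_probs):
    """Average class-probability outputs of independently trained models;
    averaging reduces the variance component of the error while leaving
    the bias largely unchanged."""
    return np.mean(member_probs, axis=0)

# Hypothetical home/draw/away probabilities from three ensemble members
members = np.array([
    [0.50, 0.30, 0.20],
    [0.55, 0.25, 0.20],
    [0.45, 0.30, 0.25],
])
print(ensemble_probabilities(members))
```

Since each row sums to one, the averaged vector is still a valid probability distribution.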
Prediction of high-performance concrete compressive strength through a comparison of machine learning techniques
Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Data Science.
High-performance concrete (HPC) is a highly complex composite material whose characteristics are extremely difficult to model. One of those characteristics is the concrete compressive strength, a nonlinear function of the same ingredients that compose HPC: cement, fly ash, blast furnace slag, water, superplasticizer, age, and coarse and fine aggregates. Research has shown time and time again that concrete strength is not determined just by the water-to-cement ratio, which was for years the go-to metric. In addition, traditional methods that attempt to model HPC, such as regression analysis, do not provide sufficient prediction power due to the nonlinear properties of the mixture. Therefore, this study attempts to optimize the prediction and modeling of the compressive strength of HPC by analyzing seven different machine learning (ML) algorithms: three regularization algorithms (Lasso, Ridge and Elastic Net), three ensemble algorithms (Random Forest, Gradient Boost and AdaBoost), and Artificial Neural Networks. All techniques were built and tested with a dataset composed of data from 17 different concrete strength test laboratories, under the same experimental conditions, which enabled a fair comparison amongst them and between different previous studies in the field. Feature importance analysis and outlier analysis were also performed, and all models were subject to a Wilcoxon Signed-Ranks Test to ensure statistically significant results. The final results show that the more complex ML algorithms provided greater accuracy than the regularization techniques, with Gradient Boost being the superior model amongst them, providing more accurate predictions than the state-of-the-art. Better results were achieved using all variables and without removing outlier observations.
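A paired Wilcoxon signed-rank comparison of the kind described can be run with SciPy; the per-fold errors below are synthetic stand-ins for two models evaluated on the same 17 laboratory datasets:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(42)
# Hypothetical per-laboratory prediction errors of two models (paired)
errors_gboost = rng.normal(4.0, 0.5, size=17)
errors_lasso = rng.normal(6.0, 0.5, size=17)

# Non-parametric paired test: are the differences symmetric about zero?
stat, p_value = wilcoxon(errors_gboost, errors_lasso)
print(f"W={stat:.1f}, p={p_value:.4f}")
```

The test is preferred over a paired t-test here because it makes no normality assumption about the error differences.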
Gaining Insight into Determinants of Physical Activity using Bayesian Network Learning
Contains fulltext: 228326pre.pdf (preprint version, Open Access); 228326pub.pdf (publisher's version, Open Access). BNAIC/BeneLearn 202
Bayesian Learning in the Counterfactual World
Recent years have witnessed a surging interest towards the use of machine learning tools for causal inference. In contrast to the usual large data settings where the primary goal is prediction, many disciplines, such as the health, economic and social sciences, are instead interested in causal questions. Learning individualized responses to an intervention is a crucial task in many applied fields (e.g., precision medicine, targeted advertising, precision agriculture, etc.) where the ultimate goal is to design optimal and highly-personalized policies based on individual features. In this work, I thus tackle the problem of estimating causal effects of an intervention that are heterogeneous across a population of interest and depend on an individual set of characteristics (e.g., a patient's clinical record, a user's browsing history, etc.) in high-dimensional observational data settings. This is done by utilizing Bayesian Nonparametric or Probabilistic Machine Learning tools that are specifically adjusted for the causal setting and have desirable uncertainty quantification properties, with a focus on the issues of interpretability/explainability and inclusion of domain experts' prior knowledge. I begin by introducing terminology and concepts from causality and causal reasoning in the first chapter. Then I include a literature review of some of the state-of-the-art regression-based methods for heterogeneous treatment effects estimation, with an attempt to build a unifying taxonomy and lay out the finite-sample empirical properties of these models.
The chapters forming the core of the dissertation instead present novel methods addressing existing issues in individualized causal effects estimation: Chapter 3 develops both a Bayesian tree ensemble method and a deep learning architecture to tackle interpretability, uncertainty coverage and targeted regularization; Chapter 4 instead introduces a novel multi-task Deep Kernel Learning method particularly suited for multi-outcome, multi-action scenarios. The last chapter concludes with a discussion.
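As a minimal illustration of heterogeneous treatment effect estimation (not the Bayesian tree-ensemble or Deep Kernel Learning methods the dissertation develops), a T-learner fits one outcome model per treatment arm and takes the difference of their predictions as the individualized effect. Everything below is synthetic, with ordinary least squares standing in for the arm models:

```python
import numpy as np

def t_learner_cate(X, y, treated):
    """T-learner sketch: fit one outcome model per treatment arm and take
    the difference of predictions as the conditional average treatment
    effect (CATE). Both arms use ordinary least squares for simplicity."""
    def fit(Xa, ya):
        Xb = np.column_stack([np.ones(len(Xa)), Xa])  # add intercept
        beta, *_ = np.linalg.lstsq(Xb, ya, rcond=None)
        return beta
    b1 = fit(X[treated], y[treated])      # treated-arm outcome model
    b0 = fit(X[~treated], y[~treated])    # control-arm outcome model
    Xb = np.column_stack([np.ones(len(X)), X])
    return Xb @ b1 - Xb @ b0              # estimated CATE per individual

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
treated = rng.random(500) < 0.5
# True effect is heterogeneous: 1 + 2 * X[:, 0]
y = X @ [0.5, -0.3] + treated * (1 + 2 * X[:, 0]) + rng.normal(0, 0.1, 500)
cate = t_learner_cate(X, y, treated)
```

The Bayesian methods in the dissertation replace these point estimates with posterior distributions, yielding uncertainty intervals around each individual's effect.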