OddAssist - An eSports betting recommendation system
It is globally accepted that sports betting has been around for as long as sport itself. Back in the 1st century, circuses hosted chariot races and fans would bet on who they thought would emerge victorious. As technology evolved, sports evolved and, above all, so did the bookmakers. Due to mass digitization, these houses are now available online, from anywhere, which makes this market inherently more tempting. In fact, this transition has propelled sports betting into a multi-billion-dollar industry that can rival the sports industry itself.
Similarly, younger generations are increasingly attached to the digital world, including electronic sports (eSports). In fact, young men are more likely to follow eSports than traditional sports. Counter-Strike: Global Offensive, the video game on which this dissertation focuses, is one of the pillars of this industry: during 2022, 15 million dollars were distributed in tournament prizes and viewership peaked at 2 million concurrent viewers. This factor, combined with the digitization of bookmakers, makes the eSports betting market extremely appealing for exploring machine learning techniques, since young people who follow this kind of sport also find it easy to bet online.
This dissertation proposes, implements, tests, and validates a betting recommendation system that considers the match history of each team, the odds of several bookmakers, and the general sentiment of fans in a discussion forum.
The individual machine learning models performed well on their own. Specifically, the match history model reached an accuracy of 66.66% with an expected calibration error of 2.10%, and the bookmaker odds model reached an accuracy of 65.05% with an expected calibration error of 2.53%.
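Expected calibration error (ECE) summarizes how far predicted probabilities drift from observed win rates. As an illustration only (the dissertation does not specify its binning scheme), here is a minimal equal-width-bin sketch:

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE: bin predictions by confidence, then average the gap between
    per-bin confidence and per-bin accuracy, weighted by bin size."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=int)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (probs > lo) & (probs <= hi)
        if in_bin.any():
            gap = abs(probs[in_bin].mean() - labels[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# Well-calibrated toy case: 0.8 predictions that win 8 times out of 10.
print(round(expected_calibration_error([0.8] * 10, [1] * 8 + [0] * 2), 4))  # → 0.0
# Overconfident toy case: 0.9 predictions that win only half the time.
print(round(expected_calibration_error([0.9] * 10, [1] * 5 + [0] * 5), 4))  # → 0.4
```

A low ECE matters here because the model's probabilities are compared directly against bookmaker odds, so miscalibration translates into mispriced bets.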
Combining the models through stacking increased the accuracy to 67.62% but worsened the
expected calibration error to 5.19%. On the other hand, merging the datasets and training a
new, stronger model on that data improved the accuracy to 66.81% and had an expected
calibration error of 2.67%.
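Stacking trains a meta-model on the base models' predictions. The following is a minimal sketch of the idea only, using hand-rolled logistic regression on synthetic stand-ins for the match-history and odds feature sets; the dissertation's actual features, models, and out-of-fold protocol are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logreg(X, y, lr=0.5, steps=300):
    """Plain logistic regression trained by full-batch gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def predict_proba(X, w):
    return 1.0 / (1.0 + np.exp(-X @ w))

# Synthetic stand-ins: one feature view per base model.
X = rng.normal(size=(400, 6))
y = (X[:, 0] + X[:, 3] + 0.3 * rng.normal(size=400) > 0).astype(float)
X_hist, X_odds = X[:, :3], X[:, 3:]

# Level 0: one base model per view.
w_hist = fit_logreg(X_hist, y)
w_odds = fit_logreg(X_odds, y)

# Level 1 (stacking): a meta-model trained on the base models' probabilities,
# plus an intercept column. (A faithful setup would use out-of-fold predictions.)
meta_X = np.column_stack([np.ones(len(y)),
                          predict_proba(X_hist, w_hist),
                          predict_proba(X_odds, w_odds)])
w_meta = fit_logreg(meta_X, y)
acc = ((predict_proba(meta_X, w_meta) > 0.5) == y).mean()
print(acc > 0.7)  # the combined model comfortably beats chance
```

The meta-model sees only the base probabilities, which is why stacking can raise accuracy while degrading calibration, as observed above.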
The solution is thoroughly tested in a betting simulation covering 2,500 matches. The system's final odds are compared with the bookmakers' odds and the expected long-term return is computed; a bet is placed only when that return exceeds a given threshold. This strategy, known as positive expected value betting, was applied at multiple thresholds and the results were compared.
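The betting rule above reduces to a one-line expected value computation on decimal odds. A minimal sketch, with the probability, odds, and thresholds chosen purely for illustration:

```python
def expected_value(model_prob, bookmaker_odds):
    """Expected return per unit staked, for decimal odds."""
    return model_prob * bookmaker_odds - 1.0

def should_bet(model_prob, bookmaker_odds, threshold=0.05):
    # Bet only when the expected long-term return clears the threshold.
    return expected_value(model_prob, bookmaker_odds) >= threshold

# Model gives a team a 60% chance; a bookmaker offers decimal odds of 1.90.
print(round(expected_value(0.60, 1.90), 2))       # → 0.14
print(should_bet(0.60, 1.90, threshold=0.05))     # → True
print(should_bet(0.60, 1.90, threshold=0.20))     # → False
```

Raising the threshold trades bet volume for bet quality, which is why the profit figures below are reported as ranges over thresholds.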
While the stacking solution did not perform profitably in a betting environment, the match history model prevailed with profits from 8% to 90%; the odds model had profits ranging from 13% to 211%; and the dataset merging solution profited from 11% to 77%, all depending on the minimum expected value threshold.
This work therefore produced several machine learning approaches capable of profiting from Counter-Strike: Global Offensive bets in the long term.

It is globally accepted that sports betting has been around for as long as sport itself. As early as the first century, circuses hosted chariot races and fans would bet on who they thought would emerge victorious, much like today's horse races. As technology evolved, sports evolved and, above all, so did the bookmakers. With the wave of mass digitization, these houses became available online, from anywhere, which makes this market inherently more tempting. In fact, this transition has propelled sports betting into a multi-billion-dollar industry that can now even rival the sports industry itself.

Similarly, younger generations are increasingly attached to the digital world, including electronic sports (eSports). Counter-Strike: Global Offensive, the video game on which this dissertation focuses, is one of the main drivers of this industry: during 2022, 15 million dollars were distributed in tournament prizes and viewership peaked at 2 million concurrent viewers. Although this reality is less pronounced in Portugal, in several countries young adult men are more likely to follow eSports than traditional sports. This factor, combined with the digitization of bookmakers, makes the eSports betting market very appealing for exploring machine learning techniques, since young people who follow this kind of sport also find it easy to bet online.

This dissertation proposes, implements, tests, and validates a betting recommendation system that considers the match history of each team, the odds of several bookmakers, and the general sentiment of fans in a discussion forum, HLTV. To this end, three machine learning systems were initially developed.

To evaluate them, the period from October 2020 to March 2023 was considered, corresponding to 2,500 matches. However, with such an extensive test period, the competitiveness of the teams varies considerably. To keep the models from becoming obsolete over this period, they were retrained at least once a month for the duration of the test period.

The first machine learning system predicts outcomes from previous results, that is, the match history between the teams. The best solution incorporated the players into the prediction, together with the team ranking, while giving more weight to the most recent matches. This approach, using logistic regression, achieved an accuracy of 66.66% with an expected calibration error of 2.10%.

The second system compiles the odds of the various bookmakers and makes predictions based on patterns in their variations. Here, incorporating the individual bookmakers achieved an accuracy of 65.88% using logistic regression; however, that model was worse calibrated than the one using the average of the odds with a gradient boosting machine, which exhibited an accuracy of 65.06% but better calibration metrics, with an expected calibration error of 2.53%.

The third system is based on fan sentiment in the HLTV forum. First, GPT 3.5 is used to extract the sentiment of each comment, with an overall accuracy of 84.28%; considering only the comments classified as conclusive, the accuracy rises to 91.46%. Once classified, the comments are passed to a support vector machine model that incorporates the commenter and their accuracy on previous matches. This solution correctly predicted only 59.26% of cases, with an expected calibration error of 3.22%.

Two approaches were tested to aggregate the predictions of these three models. The first trains a new model on the predictions of the others (stacking), obtaining an accuracy of 67.62% but an expected calibration error of 5.19%. The second instead aggregates the data used to train the three individual models and trains a new model on this more complex dataset. This approach, using a support vector machine, obtained a lower accuracy, 66.81%, but also a lower expected calibration error, 2.67%.

Finally, the approaches are put to the test in a betting simulator, where each system makes a prediction and compares it with the odds offered by the bookmakers. The simulation is run for several minimum expected return thresholds, and the systems only bet when the expected return of the odds exceeds the threshold. Each system's final odds are compared with the bookmakers' odds and, if some bookmaker offers higher odds, a bet is placed. This strategy is called positive expected value betting: betting on odds that are too high given the probability of the outcome, which generates profit in the long term. In this simulation, the best results for a minimum threshold of 5% came from the models built on the bookmakers' odds, with profits between 13% and 211%; followed by the historical data model, which profited between 8% and 90%; and finally the combined model, with profits between 11% and 77%.

This work thus produced several machine-learning-based systems capable of long-term profit from betting on Counter-Strike: Global Offensive.
Visual Co-occurrence Learning using Denoising Autoencoders
Modern recommendation systems leverage recent advances in deep neural networks to provide better recommendations. In addition to making accurate recommendations to users, we are interested in recommending items that are complementary to a set of other items. More specifically, given a user query containing items from different categories, we seek to recommend one or more items from our inventory based on latent representations of their visual appearance. For this purpose, a denoising autoencoder (DAE) is used. The capacity of DAEs to remove the noise from corrupted inputs by predicting their corresponding uncorrupted counterparts is investigated. We show that, used with the right corruption process, they can serve as regular prediction models. Furthermore, we measure experimentally two of their properties: their capacity to predict any potentially missing variable from their inputs, and their ability to predict multiple missing variables at the same time given a limited amount of information. Finally, we experiment with using DAEs to recommend fashion items that are jointly fashionable with a user query. Latent representations of the items contained in the user query are fed into a DAE to predict the latent representation of the ideal item to recommend; this ideal item is then matched to a real item from our inventory, which we recommend to the user.
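To illustrate the core mechanism only (not the paper's actual architecture or data), the following sketch trains a tiny one-hidden-layer denoising autoencoder in NumPy on synthetic correlated vectors, then uses it as a predictor by filling in a masked coordinate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "item embeddings": correlated 4-d vectors standing in for
# latent visual representations (the paper's real inputs are image features).
n, d, h = 2000, 4, 8
z = rng.normal(size=(n, 1))
X = z @ rng.normal(size=(1, d)) + 0.1 * rng.normal(size=(n, d))

# One-hidden-layer denoising autoencoder, trained by backprop to map
# masked (corrupted) inputs back to the clean vectors.
W1 = 0.1 * rng.normal(size=(d, h)); b1 = np.zeros(h)
W2 = 0.1 * rng.normal(size=(h, d)); b2 = np.zeros(d)
lr = 0.01
for _ in range(2000):
    keep = rng.random((n, d)) > 0.25        # masking noise: drop ~25% of entries
    X_in = X * keep
    H = np.tanh(X_in @ W1 + b1)
    X_hat = H @ W2 + b2
    err = X_hat - X                         # target is the uncorrupted input
    gW2 = H.T @ err / n; gb2 = err.mean(0)
    gH = err @ W2.T * (1 - H ** 2)
    gW1 = X_in.T @ gH / n; gb1 = gH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Use the DAE as a predictor: zero out coordinate 2 and let it fill the gap
# (an in-sample check, which is enough for a sketch).
X_test = X[:200]
X_masked = X_test.copy(); X_masked[:, 2] = 0.0
pred = np.tanh(X_masked @ W1 + b1) @ W2 + b2
mse_dae = ((pred[:, 2] - X_test[:, 2]) ** 2).mean()
mse_zero = (X_test[:, 2] ** 2).mean()
print(mse_dae < mse_zero)  # the denoiser beats the naive zero fill
```

The same trick scales to the paper's setting: feed in the known items' latent vectors with the missing item zeroed out, and read off the predicted latent vector for the item to recommend.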
Metric Optimization and Mainstream Bias Mitigation in Recommender Systems
The first part of this thesis focuses on maximizing the overall
recommendation accuracy. This accuracy is usually evaluated with some
user-oriented metric tailored to the recommendation scenario, but because
recommendation is usually treated as a machine learning problem, recommendation
models are trained to maximize some other, generic criterion that does not
necessarily align with the one ultimately captured by the user-oriented
evaluation metric. Recent research aims at bridging this gap between training
and evaluation via direct ranking optimization, but still assumes that the
metric used for evaluation should also be the metric used for training. We
challenge this assumption, mainly because some metrics are more informative
than others. Indeed, we show that models trained via the optimization of a loss
inspired by Rank-Biased Precision (RBP) tend to yield higher accuracy, even
when accuracy is measured with metrics other than RBP. However, the superiority
of this RBP-inspired loss stems from further benefiting users who are already
well-served, rather than helping those who are not.
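Rank-Biased Precision models a user who continues from rank i to rank i+1 with persistence probability p, giving RBP = (1-p) Σ_i r_i p^(i-1) over 1-based ranks. A minimal sketch (p = 0.8 is illustrative, and the thesis's loss is only inspired by RBP, not identical to it):

```python
def rank_biased_precision(relevance, p=0.8):
    """RBP = (1 - p) * sum over 1-based ranks i of r_i * p**(i - 1).
    The geometric discount makes early ranks dominate the score."""
    return (1 - p) * sum(r * p ** i for i, r in enumerate(relevance))

# A ranking that places its only relevant item first scores far higher
# than one that buries it at rank 4.
print(round(rank_biased_precision([1, 0, 0, 0], p=0.8), 3))  # → 0.2
print(round(rank_biased_precision([0, 0, 0, 1], p=0.8), 3))  # → 0.102
```

The steep top-rank discount is also why an RBP-style objective concentrates gains on users whose relevant items are already near the top.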
This observation inspires the second part of this thesis, where our focus
turns to helping non-mainstream users. These are users who are difficult to
recommend to either because there is not enough data to model them, or because
they have niche taste and thus few similar users to look at when recommending
in a collaborative way. These differences in mainstreamness introduce a bias
reflected in an accuracy gap between users or user groups, which we try to
narrow.
Comment: PhD Thesis defended on Nov 14, 202
On Transforming Reinforcement Learning by Transformer: The Development Trajectory
Transformer, originally devised for natural language processing, has also
achieved significant success in computer vision. Thanks to its highly
expressive power, researchers are investigating ways to deploy transformers in
reinforcement learning (RL), and transformer-based models have demonstrated
their potential in representative RL benchmarks. In this paper, we collect and
dissect recent advances on transforming RL by transformer (transformer-based RL
or TRL), in order to explore its development trajectory and future trend. We
group existing developments in two categories: architecture enhancement and
trajectory optimization, and examine the main applications of TRL in robotic
manipulation, text-based games, navigation and autonomous driving. For
architecture enhancement, these methods consider how to apply the powerful
transformer structure to RL problems under the traditional RL framework; they
model agents and environments much more precisely than earlier deep RL
methods, but are still limited by the inherent defects of traditional RL
algorithms, such as bootstrapping and the "deadly triad". For trajectory
optimization, these methods treat RL problems as sequence modeling and train a
joint state-action model over entire trajectories under the behavior cloning
framework; this makes them able to extract policies from static datasets and
to fully use the long-sequence modeling capability of the transformer. Given
these advancements, extensions
and challenges in TRL are reviewed and proposals about future direction are
discussed. We hope that this survey can provide a detailed introduction to TRL
and motivate future research in this rapidly developing field.
Comment: 26 page
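The "bootstrapping" defect mentioned above refers to updating a value estimate toward a target built from the model's own current estimates, as in TD(0). A minimal illustration on a three-state chain (the chain and constants are invented for this sketch):

```python
# TD(0) on a 3-state chain s0 -> s1 -> s2 (terminal); entering s2 pays reward 1.
gamma, alpha = 0.9, 0.1
V = [0.0, 0.0, 0.0]            # value estimates; V[2] is terminal, never updated
rewards = {1: 0.0, 2: 1.0}     # reward for entering each successor state

for _ in range(500):
    for s in (0, 1):
        s_next = s + 1
        bootstrap = V[s_next] if s_next != 2 else 0.0
        # Bootstrapping: the target uses the current estimate V[s_next],
        # not an observed return -- the source of bias the survey refers to.
        target = rewards[s_next] + gamma * bootstrap
        V[s] += alpha * (target - V[s])

print([round(v, 2) for v in V])  # → [0.9, 1.0, 0.0]
```

Trajectory-optimization methods sidestep this by regressing on full observed trajectories instead of bootstrapped targets, at the cost of being tied to the behavior data.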
Constraint based approaches to interpretable and semi-supervised machine learning
Interpretability and explainability of machine learning algorithms are becoming increasingly important as machine learning (ML) systems get widely applied to domains like clinical healthcare, social media, and governance. A related major challenge in deploying ML systems pertains to reliable learning when expert annotation is severely limited. This dissertation prescribes a common framework to address these challenges, based on the use of constraints that can make an ML model more interpretable, lead to novel methods for explaining ML models, or help to learn reliably with limited supervision.
In particular, we focus on the class of latent variable models and develop a general learning framework by constraining realizations of latent variables and/or model parameters. We propose specific constraints that can be used to develop identifiable latent variable models, that in turn learn interpretable outcomes. The proposed framework is first used in Non–negative Matrix Factorization and Probabilistic Graphical Models. For both models, algorithms are proposed to incorporate such constraints with seamless and tractable augmentation of the associated learning and inference procedures. The utility of the proposed methods is demonstrated for our working application domain – identifiable phenotyping using Electronic Health Records (EHRs). Evaluation by domain experts reveals that the proposed models are indeed more clinically relevant (and hence more interpretable) than existing counterparts. The work also demonstrates that while there may be inherent trade–offs between constraining models to encourage interpretability, the quantitative performance of downstream tasks remains competitive.
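As a toy illustration of the kind of constraint involved (not the dissertation's actual algorithm), classical Lee-Seung multiplicative updates keep NMF factors non-negative by construction, which is what lets each factor be read additively as a "phenotype"; the matrix below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "patient x clinical feature" matrix built from 2 latent phenotypes.
W_true = rng.random((30, 2))
H_true = rng.random((2, 8))
V = W_true @ H_true

# NMF via Lee-Seung multiplicative updates: starting from positive factors,
# the ratio-based updates can never produce a negative entry.
k = 2
W = rng.random((30, k)) + 0.1
H = rng.random((k, 8)) + 0.1
eps = 1e-9
for _ in range(1000):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(W.min() >= 0 and H.min() >= 0)  # → True: the constraint holds by construction
print(rel_err < 0.1)                  # the rank-2 structure is recovered
```

Identifiability constraints of the kind the dissertation proposes go further than plain non-negativity, but the mechanism is the same: restrict the feasible factors so the recovered parts have a stable, domain-meaningful reading.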
We then focus on constraint based mechanisms to explain decisions or outcomes of supervised black-box models. We propose an explanation model based on generating examples where the nature of the examples is constrained i.e. they have to be sampled from the underlying data domain. To do so, we train a generative model to characterize the data manifold in a high dimensional ambient space. Constrained sampling then allows us to generate naturalistic examples that lie along the data manifold. We propose ways to summarize model behavior using such constrained examples.
In the last part of the contributions, we argue that heterogeneity of data sources is useful in situations where very little to no supervision is available. This thesis leverages such heterogeneity (via constraints) for two critical but widely different machine learning algorithms. In each case, a novel algorithm in the sub-class of co-regularization is developed to combine information from heterogeneous sources. Co-regularization is a framework of constraining latent variables and/or latent distributions in order to leverage heterogeneity. The proposed algorithms are utilized for clustering, where the intent is to generate a partition or grouping of observed samples, and for Learning to Rank algorithms, used to rank a set of observed samples in order of preference with respect to a specific search query. The proposed methods are evaluated on clustering web documents, social network users, and information retrieval applications for ranking search queries.
Electrical and Computer Engineerin
Learning to Dream, Dreaming to Learn
The importance of sleep for healthy brain function is widely acknowledged. However, it remains mysterious how the sleeping brain, disconnected from the outside world and plunged into the fantastic experiences of dreams, is actively learning. A main feature of dreams is the generation of new, realistic sensory experiences in the absence of external input, from the combination of diverse memory elements. How do cortical networks host the generation of these sensory experiences during sleep? What function could these generated experiences serve?
In this thesis, we attempt to answer these questions using an original, computational approach inspired by modern artificial intelligence. In light of existing cognitive theories and experimental data, we suggest that cortical networks implement a generative model of the sensorium that is systematically optimized during wakefulness and sleep states. By performing network simulations on datasets of natural images, our results not only propose potential mechanisms for dream generation during sleep states, but also suggest that dreaming is an essential feature for learning semantic representations throughout mammalian development.
Neural Methods for Effective, Efficient, and Exposure-Aware Information Retrieval
Neural networks with deep architectures have demonstrated significant
performance improvements in computer vision, speech recognition, and natural
language processing. The challenges in information retrieval (IR), however, are
different from these other application areas. A common form of IR involves
ranking of documents--or short passages--in response to keyword-based queries.
Effective IR systems must deal with the query-document vocabulary mismatch
problem by modeling relationships between different query and document terms
and how they indicate relevance. Models should also consider lexical matches
when the query contains rare terms--such as a person's name or a product model
number--not seen during training, to avoid retrieving semantically related
but irrelevant results. In many real-life IR tasks, retrieval involves
extremely large collections--such as the document index of a commercial Web
search engine--containing billions of documents. Efficient IR methods should
take advantage of specialized IR data structures, such as the inverted index, to
efficiently retrieve from large collections. Given an information need, the IR
system also mediates how much exposure an information artifact receives by
deciding whether it should be displayed, and where it should be positioned,
among other results. Exposure-aware IR systems may optimize for additional
objectives, besides relevance, such as parity of exposure for retrieved items
and content publishers. In this thesis, we present novel neural architectures
and methods motivated by the specific needs and challenges of IR tasks.
Comment: PhD thesis, Univ College London (2020)
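The inverted index mentioned above maps each term to the postings list of documents containing it, so a query touches only the lists for its own terms rather than the whole collection. A minimal sketch with invented toy documents:

```python
from collections import defaultdict

# Toy collection: document id -> text.
docs = {
    0: "neural ranking of documents",
    1: "keyword queries over large document collections",
    2: "speech recognition with neural networks",
}

# Inverted index: term -> sorted postings list of document ids.
index = defaultdict(list)
for doc_id, text in docs.items():
    for term in sorted(set(text.split())):
        index[term].append(doc_id)

def search(query):
    """Conjunctive (AND) retrieval by intersecting postings lists."""
    postings = [set(index[t]) for t in query.split()]
    return sorted(set.intersection(*postings)) if postings else []

print(search("neural"))            # → [0, 2]
print(search("neural documents"))  # → [0]
```

Production indexes add compression, skip pointers, and ranked scoring on top of this structure, but the term-to-postings mapping is the part that makes billion-document retrieval tractable.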