Accuracy and interpretability trade-offs in machine learning applied to safer gambling
Responsible gambling is an area of research and industry practice that seeks to understand the pathways to harm from gambling and to implement programmes that reduce or prevent the harm gambling might cause. A growing body of research has used gambling behavioural data to model and predict harmful gambling, and the industry is showing increasing interest in technologies that can help gambling operators better predict harm and prevent it through appropriate interventions. However, industry surveys and feedback clearly indicate that, to enable wider adoption of such data-driven methods, industry and policy makers require a greater understanding of how machine learning methods make these predictions. In this paper, we use the TREPAN algorithm to extract decision trees from neural networks and random forests. We present the first comparative evaluation of predictive performance and tree properties for the extracted trees, which is also the first comparative evaluation of knowledge extraction for safer gambling. Results indicate that TREPAN extracts better-performing trees than direct learning of decision trees from the data. Overall, trees extracted with TREPAN from different models offer a good compromise between prediction accuracy and interpretability. TREPAN can produce decision trees with extended test rules of different forms, so interpretability depends on multiple factors. We present detailed results and a discussion of the trade-offs with regard to performance, interpretability, and use in the gambling industry.
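The core idea behind tree extraction can be illustrated with a minimal sketch: train an interpretable decision tree to mimic a black-box model's own predictions, then measure how faithfully it reproduces them. This is a simplified surrogate-tree version using scikit-learn and synthetic data, not the actual TREPAN algorithm (which additionally generates new samples on demand and uses m-of-n split tests); all names and parameters below are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for gambling behavioural data (features + harm label).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# 1. Train the black-box model on the true labels.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# 2. Train a shallow, interpretable tree on the *black box's* predictions,
#    not the original labels -- the tree approximates the model, not the data.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how closely the surrogate tree reproduces the black-box decisions.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"fidelity to black box: {fidelity:.2f}")
```

The fidelity score is the natural evaluation metric for extracted trees: a high value means the interpretable tree is a trustworthy proxy for the opaque model it explains.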
Online problem gambling: a comparison of casino players and sports bettors via predictive modeling using behavioral tracking data
In this study, the differences in behavior between two groups of online gamblers were investigated. The first group comprised individuals who played casino games, and the second group comprised those who bet on sports events. The focal point of the study was problem gambling, and the objective was to identify and quantify both common and distinct traits characteristic of casino and sports problem gamblers. To this end, a set of gamblers from the gaming operator LeoVegas was studied. Each gambler was ascribed two binary variables: one separating casino players from sports bettors, and one indicating whether there was an exclusion related to problem gambling. For each of the four combinations of the two variables, 2500 gamblers were randomly selected for a thorough comparison, resulting in a total of 10,000 participants. The comparison was performed by constructing two predictive models, estimating risk scores using these models, and scrutinizing the risk scores by means of a technique originating from collaborative game theory. The number of cash wagers per active day contributed the most to problem-gambling-related exclusion in the case of sports betting, whereas the volume of money spent contributed the most to this exclusion in the case of casino players. The contribution of the volume of losses per active day was noticeable for both casino players and sports bettors. For casino players, gambling via desktop computers contributed positively to problem-gambling-related exclusion; for sports bettors, it was more concerning when the individual used mobile devices. The number of approved deposits per active day contributed to problem-gambling-related exclusion to a larger extent for sports bettors than for casino players. The main conclusion is that the studied explanatory variables contribute differently to problem-gambling-related exclusion among casino players and sports bettors.
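The game-theoretic attribution technique the abstract alludes to is the Shapley value, which assigns each feature its average marginal contribution to a risk score across all feature orderings. A minimal, self-contained sketch of the exact computation is below; the feature names merely echo the abstract, and the additive toy scoring function is an illustrative stand-in for the study's real predictive models.

```python
from itertools import permutations

# Hypothetical behavioural features, loosely echoing the abstract.
FEATURES = ["wagers_per_day", "money_spent", "losses_per_day"]

def risk_score(present):
    """Toy additive risk model: score depends on which features are 'present'.
    The weights are invented for illustration only."""
    weights = {"wagers_per_day": 0.3, "money_spent": 0.5, "losses_per_day": 0.2}
    return sum(weights[f] for f in present)

def shapley_values(features, value_fn):
    """Exact Shapley values: average each feature's marginal contribution
    over every ordering in which features can be added."""
    contrib = {f: 0.0 for f in features}
    orderings = list(permutations(features))
    for order in orderings:
        present = []
        for f in order:
            before = value_fn(present)
            present.append(f)
            contrib[f] += value_fn(present) - before
    return {f: c / len(orderings) for f, c in contrib.items()}

phi = shapley_values(FEATURES, risk_score)
print(phi)
```

For an additive value function like this toy model, each feature's Shapley value equals its weight, and the values sum to the full-coalition score (the "efficiency" property) -- the same bookkeeping that lets the study rank which behaviours contribute most to exclusion risk. Exact computation is exponential in the number of features, which is why practical tools use sampling approximations.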
Transparency in Responsible Gambling: A Systematic Review. EROGamb 2 Systematic Review
Persuasive, immersive and attention-grabbing elements of technology and personalised marketing content are widely embedded in interactive online marketing to engage and persuade users to engage in more online interaction and transactions. This has the potential to pose a risk of excessive and obsessive use of technology, leading to behavioural addiction. Similarly, Internet gambling enables 24/7 accessibility, personalised and persuasive marketing elements, immersive and rewarding betting experiences, enhanced privacy that facilitates perceived escape from the real world, and ease of transactions, all of which may create an environment where individuals are more likely to chase losses and lose control. Evidence suggests Internet gambling is associated with a higher risk of problematic gambling and gambling-related harm than land-based gambling (Effertz et al., 2018; Kairouz et al., 2012; Papineau et al., 2018; Wu et al., 2014). Gambling operators and governments have developed and implemented programs and policies (e.g., age restriction policies, deposit limit tools, self-exclusion programs) designed to promote Responsible Gambling (RG) and minimise gambling-related harm.
Responsible and safer gambling is naturally associated with transparency. Transparency, as defined in this review, involves providing a customer with explicit information about the chance of winning, as well as other types of information shared by gambling operators. At the heart of RG efforts is informed decision making. The principle is to help individuals make informed choices by providing them with transparency in games and promotional materials. However, there is a distinct lack of consensus on what transparency should involve in RG practices, and no prior research has aimed to review transparency in RG practices systematically. Informed by our narrative review of transparency in persuasive technology, immersive technology and online marketing (Wang et al., 2021), all of which are closely associated with the online gambling world, we advocate that RG-driven transparency involves multiple aspects such as user autonomy, system explainability and transparency in advertising. We consider transparency and explainability (or accountability) an indivisible whole that promotes RG by facilitating the communication and understanding of information that individuals need to make informed choices.
In the present research, we conducted a systematic review of the literature in the RG domain, using narrative synthesis to examine evidence relating to transparency in current RG practices in the gambling industry. This review did not intend to examine the effectiveness of specific RG tools or strategies, or to provide prescriptive legislative and corporate guidelines; instead, we focused on the fundamental aspects of transparency that should be considered and practised by industry for the benefit of individuals who gamble. In this review, we found that transparency issues have rarely been explored. Using sources from database searching, handsearching and grey literature, we included all types of articles (i.e., qualitative studies, quantitative studies, literature reviews, and position articles) in this review. Most empirical studies focused on the effectiveness of a specific RG tool or intervention; most review or position articles did not directly explore transparency issues or covered only specific aspects of transparency; and no systematic or non-systematic reviews of transparency in RG practices were found.
Through this review, we conceptualised RG-driven transparency by categorising it into seven themes involved in or implied by the existing literature, for a better understanding of what constitutes RG-driven transparency in games and promotional materials. These themes are: Transparency of Information and Education for Safer Gambling (including fairness of games and the gambler's fallacy, potential risks and negative consequences, safer gambling cognition and behaviour, and the boundary between gaming and gambling); Transparency of RG Tools (including availability and accessibility of RG tools, effectiveness of RG tools, and personalisation of RG strategies); Transparency of Data-driven Approaches and Persuasive Technologies (including purposes and benefits of using personal data, data usage and privacy protection, individual autonomy, algorithmic transparency, and trade-off determination); Transparency in Advertising; Transparency of Corporate Social Responsibility and Individual Responsibility (including division of responsibility, gambling policy and staff training, and CSR reporting and assessment); Transparency of Research Evidence and Funding Sources; and Design Considerations for Improving Transparency. We provide stakeholders (including gambling operators, regulators, researchers and individuals who gamble) with a checklist of recommendations for best practices in RG-driven transparency based on this review.
In practice, all stakeholders should collaborate to help individuals make informed choices and to achieve the objectives of responsible and safer gambling, as improving transparency requires effort from multiple parties. For example, using online gambling behaviour data to promote safer gambling and minimise gambling-related harm is highly promising. To provide interpretable information about models and algorithms to the individuals who will be affected by or benefit from them, the gambling industry first needs transparency and explainability of these models and algorithms from professionals and researchers. Professionals from multidisciplinary backgrounds such as psychology, computer science and HCI should collaborate to design online RG information, RG tools and interventions in a way that can facilitate long-term, sustainable, positive behaviour change. Persuasive technologies intended to support users' positive, healthy behaviour change are usually designed and implemented in a short time period; however, both iterative design methods and longitudinal studies are necessary to ensure that such technologies and their intervention strategies are supported by psychological theories and empirical studies, deliver actual benefits, and minimise risks such as privacy issues and behavioural addiction. Future research is required to empirically validate the checklist of recommendations for improving RG-driven transparency and to address the trade-off issues related to transparency (e.g., how to balance transparency with user experience requirements or the good intent of persuasive technologies and RG interventions). Furthermore, more practical and detailed guidelines for gambling operators on how to embed RG-driven transparency into games and promotional materials are needed, with effort from multiple stakeholders, in future work.
Unexplainability and Incomprehensibility of Artificial Intelligence
Explainability and comprehensibility of AI are important requirements for intelligent systems deployed in real-world domains. Users want, and frequently need, to understand how decisions impacting them are made. Similarly, it is important to understand how an intelligent system functions for safety and security reasons. In this paper, we describe two complementary impossibility results (Unexplainability and Incomprehensibility), essentially showing that advanced AIs would not be able to accurately explain some of their decisions, and that for the decisions they could explain, people would not understand some of those explanations.
Application of machine learning algorithm in the sheet metal industry: an exploratory case study
This study solved a practical problem at a case company in the sheet metal industry using machine learning and deep learning algorithms. The problem concerned detecting the minimum gaps between components produced by the punching operation on a metal sheet. Due to the narrow gaps between the components, an automated shear machine could not properly grip the remaining sheet skeleton after the punching operation. As a result, some of the scrap sheet was left behind on the worktable, requiring a human operator to intervene. This caused an extra trigger on the production line that resulted in a break in production. To solve this critical problem, the relevant images of the components and the gaps between them were analyzed using machine learning and deep learning techniques. The outcome of this study contributed to eliminating the production bottleneck by optimizing the gaps between the punched components. This optimization facilitated the easy and safe movement of the gripper machine and contributed to minimizing sheet waste.
Transparency in persuasive technology, immersive technology, and online marketing: Facilitating users’ informed decision making and practical implications
In the current age of emerging technologies and big data, transparency has become an important issue for technology users and online consumers. However, there is a lack of consensus on what constitutes transparency across domains of research, not to mention transparency guidelines for designers and marketers. In this review, we explored the question of what transparency means in current research and practice by reviewing the literature in three domains: persuasive technology, immersive technology and online marketing. The literature reviewed, including both empirical research and position articles, covered multidisciplinary areas including computer science and information technology, psychology, healthcare, human-computer interaction, business and management, law and public health. In this paper, we summarized our findings through a framework of transparency and provided insights into the different aspects of transparency, categorized into ten themes (i.e., Organizational Transparency, Information Transparency, Transparency of System Design, Data Privacy and Informed Consent, Transparency of Online Advertising, Potential Risks, User Autonomy, Informed Decision Making, Information Visualization, and Personalization and User-centered Design) along three dimensions (i.e., Types of Transparency, Impact on User, and Potential Solutions). Addressing these aspects of transparency will facilitate users' autonomy and contribute to their informed decision making.
Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence
Recent years have seen tremendous growth in Artificial Intelligence (AI)-based methodological development across a broad range of domains. In this rapidly evolving field, a large number of methods are being reported using machine learning (ML) and deep learning (DL) models. The majority of these models are inherently complex and lack explanations of their decision-making process, causing them to be termed 'black-box' models. One of the major bottlenecks to adopting such models in mission-critical application domains, such as banking, e-commerce, healthcare, and public services and safety, is the difficulty in interpreting them. Due to the rapid proliferation of these AI models, explaining their learning and decision-making processes is becoming harder, which demands transparency and easy predictability. Moreover, finding flaws in these black-box models, in order to reduce their false negative and false positive outcomes, remains difficult and inefficient. Aiming to collate the current state of the art in interpreting black-box models, this study provides a comprehensive analysis of explainable AI (XAI) models. The development of XAI is reviewed meticulously through careful selection and analysis of the current state of the art in XAI research. The paper also provides a comprehensive and in-depth evaluation of XAI frameworks and their efficacy, to serve as a starting point in XAI for applied and theoretical researchers. Towards the end, it highlights emerging and critical issues pertaining to XAI research, showcasing major, model-specific trends for better explanation, enhanced transparency, and improved prediction accuracy.