    Impact of the introduction of machine gaming in Queensland on minor and major bingo

    Material for this paper comes from a report commissioned by the Department of Family Services, Aboriginal and Islander Affairs. The report is the result of a multi-strategy research project designed to assess the impact of gaming machines on the fundraising capacity of charitable and community organisations in Queensland. The study was conducted during the 1993 calendar year. The first Queensland gaming machine was commissioned on 11 February 1992 at 11.30 am in Brisbane at the Kedron Wavell Services Club. Eighteen more clubs followed that week. Six months later there were gaming machines in 335 clubs and 250 hotels and taverns, representing a state-wide total of 7,974 machines in operation. The 10,000th gaming machine was commissioned on 18 March 1993, and the 1,000th operational gaming machine site was opened on 18 February 1994.

    Professional Judgment in an Era of Artificial Intelligence and Machine Learning

    Though artificial intelligence (AI) in healthcare and education now accomplishes diverse tasks, there are two features that tend to unite the information processing behind efforts to substitute it for professionals in these fields: reductionism and functionalism. True believers in substitutive automation tend to model work in human services by reducing the professional role to a set of behaviors initiated by some stimulus, which are intended to accomplish some predetermined goal, or maximize some measure of well-being. However, true professional judgment hinges on a way of knowing the world that is at odds with the epistemology of substitutive automation. Instead of reductionism, an encompassing holism is a hallmark of professional practice—an ability to integrate facts and values, the demands of the particular case and prerogatives of society, and the delicate balance between mission and margin. Any presently plausible vision of substituting AI for education and healthcare professionals would necessitate a corrosive reductionism. The only way these sectors can progress is to maintain, at their core, autonomous professionals capable of carefully intermediating between technology and the patients it would help treat, or the students it would help learn.

    The EVF Model: A Novel Framework for Understanding Gambling and, by Extension, Poker

    There are several senses in which the term gambling is used. All have liabilities, problems that have muddied the waters in scientific research, generated conflicting legal decisions, compromised debates over ethical and moral issues, and have led to uneven legislation. Here, a novel framework for the term is offered, based on two continuous variables: (a) the Expected Value (EV) of any arbitrary game and (b) the inherent Flexibility (F) of that game. This EVF model produces a classification system for all the enterprises that can or have been called gambling. It is one that allows for more measured decisions to be made and provides a more coherent platform on which to deliberate the many significant issues that have been raised over the years. It also permits a sensible answer to the question of the nature of games like the stock market, opening a small business, and especially, poker.
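    The abstract's two-axis idea can be sketched in code. The `Game` type, the numeric scales, and the cut-points below are illustrative assumptions, not from the paper — the abstract only specifies the two continuous variables EV and F.

```python
from dataclasses import dataclass

@dataclass
class Game:
    name: str
    ev: float           # expected value per unit wagered; ev < 0 favors the house (assumed scale)
    flexibility: float  # 0 = pure chance, 1 = outcome strongly skill-dependent (assumed scale)

def classify(game: Game) -> str:
    """Place a game in one of four illustrative EVF quadrants (cut-points are assumptions)."""
    ev_label = "non-negative-EV" if game.ev >= 0 else "negative-EV"
    f_label = "flexible" if game.flexibility >= 0.5 else "inflexible"
    return f"{ev_label}, {f_label}"

print(classify(Game("roulette", ev=-0.027, flexibility=0.0)))  # negative-EV, inflexible
print(classify(Game("poker", ev=0.0, flexibility=0.9)))        # non-negative-EV, flexible
```

    Because both axes are continuous, a game such as poker or a small business lands at a point in the plane rather than in a hard category, which is what lets the model compare such different enterprises on a common footing.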

    For Better or Worse: The Impact of Counterfactual Explanations' Directionality on User Behavior in xAI

    Counterfactual explanations (CFEs) are a popular approach in explainable artificial intelligence (xAI), highlighting changes to input data necessary for altering a model's output. A CFE can either describe a scenario that is better than the factual state (upward CFE), or a scenario that is worse than the factual state (downward CFE). However, potential benefits and drawbacks of the directionality of CFEs for user behavior in xAI remain unclear. The current user study (N=161) compares the impact of CFE directionality on behavior and experience of participants tasked to extract new knowledge from an automated system based on model predictions and CFEs. Results suggest that upward CFEs provide a significant performance advantage over other forms of counterfactual feedback. Moreover, the study highlights potential benefits of mixed CFEs improving user performance compared to downward CFEs or no explanations. In line with the performance results, users' explicit knowledge of the system is statistically higher after receiving upward CFEs compared to downward comparisons. These findings imply that the alignment between explanation and task at hand, the so-called regulatory fit, may play a crucial role in determining the effectiveness of model explanations, informing future research directions in xAI. To ensure reproducible research, the entire code, underlying models and user data of this study are openly available: https://github.com/ukuhl/DirectionalAlienZoo
    Comment: 22 pages, 3 figures. This work has been accepted for presentation at the 1st World Conference on eXplainable Artificial Intelligence (xAI 2023), July 26-28, 2023, Lisbon, Portugal.
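    The upward/downward distinction can be illustrated with a toy model; the threshold classifier, the integer score scale, and the step-wise search below are assumptions for illustration only, not the AlienZoo setup used in the study.

```python
def predict(score: int) -> str:
    # Toy model: scores of 50 or more are labelled "good" (assumed threshold).
    return "good" if score >= 50 else "bad"

def counterfactual(score: int, direction: str) -> int:
    """Smallest score change in the given direction that flips the model's label."""
    target = "good" if direction == "upward" else "bad"
    cf = score
    while predict(cf) != target:
        cf += 1 if direction == "upward" else -1
    return cf

# Upward CFE: for an input labelled "bad", the nearest better scenario.
print(counterfactual(30, "upward"))    # 50
# Downward CFE: for an input labelled "good", the nearest worse scenario.
print(counterfactual(70, "downward"))  # 49
```

    Both counterfactuals describe a minimal change to the input that alters the model's output; the study's question is which direction of change helps users learn the system better.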