5 research outputs found

    Coordination in the presence of asset markets

    We explore the relationship between outcomes in a coordination game and a pre-play asset market where asset values are determined by outcomes in the subsequent coordination game. Across two experiments, we vary the payoffs from the market relative to the game, the degree of interdependence in the game, and whether traders' asset payoffs depend on outcomes in their own or another game. Markets lead to significantly lower efficiency across treatments, even when they produce no distortion of incentives in the game. Market prices forecast game outcomes. Our experiments shed light on how financial markets may influence affiliated economic outcomes.

    AI-driven large language models: strengths, weaknesses, opportunities and threats

    No full text
    The teaching and learning group within the School of Computer Science and Mathematics would like to lead a discussion around a topic that will likely impact HE significantly: the use of AI-driven Large Language Models (LLMs) such as those powering OpenAI’s GPT-3-driven ChatGPT, Meta’s LLaMA and DeepMind’s Gopher/Chinchilla. Several technology companies aim to embed these LLM tools in existing products such as Microsoft’s Bing search engine, and many in the IT industry anticipate them becoming key productivity assistants across the many sectors where the written word is sought after. However, they also present significant challenges to academia when used in assessments such as online exams, in-class tests and coursework assignments: their ability to generate text based on information trained on data scraped from the web, and the power and accuracy of that information, may be used to circumvent a student’s need for good academic study practices and demonstration of knowledge. This session aims to present an overview of LLMs; their current abilities to generate knowledge representations across a variety of disciplines (using computer science and mathematics as examples); key weaknesses of LLMs such as AI hallucinations; how their contextual abilities can be exploited as a meaningful tool, comparing them with similar productivity tools such as code generators, spell checkers, and online search and reference tools; and our ability to detect their usage. The session would also like to open some cross-disciplinary debate on their effect on the professionalism, character, and employability of our graduates, where inappropriate use of these tools may harm the reputation and standing of LJMU and its graduates, and on approaches in our curriculums to best educate our students on both their capabilities and their improper uses.
Ultimately, these tools are going to increase in popularity and usage in the coming years – should we fear them or embrace them?