
    Large language models and artificial intelligence, the end of (language) learning as we know it—or not quite?

    Preprint
    The rapid advancements in large language models (LLMs) and artificial intelligence (AI) have been a subject of significant recent interest and debate. This paper explores the impact of these developments on language learning. I discuss the technology underlying AI-based tools and the natural language processing (NLP) tasks they were originally designed for. This helps to identify opportunities and limitations regarding their use in the context of language learning. I then examine how such technology can be used efficiently and effectively in language teaching and learning. The availability of such tools will require language teaching to focus on the non-mechanical aspects of writing. Similarly, automatically produced personalized teaching and learning materials will not replace human teachers, but will give space for and support human–human interaction.

    Predictive Text Entry using Syntax and Semantics

    Most cellular telephones use numeric keypads, where texting is supported by dictionaries and frequency models. Given a key sequence, the entry system recognizes the matching words and proposes a rank-ordered list of candidates. The quality of this ranking is instrumental to effective entry. This paper describes a new method to enhance entry that combines syntax and language models. We first investigate components to improve the ranking step: language models and semantic relatedness. We then introduce a novel syntactic model to capture the word context, optimize ranking, and reduce the number of keystrokes per character (KSPC) needed to write a text. We finally combine this model with the other components and discuss the results. We show that our syntax-based model reaches an error reduction in KSPC of 12.4% on a Swedish corpus over a baseline using word frequencies. We also show that bigrams are superior to all the other models. However, bigrams have a memory footprint that is unfit for most devices. Nonetheless, bigrams can be further improved by the addition of syntactic models, with an error reduction that reaches 29.4%.
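
    The abstract describes ranking candidate words that share a numeric key sequence, first by word frequency (the baseline) and then with richer language and syntactic models, and it measures the benefit in keystrokes per character (KSPC). As a rough illustration of the frequency baseline only, here is a minimal Python sketch; the keypad layout is the standard one, but the lexicon, the frequency counts, and the KSPC accounting (one extra "next" press per rank position below the top candidate) are simplified assumptions for illustration, not the paper's actual models or data.

        # Minimal sketch of predictive entry on a numeric keypad with a
        # word-frequency (unigram) baseline. Lexicon and counts are made up.
        KEYPAD = {
            '2': "abc", '3': "def", '4': "ghi", '5': "jkl",
            '6': "mno", '7': "pqrs", '8': "tuv", '9': "wxyz",
        }
        LETTER_TO_KEY = {ch: key for key, letters in KEYPAD.items() for ch in letters}

        # Toy lexicon with hypothetical frequency counts.
        LEXICON = {"good": 120, "home": 95, "gone": 60, "hood": 15}

        def key_sequence(word):
            """Digits a user presses to enter a word on the keypad."""
            return "".join(LETTER_TO_KEY[ch] for ch in word.lower())

        def candidates(keys):
            """Words matching the key sequence, ranked by descending frequency."""
            matches = [w for w in LEXICON if key_sequence(w) == keys]
            return sorted(matches, key=lambda w: LEXICON[w], reverse=True)

        def kspc(word):
            """Keystrokes per character: digit presses plus one 'next' press
            per rank position below the top candidate, divided by word length."""
            keys = key_sequence(word)
            rank = candidates(keys).index(word)  # 0 for the top-ranked word
            return (len(keys) + rank) / len(word)

        print(candidates("4663"))  # ['good', 'home', 'gone', 'hood']
        print(kspc("good"))        # 1.0 (top-ranked, no extra presses)
        print(kspc("gone"))        # 1.5 (two 'next' presses to reach rank 2)

    Stronger language models (e.g. bigrams) or the paper's syntactic model would replace the frequency-only ranking inside candidates(), which is where the reported KSPC reductions come from.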