
    From Texts to Prerequisites. Identifying and Annotating Propaedeutic Relations in Educational Textual Resources

    Get PDF
    openPrerequisite Relations (PRs) are dependency relations established between two distinct concepts expressing which piece(s) of information a student has to learn first in order to understand a certain target concept. Such relations are one of the most fundamental in Education, playing a crucial role not only for what concerns new knowledge acquisition, but also in the novel applications of Artificial Intelligence to distant and e-learning. Indeed, resources annotated with such information could be used to develop automatic systems able to acquire and organize the knowledge embodied in educational resources, possibly fostering educational applications personalized, e.g., on students' needs and prior knowledge. The present thesis discusses the issues and challenges of identifying PRs in educational textual materials with the purpose of building a shared understanding of the relation among the research community. To this aim, we present a methodology for dealing with prerequisite relations as established in educational textual resources which aims at providing a systematic approach for uncovering PRs in textual materials, both when manually annotating and automatically extracting the PRs. The fundamental principles of our methodology guided the development of a novel framework for PR identification which comprises three components, each tackling a different task: (i) an annotation protocol (PREAP), reporting the set of guidelines and recommendations for building PR-annotated resources; (ii) an annotation tool (PRET), supporting the creation of manually annotated datasets reflecting the principles of PREAP; (iii) an automatic PR learning method based on machine learning (PREL). 
The main novelty of our methodology and framework lies in the fact that we propose to uncover PRs from textual resources relying solely on the content of the instructional material: unlike other works, rather than creating de-contextualised PRs, we acknowledge the presence of a PR between two concepts only if it emerges from the way they are presented in the text. By doing so, we anchor relations to the text while modelling the knowledge structure entailed in the resource. As an original contribution of this work, we explore whether the linguistic complexity of the text influences the task of manual identification of PRs. To this aim, we investigate the interplay between text and content in educational texts through a crowd-sourcing experiment on concept sequencing. Our methodology values the content of educational materials, as it incorporates the evidence acquired from this investigation, which suggests that PR recognition is highly influenced by the way in which concepts are introduced in the resource and by the complexity of the texts. The thesis reports a case study dealing with every component of the PR framework, which produced a novel manually-labelled PR-annotated dataset.
    XXXIII CICLO - DIGITAL HUMANITIES. TECNOLOGIE DIGITALI, ARTI, LINGUE, CULTURE E COMUNICAZIONE - Lingue, culture e tecnologie digitali. Alzetta, Chiar
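The idea of anchoring a prerequisite relation to the order in which a text introduces its concepts can be illustrated with a small sketch. This is a hypothetical baseline written for this summary, not the PREL method described in the thesis; the function names and the toy text are invented for illustration.

```python
# Hypothetical sketch (not the thesis's PREL method): a text-anchored
# baseline that proposes a prerequisite direction between two concepts
# from the order in which the text first introduces them.

def first_mention(text: str, concept: str) -> int:
    """Index of the first occurrence of `concept` in `text` (-1 if absent)."""
    return text.lower().find(concept.lower())

def prerequisite_candidate(text: str, a: str, b: str):
    """Return (prerequisite, target) if both concepts occur in the text,
    assuming the concept introduced first is the prerequisite candidate."""
    ia, ib = first_mention(text, a), first_mention(text, b)
    if ia < 0 or ib < 0:
        return None  # no text-anchored evidence for a relation
    return (a, b) if ia < ib else (b, a)

textbook = ("A variable stores a value. A function groups statements "
            "and may read variables passed as arguments.")
print(prerequisite_candidate(textbook, "function", "variable"))
# → ('variable', 'function')
```

The point of the sketch is the anchoring: a pair of concepts yields no relation at all unless both actually appear in the resource, mirroring the thesis's refusal to posit de-contextualised PRs.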

    Program Verification in the presence of complex numbers, functions with branch cuts etc

    Get PDF
    In considering the reliability of numerical programs, it is normal to "limit our study to the semantics dealing with numerical precision" (Martel, 2005). On the other hand, there is a great deal of work on the reliability of programs that essentially ignores the numerics. The thesis of this paper is that there is a class of problems that fall between these two, which could be described as "does the low-level arithmetic implement the high-level mathematics". Many of these problems arise because mathematics, particularly the mathematics of the complex numbers, is more difficult than expected: for example the complex function log is not continuous, writing down a program to compute an inverse function is more complicated than just solving an equation, and many algebraic simplification rules are not universally valid. The good news is that these problems are theoretically capable of being solved, and are practically close to being solved, but not yet solved, in several real-world examples. However, there is still a long way to go before implementations match the theoretical possibilities

    DeepEval: An Integrated Framework for the Evaluation of Student Responses in Dialogue Based Intelligent Tutoring Systems

    Get PDF
    The automatic assessment of student answers is one of the critical components of an Intelligent Tutoring System (ITS) because accurate assessment of student input is needed in order to provide effective feedback that leads to learning. But this is a very challenging task because it requires natural language understanding capabilities. The process requires various components, concepts identification, co-reference resolution, ellipsis handling etc. As part of this thesis, we thoroughly analyzed a set of student responses obtained from an experiment with the intelligent tutoring system DeepTutor in which college students interacted with the tutor to solve conceptual physics problems, designed an automatic answer assessment framework (DeepEval), and evaluated the framework after implementing several important components. To evaluate our system, we annotated 618 responses from 41 students for correctness. Our system performs better as compared to the typical similarity calculation method. We also discuss various issues in automatic answer evaluation

    INVALSI data: assessments on teaching and methodologies

    Get PDF
    The school system has always aimed to achieve quality teaching, which is able, on the one hand, to give adequate responses to the expectations of all the stakeholders and, on the other, to introduce tools, actions, and checks through which the training offer can be constantly improved. This process is undoubtedly linked to scientific research. Researchers and Academics start from the data available to them or collect new ones, to discover and/or interpret facts and to find answers and new cues of reflection. A favorable environment for this work was the Seminar “INVALSI data: a research and educational teaching tool”, in its fourth edition in November 2019. The volume consists of six chapters, which are arise within the aforementioned Seminar context and, while dealing with heterogeneous topics, offer important examples of research both on teaching and on the methodologies applied to it. As a Statistical Service, which for years has taken care of the collection and dissemination of data, we hope that in this, as in the other volumes of the series, the reader will find confirmation of the importance that data play, both in scientific research and in practice in classroom

    SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models

    Full text link
    Recent advances in large language models (LLMs) have demonstrated notable progress on many mathematical benchmarks. However, most of these benchmarks only feature problems grounded in junior and senior high school subjects, contain only multiple-choice questions, and are confined to a limited scope of elementary arithmetic operations. To address these issues, this paper introduces an expansive benchmark suite SciBench that aims to systematically examine the reasoning capabilities required for complex scientific problem solving. SciBench contains two carefully curated datasets: an open set featuring a range of collegiate-level scientific problems drawn from mathematics, chemistry, and physics textbooks, and a closed set comprising problems from undergraduate-level exams in computer science and mathematics. Based on the two datasets, we conduct an in-depth benchmark study of two representative LLMs with various prompting strategies. The results reveal that current LLMs fall short of delivering satisfactory performance, with an overall score of merely 35.80%. Furthermore, through a detailed user study, we categorize the errors made by LLMs into ten problem-solving abilities. Our analysis indicates that no single prompting strategy significantly outperforms others and some strategies that demonstrate improvements in certain problem-solving skills result in declines in other skills. We envision that SciBench will catalyze further developments in the reasoning abilities of LLMs, thereby ultimately contributing to scientific research and discovery.Comment: Work in progress, 18 page

    From Frequency to Meaning: Vector Space Models of Semantics

    Full text link
    Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM. There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories and we take a detailed look at a specific open source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field

    TELMA Cross Experiment Guidelines

    Get PDF
    Cerulli, M., Pedemonte, B., Robotti, E. (eds.). Internal Report, R.I. 01/07, I.T.D. - C.N.R., Genova.
    This document contains the guidelines developed by members of TELMA as a means for planning, conducting, and analysing a cross experiment aimed at contributing to the construction of a shared research perspective among TELMA teams. It is the product of the PhD students and young researchers who carried forward the whole activity. The actual experimental phase was preceded by a reflective phase in which an agreement was reached on which research questions to address during the experiment. On this basis, the first version of the guidelines document was built, containing all the research questions to be addressed, but also the experimental plans for each team. This included the employed didactical functionalities of the considered ICT tools, indications of the experimental settings, and the methods of data collection and analysis. During the whole experimental phase, the document was constantly updated and shared among the involved persons, who were periodically required to compare the different activities and reflections brought forward by all the teams.
