
    Semantics-based automated essay evaluation

    Automated essay evaluation (AEE) is a widely used practical solution for replacing time-consuming manual grading of student essays. Automated systems are used in combination with human graders in high-stakes assessments as well as in classrooms. During the 50 years since the field began, many challenges have arisen, including finding ways to evaluate semantic content, providing automated feedback, determining the reliability of grades, and making the field more "exposed". In this dissertation we address several of these challenges and propose novel solutions for semantics-based essay evaluation. Most AEE research has been conducted by commercial organizations that protect their investments by releasing proprietary systems whose details are not publicly available. We provide as detailed a comparison as possible of 20 state-of-the-art approaches to automated essay evaluation and propose a new automated essay evaluation system named SAGE (Semantic Automated Grader for Essays) with all technological details revealed to the scientific community. Lack of consideration of text semantics is one of the main weaknesses of existing state-of-the-art systems. We address the evaluation of essay semantics from the perspectives of essay coherence and semantic error detection. Coherence describes the flow of information in an essay and allows us to evaluate connections within the discourse. We propose two groups of coherence attributes: attributes obtained in a high-dimensional semantic space and attributes obtained from sentence-similarity networks. Furthermore, we propose the Automated Error Detection (AED) system, which evaluates essay semantics from the perspective of essay consistency. The system detects semantic errors using information extraction and logical reasoning and can provide semantic feedback to the writer. The proposed system SAGE achieves significantly higher grading accuracy than other state-of-the-art automated essay evaluation systems. In the last part of the dissertation we address the question of grade reliability. Despite unified grading rules, human graders introduce bias into scores. Consequently, a grading model has to implement a grading logic that may be a mixture of grading logics from various graders. We propose an approach, based on an explanation methodology and clustering, for separating a set of essays into subsets that represent different graders. The results show that learning from the ensemble of separated models significantly improves average prediction accuracy on artificial and real-world datasets.
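
    The coherence features themselves are not fully specified in this abstract, so the following is only a minimal sketch of how coherence attributes might be derived from a sentence-similarity network, assuming TF-IDF sentence vectors, cosine similarity, and an illustrative edge threshold; the attribute names and the threshold are placeholders, not SAGE's actual feature set.

```python
# Hypothetical sketch: graph-level coherence attributes from a sentence-similarity
# network built over TF-IDF sentence vectors (not SAGE's actual feature set).
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def coherence_attributes(sentences, threshold=0.2):
    """Build a sentence-similarity network and return simple coherence attributes."""
    vectors = TfidfVectorizer().fit_transform(sentences)
    sims = cosine_similarity(vectors)

    graph = nx.Graph()
    graph.add_nodes_from(range(len(sentences)))
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            if sims[i, j] >= threshold:              # connect sufficiently similar sentences
                graph.add_edge(i, j, weight=float(sims[i, j]))

    return {
        "adjacent_similarity": float(sims.diagonal(offset=1).mean()),  # local flow between neighbours
        "network_density": nx.density(graph),                          # global connectedness
        "clustering_coefficient": nx.average_clustering(graph),        # how tightly sentences group
    }

essay = [
    "Renewable energy reduces carbon emissions.",
    "Solar and wind power are common renewable sources.",
    "Carbon emissions are a major driver of climate change.",
]
print(coherence_attributes(essay))
```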

    Optimization of Window Size for Calculating Semantic Coherence Within an Essay

    Over the last fifty years, as the field of automated essay evaluation has progressed, several approaches have been proposed. Automated essay evaluation focuses primarily on three aspects: style, content, and semantics. The style and content attributes have received the most attention, while the semantics attribute has received less. To measure semantics, the essay is broken into smaller portions using a window, a smaller fraction of the essay. The goal of this work is to determine a suitable window size for measuring semantic coherence between different parts of the essay more precisely.
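
    As a rough illustration of the idea, the sketch below sweeps a few candidate window sizes and scores each by the mean cosine similarity between consecutive windows; the TF-IDF representation, the candidate sizes, and the scoring criterion are assumptions for demonstration, not the procedure used in the paper.

```python
# Illustrative only: score a few window sizes by the mean cosine similarity
# between consecutive windows of an essay (not the paper's actual procedure).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def split_into_windows(sentences, size):
    """Split an essay (a list of sentences) into consecutive windows of `size` sentences."""
    return [" ".join(sentences[i:i + size]) for i in range(0, len(sentences), size)]

def window_coherence(sentences, size):
    """Mean similarity between consecutive windows; higher suggests a smoother flow."""
    windows = split_into_windows(sentences, size)
    if len(windows) < 2:
        return 0.0
    vectors = TfidfVectorizer().fit_transform(windows)
    sims = cosine_similarity(vectors)
    return float(sims.diagonal(offset=1).mean())

essay = [
    "The school library opens early every weekday.",
    "Students can borrow laptops at the front desk.",
    "Borrowed laptops must be returned before closing time.",
    "The library also hosts weekly study groups.",
    "Study groups meet in the main reading room.",
    "Attendance at the study groups is voluntary.",
]
for size in (1, 2, 3):
    print(size, round(window_coherence(essay, size), 3))
```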

    Multi Domain Semantic Information Retrieval Based on Topic Model

    Over the last decades, there have been remarkable shifts in the area of Information Retrieval (IR) as a huge amount of information has accumulated on the Web. This information explosion increases the need for new tools that retrieve meaningful knowledge from various complex information sources. Thus, techniques for searching and extracting important information from numerous database sources have been a key challenge for current IR systems. Topic modeling is one of the most recent techniques for discovering hidden thematic structures in large data collections without human supervision. Several topic models have been proposed in various fields of study and have been utilized extensively in many applications. Latent Dirichlet Allocation (LDA) is the most well-known topic model; it generates topics from large corpora of resources such as text, images, and audio. It has been widely used in information retrieval and data mining, providing an efficient way of identifying latent topics in document collections. However, LDA has the drawback that topic cohesion within a concept is attenuated when estimating infrequently occurring words. Moreover, LDA seems not to consider the meaning of words, but rather to infer hidden topics through a statistical approach. As a result, LDA can cause either a reduction in the quality of topic words or an increase in loose relations between topics. To solve these problems, we propose a domain-specific topic model that combines domain concepts with LDA. Two domain-specific algorithms are suggested for resolving the difficulties associated with LDA. The main strength of our proposed model is that it narrows semantic concepts from broad domain knowledge to a specific domain, which solves the unknown-domain problem. The proposed model is extensively tested on various applications (query expansion, classification, and summarization) to demonstrate its effectiveness. Experimental results show that the proposed model significantly increases the performance of these applications.
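
    For readers unfamiliar with the baseline, the snippet below runs plain LDA with scikit-learn on a toy corpus to show the kind of topic output the proposed domain-specific model builds on; the domain-concept integration itself is not reproduced here, and the corpus and parameters are illustrative.

```python
# Baseline LDA with scikit-learn on a toy corpus; shown only to illustrate the
# model the proposed domain-specific approach extends (the domain-concept
# integration itself is not reproduced here).
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "stock market trading shares investors",
    "investors buy shares during market rallies",
    "patients receive treatment at the hospital",
    "doctors prescribe treatment for their patients",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]  # five strongest words per topic
    print(f"topic {topic_idx}: {', '.join(top_terms)}")
```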

    Ontology engineering of automatic text processing methods

    Currently, ontologies are recognized as the most effective means of formalizing and systematizing knowledge and data in a scientific subject area (SSA). Practice has shown that ontology design patterns are effective for developing the ontology of a scientific subject area, because such an ontology typically contains a large number of typical fragments that are well described by design patterns. In this paper, we present an approach to ontology engineering of automatic text processing methods based on ontology design patterns. To obtain an ontology that describes automatic text processing sufficiently fully, a large number of scientific publications and information resources containing information about the modeled area must be processed. Updating the ontology with information from such sources can be facilitated and sped up by using lexical and syntactic ontology design patterns. Our ontology of automatic text processing will become the conceptual basis of an intelligent information resource on modern methods of automatic text processing, which will provide systematization of all information on these methods, its integration into a single information space, convenient navigation through it, and meaningful access to it.
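
    As a toy illustration of a lexical-syntactic pattern of the kind such an approach might use, the snippet below applies a Hearst-style "X such as Y" rule to harvest candidate subclass relations from a sentence; the pattern, the example sentence, and the function name are invented for illustration and are not the patterns used in the paper.

```python
# Toy Hearst-style lexical-syntactic pattern: harvest candidate subclass relations
# from "X such as Y, Z and W" constructions. Real ontology population would use
# more robust NLP; the pattern and sentence here are purely illustrative.
import re

PATTERN = re.compile(r"(\w[\w ]*?) such as (.+?)\.")

def harvest_relations(text):
    """Return (subclass, superclass) candidate pairs matched by the pattern."""
    relations = []
    for match in PATTERN.finditer(text):
        superclass = match.group(1).strip()
        items = re.split(r",\s*|\s+and\s+", match.group(2))
        relations.extend((item.strip(), superclass) for item in items if item.strip())
    return relations

sentence = "Automatic text processing methods such as tokenization, lemmatization and parsing."
print(harvest_relations(sentence))
# [('tokenization', 'Automatic text processing methods'),
#  ('lemmatization', 'Automatic text processing methods'),
#  ('parsing', 'Automatic text processing methods')]
```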

    Inclusion de sens dans la représentation de documents textuels : état de l'art

    This document provides an overview of the state of the art in representing meaning in textual documents.

    Is Artificial Intelligence Really the Next Big Thing in Learning and Teaching in Higher Education? A Conceptual Paper

    Artificial Intelligence in higher education (AIED) is becoming a more important research area with increasing development and application of AI in wider society. However, AI-based tools have not yet been widely adopted in higher education. As a result, there is a lack of sound evidence on the pedagogical impact of AI for learning and teaching. This conceptual paper thus seeks to bridge the gap and addresses the following question: is artificial intelligence really the next big thing that will revolutionise learning and teaching in higher education? Adopting the technological pedagogical content knowledge (TPACK) framework and the Unified Theory of Acceptance and Use of Technology (UTAUT) as the theoretical foundations, we argue that Artificial Intelligence (AI) technologies, at least in their current state of development, do not afford any real new advances for pedagogy in higher education. This is mainly because there does not seem to be valid evidence of how the use of AI technologies and applications has helped students improve learning, and/or helped tutors make effective pedagogical changes. In addition, the pedagogical affordances of AI have not yet been clearly defined. The challenges that the higher education sector currently faces in adopting AI are discussed at three hierarchical levels: national, institutional and personal. The paper ends with recommendations for accelerating AI use in universities, including developing dedicated AI adoption strategies at the institutional level, updating the existing technology infrastructure and up-skilling academic tutors for AI.

    Measuring associational thinking through word embeddings

    The development of a model to quantify semantic similarity and relatedness between words has been the major focus of many studies in various fields, e.g. psychology, linguistics, and natural language processing. Unlike the measures proposed by most previous research, this article is aimed at estimating automatically the strength of associative words that can be semantically related or not. We demonstrate that the performance of the model depends not only on the combination of independently constructed word embeddings (namely, corpus- and network-based embeddings) but also on the way these word vectors interact. The research concludes that the weighted average of the cosine-similarity coefficients derived from independent word embeddings in a double vector space tends to yield high correlations with human judgements. Moreover, we demonstrate that evaluating word associations through a measure that relies on not only the rank ordering of word pairs but also the strength of associations can reveal some findings that go unnoticed by traditional measures such as Spearman's and Pearson's correlation coefficients. Financial support for this research has been provided by the Spanish Ministry of Science, Innovation and Universities [grant number RTC 2017-6389-5], the Spanish Agencia Estatal de Investigación [grant number PID2020-112827GB-I00 / AEI / 10.13039/501100011033], and the European Union's Horizon 2020 research and innovation program [grant number 101017861: project SMARTLAGOON]. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. Periñán-Pascual, C. (2022). Measuring associational thinking through word embeddings. Artificial Intelligence Review, 55(3), 2065-2102. https://doi.org/10.1007/s10462-021-10056-6
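
    A minimal sketch of the combination strategy described above is given below: a weighted average of cosine similarities computed in two independently built embedding spaces (a corpus-based and a network-based space). The toy vectors, the alpha weight, and the function names are placeholders, not the paper's trained embeddings or tuned setting.

```python
# Hedged sketch of a double-vector-space association measure: a weighted average
# of cosine similarities from two independently built embedding spaces. The toy
# 2-D vectors and the alpha weight are placeholders, not the paper's embeddings.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association_strength(word_a, word_b, corpus_space, network_space, alpha=0.6):
    """Weighted average of the similarities in the two spaces (alpha weights the corpus space)."""
    sim_corpus = cosine(corpus_space[word_a], corpus_space[word_b])
    sim_network = cosine(network_space[word_a], network_space[word_b])
    return alpha * sim_corpus + (1 - alpha) * sim_network

# Toy two-dimensional embedding spaces purely for demonstration.
corpus_space = {"coffee": np.array([0.9, 0.1]), "cup": np.array([0.8, 0.3])}
network_space = {"coffee": np.array([0.2, 0.9]), "cup": np.array([0.4, 0.7])}

print(round(association_strength("coffee", "cup", corpus_space, network_space), 3))
```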

    Intelligent Information Access to Linked Data - Weaving the Cultural Heritage Web

    The subject of the dissertation is an information alignment experiment of two cultural heritage information systems (ALAP): the Perseus Digital Library and Arachne. In modern societies, information integration is gaining importance for many tasks, such as business decision making or even catastrophe management. It is beyond doubt that the information available in digital form can offer users new ways of interaction. Also, in the humanities and cultural heritage communities, more and more information is being published online. But in many situations, the way information has been made publicly available disrupts the research process due to its heterogeneity and distribution. Therefore, integrated information will be a key factor in pursuing successful research, and the need for information alignment is widely recognized. ALAP is an attempt to integrate information from Perseus and Arachne, not only on a schema level, but also by performing entity resolution. To that end, technical peculiarities and philosophical implications of the concepts of identity and co-reference are discussed. Multiple approaches to information integration and entity resolution are discussed and evaluated. The methodology used to implement ALAP is mainly rooted in the fields of information retrieval and knowledge discovery. First, an exploratory analysis was performed on both information systems to get a first impression of the data. After that, (semi-)structured information from both systems was extracted and normalized. Then, a clustering algorithm was used to reduce the number of needed entity comparisons. Finally, a thorough matching was performed on the different clusters. ALAP helped identify the challenges and highlighted the opportunities that arise when attempting to align cultural heritage information systems.
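
    The following sketch illustrates the general two-stage entity-resolution strategy mentioned above, assuming token-based blocking to limit candidate pairs and a simple string-similarity comparison inside each block; the record fields, similarity measure, and threshold are invented for illustration and are not ALAP's actual pipeline.

```python
# Illustrative two-stage entity resolution: token-based blocking to limit the
# number of comparisons, then pairwise string matching within blocks. Record
# fields, similarity measure and threshold are invented, not ALAP's pipeline.
import re
from collections import defaultdict
from difflib import SequenceMatcher
from itertools import combinations

records = [
    {"id": "perseus:1", "label": "Temple of Apollo, Delphi"},
    {"id": "arachne:9", "label": "Apollo Temple (Delphi)"},
    {"id": "perseus:2", "label": "Parthenon, Athens"},
]

def tokens(record):
    return set(re.findall(r"[a-z]+", record["label"].lower()))

# Blocking: index every record under each of its tokens, so only records that
# share at least one token ever become candidate pairs.
blocks = defaultdict(list)
for record in records:
    for token in tokens(record):
        blocks[token].append(record)

candidates = set()
for block in blocks.values():
    for a, b in combinations(block, 2):
        candidates.add((a["id"], b["id"]))

# Thorough matching, but only on the candidate pairs produced by blocking.
by_id = {record["id"]: record for record in records}
for id_a, id_b in sorted(candidates):
    score = SequenceMatcher(None, by_id[id_a]["label"].lower(),
                            by_id[id_b]["label"].lower()).ratio()
    if score >= 0.5:                     # illustrative acceptance threshold
        print(f"{id_a} <-> {id_b} (score {score:.2f})")
```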

    The development of a fuzzy semantic sentence similarity measure

    A problem in the field of semantic sentence similarity is the inability of sentence similarity measures to accurately represent the effect that perception-based (fuzzy) words, which are commonly used in natural language, have on sentence similarity. This research project developed a new sentence similarity measure to solve this problem. The new measure, the Fuzzy Algorithm for Similarity Testing (FAST), is a novel ontology-based similarity measure that uses concepts from fuzzy logic and computing with words to allow for the accurate representation of fuzzy words. Through human experimentation, fuzzy sets were created for six categories of words based on their levels of association with particular concepts. These fuzzy sets were then defuzzified and the results used to create new ontological relations between the fuzzy words contained within them, from which a new fuzzy ontology was created. Using these relationships allows for the creation of a new ontology-based fuzzy semantic text similarity algorithm that is able to capture both the effect of fuzzy words on sentence similarity and the effect that fuzzy words have on non-fuzzy words within a sentence. To evaluate FAST, two new test datasets were created through questionnaire-based human experimentation. This involved developing a robust methodology for creating usable fuzzy datasets, including an automated method that was used to create one of the two fuzzy datasets. FAST was evaluated in experiments conducted using the new fuzzy datasets. The results showed an improved level of correlation between FAST and human test results over two existing sentence similarity measures, demonstrating its success in representing the similarity between pairs of sentences containing fuzzy words.
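
    As a toy illustration of the fuzzy-word idea, the sketch below assigns each perception word a small fuzzy set over a 0-1 magnitude scale, defuzzifies it to a crisp value by a weighted centroid, and compares two words by the closeness of their crisp values; the membership values and word list are invented and are not FAST's actual fuzzy sets.

```python
# Toy sketch: perception words as fuzzy sets over a 0-1 magnitude scale,
# defuzzified to crisp values and compared by closeness. The membership values
# and word list are invented; they are not FAST's actual fuzzy sets.
def centroid(fuzzy_set):
    """Defuzzify a fuzzy set given as {value: membership} via its weighted centroid."""
    total = sum(fuzzy_set.values())
    return sum(value * membership for value, membership in fuzzy_set.items()) / total

FUZZY_SIZE_WORDS = {
    "tiny":     {0.0: 1.0, 0.1: 0.8, 0.2: 0.3},
    "small":    {0.1: 0.5, 0.2: 1.0, 0.3: 0.6},
    "large":    {0.7: 0.6, 0.8: 1.0, 0.9: 0.5},
    "enormous": {0.8: 0.3, 0.9: 0.9, 1.0: 1.0},
}

def fuzzy_word_similarity(word_a, word_b):
    """Similarity in [0, 1]: one minus the distance between defuzzified magnitudes."""
    return 1.0 - abs(centroid(FUZZY_SIZE_WORDS[word_a]) - centroid(FUZZY_SIZE_WORDS[word_b]))

print(round(fuzzy_word_similarity("large", "enormous"), 2))   # close magnitudes -> ~0.86
print(round(fuzzy_word_similarity("tiny", "enormous"), 2))    # distant magnitudes -> ~0.13
```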