
    Hybrid robust deep and shallow semantic processing for creativity support in document production

    The research performed in the DeepThought project (http://www.project-deepthought.net) aims at demonstrating the potential of deep linguistic processing when added to existing shallow methods that ensure robustness. Classical information retrieval is extended by high-precision concept indexing and relation detection. We use this approach to demonstrate the feasibility of three ambitious applications, one of which is a tool for creativity support in document production and collective brainstorming. This application is described in detail in this paper. Common to all three applications, and the basis for their development, is a platform for integrated linguistic processing. This platform is based on a generic software architecture that combines multiple NLP components and on Robust Minimal Recursion Semantics (RMRS) as a uniform representation language.
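
    As a rough illustration of the concept-indexing idea mentioned above, the sketch below builds an inverted index keyed by concept identifiers instead of surface tokens. The CONCEPT_LEXICON table, the extract_concepts helper and the concept names are hypothetical placeholders; DeepThought derives such information from deep linguistic analysis rather than a hand-written lookup table.

```python
# Minimal sketch of concept-level indexing layered over a classical
# inverted index. The lexicon and extraction logic are placeholders.
from collections import defaultdict

CONCEPT_LEXICON = {"brainstorming": "Activity/Ideation",
                   "document": "Artifact/Document"}

def extract_concepts(text):
    """Map surface tokens to concept identifiers (placeholder logic)."""
    return [CONCEPT_LEXICON[t] for t in text.lower().split() if t in CONCEPT_LEXICON]

class ConceptIndex:
    def __init__(self):
        self.postings = defaultdict(set)   # concept id -> document ids

    def add(self, doc_id, text):
        for concept in extract_concepts(text):
            self.postings[concept].add(doc_id)

    def query(self, concept):
        return sorted(self.postings.get(concept, set()))

index = ConceptIndex()
index.add(1, "Collective brainstorming about a new document")
index.add(2, "A plain keyword document")
print(index.query("Activity/Ideation"))   # -> [1]
```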

    The DeepThought Core Architecture Framework

    The research performed in the DeepThought project aims at demonstrating the potential of deep linguistic processing when combined with shallow methods for robustness. Classical information retrieval is extended by high-precision concept indexing and relation detection. On the basis of this approach, the feasibility of three ambitious applications will be demonstrated, namely: precise information extraction for business intelligence; email response management for customer relationship management; creativity support for document production and collective brainstorming. Common to these applications, and the basis for their development, is the XML-based, RMRS-enabled core architecture framework that is described in detail in this paper. The framework is not limited to the applications envisaged in the DeepThought project, but can also be employed, for example, to generate and use XML standoff annotation of documents and linguistic corpora, and more generally for a wide range of NLP-based applications and research purposes.
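
    The XML standoff annotation mentioned above can be illustrated with a minimal sketch: the raw text is stored unchanged and annotations refer to it by character offsets. The element and attribute names below are illustrative only and do not reflect the framework's actual schema.

```python
# Minimal sketch of XML standoff annotation: annotations live outside the
# text and point back into it via character offsets.
import xml.etree.ElementTree as ET

text = "DeepThought combines deep and shallow processing."

standoff = ET.Element("annotations", source="doc-1")
ann = ET.SubElement(standoff, "annotation",
                    type="named-entity", start="0", end="11", label="Project")

# Resolve an annotation back to its surface string via the offsets.
start, end = int(ann.get("start")), int(ann.get("end"))
print(text[start:end])                        # -> DeepThought
print(ET.tostring(standoff, encoding="unicode"))
```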

    Adaptive hypermedia for education and training

    Adaptive hypermedia (AH) is an alternative to the traditional, one-size-fits-all approach in the development of hypermedia systems. AH systems build a model of the goals, preferences, and knowledge of each individual user; this model is used throughout the interaction with the user to adapt to the needs of that particular user (Brusilovsky, 1996b). For example, a student in an adaptive educational hypermedia system will be given a presentation that is adapted specifically to his or her knowledge of the subject (De Bra & Calvi, 1998; Hothi, Hall, & Sly, 2000) as well as a suggested set of the most relevant links to proceed further (Brusilovsky, Eklund, & Schwarz, 1998; Kavcic, 2004). An adaptive electronic encyclopedia will personalize the content of an article to augment the user's existing knowledge and interests (Bontcheva & Wilks, 2005; Milosavljevic, 1997). A museum guide will adapt the presentation about every visited object to the user's individual path through the museum (Oberlander et al., 1998; Stock et al., 2007). Adaptive hypermedia belongs to the class of user-adaptive systems (Schneider-Hufschmidt, Kühme, & Malinowski, 1993). A distinctive feature of an adaptive system is an explicit user model that represents user knowledge, goals, and interests, as well as other features that enable the system to adapt to different users with their own specific sets of goals. An adaptive system collects data for the user model from various sources, which can include implicitly observing user interaction and explicitly requesting direct input from the user. The user model is applied to provide an adaptation effect, that is, to tailor interaction to different users in the same context. In different kinds of adaptive systems, adaptation effects can vary greatly. In AH systems, adaptation is limited to three major technologies: adaptive content selection, adaptive navigation support, and adaptive presentation. The first of these three technologies comes from the fields of adaptive information retrieval (IR) and intelligent tutoring systems (ITS). When the user searches for information, the system adaptively selects and prioritizes the most relevant items (Brajnik, Guida, & Tasso, 1987; Brusilovsky, 1992b).
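
    To make the notions of a user model and adaptive navigation support more concrete, here is a minimal sketch of an overlay user model with link annotation. The concepts, prerequisite map, and threshold are invented for illustration; real AH systems use much richer models and adaptation rules.

```python
# Minimal sketch: an overlay user model (estimated knowledge per concept)
# drives adaptive link annotation ('learned', 'recommended', 'not ready').
user_model = {"html-basics": 0.9, "css": 0.4, "javascript": 0.0}

# prerequisite relations between concepts (toy values)
prerequisites = {"css": ["html-basics"], "javascript": ["html-basics", "css"]}

def annotate_link(concept, model, ready_threshold=0.6):
    """Classify a link for this user based on knowledge and prerequisites."""
    if model.get(concept, 0.0) >= ready_threshold:
        return "learned"
    if all(model.get(p, 0.0) >= ready_threshold
           for p in prerequisites.get(concept, [])):
        return "recommended"
    return "not ready"

for concept in ("css", "javascript"):
    print(concept, "->", annotate_link(concept, user_model))
# css -> recommended, javascript -> not ready
```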

    High-level feature detection from video in TRECVid: a 5-year retrospective of achievements

    Successful and effective content-based access to digital video requires fast, accurate and scalable methods to determine the video content automatically. A variety of contemporary approaches to this rely on text taken from speech within the video, on matching one video frame against others using low-level characteristics like colour, texture, or shapes, or on determining and matching objects appearing within the video. Possibly the most important technique, however, is one which determines the presence or absence of a high-level or semantic feature within a video clip or shot. By utilizing dozens, hundreds or even thousands of such semantic features we can support many kinds of content-based video navigation. Critically, however, this depends on being able to determine whether each feature is or is not present in a video clip. The last 5 years have seen much progress in the development of techniques to determine the presence of semantic features within video. This progress can be tracked in the annual TRECVid benchmarking activity, where dozens of research groups measure the effectiveness of their techniques on common data and using an open, metrics-based approach. In this chapter we summarise the work done on the TRECVid high-level feature task, showing the progress made year-on-year. This provides a fairly comprehensive statement on where the state of the art is regarding this important task, not just for one research group or for one approach, but across the spectrum. We then use this past and ongoing work as a basis for highlighting the trends that are emerging in this area, and the questions which remain to be addressed before we can achieve large-scale, fast and reliable high-level feature detection on video.
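
    The "one detector per semantic feature" setup described above can be sketched as a set of independent binary classifiers over shot-level descriptors. The descriptors and labels below are random toy data, and logistic regression merely stands in for whatever classifier a participating group actually uses on real visual features.

```python
# Minimal sketch of per-concept binary detection on shot descriptors.
# All data below is synthetic; real systems extract visual features
# (colour, texture, local descriptors) from keyframes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 64))          # 200 shots, 64-dim descriptors
labels = {                                    # one binary label set per concept
    "outdoor": rng.integers(0, 2, size=200),
    "person":  rng.integers(0, 2, size=200),
}

# Train one independent detector per semantic feature.
detectors = {c: LogisticRegression(max_iter=1000).fit(X_train, y)
             for c, y in labels.items()}

shot = rng.normal(size=(1, 64))
scores = {c: float(m.predict_proba(shot)[0, 1]) for c, m in detectors.items()}
print(scores)   # per-concept confidence that the feature is present in the shot
```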

    A data mining approach to ontology learning for automatic content-related question-answering in MOOCs.

    The advent of Massive Open Online Courses (MOOCs) allows massive volumes of registrants to enrol in these courses. This research aims to offer MOOC registrants automatic content-related feedback to fulfil their cognitive needs. A framework is proposed which consists of three modules: the subject ontology learning module, the short text classification module, and the question answering module. Unlike previous research, a regular expression parser approach is used to identify relevant concepts for ontology learning, and the relevant concepts are extracted from unstructured documents. To build the concept hierarchy, a frequent pattern mining approach is used which is guided by a heuristic function to ensure that sibling concepts are at the same level in the hierarchy. As this process does not require specific lexical or syntactic information, it can be applied to any subject. To validate the approach, the resulting ontology is used in a question-answering system which analyses students' content-related questions and generates answers for them. Textbook end-of-chapter questions and answers are used to validate the question-answering system. The resulting ontology is compared against Text2Onto for the question-answering system, and it achieves favourable results. Finally, different indexing approaches based on a subject's ontology are investigated when classifying short text in MOOC forum discussion data; the investigated indexing approaches are unigram-based, concept-based, and hierarchical concept indexing. The experimental results show that the ontology-based feature indexing approaches outperform the unigram-based indexing approach. Experiments are done in binary classification and multi-label classification settings. The results are consistent and show that hierarchical concept indexing outperforms both concept-based and unigram-based indexing. The bagging and random forest classifiers achieve the best results among the tested classifiers.
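
    A rough sketch of two of the ideas above, regular-expression-based concept extraction and hierarchical concept indexing (where a matched concept also activates its ancestors in the hierarchy), is given below. The pattern, the toy hierarchy, and the example question are illustrative and are not taken from the thesis.

```python
# Minimal sketch of regex-based concept extraction plus hierarchical
# concept indexing: a matched concept contributes itself and its ancestors.
import re

# parent map: child concept -> parent concept (toy hierarchy)
hierarchy = {"binary search tree": "tree", "tree": "data structure"}

concept_pattern = re.compile(
    r"\b(?:binary search tree|tree|data structure)\b", re.IGNORECASE)

def extract_concepts(text):
    return [m.group(0).lower() for m in concept_pattern.finditer(text)]

def hierarchical_index(text):
    """Return matched concepts plus all of their ancestors."""
    features = set()
    for concept in extract_concepts(text):
        while concept is not None:
            features.add(concept)
            concept = hierarchy.get(concept)
    return sorted(features)

post = "How do I delete a node from a binary search tree?"
print(hierarchical_index(post))
# -> ['binary search tree', 'data structure', 'tree']
```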

    Knowledge management using machine learning, natural language processing and ontology

    This research developed a concept indexing framework which systematically integrates machine learning, natural language processing and ontology technologies to facilitate knowledge acquisition, extraction and organisation. The research reported in this thesis focuses first on the conceptual model of concept indexing, which represents knowledge as entities and concepts, and outlines its benefits and the system architecture built on this model. The thesis then presents a knowledge acquisition framework that uses machine learning for focused crawling of Web content to enable automatic knowledge acquisition. Two language resources are developed to enable ontology tagging: an ontology dictionary and an ontologically tagged corpus, the latter created with a heuristic algorithm developed in the thesis. The ontology tagging algorithm is then built on top of the ontology dictionary and the ontologically tagged corpus. Finally, the thesis presents the conceptual model, the system architecture, and a prototype system that uses concept indexing to facilitate knowledge acquisition, extraction and organisation. The solutions proposed in the thesis are illustrated with examples based on this prototype system.
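
    As a loose illustration of ontology tagging with an ontology dictionary, the sketch below performs greedy longest-match tagging over a token sequence. The dictionary entries and class names are toy placeholders, not the resources or the heuristic algorithm built in the thesis.

```python
# Minimal sketch of dictionary-based ontology tagging with greedy longest match.
ONTOLOGY_DICT = {
    ("machine", "learning"): "ResearchField",
    ("natural", "language", "processing"): "ResearchField",
    ("ontology",): "KnowledgeRepresentation",
}
MAX_LEN = max(len(k) for k in ONTOLOGY_DICT)

def tag(tokens):
    """Return (span text, ontology class) pairs; 'O' marks untagged tokens."""
    tagged, i = [], 0
    while i < len(tokens):
        for n in range(min(MAX_LEN, len(tokens) - i), 0, -1):
            span = tuple(t.lower() for t in tokens[i:i + n])
            if span in ONTOLOGY_DICT:
                tagged.append((" ".join(tokens[i:i + n]), ONTOLOGY_DICT[span]))
                i += n
                break
        else:
            tagged.append((tokens[i], "O"))
            i += 1
    return tagged

print(tag("Knowledge management using machine learning and ontology".split()))
```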

    Web news classification using neural networks based on PCA

    In this paper, we propose a news web page classification method (WPCM). The WPCM uses a neural network whose inputs are obtained from both the principal components and class profile-based features (CPBF). A fixed number of regular words from each class is combined with the reduced features from the PCA to form the feature vectors. These feature vectors are then used as the input to the neural network for classification. The experimental evaluation demonstrates that the WPCM provides acceptable classification accuracy on the sports news datasets.
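
    A minimal sketch of the pipeline as described, assuming scikit-learn: principal components of the term-count vectors are concatenated with class profile-based features (counts of a few words chosen per class), and the combined vectors feed a small neural network. The documents, profile words, and dimensions below are toy values, not the paper's setup.

```python
# Minimal sketch: PCA-reduced term counts + class profile-based features
# (CPBF) concatenated and fed to a neural network classifier.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

docs = ["the team won the football match", "stocks fell on the market today",
        "the striker scored a late goal", "shares rallied as markets opened"]
y = ["sport", "business", "sport", "business"]

# a few words that are frequent in each class (toy profile)
profile_words = {"sport": ["football", "goal"], "business": ["market", "shares"]}

vec = CountVectorizer()
X_counts = vec.fit_transform(docs).toarray()

pca_feats = PCA(n_components=2).fit_transform(X_counts)   # reduced features

vocab = vec.vocabulary_
def cpbf(row):
    # one count per profile word, in a fixed order across classes
    return [row[vocab[w]] if w in vocab else 0
            for ws in profile_words.values() for w in ws]

X = np.hstack([pca_feats, np.array([cpbf(r) for r in X_counts])])

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X, y)
print(clf.predict(X))   # toy check on the training documents
```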

    Analyzing transfer learning impact in biomedical cross lingual named entity recognition and normalization

    Background: The volume of biomedical literature and clinical data is growing at an exponential rate. Therefore, efficient access to data described in unstructured biomedical texts is a crucial task for the biomedical industry and research. Named Entity Recognition (NER) is the first step for information and knowledge acquisition when we deal with unstructured texts. Recent NER approaches use contextualized word representations as input for a downstream classification task. However, distributed word vectors (embeddings) are very limited in Spanish and even more so for the biomedical domain.
    Methods: In this work, we develop several biomedical Spanish word representations, and we introduce two deep learning approaches for recognizing pharmaceutical, chemical, and other biomedical entities in Spanish clinical case texts and biomedical texts, one based on a Bi-LSTM-CRF model and the other on a BERT-based architecture.
    Results: Several Spanish biomedical embeddings together with the two deep learning models were evaluated on the PharmaCoNER and CORD-19 datasets. The PharmaCoNER dataset is composed of a set of Spanish clinical cases annotated with drugs, chemical compounds and pharmacological substances; our extended Bi-LSTM-CRF model obtains an F-score of 85.24% on entity identification and classification and the BERT model obtains an F-score of 88.80%. For the entity normalization task, the extended Bi-LSTM-CRF model achieves an F-score of 72.85% and the BERT model achieves 79.97%. The CORD-19 dataset consists of scholarly articles written in English annotated with biomedical concepts such as disorder, species, chemical or drugs, gene and protein, enzyme and anatomy. The Bi-LSTM-CRF model and the BERT model obtain an F-measure of 78.23% and 78.86%, respectively, on entity identification and classification on the CORD-19 dataset.
    Conclusion: These results show that deep learning models with in-domain knowledge learned from large-scale datasets greatly improve named entity recognition performance. Moreover, contextualized representations help to capture the complexity and ambiguity inherent in biomedical texts. Embeddings based on words, concepts, senses, etc. for languages other than English are required to improve NER tasks in those languages. This work was partially supported by the Research Program of the Ministry of Economy and Competitiveness - Government of Spain (DeepEMR project TIN2017-87548-C2-1-R).
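
    A minimal sketch of BERT-based entity recognition on a Spanish clinical sentence, assuming the Hugging Face transformers library. The MODEL_NAME below is a placeholder, not the checkpoint used in the paper; a model fine-tuned for Spanish biomedical NER (for example on PharmaCoNER) would need to be substituted.

```python
# Minimal sketch of transformer-based token classification for Spanish
# biomedical NER. MODEL_NAME is a placeholder checkpoint path.
from transformers import pipeline

MODEL_NAME = "path/to/spanish-biomedical-ner-checkpoint"   # placeholder

ner = pipeline("token-classification",
               model=MODEL_NAME,
               aggregation_strategy="simple")   # merge word pieces into spans

text = "Se administró paracetamol 500 mg al paciente."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```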