
    Semantic based Text Summarization for Single Document on Android Mobile Device

    The explosion of information on the World Wide Web overwhelms readers with limitless content. Long internet articles and journal papers are often cumbersome to read and comprehend, and readers are immersed in a pool of information with limited time to assimilate it all. This leads to information overload, in which readers face more information than they can process. Hence, there is a clear need for an automatic text summarizer that produces summaries faster than humans can. Text summarization research on mobile platforms has been inspired by the paradigm shift toward accessing information ubiquitously, at any time and anywhere, on smartphones and smart devices. In this research, semantic and syntactic summarization is implemented in a text summarizer to address the overload problem while providing a more coherent summary. Additionally, WordNet is used as the lexical database for semantic analysis of the text document, yielding a more efficient and accurate algorithm than existing summarization systems. The objective of the paper is to integrate WordNet into the proposed system, TextSumIt, which condenses lengthy documents into shorter summaries with higher readability for Android mobile users. The summary output is evaluated using recall, precision, and F-score, in comparison with an existing automated summarizer; human-generated summaries from the Document Understanding Conference (DUC) serve as the reference summaries. The evaluation shows satisfactory results.
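    As a concrete illustration of the evaluation described above, the following minimal Python sketch computes unigram-overlap precision, recall, and F-score between a candidate summary and a human reference, in the style of ROUGE-1; the function name and whitespace tokenization are assumptions for illustration, not TextSumIt's actual implementation.

    # Minimal sketch (assumed names): score a candidate summary against a
    # human reference by unigram overlap, as in ROUGE-1-style evaluation.
    from collections import Counter

    def unigram_scores(candidate: str, reference: str) -> tuple[float, float, float]:
        cand = Counter(candidate.lower().split())
        ref = Counter(reference.lower().split())
        # Counter intersection counts each shared word up to its minimum frequency.
        overlap = sum((cand & ref).values())
        precision = overlap / sum(cand.values()) if cand else 0.0
        recall = overlap / sum(ref.values()) if ref else 0.0
        f_score = (2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
        return precision, recall, f_score

    print(unigram_scores("the summarizer condenses long web articles",
                         "the system condenses long articles into short summaries"))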

    Mixed Reality Interfaces for Augmented Text and Speech

    While technology plays a vital role in human communication, significant challenges remain in using it in everyday life. Modern computing devices such as smartphones offer convenient and swift access to information, facilitating tasks like reading documents or communicating with friends. However, these tools frequently lack adaptability, become distracting, consume excessive time, and impede interactions with people and contextual information. They often require numerous steps and a significant time investment to gather pertinent information. We explore an efficient process of contextual information gathering for mixed reality (MR) interfaces that present information directly in the user's view. This approach allows a seamless and flexible transition between language and subsequent contextual references without disrupting the flow of communication. 'Augmented language' can be defined as the integration of language and communication with mixed reality to enhance, transform, or manipulate language-related aspects through various forms of linguistic augmentation (such as annotation and referencing, aiding social interactions, translation, and localization). In this thesis, our broad objective is to explore mixed reality interfaces and their potential to enhance augmented language, particularly in the domains of speech and text. We aim to create interfaces that offer a more natural, generalizable, on-demand, and real-time experience of accessing contextually relevant information, along with adaptive interactions. To address this broader objective, we break it down into two instances of augmented language: first, enhancing on-the-fly, co-located, in-person conversations with embedded references; and second, enhancing digital and physical documents with MR to provide on-demand reading support in the form of different summarization techniques. To examine the effectiveness of these speech and text interfaces, we conducted two studies in which participants evaluated our system prototypes in different use cases. The exploratory usability study for the first system confirms that it reduces distraction and friction in conversation compared to smartphone search while providing highly useful and relevant information. For the second project, we conducted an exploratory design workshop to identify categories of document enhancement, and we later conducted a user study with a mixed-reality prototype that surfaced five broad themes for discussing the benefits of MR document enhancement.

    An Ontology based Text-to-Picture Multimedia m-Learning System

    Multimedia text-to-picture is the process of building a mental representation from words associated with images. In instructional research, multimedia messages illustrate material with words and pictures designed to promote learner comprehension. Illustrations can be presented in static forms such as images, symbols, icons, figures, tables, charts, and maps, or in dynamic forms such as animations and video clips. Owing to the intuitiveness and vividness of visual illustration, many text-to-picture systems have been proposed in the literature, such as Word2Image and Chat with Illustrations, among others discussed in the literature review chapter of this thesis. However, these systems share common limitations, especially in the images they present: the retrieved material is not fully suitable for educational purposes, and much of it is general-purpose rather than context-based or tailored to learners' needs. Manually finding pedagogic images to illustrate educational content is inefficient and requires enormous effort, making it a very challenging task. In addition, existing learning systems that mine text by keyword or sentence selection provide incomplete pedagogic illustrations, because words and their semantically related terms are not considered when finding illustrations. In this dissertation, we propose new approaches based on a semantic conceptual graph and semantically distributed weights to mine optimal illustrations matching Arabic text in the children's story domain. We combine these approaches with keyword and sentence selection algorithms to improve the retrieval of images matching the Arabic text. Our findings show significant improvements in modelling Arabic vocabulary with the most meaningful images and the best coverage of the domain of discourse. We also develop a mobile text-to-picture system with two novel features: (1) a conceptual graph visualization (CGV) and (2) a visual illustrative assessment. The CGV shows the relationships between the terms associated with a picture, enabling learners to discover semantic links between Arabic terms and improve their understanding of Arabic vocabulary. The assessment component allows the instructor to automatically follow up on learners' performance. Our experiments demonstrate the efficiency of our multimedia text-to-picture system in enhancing learners' knowledge and boosting their comprehension of Arabic vocabulary.
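    To make the retrieval idea concrete, here is a small, hypothetical Python sketch of ranking candidate images by semantically weighted keyword overlap: each keyword's weight is partially distributed onto its semantically related terms, and each image is scored by the total weight of the tags it covers. The 0.5 distribution factor, the data structures, and the example data are illustrative assumptions, not the dissertation's actual algorithm.

    # Hypothetical sketch: rank images by semantically weighted keyword coverage.
    def rank_images(keyword_weights: dict[str, float],
                    related_terms: dict[str, list[str]],
                    image_tags: dict[str, set[str]]) -> list[tuple[str, float]]:
        # Distribute part of each keyword's weight onto its related terms
        # (an assumed stand-in for the semantically distributed weights).
        weights = dict(keyword_weights)
        for kw, w in keyword_weights.items():
            for term in related_terms.get(kw, []):
                weights[term] = weights.get(term, 0.0) + 0.5 * w
        # Score each image by the summed weight of the terms its tags match.
        scores = {img: sum(weights.get(tag, 0.0) for tag in tags)
                  for img, tags in image_tags.items()}
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    ranking = rank_images(
        {"lion": 1.0, "forest": 0.6},
        {"lion": ["animal", "cub"], "forest": ["trees"]},
        {"story1.png": {"lion", "trees"}, "story2.png": {"animal", "city"}},
    )
    print(ranking)  # story1.png (direct keyword + related term) ranks first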

    Language report for Catalan (English version)

    The central objective of the Metanet4u project is to contribute to the establishment of a pan-European digital platform that makes available language resources and services, encompassing both datasets and software tools, for speech and language processing, and supports a new generation of exchange facilities for them.

    Natural Language Processing in-and-for Design Research

    We review the scholarly contributions that utilise Natural Language Processing (NLP) methods to support the design process. Using a heuristic approach, we collected 223 articles published in 32 journals from 1991 to the present. We present the state of the art in NLP in-and-for design research by reviewing these articles according to the type of natural language text source: internal reports, design concepts, discourse transcripts, technical publications, consumer opinions, and others. After summarizing these contributions and identifying their gaps, we utilise an existing design innovation framework to identify the applications currently supported by NLP. We then propose several methodological and theoretical directions for future NLP in-and-for design research.