10 research outputs found

    An Investigation into Information Navigation via Diverse Keyword-based Facets

    In the age of information overload, it is necessary to provide effective information navigation tools that operate over unstructured textual data. Current state-of-the-art methods are limited in terms of providing effective exploration capabilities for various information seeking tasks, especially those arising in domains such as online journalism. Here we argue for improvements in faceted search systems, via new strategies for identifying keyword-based facets. Our proposed technique utilises a PageRank model operating over the graph of terms appearing in documents, while also employing novel methods for biasing towards significant terms and named entities. In addition, we consider the notion of diversity within extracted keywords in an effort to maximize coverage over a range of topics. We perform experimental evaluations over issues relevant to the Irish General Elections 2016, demonstrating the superior performance of our proposed technique.
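    The abstract gives no implementation details, but the core idea, a PageRank model biased towards significant terms and named entities followed by a diversity-aware selection of keywords, can be sketched roughly as follows. The co-occurrence window, the bias weights and the overlap threshold below are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch: biased PageRank over a term co-occurrence graph with a
# greedy diversity filter. Window size, bias weights and the 0.5 overlap
# threshold are illustrative assumptions.
import networkx as nx

def extract_keyword_facets(docs, bias_terms=None, top_k=10, window=5):
    """docs: list of tokenised documents (lists of terms)."""
    graph = nx.Graph()
    for tokens in docs:
        # connect terms that co-occur within a sliding window
        for i, term in enumerate(tokens):
            for other in tokens[i + 1:i + window]:
                if term != other:
                    weight = graph.get_edge_data(term, other, {"weight": 0})["weight"]
                    graph.add_edge(term, other, weight=weight + 1)

    # bias the random walk towards significant terms and named entities
    personalization = None
    if bias_terms:
        personalization = {t: (2.0 if t in bias_terms else 1.0) for t in graph}

    scores = nx.pagerank(graph, alpha=0.85, weight="weight",
                         personalization=personalization)

    # greedy diversity: skip candidates whose graph neighbourhoods overlap
    # too much (Jaccard similarity >= 0.5) with already selected keywords
    selected = []
    for term, _ in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        neighbours = set(graph[term])
        if all(len(neighbours & set(graph[s])) / max(len(neighbours | set(graph[s])), 1) < 0.5
               for s in selected):
            selected.append(term)
        if len(selected) == top_k:
            break
    return selected
```

    In this reading, the personalization vector plays the role of the biasing step: terms flagged as significant or as named entities receive extra random-walk mass, while the final loop trades raw score for topical coverage.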

    Text-to-picture tools, systems, and approaches: a survey

    Text-to-picture systems attempt to facilitate high-level, user-friendly communication between humans and computers while promoting understanding of natural language. These systems interpret a natural language text and transform it into a visual format, as pictures or images that are either static or dynamic. In this paper, we aim to identify current difficulties and the main problems faced by prior systems, and in particular, we seek to investigate the feasibility of automatic visualization of Arabic story text through multimedia. Hence, we analyzed a number of well-known text-to-picture systems, tools, and approaches. We showed their constituent steps, such as knowledge extraction, mapping, and image layout, as well as their performance and limitations. We also compared these systems based on a set of criteria, mainly natural language processing, natural language understanding, and input/output modalities. Our survey showed that currently emerging techniques in natural language processing tools and computer vision have made promising advances in analyzing general text and understanding images and videos. Furthermore, important remarks and findings have been deduced from these prior works, which would help in developing an effective text-to-picture system for learning and educational purposes. © 2019, The Author(s). This work was made possible by NPRP grant #10-0205-170346 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
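    As a rough illustration of the pipeline shared by the surveyed systems (knowledge extraction, mapping, and image layout), the sketch below wires the three steps together over a tiny hypothetical image index; it is not any particular system's method, and real systems replace each step with far richer models.

```python
# Minimal sketch of a text-to-picture pipeline: extract concepts, map them
# to pictures, then lay the pictures out. The index and the word-matching
# extraction are illustrative stand-ins for real NLP and image retrieval.
import re

IMAGE_INDEX = {            # hypothetical concept-to-image mapping
    "boy": "img/boy.png",
    "ball": "img/ball.png",
    "park": "img/park.png",
}

def extract_concepts(sentence):
    """Knowledge extraction (crudely simplified): keep words found in the index."""
    tokens = re.findall(r"[a-zA-Z]+", sentence.lower())
    return [t for t in tokens if t in IMAGE_INDEX]

def map_to_images(concepts):
    """Mapping step: resolve each extracted concept to a picture."""
    return [IMAGE_INDEX[c] for c in concepts]

def layout(images, canvas_width=800):
    """Image layout step: place pictures left to right on a fixed canvas."""
    step = canvas_width // max(len(images), 1)
    return [(img, i * step, 0) for i, img in enumerate(images)]

print(layout(map_to_images(extract_concepts("The boy plays with a ball in the park."))))
```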

    Pre-Service EFL Teachers’ Experiences in Teaching Practicum in Rural Schools in Indonesia

    A teaching practicum is officially offered to pre-service English as a Foreign Language (EFL) teachers, who are randomly assigned to either urban or rural schools. The study aims to describe and disseminate the obstacles experienced by those teaching English in rural schools during their Teaching Practicum Program (TPP). Seventeen pre-service teachers participated in the qualitative study. Interviews and observations were the main methods of data collection. The results reveal that the obstacles faced by the pre-service EFL teachers concerned classroom management, learning materials or resources, teaching aids or media, teaching methods, learners' English skills, choice of language use, slow internet connectivity, learners' motivation, evaluation techniques and parental support. The discussion includes implications for the need to reorganize future teaching practicums.

    Document Hyperlinking with Support Vector Machines

    Nowadays, access to information is provided through hyperlinks, which interconnect texts only when they share a relationship. Several researchers have studied how humans create hyperlinks and have tried to replicate this behaviour, specifically within the Wikipedia collection. The use of hyperlinks has been regarded as a promising resource for information retrieval, inspired by citation analysis in the literature (Merlino-Santesteban, 2003). According to Dreyfus (Dreyfus, 2003), hyperlinking follows no specific criteria and no hierarchies. Therefore, when everything can be linked indiscriminately, without obeying any particular purpose or meaning, the size of the network and the arbitrariness of its hyperlinks make it extremely difficult for a user to find exactly the kind of information they are looking for. In organisations, familiarity and trust have long been identified as the dimensions of information source credibility in advertising (Eric Haley, 1996). A hyperlink, as a form of information, can therefore have a greater impact when it is presented by a known target (Stewart & Zhang, 2003). Meanwhile, hyperlinks between websites can generate trust between the sender and the receiver of the link, so these interactions have positive reputation effects for the recipient (Stewart, 2006) (Lee, Lee, & Hwang, 2014). The study of documents through their hyperlinks is an important area of research in data mining; in a social network, the hyperlinks often carry a large amount of structural information, creating shared nodes within the community. Important applications of data mining methods to social networks include social recommendation based on the similar experiences of users (Alhajj & Rokne, 2014). In marketing and advertising, cascades in social networks are exploited and benefits are obtained from models of information propagation (Domingos & Richardson, 2001). Advertising companies are interested in quantifying the value of a single node in the network, given that its actions can trigger cascades to its neighbouring nodes. The results of (Allan, 1997) (Bellot et al., 2013) (Agosti, Crestani, & Melucci, 1997) (Blustein, Webber, & Tague-Sutcliffe, 1997) suggest that automated hyperlink discovery is not a solved problem, and that any evaluation of Wikipedia hyperlink discovery systems should be based on manual assessment rather than on the existing hyperlinks.
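    The abstract does not describe the classification setup, but one plausible reading of the title is to cast hyperlink discovery as binary classification of candidate anchor phrases with a Support Vector Machine. The sketch below uses scikit-learn with invented toy data and bag-of-words features purely for illustration; it is not the thesis's feature set.

```python
# Minimal sketch: hyperlink discovery as binary classification of candidate
# anchor phrases with a linear SVM. Training data and features are toy
# placeholders, not the work's actual corpus or representation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# hypothetical candidate phrases in context; 1 = should become a hyperlink
candidates = [
    "support vector machines are supervised models",
    "the weather was nice that day",
    "pagerank ranks nodes in a hyperlink graph",
    "he walked to the store yesterday",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(candidates, labels)

# predict whether a new candidate phrase deserves a hyperlink
print(model.predict(["wikipedia articles form a dense hyperlink network"]))
```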

    Mining Meaning from Wikipedia

    Wikipedia is a goldmine of information; not just for its many readers, but also for the growing community of researchers who recognize it as a resource of exceptional scale and utility. It represents a vast investment of manual effort and judgment: a huge, constantly evolving tapestry of concepts and relations that is being applied to a host of tasks. This article provides a comprehensive description of this work. It focuses on research that extracts and makes use of the concepts, relations, facts and descriptions found in Wikipedia, and organizes the work into four broad categories: applying Wikipedia to natural language processing; using it to facilitate information retrieval; using it for information extraction; and using it as a resource for ontology building. The article addresses how Wikipedia is being used as is, how it is being improved and adapted, and how it is being combined with other structures to create entirely new resources. We identify the research groups and individuals involved, and how their work has developed in the last few years. We provide a comprehensive list of the open-source software they have produced. Comment: An extensive survey of re-using information in Wikipedia in natural language processing, information retrieval and extraction, and ontology building. Accepted for publication in the International Journal of Human-Computer Studies.
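    As one concrete example of the kind of extraction this survey covers, the sketch below mines anchor-text statistics from wiki markup so that ambiguous surface phrases can be mapped to the articles (concepts) they most commonly link to. The tiny in-memory corpus is an illustrative stand-in for a full Wikipedia dump, not part of the surveyed work.

```python
# Minimal sketch: collect anchor-text statistics from wiki markup so that a
# surface phrase can be resolved to its most common target article
# ("commonness"). ARTICLES is a hypothetical stand-in for a full dump.
import re
from collections import Counter, defaultdict

ARTICLES = {
    "Big cat": "The [[Jaguar|jaguar]] is a large cat native to the [[Americas]].",
    "Luxury car": "[[Jaguar Cars|Jaguar]] is a British maker of [[luxury car]]s.",
    "Panthera": "The genus includes the [[Jaguar|jaguar]] and the [[Leopard|leopard]].",
}

LINK = re.compile(r"\[\[([^|\]]+)(?:\|([^\]]+))?\]\]")   # [[target]] or [[target|label]]

anchors = defaultdict(Counter)   # surface phrase -> Counter of target articles
for text in ARTICLES.values():
    for target, label in LINK.findall(text):
        anchors[(label or target).lower()][target] += 1

# the most common sense of an ambiguous phrase
print(anchors["jaguar"].most_common(1))
```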

    Applying Wikipedia to Interactive Information Retrieval

    There are many opportunities to improve the interactivity of information retrieval systems beyond the ubiquitous search box. One idea is to use knowledge bases (e.g. controlled vocabularies, classification schemes, thesauri and ontologies) to organize, describe and navigate the information space. These resources are popular in libraries and specialist collections, but have proven too expensive and narrow to be applied to everyday web-scale search. Wikipedia has the potential to bring structured knowledge into more widespread use. This online, collaboratively generated encyclopaedia is one of the largest and most consulted reference works in existence. It is broader, deeper and more agile than the knowledge bases put forward to assist retrieval in the past. Rendering this resource machine-readable is a challenging task that has captured the interest of many researchers. Many see it as a key step required to break the knowledge acquisition bottleneck that crippled previous efforts. This thesis claims that the roadblock can be sidestepped: Wikipedia can be applied effectively to open-domain information retrieval with minimal natural language processing or information extraction. The key is to focus on gathering and applying human-readable rather than machine-readable knowledge. To demonstrate this claim, the thesis tackles three separate problems: extracting knowledge from Wikipedia; connecting it to textual documents; and applying it to the retrieval process. First, we demonstrate that a large thesaurus-like structure can be obtained directly from Wikipedia, and that accurate measures of semantic relatedness can be efficiently mined from it. Second, we show that Wikipedia provides the necessary features and training data for existing data mining techniques to accurately detect and disambiguate topics when they are mentioned in plain text. Third, we provide two systems and user studies that demonstrate the utility of the Wikipedia-derived knowledge base for interactive information retrieval.
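    One widely used way to mine semantic relatedness from Wikipedia's link structure compares the sets of articles that link to two concepts, in the style of the normalised Google distance. The sketch below is an illustration with toy inlink sets; it is not necessarily the exact measure developed in the thesis.

```python
# Minimal sketch: link-based semantic relatedness between two Wikipedia
# articles, computed from their sets of incoming links. The inlink sets and
# article count below are toy values, not real Wikipedia data.
from math import log

def relatedness(inlinks_a, inlinks_b, total_articles):
    """Higher when two articles share a large fraction of their incoming links."""
    a, b = set(inlinks_a), set(inlinks_b)
    shared = a & b
    if not shared:
        return 0.0
    distance = (log(max(len(a), len(b))) - log(len(shared))) / (
        log(total_articles) - log(min(len(a), len(b)))
    )
    return max(0.0, 1.0 - distance)

# hypothetical inlink sets for two related articles
print(relatedness({"Zoo", "Africa", "Mammal", "Savanna"},
                  {"Africa", "Mammal", "Herbivore"},
                  total_articles=6_000_000))
```

    Because the measure needs only the link graph, it can be computed from a Wikipedia dump without any natural language processing, which matches the thesis's emphasis on applying human-readable knowledge with minimal machinery.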