10 research outputs found

    A Mobile-Health Information Access System

    Patients using the Mobile-Health Information System can send SMS requests to a Frequently Asked Questions (FAQ) web server with the expectation of receiving appropriate feedback on issues relating to their health. The accuracy of such feedback is paramount to the mobile search user. However, automating SMS-based information search and retrieval poses significant challenges because of the inherent noise in SMS communication. In this paper, first, an architecture is proposed for the implementation of the retrieval process, and second, an algorithm is developed for retrieving the best-ranked question-answer pair. We present an algorithm that assists in selecting the best FAQ query after ranking the question-answer pairs; results are generated based on the ranking of the FAQ query. Our algorithm gives better results in terms of average precision and recall when compared with the naïve retrieval algorithm.
    Southern Africa Telecommunication Networks and Applications Conference (SATNAC); Department of HE and Training approved lis
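    The paper's ranking algorithm is not reproduced in the abstract above; the following is a minimal sketch of ranking FAQ question-answer pairs against an SMS query, assuming a simple token-overlap (Jaccard) score. Function names and the toy FAQ entries are illustrative, not the authors'.

        # Minimal sketch of best-ranked FAQ question-answer retrieval for an SMS
        # query; the Jaccard token-overlap score is an illustrative assumption.

        def tokenize(text: str) -> set[str]:
            """Lowercase and strip punctuation before splitting into word tokens."""
            cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in text.lower())
            return set(cleaned.split())

        def rank_faq_pairs(sms_query, faq_pairs):
            """Score every (question, answer) pair against the query; best pair first."""
            query_tokens = tokenize(sms_query)
            scored = []
            for question, answer in faq_pairs:
                question_tokens = tokenize(question)
                union = query_tokens | question_tokens
                score = len(query_tokens & question_tokens) / len(union) if union else 0.0
                scored.append((score, question, answer))
            return sorted(scored, reverse=True)

        faqs = [("How is HIV transmitted?", "HIV is transmitted through ..."),
                ("How can I prevent malaria?", "Sleep under a treated net ...")]
        print(rank_faq_pairs("how is hiv transmitted", faqs)[0])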

    Text messaging and retrieval techniques for a mobile health information system

    Mobile phones have been identified as one of the technologies that can be used to overcome the challenges of disseminating information about serious diseases. Short message services, a widely used function of cell phones, can, for example, be turned into a major tool for accessing databases. This paper focuses on the design and development of a short message service-based information access algorithm to carefully screen information on human immunodeficiency virus/acquired immune deficiency syndrome within the context of a frequently asked questions system. However, automating short message service-based information search and retrieval poses significant challenges because of the inherent noise in such communications. The developed algorithm was used to retrieve the best-ranked question-answer pair. Results were evaluated using three metrics: average precision, recall and computational time. The retrieval efficacy was measured, and a significant improvement was confirmed in the results of the proposed algorithm when compared with similar retrieval algorithms.
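    The abstract names average precision, recall and computational time as evaluation metrics but not their exact formulation; the sketch below shows the standard definitions of the first two, assuming binary relevance judgements for each retrieved FAQ.

        # Standard ranked-retrieval metrics named above; the paper's exact
        # evaluation protocol is not reproduced here.

        def recall(retrieved: list[str], relevant: set[str]) -> float:
            hits = sum(1 for doc in retrieved if doc in relevant)
            return hits / len(relevant) if relevant else 0.0

        def average_precision(retrieved: list[str], relevant: set[str]) -> float:
            """Mean of precision values at each rank where a relevant FAQ appears."""
            hits, ap_sum = 0, 0.0
            for rank, doc in enumerate(retrieved, start=1):
                if doc in relevant:
                    hits += 1
                    ap_sum += hits / rank
            return ap_sum / len(relevant) if relevant else 0.0

        retrieved, relevant = ["faq7", "faq2", "faq9"], {"faq2", "faq9"}
        print(recall(retrieved, relevant), average_precision(retrieved, relevant))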

    A semi-automated FAQ retrieval system for HIV/AIDS

    This thesis describes a semi-automated FAQ retrieval system that can be queried by users through short text messages on low-end mobile phones to provide answers to HIV/AIDS-related queries. First, we address the issue of result presentation on low-end mobile phones by proposing an iterative interaction retrieval strategy in which the user engages with the FAQ retrieval system in the question answering process. At each iteration, the system returns only one question-answer pair to the user, and the iterative process terminates once the user's information need has been satisfied. Since the proposed system is iterative, this thesis attempts to reduce the number of iterations (search length) between the users and the system so that users do not abandon the search process before their information need has been satisfied. We conducted a user study to determine the number of iterations that users are willing to tolerate before abandoning the iterative search process, and we subsequently used the bad abandonment statistics from this study to develop an evaluation measure for estimating the probability that any random user will be satisfied when using our FAQ retrieval system.
    In addition, we used a query log and its click-through data to address three main FAQ document collection deficiency problems in order to improve the retrieval performance and the probability that any random user will be satisfied when using our FAQ retrieval system. Conclusions are drawn on whether we can reduce the rate at which users abandon their search before their information need has been satisfied by using information from previous searches to: address the term mismatch problem between the users' SMS queries and the relevant FAQ documents in the collection; selectively rank the FAQ documents according to how often they have previously been identified as relevant by users for a particular query term; and identify those queries that do not have a relevant FAQ document in the collection.
    In particular, we propose a novel template-based approach that uses queries from a query log, for which the true relevant FAQ documents are known, to enrich the FAQ documents with additional terms in order to alleviate the term mismatch problem. These terms are added as a separate field in a field-based model using two proposed enrichment strategies, namely the Term Frequency and the Term Occurrence strategies. This thesis thoroughly investigates the effectiveness of these FAQ document enrichment strategies using three different field-based models. Our findings suggest that we can improve the overall recall and the probability that any random user will be satisfied by enriching the FAQ documents with additional terms from queries in our query log. Moreover, our investigation suggests that it is important to use an FAQ document enrichment strategy that takes into consideration the number of times a term occurs in the query. We subsequently show that our proposed enrichment approach for alleviating the term mismatch problem generalises well to other datasets.
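    The abstract does not give the enrichment models themselves; the following is a minimal sketch of what the two named strategies could look like, assuming the extra terms come from logged queries whose relevant FAQ document is known and are stored in a separate field. The field name and function names are illustrative, not taken from the thesis.

        # Minimal sketch of the two FAQ enrichment strategies named above:
        # Term Frequency (enrichment terms keep their query-log frequencies) and
        # Term Occurrence (each term is added once, regardless of frequency).
        # The 'enriched' field representation is an assumption for illustration.
        from collections import Counter

        def enrich_term_frequency(faq_doc: dict, logged_queries: list[str]) -> dict:
            counts = Counter(t for q in logged_queries for t in q.lower().split())
            faq_doc["enriched"] = dict(counts)      # e.g. {"hiv": 2, "window": 2}
            return faq_doc

        def enrich_term_occurrence(faq_doc: dict, logged_queries: list[str]) -> dict:
            terms = {t for q in logged_queries for t in q.lower().split()}
            faq_doc["enriched"] = dict.fromkeys(terms, 1)
            return faq_doc

        faq = {"question": "What is the window period?", "answer": "..."}
        queries_with_known_relevance = ["window period hiv", "hiv test window"]
        print(enrich_term_frequency(dict(faq), queries_with_known_relevance)["enriched"])
        print(enrich_term_occurrence(dict(faq), queries_with_known_relevance)["enriched"])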
    Through the evaluation of our proposed approach for selectively ranking the FAQ documents, we show that we can improve the retrieval performance and the probability that any random user will be satisfied when using our FAQ retrieval system by incorporating the click popularity score of a query term t on an FAQ document d into the scoring and ranking process. Our results generalised well to a new dataset. However, when we deployed the click popularity score of a query term t on an FAQ document d on an enriched FAQ document collection, we observed a decrease in the retrieval performance and the probability that any random user will be satisfied when using our FAQ retrieval system.
    Furthermore, we used our query log to build a binary classifier for detecting those queries that do not have a relevant FAQ document in the collection (Missing Content Queries, MCQs). Before building such a classifier, we empirically evaluated several feature sets in order to determine the combination of features that yields the best classification accuracy in separating MCQs from non-MCQs. Using a different dataset, we show that we can improve the overall retrieval performance and the probability that any random user will be satisfied when using our FAQ retrieval system by deploying an MCQ detection subsystem in our FAQ retrieval system to filter out the MCQs.
    Finally, this thesis demonstrates that correcting spelling errors can help improve the retrieval performance and the probability that any random user will be satisfied when using our FAQ retrieval system. We tested our FAQ retrieval system with two different testing sets, one containing the original SMS queries and the other containing the SMS queries that were manually corrected for spelling errors. Our results show a significant improvement in the retrieval performance and the probability that any random user will be satisfied when using our FAQ retrieval system.
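    The feature sets and classifier used for MCQ detection are not specified in the abstract; the sketch below is one plausible setup, assuming scikit-learn's logistic regression over a few simple retrieval-score features. The features (top score, score gap, query length) and the training examples are assumptions for illustration.

        # Hypothetical Missing Content Query (MCQ) detector: a binary classifier
        # over simple retrieval features. These features are illustrative
        # assumptions, not the feature sets evaluated in the thesis.
        from sklearn.linear_model import LogisticRegression

        def mcq_features(query: str, ranked_scores: list[float]) -> list[float]:
            top = ranked_scores[0] if ranked_scores else 0.0
            second = ranked_scores[1] if len(ranked_scores) > 1 else 0.0
            return [top, top - second, float(len(query.split()))]

        # Label 1 = MCQ (no relevant FAQ exists in the collection), 0 = non-MCQ.
        X = [mcq_features("hw do i get arvs", [0.82, 0.40]),
             mcq_features("weather in gaborone today", [0.11, 0.10])]
        y = [0, 1]
        classifier = LogisticRegression().fit(X, y)

        def is_missing_content(query: str, ranked_scores: list[float]) -> bool:
            """True if the query should be filtered out as missing content."""
            return bool(classifier.predict([mcq_features(query, ranked_scores)])[0])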

    Short message service normalization for communication with a health information system

    Philosophiae Doctor - PhD. Short Message Service (SMS) is one of the most popular services for communication between mobile phone users. In recent times it has also been proposed as a means of information access. However, there are several challenges to be overcome in order to process an SMS, especially when it is used as a query in an information retrieval system. SMS users often deliberately use compacted and grammatically incorrect writing that makes the message difficult to process with conventional information retrieval systems. To overcome this, a pre-processing step known as normalization is required. In this thesis an investigation of SMS normalization algorithms is carried out. To this end, studies have been conducted into the design of algorithms for translating and normalizing SMS text. Character-based, unsupervised and rule-based techniques are presented. An investigation was also undertaken into the design and development of a system for information access via SMS. A specific system was designed to access information related to a Frequently Asked Questions (FAQ) database in healthcare, using a case study. The study also addresses securing SMS communication, especially for healthcare information systems; the proposed technique is to encipher the messages using the Secure Shell (SSH) protocol.
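    The character-based, unsupervised and rule-based normalizers are not detailed in the abstract; below is a minimal rule-based sketch, assuming a lookup table of common SMS shortcuts and a rule that collapses runs of repeated letters. The table and rules are illustrative assumptions, not the algorithms developed in the thesis.

        # Minimal rule-based SMS normalization sketch: expand common shortcuts
        # via a lookup table and collapse letter runs of length three or more.
        import re

        SHORTCUTS = {"u": "you", "hw": "how", "r": "are", "pls": "please",
                     "2moro": "tomorrow", "gud": "good", "dis": "this"}

        def normalize_sms(message: str) -> str:
            normalized = []
            for word in message.lower().split():
                word = re.sub(r"(.)\1{2,}", r"\1", word)   # "pleeease" -> "please"
                normalized.append(SHORTCUTS.get(word, word))
            return " ".join(normalized)

        print(normalize_sms("pls hw do u take dis medicine"))
        # -> "please how do you take this medicine"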

    Photosynthetic Activity and Survival of Foliage During Winter for Two Bunchgrass Species in a Cold-winter Steppe Environment

    The paper describes an SMS-based FAQ retrieval system. The goal of this task is to find a question Q* from a corpus of FAQs (Frequently Asked Questions) that best answers or matches the SMS query S. The test corpus used in this paper contained FAQs in three languages: English, Hindi and Malayalam. The FAQs were from several domains, including railway enquiry, telecom, health and banking. We first checked the SMS using the Bing spell-checker. Then we used the unigram matching, bigram matching, and 1-skip bigram matching modules for monolingual FAQ retrieval. For the cross-lingual system, we used the following three modules: an SMS-to-English query translation system, an English-to-Hindi translation system, and a cross-lingual FAQ retrieval module.
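    The matching modules' exact scoring and combination weights are not given in the abstract; a minimal sketch of unigram, bigram and 1-skip bigram matching between a spell-checked SMS query and a candidate FAQ question follows. The equal combination weights are an illustrative assumption.

        # Sketch of unigram, bigram and 1-skip bigram matching between an SMS
        # query and a candidate FAQ question; equal weights are assumptions.

        def unigrams(tokens):
            return set(tokens)

        def bigrams(tokens):
            return set(zip(tokens, tokens[1:]))

        def skip_bigrams(tokens):
            # pairs of tokens separated by exactly one intervening token
            return set(zip(tokens, tokens[2:]))

        def overlap(a, b):
            return len(a & b) / len(a | b) if a | b else 0.0

        def match_score(query: str, faq_question: str) -> float:
            q, f = query.lower().split(), faq_question.lower().split()
            return (overlap(unigrams(q), unigrams(f))
                    + overlap(bigrams(q), bigrams(f))
                    + overlap(skip_bigrams(q), skip_bigrams(f))) / 3.0

        print(match_score("train ticket refund rules",
                          "what are the rules for train ticket refunds"))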

    Concept Mapping Strategy For Academic Writing Tutorial In Open And Distant Learning Higher Institution

    Universitas Terbuka (UT), an open and distance higher education institution in Indonesia, conducts an in-service teacher education program. In order to complete the program, the students, most of whom are teachers, have to submit a final academic paper. In practice, most UT students have difficulty writing this academic paper, and although UT offers an academic writing course to address the problem, most students still view academic writing as a difficult assignment to complete. UT therefore has to find an appropriate instructional strategy that can help students complete the academic writing assignment. One instructional strategy that can be selected to address these academic writing problems is concept mapping. The aim of this study is to elaborate the implementation of concept mapping as an instructional strategy to help open and distance learning students complete academic writing assignments. A design-based research approach was applied to measure the effectiveness of the concept mapping strategy in helping students gain academic writing skills. The steps of the research and development model from Borg, Gall and Gall, which consist of instructional design and development phases, were implemented in this study. The results of this study indicate that students were supported by, and enjoyed, the process of academic writing when using the concept mapping strategy.

    ICT-oriented Strategic Extension for Responsible Fisheries Management

    The Course Manual was developed as part of the ICAR-funded Short Course on “ICT-oriented Strategic Extension for Responsible Fisheries Management” held at the Central Marine Fisheries Research Institute, Cochin, during 05-25 November 2013.

    Localización e internacionalización de software: puntos de encuentro entre el localizador y el programador

    [EN] The localization process has always been regarded as a black box that functions on its own, with a team of project managers and translators who never get involved in software development methodologies or technologies.
    This is mainly due to the way in which the software localization industry has developed. In many cases, and for many of the main software publishers and platform developers, localization is a peripheral process that is only invoked when text strings need translation. Including localization late in the development process not only brings enormous problems when translating strings; on many occasions it also makes it impossible to launch the product in other markets whose languages and cultures do not fit within the features that were developed and included in the software. This dissertation begins with a historical account of the development of computers and describes how the three main programming platforms (mainframes, minicomputers, and personal computers) came into being. We shall see that, as hardware developed and new features were added, a variety of programming languages and strategies for developing software emerged. As the features, application, and use of hardware further expanded, the need for organizing strategies and processes for creating programs became the basis for formulating and establishing software development strategies and methodologies. Along with these developments, the introduction of personal computers eventually promoted the need for creating products that serve not only markets in the United States, but other markets that communicate in different languages and exhibit particular needs of their own. Thus, the localization industry was born, and it is at this point that translation and computers began to come together and interact. Up until now, research regarding software localization has had programmers as a starting point. These programmers have researched what multilingual programs are and what needs to be done to create them. Our proposal comes from the other "side," from the localizer's point of view; a localizer who approaches programming as an expert in languages and intercultural mediation. This expert knows the problems within localization's black box, but is also prepared to participate in the processes that take place before translation, the processes carried out in order to create new software. These "internationalizers," as we refer to them, have all the necessary knowledge and skills to become part of a software development team from the beginning all the way through to the final stages. Their knowledge and presence will help integrate into this process the requirements that allow for smooth software localization when the decision is made to launch the application into other markets that have diverse linguistic, legal, and cultural needs. In this century, shaped mainly by communication, it is unthinkable to develop software that attends to the needs of only a single market. The internationalizer can help the software development team to accomplish this.
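    The dissertation abstract contains no code; as a minimal illustration of the kind of internationalization requirement it argues should be planned from the start, the sketch below externalizes user-facing strings into per-locale message catalogs instead of hard-coding them. The catalog contents and function name are illustrative assumptions, not material from the dissertation.

        # Illustrative internationalization sketch: user-facing strings live in
        # per-locale catalogs, so adding a market requires a new catalog rather
        # than changes to program logic. Catalog contents are assumptions.

        MESSAGES = {
            "en-US": {"greeting": "Welcome, {name}!"},
            "es-PR": {"greeting": "¡Bienvenido, {name}!"},
        }

        def translate(locale: str, key: str, **values: object) -> str:
            """Look up a message for the locale, falling back to en-US."""
            catalog = MESSAGES.get(locale, MESSAGES["en-US"])
            template = catalog.get(key, MESSAGES["en-US"][key])
            return template.format(**values)

        print(translate("es-PR", "greeting", name="Ana"))   # ¡Bienvenido, Ana!
        print(translate("fr-FR", "greeting", name="Lou"))   # falls back to English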