405 research outputs found

    Open- vs. Restricted-Domain QA Systems in the Biomedical Field

    Question Answering Systems (hereinafter QA systems) stand as a new alternative to Information Retrieval Systems. We conducted a study to evaluate the efficiency of QA systems as terminological sources for physicians, specialized translators, and users in general. To this end we analysed the performance of two open-domain and two restricted-domain QA systems. The research drew on a collection of one hundred and fifty definitional questions from the medical website WebMD. We studied the sources that the QA systems used to retrieve the answers, and then applied a range of evaluation measures to rate the quality of the answers. By analysing the results obtained by asking the 150 questions in the QA systems MedQA, START, QuALiM and HONqa, it was possible to evaluate the systems' operation through specific metrics. Despite the limitations these systems showed, as they are not accessible to everyone and are not always fully developed, it was confirmed that the four QA systems are valid and useful for obtaining definitional medical information in that they offer coherent and precise answers. The results are encouraging because they present this type of tool as a new possibility for gathering precise, reliable and specific information in a short period of time.


    Multilingual Question-Answering System in Biomedical Domain on the Web: An Evaluation

    Question-answering systems (QAS) are presented as an alternative to traditional information retrieval systems, intended to offer precise responses to factual questions. We analysed the results offered by HONqa, a multilingual biomedical QA system available on the Web. The study used a set of 120 biomedical definitional questions (What is...?), taken from the medical website WebMD, formulated in English, French, and Italian. The answers were analysed using a series of specific measures (MRR, TRR, FHS, precision, MAP). The study confirms that for all the languages analysed the system's effectiveness needs to be improved, although in the multilingual context examined, questions in English achieved better results for retrieving definitional information than those in French and Italian.
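    To make the measures named in the abstract concrete, here is a minimal sketch (not the study's own code) of Mean Reciprocal Rank (MRR): each question contributes the reciprocal of the rank of its first correct answer, or zero when no answer is correct.

    ```python
    # Illustrative sketch of MRR, one of the measures listed in the abstract
    # (MRR, TRR, FHS, precision, MAP). Function name and inputs are
    # hypothetical, chosen for clarity.

    def mean_reciprocal_rank(first_correct_ranks):
        """first_correct_ranks: per question, the 1-based rank of the first
        correct answer, or None when no returned answer was correct."""
        scores = [1.0 / r if r is not None else 0.0 for r in first_correct_ranks]
        return sum(scores) / len(scores)

    # Example: first correct answers at ranks 1 and 3; the third question
    # received no correct answer at all.
    print(mean_reciprocal_rank([1, 3, None]))  # → 0.444... ((1 + 1/3 + 0) / 3)
    ```

    TRR (Total Reciprocal Rank) extends this idea by summing the reciprocal ranks of all correct answers per question, rewarding systems that return several good answers.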


    A comparative analysis of 21 literature search engines

    With the increasing number of bibliographic search tools, scientists and health professionals either make a subjective choice of tool(s) that could suit their needs or face the challenge of analyzing multiple features of a plethora of search programs. There is an urgent need for a thorough comparative analysis of the available bio-literature scanning tools from the user's perspective. We report the results of the first semi-quantitative comparison of 21 programs that can search published (partial or full-text) documents in life science areas. The observations can assist life science researchers and medical professionals in making an informed selection among the programs, depending on their search objectives. 
Some of the important findings are: 
1. Most of the hits obtained from Scopus, ReleMed, EBIMed, CiteXplore, and HighWire Press were usually relevant (i.e. these tools showed better precision than the others). 
2. However, a very high number of relevant citations were retrieved by HighWire Press, Google Scholar, CiteXplore and PubMed Central (they had better recall). 
3. HighWire Press (HWP) and CiteXplore seemed to have a good balance of precision and recall. 
4. PubMed Central, PubMed and Scopus provided the most useful query systems. 
5. GoPubMed, BioAsk, EBIMed and ClusterMed could be more useful among the tools that can automatically process the retrieved citations for further scanning of bio-entities such as proteins, diseases, tissues, molecular interactions, etc. 
The authors suggest the use of PubMed, Scopus, Google Scholar and HighWire Press for better coverage, and GoPubMed to view the hits categorized based on MeSH and Gene Ontology terms. The article is relevant to all life science subjects.
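The precision/recall trade-off that separates findings 1 and 2 above can be sketched as follows (a minimal illustration with invented numbers, not data from the article):

```python
# Hedged sketch of the two measures behind the comparison: precision rewards
# tools whose hits are mostly relevant; recall rewards tools that find most
# of the relevant documents that exist. Numbers below are invented examples.

def precision(relevant_retrieved, total_retrieved):
    # Fraction of retrieved citations that are relevant.
    return relevant_retrieved / total_retrieved

def recall(relevant_retrieved, total_relevant):
    # Fraction of all existing relevant citations that were retrieved.
    return relevant_retrieved / total_relevant

# A tool returns 40 hits, 30 of them relevant, out of 60 relevant
# documents in the collection: high precision, moderate recall.
print(precision(30, 40))  # → 0.75
print(recall(30, 60))     # → 0.5
```

A tool with "a good balance" (finding 3) is one where neither measure is sacrificed for the other.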

    Contextual question answering for the health domain

    Studies have shown that natural language interfaces such as question answering and conversational systems allow information to be accessed and understood more easily by users who are unfamiliar with the nuances of the delivery mechanisms (e.g., keyword-based search engines) or have limited literacy in certain domains (e.g., unable to comprehend health-related content due to a terminology barrier). In particular, the increasing use of the web for health information prompts us to reexamine our existing delivery mechanisms. We present enquireMe, a contextual question answering system that allows lay users to obtain responses about a wide range of health topics by vaguely expressing their information needs at the start and gradually refining them over the course of an interaction session using natural language. enquireMe allows users to engage in 'conversations' about their health concerns, a process that can be therapeutic in itself. The system uses community-driven question-answer pairs from the web together with a decay model to deliver the top-scoring answers as responses to the users' unrestricted inputs. We evaluated enquireMe using benchmark data from WebMD and TREC to assess the accuracy of system-generated answers. Despite the absence of complex knowledge acquisition and deep language processing, enquireMe is comparable to state-of-the-art question answering systems such as START, as well as the interactive systems from TREC.

    Evaluation of open-domain versus specialized-domain QA systems in the biomedical field

    QA systems are presented as an alternative to traditional information retrieval systems, attempting to offer precise answers to factual questions. We conducted a study to evaluate the efficiency of these systems as terminological sources for specialists and for users in general. To this end, we evaluated the performance of four QA systems: two specialized in the biomedical domain (MedQA and HONqa) and two general-domain systems (START and QuALiM). The study used a collection of 150 biomedical definitional questions (What is...?), obtained from the medical website WebMD. To determine performance, the answers offered were evaluated using a series of specific measures (precision, MRR, TRR, FHS). The study confirms that all four systems are useful for retrieving definitional information in this field, since they provided coherent and precise answers with an adequate degree of acceptability. Research group: Acceso y evaluación de la información científica (HUM466).

    Comparative evaluation of web search engines in health information retrieval

    Purpose: With this work we intend to evaluate several generalist and health-specific search engines on the retrieval of health information by consumers. We compare the retrieval effectiveness of these engines across different types of clinical queries, medical specialties and condition severities. Finally, we compare evaluation metrics for binary relevance scales with metrics for graded ones. Design/methodology/approach: We conducted a user study in which users evaluated the relevance of documents retrieved by 4 search engines for 2 different health information needs. Users could choose between generalist (Bing, Google, Sapo and Yahoo!) and health-specific search engines (MedlinePlus, SapoSaúde and WebMD). We then analyse the differences between search engines and groups of information needs with six different measures: graded average precision (gap), average precision (ap), gap@5, gap@10, ap@5 and ap@10. Findings: Results show that generalist web search engines surpass the precision of health-specific engines. Google has the best performance, mainly in the top-10 results. We found that information needs associated with severe conditions are associated with higher precision, as are overview and psychiatry questions. Originality/value: Our study is one of the first to use a recently proposed measure to evaluate the effectiveness of retrieval systems with graded relevance scales. It includes tasks from several medical specialties, types of clinical questions and different levels of severity, which, to the best of our knowledge, has not been done before. It is also a study in which users have a large involvement in the experiment. The results are useful for understanding how search engines differ in their responses to health information needs, for informing about what types of online health information are more common on the Web, and for inferring ways to improve this type of search. 
Keywords: Evaluation, Health information retrieval, User study, Graded relevance, Web search engines. Paper type: Research paper. 
INTRODUCTION: Patients and their family and friends, commonly designated health consumers, are increasingly using the Web to search for health information. This study evaluates the performance of 4 generalist search engines (Google, Bing, Yahoo! and Sapo) and 3 health-specific search engines (MedlinePlus, WebMD and SapoSaúde). The evaluation is based on data collected in a user study with undergraduate students and work tasks defined according to the framework proposed by Borlun
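Of the six measures listed in the abstract, the binary-relevance cutoff measures (ap@5, ap@10) can be illustrated with a short sketch. This is one common formulation, not the authors' code; the function name and example judgments are invented:

```python
# Illustrative sketch of average precision at cutoff k (ap@k), used with
# binary relevance judgments. The graded variants (gap@k) generalize this
# by replacing 0/1 judgments with graded relevance scores.

def average_precision_at_k(relevance, k):
    """relevance: 0/1 judgments for the ranked result list, top first."""
    top = relevance[:k]
    hits, precision_sum = 0, 0.0
    for rank, rel in enumerate(top, start=1):
        if rel:
            hits += 1
            # Precision at this rank, accumulated only at relevant results.
            precision_sum += hits / rank
    return precision_sum / hits if hits else 0.0

# Relevant results at ranks 1, 3 and 4 of the top 5:
print(average_precision_at_k([1, 0, 1, 1, 0], 5))  # → ≈ 0.806
```

Comparing ap@5 with ap@10, as the study does, shows whether an engine concentrates its relevant results at the very top of the ranking or spreads them further down.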