
    Reflecting on the past and the present with temporal graph-based models

    Self-adaptive systems (SAS) need to reflect on the current environmental conditions and on their past and current behaviour to support decision making. Decisions may have different effects depending on the context. On the one hand, some adaptations may have run into difficulties. On the other hand, users or operators may want to know why the system evolved in a certain direction, or simply why it is showing a given behaviour or has made a particular decision, as that behaviour may be surprising or unexpected. We argue that answering such questions requires storing execution trace models in a way that allows travelling back and forth in time, qualifying the decision making against the available evidence. In this paper, we propose temporal graph databases as a useful representation for trace models to support self-explanation, interactive diagnosis and forensic analysis. We define a generic meta-model for structuring execution traces of SAS and show how a sequence of traces can be turned into a temporal graph model. We present a first version of a query language for these temporal graphs through a case study, and outline potential applications for forensic analysis (after the system has terminated in a potentially abnormal way), self-explanation, and interactive diagnosis at runtime.
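    The abstract does not reproduce the meta-model or the query language itself; as a rough illustration of the underlying idea, the sketch below stores trace events as time-stamped edges of a temporal graph and reconstructs the graph as it was at a given instant ("travelling back in time"). All names and the interval representation are assumptions for illustration, not the paper's notation.

```python
# Minimal sketch of a temporal trace graph, assuming edges annotated with
# [valid_from, valid_to) intervals; an illustration only, not the meta-model
# or query language proposed in the paper.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TemporalEdge:
    src: str
    dst: str
    label: str
    valid_from: int
    valid_to: Optional[int] = None  # None = still valid

@dataclass
class TemporalTraceGraph:
    edges: list[TemporalEdge] = field(default_factory=list)

    def record(self, src: str, dst: str, label: str, t: int) -> None:
        """Append a trace event (e.g. 'decision -> adaptation') observed at time t."""
        self.edges.append(TemporalEdge(src, dst, label, t))

    def at(self, t: int) -> list[TemporalEdge]:
        """Travel back in time: return the edges that were valid at time t."""
        return [e for e in self.edges
                if e.valid_from <= t and (e.valid_to is None or t < e.valid_to)]

# Usage: reconstruct what the system "knew" when a decision was made.
g = TemporalTraceGraph()
g.record("sensor:load", "decision:scale_out", "triggered", t=10)
g.record("decision:scale_out", "adaptation:add_replica", "executed", t=12)
print(g.at(11))  # only the trigger is visible at t=11
```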

    Using ontology in query answering systems: Scenarios, requirements and challenges

    Equipped with the ultimate query answering system, computers would finally be in a position to address all our information needs in a natural way. In this paper, we describe how Language and Computing nv (L&C), a developer of ontology-based natural language understanding systems for the healthcare domain, is working towards the ultimate Question Answering (QA) system for healthcare workers. L&C’s strategy in this area is to design, in a step-by-step fashion, the essential components of such a system, each component being designed to solve one part of the total problem while reflecting well-defined needs on the part of our customers. We compare our strategy with the research roadmap proposed by the Question Answering Committee of the National Institute of Standards and Technology (NIST), paying special attention to the role of ontology.

    Finding Structured and Unstructured Features to Improve the Search Result of Complex Question

    Search engines are increasingly challenged by natural language questions, some of which are complex: a complex question consists of several clauses, carries several intentions, or requires a long answer. In this work we propose that finding the structured and unstructured features of a question, and using both structured and unstructured data, can improve the search results for complex questions. Accordingly, we use two approaches: an IR approach with structured retrieval, and a QA template. Our framework consists of three parts: question analysis, resource discovery, and analysis of the relevant answer. In question analysis we apply a few assumptions and try to find the structured and unstructured features of the question; structured features refer to structured data and unstructured features refer to unstructured data. In resource discovery we integrate structured data (a relational database) and unstructured data (web pages) to take advantage of both kinds of data, and we select the best top-ranked fragments from the content of the web pages. In the relevant-answer part, we compute a matching score between the results from the structured and the unstructured data (see the sketch below), and finally use the QA template to reformulate the question. The experimental results show that using structured and unstructured features, using both structured and unstructured data, and combining the IR and QA-template approaches can improve the search results for complex questions.
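    As a rough illustration of the score-matching step described above, the sketch below merges candidate answers retrieved from a structured source (e.g. a relational database) and an unstructured source (e.g. web-page fragments) with a weighted sum. The weight and the per-source scores are illustrative assumptions, not the paper's actual model.

```python
# Hedged sketch: rank candidate answers by a weighted combination of the
# scores they received from structured and unstructured retrieval.
def combine_scores(structured_hits: dict[str, float],
                   unstructured_hits: dict[str, float],
                   alpha: float = 0.5) -> list[tuple[str, float]]:
    """Return candidate answers ranked by a weighted sum of both sources."""
    candidates = set(structured_hits) | set(unstructured_hits)
    ranked = [
        (c, alpha * structured_hits.get(c, 0.0)
            + (1 - alpha) * unstructured_hits.get(c, 0.0))
        for c in candidates
    ]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

# Example: a complex question yields overlapping candidates from both sources.
print(combine_scores({"answer A": 0.9, "answer B": 0.4},
                     {"answer B": 0.8, "answer C": 0.3}))
```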

    SQL Query Completion for Data Exploration

    Within the big data tsunami, relational databases and SQL are still there and remain mandatory in most cases for accessing data. On the one hand, SQL is easy to use by non-specialists and allows pertinent initial data to be identified at the very beginning of the data exploration process. On the other hand, it is not always so easy to formulate SQL queries: nowadays it is more and more frequent to have several databases available for one application domain, some of them with hundreds of tables and/or attributes. Identifying the pertinent conditions to select the desired data, or even identifying the relevant attributes, is far from trivial. To make it easier to write SQL queries, we propose the notion of SQL query completion: given a query, it suggests additional conditions to be added to its WHERE clause. This completion is semantic, as it relies on the data in the database, unlike current completion tools, which are mostly syntactic. Since the process can be repeated over and over again -- until the data analyst reaches her data of interest -- SQL query completion facilitates the exploration of databases. SQL query completion has been implemented in a SQL editor on top of a database management system. For the evaluation, two questions need to be studied: first, does the completion speed up the writing of SQL queries? Second, is the completion easily adopted by users? A thorough experiment was conducted on a group of 70 computer science students divided into two groups (one with the completion and the other without) to answer those questions. The results are positive and very promising.
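    The abstract only names the idea of data-driven (semantic) completion; as a rough sketch of what such a suggestion step could look like, the code below runs the current query and proposes frequent column = value pairs from its result as candidate WHERE conditions. The heuristic and all names are assumptions for illustration, not the tool evaluated in the paper.

```python
# Minimal sketch of semantic WHERE-clause completion: suggest predicates built
# from the most frequent (column, value) pairs in the current query's result.
import sqlite3
from collections import Counter

def suggest_conditions(conn: sqlite3.Connection, query: str, top_k: int = 3):
    cur = conn.execute(query)
    columns = [d[0] for d in cur.description]
    counts: Counter = Counter()
    for row in cur.fetchall():
        for col, val in zip(columns, row):
            counts[(col, val)] += 1
    # Most frequent (column, value) pairs become candidate predicates.
    return [f"{col} = {val!r}" for (col, val), _ in counts.most_common(top_k)]

# Usage on a toy table: the analyst refines the query by picking a suggestion.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE films(title TEXT, genre TEXT, year INT)")
conn.executemany("INSERT INTO films VALUES (?, ?, ?)",
                 [("A", "drama", 2001), ("B", "drama", 2001), ("C", "comedy", 1999)])
print(suggest_conditions(conn, "SELECT genre, year FROM films"))
```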

    Negative Statements Considered Useful

    Knowledge bases (KBs), pragmatic collections of knowledge about notable entities, are an important asset in applications such as search, question answering and dialogue. Rooted in a long tradition in knowledge representation, all popular KBs store only positive information, while they abstain from taking any stance towards statements not contained in them. In this paper, we make the case for explicitly stating interesting statements which are not true. Negative statements would be important to overcome current limitations of question answering, yet due to their potential abundance, any effort towards compiling them needs a tight coupling with ranking. We introduce two approaches towards compiling negative statements. (i) In peer-based statistical inference, we compare entities with highly related entities in order to derive potential negative statements, which we then rank using supervised and unsupervised features. (ii) In query-log-based text extraction, we use a pattern-based approach for harvesting search engine query logs. Experimental results show that both approaches hold promising and complementary potential. Along with this paper, we publish the first datasets on interesting negative information, containing over 1.1M statements for 100K popular Wikidata entities.
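    To make the peer-based idea concrete, the sketch below treats properties that most highly related peer entities have, but the target entity lacks, as candidate negative statements, ranked here simply by peer frequency. The property encoding and the ranking are illustrative assumptions, not the authors' supervised model.

```python
# Rough sketch of peer-based inference of candidate negative statements.
from collections import Counter

def candidate_negatives(entity_props: set[str],
                        peer_props: list[set[str]]) -> list[tuple[str, float]]:
    """Return (property, peer frequency) pairs that the target entity lacks."""
    counts = Counter(p for peers in peer_props for p in peers)
    n = len(peer_props)
    negatives = [(p, c / n) for p, c in counts.items() if p not in entity_props]
    return sorted(negatives, key=lambda pair: pair[1], reverse=True)

# Example: most physicist peers carry 'award:Nobel Prize'; the entity does not.
print(candidate_negatives({"occupation:physicist"},
                          [{"occupation:physicist", "award:Nobel Prize"},
                           {"occupation:physicist", "award:Nobel Prize"},
                           {"occupation:physicist"}]))
```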

    Barriers and Facilitators to Use of a Clinical Evidence Technology in the Management of Skin Problems in Primary Care: Insights from Mixed Methods

    Objective: Few studies have examined the impact of a single clinical evidence technology (CET) on provider practice or patient outcomes from the provider’s perspective. A previous cluster-randomized controlled trial with patient-reported data tested the effectiveness of a CET (i.e., VisualDx) in improving skin problem outcomes but found no significant effect. The objectives of this follow-up study were to identify barriers and facilitators to the use of the CET from the perspective of primary care providers (PCPs) and to identify reasons why the CET did not affect outcomes in the trial. Methods: Using a convergent mixed methods design, PCPs completed a post-trial survey and participated in interviews about using the CET for the management of patients’ skin problems. Data from both methods were integrated. Results: PCPs found the CET somewhat easy to use but only occasionally useful. Less experienced PCPs used the CET more frequently. Data from interviews revealed barriers and facilitators at four steps of evidence-based practice: clinical question recognition, information acquisition, appraisal of relevance, and application with patients. Facilitators included uncertainty in dermatology, intention for use, convenience of access, diagnosis and treatment support, and patient communication. Barriers included confidence in dermatology, preference for other sources, interface difficulties, presence of irrelevant information, and lack of decision impact. Conclusion: PCPs found the CET useful for diagnosis, treatment support, and patient communication. However, the barriers of interface difficulties, irrelevant search results, and preferred use of other sources limited its positive impact on patient skin problem management.

    The Database of Abstracts of Reviews of Effects (DARE)

    Systematic reviews are useful tools for busy decision-makers because they identify, appraise and synthesise the available research evidence on a particular topic. Many thousands of systematic reviews relevant to health care have been published. However, they can be difficult to locate and their quality is variable. DARE (the Database of Abstracts of Reviews of Effects) contains summaries of systematic reviews which have met strict quality criteria. Each summary also provides a critical commentary on the quality of the review. DARE covers a broad range of health care-related topics and can be used for answering questions about the effects of health care interventions, as well as for developing clinical guidelines and policy making. DARE is available free of charge on the internet (http://nhscrd.york.ac.uk) and as part of the Cochrane Library. Alternatively, DARE can be searched, on your behalf, by CRD information staff (tel: 01904 433707 or email [email protected]).

    Using a Logic Programming Framework to Control Database Query Dialogues in Natural Language

    We present a natural language question answering system to interface with the University of Évora databases, which uses clarification dialogues to disambiguate user questions. It was developed in an integrated logic programming framework, based on constraint logic programming using the GnuProlog(-cx) language [2,11] and the ISCO framework [1]. The use of this LP framework allows the integration of Prolog-like inference mechanisms with classes and inheritance, constraint solving algorithms, and connections to relational databases such as PostgreSQL. The system focuses on the pragmatic analysis of questions, to handle ambiguity, and on an efficient dialogue mechanism that is able to pose relevant questions to clarify the user's intentions in a straightforward manner. Proper noun resolution and the PP-attachment problem are also handled. This paper briefly presents this innovative system, focusing on its ability to correctly determine the user's intention through its dialogue capability.
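    As a rough illustration of the clarification-dialogue behaviour described above (shown here in Python rather than the paper's GnuProlog(-cx)/ISCO setting), the sketch below asks a clarifying question whenever pragmatic analysis leaves more than one database interpretation of the question. All names, the example question and the SQL fragments are illustrative assumptions only.

```python
# Hedged sketch: pose a clarification question when a natural language
# question maps to several candidate database interpretations.
def clarify(question: str, interpretations: list[dict]) -> str:
    if len(interpretations) == 1:
        return f"Answering: {interpretations[0]['sql']}"
    options = " or ".join(i["description"] for i in interpretations)
    return f"For '{question}', do you mean {options}?"

# Example: 'courses of Silva' is ambiguous between teacher and student readings.
print(clarify("courses of Silva",
              [{"description": "courses taught by Silva",
                "sql": "SELECT ... WHERE teacher = 'Silva'"},
               {"description": "courses attended by Silva",
                "sql": "SELECT ... WHERE student = 'Silva'"}]))
```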