
    A survey on the use of relevance feedback for information access systems

    Users of online search engines often find it difficult to express their need for information in the form of a query. However, if the user can identify examples of the kind of documents they require, then they can employ a technique known as relevance feedback. Relevance feedback covers a range of techniques intended to improve a user's query and facilitate retrieval of information relevant to a user's information need. In this paper we survey relevance feedback techniques. We study both automatic techniques, in which the system modifies the user's query, and interactive techniques, in which the user has control over query modification. We also consider specific interfaces to relevance feedback systems and characteristics of searchers that can affect the use and success of relevance feedback systems.
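    The best-known automatic technique of the kind described here is Rocchio's algorithm, which moves the query vector toward judged-relevant documents and away from judged-non-relevant ones. A minimal sketch in a vector-space setting (the weights and the clipping convention are standard defaults, not taken from this survey):

    ```python
    import numpy as np

    def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
        """Rocchio query modification: shift the query vector toward the
        centroid of relevant documents and away from the centroid of
        non-relevant ones (all vectors share one term space)."""
        q = alpha * np.asarray(query, dtype=float)
        if len(relevant) > 0:
            q += beta * np.mean(np.asarray(relevant, dtype=float), axis=0)
        if len(nonrelevant) > 0:
            q -= gamma * np.mean(np.asarray(nonrelevant, dtype=float), axis=0)
        # negative term weights are conventionally clipped to zero
        return np.maximum(q, 0.0)
    ```

    With the default weights, positive feedback dominates negative feedback, which matches the common observation that relevant examples are more informative than non-relevant ones.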

    PPP - personalized plan-based presenter


    In defense of compilation: A response to Davis' form and content in model-based reasoning

    In a recent paper entitled 'Form and Content in Model-Based Reasoning', Randy Davis argues that model-based reasoning research aimed at compiling task-specific rules from underlying device models is mislabeled, misguided, and diversionary. Some of Davis' claims are examined, and his basic conclusions about the value of compilation research to the model-based reasoning community are challenged. In particular, Davis' claim that model-based reasoning is exempt from the efficiency benefits provided by knowledge compilation techniques is refuted. In addition, several misconceptions about the role of representational form in compilation are clarified. It is concluded that compilation techniques have the potential to make a substantial contribution to solving tractability problems in model-based reasoning.

    A revised terminology for vegetated rooftops based on function and vegetation

    The proliferation of vegetated, or green, roofs warrants a revisit of the terminology used, in order to convey information efficiently and without confusion among scientists, policy makers and practitioners. A Web of Science and Google Scholar search (from 1996 to 2018) showed a steady increase in green roof articles, reaching close to 300 per year in WOS and ca. 2500 in Google Scholar, with approximately 10-20%, and up to 40%, of all articles using the terms extensive and/or intensive, especially in recent years. We evaluated the use of these terms, including 'green roof' and 'intensive and extensive roof', found that they are used in confusing ways, and provide compelling evidence that there is a need for revising the terminology. Acknowledging that most, if not all, vegetated roofs are multifunctional, we propose a new classification system based on the roof's primary function(s) and vegetation, such as "stormwater meadow roof", "biodiversity meadow roof", "biodiversity forest roof", or even "multifunctional meadow roof". This new terminological sphere is not meant to be rigid, but should be allowed to evolve so that useful combinations survive the scrutiny of academia and practitioners, while less useful ones go extinct. A clear and standardized terminology will serve to avoid confusion, allow for generalizations and aid in the development of this rapidly expanding field.

    DFKI publications : the first four years ; 1990 - 1993


    Reducing Global Environmental Uncertainties in Reports of Tropical Forest Carbon Fluxes to REDD+ and the Paris Agreement Global Stocktake

    The magnitude of net carbon dioxide emissions resulting from global forest carbon change, and hence the contribution of forests to global climate change, is highly uncertain, owing to the lack of direct measurement by Earth observation and ground data collection. This paper uses a new method to evaluate this uncertainty with greater precision than before. Sources of uncertainty are divided into conceptualization and measurement categories and distributed between the spatial, vertical and temporal dimensions of Earth observation. The method is applied to Forest Reference Emission Level (FREL) reports and National Greenhouse Gas Inventories (NGGIs) submitted to the UN Framework Convention on Climate Change (UNFCCC) by 12 countries containing half of tropical forest area. The two sets of estimates are typical of those to be submitted to the Reducing Emissions from Deforestation and Degradation (REDD+) mechanism of the UNFCCC and the 2023 Global Stocktake of its Paris Agreement, respectively. Assembling the Uncertainty Fingerprint of each estimate shows that Uncertainty Scores are between 10 and 14 for the NGGIs and between 5 and 10 for the FREL reports; both therefore exceed the threshold of 2, above which it is advisable to evaluate uncertainty by standard statistical methods. Conceptualization uncertainties account for 60% of all uncertainties in the NGGIs and 47% in the FREL reports; e.g., there is incomplete coverage of forest carbon fluxes, and limited disaggregation of fluxes between different ecosystem types and forest carbon pools. Of the measurement uncertainties, all FREL reports base forest area estimates on at least medium-resolution satellite data, compared with only 3 NGGIs; after REDD+ Readiness schemes, mean area mapping frequency has fallen to 2.3 years in Latin America and 3.0 years in Asia, but only 8.3 years in Africa; and carbon density estimates are based on national forest inventory data in all FREL reports but only 4 NGGIs. The effectiveness of the Global Stocktake and REDD+ monitoring will therefore be constrained by considerable uncertainties, and reducing these requires a new phase of REDD+ Readiness to ensure more frequent national forest inventories and forest carbon mapping.

    Empowering Knowledge Bases: a Machine Learning Perspective

    The construction of Knowledge Bases quite often requires the intervention of knowledge engineers and domain experts, resulting in a time-consuming task. Alternative approaches have been developed for building knowledge bases from existing sources of information such as web pages and crowdsourcing; seminal examples are NELL, DBpedia, YAGO and several others. With the goal of building very large sources of knowledge, as recently in the case of Knowledge Graphs, even more complex integration processes have been set up, involving multiple sources of information, human expert intervention, and crowdsourcing. Despite significant efforts for making Knowledge Graphs as comprehensive and reliable as possible, they tend to suffer from incompleteness and noise, due to the complex building process. Nevertheless, even in highly human-curated knowledge bases, cases of incompleteness can be found; for instance, disjointness axioms are quite often missing. Machine learning methods have been proposed with the purpose of refining, enriching, completing and possibly raising potential issues in existing knowledge bases, while showing the ability to cope with noise. The talk will concentrate on classes of mostly symbol-based machine learning methods, specifically focusing on concept learning, rule learning and disjointness axiom learning problems, showing how the developed methods can be exploited for enriching existing knowledge bases. The talk will also highlight that a key element of the illustrated solutions is the integration of background knowledge, deductive reasoning and the evidence coming from the mass of the data. The last part of the talk will be devoted to the presentation of an approach for injecting background knowledge into numeric embedding models to be used for predictive tasks on Knowledge Graphs.
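    As a toy illustration of the numeric embedding models mentioned at the end, the following is a minimal TransE-style sketch, a standard translation-based model in which a triple (h, r, t) is scored by how well the head embedding plus the relation embedding lands on the tail embedding. It is a generic example, not the background-knowledge-injection approach the talk presents; the triples, dimensions and learning rate are all made up:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    triples = [(0, 0, 1), (2, 0, 3)]          # (head, relation, tail) entity/relation ids
    n_ent, n_rel, dim = 4, 1, 8
    E = rng.normal(scale=0.1, size=(n_ent, dim))   # entity embeddings
    R = rng.normal(scale=0.1, size=(n_rel, dim))   # relation embeddings

    def score(h, r, t):
        """Squared translation error ||E[h] + R[r] - E[t]||^2: lower is more plausible."""
        d = E[h] + R[r] - E[t]
        return float(d @ d)

    # SGD on a margin loss: max(0, margin + score(pos) - score(neg)),
    # with a corrupted tail serving as the negative example
    lr, margin = 0.05, 1.0
    for _ in range(200):
        for h, r, t in triples:
            t_neg = int(rng.integers(n_ent))
            if t_neg == t:
                continue
            if margin + score(h, r, t) - score(h, r, t_neg) > 0:
                d_pos = E[h] + R[r] - E[t]
                d_neg = E[h] + R[r] - E[t_neg]
                E[h] -= lr * 2 * (d_pos - d_neg)   # gradient of the margin loss
                R[r] -= lr * 2 * (d_pos - d_neg)
                E[t] += lr * 2 * d_pos             # pull tail toward head + relation
                E[t_neg] -= lr * 2 * d_neg         # push corrupted tail away
    ```

    Incompleteness is then addressed predictively: candidate missing triples can be ranked by this score, and background knowledge (the subject of the talk) constrains which candidates are admissible at all.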

    A step towards understanding paper documents

    This report focuses on the analysis steps necessary for paper document processing. It is divided into three major parts: document image preprocessing, knowledge-based geometric classification of the image, and expectation-driven text recognition. It first illustrates the low-level image processing procedures that provide the physical document structure of a scanned document image. Furthermore, it describes a knowledge-based approach developed for the identification of logical objects (e.g., the sender or the footnote of a letter) in a document image. The logical identifiers permit a context-restricted consideration of the text they contain. Using specific logical dictionaries, an expectation-driven text recognition can then identify text parts of specific interest. The system has been implemented for the analysis of single-sided business letters in Common Lisp on a SUN 3/60 workstation. It runs on a large population of different letters. The report also illustrates and discusses typical results obtained by the system.
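    The three stages might be sketched schematically as follows. The function boundaries, layout rule and data shapes here are hypothetical stand-ins (the original system was implemented in Common Lisp, and real segmentation and OCR are far more involved):

    ```python
    def preprocess(image):
        """Low-level image processing: segment the scanned page into physical
        blocks (position plus raw content), i.e. the physical document structure."""
        # toy stand-in: pretend segmentation found one block near the top of the page
        return [{"bbox": (50, 40, 500, 90), "pixels": image}]

    def classify_blocks(blocks):
        """Knowledge-based geometric classification: label each physical block
        with a logical object type (sender, footnote, body, ...) via layout rules."""
        for b in blocks:
            _, y0, _, y1 = b["bbox"]
            b["logical"] = "sender" if y1 < 150 else "body"  # toy layout rule
        return blocks

    def recognize_text(blocks, dictionaries):
        """Expectation-driven recognition: restrict the hypotheses for each block
        to the logical dictionary associated with its label."""
        for b in blocks:
            vocab = dictionaries.get(b["logical"], [])
            b["text"] = vocab[0] if vocab else ""  # stub standing in for real OCR
        return blocks

    blocks = recognize_text(classify_blocks(preprocess(image=None)),
                            dictionaries={"sender": ["ACME Corp."]})
    ```

    The key design point the report describes survives even this caricature: the logical label assigned in stage two selects the dictionary that constrains recognition in stage three, so geometry narrows the text hypotheses before any character is read.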