1,999 research outputs found

    The Role of Graduality for Referring Expression Generation in Visual Scenes

    Referring Expression Generation (REG) algorithms, a core component of systems that generate text from non-linguistic data, seek to identify domain objects using natural language descriptions. While REG has often been applied to visual domains, very few approaches deal with the problems of fuzziness and gradation. This paper discusses these problems and how they can be accommodated to achieve a more realistic view of the task of referring to objects in visual scenes.
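As a rough illustration of the gradation the abstract describes (not the paper's own method; the membership thresholds below are invented), a gradable property like "large" can be modelled with a fuzzy membership function rather than a crisp predicate, and a referring expression can then pick the object whose degree of membership best separates it from the distractors:

```python
# Hypothetical fuzzy-membership sketch for a gradable property such as "large".
# The low/high thresholds are illustrative assumptions, not values from the paper.

def membership_large(size, low=2.0, high=8.0):
    """Degree in [0, 1] to which an object of `size` counts as 'large',
    using a simple piecewise-linear (ramp) membership function."""
    if size <= low:
        return 0.0
    if size >= high:
        return 1.0
    return (size - low) / (high - low)

def best_referent(objects):
    """Pick the object the graded expression 'the large one' fits best."""
    return max(objects, key=lambda o: membership_large(o["size"]))
```

Under this sketch, "the large ball" succeeds as a description whenever one object's membership degree clearly exceeds the others', even though no object is "large" in an absolute sense.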

    A Knowledge Graph Based Approach to Social Science Surveys

    Recent success of knowledge graphs has spurred interest in applying them in open science, such as in intelligent survey systems for scientists. However, efforts to understand the quality of candidate survey questions provided by these methods have been limited. Indeed, existing methods do not consider the type of on-the-fly content planning that is possible in face-to-face surveys, and hence do not guarantee that the selection of subsequent questions is based on responses to previous questions in a survey. To address this limitation, we propose a dynamic and informative solution for an intelligent survey system that is based on knowledge graphs. To illustrate our proposal, we look into social science surveys, focusing on ordering the questions of a questionnaire component by their level of acceptance, along with conditional triggers that further customise participants' experience. Our main findings are: (i) evaluation of the proposed approach shows that the dynamic component can be beneficial in terms of lowering the number of questions asked per variable, thus allowing more informative data to be collected in a survey of equivalent length; and (ii) a primary advantage of the proposed approach is that it enables grouping of participants according to their responses, so that participants are not only served appropriate follow-up questions, but their responses to these questions may be analysed in the context of some initial categorisation. We believe that the proposed approach can easily be applied to other social science surveys based on grouping definitions in their contexts. The knowledge-graph-based intelligent survey approach proposed in our work allows online questionnaires to approach face-to-face interaction in their level of informativity and responsiveness, as well as duplicating certain advantages of interview-based data collection.
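The conditional-trigger idea above can be sketched as a small question graph in which each response routes the participant to a different follow-up, so irrelevant questions are skipped. This is a minimal illustration under assumed names, not the authors' system:

```python
# Hypothetical conditional-trigger questionnaire: each question's "triggers"
# map a response to the next question id. Question ids and wording are invented.

questions = {
    "q1": {"text": "Do you drink coffee?", "triggers": {"yes": "q2", "no": "q3"}},
    "q2": {"text": "How many cups per day?", "triggers": {}},
    "q3": {"text": "Do you drink tea instead?", "triggers": {}},
}

def run_survey(responses, start="q1"):
    """Walk the question graph, following the trigger matched by each response.

    Returns the list of question ids actually asked; questions whose trigger
    conditions are never met are skipped entirely."""
    asked, current = [], start
    while current is not None:
        asked.append(current)
        answer = responses.get(current)
        current = questions[current]["triggers"].get(answer)
    return asked
```

A "yes" to q1 routes the participant to the follow-up q2 and skips q3, which is how a dynamic survey can collect more informative data without growing longer.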

    Knowledge-Driven Intelligent Survey Systems Towards Open Science

    Open Access via Springer Compact Agreement. Acknowledgements: We are grateful to all of our survey participants, and to Anne Eschenbruecher, Sally Lamond, and Evelyn Williams for their assistance in participant recruitment. We are also grateful to Patrik Bansky for his work on refinement of the survey system.

    "I believe it's possible it might be so...": Exploiting Lexical Clues for the Automatic Generation of Evidentiality Weights for Information Extracted from English Text

    Information formulated in natural language is being created at an incredible pace, far more quickly than we can make sense of it. Thus, computer algorithms for various kinds of text analytics have been developed to try to find nuggets of new, pertinent and useful information. However, information extracted from text is not always credible or reliable; often buried in sentences are lexical and grammatical structures that indicate the uncertainty of the proposition. Such clues include hedges such as modal adverbs and adjectives, as well as hearsay markers, indicators of inference or belief ("mindsay"), and verb forms identifying future actions which may not take place. In this thesis, we demonstrate how these lexical and grammatical markers of uncertainty can be automatically analyzed to assign an evidential weight to a proposition, which can be used to assess the credibility of information extracted from English text.
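A toy version of this idea can be sketched as a lexicon lookup that discounts a proposition's weight for each hedging cue it contains. The cue list and the numeric weights below are invented for illustration; they are not the thesis's actual lexicon or scoring model:

```python
# Illustrative lexical-clue scoring for evidentiality. Cue words and weights
# are assumptions made for this sketch, not values from the thesis.

HEDGE_WEIGHTS = {
    "possibly": 0.4, "might": 0.5, "may": 0.5,   # modal adverbs / modal verbs
    "reportedly": 0.6, "allegedly": 0.5,         # hearsay markers
    "believe": 0.6, "suspect": 0.5,              # belief ("mindsay") indicators
}

def evidential_weight(sentence):
    """Return a crude credibility weight in (0, 1]: 1.0 if no hedges are found,
    otherwise the product of the weights of all hedging cues detected."""
    weight = 1.0
    for token in sentence.lower().split():
        weight *= HEDGE_WEIGHTS.get(token.strip(".,"), 1.0)
    return weight
```

Stacked hedges ("I believe it might be so") multiply together, so more heavily hedged propositions receive lower evidential weight, mirroring the graded credibility the abstract describes.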

    The meaning of meaning-fallibilism

    Much discussion of meaning by philosophers over the last 300 years has been predicated on a Cartesian first-person authority (i.e. ‘infallibilism’) with respect to what one’s terms mean. However, this has problems making sense of the way the meanings of scientific terms develop, an increase in scientific knowledge over and above scientists’ ability to quantify over new entities. Although a recent conspicuous embrace of rigid designation has broken up traditional meaning-infallibilism to some extent, this new dimension to the meaning of terms such as ‘water’ is yet to receive a principled epistemological undergirding (beyond the deliverances of ‘intuition’ with respect to certain somewhat unusual possible worlds). Charles Peirce’s distinctive, naturalistic philosophy of language is mined to provide a more thoroughly fallibilist, and thus more realist, approach to meaning, with the requisite epistemology. Both his pragmatism and his triadic account of representation, it is argued, produce an original approach to meaning, analysing it in processual rather than objectual terms, and opening a distinction between ‘meaning for us’, the meaning a term has at any given time for any given community, and ‘meaning simpliciter’, the way use of a given term develops over time (often due to a posteriori input from the world which is unable to be anticipated in advance). This account provocatively undermines a certain distinction between ‘semantics’ and ‘ontology’ which is often taken for granted in discussions of realism.

    On the Merits and Limits of Replication and Negation for IS Research

    A simple idea underpins the scientific process: all results should be subject to continued testing and questioning. Given the particularities of our international IS discipline, different viewpoints seem to be required to develop a picture of the merits and limits of testing and replication. Hence, the authors of this paper approach the topic from different perspectives. Following the ongoing discourse in neighbouring disciplines, we start by highlighting the significance of testing, replication and negation for scientific discourse, as well as for the sponsors of research initiatives. Next, we discuss types of replication research and the challenges associated with each. In the third section, challenging questions are raised in the light of the capacity of IS research for self-correction. Then, we address publication issues related to types of replications that require shifting editorial behaviours. The fifth section reflects on the possible use and interpretation of replication results in the light of contingency. As a key takeaway, the paper suggests ways to identify studies worth replicating in our field and reflects on possible roles of replication and testing for future IS research.
