    Intuitive querying of e-Health data repositories

    At the centre of the Clinical e-Science Framework (CLEF) project is a repository of well organised, detailed clinical histories, encoded as data that will be available for use in clinical care and in silico medical experiments. An integral part of the CLEF workbench is a tool that allows biomedical researchers and clinicians to query the repository of patient data in an intuitive way. This paper describes the CLEF query editing interface, which makes use of natural language generation techniques in order to alleviate some of the problems generally faced by natural language and graphical query interfaces. The query interface also incorporates an answer renderer that dynamically generates responses in both natural language text and graphics.
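    The core idea can be illustrated with a minimal sketch (this is not the CLEF implementation): the editor builds a structured query, and a natural language generation step renders it back as English so the user can confirm what will be retrieved. The Condition class, field names and operator phrasing below are assumptions.

```python
# A minimal sketch, not the CLEF implementation: render a structured
# query back as English so the user can verify it before it runs.
# Class, field and operator names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Condition:
    attribute: str   # e.g. "diagnosis"
    operator: str    # "=", ">" or "<"
    value: str       # e.g. "breast cancer"

OP_PHRASES = {"=": "is", ">": "is greater than", "<": "is less than"}

def render_query(entity: str, conditions: list[Condition]) -> str:
    """Generate an English paraphrase of a structured repository query."""
    clauses = [f"whose {c.attribute} {OP_PHRASES[c.operator]} {c.value}"
               for c in conditions]
    return f"Find all {entity} " + " and ".join(clauses) + "."

print(render_query("patients", [Condition("diagnosis", "=", "breast cancer"),
                                Condition("age", ">", "50")]))
# -> Find all patients whose diagnosis is breast cancer and whose age is greater than 50.
```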

    Simplifying NASA Earth Science Data and Information Access Through Natural Language Processing Based Data Analysis and Visualization

    NASA Earth science data, collected from satellites, model assimilation, airborne missions, and field campaigns, are large, complex and evolving. Such characteristics pose great challenges for end users (e.g., Earth science and applied science users, students, citizen scientists), particularly for those who are unfamiliar with NASA's EOSDIS and thus unable to access and utilize datasets effectively. For example, a novice user may simply ask: what is the total rainfall for a flooding event in my county yesterday? For an experienced user (e.g., an algorithm developer), a question can be: how did my rainfall product perform, compared to ground observations, during a flooding event? With rapid developments in information technology such as natural language processing, it is possible to develop simplified Web interfaces and back-end processing components that handle such questions and deliver answers as text, data, or graphics directly to users.

    In this presentation, we describe the main challenges for end users with different levels of expertise in accessing and utilizing NASA Earth science data. Surveys reveal that most non-professional users do not want to download and handle raw data or conduct heavy-duty data-processing tasks; often they just want some simple graphics or data for various purposes. To them, simple and intuitive user interfaces are sufficient, because complicated ones can be difficult and time-consuming to learn. Professional users also want such interfaces in order to answer many questions from datasets. One solution is a natural-language search box, similar to Google's, whose results can be text, data, graphics and more. The challenge then is: with natural language processing, can we design a system that processes a scientific question typed in by a user? We then describe our plan for such a prototype. The workflow is: 1) extract the needed information (e.g., variables, spatial and temporal information, processing methods) from the input, 2) process the data in the back end, and 3) deliver the results (data or graphics) to the user.
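    The three-step workflow lends itself to a compact illustration. The sketch below is an assumption-laden stand-in for the planned prototype: the regular expression and the hard-coded rainfall total substitute for a real NLP parser and the EOSDIS back end.

```python
# Illustrative sketch of the three-step workflow above; this is not the
# authors' prototype. The regex and the hard-coded total are assumptions
# standing in for a real NLP parser and the EOSDIS back end.

import re

def extract_request(question: str) -> dict:
    """Step 1: extract the variable, place and time from a user question."""
    q = question.lower()
    place = re.search(r"in ([a-z ]+?)(?: yesterday|\?|$)", q)
    return {
        "variable": "rainfall" if "rainfall" in q else None,
        "place": place.group(1).strip() if place else None,
        "time": "yesterday" if "yesterday" in q else None,
    }

def answer(question: str) -> str:
    req = extract_request(question)            # step 1: parse the question
    total_mm = 42.0                            # step 2: placeholder for back-end data processing
    return (f"Total {req['variable']} in {req['place']} "
            f"{req['time']}: {total_mm} mm")   # step 3: deliver the result as text

print(answer("What is the total rainfall in my county yesterday?"))
# -> Total rainfall in my county yesterday: 42.0 mm
```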

    Unsupervised grounding of textual descriptions of object features and actions in video

    We propose a novel method for learning visual concepts and their correspondence to the words of a natural language. The concepts and correspondences are jointly inferred from video clips depicting simple actions involving multiple objects, together with corresponding natural language commands that would elicit these actions. Individual objects are first detected, together with quantitative measurements of their colour, shape, location and motion. Visual concepts emerge from the co-occurrence of regions within a measurement space and words of the language. The method is evaluated on a set of videos generated automatically, using computer graphics, from a database of initial and goal configurations of objects. Each video is annotated with multiple natural language commands obtained from human annotators through crowdsourcing.
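    The raw co-occurrence signal that grounding builds on can be sketched in a few lines. The clips, cluster labels and commands below are invented for illustration and are not the paper's dataset or model.

```python
# Toy sketch of the co-occurrence idea, not the authors' method: count how
# often each word of the commands appears alongside each cluster of visual
# measurements, so words like "red" associate with the matching colour
# cluster. Clip data and cluster labels are invented.

from collections import defaultdict

# (visual clusters detected in a clip, words of the paired command)
clips = [
    ({"colour_0", "shape_1"}, "pick up the red block"),
    ({"colour_0", "shape_2"}, "move the red ball left"),
    ({"colour_1", "shape_1"}, "pick up the green block"),
]

cooccur = defaultdict(lambda: defaultdict(int))
for clusters, command in clips:
    for word in command.split():
        for cluster in clusters:
            cooccur[word][cluster] += 1

# "red" co-occurs twice with colour_0 and never with colour_1:
# the raw signal a grounding model would build on.
print(dict(cooccur["red"]))
```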

    The crustal dynamics intelligent user interface anthology

    The National Space Science Data Center (NSSDC) has initiated an Intelligent Data Management (IDM) research effort which has, as one of its components, the development of an Intelligent User Interface (IUI). The intent of the IUI is to provide a friendly and intelligent user interface service based on expert systems and natural language processing technologies. The purpose of such a service is to support the large number of potential scientific and engineering users who need space and land-related research and technical data but have little or no experience with query languages or understanding of the information content or architecture of the databases of interest. This document presents the design concepts, development approach and performance evaluation of a prototype IUI system for the Crustal Dynamics Project Database, which was developed using a microcomputer-based expert system tool (M.1), the natural language query processor THEMIS, and the graphics software system GSS. The IUI design is based on a multiple-view representation of a database from both the user and database perspectives, with intelligent processes to translate between the views.
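    One way to picture the multiple-view design is a translation table from the user's vocabulary to the database schema. The sketch below is purely illustrative; the mappings, table and column names are assumptions, not the actual Crustal Dynamics schema.

```python
# Minimal sketch of the "multiple view" idea: terms from the user's view
# are translated into the database view before a query is issued.
# Mappings and schema names are illustrative assumptions.

USER_TO_DB_VIEW = {
    "station":     ("SITES", "SITE_NAME"),
    "baseline":    ("BASELINES", "BASELINE_ID"),
    "observation": ("OBS", "OBS_DATE"),
}

def translate(user_term: str) -> str:
    """Translate a term from the user's view into the database view."""
    table, column = USER_TO_DB_VIEW[user_term.lower()]
    return f"{table}.{column}"

print(translate("station"))   # -> SITES.SITE_NAME
```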

    Interact: A Mixed Reality Virtual Survivor for Holocaust Testimonies

    In this paper we present Interact, a mixed reality virtual survivor for Holocaust education. It was created to preserve the powerful and engaging experience of listening to, and interacting with, Holocaust survivors, giving future generations of audiences access to their unique stories. Interact demonstrates how advanced filming techniques, 3D graphics and natural language processing can be integrated and applied to specially-recorded testimonies to enable users to ask questions and receive answers from the virtualised individual. This provides a new and rich interactive narrative of remembrance through which to engage with primary testimony. We discuss the design and development of Interact, and argue that this new form of mixed reality is a promising medium for overcoming the uncanny valley.
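    One plausible mechanism for this kind of interaction, sketched below under the assumption that each recorded answer clip is indexed by a representative question, is simple lexical matching; Interact's actual NLP pipeline may well differ.

```python
# Sketch of one plausible retrieval scheme, not necessarily Interact's:
# match an incoming question to the closest indexed question by word
# overlap and play the corresponding pre-recorded clip. The index and
# clip filenames are hypothetical.

import string

def words(s: str) -> set[str]:
    """Lowercase, strip punctuation, split into a set of words."""
    return set(s.lower().translate(str.maketrans("", "", string.punctuation)).split())

def overlap(a: str, b: str) -> float:
    wa, wb = words(a), words(b)
    return len(wa & wb) / len(wa | wb)   # Jaccard similarity

CLIP_INDEX = {
    "where were you born": "clip_birthplace.mp4",
    "how did you survive": "clip_survival.mp4",
}

def answer_clip(question: str) -> str:
    """Return the clip whose indexed question best matches the input."""
    best = max(CLIP_INDEX, key=lambda q: overlap(q, question))
    return CLIP_INDEX[best]

print(answer_clip("Where were you born?"))   # -> clip_birthplace.mp4
```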

    Generating Explanatory Captions for Information Graphics

    Graphical presentations can be used to communicate information in relational data sets succinctly and effectively. However, novel graphical presentations about numerous attributes and their relationships are often difficult to understand completely until explained. Automatically generated graphical presentations must therefore either be limited to simple, conventional ones, or risk incomprehensibility. One way of alleviating this problem is to design graphical presentation systems that work in conjunction with a natural language generator to produce "explanatory captions." This paper presents three strategies for generating explanatory captions to accompany information graphics, based on: (1) a representation of the structure of the graphical presentation, (2) a framework for identifying the perceptual complexity of graphical elements, and (3) the structure of the data expressed in the graphic. We describe an implemented system and illustrate how it is used to generate explanatory captions.
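    Strategy (2), explaining only the perceptually complex elements of a graphic, can be caricatured in a few lines. The element records, complexity scores and threshold below are invented for illustration and are not the paper's framework.

```python
# Toy sketch, not the paper's system: generate a caption from a structural
# representation of the graphic, explaining only elements judged
# perceptually complex. All records and scores are invented.

graphic = {
    "type": "scatter plot",
    "elements": [
        {"encodes": "profit by region", "technique": "colour hue", "complexity": 1},
        {"encodes": "sales volume",     "technique": "point size", "complexity": 3},
    ],
}

def caption(g: dict, threshold: int = 2) -> str:
    """Explain only encodings whose perceptual complexity meets the threshold."""
    hard = [e for e in g["elements"] if e["complexity"] >= threshold]
    parts = [f"{e['technique']} shows {e['encodes']}" for e in hard]
    return f"In this {g['type']}, " + "; ".join(parts) + "."

print(caption(graphic))
# -> In this scatter plot, point size shows sales volume.
```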