The Bionic Radiologist: avoiding blurry pictures and providing greater insights
Radiology images and reports have long been digitalized. However, the potential of the more than 3.6 billion radiology
examinations performed annually worldwide has largely gone unused in the effort to digitally transform health care. The Bionic
Radiologist is a concept that combines humanity and digitalization for better health care integration of radiology. At a practical
level, this concept will achieve critical goals: (1) testing decisions being made scientifically on the basis of disease probabilities and
patient preferences; (2) image analysis done consistently at any time and at any site; and (3) treatment suggestions that are closely
linked to imaging results and are seamlessly integrated with other information. The Bionic Radiologist will thus help avoid missed
care opportunities, will provide continuous learning in the work process, and will also allow more time for radiologists' primary
roles: interacting with patients and referring physicians. To achieve that potential, one has to cope with many implementation
barriers at both the individual and institutional levels. These include reluctance to delegate decision making, a possible decrease in
image interpretation knowledge, and the perception that patient safety and trust are at stake. To facilitate implementation of the
Bionic Radiologist the following will be helpful: uncertainty quantifications for suggestions, shared decision making, changes in
organizational culture and leadership style, maintained expertise through continuous learning systems for training, and role
development of the involved experts. With the support of the Bionic Radiologist, disparities are reduced and the delivery of care is
provided in a humane and personalized fashion.
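One concrete facilitator named above is uncertainty quantification for suggestions. A minimal sketch of how such a safeguard could work, assuming a hypothetical model that reports both a disease probability and a calibrated uncertainty (all names and thresholds here are illustrative, not from the article):

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    finding: str
    probability: float   # model's estimated disease probability
    uncertainty: float   # e.g. width of a calibrated confidence interval

def triage(suggestion: Suggestion, max_uncertainty: float = 0.15) -> str:
    """Surface a machine suggestion with its confidence when the model is
    sure enough; otherwise defer the case to the radiologist for review."""
    if suggestion.uncertainty <= max_uncertainty:
        return f"suggest: {suggestion.finding} (p={suggestion.probability:.2f})"
    return f"defer to radiologist: {suggestion.finding} (uncertain)"

print(triage(Suggestion("pulmonary nodule", 0.82, 0.05)))
print(triage(Suggestion("subtle fracture", 0.55, 0.30)))
```

The point of the sketch is that low-confidence suggestions are routed back to the human expert rather than acted on automatically, which addresses the patient-safety and trust barriers the abstract mentions.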
A formal foundation for ontology alignment interaction models
Ontology alignment foundations are hard to find in the literature. The abstract nature of the topic and the diverse means of practice make it difficult to capture in a universal formal foundation. We argue that this lack of formality hinders further development and convergence of practices and, in particular, prevents us from achieving greater levels of automation. In this article we present a formal foundation for ontology alignment that is based on interaction models between heterogeneous agents on the Semantic Web. We use the mathematical notion of information flow in a distributed system to ground our three hypotheses for enabling semantic interoperability, and we use a motivating example throughout the article: how to progressively align two ontologies of research quality assessment through meaning coordination. We conclude the article with the presentation, in an executable specification language, of such an ontology-alignment interaction model.
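The core idea of aligning two ontologies by coordinating on shared meaning can be illustrated with a toy sketch, loosely in the spirit of information-flow-based alignment: two terms are taken as aligned when they classify the same instances. The ontologies and terms below are invented for illustration and do not come from the paper:

```python
# Two tiny "ontologies": each term classifies a set of shared instances.
ONTOLOGY_A = {"article": {"paper1", "paper2"}, "monograph": {"book1"}}
ONTOLOGY_B = {"paper": {"paper1", "paper2"}, "book": {"book1"}}

def align(a: dict, b: dict) -> dict:
    """Map each term of ontology A to the term of ontology B that
    classifies exactly the same instances."""
    mapping = {}
    for term_a, instances_a in a.items():
        for term_b, instances_b in b.items():
            if instances_a == instances_b:   # same instance sets => aligned
                mapping[term_a] = term_b
    return mapping

print(align(ONTOLOGY_A, ONTOLOGY_B))  # {'article': 'paper', 'monograph': 'book'}
```

A real interaction model would build this mapping progressively, as agents exchange terms and instances during meaning coordination, rather than comparing complete instance sets up front.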
Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation
This paper surveys the current state of the art in Natural Language
Generation (NLG), defined as the task of generating text or speech from
non-linguistic input. A survey of NLG is timely in view of the changes that the
field has undergone over the past decade or so, especially in relation to new
(usually data-driven) methods, as well as new applications of NLG technology.
This survey therefore aims to (a) give an up-to-date synthesis of research on
the core tasks in NLG and the architectures adopted in which such tasks are
organised; (b) highlight a number of relatively recent research topics that
have arisen partly as a result of growing synergies between NLG and other areas
of artificial intelligence; (c) draw attention to the challenges in NLG
evaluation, relating them to similar challenges faced in other areas of Natural
Language Processing, with an emphasis on different evaluation methods and the
relationships between them. Comment: Published in Journal of AI Research (JAIR), volume 61, pp. 75-170. 118
pages, 8 figures, 1 table.
Natural language processing
Beginning with the basic issues of NLP, this chapter aims to chart the major research activities in this area since the last ARIST chapter in 1996 (Haas, 1996), including: (i) natural language text processing systems - text summarization, information extraction, information retrieval, etc., including domain-specific applications; (ii) natural language interfaces; (iii) NLP in the context of the WWW and digital libraries; and (iv) evaluation of NLP systems.
Three Approaches to Generating Texts in Different Styles
Natural Language Generation (NLG) systems generate texts in English and other human languages from non-linguistic input data. Usually there are a large number of possible texts that can communicate the input data, and NLG systems must choose one of these. We argue that style can be used by NLG systems to choose between possible texts, and explore how this can be done by (1) explicit stylistic parameters, (2) imitating a genre style, and (3) imitating an individual's style.
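The first of the three approaches, explicit stylistic parameters, can be sketched as follows: given several candidate texts that express the same content, score each against a target value of a stylistic parameter (here formality, crudely estimated from hand-made word lists) and pick the closest. The word lists and candidate sentences are illustrative assumptions, not material from the paper:

```python
# Crude illustrative formality estimate from small hand-made word lists.
FORMAL_WORDS = {"precipitation", "approximately", "commence"}
INFORMAL_WORDS = {"rain", "about", "start"}

def formality(text: str) -> float:
    """Fraction of marked words in the text that are formal (0.5 if none)."""
    words = set(text.lower().split())
    formal = len(words & FORMAL_WORDS)
    informal = len(words & INFORMAL_WORDS)
    return formal / (formal + informal) if formal + informal else 0.5

def choose(candidates: list[str], target: float) -> str:
    """Return the candidate whose formality is closest to the target."""
    return min(candidates, key=lambda t: abs(formality(t) - target))

candidates = [
    "precipitation will commence at approximately noon",
    "rain will start about noon",
]
print(choose(candidates, target=1.0))  # picks the formal variant
print(choose(candidates, target=0.0))  # picks the informal variant
```

Genre and individual-style imitation (approaches 2 and 3) replace the fixed parameter target with statistics learned from a corpus of the genre or author in question.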