Cross-concordances: terminology mapping and its effectiveness for information retrieval
The German Federal Ministry for Education and Research funded a major
terminology mapping initiative, which concluded in 2007. Its task was to
organize, create and manage 'cross-concordances' between controlled
vocabularies (thesauri, classification systems, subject heading lists),
centred on the social sciences but quickly extending to other subject areas.
64 crosswalks with more than 500,000 relations were established. In the final
phase of the project, a major evaluation effort was conducted to test and
measure the effectiveness of the vocabulary mappings in an information system
environment. The paper reports on the cross-concordance work and the
evaluation results.
Comment: 19 pages, 4 figures, 11 tables, IFLA conference 200
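A cross-concordance of the kind described can be pictured as a set of typed relations between terms of two controlled vocabularies, which a retrieval system then uses to expand queries. The sketch below is illustrative only; the terms, relation names, and the query-expansion function are invented, not taken from the project's actual crosswalks.

```python
# Hypothetical cross-concordance: (source term, relation type, target term)
# triples between two vocabularies. Relation types loosely follow common
# mapping vocabularies (exact / broader / narrower matches).
CROSSWALK = [
    ("unemployment", "exact", "joblessness"),
    ("labour market", "broader", "employment"),
    ("vocational training", "narrower", "apprenticeship"),
]

def expand_query(term, crosswalk, relations=("exact", "broader")):
    """Return target-vocabulary terms reachable from `term` via the
    selected relation types, for use as additional query terms."""
    return [tgt for src, rel, tgt in crosswalk
            if src == term and rel in relations]

print(expand_query("unemployment", CROSSWALK))  # -> ['joblessness']
```

By default only exact and broader matches are used for expansion, since narrower matches tend to over-restrict recall; which relation types help retrieval is exactly the kind of question the project's evaluation addressed.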
Providing Multilingual Access to Health-Oriented Content
Finding health-related content is not an easy task. People have to know what to search for, which medical terms to use, and where to find accurate information. This task becomes even harder when people such as immigrants wish to find information in their country of residence and do not speak the national language very well. In this paper, we present a new health information system that allows users to search for health information using natural language queries composed of multiple languages. We present the technical details of the system and outline the results of a preliminary user study to demonstrate the usability of the system.
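One way such a system can accept mixed-language queries is to normalise query words from any supported language to shared concept identifiers before retrieval. The sketch below is an assumption about this kind of step, not the paper's actual method; the tiny trilingual lexicon is invented for illustration.

```python
# Invented multilingual lexicon mapping surface forms (English, German,
# French) to a canonical concept label used for indexing.
LEXICON = {
    "headache": "headache", "kopfschmerz": "headache", "céphalée": "headache",
    "fever": "fever", "fieber": "fever", "fièvre": "fever",
}

def concepts(query):
    """Map a possibly mixed-language query to the set of known concepts,
    ignoring words not in the lexicon."""
    return {LEXICON[w] for w in query.lower().split() if w in LEXICON}

print(concepts("Fieber headache"))  # -> {'fever', 'headache'}
```

A real system would need morphological normalisation and a far larger medical lexicon, but the principle of retrieving by language-independent concepts rather than surface words is the same.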
Natural Language Dialogue Service for Appointment Scheduling Agents
Appointment scheduling is a problem faced daily by many individuals and
organizations. Cooperating agent systems have been developed to partially
automate this task. In order to extend the circle of participants as far as
possible, we advocate the use of natural language transmitted by e-mail. We
describe COSMA, a fully implemented German-language server for existing
appointment scheduling agent systems. COSMA can cope with multiple dialogues
in parallel and accounts for differences in dialogue behaviour between human
and machine agents. NL coverage of the sublanguage is achieved through both
corpus-based grammar development and the use of message extraction techniques.
Comment: 8 or 9 pages, LaTeX; uses aclap.sty, epsf.te
Hybrid robust deep and shallow semantic processing for creativity support in document production
The research performed in the DeepThought project (http://www.project-deepthought.net) aims at demonstrating the potential of deep linguistic processing if added to existing shallow methods that ensure robustness. Classical information retrieval is extended by high-precision concept indexing and relation detection. We use this approach to demonstrate the feasibility of three ambitious applications, one of which is a tool for creativity support in document production and collective brainstorming. This application is described in detail in this paper. Common to all three applications, and the basis for their development, is a platform for integrated linguistic processing. This platform is based on a generic software architecture that combines multiple NLP components and on Robust Minimal Recursion Semantics (RMRS) as a uniform representation language.
A Comparative Study of the Application of Different Learning Techniques to Natural Language Interfaces
In this paper we present first results from a comparative study. Its aim is
to test the feasibility of different inductive learning techniques for the
automatic acquisition of linguistic knowledge within a natural language
database interface. In our interface architecture the machine learning module
replaces an elaborate semantic analysis component. The learning module learns
the correct mapping of a user's input to the corresponding database command
based on a collection of past input data. We use an existing interface to a
production planning and control system as an evaluation testbed and compare
the results achieved by different instance-based and model-based learning
algorithms.
Comment: 10 pages, to appear CoNLL9
An integrated architecture for shallow and deep processing
We present an architecture for the integration of shallow and deep NLP components which is aimed at flexible combination of different language technologies for a range of practical current and future applications. In particular, we describe the integration of a high-level HPSG parsing system with different high-performance shallow components, ranging from named entity recognition to chunk parsing and shallow clause recognition. The NLP components enrich a representation of natural language text with layers of new XML meta-information using a single shared data structure, called the text chart. We describe details of the integration methods, and show how information extraction and language checking applications for real-world German text benefit from a deep grammatical analysis.
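The text chart described above can be pictured as stand-off annotation: each component adds its own layer of spans over the same text, addressed by character offsets, so that shallow and deep results can be combined without modifying the text. The class, layer names, and labels below are an illustrative assumption, not the project's actual XML data structure.

```python
class TextChart:
    """Toy stand-off annotation store: one span list per component layer."""

    def __init__(self, text):
        self.text = text
        self.layers = {}  # layer name -> list of (start, end, label)

    def annotate(self, layer, start, end, label):
        """Let a component add a labelled span on its own layer."""
        self.layers.setdefault(layer, []).append((start, end, label))

    def spans(self, layer):
        """Return (surface string, label) pairs for one layer."""
        return [(self.text[s:e], lab)
                for s, e, lab in self.layers.get(layer, [])]

chart = TextChart("Siemens baut ein Werk in Dresden.")
chart.annotate("ner", 0, 7, "ORG")     # named entity recogniser layer
chart.annotate("ner", 25, 32, "LOC")
chart.annotate("chunk", 0, 7, "NP")    # chunk parser layer

print(chart.spans("ner"))  # -> [('Siemens', 'ORG'), ('Dresden', 'LOC')]
```

Because every layer refers to the same character offsets, a deep HPSG parse can consume the shallow layers (e.g. treating named-entity spans as pre-analysed units) without any component needing to know about the others' internal formats.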