Learning from AI : new trends in database technology
Recently, some researchers in database data modelling and in knowledge representation in artificial intelligence have recognized that they share many common goals. In this survey paper we show the relationship between database and artificial intelligence research. We show that data models have tended to incorporate more of the modelling techniques developed for knowledge representation in artificial intelligence as the desire for more application-oriented semantics, user friendliness, and flexibility has increased. Increasing the semantics of the representation is the key to capturing the "reality" of the database environment, increasing user friendliness, and facilitating support for multiple, possibly conflicting, user views of the information contained in a database.
Estimating rainfall and water balance over the Okavango River Basin for hydrological applications
A historical database for use in rainfall-runoff modeling of the Okavango River Basin in Southwest Africa is presented. The work has relevance for similar data-sparse regions. The parameters of main concern are rainfall and catchment water balance, which are key variables for subsequent studies of the hydrological impacts of development and climate change. Rainfall estimates are based on a combination of in-situ gauges and satellite sources. Rain gauge measurements are most extensive from 1955 to 1972, after which they are drastically reduced due to the Angolan civil war. The sensitivity of the rainfall fields to spatial interpolation techniques and to the density of gauges was evaluated. Satellite-based rainfall estimates for the basin are developed for the period from 1991 onwards, based on the Tropical Rainfall Measuring Mission (TRMM) and Special Sensor Microwave Imager (SSM/I) data sets. The consistency between the gauge and satellite estimates was considered. A methodology was developed to allow calibration of the rainfall-runoff hydrological model against rain gauge data from 1960 to 1972, with the prerequisite that the model should be driven by satellite-derived rainfall products from the 1990s onwards. With the rain gauge data, the addition of a single rainfall station (Longa) in regions that previously lacked stations was more important than the chosen interpolation method. Comparison of satellite and gauge rainfall outside the basin indicated that the satellite overestimates rainfall by 20%. A non-linear correction was derived by fitting the rainfall frequency characteristics to those of the historical rainfall data. This satellite rainfall dataset was found satisfactory when using the Pitman rainfall-runoff model (Hughes et al., this issue). Intensive monitoring in the region is recommended to increase the accuracy of the comprehensive satellite rainfall estimate calibration procedure.
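The non-linear correction described above, fitting the frequency characteristics of the satellite rainfall to those of the historical gauge record, can be illustrated with a quantile-mapping sketch. This is not the authors' exact formulation; the function name, the synthetic gamma-distributed rainfall, and the 20% bias factor used for the toy example are all illustrative assumptions.

```python
import numpy as np

def quantile_map(sat, gauge_ref, sat_ref):
    """Map satellite rainfall onto the gauge frequency distribution.

    Hypothetical sketch of a non-linear frequency-matching correction:
    each satellite value is replaced by the gauge value found at the
    same empirical quantile. Array names are illustrative only.
    """
    sat = np.asarray(sat, dtype=float)
    # Empirical quantile of each satellite value within the
    # reference satellite record.
    q = np.searchsorted(np.sort(sat_ref), sat) / len(sat_ref)
    # Look up the gauge rainfall at the same quantile.
    return np.quantile(np.sort(gauge_ref), np.clip(q, 0.0, 1.0))

# Toy example: a satellite record biased high relative to gauges.
rng = np.random.default_rng(0)
gauge = rng.gamma(2.0, 5.0, size=1000)  # synthetic gauge rainfall (mm)
sat = gauge * 1.2                        # satellite biased high by ~20%
corrected = quantile_map(sat, gauge, sat)
print(corrected.mean(), gauge.mean())
```

After mapping, the corrected series reproduces the gauge distribution, removing the multiplicative bias while preserving the relative ordering of wet and dry periods.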
New Methods, Current Trends and Software Infrastructure for NLP
The increasing use of `new methods' in NLP, which the NeMLaP conference series exemplifies, occurs in the context of a wider shift in the nature and concerns of the discipline. This paper begins with a short review of this context and of significant trends in the field. The review motivates and leads to a set of requirements for support software of general utility for NLP research and development workers. A freely available system designed to meet these requirements, called GATE (a General Architecture for Text Engineering), is described. Information Extraction (IE), in the sense defined by the Message Understanding Conferences (ARPA \cite{Arp95}), is an NLP application in which many of the new methods have found a home (Hobbs \cite{Hob93}; Jacobs ed. \cite{Jac92}). An IE system based on GATE is also available for research purposes, and this is described. Lastly, we review related work.
The Semantic Web: Apotheosis of annotation, but what are its semantics?
This article discusses what kind of entity the proposed Semantic Web (SW) is, principally by reference to the relationship of natural language structure to knowledge representation (KR). There are three distinct views on this issue. The first is that the SW is basically a renaming of the traditional AI KR task, with all its problems and challenges. The second view is that the SW will be, at a minimum, the World Wide Web with its constituent documents annotated so as to yield their content, or meaning structure, more directly. This view makes natural language processing central as the procedural bridge from texts to KR, usually via some form of automated information extraction. The third view is that the SW is about trusted databases as the foundation of a system of Web processes and services. There is also a fourth view, which is much more difficult to define and discuss: if the SW just keeps moving as an engineering development and is lucky, then real problems will not arise. This article is part of a special issue called Semantic Web Update.
AiGERM: A logic programming front end for GERM
AiGerm (Artificially Intelligent Graphical Entity Relation Modeler) is a relational database query and programming language front end for the MCC (Mission Control Center)/STP (Space Test Program) Germ (Graphical Entity Relational Modeling) system. It is intended as an add-on component of the Germ system, to be used for navigating very large networks of information. It can also function as an expert system shell for prototyping knowledge-based systems. AiGerm provides an interface between the programming language and Germ.