Binary Relation Database BIRD: Issues of Representation and Implementation
This thesis presents a study of two issues, integrity and homogeneity of information representation, within the area of databases. Treatments of these issues were studied within the standard and semantic database models, leading to the proposal of a new model, the Binary Relation Database (BIRD). The BIRD model uses the binary relationship as the basis for the representation of all database data and meta-data. The inadequacy of integrity definition facilities within current database technology is elaborated in this thesis and was taken into account in the BIRD system. The effects of inhomogeneity of database data and meta-data in current databases are described, and the benefits of the homogeneity of information representation in BIRD demonstrated. BIRD was implemented as a prototype database system using Modula-2; the implementation and subsequent evaluation of the system are included in this thesis. A simple menu-driven user interface to BIRD was constructed; the user may manipulate information at any conceptual level in the system in a homogeneous manner. The user is free to manipulate information from any conceptual level at any time; BIRD ensures that the database is returned to a consistent state before the next operation may take place. The new model proposed in this thesis fulfilled its objectives; suggestions for further and implementation-oriented work are presented at the end of the thesis.
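The core idea of representing data and meta-data homogeneously as binary relationships can be sketched as follows. This is an illustrative reconstruction, not code from the thesis: the store, relation names, and query interface are all assumptions.

```python
# Hypothetical sketch of a binary-relation store in which meta-data
# (schema facts) and ordinary data use the same representation.
class BinaryRelationStore:
    def __init__(self):
        self.facts = set()  # (relation, subject, object) triples

    def assert_fact(self, relation, subject, obj):
        self.facts.add((relation, subject, obj))

    def query(self, relation, subject=None, obj=None):
        # Selection by pattern matching over the single, uniform fact set.
        return [
            (r, s, o) for (r, s, o) in self.facts
            if r == relation
            and (subject is None or s == subject)
            and (obj is None or o == obj)
        ]

db = BinaryRelationStore()
# Meta-data expressed with the same mechanism as data:
db.assert_fact("has_attribute", "Employee", "salary")
# Ordinary data:
db.assert_fact("salary", "emp_42", 30000)
print(db.query("has_attribute", "Employee"))
```

Because schema facts live in the same structure as data facts, the same query operation manipulates either conceptual level, which is the homogeneity property the abstract describes.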
Query Modification in Object-oriented Database Federation
We discuss the modification of queries against an integrated view in a federation of object-oriented databases. We present a generalisation of existing algorithms for simple global query processing that works for arbitrarily defined integration classes. We then extend this algorithm to deal with object-oriented features such as queries involving path expressions and nesting. We show how properties of the OO style of modelling relationships through object references can be exploited to reduce the number of subqueries necessary to evaluate such queries.
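The basic shape of such query modification can be illustrated with a toy example: a predicate posed against the integrated view is rewritten into one subquery per component extent and the results combined. This is a minimal sketch under assumed names; the paper's actual algorithm handles far richer integration-class definitions.

```python
# Illustrative only: rewrite a global query into per-component subqueries.
def modify_query(predicate, component_extents):
    """Return one local subquery (a filter) per component database extent."""
    return [
        (lambda objs, p=predicate: [o for o in objs if p(o)])
        for _ in component_extents
    ]

# Two component databases contributing objects to one integrated view:
db1 = [{"name": "Ann", "dept": "R&D"}, {"name": "Bob", "dept": "Sales"}]
db2 = [{"name": "Cid", "dept": "R&D"}]

subqueries = modify_query(lambda o: o["dept"] == "R&D", [db1, db2])
result = [o for sub, objs in zip(subqueries, [db1, db2]) for o in sub(objs)]
print([o["name"] for o in result])  # ['Ann', 'Cid']
```

The optimisation the abstract mentions would correspond to pruning subqueries that object-reference structure proves cannot contribute results; that analysis is omitted here.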
Just below the surface: developing knowledge management systems using the paradigm of the noetic prism
In this paper we examine how the principles embodied in the paradigm of the noetic prism can illuminate the construction of knowledge management systems. We draw on the formalism of the prism to examine three successful tools: frames, spreadsheets and databases, and show how their power and also their shortcomings arise from their domain representation, and how any organisational system based on integration of these tools and conversion between them is inevitably lossy. We suggest how a late-binding, hybrid knowledge-based management system (KBMS) could be designed that draws on the lessons learnt from these tools, by maintaining noetica at an atomic level and storing the combinatory processes necessary to create higher-level structure as the need arises. We outline the 'just-below-the-surface' systems design, and describe its implementation in an enterprise-wide knowledge-based system that has all of the conventional office automation features.
Final report on the farmer's aid in plant disease diagnoses
This report is the final report on the FAD project. The FAD project was initiated in September 1985 to test the expert system shell Babylon by developing a prototype crop disease diagnosis system in it. A short overview of the history of the project and the main problems encountered is given in chapter 1. Chapter 2 describes the result of an attempt to integrate JSD with modelling techniques like generalisation and aggregation, and chapter 3 concentrates on the method we used to elicit phytopathological knowledge from specialists. Chapter 4 gives the result of knowledge acquisition for the 10 wheat diseases most commonly occurring in the Netherlands. The user interface is described briefly in chapter 5, and chapter 6 gives an overview of the additions we made to the version of FAD reported in our second report. Chapter 7, finally, summarises the conclusions of the project and gives recommendations for follow-up projects.
Environmental information systems : the development and implementation of the Lake Rukwa Basin integrated project environmental information system (LRBIP-EIS) database, Tanzania
Bibliography: leaves 91-97.
The quest for sustenance inevitably forces mankind to exploit natural resources found within their environs. In many cases, the exploitation results in massive environmental degradation that disrupts the ecosystem and causes loss of bio-diversity. There is generally a lack of information systems to monitor and provide quantitative information on the state of the affected environment. Decision-makers usually fail to make informed decisions with regard to conservation strategies. The need to provide decision-makers with quantitative environmental information formed the basis of this thesis. An integrated environmental information system (EIS) database was developed according to the Software Development Methodology for three of the identified environmental sectors. This involved a detailed user needs assessment to identify the information requirements (both spatial and textual) for each sector. The results were used to design separate data models that were later merged to create an integrated data model for the database application. A fisheries application prototype was developed to implement the proposed database design. The prototype has three major components. The Geographic Information System (GIS) handles the spatial data such as rivers, settlements, roads, and lakes. A relational database management system (RDBMS) was used to store and maintain the non-spatial data such as a fisherman's personal details and fish catch data. Customized graphical user interfaces were designed to handle the data visualization and restricted access to the GIS and RDBMS environments.
Active database behaviour: the REFLEX approach
Modern-day and new-generation applications have more demanding requirements than traditional database management systems (DBMSs) are able to support. Two of these requirements, timely responses to changes of database state and application domain knowledge stored within the database, are embodied within active database technology.
Currently, there are a number of research prototype active database systems throughout the world. In order for an organisation to use any such prototype system, it may have to forsake existing products and resources and embark on substantial reinvestment in the new database products and associated resources and retraining costs. This approach would clearly be unfavourable as it is expensive both in terms of time and money.
A more suitable approach would be to allow active behaviour to be added onto existing systems. This scenario is addressed within this research. It investigates how active behaviour can best be added to existing DBMSs, so as to preserve the investments in an organisation's resources, by examining the following issues: (i) what form the knowledge model should take; (ii) whether rules and events should be modelled as first-class objects; (iii) how the triggering events will be specified; (iv) how the user will interact with the system.
Various design decisions were taken, which were investigated by the implementation of a series of working prototypes on the ONTOS DBMS platform. The resultant REFLEX model was successfully ported and adapted onto a second platform, POET. The porting process uncovered some interesting issues regarding preconceived ideas about the portability of open systems.
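The notion of layering active behaviour onto a passive DBMS, with rules and events as first-class objects, can be sketched as a simple event-condition-action (ECA) mechanism. This is not the REFLEX implementation; all class and attribute names here are illustrative assumptions.

```python
# Minimal ECA sketch: rules and events are ordinary objects registered
# with an active layer sitting on top of a passive store (here, a dict).
class Event:
    def __init__(self, name):
        self.name = name

class Rule:
    def __init__(self, event, condition, action):
        self.event, self.condition, self.action = event, condition, action

class ActiveLayer:
    def __init__(self):
        self.rules = []

    def register(self, rule):
        self.rules.append(rule)

    def signal(self, event, db):
        # On an event, fire every rule whose condition holds on the database.
        for rule in self.rules:
            if rule.event.name == event.name and rule.condition(db):
                rule.action(db)

db = {"stock": 3}
layer = ActiveLayer()
update = Event("after_update")
layer.register(Rule(update,
                    condition=lambda d: d["stock"] < 5,
                    action=lambda d: d.update(reorder=True)))
layer.signal(update, db)
print(db)  # {'stock': 3, 'reorder': True}
```

Because rules are first-class objects, they can themselves be stored, queried, and modified through the database, which is one of the design questions the research examines.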
Migrating relational databases into object-based and XML databases
Rapid changes in information technology, the emergence of object-based and WWW applications, and the interest of organisations in securing benefits from new technologies have made information systems re-engineering in general, and database migration in particular, an active research area. In order to improve the functionality and performance of existing systems, the re-engineering process requires identifying and understanding all of the components of such systems. An underlying database is one of the most important components of an information system. A considerable body of data is stored in relational databases (RDBs), yet they have limitations in supporting complex structures and user-defined data types provided by relatively recent databases such as object-based and XML databases. Instead of throwing away the large amount of data stored in RDBs, it is more appropriate to enrich and convert such data to be used by new systems. Most researchers into the migration of RDBs into object-based/XML databases have concentrated on schema translation, accessing and publishing RDB data using newer technology, while few have paid attention to the conversion of data and the preservation of data semantics, e.g., inheritance and integrity constraints. In addition, existing work does not appear to provide a solution for more than one target database. Thus, research on the migration of RDBs is not fully developed. We propose a solution that offers automatic migration of an RDB as a source into the recent database technologies as targets, based on available standards such as ODMG 3.0, SQL4 and XML Schema. A canonical data model (CDM) is proposed to bridge the semantic gap between an RDB and the target databases. The CDM preserves and enhances the metadata of existing RDBs to fit in with the essential characteristics of the target databases. The adoption of standards is essential for increased portability, flexibility and constraints preservation.
This thesis contributes a solution for migrating RDBs into object-based and XML databases. The solution takes an existing RDB as input, enriches its metadata representation with the required explicit semantics, and constructs an enhanced relational schema representation (RSR). Based on the RSR, a CDM is generated which is enriched with the RDB's constraints and data semantics that may not have been explicitly expressed in the RDB metadata. The CDM so obtained facilitates both schema translation and data conversion. We design sets of rules for translating the CDM into each of the three target schemas, and provide algorithms for converting RDB data into the target formats based on the CDM. A prototype of the solution has been implemented, which generates the three target databases. An experimental study has been conducted to evaluate the prototype. The experimental results show that the target schemas resulting from the prototype and those generated by existing manual mapping techniques were comparable. We have also shown that the source and target databases were equivalent, and demonstrated that the solution, conceptually and practically, is feasible, efficient and correct.
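The data-conversion step for the XML target can be illustrated in miniature: rows of a relational table are serialised as XML elements whose nesting follows the table's schema. This is a hedged sketch under assumed names; the thesis's CDM-driven translation rules are considerably richer (inheritance, integrity constraints, references).

```python
# Toy relational-to-XML data conversion: one element per row, one child
# element per column. Table and column names here are invented examples.
import xml.etree.ElementTree as ET

def table_to_xml(table_name, columns, rows):
    root = ET.Element(table_name)
    for row in rows:
        rec = ET.SubElement(root, "row")
        for col, val in zip(columns, row):
            ET.SubElement(rec, col).text = str(val)
    return root

root = table_to_xml("Employee", ["id", "name"], [(1, "Ann"), (2, "Bob")])
print(ET.tostring(root, encoding="unicode"))
```

A real migration would additionally emit an XML Schema derived from the canonical model so that keys and constraints survive the conversion, which is the semantics-preservation concern the abstract highlights.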
The advantages and cost effectiveness of database improvement methods
Relational databases have proved inadequate for supporting new classes of applications, and as a consequence, a number of new approaches have been taken (Blaha 1998), (Harrington 2000). The most salient alternatives are denormalisation and conversion to an object-oriented database (Douglas 1997). Denormalisation can provide better performance but has deficiencies with respect to data modelling. Object-oriented databases can provide increased performance efficiency but without the deficiencies in data modelling (Blaha 2000).

Although there have been various benchmark tests reported, none of these tests has compared normalised, object-oriented and denormalised databases. This research shows that a non-normalised database for data containing type code complexity would be normalised in the process of conversion to an object-oriented database. This helps to correct badly organised data and so gives the performance benefits of denormalisation while improving data modelling.

The costs of conversion from relational databases to object-oriented databases were also examined. Costs were based on published benchmark tests, a benchmark carried out during this study, and case studies. The benchmark tests were based on an engineering database benchmark. Engineering problems such as computer-aided design and manufacturing have much to gain from conversion to object-oriented databases. Costs were calculated for coding and development, and also for operation. It was found that conversion to an object-oriented database was not usually cost effective, as many of the performance benefits could be achieved by the far cheaper process of denormalisation, by using the performance-improving facilities provided by many relational database systems, such as indexing or partitioning, or by simply upgrading the system hardware.

It is concluded therefore that while object-oriented databases are a better alternative for databases built from scratch, the conversion of a legacy relational database to an object-oriented database is not necessarily cost effective.
E-ARK Dissemination Information Package (DIP) Final Specification
The primary aim of this report is to present the final version of the E-ARK Dissemination Information Package (DIP) formats. The secondary aim is to describe the access scenarios in which these DIP formats will be rendered for use.
Evaluation of Functional Data Models for Database Design and Use
The problems of design, operation, and maintenance of databases using the three most popular database management systems (hierarchical, CODASYL/DBTG, and relational) are well known. Users wishing to use these systems have to make conscious and often complex mappings between the real-world structures and the data structuring options (data models) provided by these systems. In addition, much of the semantics associated with the data either does not get expressed at all or gets embedded procedurally in application programs in an ad-hoc way.

In recent years, a large number of data models (called semantic data models) have been proposed with the aim of simplifying database design and use. However, the lack of usable implementations of these proposals has so far inhibited the widespread use of these concepts. The present work reports on an effort to evaluate and extend one such semantic model by means of an implementation. It is based on the functional data model proposed earlier by Shipman [SHIP81]. We call this the Extended Functional Data Model (EFDM).

EFDM, like Shipman's proposals, is a marriage of three of the advanced modelling concepts found in both database and artificial intelligence research: the concept of entity to represent an object in the real world, the concept of type hierarchy among entity types, and the concept of derived data for modelling procedural knowledge. The functional notation of the model lends itself to high-level data manipulation languages. Data selection in these languages is expressed simply as function application. Further, the functional approach makes it possible to incorporate general-purpose computation facilities in the data languages without having to embed them in procedural languages. In addition to providing the usual database facilities, the implementation also provides a mechanism to specify multiple user views of the database.
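The functional-model idea that data selection is simply function application can be shown in a few lines. This sketch uses ordinary Python functions as stand-ins; the names and data are invented and this is not EFDM's actual notation.

```python
# Stored functions map entities to attribute values; derived functions are
# ordinary compositions of stored ones, modelling procedural knowledge.
salary = {"ann": 30000, "bob": 25000}.get   # stored function: employee -> salary
dept = {"ann": "R&D", "bob": "Sales"}.get   # stored function: employee -> dept

def well_paid(emp):
    # Derived function: defined by computation over stored functions.
    return salary(emp) > 28000

print(well_paid("ann"))  # True
```

Because queries are just applications of (possibly derived) functions, general-purpose computation slots into the data language directly, which is the advantage the abstract claims over embedding queries in a separate procedural host language.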