The Ballarat incremental knowledge engine
Ripple Down Rules (RDR) is a maturing collection of methodologies for the incremental development and maintenance of medium to large rule-based knowledge systems. While earlier knowledge-based systems relied on extensive modeling and knowledge engineering, RDR instead takes a simple no-model approach that merges the development and maintenance stages. Over the last twenty years RDR has been significantly expanded and applied in numerous domains. Until now, researchers have generally implemented their own versions of the methodologies, while commercial implementations have not been made available. This has resulted in much duplicated code, and the advantages of RDR have not been available to a wider audience. The aim of this project is to develop a comprehensive and extensible platform that supports current and future RDR technologies, thereby giving researchers and developers access to the power and versatility of RDR. This paper reports on the current status of the project and marks the first release of the software. © 2010 Springer-Verlag Berlin Heidelberg
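The incremental rule addition that the abstracts above describe can be sketched as a single-classification RDR (SCRDR) tree: each rule has a refinement ("except") branch tried when it fires and an alternative ("else") branch tried when it does not, and a misclassification is fixed by hanging a new rule in the context of the failing case. This is a minimal illustrative sketch, assuming a dictionary-of-attributes case representation; it is not the interface of any particular RDR platform, and the glucose example values are invented.

```python
# Minimal single-classification Ripple Down Rules (SCRDR) sketch.
# Cases are plain dicts; conditions are predicates over a case.

class RDRNode:
    def __init__(self, condition, conclusion):
        self.condition = condition      # predicate over a case
        self.conclusion = conclusion    # label returned if the condition fires
        self.except_child = None        # refinement: tried when this rule fires
        self.else_child = None          # alternative: tried when it does not

    def classify(self, case, last=None):
        if self.condition(case):
            result = self.conclusion
            if self.except_child:
                refined = self.except_child.classify(case, result)
                if refined is not None:
                    result = refined
            return result
        if self.else_child:
            return self.else_child.classify(case, last)
        return last

    def add_exception(self, condition, conclusion):
        """Patch an incorrect conclusion in the context of a failing case."""
        if self.except_child is None:
            self.except_child = RDRNode(condition, conclusion)
            return
        node = self.except_child
        while node.else_child:          # later corrections go on the else chain
            node = node.else_child
        node.else_child = RDRNode(condition, conclusion)

# Usage: start from a default rule, then refine it as misclassified cases arrive.
root = RDRNode(lambda c: True, "normal")
root.add_exception(lambda c: c["glucose"] > 11.0, "hyperglycaemia")
print(root.classify({"glucose": 14.2}))  # -> hyperglycaemia
print(root.classify({"glucose": 5.1}))   # -> normal
```

Because a new rule is only reachable through the path that misclassified the case, earlier correct behaviour is preserved, which is the "minimal impact" property the abstracts emphasise.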
Learning and discovery in incremental knowledge acquisition
Knowledge Based Systems (KBS) have been actively investigated since the early period of AI. There are four common approaches to building expert systems: modeling approaches, programming approaches, case-based approaches and machine-learning approaches. One particular technique is Ripple Down Rules (RDR), which may be classified as an incremental case-based approach. Knowledge is acquired from experts in the context of the individual cases they view. In the RDR framework, the expert adds a new rule based on the context of an individual case. This task is simple and affects the expert's workflow only minimally. The rule added fixes an incorrect interpretation made by the KBS while having minimal impact on the KBS's previous correct performance. This provides incremental improvement. Despite these strengths of RDR, there are some limitations, including rule redundancy, a lack of intermediate features and a lack of models. This thesis addresses these limitations by applying automatic learning algorithms to reorganize the knowledge base, to learn intermediate features and, possibly, to discover domain models. The redundancy problem occurs because rules are created in particular contexts even though they should have more general application. We address this limitation by reorganizing the knowledge base and removing redundant rules. Removal of redundant rules should also reduce the number of future knowledge acquisition sessions. Intermediate features improve modularity, because the expert can deal with features in groups rather than individually. In addition to the manual creation of intermediate features for RDR, we propose the automated discovery of intermediate features to speed up the knowledge acquisition process by generalizing existing rules. Finally, the Ripple Down Rules approach facilitates rapid knowledge acquisition, as it can be initialized with a minimal ontology. Despite this minimal modeling, we propose that a more developed knowledge model can be extracted from an existing RDR KBS. This may be useful when applying an RDR KBS to other applications. The most useful of these three developments was the automated discovery of intermediate features, which made a significant difference to the number of knowledge acquisition sessions required.
Document management and retrieval for specialised domains: an evolutionary user-based approach
Browsing marked-up documents by traversing hyperlinks has become probably the most
important means by which documents are accessed, both via the World Wide Web (WWW) and
organisational Intranets. However, there is a pressing demand for document management and
retrieval systems to deal appropriately with the massive number of documents available. There
are two classes of solution: general search engines, whether for the WWW or an Intranet, which
make little use of specific domain knowledge or hand-crafted specialised systems which are
costly to build and maintain.
The aim of this thesis was to develop a document management and retrieval system suitable for
small communities as well as individuals in specialised domains on the Web. The aim was to
allow users to easily create and maintain their own organisation of documents while ensuring
continual improvement in the retrieval performance of the system as it evolves. The system
developed is based on the free annotation of documents by users and is browsed using the
concept lattice of Formal Concept Analysis (FCA). A number of annotation support tools were
developed to aid the annotation process so that a suitable system evolved. Experiments were
conducted in using the system to assist in finding staff and student home pages at the School of
Computer Science and Engineering, University of New South Wales.
Results indicated that the annotation tools provided a good level of assistance so that documents
were easily organised and a lattice-based browsing structure that evolves in an ad hoc fashion
provided good efficiency in retrieval performance. An interesting result suggested that although
an established external taxonomy can be useful in proposing annotation terms, users appear to
be very selective in their use of terms proposed. Results also supported the hypothesis that the
concept lattice of FCA helped take users beyond a narrow search to find other useful
documents. In general, lattice-based browsing was considered a more helpful method than
Boolean queries or hierarchical browsing for searching a specialised domain.
We conclude that the concept lattice of Formal Concept Analysis, supported by annotation
techniques, is a useful way of supporting the flexible, open management of documents required
by individuals, small communities and in specialised domains. It seems likely that this approach
can be readily integrated with other developments such as further improvements in search
engines and the use of semantically marked-up documents, and provide a unique advantage in
supporting autonomous management of documents by individuals and groups, in a way that is
closely aligned with the autonomy of the WWW.
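The annotation-and-lattice idea above can be illustrated with a small Formal Concept Analysis sketch: objects are documents, attributes are annotation terms, and a formal concept pairs a set of documents with exactly the terms they all share. The document names, terms and the brute-force enumeration below are illustrative assumptions (fine for tiny contexts, not for production-scale lattices).

```python
# Formal Concept Analysis over a toy annotation context.
# A formal concept is a pair (extent, intent) with extent = documents
# having every term in intent, and intent = terms shared by every
# document in extent.

from itertools import combinations

context = {
    "home_page_A": {"staff", "research"},
    "home_page_B": {"staff", "teaching"},
    "home_page_C": {"student", "research"},
}

attributes = set().union(*context.values())

def extent(terms):
    """Documents annotated with every term in the set."""
    return {d for d, ts in context.items() if terms <= ts}

def intent(docs):
    """Terms shared by every document in the set."""
    if not docs:
        return set(attributes)
    return set.intersection(*(context[d] for d in docs))

# Enumerate all concepts by closing every attribute subset (naive but simple).
concepts = set()
for r in range(len(attributes) + 1):
    for terms in combinations(sorted(attributes), r):
        e = extent(set(terms))
        concepts.add((frozenset(e), frozenset(intent(e))))

for ext, inte in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(ext), "<->", sorted(inte))
```

Browsing then amounts to moving between concepts: narrowing the intent (adding a term) shrinks the extent to a subset, which is how the lattice takes a user from a broad query toward related documents without a Boolean query language.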
The development of an environmental noise decision support system
Noise-control methodology exists to solve many noise problems. The most cost-effective
methods of controlling noise are those that can be employed in advance to prevent a
potential noise problem from occurring. However, most architects fail to consider noise
problems during site planning, because the existing methods require a long procedure to
arrive at the required conclusion about the noise performance of a site. These methods
depend on predicting the noise level at the reception point affected by the noise source.
This prediction depends on many variables, including the distance from the source and
propagation effects such as screening and ground attenuation. The second stage is to
establish the kind of building the receiver will use. At this stage, the noise-control
expert can establish the noise performance required to solve the problem. A further
complicated procedure follows in applying the noise-control methodology, which branches
into many options; the option chosen depends on certain priorities and the architect's
constraints.
On this basis, the thesis is concerned with developing another method that uses the
existing techniques but can be used by architects or novice users. This method predicts
the noise level at the reception point, establishes the required noise performance and,
finally, gives suitable advice to solve the problem.
Artificial Intelligence (AI) techniques in general, and Expert Systems (ES) in
particular, have been employed to develop this method. The most important part of this
technique is the knowledge base used to fulfil the desired objective. This knowledge is
developed through several stages, collectively called the knowledge acquisition process.
The process consists of five main stages. It begins with identifying the objectives of
the system and drawing the relationships between the different factors that affect the
desired conclusion. This knowledge is then used to establish the main concepts to be
represented in the expert system. The next stage is to formalise the collected
knowledge, followed by the implementation stage and, finally, the evaluation of the system.
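The prediction-then-advice pipeline described above can be sketched in miniature. The distance term uses the standard free-field spreading law for a point source, Lp(r2) = Lp(r1) - 20·log10(r2/r1); the decibel thresholds, advice strings and function names are illustrative assumptions, not values or rules from the thesis (a real system would also model screening and ground attenuation).

```python
# Minimal sketch of the two stages: predict the level at the receiver,
# then apply a crude rule-based advice step.
# Assumption: point source, free field; thresholds are invented.

import math

def level_at_receiver(source_db, source_dist_m, receiver_dist_m):
    """Sound pressure level at the receiver by distance attenuation alone."""
    return source_db - 20 * math.log10(receiver_dist_m / source_dist_m)

def advise(predicted_db, required_db):
    """Pick a treatment option from the excess over the criterion."""
    excess = predicted_db - required_db
    if excess <= 0:
        return "no treatment needed"
    if excess <= 10:
        return "consider screening (barrier) between source and receiver"
    return "increase separation or upgrade building envelope insulation"

# 85 dB measured at 10 m; receiver 80 m away; criterion 55 dB.
predicted = level_at_receiver(85.0, 10.0, 80.0)
print(round(predicted, 1), advise(predicted, 55.0))
# prints: 66.9 increase separation or upgrade building envelope insulation
```

Note the 6 dB drop per doubling of distance implied by the 20·log10 law: 10 m to 80 m is three doublings, hence roughly 18 dB of attenuation in the example.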
Acquisition of Search Knowledge
The development of highly effective heuristics for search problems is a difficult and time-consuming task. We present a knowledge acquisition approach to incrementally model expert search processes. Although experts do not normally have introspective access to that knowledge, their explanations of actual search considerations seem very valuable in constructing a knowledge-level model of their search processes. The incremental method was inspired by the work on Ripple-Down Rules, which allows knowledge acquisition and maintenance without analysis or a knowledge engineer. We substantially extend Ripple Down Rules to allow undefined terms in the conditions. These undefined terms are in turn defined by Ripple Down Rules. The resulting framework is called Nested Ripple Down Rules. Our system SmS1.2 (SmS for Smart Searcher) has been employed for the acquisition of expert chess knowledge for performing a highly pruned tree search. Our first experimental results in the chess domain are ev..
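The nesting idea in the abstract above, where a rule condition may use a term with no primitive definition and that term is itself defined by its own rules, can be sketched as a recursive term evaluator. The representation and the chess-flavoured term names below are illustrative assumptions, not the SmS implementation.

```python
# Nested Ripple Down Rules sketch: terms in conditions resolve either to
# primitive case features or to their own (nested) rule lists.

def evaluate(term, case, definitions):
    """Resolve a term: primitive case feature, or a nested rule list."""
    if term in case:                      # primitive feature of the case
        return case[term]
    # Nested definition: the first rule whose condition holds supplies the value.
    for condition, value in definitions[term]:
        if all(evaluate(t, case, definitions) for t in condition):
            return value
    return False

definitions = {
    # "promising_move" is defined using "safe", itself a nested term.
    "promising_move": [(["gains_material", "safe"], True)],
    "safe":           [(["defended"], True)],
}

case = {"gains_material": True, "defended": True}
print(evaluate("promising_move", case, definitions))  # -> True
```

The recursion is what lets an expert state a condition in vocabulary the system does not yet understand and supply the missing definition later, in the same incremental style as ordinary RDR maintenance.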
Incremental acquisition of search knowledge
The development of highly effective heuristics for search problems is a difficult and time-consuming task. We present a knowledge acquisition approach to incrementally model expert search processes. Although experts do not normally have complete introspective access to that knowledge, their explanations of actual search considerations seem very valuable in constructing a knowledge-level model of their search processes.