9 research outputs found
A heuristic information retrieval study: an investigation of methods for enhanced searching of distributed data objects exploiting bidirectional relevance feedback
A thesis submitted for the degree of Doctor of Philosophy of the University of Luton.
The primary aim of this research is to investigate methods of improving the effectiveness of current information retrieval systems. This aim is pursued through several supporting objectives.
A foundational objective is to introduce a novel bidirectional, symmetrical fuzzy logic theory which may prove valuable to information retrieval, including internet searches of distributed data objects. A further objective is to design, implement and apply the novel theory to an experimental information retrieval system called ANACALYPSE, which automatically computes the relevance of a large number of unseen documents from expert relevance feedback on a small number of documents read.
A further objective is to define the methodology used in this work: an experimental information retrieval framework consisting of multiple tables and formulae which allow many combinations of similarity functions, term weights, relative term frequencies, document weights, bidirectional relevance feedback and history-adjusted term weights.
The evaluation of bidirectional relevance feedback reveals a better correspondence between system ranking of documents and users' preferences than feedback-free system ranking. The assessment of similarity functions reveals that the Cosine and Jaccard functions perform significantly better than the DotProduct and Overlap functions. The evaluation of history tracking of the documents visited from a root page reveals better system ranking of documents than tracking-free information retrieval. The assessment of stemming reveals that system retrieval performance remains unaffected, while stop word removal does not appear to be beneficial and can sometimes be harmful. The overall evaluation of the experimental system, in comparison both to a leading-edge commercial information retrieval system and to the expert's gold standard of judged relevance according to established statistical correlation methods, reveals enhanced information retrieval effectiveness.
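The four similarity functions compared in the evaluation above can be sketched over simple term-weight vectors. The weighted forms below are common textbook formulations, not necessarily the thesis's exact formulae, and all names and data are hypothetical:

```python
# Sketches of the four similarity functions compared in the evaluation,
# applied to dictionaries mapping terms to weights (data hypothetical).
import math

def dot_product(a, b):
    # Sum of products of weights for terms shared by both vectors.
    return sum(a[t] * b[t] for t in a if t in b)

def cosine(a, b):
    # Dot product normalised by the vectors' Euclidean lengths.
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot_product(a, b) / (na * nb) if na and nb else 0.0

def jaccard(a, b):
    # Shared weight relative to total weight (weighted Jaccard form).
    inter = dot_product(a, b)
    denom = (sum(w * w for w in a.values())
             + sum(w * w for w in b.values()) - inter)
    return inter / denom if denom else 0.0

def overlap(a, b):
    # Shared weight relative to the smaller of the two vectors.
    inter = dot_product(a, b)
    denom = min(sum(w * w for w in a.values()),
                sum(w * w for w in b.values()))
    return inter / denom if denom else 0.0

query = {"fuzzy": 0.8, "retrieval": 0.6}
doc = {"fuzzy": 0.5, "retrieval": 0.9, "logic": 0.3}
print(cosine(query, doc), jaccard(query, doc))
```

All four return values in [0, 1] for non-negative weights, so rankings produced by different functions can be compared directly, as the evaluation does.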
A novel fuzzy first-order logic learning system.
Tse, Ming Fun. Thesis submitted in December 2001. Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. Includes bibliographical references (leaves 142-146). Abstracts in English and Chinese.
Contents:
Chapter 1  Introduction
  1.1  Problem Definition
  1.2  Contributions
  1.3  Thesis Outline
Chapter 2  Literature Review
  2.1  Representing Inexact Knowledge
    2.1.1  Nature of Inexact Knowledge
    2.1.2  Probability Based Reasoning
    2.1.3  Certainty Factor Algebra
    2.1.4  Fuzzy Logic
  2.2  Machine Learning Paradigms
    2.2.1  Classifications
    2.2.2  Neural Networks and Gradient Descent
  2.3  Related Learning Systems
    2.3.1  Relational Concept Learning
    2.3.2  Learning of Fuzzy Concepts
  2.4  Fuzzy Logic
    2.4.1  Fuzzy Set
    2.4.2  Basic Notations in Fuzzy Logic
    2.4.3  Basic Operations on Fuzzy Sets
    2.4.4  Fuzzy Relations, Projection and Cylindrical Extension
    2.4.5  Fuzzy First Order Logic and Fuzzy Prolog
Chapter 3  Knowledge Representation and Learning Algorithm
  3.1  Knowledge Representation
    3.1.1  Fuzzy First-order Logic: A Powerful Language
    3.1.2  Literal Forms
    3.1.3  Continuous Variables
  3.2  System Architecture
    3.2.1  Data Reading
    3.2.2  Preprocessing and Postprocessing
Chapter 4  Global Evaluation of Literals
  4.1  Existing Closeness Measures between Fuzzy Sets
  4.2  The Error Function and the Normalized Error Functions
    4.2.1  The Error Function
    4.2.2  The Normalized Error Functions
  4.3  The Nodal Characteristics and the Error Peaks
    4.3.1  The Nodal Characteristics
    4.3.2  The Zero Error Line and the Error Peaks
  4.4  Quantifying the Nodal Characteristics
    4.4.1  Information Theory
    4.4.2  Applying the Information Theory
    4.4.3  Upper and Lower Bounds of CE
    4.4.4  The Whole Heuristics of FF99
  4.5  An Example
Chapter 5  Partial Evaluation of Literals
  5.1  Importance of Covering in Inductive Learning
    5.1.1  The Divide-and-conquer Method
    5.1.2  The Covering Method
    5.1.3  Effective Pruning in Both Methods
  5.2  Fuzzification of FOIL
    5.2.1  Analysis of FOIL
    5.2.2  Requirements on System Fuzzification
    5.2.3  Possible Ways in Fuzzifying FOIL
  5.3  The α Covering Method
    5.3.1  Construction of Partitions by α-cut
    5.3.2  Adaptive-α Covering
  5.4  The Probabilistic Covering Method
Chapter 6  Results and Discussions
  6.1  Experimental Results
    6.1.1  Iris Plant Database
    6.1.2  Kinship Relational Domain
    6.1.3  The Fuzzy Relation Domain
    6.1.4  Age Group Domain
    6.1.5  The NBA Domain
  6.2  Future Development Directions
    6.2.1  Speed Improvement
    6.2.2  Accuracy Improvement
    6.2.3  Others
Chapter 7  Conclusion
Bibliography
Appendix A  C4.5 to FOIL File Format Conversion
Appendix B  FF99 example
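The contents above centre on fuzzy-set machinery, notably the α-cut partitions of Chapter 5.3. As a generic reminder of that basic operation (a textbook sketch, not the thesis's FF99 system; the membership data are hypothetical):

```python
# Generic fuzzy-set alpha-cut: the crisp set of elements whose membership
# degree is at least alpha. This is the standard operation behind
# "Construction of Partitions by alpha-cut" (Chapter 5.3.1).
def alpha_cut(fuzzy_set, alpha):
    """fuzzy_set maps elements to membership degrees in [0, 1]."""
    return {x for x, mu in fuzzy_set.items() if mu >= alpha}

# Hypothetical membership function for the fuzzy concept "tall".
tall = {"ann": 0.2, "bob": 0.6, "carol": 0.9}
print(sorted(alpha_cut(tall, 0.5)))  # -> ['bob', 'carol']
```

Raising α yields a nested family of smaller crisp sets, which is what makes α-cuts usable for partitioning training examples during covering.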
The Ciao Prolog system
Ciao is a public domain, next generation multi-paradigm programming environment with a unique set of features: Ciao offers a complete Prolog system, supporting ISO-Prolog, but its novel modular design allows both restricting and extending the language. As a result, it allows working with
fully declarative subsets of Prolog and also extending these subsets (or ISO-Prolog) both syntactically and semantically. Most importantly, these restrictions and extensions can be activated separately on each program module, so that several extensions can coexist in the same application for different modules. Ciao also supports (through such extensions) programming with functions, higher-order (with predicate abstractions), constraints, and objects, as well as feature terms (records), persistence, several control rules (breadth-first search, iterative deepening, ...), concurrency (threads/engines), a good base for distributed execution (agents), and parallel execution. Libraries also support WWW programming, sockets, and external interfaces (C, Java, Tcl/Tk, relational databases, etc.). Ciao offers support for programming in the large with a robust module/object system, module-based separate/incremental compilation (automatic, with no need for makefiles), an assertion language for declaring (optional) program properties (including types and modes, but also determinacy, non-failure, cost, etc.), and automatic static inference and static/dynamic checking of such assertions. Ciao also offers support for programming in the small, producing small executables (including only those builtins used by the program), and support for writing scripts in Prolog. The Ciao programming environment includes a classical top level and a rich Emacs interface with an embeddable source-level debugger and a number of execution visualization tools. The Ciao compiler (which can be run outside the top-level shell) generates several forms of architecture-independent and stand-alone executables, which run with speed, efficiency and executable size that are very competitive with other commercial and academic Prolog/CLP systems. Library modules can be compiled into compact bytecode or C source files, and linked statically, dynamically, or autoloaded.
The novel modular design of Ciao enables, in addition to modular program development,
effective global program analysis and static debugging and optimization via source-to-source program transformation. These tasks are performed by the Ciao preprocessor (ciaopp,
distributed separately). The Ciao programming environment also includes lpdoc, an automatic documentation generator
for LP/CLP programs. It processes Prolog files adorned with (Ciao) assertions and machine-readable comments and generates manuals in many formats, including PostScript, PDF, texinfo, info, HTML and man pages, as well as on-line help, ASCII README files, and entries for indices of manuals (info, WWW, ...), and it maintains WWW distribution sites.
Aspects of Qualitative Consciousness: A Computer Science Perspective
The domain of artificial intelligence (AI) has been characterised by John Searle [Sear84] by distinguishing between weak AI, according to which computers are useful tools for studying mind, and strong AI, according to which an equivalence is made between mind and programs such that computers executing programs actually possess minds. This dissertation explores a third alternative, namely the prospects and promise of mild AI, according to which a suitable computer is capable of possessing species of mentality that may differ from or be weaker than ordinary human mentality, but qualify as “mentality” nonetheless.
The approach adopted explores whether mind can be replicated, as opposed to merely simulated, in digital machines. This requires a definition of mind in order to judge success. James Fetzer [Fetz90] has suggested minds can be defined as sign using systems in the sense of Charles Peirce’s semiotic (theory of signs) and, on this basis, argues convincingly against strong AI. Determining if his negative conclusion applies to mild AI requires rejoining Fetzer’s analysis of the analogical argument for strong AI and redressing his laws of human beings and digital machines. This is tackled by focusing on the nature and form of the operational relationship between the physical machine and mind, and suggesting some operational requirements for a minimal semiotic system independently of any underlying physical implementation. This involves four steps.
Firstly, as a formal foundation, a characterisation of systems is developed in terms of the causal structure and ontological levels in the system, where an ontological level is individuated by the laws that are in effect. This is in contrast to levels of organisation, such as levels of software abstraction. This exploration suggests that a mediating level between the physical machine and mind is, or at least appears to be, necessary as a matter of natural law for producing forms of mentality. The lawful structure that appears to be required within this level and between levels is examined with respect to the prospects for implementing a semiotic system.
Secondly, how a system can operate in terms of semiotic processes based on a network of instantiated dispositions is explored. These are modelled as the temporal counterparts of state-transitions and stationary-representations, which are termed causal-flows and temporal-representations, respectively. They highlight the varying interactive structure of temporal patterns of causal activity in time. For the purposes of replicating mind, preserving the causal-flow structure of mental processes arises as an important requirement.
Thirdly, the system structure sufficient for generating consciousness is explored — a necessary condition for a cognitive semiotic system. This suggests a requirement relating to the causal accessibility of the contents of consciousness. This structuring is driven by the system’s need to signify reality by categorising these aspects as operational entities upon which decisions can be made. Consciousness arises through the manner in which the signified reality is generated. This makes mind and consciousness the result of a co-ordinated occurrent system wide activity.
Fourthly, in a mathematical sense, brains and computers can be classified as types of numeric and symbolic systems, respectively. These systems are compared and conditions formulated under which they may give rise to equivalent ontological levels. Peirce’s triadic sign relation is analysed in terms of ontological levels and the results used to clarify the nature of the ground relation in machine forms of mentality.
According to the theorems developed, the introduction of a dispositional mediating level might effectively enable a suitable computer to replicate species of mentality. An important factor in determining whether a computer is suitable for this purpose is its performance capacity, and thus some estimates are calculated in this respect. It is shown how these requirements, along with a number of others, can help in the development of semiotic systems and variants, such as the iconic state machine of Igor Aleksander [Alek96].
The Ciao system
Abstract is not available
LDS - Labelled Deductive Systems: Volume 1 - Foundations
Traditional logics manipulate formulas. The message of this book is to manipulate pairs: formulas and labels. The labels annotate the formulas. This sounds very simple but it turned out to be a big step, which makes a serious difference, like the difference between using one hand only or allowing for the coordinated use of two hands. Of course the idea has to be made precise, and its advantages and limitations clearly demonstrated. `Precise' means a good mathematical definition and `advantages demonstrated' means case studies and applications in pure logic and in AI. To achieve that we need to address the following: \begin{enumerate} \item Define the notion of {\em LDS}, its proof theory and semantics and relate it to traditional logics. \item Explain what form the traditional concepts of cut elimination, deduction theorem, negation, inconsistency, update, etc.\ take in {\em LDS}. \item Formulate major known logics in {\em LDS}. For example, modal and temporal logics, substructural logics, default, nonmonotonic logics, etc. \item Show new results and solve long-standing problems using {\em LDS}. \item Demonstrate practical applications. \end{enumerate} This is what I am trying to do in this book. Part I of the book is an intuitive presentation of {\em LDS} in the context of traditional current views of monotonic and nonmonotonic logics. It is less oriented towards the pure logician and more towards the practical consumer of logic. It has two tasks, addressed in two chapters. These are: \begin{itemlist}{Chapter 1:} \item [Chapter 1:] Formally motivate {\em LDS} by starting from the traditional notion of `What is a logical system' and slowly adding features to it until it becomes essentially an {\em LDS}. \item [Chapter 2:] Intuitively motivate {\em LDS} by showing many examples where labels are used, as well as some case studies of familiar logics (e.g.\ modal logic) formulated as an {\em LDS}.
\end{itemlist} The second part of the book presents the formal theory of {\em LDS} for the formal logician. I have tried to avoid the style of definition-lemma-theorem and put in some explanations. What is basically needed here is the formulation of the mathematical machinery capable of doing the following. \begin{itemize} \item Define {\em LDS} algebra, proof theory and semantics. \item Show how an arbitrary (or fairly general) logic, presented traditionally, say as a Hilbert system or as a Gentzen system, can be turned into an {\em LDS} formulation. \item Show how to obtain a traditional formulation (e.g.\ Hilbert) for an arbitrary {\em LDS} presented logic. \item Define and study major logical concepts intrinsic to {\em LDS} formalisms. \item Give a detailed study of the {\em LDS} formulation of some major known logics (e.g.\ modal logics, resource logics) and demonstrate its advantages. \item Translate {\em LDS} into classical logic (reduce the `new' to the `old'), and explain {\em LDS} in the context of classical logic (two sorted logic, metalevel aspects, etc). \end{itemize} \begin{itemlist}{Chapter 1:} \item [Chapter 3:] Gives fairly general definitions of some basic concepts of {\em LDS} theory, mainly to cater for the needs of the practical consumer of logic who may wish to apply it, with a detailed study of the metabox system. The presentation of Chapter 3 is a bit tricky. It may be too formal for the intuitive reader, but not sufficiently clear and elegant for the mathematical logician. I would be very grateful for comments from the readers for the next draft. \item [Chapter 4:] Presents the basic notions of algebraic {\em LDS}. The reader may wonder how come we introduce algebraic {\em LDS} in chapter 3 and then again in chapter 4. Our aim in chapter 3 is to give a general definition and formal machinery for the applied consumer of logic. Chapter 4 on the other hand studies {\em LDS} as formal logics.
It turns out that to formulate an arbitrary logic as an {\em LDS} one needs some specific labelling algebras and these need to be studied in detail (chapter 4). For general applications it is more convenient to have general labelling algebras and possibly mathematically redundant formulations (chapter 3). In a sense chapter 4 continues the topic of the second section of chapter 3. \item [Chapter 5:] Presents the full theory of {\em LDS} where labels can be databases from possibly another {\em LDS}. It also presents Fibred Semantics for {\em LDS}. \item [Chapter 6:] Presents a theory of quantifiers for {\em LDS}. The material for this chapter is still under research. \item [Chapter 7:] Studies structured consequence relations. These are logical systems where the structure is not described through labels but through some geometry like lists, multisets, trees, etc. Thus the label of a wff is implicit, given by the place of the wff in the structure. \item [Chapter 8:] Deals with metalevel features of {\em LDS} and its translation into two sorted classical logic. \end{itemlist} Parts 3 and 4 of the book deal in detail with some specific families of logics. Chapters 9--11 essentially deal with substructural logics and their variants. \begin{itemlist}{Chapter 10:} \item [Chapter 9:] Studies resource and substructural logics in general. \item [Chapter 10:] Develops detailed proof theory for some systems as well as studying particular features such as negation. \item [Chapter 11:] Deals with many valued logics. \item [Chapter 12:] Studies the Curry--Howard formula-as-type view and how it compares with labelling. \item [Chapter 13:] Deals with modal and temporal logics. \end{itemlist} Part 5 of the book deals with {\em LDS} metatheory. \begin{itemlist}{Chapter 15:} \item [Chapter 14:] Deals with labelled tableaux. \item [Chapter 15:] Deals with combining logics. \item [Chapter 16:] Deals with abduction. \end{itemlist}
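The book's central move, manipulating (label, formula) pairs rather than bare formulas, can be illustrated with a toy sketch. This is not the book's formal machinery: the multiset labels and the label-combining modus ponens below are a simplified illustration of the kind of discipline used for the resource logics of Part 3, and all names are hypothetical.

```python
# Toy labelled deduction: a declarative unit is a (label, formula) pair,
# and each inference step acts on labels and formulas together.
# Labels here are multisets of assumption names, so applying modus ponens
# combines the labels of its premises, recording which assumptions were used.
from collections import Counter

def modus_ponens(imp, ant):
    """From (x, ('->', A, B)) and (y, A), derive (x + y, B)."""
    (x, formula), (y, a) = imp, ant
    op, antecedent, consequent = formula
    assert op == '->' and antecedent == a, "rule does not apply"
    # Label composition: the conclusion's label is the multiset union
    # of the premises' labels.
    return (x + y, consequent)

# Hypothetical labelled database: t1 labels A -> B, t2 labels A.
db = [(Counter({'t1': 1}), ('->', 'A', 'B')),
      (Counter({'t2': 1}), 'A')]
label, wff = modus_ponens(db[0], db[1])
print(sorted(label.elements()), wff)  # -> ['t1', 't2'] B
```

Because the label records every assumption consumed, one can impose side conditions on it (e.g. "each assumption used exactly once"), which is how labels let a single proof mechanism capture different substructural disciplines.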