12 research outputs found

    Reasoning with uncertainty using Nilsson's probabilistic logic and the maximum entropy formalism

    An expert system must reason with both certain and uncertain information. This thesis is concerned with the process of reasoning with uncertainty. Nilsson's elegant model of "Probabilistic Logic" has been chosen as the framework for this investigation, and the information-theoretic maximum entropy formalism as the inference engine. These two formalisms, although semantically compelling, pose major complexity problems for the implementor. Probabilistic Logic models the complete uncertainty space, and the maximum entropy formalism finds the least-commitment probability distribution within that space. The main finding of this thesis is that Nilsson's Probabilistic Logic can be successfully developed beyond the structure Nilsson proposed. Some deficiencies in Nilsson's model have been uncovered in the area of probabilistic representation, making Probabilistic Logic less powerful than Bayesian inference techniques. These deficiencies are examined, and a new model of entailment is presented which overcomes them, giving Probabilistic Logic the full representational power of Bayesian inference. The new model also preserves an important advantage that Nilsson's Probabilistic Logic has over Bayesian inference: the ability to use uncertain evidence. Traditionally, the probabilistic solution proposed by the maximum entropy formalism is arrived at by solving non-linear simultaneous equations for the aggregate factors of the non-linear terms. In the new model the maximum entropy algorithms are shown to have the highly desirable property of tractability. Although these problems have been solved for probabilistic entailment, complexity problems remain prevalent in large databases of expert rules. This thesis therefore also considers the use of heuristics and meta-level reasoning in a complex knowledge base. Finally, a description of an expert system using these techniques is given.
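    For orientation, the maximum entropy entailment problem over Nilsson's possible worlds is conventionally stated as follows; this is a textbook formulation given here for context, not a quotation from the thesis:

    ```latex
    % Maximum entropy over Nilsson's possible worlds (standard formulation).
    % w_1, ..., w_k are the consistent truth assignments (possible worlds)
    % over the sentence set; each sentence S_i is asserted with probability pi_i.
    \max_{p} \; H(p) = -\sum_{j=1}^{k} p_j \log p_j
    \quad \text{subject to} \quad
    \sum_{j \,:\, w_j \models S_i} p_j = \pi_i \quad (i = 1, \dots, m),
    \qquad \sum_{j=1}^{k} p_j = 1, \quad p_j \ge 0.
    ```

    The entailed probability of a query sentence Q is then the total weight of the worlds satisfying Q, and the Lagrangian stationarity conditions express each p_j as a product of one multiplier per satisfied constraint; these multipliers are the "aggregate factors" whose non-linear simultaneous equations the abstract mentions.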

    Modular neural networks applied to pattern recognition tasks

    Pattern recognition has become an accessible tool in developing advanced adaptive products. The need for such products is not diminishing; on the contrary, the requirements for systems that are ever more aware of their environmental circumstances are constantly growing. Feed-forward neural networks learn patterns from their training data without the relationships present in the data having to be discovered by hand. However, the problem of estimating the required size of the neural network is still not solved. If we choose a neural network that is too small for a given task, the network is unable to "comprehend" the intricacies of the data. On the other hand, if we choose a network size that is too big for the given task, there are too many parameters to tune, we can fall into the "curse of dimensionality", or, even worse, the training algorithm can easily be trapped in local minima of the error surface. We therefore investigate possible ways to find the 'Goldilocks' size for a feed-forward neural network (one that is just right in some sense), given a training set. Furthermore, we adopt the "Divide-et-Impera" approach, a paradigm famously practised by the Roman Empire and employed on a wide scale in computer programming: divide a given dataset into multiple sub-datasets, solve the problem for each sub-dataset, and fuse the results of all the sub-problems to form the result for the initial problem as a whole. To this effect we investigate modular neural networks and their performance.
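    As an illustration of this divide-et-impera idea, here is a minimal sketch using scikit-learn: partition the training set with k-means, train one small feed-forward network per partition, and fuse the answers by routing each query to the expert whose partition centre is nearest. The module count and hidden-layer size are arbitrary illustrative choices, not the thesis's own architecture:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_moons
    from sklearn.neural_network import MLPClassifier

    class ConstantExpert:
        """Fallback for a partition that happens to contain one class only."""
        def __init__(self, label):
            self.label = label
        def predict(self, X):
            return np.full(len(X), self.label)

    def train_modular(X, y, n_modules=3, hidden=8, seed=0):
        # Divide: partition the input space into sub-datasets.
        km = KMeans(n_clusters=n_modules, n_init=10, random_state=seed).fit(X)
        experts = []
        for m in range(n_modules):
            mask = km.labels_ == m
            if len(np.unique(y[mask])) == 1:
                experts.append(ConstantExpert(y[mask][0]))
            else:
                net = MLPClassifier(hidden_layer_sizes=(hidden,),
                                    max_iter=2000, random_state=seed)
                net.fit(X[mask], y[mask])  # each expert sees only its sub-dataset
                experts.append(net)
        return km, experts

    def predict_modular(km, experts, X):
        # Fuse: each point is answered by the expert responsible
        # for its region of the input space.
        modules = km.predict(X)
        y_hat = np.empty(len(X), dtype=int)
        for m, net in enumerate(experts):
            mask = modules == m
            if mask.any():
                y_hat[mask] = net.predict(X[mask])
        return y_hat

    X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
    km, experts = train_modular(X, y)
    print("training accuracy:", (predict_modular(km, experts, X) == y).mean())
    ```

    Routing by nearest partition centre is the simplest possible fusion rule; a trained gating network or weighted voting over the experts are common refinements of the same decomposition.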

    Intelligent systems: towards a new synthetic agenda


    Knowledge based approach to process engineering design
