Enhancements Of Fuzzy Q-Learning Algorithm
The Fuzzy Q-Learning algorithm combines reinforcement learning techniques with fuzzy modelling. It provides a flexible solution for the automatic discovery of rules for fuzzy systems in the process of reinforcement learning. In this paper we propose several enhancements to the original algorithm that improve its performance and make it more suitable for problems with a continuous-input, continuous-output space. The presented improvements involve generalization of the set of possible rule conclusions. The aim is not only to automatically discover an appropriate rule-conclusion assignment, but also to automatically define the actual conclusion set given all possible rule conclusions. To improve the algorithm's performance in environments with inertness, a special rule selection policy is also proposed.
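As a rough illustration of the mechanism this abstract describes, the following is a minimal sketch of fuzzy Q-learning: each rule holds local q-values over a set of candidate conclusions, the continuous action is a firing-strength-weighted blend of the chosen conclusions, and the TD error is distributed back to the rules in proportion to their firing strengths. The memberships, conclusion set, and learning constants here are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N fuzzy rules, each with K candidate conclusions.
N_RULES, N_CONCL = 3, 4
q = np.zeros((N_RULES, N_CONCL))               # quality of each rule/conclusion pair
conclusions = np.linspace(-1.0, 1.0, N_CONCL)  # candidate continuous outputs

def firing_strengths(x):
    """Triangular membership degrees for a scalar input x in [0, 1] (illustrative)."""
    centers = np.linspace(0.0, 1.0, N_RULES)
    return np.clip(1.0 - np.abs(x - centers) / 0.5, 0.0, 1.0)

def act(x, eps=0.1):
    """Pick a conclusion per rule (epsilon-greedy) and defuzzify to one action."""
    phi = firing_strengths(x)
    phi = phi / phi.sum()
    chosen = np.where(rng.random(N_RULES) < eps,
                      rng.integers(N_CONCL, size=N_RULES),
                      q.argmax(axis=1))
    action = float(phi @ conclusions[chosen])  # firing-strength-weighted blend
    return action, phi, chosen

def update(phi, chosen, reward, q_next, alpha=0.1, gamma=0.9):
    """One TD update; credit is assigned to rules by normalized firing strength."""
    q_taken = float(phi @ q[np.arange(N_RULES), chosen])
    td = reward + gamma * q_next - q_taken
    q[np.arange(N_RULES), chosen] += alpha * td * phi
```

The per-rule epsilon-greedy choice is one of several exploration schemes used in the fuzzy Q-learning literature; the special rule selection policy the paper proposes for inert environments would replace this step.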
Final report : a software diagnostic tool for evaluating distress mechanisms in bridges exposed to aggressive environments
Durability issues of reinforced concrete construction cost millions of dollars in repair
or demolition. Identification of the causes of degradation and a prediction of service
life based on experience, judgement and local knowledge has limitations in
addressing all the associated issues. The objective of this CRC CI research project is
to develop a tool that will assist in the interpretation of the symptoms of degradation
of concrete structures, estimate residual capacity and recommend cost effective
solutions. This report is a documentation of the research undertaken in connection
with this project.
The primary focus of this research is centred on the case studies provided by
Queensland Department of Main Roads (QDMR) and Brisbane City Council (BCC).
These organisations are endowed with the responsibility of managing a huge volume
of bridge infrastructure in the state of Queensland, Australia. The main issue to be
addressed in managing these structures is the deterioration of bridge stock leading to
a reduction in service life. Other issues such as political backlash, public
inconvenience, and approach land acquisitions are crucial but are not within the
scope of this project. It is to be noted that deterioration is accentuated by
aggressive environments such as salt water and acidic or sodic soils. Carse (2005)
noted that road authorities need to invest their first dollars in understanding
their local concretes and optimising the durability performance of structures, and
only then look at potential remedial strategies.
An intelligent mobile vehicle navigator based on fuzzy logic and reinforcement learning
In this paper, an alternative training approach to the EEM-based training method is presented and a fuzzy reactive navigation architecture is described. The new training method is 270 times faster in learning speed, and incurs only 4% of the learning cost of the EEM method. It also shows very reliable convergence of learning, a very high proportion of learned rules (98.8%), and high adaptability. Using the rule base learned from the new method, the proposed fuzzy reactive navigator fuses the obstacle avoidance behavior and goal seeking behavior to determine its control actions, where adaptability is achieved with the aid of an environment evaluator. A comparison of this navigator using the rule bases obtained from the new training method and the EEM method shows that the new navigator guarantees a solution and that its solution is more acceptable. © 1999 IEEE.
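The behavior-fusion idea in this abstract can be sketched with a simple distance-weighted blend: the closer the nearest obstacle, the more the avoidance behavior dominates the steering command. The sensor model, weighting function, and 2 m range here are hypothetical, not taken from the paper.

```python
import math

def goal_seek(bearing_to_goal):
    """Steering command proportional to the bearing error to the goal (radians)."""
    return max(-1.0, min(1.0, bearing_to_goal / math.pi))

def avoid_obstacle(obstacle_bearing, obstacle_dist):
    """Steer away from the nearest obstacle; urgency grows as it gets closer."""
    urgency = max(0.0, 1.0 - obstacle_dist / 2.0)   # zero beyond an assumed 2 m range
    direction = -1.0 if obstacle_bearing >= 0 else 1.0
    return direction * urgency

def fuse(bearing_to_goal, obstacle_bearing, obstacle_dist):
    """Blend the two behaviors; weight shifts toward avoidance when close."""
    w_avoid = max(0.0, 1.0 - obstacle_dist / 2.0)
    steer = (w_avoid * avoid_obstacle(obstacle_bearing, obstacle_dist)
             + (1.0 - w_avoid) * goal_seek(bearing_to_goal))
    return max(-1.0, min(1.0, steer))
```

In the paper the fusion weights come from a learned fuzzy rule base and an environment evaluator rather than the fixed distance schedule used in this sketch.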
Learning Membership Functions in a Function-Based Object Recognition System
Functionality-based recognition systems recognize objects at the category
level by reasoning about how well the objects support the expected function.
Such systems naturally associate a ``measure of goodness'' or ``membership
value'' with a recognized object. This measure of goodness is the result of
combining individual measures, or membership values, from potentially many
primitive evaluations of different properties of the object's shape. A
membership function is used to compute the membership value when evaluating a
primitive of a particular physical property of an object. In previous versions
of a recognition system known as Gruff, the membership function for each of the
primitive evaluations was hand-crafted by the system designer. In this paper,
we provide a learning component for the Gruff system, called Omlet, that
automatically learns membership functions given a set of example objects
labeled with their desired category measure. The learning algorithm is
generally applicable to any problem in which low-level membership values are
combined through an and-or tree structure to give a final overall membership
value.
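The and-or combination of membership values described above can be sketched in a few lines, using the common fuzzy-logic choice of min for AND and max for OR; Gruff/Omlet may use different combination operators, so this is an assumption for illustration.

```python
def combine(node):
    """Combine leaf membership values through an and-or tree.

    A node is either a leaf float in [0, 1] or a tuple
    ("and", children) / ("or", children). AND is taken as min and
    OR as max (standard fuzzy operators; an illustrative assumption).
    """
    if isinstance(node, float):
        return node
    op, children = node
    values = [combine(c) for c in children]
    return min(values) if op == "and" else max(values)
```

For example, `combine(("and", [0.8, ("or", [0.3, 0.6])]))` evaluates the OR branch to 0.6 and the enclosing AND to min(0.8, 0.6) = 0.6. What Omlet learns is the membership functions that produce the leaf values, not this combination step.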
Reinforcement Learning
Brains rule the world, and brain-like computation is increasingly used in computers and electronic devices. Brain-like computation is about processing and interpreting data, or directly proposing and performing actions. Learning is a very important aspect. This book is on reinforcement learning, which involves performing actions to achieve a goal. The first 11 chapters of this book describe and extend the scope of reinforcement learning. The remaining 11 chapters show that there is already wide usage in numerous fields. Reinforcement learning can tackle control tasks that are too complex for traditional, hand-designed, non-learning controllers. As learning computers can deal with technical complexities, the task of human operators remains to specify goals at increasingly higher levels. This book shows that reinforcement learning is a very dynamic area in terms of theory and applications, and it should stimulate and encourage new research in this field.
Fuzzy and neural control
Fuzzy logic and neural networks provide new methods for designing control systems. Fuzzy logic controllers do not require a complete analytical model of a dynamic system and can provide knowledge-based heuristic controllers for ill-defined and complex systems. Neural networks can be used for learning control. In this chapter, we discuss hybrid methods using fuzzy logic and neural networks which can start with an approximate control knowledge base and refine it through reinforcement learning.
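A knowledge-based heuristic controller of the kind this abstract mentions can be illustrated with a tiny three-rule fuzzy controller: triangular memberships over the control error and weighted-average (centroid-style) defuzzification. The membership shapes and rule outputs here are illustrative assumptions; a neuro-fuzzy hybrid would start from such a rule base and tune these parameters through learning.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_controller(error):
    """Map a control error in [-1, 1] to an actuation in [-1, 1].

    Three hand-written rules (the 'approximate knowledge base'):
    negative error -> push positive, near-zero -> hold, positive -> push negative.
    """
    rules = [
        (tri(error, -2.0, -1.0, 0.0),  1.0),
        (tri(error, -1.0,  0.0, 1.0),  0.0),
        (tri(error,  0.0,  1.0, 2.0), -1.0),
    ]
    # Weighted average of rule outputs by their firing strengths.
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0
```

In a hybrid scheme, the breakpoints of `tri` and the rule outputs would become trainable parameters refined by reinforcement learning rather than fixed constants.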