7 research outputs found

    Evolutionary program induction directed by logic grammars.

    by Wong Man Leung. Thesis (Ph.D.)--Chinese University of Hong Kong, 1995. Includes bibliographical references (leaves 227-236). Contents:
    - Chapter 1, Introduction: automatic programming and program induction; motivation; contributions of the research; outline of the thesis.
    - Chapter 2, An Overview of Evolutionary Algorithms: evolutionary algorithms; Genetic Algorithms (GAs), covering the canonical genetic algorithm, selection methods, recombination methods, inversion and reordering, implicit parallelism and the building block hypothesis, steady-state genetic algorithms, and hybrid algorithms; Genetic Programming (GP), covering the traditional GP, Automatically Defined Functions (ADF), Module Acquisition (MA), and Strongly Typed Genetic Programming (STGP); Evolution Strategies (ES); Evolutionary Programming (EP).
    - Chapter 3, Inductive Logic Programming: inductive concept learning; Inductive Logic Programming (ILP), both interactive and empirical; techniques and methods of ILP.
    - Chapter 4, Genetic Logic Programming and Applications: introduction; representations of logic programs; crossover of logic programs; the Genetic Logic Programming System (GLPS); applications (Winston's arch problem; the modified Quinlan network reachability problem; the factorial problem).
    - Chapter 5, The Logic Grammars Based Genetic Programming System (LOGENPRO): logic grammars; representations of programs; crossover of programs; mutation of programs; the evolution process of LOGENPRO; discussion.
    - Chapter 6, Applications of LOGENPRO: learning functional programs (learning S-expressions using LOGENPRO; the DOT PRODUCT problem; learning sub-functions using explicit knowledge); learning logic programs (Winston's arch problem; the modified Quinlan network reachability problem; the factorial problem; discussion); learning programs in C.
    - Chapter 7, Knowledge Discovery in Databases: inducing decision trees using LOGENPRO (decision trees; representing decision trees as S-expressions; the credit screening problem; the experiment); learning logic programs from imperfect data (the chess endgame problem; the setup of experiments; comparisons of LOGENPRO with FOIL, BEAM-FOIL, and mFOIL1 through mFOIL5; discussion); learning programs in Fuzzy Prolog.
    - Chapter 8, An Adaptive Inductive Logic Programming System: adaptive inductive logic programming; a generic top-down ILP algorithm; inducing procedural search biases (the evolution process; the experimentation setup; fitness calculation); experimentation and evaluations (the member predicate, with and without noise; the multiply predicate; the uncle predicate); discussion.
    - Chapter 9, Conclusion and Future Work: conclusion; future work (applying LOGENPRO to discover knowledge from databases; learning recursive programs; applying LOGENPRO in engineering design; exploiting parallelism of evolutionary algorithms).
    - References; Appendix A.

    Attribute grammar evolution

    The final publication is available at Springer via http://dx.doi.org/10.1007/11499305_19. Proceedings of the First International Work-Conference on the Interplay Between Natural and Artificial Computation, IWINAC 2005, Las Palmas, Canary Islands, Spain, June 15-18, 2005.
    This paper describes Attribute Grammar Evolution (AGE), a new automatic evolutionary programming algorithm that extends standard Grammar Evolution (GE) by replacing context-free grammars with attribute grammars. GE takes into account only syntactic restrictions when generating individuals; AGE adds semantics to ensure that the individuals generated are both syntactically and semantically valid. Attribute grammars make it possible to describe the solution semantically. The paper shows empirically that AGE performs as well as GE on a classical problem, and that including semantics in the grammar can improve GE's performance. An important conclusion is that adding too much semantics can make the search more difficult.
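    The core idea of the AGE abstract above can be illustrated with a toy sketch: a plain context-free grammar only guarantees syntactic validity, while an attribute (here, the synthesized set of variables an expression uses) lets us also enforce a semantic constraint. The grammar, the attribute, and the constraint below are all invented for illustration; this is rejection sampling, not the AGE mapping process itself.

```python
import random

# Toy context-free grammar for arithmetic expressions. The CFG alone
# enforces only syntax; the synthesized attribute "used variables"
# adds a semantic check, in the spirit of AGE's attribute grammars.
GRAMMAR = {
    "expr": [["expr", "+", "expr"], ["expr", "*", "expr"], ["var"]],
    "var": [["x"], ["y"], ["1.0"]],
}

def derive(symbol, depth=0, max_depth=4):
    """Expand a symbol; return (tokens, used_vars), where used_vars is
    a synthesized attribute propagated bottom-up through the tree."""
    if symbol not in GRAMMAR:                      # terminal symbol
        used = {symbol} if symbol in ("x", "y") else set()
        return [symbol], used
    options = GRAMMAR[symbol]
    if depth >= max_depth:                         # force termination
        options = [o for o in options if o == ["var"]] or options
    tokens, used = [], set()
    for sym in random.choice(options):
        t, u = derive(sym, depth + 1, max_depth)
        tokens += t
        used |= u
    return tokens, used

def generate_valid(constraint, tries=1000):
    """Sample derivations until the attribute satisfies the semantic
    constraint (AGE instead checks attributes during mapping)."""
    for _ in range(tries):
        tokens, used = derive("expr")
        if constraint(used):
            return " ".join(tokens)
    return None

random.seed(0)
# Hypothetical semantic requirement: the expression must use both x and y.
expr = generate_valid(lambda used: {"x", "y"} <= used)
print(expr)
```

    Note the paper's warning applies even to this toy: the tighter the constraint, the more candidates are rejected, which is one way "too much semantics" can slow the search.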

    Medical data mining using evolutionary computation.

    by Ngan Po Shun. Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. Includes bibliographical references (leaves 109-115). Abstract also in Chinese. Contents:
    - Chapter 1, Introduction: data mining; motivation; contributions of the research; organization of the thesis.
    - Chapter 2, Related Work in Data Mining: decision tree approach (ID3; C4.5); classification rule learning (AQ algorithm; CN2; C4.5RULES); association rule mining (Apriori; quantitative association rule mining); statistical approach (chi-square test and Bayesian classifier; FORTY-NINER; EXPLORA); Bayesian network learning (learning Bayesian networks using the Minimum Description Length (MDL) principle; discretizing continuous attributes while learning Bayesian networks).
    - Chapter 3, Overview of Evolutionary Computation: evolutionary computation (genetic algorithm; genetic programming; evolutionary programming; evolution strategy; selection methods); generic genetic programming; data mining using evolutionary computation.
    - Chapter 4, Applying Generic Genetic Programming for Rule Learning: grammar; population creation; genetic operators; evaluation of rules.
    - Chapter 5, Learning Multiple Rules from Data: previous approaches (preselection; crowding; deterministic crowding; fitness sharing); token competition; the complete rule learning approach; experiments with machine learning databases (the Iris Plant database; the Monk database).
    - Chapter 6, Bayesian Network Learning: the MDLEP learning approach; learning of discretization policy by genetic algorithm (individual representation; genetic operators); experimental results (experiments 1 to 3; comparison between the GA approach and the greedy approach).
    - Chapter 7, Medical Data Mining System: a case study on the fracture database (results of causality and structure analysis; results of rule learning); a case study on the scoliosis database (results of causality and structure analysis; results of rule learning).
    - Chapter 8, Conclusion and Future Work.
    - Bibliography; Appendix A, The Rule Sets Discovered: the best rule sets learned from the Iris database, the Monk database (Monk1; Monk2; Monk3), the fracture database (Type I rules, about diagnosis; Type II rules, about operation/surgeon; Type III rules, about stay), and the scoliosis database (rules for classification; rules for treatment); Appendix B, The Grammars Used for the Fracture and Scoliosis Databases.

    Adaptive Operator Mechanism for Genetic Programming

    Thesis (Ph.D.)--Seoul National University, Graduate School, Dept. of Electrical and Computer Engineering, August 2013. Advisor: Robert Ian McKay. Abstract also in Korean.
    Genetic programming (GP) is an effective evolutionary algorithm for many problems, especially suited to model learning. GP has many parameters, usually set by the user according to the problem, and its performance is sensitive to their values. Parameter setting has been a major focus of study in evolutionary computation, yet there is still no general guideline for choosing efficient settings; the usual method is trial and error. The method used in this thesis, the adaptive operator mechanism, replaces the user's role in setting the application rates of genetic operators, autonomously controlling the operator rates during a run. This thesis extends the adaptive operator mechanism to genetic programming, applying existing adaptive operator algorithms and developing new ones for TAG3P, a grammar-guided GP which supports a wide variety of useful genetic operators. Existing adaptive operator selection algorithms were successfully applied to TAG3P: their performances are competitive with systems without an adaptive operator mechanism. However, they showed some drawbacks, which we discuss; to overcome them, we suggest three variants on operator selection, which performed somewhat better. We have also investigated the evaluation of operator impact, which measures the effect of operator applications on the improvement of solutions. Since the measured impact guides the operator rates, its evaluation is central to the adaptive operator mechanism. Two issues arise here: the resource and the method. In principle, all history information of a run can serve as a resource for measuring operator impact, though fitness, being directly related to solution improvement, is the usual choice; across a variety of problems, this thesis uses two kinds of resources, accuracy and structure. On the other hand, even with the same resource, the measured impact differs by method; we suggest several methods for evaluating operator impact which, although they require only small changes, have a large effect on performance. Finally, we verified the adaptive operator mechanism by applying it to a real-world application: modeling of algal blooms in the Nakdong River, where the objective is a model that describes and predicts the river's ecosystem. We verified it with two studies: fitting the parameters of an expert-derived model for the Nakdong River with a GA, and modeling by extending the expert-derived model with TAG3P.
    Contents:
    - Chapter 1, Introduction: background and motivation; our approach and its contributions; outline.
    - Chapter 2, Related Works: evolutionary algorithms (genetic algorithm; genetic programming; tree adjoining grammar based genetic programming).
    - Chapter 3, Adaptive Mechanism and Adaptive Operator Selection: adaptive mechanism; adaptive operator selection (operator selection; evaluation of operator impact); algorithms of adaptive operator selection (probability matching; adaptive pursuit; multi-armed bandits).
    - Chapter 4, Preliminary Experiment for the Adaptive Operator Mechanism: test problems; experimental design (search space; general parameter settings); results and discussion.
    - Chapter 5, Operator Selection: operator selection algorithms for GP (powered probability matching; adaptive probability matching; recursive adaptive pursuit); experiments and results (test problems; experimental design; results and discussion).
    - Chapter 6, Evaluation of Operator Impact: rates for the amount of individual usage; ratio for the improvement of fitness (pairs and group; ratio and children fitness); ranking point; pre-search structure (preliminary experiment for sampling; experimental design; results and discussion).
    - Chapter 7, Application, Nakdong River Modeling: problem description (outline; data description; model description; methods); results (parameter optimization; modeling); summary.
    - Chapter 8, Conclusion: summary; future works.
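    Of the adaptive operator selection algorithms the thesis builds on, probability matching is the simplest to sketch: each operator's application rate tracks a smoothed estimate of its reward (its measured impact, e.g. offspring fitness improvement), with a floor so no operator is starved. The sketch below is a generic textbook form with assumed parameter names (p_min, alpha) and a made-up reward model, not the thesis's TAG3P implementation.

```python
import random

class ProbabilityMatching:
    """Generic probability-matching operator selection. Application
    rates are proportional to smoothed operator quality, bounded
    below by p_min so every operator keeps being explored."""

    def __init__(self, operators, p_min=0.05, alpha=0.3):
        self.ops = list(operators)
        self.p_min = p_min          # minimum selection probability
        self.alpha = alpha          # reward smoothing rate
        self.quality = {o: 1.0 for o in self.ops}

    def probabilities(self):
        total = sum(self.quality.values())
        k = len(self.ops)
        return {o: self.p_min + (1 - k * self.p_min) * self.quality[o] / total
                for o in self.ops}

    def select(self):
        probs = self.probabilities()
        return random.choices(self.ops, weights=[probs[o] for o in self.ops])[0]

    def update(self, op, reward):
        """Reward is the measured operator impact, e.g. the fitness
        improvement of offspring over their parents."""
        self.quality[op] = (1 - self.alpha) * self.quality[op] + self.alpha * reward

random.seed(1)
aos = ProbabilityMatching(["crossover", "mutation", "reproduction"])
for _ in range(200):
    op = aos.select()
    # Toy reward model for illustration: pretend crossover helps most.
    reward = {"crossover": 1.0, "mutation": 0.3, "reproduction": 0.0}[op]
    aos.update(op, reward)
probs = aos.probabilities()
print(probs)  # crossover ends up with the highest application rate
```

    Adaptive pursuit and multi-armed bandit selection differ mainly in how they turn the same quality estimates into selection probabilities (winner-take-most updates, or upper-confidence-bound scores).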

    Simulators: evolutionary multi-agent system for object recognition in satellite image.

    by Miu Hoi Shun. Thesis (M.Phil.)--Chinese University of Hong Kong, 2004. Includes bibliographical references (leaves 170-182). Abstracts in English and Chinese. Contents:
    - Chapter 1, Introduction: problem statement; contributions; thesis organization.
    - Chapter 2, Background: multi-agent systems (agent architectures; multi-agent system frameworks; advantages and disadvantages of multi-agent systems); evolutionary computation (genetic algorithms; genetic programming; evolutionary strategies; evolutionary programming); object recognition (knowledge representation; object recognition methods); evolutionary multi-agent systems (competitive coevolutionary agents; cooperative coevolutionary agents; cellular automata; emergent behavior; evolutionary agents for image processing and pattern recognition).
    - Chapter 3, System Architecture and Agent Behaviors in SIMULATORS: organization of the system (general architecture of the object recognition system; introduction to SIMULATORS; system flow; layered digital image environment); architecture of autonomous agents (internal object model; current state of an agent; local information sensor; direction density vector); agent behaviors (feature target marking; reproduction; diffusion; vanishing); clustering for autonomous agent training (creating the internal object model); summary.
    - Chapter 4, Evolutionary Algorithms for the Multi-Agent System: evolutionary agent behaviors in SIMULATORS (overview; evolutionary autonomous agents; reproduction; fitness function; direction density vector propagation; mutation); agents voting mechanism (voting for cooperative agents); evolutionary multi-agent object recognition; summary.
    - Chapter 5, Experimental Results and Applications: experiment methodology (introduction to Fung Shui woodland; testing images; creating the internal object model; experiment parameters); experimental results of Fung Shui woodland recognition (experiments 1 to 11, on the artificial01, artificial01-noise, artificial02, FungShui01, FungShui01-noise, and FungShui02 to FungShui07 images); discussion; an example of eye detection; summary.
    - Chapter 6, Conclusion: summary; future work.
    - Appendix A, The Figures in the Experiments.

    Explorations in Parallel Linear Genetic Programming

    Linear Genetic Programming (LGP) is a powerful problem-solving technique, but one with several significant weaknesses. LGP programs consist of a linear sequence of instructions, where each instruction may reuse previously computed results. This structure makes LGP programs compact and powerful; however, it also introduces the problem of instruction dependencies: certain instructions rely on the results of other instructions. These dependencies are often disrupted during crossover or mutation, when one or more instructions undergo modification. This disruption can cause disproportionately large changes in program output, resulting in non-viable offspring and poor algorithm performance. Motivated by biological inspiration and the issue of code disruption, we develop a new form of LGP called Parallel LGP (PLGP). PLGP programs consist of n lists of instructions. These lists are executed in parallel, and the resulting vectors are summed to produce the overall program output. PLGP limits the disruptive effects of crossover and mutation, which allows it to significantly outperform regular LGP. We examine the PLGP architecture and determine that large PLGP programs can be slow to converge. To improve the convergence time of large PLGP programs we develop a new form of PLGP called Cooperative Coevolution PLGP (CC PLGP). CC PLGP adapts the concept of cooperative coevolution to the PLGP architecture, optimizing all program components in parallel, which allows it to converge significantly faster than conventional PLGP. We examine the CC PLGP architecture and determine that performance
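    The PLGP execution model described above (n instruction lists run in parallel, result vectors summed) can be sketched in a few lines. The register-machine instruction encoding below is hypothetical, chosen only to make the idea concrete; the thesis's actual representation may differ.

```python
import operator

# Minimal sketch of PLGP execution: a program is n independent
# instruction lists ("factors"); each factor runs on its own register
# bank, and the factors' final register vectors are summed to give
# the program output. A mutation inside one factor cannot disrupt
# instruction dependencies in another, which is the architecture's point.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def run_factor(instructions, inputs, n_regs=4):
    """Execute one linear instruction list. Registers are seeded with
    the inputs; each instruction is (dest, op, src1, src2)."""
    regs = [0.0] * n_regs
    regs[:len(inputs)] = inputs
    for dest, op, a, b in instructions:
        regs[dest] = OPS[op](regs[a], regs[b])
    return regs

def run_plgp(program, inputs, n_regs=4):
    """Run every factor independently and sum the resulting vectors."""
    out = [0.0] * n_regs
    for factor in program:
        for i, v in enumerate(run_factor(factor, inputs, n_regs)):
            out[i] += v
    return out

# Two factors over inputs (x, y), seeded into registers r0 and r1.
program = [
    [(2, "*", 0, 0)],                   # factor 1: r2 = x * x
    [(2, "+", 0, 1), (3, "*", 2, 1)],   # factor 2: r2 = x + y; r3 = r2 * y
]
result = run_plgp(program, [2.0, 3.0])
print(result)  # → [4.0, 6.0, 9.0, 15.0]
```

    Under this model, crossover can swap whole factors between programs without breaking any within-factor dependency chain, which is the disruption-limiting property the abstract describes.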