5 research outputs found

    Improving the Pattern Recognition Ability of Artificial Neural Networks Using Chi2 Discretization

    Backpropagation training requires considerable time to converge. One cause is that the documents in a data set contain a mixture of continuous and discrete values. For classification, a data set is easier to separate when its attribute values differ widely. The Chi2 method succeeded in finding patterns in synthetic data sets with skewed and parallel data patterns. Combining backpropagation with Chi2 sped up the training process and improved classification accuracy. Testing was therefore continued by combining the two methods to classify data sets from real-world cases. The test results lead to the conclusion that the two combined methods speed up the training process and improve classification accuracy.
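    The abstract above describes discretizing continuous attributes with the Chi2 method before backpropagation training. As a hedged sketch of the general Chi2 idea only (bottom-up merging of adjacent intervals driven by a chi-square statistic, after Liu and Setiono's Chi2 algorithm; the function names and the threshold below are illustrative, not taken from the paper):

```python
from collections import Counter

def chi2_stat(a, b):
    """Chi-square statistic for two adjacent intervals.
    a, b: Counter mapping class label -> count in that interval."""
    classes = set(a) | set(b)
    n_a, n_b = sum(a.values()), sum(b.values())
    total = n_a + n_b
    stat = 0.0
    for c in classes:
        col = a.get(c, 0) + b.get(c, 0)          # column total for class c
        for counts, n in ((a, n_a), (b, n_b)):
            expected = n * col / total
            if expected > 0:
                stat += (counts.get(c, 0) - expected) ** 2 / expected
    return stat

def chi2_discretize(values, labels, threshold=2.706):
    """Start with one interval per distinct value, then repeatedly merge
    the adjacent pair with the lowest chi-square statistic until every
    remaining pair exceeds the threshold (2.706 ~ 90% level, 1 d.f.).
    Returns the lower bounds of the surviving intervals."""
    points = sorted(set(values))
    intervals = [Counter(l for v, l in zip(values, labels) if v == p)
                 for p in points]
    bounds = list(points)
    while len(intervals) > 1:
        stats = [chi2_stat(intervals[i], intervals[i + 1])
                 for i in range(len(intervals) - 1)]
        i = min(range(len(stats)), key=stats.__getitem__)
        if stats[i] >= threshold:
            break
        intervals[i] += intervals[i + 1]
        del intervals[i + 1], bounds[i + 1]
    return bounds
```

On a toy attribute where two classes occupy clearly separated value ranges, the merging stops exactly at the class boundary, which is what makes the discretized attribute easier for a network to separate.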

    On the Synthesis of fuzzy neural systems.

    by Chung, Fu Lai. Thesis (Ph.D.)--Chinese University of Hong Kong, 1995. Includes bibliographical references (leaves 166-174). Contents:
    1. Introduction: 1.1 Integration of Fuzzy Systems and Neural Networks; 1.2 Objectives of the Research (1.2.1 Fuzzification of Competitive Learning Algorithms; 1.2.2 Capacity Analysis of FAM and FRNS Models; 1.2.3 Structure and Parameter Identifications of FRNS); 1.3 Outline of the Thesis
    2. A Fuzzy System Primer: 2.1 Basic Concepts of Fuzzy Sets; 2.2 Fuzzy Set-Theoretic Operators; 2.3 Linguistic Variable, Fuzzy Rule and Fuzzy Inference; 2.4 Basic Structure of a Fuzzy System (2.4.1 Fuzzifier; 2.4.2 Fuzzy Knowledge Base; 2.4.3 Fuzzy Inference Engine; 2.4.4 Defuzzifier); 2.5 Concluding Remarks
    3. Categories of Fuzzy Neural Systems: 3.1 Introduction; 3.2 Fuzzification of Neural Networks (3.2.1 Fuzzy Membership Driven Models; 3.2.2 Fuzzy Operator Driven Models; 3.2.3 Fuzzy Arithmetic Driven Models); 3.3 Layered Network Implementation of Fuzzy Systems (3.3.1 Mamdani's Fuzzy Systems; 3.3.2 Takagi and Sugeno's Fuzzy Systems; 3.3.3 Fuzzy Relation Based Fuzzy Systems); 3.4 Concluding Remarks
    4. Fuzzification of Competitive Learning Networks: 4.1 Introduction; 4.2 Crisp Competitive Learning (4.2.1 Unsupervised Competitive Learning Algorithm; 4.2.2 Learning Vector Quantization Algorithm; 4.2.3 Frequency Sensitive Competitive Learning Algorithm); 4.3 Fuzzy Competitive Learning (4.3.1 Unsupervised Fuzzy Competitive Learning Algorithm; 4.3.2 Fuzzy Learning Vector Quantization Algorithm; 4.3.3 Fuzzy Frequency Sensitive Competitive Learning Algorithm); 4.4 Stability of Fuzzy Competitive Learning; 4.5 Controlling the Fuzziness of Fuzzy Competitive Learning; 4.6 Interpretations of Fuzzy Competitive Learning Networks; 4.7 Simulation Results (4.7.1 Performance of Fuzzy Competitive Learning Algorithms; 4.7.2 Performance of Monotonically Decreasing Fuzziness Control Scheme; 4.7.3 Interpretation of Trained Networks); 4.8 Concluding Remarks
    5. Capacity Analysis of Fuzzy Associative Memories: 5.1 Introduction; 5.2 Fuzzy Associative Memories (FAMs); 5.3 Storing Multiple Rules in FAMs; 5.4 A High Capacity Encoding Scheme for FAMs; 5.5 Memory Capacity; 5.6 Rule Modification; 5.7 Inference Performance; 5.8 Concluding Remarks
    6. Capacity Analysis of Fuzzy Relational Neural Systems: 6.1 Introduction; 6.2 Fuzzy Relational Equations and Fuzzy Relational Neural Systems; 6.3 Solving a System of Fuzzy Relational Equations; 6.4 New Solvable Conditions (6.4.1 Max-t Fuzzy Relational Equations; 6.4.2 Min-s Fuzzy Relational Equations); 6.5 Approximate Resolution; 6.6 System Capacity; 6.7 Inference Performance; 6.8 Concluding Remarks
    7. Structure and Parameter Identifications of Fuzzy Relational Neural Systems: 7.1 Introduction; 7.2 Modelling Nonlinear Dynamic Systems by Fuzzy Relational Equations; 7.3 A General FRNS Identification Algorithm; 7.4 An Evolutionary Computation Approach to Structure and Parameter Identifications (7.4.1 Guided Evolutionary Simulated Annealing; 7.4.2 An Evolutionary Identification (EVIDENT) Algorithm); 7.5 Simulation Results; 7.6 Concluding Remarks
    8. Conclusions: 8.1 Summary of Contributions (8.1.1 Fuzzy Competitive Learning; 8.1.2 Capacity Analysis of FAM and FRNS; 8.1.3 Numerical Identification of FRNS); 8.2 Further Investigations
    Appendix A: Publication List of the Candidate. Bibliography

    Product design selection using online customer reviews

    Product design selection is heavily constrained by its customer preference data acquisition process. Traditionally, customer preference data are collected through survey-based methods such as conjoint analysis; sometimes product prototypes are generated and evaluated by focus groups of customers. This data acquisition process can be costly and time-consuming. The goal of this dissertation is to overcome the limitations of the traditional customer preference data acquisition process by making use of a new type of customer data: online customer reviews. Because online customer reviews are, to a large extent, freely and copiously available on the Internet, using them for product design can significantly reduce both cost and time. Of course, data obtained from online reviews have disadvantages too; for example, online reviews are freely expressed and can contain a lot of noise. In this dissertation, a new methodology is developed to extract useful data from online customer reviews from a single website, construct customer preference models, and select a product design that provides the maximum expected profit. However, online customer reviews from a single website may not represent the market well. Furthermore, different websites may have their own procedures and formats for acquiring customer reviews. A new approach is therefore developed to systematically elicit customer data from multiple websites, construct customer preference models that account for website heterogeneity, and select a product design. The multiple-website model is also extended to account for customer preference heterogeneity. The models obtained from online customer reviews for single and multiple websites are compared and validated using a set of out-of-sample data. To demonstrate the applicability of the proposed models, a smartphone case study is used throughout the dissertation.
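    The abstract describes pooling reviews from multiple websites while accounting for website heterogeneity. The dissertation's actual models are not given here; as a minimal illustrative sketch only (all names and data are hypothetical), one simple way to soften per-site bias is to shrink each site's mean rating toward the overall mean in proportion to how few reviews that site contributes, then rank design alternatives by the pooled score:

```python
def pooled_score(site_reviews, prior_strength=5.0):
    """site_reviews: {site: [star ratings]}.
    Each site's mean is shrunk toward the overall mean; sites with few
    reviews are pulled harder (a crude stand-in for modelling website
    heterogeneity). Returns the average of the shrunken site means."""
    all_ratings = [r for ratings in site_reviews.values() for r in ratings]
    overall = sum(all_ratings) / len(all_ratings)
    shrunk = []
    for ratings in site_reviews.values():
        n = len(ratings)
        site_mean = sum(ratings) / n
        shrunk.append((n * site_mean + prior_strength * overall)
                      / (n + prior_strength))
    return sum(shrunk) / len(shrunk)

def select_design(candidates):
    """candidates: {design: {site: [ratings]}} -> design with best score."""
    return max(candidates, key=lambda d: pooled_score(candidates[d]))
```

This is only a scoring heuristic, not the dissertation's profit-maximizing selection model, which also builds explicit customer preference models from the review text.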

    Applications of neural networks in the binary classification problem.

    by Chan Pak Kei, Bernard. Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. Includes bibliographical references (leaves 125-127). Contents:
    1. Introduction: 1.1 Overview; 1.2 Classification Approaches; 1.3 The Use of Neural Network; 1.4 Motivations; 1.5 Organization of Thesis
    2. Related Work: 2.1 Overview; 2.2 Neural Network (2.2.1 Backpropagation Feedforward Neural Network; 2.2.2 Training of a Backpropagation Feedforward Neural Network; 2.2.3 Single Hidden-layer Model; 2.2.4 Data Preprocessing); 2.3 Fuzzy Sets (2.3.1 Fuzzy Linear Regression Analysis); 2.4 Network Architecture Altering Algorithms (2.4.1 Pruning Algorithms; 2.4.2 Constructive/Growing Algorithms); 2.5 Summary
    3. Hybrid Classification Systems: 3.1 Overview; 3.2 Literature Review (3.2.1 Fuzzy Linear Regression (FLR) with Fuzzy Interval Analysis); 3.3 Data Sample and Methodology; 3.4 Hybrid Model (3.4.1 Construction of Model); 3.5 Experimental Results (3.5.1 Experimental Results on Breast Cancer Database; 3.5.2 Experimental Results on Synthetic Data); 3.6 Conclusion
    4. Searching for Suitable Network Size Automatically: 4.1 Overview; 4.2 Literature Review (4.2.1 Pruning Algorithm; 4.2.2 Constructive Algorithms (Growing); 4.2.3 Integration of Methods); 4.3 Methodology and Approaches (4.3.1 Growing; 4.3.2 Combinations of Growing and Pruning); 4.4 Experimental Results (4.4.1 Breast-Cancer Cytology Database; 4.4.2 Tic-Tac-Toe Database); 4.5 Conclusion
    5. Conclusion: 5.1 Recall of Thesis Objectives; 5.2 Summary of Achievements (5.2.1 Data Preprocessing; 5.2.2 Network Size); 5.3 Future Works
    Appendix A: Experimental Results of Ch3. Appendix B: Experimental Results of Ch4. Bibliography
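    The outline above centers on pruning and growing a backpropagation network to find a suitable size automatically. The thesis's specific algorithms are not reproduced here; purely as a generic illustration of the pruning half of that idea, magnitude-based pruning of a flat weight list can be sketched as:

```python
def prune_by_magnitude(weights, fraction=0.2):
    """Zero out the smallest-magnitude `fraction` of the weights
    (generic magnitude pruning, offered only as an illustration of
    the family of pruning algorithms the thesis surveys)."""
    k = int(len(weights) * fraction)
    if k == 0:
        return list(weights)
    # k-th smallest absolute value becomes the pruning threshold
    threshold = sorted(abs(w) for w in weights)[k - 1]
    pruned, removed = [], 0
    for w in weights:
        if abs(w) <= threshold and removed < k:
            pruned.append(0.0)   # connection removed from the network
            removed += 1
        else:
            pruned.append(w)     # connection kept
    return pruned
```

A full prune-and-retrain or grow-and-prune loop would alternate steps like this with further training epochs until the classification error stops improving.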