370 research outputs found

    Rival penalized competitive learning for content-based indexing.

    by Lau Tak Kan. Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. Includes bibliographical references (leaves 100-108). Abstract also in Chinese.
    Contents:
    Chapter 1 Introduction
        1.1 Background
        1.2 Problem Defined
        1.3 Contributions
        1.4 Thesis Organization
    Chapter 2 Content-based Retrieval Multimedia Database Background and Indexing Problem
        2.1 Feature Extraction
        2.2 Nearest-neighbor Search
        2.3 Content-based Indexing Methods
        2.4 Indexing Problem
    Chapter 3 Data Clustering Methods for Indexing
        3.1 Proposed Solution to Indexing Problem
        3.2 Brief Description of Several Clustering Methods
            3.2.1 K-means
            3.2.2 Competitive Learning (CL)
            3.2.3 Rival Penalized Competitive Learning (RPCL)
            3.2.4 General Hierarchical Clustering Methods
        3.3 Why RPCL?
    Chapter 4 Non-hierarchical RPCL Indexing
        4.1 The Non-hierarchical Approach
        4.2 Performance Experiments
            4.2.1 Experimental Setup
            4.2.2 Experiment 1: Test for Recall and Precision Performance
            4.2.3 Experiment 2: Test for Different Sizes of Input Data Sets
            4.2.4 Experiment 3: Test for Different Numbers of Dimensions
            4.2.5 Experiment 4: Compare with Actual Nearest-neighbor Results
        4.3 Chapter Summary
    Chapter 5 Hierarchical RPCL Indexing
        5.1 The Hierarchical Approach
        5.2 The Hierarchical RPCL Binary Tree (RPCL-b-tree)
        5.3 Insertion
        5.4 Deletion
        5.5 Searching
        5.6 Experiments
            5.6.1 Experimental Setup
            5.6.2 Experiment 5: Test for Different Node Sizes
            5.6.3 Experiment 6: Test for Different Sizes of Data Sets
            5.6.4 Experiment 7: Test for Different Data Distributions
            5.6.5 Experiment 8: Test for Different Numbers of Dimensions
            5.6.6 Experiment 9: Test for Different Numbers of Database Objects Retrieved
            5.6.7 Experiment 10: Test with VP-tree
        5.7 Discussion
        5.8 A Relationship Formula
        5.9 Chapter Summary
    Chapter 6 Conclusion
        6.1 Future Works
        6.2 Conclusion
    Bibliography
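    The indexing structure in this thesis is built on RPCL clustering. As a rough sketch (not the thesis's exact formulation, which also de-biases the winner selection by each unit's winning frequency), one online RPCL step moves the winning center toward the sample while pushing the runner-up (the "rival") away with a much smaller learning rate, so surplus centers are driven off and the number of natural clusters can emerge automatically:

    ```python
    import numpy as np

    def rpcl_step(x, centers, lr_win=0.05, lr_rival=0.005):
        """One online RPCL update (simplified sketch).

        The closest center (winner) moves toward the sample x; the
        second-closest (rival) is penalized and pushed away with a
        much smaller rate. Other centers are left unchanged.
        """
        d = np.linalg.norm(centers - x, axis=1)
        order = np.argsort(d)
        win, rival = order[0], order[1]
        centers[win] += lr_win * (x - centers[win])      # attract winner
        centers[rival] -= lr_rival * (x - centers[rival])  # repel rival
        return centers
    ```

    Iterating this step over a stream of feature vectors with more centers than expected clusters tends to leave the extra centers pushed away from the data, which is the property the thesis exploits for indexing.
    
    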

    Fuzzy clustering for content-based indexing in multimedia databases.

    by Yue Ho-Yin. Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. Includes bibliographical references (leaves 129-137). Abstracts in English and Chinese.
    Contents:
    Abstract
    Acknowledgement
    Chapter 1 Introduction
        1.1 Problem Definition
        1.2 Contributions
        1.3 Thesis Organization
    Chapter 2 Literature Review
        2.1 Content-based Retrieval, Background and Indexing Problem
            2.1.1 Feature Extraction
            2.1.2 Nearest-neighbor Search
            2.1.3 Content-based Indexing Methods
        2.2 Indexing Problems
        2.3 Data Clustering Methods for Indexing
            2.3.1 Probabilistic Clustering
            2.3.2 Possibilistic Clustering
    Chapter 3 Fuzzy Clustering Algorithms
        3.1 Fuzzy Competitive Clustering
        3.2 Sequential Fuzzy Competitive Clustering
        3.3 Experiments
            3.3.1 Experiment 1: Data sets with different numbers of samples
            3.3.2 Experiment 2: Data sets of different dimensionality
            3.3.3 Experiment 3: Data sets with different numbers of natural clusters
            3.3.4 Experiment 4: Data sets with different noise levels
            3.3.5 Experiment 5: Clusters of different geometric size
            3.3.6 Experiment 6: Clusters with different numbers of data instances
            3.3.7 Experiment 7: Performance on a real data set
        3.4 Discussion
            3.4.1 Differences Between FCC, SFCC, and Other Clustering Algorithms
            3.4.2 Variations on SFCC
            3.4.3 Why SFCC?
    Chapter 4 Hierarchical Indexing Based on Natural Cluster Information
        4.1 The Hierarchical Approach
        4.2 The Sequential Fuzzy Competitive Clustering Binary Tree (SFCC-b-tree)
            4.2.1 Data Structure of the SFCC-b-tree
            4.2.2 Tree Building
            4.2.3 Insertion
            4.2.4 Deletion
            4.2.5 Searching
        4.3 Experiments
            4.3.1 Experimental Setting
            4.3.2 Experiment 8: Test for different leaf node sizes
            4.3.3 Experiment 9: Test for different dimensionality
            4.3.4 Experiment 10: Test for different sizes of data sets
            4.3.5 Experiment 11: Test for different data distributions
        4.4 Summary
    Chapter 5 A Case Study on the SFCC-b-tree
        5.1 Introduction
        5.2 Data Collection
        5.3 Data Pre-processing
        5.4 Experimental Results
        5.5 Summary
    Chapter 6 Conclusion
        6.1 An Efficiency Formula
            6.1.1 Motivation
            6.1.2 Regression Model
            6.1.3 Discussion
        6.2 Future Directions
        6.3 Conclusion
    Bibliography
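    This thesis replaces hard winner-take-all updates with fuzzy ones. As a hedged sketch (the exact FCC/SFCC update rules are defined in the thesis; this uses generic fuzzy c-means-style memberships), each center moves toward an arriving sample in proportion to its membership, so nearby centers share the sample instead of one winner taking it outright:

    ```python
    import numpy as np

    def fuzzy_memberships(x, centers, m=2.0):
        """Fuzzy c-means-style membership of sample x in each cluster.

        Membership is inversely related to distance (fuzzifier m > 1),
        and memberships sum to 1 across clusters.
        """
        d = np.maximum(np.linalg.norm(centers - x, axis=1), 1e-12)
        inv = d ** (-2.0 / (m - 1.0))
        return inv / inv.sum()

    def sequential_fuzzy_step(x, centers, lr=0.05, m=2.0):
        """One sequential (online) fuzzy-competitive update: every center
        moves toward x weighted by its fuzzified membership, so close
        centers compete softly for the sample."""
        u = fuzzy_memberships(x, centers, m)
        return centers + lr * (u ** m)[:, None] * (x - centers)
    ```

    Feeding samples one at a time through `sequential_fuzzy_step` mirrors the "sequential" aspect of SFCC; the soft memberships are what distinguish it from the hard RPCL-style competition above.
    
    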

    Will they take this offer? A machine learning price elasticity model for predicting upselling acceptance of premium airline seating

    Employing customer information from one of the world's largest airline companies, we develop a price elasticity model (PREM) using machine learning to identify customers likely to purchase an upgrade offer from economy to premium class and to predict a customer's acceptable price range. A simulation of 64.3 million flight bookings and 14.1 million email offers over three years, mirroring actual data, indicates that PREM implementation results in approximately 1.12 million (7.94%) fewer non-relevant customer email messages, a predicted increase of 72,200 (37.2%) offers accepted, and an estimated $72.2 million (37.2%) increase in revenue. Our results illustrate the potential of automated pricing information and targeted marketing messages for upselling acceptance. We also identified three customer segments: (1) Never Upgrades, who never take the upgrade offer; (2) Upgrade Lovers, who generally upgrade; and (3) Upgrade Lover Lookalikes, who have no historical record but fit the profile of those who tend to upgrade. We discuss the implications for airline companies and related travel and tourism industries.
    © 2023 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
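    The two ingredients of the approach, predicting acceptance probability and sweeping the offer price to find an acceptable range, can be sketched on synthetic data. Everything below is hypothetical: the features (offer price, a loyalty score), the simulated acceptance behavior, and the plain logistic model standing in for the paper's machine learning model.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 4000
    # Hypothetical bookings: offer price and a loyalty score drive acceptance.
    price = rng.uniform(20, 200, n)
    loyalty = rng.uniform(0, 1, n)
    true_logit = -0.03 * (price - 80) + 3.0 * (loyalty - 0.5)
    accepted = rng.uniform(size=n) < 1 / (1 + np.exp(-true_logit))

    # Standardize features and fit a logistic model by gradient descent
    # (a minimal stand-in for the paper's ML model).
    mu, sd = np.array([110.0, 0.5]), np.array([52.0, 0.29])
    def design(p, l):
        f = (np.c_[p, l] - mu) / sd
        return np.c_[np.ones(len(f)), f]

    X, w = design(price, loyalty), np.zeros(3)
    for _ in range(3000):
        prob = 1 / (1 + np.exp(-X @ w))
        w -= 0.5 * X.T @ (prob - accepted) / n

    def acceptable_price_range(loyalty_score, threshold=0.5):
        """Sweep candidate prices; return the (low, high) band where the
        predicted acceptance probability stays at or above the threshold."""
        grid = np.linspace(20, 200, 181)
        probs = 1 / (1 + np.exp(-design(grid, np.full(grid.shape, loyalty_score)) @ w))
        ok = grid[probs >= threshold]
        return (float(ok.min()), float(ok.max())) if ok.size else None
    ```

    A high-loyalty customer's band should extend to higher prices than a low-loyalty customer's, which is the kind of per-customer range the abstract describes PREM producing.
    
    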

    A review of model designs

    The PAEQANN project aims to review current ecological theories that can help identify suitable models for predicting community structure in aquatic ecosystems, to select and discuss appropriate models depending on the type of target community (i.e. empirical vs. simulation models), and to examine how the results contribute to ecological water management objectives. To reach these goals, a number of classical statistical models, artificial neural networks, and dynamic models are presented. An even larger number of techniques within these groups will be tested later in the project. This report introduces all of them: each technique is briefly introduced, its algorithm explained, and its advantages and disadvantages discussed.

    Uncovering the structure of clinical EEG signals with self-supervised learning

    Objective. Supervised learning paradigms are often limited by the amount of labeled data that is available. This phenomenon is particularly problematic in clinically relevant data, such as electroencephalography (EEG), where labeling can be costly in terms of specialized expertise and human processing time. Consequently, deep learning architectures designed to learn on EEG data have yielded relatively shallow models and performances at best similar to those of traditional feature-based approaches. However, in most situations, unlabeled data is available in abundance. By extracting information from this unlabeled data, it might be possible to reach competitive performance with deep neural networks despite limited access to labels. Approach. We investigated self-supervised learning (SSL), a promising technique for discovering structure in unlabeled data, to learn representations of EEG signals. Specifically, we explored two tasks based on temporal context prediction as well as contrastive predictive coding on two clinically relevant problems: EEG-based sleep staging and pathology detection. We conducted experiments on two large public datasets with thousands of recordings and performed baseline comparisons with purely supervised and hand-engineered approaches. Main results. Linear classifiers trained on SSL-learned features consistently outperformed purely supervised deep neural networks in low-labeled data regimes while reaching competitive performance when all labels were available. Additionally, the embeddings learned with each method revealed clear latent structures related to physiological and clinical phenomena, such as age effects. Significance. We demonstrate the benefit of SSL approaches on EEG data. Our results suggest that self-supervision may pave the way to a wider use of deep learning models on EEG data. Peer reviewed.
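    The temporal context prediction idea can be illustrated with a relative-positioning-style sampler: pairs of EEG windows are labeled positive when they lie close in time and negative when far apart, giving free labels for representation learning. The thresholds `tau_pos` and `tau_neg` below are assumed hyperparameters, and this is only the pair-sampling half of the task, not the encoder training.

    ```python
    import numpy as np

    def relative_positioning_pairs(n_windows, tau_pos, tau_neg, n_pairs, rng):
        """Sample (anchor, other, label) index pairs over a sequence of
        EEG windows: label 1 if the two windows are within tau_pos of each
        other, label 0 if they are more than tau_neg apart."""
        pairs = []
        while len(pairs) < n_pairs:
            a = int(rng.integers(0, n_windows))
            if rng.random() < 0.5:  # positive: a nearby window
                lo, hi = max(0, a - tau_pos), min(n_windows, a + tau_pos + 1)
                b, y = int(rng.integers(lo, hi)), 1
            else:  # negative: a distant window
                far = np.r_[np.arange(0, max(0, a - tau_neg)),
                            np.arange(min(n_windows, a + tau_neg + 1), n_windows)]
                if far.size == 0:
                    continue  # anchor too central for this tau_neg; resample
                b, y = int(rng.choice(far)), 0
            pairs.append((a, b, y))
        return pairs
    ```

    A siamese encoder trained to classify these pairs learns features that reflect slowly varying physiological state, which is why a linear classifier on top can recover sleep stages.
    
    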

    Machine Learning Meets Mental Training -- A Proof of Concept Applied to Memory Sports

    This work aims to combine machine learning and mental training by presenting a practical implementation of machine learning applied to a particular form of mental training: the art of memory, taken in its competitive version known as "Memory Sports". Such a fusion strives, on the one hand, to raise awareness of both realms, while on the other it seeks to encourage research in this mixed field as a way to, ultimately, drive forward the development of this seemingly underestimated sport.
    Comment: 75 pages, 47 figures, 2 tables, 26 code excerpts