10 research outputs found

    A Review on Advanced Decision Trees for Efficient & Effective k-NN Classification

    Get PDF
    The k-Nearest Neighbor (kNN) method is a well-known classification technique in data mining and statistics owing to its simple implementation and strong classification performance. However, conventional kNN methods cannot reasonably assign a single fixed k value to all test samples. Previous solutions assign different k values to different test samples via cross-validation, but this is usually time-consuming. This work proposes new kNN methods. The first is a KTree method that learns different optimal k values for different test or new samples by introducing a training stage into kNN classification. This work also proposes an improved version of KTree, called K*Tree, which speeds up the test stage by storing extra information about the training samples in the leaf nodes of the KTree, such as the training samples located in each leaf node, their k nearest neighbors, and the nearest neighbors of those neighbors. K*Tree thus conducts kNN classification using only the subset of training samples stored in a leaf node rather than all training samples, as previous kNN methods do. This substantially reduces the cost of the test stage.
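
    The core idea can be illustrated with a minimal sketch: learn a per-sample k with a tree fitted on the training data, then classify each test sample with its own k. This is only an illustration under assumed tools (scikit-learn's DecisionTreeRegressor and KNeighborsClassifier) and a simplified way of choosing the "best" k per training sample; it is not the paper's KTree/K*Tree construction.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeRegressor

    def learn_k_tree(X_train, y_train, k_candidates=(1, 3, 5, 7, 9)):
        """Fit a tree that maps a sample's features to a (near-)optimal k.

        Toy stand-in for the paper's training stage: each training sample's
        best k is picked by leave-one-out scoring over a small candidate set.
        """
        best_k = np.empty(len(X_train), dtype=int)
        idx = np.arange(len(X_train))
        for i in idx:
            mask = idx != i
            scores = []
            for k in k_candidates:
                knn = KNeighborsClassifier(n_neighbors=k).fit(X_train[mask], y_train[mask])
                scores.append(knn.predict(X_train[i:i + 1])[0] == y_train[i])
            best_k[i] = k_candidates[int(np.argmax(scores))]
        return DecisionTreeRegressor(max_depth=5).fit(X_train, best_k)

    def predict_with_k_tree(k_tree, X_train, y_train, X_test):
        """Classify each test sample with its own predicted k."""
        preds = []
        for x in np.asarray(X_test):
            k = int(round(k_tree.predict(x.reshape(1, -1))[0]))
            k = max(1, min(k, len(X_train)))
            knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
            preds.append(knn.predict(x.reshape(1, -1))[0])
        return np.array(preds)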

    Smoothing graphons for modelling exchangeable relational data

    Full text link
    Exchangeable relational data can be modelled appropriately within graphon theory. Most Bayesian methods for modelling exchangeable relational data can be attributed to this framework by exploiting different forms of graphons. However, the graphons adopted by existing Bayesian methods are either piecewise-constant functions, which are insufficiently flexible to model relational data accurately, or complicated continuous functions, which incur heavy computational costs during inference. In this work, we overcome these two shortcomings by smoothing piecewise-constant graphons, which permits continuous intensity values for describing relations without impractically increasing computational costs. In particular, we focus on the Bayesian Stochastic Block Model (SBM) and demonstrate how to adapt its piecewise-constant graphon to a smoothed version. We first propose the Integrated Smoothing Graphon (ISG), which introduces one smoothing parameter into the SBM graphon to generate continuous relational intensity values. We then develop the Latent Feature Smoothing Graphon (LFSG), which improves on the ISG by introducing auxiliary hidden labels to decompose the calculation of the ISG intensity and enable efficient inference. Experimental results on real-world data sets validate the advantages of applying smoothing strategies to the Stochastic Block Model, demonstrating that smoothing graphons can greatly improve AUC and precision for link prediction without increasing computational complexity.
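
    The contrast between a piecewise-constant SBM graphon and a smoothed, continuous one can be sketched as follows. The soft mixture over block centres used here is an assumed construction chosen for exposition only; it is not the paper's ISG or LFSG formulation.

    import numpy as np

    def sbm_graphon(u, v, boundaries, B):
        """Piecewise-constant SBM graphon: look up the block intensity for (u, v)."""
        i = np.searchsorted(boundaries, u, side="right")
        j = np.searchsorted(boundaries, v, side="right")
        return B[i, j]

    def smoothed_graphon(u, v, centers, B, sharpness=20.0):
        """Continuous intensity: weight block intensities by the proximity of
        (u, v) to the block centres; large `sharpness` approaches the hard SBM graphon."""
        wu = np.exp(-sharpness * (u - centers) ** 2)
        wv = np.exp(-sharpness * (v - centers) ** 2)
        wu, wv = wu / wu.sum(), wv / wv.sum()
        return wu @ B @ wv

    # Example: three blocks on [0, 1] with interior boundaries at 0.3 and 0.7.
    boundaries = np.array([0.3, 0.7])
    centers = np.array([0.15, 0.5, 0.85])
    B = np.array([[0.9, 0.1, 0.1],
                  [0.1, 0.8, 0.2],
                  [0.1, 0.2, 0.7]])
    print(sbm_graphon(0.2, 0.8, boundaries, B))    # hard block value: 0.1
    print(smoothed_graphon(0.2, 0.8, centers, B))  # continuous value near 0.1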

    Hashing for Multimedia Similarity Modeling and Large-Scale Retrieval

    Get PDF
    In recent years, the amount of multimedia data such as images, texts, and videos has been growing rapidly on the Internet. Motivated by such trends, this thesis is dedicated to exploiting hashing-based solutions to reveal multimedia data correlations and support intra-media and inter-media similarity search over huge volumes of multimedia data. We start by investigating a hashing-based solution for audio-visual similarity modeling and apply it to the audio-visual sound source localization problem. We show that synchronized signals in the audio and visual modalities demonstrate similar temporal changing patterns in certain feature spaces. We propose to use a permutation-based random hashing technique to capture the temporal order dynamics of audio and visual features by hashing them along the temporal axis into a common Hamming space. In this way, the audio-visual correlation problem is transformed into a similarity search problem in the Hamming space. Our hashing-based audio-visual similarity modeling shows superior performance in the localization and segmentation of sounding objects in videos. The success of the permutation-based hashing method motivates us to generalize and formally define the supervised ranking-based hashing problem and to study its application to large-scale image retrieval. Specifically, we propose an effective supervised learning procedure to learn optimized ranking-based hash functions that can be used for large-scale similarity search. Compared with the randomized version, the optimized ranking-based hash codes are much more compact and discriminative. Moreover, the method can be easily extended to kernel space to discover more complex ranking structures that cannot be revealed in linear subspaces. Experiments on large image datasets demonstrate the effectiveness of the proposed method for image retrieval. We further study the ranking-based hashing method for the cross-media similarity search problem. Specifically, we propose two optimization methods to jointly learn two groups of linear subspaces, one for each media type, so that features' ranking orders in different linear subspaces maximally preserve the cross-media similarities. Additionally, we develop this ranking-based hashing method in the cross-media context into a flexible hashing framework with a more general solution. We demonstrate through extensive experiments on several real-world datasets that the proposed cross-media hashing method achieves superior cross-media retrieval performance compared with several state-of-the-art algorithms. Lastly, to make better use of the supervisory label information, as well as to further improve the efficiency and accuracy of supervised hashing, we propose a novel multimedia discrete hashing framework that optimizes an instance-wise loss objective, as opposed to pairwise losses, using an efficient discrete optimization method. In addition, the proposed method decouples binary code learning and hash function learning into two separate stages, making it equally applicable to both single-media and cross-media search. Extensive experiments on both single-media and cross-media retrieval tasks demonstrate the effectiveness of the proposed method.
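
    A minimal sketch of the permutation-based (ranking) hashing idea: each hash symbol records which of a few randomly permuted feature dimensions is largest, so the codes depend only on ranking order and are invariant to monotone rescaling. The windowed winner-take-all form below is an assumed, generic variant, not the thesis's exact hash functions.

    import numpy as np

    def make_permutation_hasher(dim, num_codes=64, window=4, seed=0):
        """Draw `num_codes` random permutations of the feature dimensions."""
        rng = np.random.default_rng(seed)
        perms = np.stack([rng.permutation(dim) for _ in range(num_codes)])

        def hasher(x):
            # Each symbol is the argmax position within the first `window`
            # permuted dimensions, i.e. a purely rank-based code.
            x = np.asarray(x)
            return np.array([int(np.argmax(x[p[:window]])) for p in perms])

        return hasher

    def code_similarity(c1, c2):
        """Fraction of matching code symbols (higher means more similar rankings)."""
        return float(np.mean(c1 == c2))

    # Two features with the same ordering but different scales hash identically.
    h = make_permutation_hasher(dim=8)
    a = np.array([0.1, 0.9, 0.3, 0.2, 0.8, 0.4, 0.7, 0.05])
    print(code_similarity(h(a), h(2.0 * a)))  # -> 1.0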

    IEEE Transactions On Neural Networks And Learning Systems : Vol. 24, No. 8, August 2013

    No full text
    1. Sampled-data exponential synchronization of complex dynamical networks with time-varying coupling delay.
    2. Dictionary learning-based subspace structure identification in spectral clustering.
    3. Knowledge-leverage-based TSK fuzzy system modeling.
    4. A cognitive fault diagnosis system for distributed sensor networks.
    5. Boundedness and complete stability of complex-valued neural networks with time delay.
    6. Fast neuromimetic object recognition using FPGA outperforms GPU implementations.
    7. Improving the quality of self-organizing maps by self-intersection avoidance.
    8. Quantum-based algorithm for optimizing artificial neural networks.
    9. Hinging hyperplanes for time-series segmentation.
    10. Ranking graph embedding for learning to rerank.
    11. Feasibility and finite convergence analysis for accurate on-line v-support vector machine.
    12. Exponential synchronization of coupled switched neural networks with mode-dependent impulsive effects.
    13. Analysis of boundedness and convergence of online gradient method for two-layer feedforward neural networks.
    14. Phase-noise-induced resonance in arrays of coupled excitable neural models.
    15. Call for papers: WCCI 2014.
