
    Improved Kernel Alignment Regret Bound for Online Kernel Learning

    In this paper, we improve the kernel alignment regret bound for online kernel learning under the hinge loss function. The previous algorithm achieves a regret of $O((\mathcal{A}_T T\ln T)^{\frac{1}{4}})$ at a computational complexity (space and per-round time) of $O(\sqrt{\mathcal{A}_T T\ln T})$, where $\mathcal{A}_T$ is called the kernel alignment. We propose an algorithm whose regret bound and computational complexity are better than previous results. Our results depend on the decay rate of the eigenvalues of the kernel matrix. If the eigenvalues of the kernel matrix decay exponentially, then our algorithm enjoys a regret of $O(\sqrt{\mathcal{A}_T})$ at a computational complexity of $O(\ln^2 T)$. Otherwise, our algorithm enjoys a regret of $O((\mathcal{A}_T T)^{\frac{1}{4}})$ at a computational complexity of $O(\sqrt{\mathcal{A}_T T})$. We extend our algorithm to batch learning and obtain an $O(\frac{1}{T}\sqrt{\mathbb{E}[\mathcal{A}_T]})$ excess risk bound, which improves the previous $O(1/\sqrt{T})$ bound.
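
    The regime switch above hinges on how fast the kernel matrix eigenvalues decay. As a rough illustration (not the paper's algorithm), the following Python sketch inspects the decay profile of a Gaussian Gram matrix's spectrum; the kernel choice, bandwidth, and the log-ratio heuristic are all illustrative assumptions.

```python
import numpy as np

def gram_matrix(X, gamma=1.0):
    """Gaussian (RBF) Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def eigenvalue_decay(K, top=20):
    """Return the top eigenvalues, sorted descending, for inspecting decay."""
    lam = np.linalg.eigvalsh(K)[::-1]
    return lam[:top]

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
lam = eigenvalue_decay(gram_matrix(X))
# If log(lam_i) is roughly linear in i, the spectrum decays (near-)exponentially,
# the regime in which the abstract reports O(sqrt(A_T)) regret at O(ln^2 T) cost.
ratios = np.log(lam[:-1] / lam[1:])
print("successive log-ratios:", np.round(ratios, 3))
```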

    Integration of Multiple Temporal Qualitative Probabilistic Networks in Time Series Environments

    The integration of uncertain information from different time sources is a crucial issue in many applications. In this paper, we propose a method for integrating multiple Temporal Qualitative Probabilistic Networks (TQPNs) in time series environments. First, we present a method for learning a TQPN from time series data: the TQPN's structure is constructed by Dynamic Bayesian Network learning based on Markov Chain Monte Carlo, and the corresponding qualitative influences are obtained from the conditional probabilities. Second, based on rough set theory, we integrate multiple TQPNs into a single QPN that preserves as much information as possible. Specifically, we take the rough-set-based dependency degree as the strength of a qualitative influence, and then formulate rules to resolve the ambiguity-reduction and cycle-deletion problems that arise when integrating different TQPNs. Finally, we verify the feasibility of the integration method through simulation experiments.
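
    The rough-set dependency degree the abstract borrows is the standard measure $\gamma(C, D) = |POS_C(D)| / |U|$: the fraction of objects whose condition-attribute values uniquely determine their decision class. The sketch below computes it for a toy decision table; it illustrates the measure itself, not the paper's integration rules.

```python
from collections import defaultdict

def dependency_degree(rows, cond, dec):
    """gamma(C, D) = |POS_C(D)| / |U|: fraction of rows whose condition-attribute
    values uniquely determine the decision attribute's value."""
    blocks = defaultdict(set)  # condition signature -> decision values seen
    counts = defaultdict(int)  # condition signature -> number of rows
    for row in rows:
        sig = tuple(row[a] for a in cond)
        blocks[sig].add(row[dec])
        counts[sig] += 1
    positive = sum(n for sig, n in counts.items() if len(blocks[sig]) == 1)
    return positive / len(rows)

# Toy decision table: two condition attributes and one decision attribute.
table = [
    {"a": 0, "b": 0, "d": "yes"},
    {"a": 0, "b": 1, "d": "no"},
    {"a": 1, "b": 0, "d": "yes"},
    {"a": 1, "b": 0, "d": "no"},   # same (a, b) as above, different d -> boundary
]
print(dependency_degree(table, cond=("a", "b"), dec="d"))  # 0.5
```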

    Eigenvalues Ratio for Kernel Selection of Kernel Methods

    The selection of the kernel function, which determines the mapping between the input space and the feature space, is of crucial importance to kernel methods. Existing kernel selection approaches commonly use some measure of generalization error, which is usually difficult to estimate and has a slow convergence rate. In this paper, we propose a novel measure, called the eigenvalues ratio (ER), of a tight bound on the generalization error for kernel selection. ER is the ratio between the sum of the main eigenvalues and the sum of the tail eigenvalues of the kernel matrix. Different from most existing measures, ER is defined on the kernel matrix, so it can be estimated easily from the available training data, which makes it usable for kernel selection. We establish tight ER-based generalization error bounds of order $O(\frac{1}{n})$ for several kernel-based methods under certain general conditions, while for most existing measures the convergence rate is at most $O(\frac{1}{\sqrt{n}})$. Finally, to guarantee good generalization performance, we propose a novel kernel selection criterion that minimizes the derived tight generalization error bounds. Theoretical analysis and experimental results demonstrate that our kernel selection criterion is a good choice for kernel selection.
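
    As the abstract defines it, ER compares the head of the kernel spectrum to its tail. A minimal sketch, assuming a user-chosen split index k (the paper's own choice of split, and how ER enters its selection criterion, are not reproduced here):

```python
import numpy as np

def eigenvalues_ratio(K, k):
    """ER = (sum of the k largest eigenvalues) / (sum of the remaining ones)."""
    lam = np.linalg.eigvalsh(K)[::-1]          # eigenvalues, descending
    return lam[:k].sum() / lam[k:].sum()

# Compare ER across candidate kernels (here, Gaussian bandwidths); the paper's
# criterion plugs ER into its generalization bound rather than using it raw.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
sq = np.sum(X**2, axis=1)
d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
for gamma in (0.01, 0.1, 1.0):
    print(gamma, eigenvalues_ratio(np.exp(-gamma * d2), k=10))
```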

    MACT: A Manageable Minimization Allocation System

    Background. Minimization is a case allocation method for randomized controlled trials (RCTs). Evidence suggests that the minimization method achieves groups balanced with respect to numbers and participant characteristics, and can incorporate more prognostic factors than other randomization methods. Although several automatic allocation systems exist (e.g., randoWeb and MagMin), the minimization method is still difficult to implement, and RCTs seldom employ minimization. Therefore, we developed the Minimization Allocation Controlled Trials (MACT) system, a generic, manageable minimization allocation system. System Outline. The MACT system implements minimization allocation via the Web and email. It has a unified interface that manages trials, participants, and allocation, and it simultaneously supports multiple trials, centers, groups, prognostic factors, and factor levels. Methods. Unlike previous systems, MACT utilizes an optimized database that greatly improves manageability. Simulations and Results. MACT was assessed in a series of experiments and evaluations. Relative to simple randomization, minimization produces better balance among groups and similar unpredictability. Applications. MACT has been employed in two RCTs that lasted three years. During this period, MACT steadily and simultaneously satisfied the requirements of the trials. Conclusions. MACT is a manageable, easy-to-use case allocation system. Its outstanding features are attracting more RCTs to the minimization allocation method.
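
    For readers unfamiliar with the method, minimization (in the Pocock-Simon style) assigns each new participant to the arm that minimizes the resulting imbalance across prognostic factors, usually with a random element. The sketch below is a generic illustration of that idea, not MACT's implementation; the marginal-count imbalance score and the probability of following the best arm are simplifying assumptions.

```python
import random

def minimization_assign(counts, arms, participant, p_follow=0.8):
    """Assign a participant to the arm minimizing total marginal imbalance
    across prognostic factors; follow the best arm with probability p_follow."""
    def imbalance_if(arm):
        total = 0
        for factor, level in participant.items():
            trial = {a: counts[a][factor].get(level, 0) for a in arms}
            trial[arm] += 1  # hypothetically add this participant
            total += max(trial.values()) - min(trial.values())
        return total

    best = min(arms, key=imbalance_if)
    chosen = best if random.random() < p_follow else random.choice(arms)
    for factor, level in participant.items():
        counts[chosen][factor][level] = counts[chosen][factor].get(level, 0) + 1
    return chosen

arms = ["treatment", "control"]
counts = {a: {"sex": {}, "age_band": {}} for a in arms}
for p in [{"sex": "F", "age_band": "<40"}, {"sex": "F", "age_band": ">=40"},
          {"sex": "M", "age_band": "<40"}]:
    print(minimization_assign(counts, arms, p))
```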

    Complex Transmission Eigenvalues in One Dimension

    We consider all of the transmission eigenvalues for one-dimensional media. We give some conditions under which complex eigenvalues exist. In the case when the index of refraction is constant, it is shown that all the transmission eigenvalues are real if and only if the index of refraction is an odd number or the reciprocal of an odd number.
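
    For orientation, a standard statement of the one-dimensional interior transmission eigenvalue problem is sketched below; the paper's exact normalization may differ. Here $n(x)$ is the index of refraction, and $k$ is a transmission eigenvalue when a nontrivial pair $(u, v)$ exists.

```latex
% Standard 1D interior transmission eigenvalue problem (illustrative form):
% find k such that some nontrivial pair (u, v) satisfies
\begin{aligned}
  u'' + k^2 n(x)\, u &= 0, \qquad v'' + k^2 v = 0 \quad \text{on } (0, 1),\\
  u(0) &= v(0), \quad u'(0) = v'(0),\\
  u(1) &= v(1), \quad u'(1) = v'(1).
\end{aligned}
% For constant n(x) \equiv n this reduces to a characteristic equation in
% k and n, the setting of the real-eigenvalue criterion stated above.
```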

    Regret Bounds for Online Kernel Selection in Continuous Kernel Space

    Regret bounds for online kernel selection in a finite kernel set have been well studied, and are of order at least $O(\sqrt{NT})$ after $T$ rounds, where $N$ is the number of candidate kernels. But it remains an open problem to achieve sublinear regret bounds for online kernel selection in a continuous kernel space under different learning frameworks. In this paper, to represent different learning frameworks of online kernel selection, we divide online kernel selection approaches in a continuous kernel space into two categories according to the order of selection and training at each round. Then we construct a surrogate hypothesis space that contains all the candidate kernels with bounded norms and inner products, representing the continuously varying hypothesis space. Finally, we decompose the regrets of the proposed online kernel selection categories into different types of instantaneous regrets in the surrogate hypothesis space, and derive optimal regret bounds of order $O(\sqrt{T})$ under mild assumptions, independent of the cardinality of the continuous kernel space. Empirical studies verify the correctness of the theoretical regret analyses.
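
    The decomposition the abstract alludes to can be pictured generically as follows. This is only the standard shape of such an argument, not the paper's exact statement: $f^*$ is the optimal hypothesis for the best kernel in the continuous space, and $\bar{f}$ is a comparator inserted from the surrogate space $\bar{\mathcal{H}}$.

```latex
% Generic regret decomposition via a surrogate comparator (illustrative):
\sum_{t=1}^{T} \ell(f_t(x_t), y_t) - \sum_{t=1}^{T} \ell(f^*(x_t), y_t)
  = \underbrace{\sum_{t=1}^{T} \big[\ell(f_t(x_t), y_t) - \ell(\bar{f}(x_t), y_t)\big]}_{\text{instantaneous regrets in } \bar{\mathcal{H}}}
  + \underbrace{\sum_{t=1}^{T} \big[\ell(\bar{f}(x_t), y_t) - \ell(f^*(x_t), y_t)\big]}_{\text{surrogate approximation}}
```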

    Generalization Analysis for Ranking Using Integral Operator

    The study of the generalization performance of ranking algorithms is one of the fundamental issues in ranking learning theory. Although several generalization bounds have been proposed based on different measures, the convergence rates of the existing bounds are usually at most $O(\sqrt{1/n})$, where $n$ is the size of the data set. In this paper, we derive novel generalization bounds for regularized ranking in a reproducing kernel Hilbert space via the integral operator of the kernel function. We prove that the rates of our bounds are much faster than $O(\sqrt{1/n})$. Specifically, we first introduce a notion of local Rademacher complexity for ranking, called local ranking Rademacher complexity, which is used to measure the complexity of the space of loss functions of the ranking. Then, we use the local ranking Rademacher complexity to obtain a basic generalization bound. Finally, we establish the relationship between the local Rademacher complexity and the eigenvalues of the integral operator, and further derive sharp generalization bounds with faster convergence rates.
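
    For orientation, the standard (non-ranking) local Rademacher complexity restricts the supremum in the Rademacher average to functions of small second moment; the paper's ranking variant adapts this to pairwise loss functions. A reminder of the standard definition:

```latex
% Standard local Rademacher complexity (the paper's ranking variant adapts
% this to pairwise losses): for a function class F, radius r > 0, and
% i.i.d. Rademacher variables \sigma_1, ..., \sigma_n,
\mathcal{R}_n(F; r) \;=\;
  \mathbb{E}\Bigg[\sup_{\substack{f \in F \\ \mathbb{E}[f^2] \le r}}
  \frac{1}{n} \sum_{i=1}^{n} \sigma_i\, f(x_i)\Bigg]
```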

    Infinite Kernel Learning: Generalization Bounds and Algorithms

    Kernel learning is a fundamental problem in both recent research on and applications of kernel methods. Existing kernel learning methods commonly use some measure of generalization error to learn the optimal kernel in a convex (or conic) combination of prescribed basic kernels. However, the generalization bounds derived from these measures usually have slow convergence rates, and the basic kernels are finite in number and must be specified in advance. In this paper, we propose a new kernel learning method based on a novel measure of generalization error, called principal eigenvalue proportion (PEP), which can learn the optimal kernel with sharp generalization bounds over the convex hull of a possibly infinite set of basic kernels. We first derive sharp generalization bounds based on the PEP measure. Then we design two kernel learning algorithms, for finite kernels and infinite kernels respectively, in which the derived sharp generalization bounds are exploited to guarantee faster convergence rates; moreover, basic kernels can be learned automatically in infinite kernel learning instead of being prescribed in advance. Theoretical analysis and empirical results demonstrate that the proposed kernel learning method outperforms state-of-the-art kernel learning methods.
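
    The abstract does not define PEP beyond its name; on one natural reading it is the proportion of the spectrum carried by the principal eigenvalues. The sketch below mirrors that idea on the empirical kernel matrix; the paper's definition is stated for the integral operator and is not reproduced here.

```python
import numpy as np

def principal_eigenvalue_proportion(K, k):
    """A PEP-style empirical quantity (illustrative reading): the share of the
    kernel matrix's spectrum carried by its k largest eigenvalues."""
    lam = np.linalg.eigvalsh(K)[::-1]   # eigenvalues, descending
    return lam[:k].sum() / lam.sum()

rng = np.random.default_rng(0)
X = rng.standard_normal((150, 3))
sq = np.sum(X**2, axis=1)
d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
for gamma in (0.1, 1.0):                # two candidate basic kernels
    print(gamma, principal_eigenvalue_proportion(np.exp(-gamma * d2), k=5))
```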

    Relationship between Hyperuricemia and Haar-Like Features on Tongue Images

    Objective. To investigate differences in the tongue images of subjects with and without hyperuricemia. Materials and Methods. This population-based case-control study was performed in 2012–2013. We collected data from 46 case subjects with hyperuricemia and 46 control subjects, including the results of biochemical examinations and tongue images. Symmetrical Haar-like features based on integral images were extracted from the tongue images. T-tests were performed to determine the ability of the extracted features to distinguish between the case and control groups. We first selected features using the common criterion P < 0.05, then conducted further examination of feature characteristics and feature selection using the means and standard deviations of the distributions in the case and control groups. Results. A total of 115,683 features were selected using the criterion P < 0.05. The maximum area under the receiver operating characteristic curve (AUC) of these features was 0.877. The sensitivity of the feature with the maximum AUC value was 0.800 and its specificity was 0.826 when the Youden index was maximized. Features that performed well were concentrated in the tongue root region. Conclusions. Symmetrical Haar-like features enabled discrimination of subjects with and without hyperuricemia in our sample. The locations of these discriminative features were in agreement with the interpretation of tongue appearance in traditional Chinese and Western medicine.
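
    Haar-like features are differences of rectangular pixel sums, computed in constant time from an integral image. A minimal sketch of that machinery (generic, not the paper's specific symmetric feature set):

```python
import numpy as np

def integral_image(img):
    """ii[r, c] = sum of img[:r, :c]; zero-padded so rectangle sums are O(1)."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from the integral image, in constant time."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def two_rect_haar(ii, r0, c0, r1, c1):
    """Left-right two-rectangle Haar-like feature: left half minus right half."""
    cm = (c0 + c1) // 2
    return rect_sum(ii, r0, c0, r1, cm) - rect_sum(ii, r0, cm, r1, c1)

img = np.arange(36, dtype=float).reshape(6, 6)
ii = integral_image(img)
print(two_rect_haar(ii, 0, 0, 6, 6))
```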

    Approximate Kernel Selection with Strong Approximate Consistency

    Kernel selection is fundamental to the generalization performance of kernel-based learning algorithms. Approximate kernel selection is an efficient kernel selection approach that exploits the convergence property of kernel selection criteria and the computational virtue of kernel matrix approximation. The convergence property is measured by the notion of approximate consistency. For the existing Nyström approximations, whose sampling distributions are independent of the specific learning task at hand, it is difficult to establish strong approximate consistency: they mainly focus on the quality of the low-rank matrix approximation rather than on the performance of the kernel selection criterion used in conjunction with the approximate matrix. In this paper, we propose a novel Nyström approximate kernel selection algorithm by customizing a criterion-driven adaptive sampling distribution for the Nyström approximation, which adaptively reduces the error between the approximate and accurate criteria. We theoretically derive the strong approximate consistency of the proposed algorithm. Finally, we empirically evaluate the approximate consistency of our algorithm compared to state-of-the-art methods.
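
    For context, the standard Nyström method approximates an n x n kernel matrix from m sampled columns as $K \approx C W^{+} C^{\top}$. The sketch below implements that baseline with uniform sampling; the paper's contribution, the criterion-driven adaptive sampling distribution, is not reproduced here.

```python
import numpy as np

def nystrom(K, m, rng):
    """Standard Nystrom approximation K ~= C @ pinv(W) @ C.T from m columns
    sampled uniformly (the paper replaces the uniform distribution with an
    adaptive, criterion-driven one)."""
    n = K.shape[0]
    idx = rng.choice(n, size=m, replace=False)
    C = K[:, idx]            # n x m block of sampled columns
    W = K[np.ix_(idx, idx)]  # m x m intersection block
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 4))
sq = np.sum(X**2, axis=1)
d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
K = np.exp(-0.5 * d2)
K_hat = nystrom(K, m=50, rng=rng)
print("relative error:", np.linalg.norm(K - K_hat) / np.linalg.norm(K))
```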