
    A complexity analysis of statistical learning algorithms

    We apply information-based complexity analysis to support vector machine (SVM) algorithms, with the goal of a comprehensive continuous algorithmic analysis of such methods. This involves complexity measures in which some higher-order operations (e.g., certain optimizations) are treated as primitive for the purposes of measuring complexity. We consider classes of information operators and algorithms made up of scaled families, and investigate the utility of scaling the complexities to minimize error. We examine the division of statistical learning into information and algorithmic components, the complexities of each, and applications to SVM and more general machine learning algorithms. We give applications to SVM algorithms graded into linear and higher-order components, and present an example from biomedical informatics.

    On the probabilistic continuous complexity conjecture

    In this paper we prove the probabilistic continuous complexity conjecture. In continuous complexity theory, this conjecture states that the complexity of solving a continuous problem with probability approaching 1 converges (in this limit) to the complexity of solving the same problem in its worst case. We prove that the conjecture holds if and only if the space of problem elements is uniformly convex. The non-uniformly convex case admits a striking counterexample: the problem of identifying a Brownian path in Wiener space, for which the probabilistic complexity converges to only half of the worst-case complexity in this limit.

    On Some Integrated Approaches to Inference

    We present arguments for the formulation of a unified approach to different standard continuous inference methods from partial information. It is claimed that an explicit partition of information into a priori information (prior knowledge) and a posteriori information (data) is an important way of standardizing inference approaches so that they can be compared on a normative scale, and so that notions of optimal algorithms become farther-reaching. The inference methods considered include neural network approaches, information-based complexity, and Monte Carlo, spline, and regularization methods. The model is an extension of currently used continuous complexity models, with a class of algorithms in the form of optimization methods in which an optimization functional (involving the data) is minimized. This extends the family of current approaches in continuous complexity theory, which includes the use of interpolatory algorithms in worst- and average-case settings.

    Relationships among Interpolation Bases of Wavelet Spaces and Approximation Spaces

    A multiresolution analysis is a nested chain of related approximation spaces. This nesting in turn implies relationships among interpolation bases in the approximation spaces and their derived wavelet spaces. Using these relationships, a necessary and sufficient condition is given for the existence of interpolation wavelets, via analysis of the corresponding scaling functions. It is also shown that any interpolation function for an approximation space plays the role of a special type of scaling function (an interpolation scaling function) when the corresponding family of approximation spaces forms a multiresolution analysis. Based on these interpolation scaling functions, a new algorithm is proposed for constructing the corresponding interpolation wavelets (when they exist in a multiresolution analysis). In simulations, our theorems are tested for several typical wavelet spaces, demonstrating the existence of interpolation wavelets and their construction in a general multiresolution analysis.
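    The notion of an interpolation scaling function can be illustrated with the simplest classical case, which is not taken from the paper: the piecewise-linear "hat" function, the interpolation scaling function for the multiresolution analysis of linear splines. A minimal sketch checking its two defining properties (interpolation at the integers and the two-scale refinement relation):

    ```python
    def hat(x):
        # Piecewise-linear "hat" function: the classical interpolation
        # scaling function for the linear-spline multiresolution analysis.
        return max(0.0, 1.0 - abs(x))

    # Interpolation property: hat(k) = delta_{0,k} at the integers.
    assert all(hat(k) == (1.0 if k == 0 else 0.0) for k in range(-3, 4))

    # Two-scale (refinement) relation:
    #   hat(x) = 0.5*hat(2x+1) + hat(2x) + 0.5*hat(2x-1),
    # checked on a grid of sample points.
    for i in range(-40, 41):
        x = i / 10.0
        lhs = hat(x)
        rhs = 0.5 * hat(2 * x + 1) + hat(2 * x) + 0.5 * hat(2 * x - 1)
        assert abs(lhs - rhs) < 1e-12
    ```

    The nesting of approximation spaces in the abstract is exactly what the refinement relation encodes: the hat function at one resolution is a linear combination of hats at the next finer resolution.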

    On the average uncertainty for systems with nonlinear coupling

    The increased uncertainty and complexity of nonlinear systems have motivated investigators to consider generalized approaches to defining an entropy function. New insights are achieved by defining the average uncertainty in the probability domain as a transformation of entropy functions. The Shannon entropy, when transformed to the probability domain, is the weighted geometric mean of the probabilities. For the exponential and Gaussian distributions, we show that the weighted geometric mean of the distribution is equal to the density of the distribution at the location plus the scale, i.e. at the width of the distribution. The average uncertainty is generalized via the weighted generalized mean, in which the moment is a function of the nonlinear source. Both the Rényi and Tsallis entropies transform to this definition of the generalized average uncertainty in the probability domain. For the generalized Pareto and Student's t-distributions, which are the maximum entropy distributions for these generalized entropies, the appropriate weighted generalized mean also equals the density of the distribution at the location plus scale. A coupled entropy function is proposed, which is equal to the normalized Tsallis entropy divided by one plus the coupling. Comment: 24 pages, including 4 figures and 1 table
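    The claim that the Shannon entropy, transformed to the probability domain, is the weighted geometric mean of the probabilities amounts to the identity exp(-H(p)) = prod_i p_i^{p_i} (each probability weighted by itself). A minimal numerical sketch of this identity, illustrative rather than taken from the paper:

    ```python
    import math

    def shannon_entropy(p):
        # H(p) = -sum p_i * ln(p_i), in nats
        return -sum(pi * math.log(pi) for pi in p if pi > 0)

    def weighted_geometric_mean(p):
        # prod p_i^{p_i}: the geometric mean of the probabilities,
        # each weighted by its own probability
        return math.prod(pi ** pi for pi in p if pi > 0)

    p = [0.5, 0.25, 0.125, 0.125]
    H = shannon_entropy(p)
    G = weighted_geometric_mean(p)
    # The probability-domain average uncertainty equals exp(-H)
    assert abs(G - math.exp(-H)) < 1e-12
    ```

    The same transformation applied to the Rényi and Tsallis entropies replaces the geometric mean by a weighted generalized mean, as the abstract states.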

    Use of the geometric mean as a statistic for the scale of the coupled Gaussian distributions

    The geometric mean is shown to be an appropriate statistic for the scale of a heavy-tailed coupled Gaussian distribution, or equivalently the Student's t distribution. The coupled Gaussian is a member of a family of distributions parameterized by the nonlinear statistical coupling, which is the reciprocal of the degrees of freedom and is proportional to fluctuations in the inverse scale of the Gaussian. Existing estimators of the scale of the coupled Gaussian have relied on estimates of the full distribution, and they suffer from problems related to outliers in heavy-tailed distributions. In this paper, the scale of a coupled Gaussian is proven to be equal to the product of the generalized mean and the square root of the coupling. Numerical computations of the scales of coupled Gaussians using the generalized mean of random samples indicate that only samples from a Cauchy distribution (with coupling parameter one) form an unbiased estimate with diminishing variance for large samples. Nevertheless, we also prove that the scale is a function of the geometric mean, the coupling term, and a harmonic number. Numerical experiments show that this estimator is unbiased with diminishing variance for large samples for a broad range of coupling values. Comment: 17 pages, 5 figures
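    The special case the abstract singles out, coupling one (i.e. the Cauchy distribution), can be illustrated with the standard fact that E[log|X|] = log(scale) for a centered Cauchy, so the geometric mean of |x| over a sample is a consistent estimator of the scale. This is a hedged sketch of the idea, not the authors' estimator (which also involves the coupling term and a harmonic number):

    ```python
    import math
    import random

    def cauchy_sample(scale, n, seed=0):
        # Inverse-CDF sampling: tan(pi*(u - 0.5)) is standard Cauchy for u ~ U(0,1)
        rng = random.Random(seed)
        return [scale * math.tan(math.pi * (rng.random() - 0.5)) for _ in range(n)]

    def geometric_mean_scale(xs):
        # For a centered Cauchy, E[log|X|] = log(scale), so the geometric
        # mean of |x| estimates the scale directly, insensitive to the
        # extreme outliers that wreck moment-based estimators.
        return math.exp(sum(math.log(abs(x)) for x in xs) / len(xs))

    xs = cauchy_sample(scale=2.0, n=100_000)
    est = geometric_mean_scale(xs)
    # est should be close to the true scale 2.0
    ```

    The sample mean and sample variance of a Cauchy sample do not converge at all, which is why a geometric-mean statistic is attractive for this family.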

    Neonatal Euthanasia and the Groningen Protocol

    Neonatal euthanasia has been legal in the Netherlands since 2005. Data indicate that neonatal euthanasia is practiced sub rosa by some clinicians in other countries as well; however, the true extent of neonatal euthanasia practice remains unknown. In this chapter, we review end-of-life options to describe the ethical background in the adult setting and how these translate into the neonatal setting. Further, the ethical arguments in favor of and opposed to allowing euthanasia of infants, and those in favor of and opposed to the use of paralytics in neonatal euthanasia, are presented.

    Resident-generated versus instructor-generated cases in ethics and professionalism training

    BACKGROUND: The emphasis on ethics and professionalism in medical education continues to increase. Indeed, in the United States the ACGME will require residency programs to include professionalism training in all curricula by 2007. Most courses focus on cases generated by the course instructors rather than on cases generated by the trainees. To date, however, there has been no assessment of the utility of these two case discussion formats. In order to determine the utility of instructor-generated cases (IGCs) versus resident-generated cases (RGCs) in ethics and professionalism training, the author developed an innovative course that included both case formats. The IGCs were landmark cases and cases from the experience of the course instructors, while the RGCs were selected by the residents themselves. Residents were then surveyed to assess the strengths and weaknesses of each format. RESULTS: Of twenty-two second- and third-year residents, fourteen completed surveys (response rate 64%). Residents were nearly evenly split in preferring RGCs (38%), IGCs (31%), or not preferring one to the other (31%). 29% stated that they learn more from the RGCs, 21% stated that they learn more from the IGCs, and 50% stated that they did not find a difference in their learning based on format. In general, residents surveyed prefer a mix of formats. Residents tended to find the RGCs more relevant and interesting, and felt the IGCs were necessary to ensure adequate breadth of cases and concepts. CONCLUSION: Based on our relatively small sample at a single institution, we believe that educators should consider incorporating both instructor-generated and resident-generated cases in their ethics and professionalism curricula, and should evaluate the utility of such a model at their own institution. Further work is needed to illuminate other potential improvements in ethics and professionalism education.