
    Darboux cyclides and webs from circles

    Motivated by potential applications in architecture, we study Darboux cyclides. These algebraic surfaces of order at most 4 are a superset of Dupin cyclides and quadrics, and they carry up to six real families of circles. Revisiting the classical approach to these surfaces based on the spherical model of 3D Moebius geometry, we provide computational tools for the identification of circle families on a given cyclide and for the direct design of such families. In particular, we show that certain triples of circle families may be arranged as so-called hexagonal webs, and we provide a complete classification of all possible hexagonal webs of circles on Darboux cyclides.
    Comment: 34 pages, 20 figures

    Combination of linear classifiers using score function -- analysis of possible combination strategies

    In this work, we address the problem of combining linear classifiers using their score functions, where the value of a score function depends on the distance of a sample from the decision boundary. Two score functions were tested and four different combination strategies were investigated. In the experimental study, the proposed approach was applied to a heterogeneous ensemble and compared against two reference methods: majority voting and model averaging. The comparison was made in terms of seven different quality criteria. The results show that, among the geometrical combination strategies, those based on the simple average and the trimmed average perform best.
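The combination scheme this abstract describes can be illustrated with a minimal sketch. The names below are hypothetical, and the signed-distance score together with the simple and trimmed averages are just two plausible readings of the strategies the abstract mentions, not the authors' implementation:

```python
import numpy as np

def score(w, b, X):
    # Signed distance of each sample from the decision boundary w.x + b = 0
    # of one linear classifier (positive on the class-1 side).
    return (X @ w + b) / np.linalg.norm(w)

def combine(scores, strategy="mean", trim=1):
    # scores: (n_classifiers, n_samples) array of signed-distance scores.
    if strategy == "mean":
        combined = scores.mean(axis=0)
    elif strategy == "trimmed":
        # Drop the `trim` largest and smallest scores per sample, then average.
        s = np.sort(scores, axis=0)
        combined = s[trim:-trim].mean(axis=0)
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    # Final decision: class 1 if the combined score is on the positive side.
    return (combined > 0).astype(int)
```

With three classifiers scoring two samples, e.g. `combine(np.array([[1.0, -2.0], [0.5, -0.5], [-0.2, 3.0]]), "trimmed")`, the trimmed average ignores the single outlying score per sample before deciding.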

    Learning Timbre Analogies from Unlabelled Data by Multivariate Tree Regression

    This is the Author's Original Manuscript of an article whose final and definitive form, the Version of Record, has been published in the Journal of New Music Research, November 2011, copyright Taylor & Francis. The published article is available online at http://www.tandfonline.com/10.1080/09298215.2011.596938

    Parameter Tuning Using Gaussian Processes

    Most machine learning algorithms require their parameter values to be set before they are applied to a problem. Appropriate parameter settings bring good performance, while inappropriate settings generally result in poor models. Hence, it is necessary to acquire the “best” parameter values for a particular algorithm before building the model. The “best” model not only reflects the “real” function and fits the existing points well, but also gives good performance when making predictions for new points with previously unseen values. A number of methods have been proposed to optimize parameter values. The basic idea of all such methods is a trial-and-error process; the work presented in this thesis instead employs Gaussian process (GP) regression to optimize the parameter values of a given machine learning algorithm. In this thesis, we consider the optimization of two-parameter learning algorithms only, with all candidate parameter values specified on a 2-dimensional grid. To avoid brute-force search, Gaussian Process Optimization (GPO) uses “expected improvement” to pick useful points rather than validating every point of the grid step by step. The point with the highest expected improvement is evaluated using cross-validation, and the resulting data point is added to the training set for the Gaussian process model. This process is repeated until a stopping criterion is met. The final model is built using the learning algorithm with the best parameter values identified in this process. To test the effectiveness of this optimization method on regression and classification problems, we use it to optimize the parameters of some well-known machine learning algorithms, such as decision tree learning, support vector machines and boosting with trees.
Through the analysis of experimental results obtained on datasets from the UCI repository, we find that the GPO algorithm yields competitive performance compared with a brute-force approach, while exhibiting a distinct advantage in terms of training time and number of cross-validation runs. Overall, the GPO method is a promising method for the optimization of parameter values in machine learning
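The GPO loop described above (fit a GP to the points evaluated so far, pick the grid point with the highest expected improvement, evaluate it, repeat) can be sketched as follows. This is not the thesis code: the toy `objective` stands in for a cross-validated accuracy score over a two-parameter grid, and the grid size, Matern kernel, initial design and fixed iteration budget are all assumptions:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(p):
    # Stand-in for cross-validated accuracy of a two-parameter learner
    # (e.g. an SVM's C and gamma); maximum is at p = (0.3, 0.7).
    return -((p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2)

# 2-dimensional grid of candidate parameter settings.
g = np.linspace(0.0, 1.0, 21)
grid = np.array([[a, b] for a in g for b in g])

# Small random initial design, evaluated with the (expensive) objective.
rng = np.random.default_rng(0)
idx = rng.choice(len(grid), size=3, replace=False)
X = grid[idx]
y = np.array([objective(p) for p in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(15):                      # fixed budget as the stopping criterion
    gp.fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    best = y.max()
    # Expected improvement over the best value observed so far.
    z = (mu - best) / sigma
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    nxt = grid[np.argmax(ei)]            # most promising grid point
    X = np.vstack([X, nxt])
    y = np.append(y, objective(nxt))

best_params = X[np.argmax(y)]            # parameter values to build the final model
```

Only 18 of the 441 grid points are ever evaluated, which is the advantage over brute-force search that the abstract reports.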

    Weakly-supervised evidence pinpointing and description

    We propose a learning method to identify which specific regions and features of images contribute to a certain classification. In the medical imaging context, these can be the evidence regions where abnormalities are most likely to appear, together with the discriminative features of those regions that support the pathology classification. The learning is weakly supervised, requiring only the pathological labels and no other prior knowledge. The method can also be applied to learn a salient description of an anatomy that discriminates it from its background, in order to localise the anatomy before a classification step. We formulate evidence pinpointing as a sparse descriptor learning problem. Because of the large computational complexity, the objective function is composed in a stochastic way and optimised by the Regularised Dual Averaging (RDA) algorithm. We demonstrate that the learnt feature descriptors contain more specific and better discriminative information than hand-crafted descriptors, contributing to superior performance on the tasks of anatomy localisation and pathology classification respectively. We apply our method to the problem of lumbar spinal stenosis, localising and classifying vertebrae in MRI images. Experimental results show that our method, when trained with only target labels, achieves better or competitive performance on both tasks compared with strongly-supervised methods requiring labels and multiple landmarks. A further improvement is achieved by training on additional weakly annotated data, which gives robust localisation with average error within 2 mm and classification accuracies close to human performance.
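The Regularised Dual Averaging algorithm mentioned in this abstract has a simple closed-form update for l1 regularisation: keep a running average of the stochastic gradients and soft-threshold that average at every step, which produces exactly sparse weights. A minimal sketch on l1-regularised logistic regression follows; this is a stand-in objective, not the paper's sparse descriptor learning problem, and the function name and constants are assumptions:

```python
import numpy as np

def rda_l1_logistic(X, y, lam=0.05, gamma=1.0, epochs=5, seed=0):
    # Regularised Dual Averaging for l1-regularised logistic regression:
    # average all stochastic gradients seen so far, then solve the RDA
    # proximal step in closed form (soft-thresholding).
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    gbar = np.zeros(d)          # running average of stochastic gradients
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            p = 1.0 / (1.0 + np.exp(-np.clip(X[i] @ w, -30, 30)))
            g = (p - y[i]) * X[i]            # stochastic logistic gradient
            gbar += (g - gbar) / t           # update the running average
            # Closed-form RDA step: soft-threshold the averaged gradient,
            # coordinates with |gbar| <= lam become exactly zero.
            shrunk = np.sign(gbar) * np.maximum(np.abs(gbar) - lam, 0.0)
            w = -(np.sqrt(t) / gamma) * shrunk
    return w
```

On toy data where only the first two of ten features determine the label, the returned weight vector recovers their signs and sets most of the irrelevant coordinates exactly to zero, which is the sparsity property that motivates using RDA for descriptor selection.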
