Darboux cyclides and webs from circles
Motivated by potential applications in architecture, we study Darboux
cyclides. These algebraic surfaces of order at most 4 are a superset of Dupin
cyclides and quadrics, and they carry up to six real families of circles.
Revisiting the classical approach to these surfaces based on the spherical
model of 3D Moebius geometry, we provide computational tools for the
identification of circle families on a given cyclide and for the direct design
of those. In particular, we show that certain triples of circle families may be
arranged as so-called hexagonal webs, and we provide a complete classification
of all possible hexagonal webs of circles on Darboux cyclides.
Comment: 34 pages, 20 figures
Combination of linear classifiers using score function -- analysis of possible combination strategies
In this work, we address the issue of combining linear classifiers using
their score functions. The value of the score function depends on the
distance from the decision boundary. Two score functions were tested and
four different combination strategies were investigated. In the
experimental study, the proposed approach was applied to a heterogeneous
ensemble and compared to two reference methods: majority voting and model
averaging. The comparison was made in terms of seven different quality
criteria. The results show that combination strategies based on the simple
average and the trimmed average are the best of the geometrical combination
strategies.
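The geometric combination described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the classifiers, data, and trim depth are hypothetical, and the score is taken to be the signed distance to each linear decision boundary.

```python
import numpy as np

def signed_distance(w, b, X):
    """Score of a linear classifier: signed distance from the hyperplane w.x + b = 0."""
    return (X @ w + b) / np.linalg.norm(w)

def combine(scores, strategy="mean", trim=1):
    """Reduce an (n_classifiers, n_samples) score matrix to one score per sample."""
    if strategy == "mean":
        return scores.mean(axis=0)
    if strategy == "trimmed":
        # drop the `trim` lowest and highest scores for each sample
        s = np.sort(scores, axis=0)
        return s[trim:-trim or None].mean(axis=0)
    raise ValueError(f"unknown strategy: {strategy}")

# Three hypothetical linear classifiers (w, b) scoring three 2-D samples
X = np.array([[1.0, 2.0], [-1.0, -0.5], [0.5, -2.0]])
models = [(np.array([1.0, 0.0]), 0.0),
          (np.array([0.8, 0.6]), -0.2),
          (np.array([0.0, 1.0]), 0.1)]
scores = np.vstack([signed_distance(w, b, X) for w, b in models])
pred = np.sign(combine(scores, "mean"))  # ensemble decision in {-1, +1}
```

Because the scores are continuous distances rather than votes, a sample that lies deep on one side of most boundaries can outweigh a narrow disagreement, which is the distinguishing feature of this geometric combination.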
Learning Timbre Analogies from Unlabelled Data by Multivariate Tree Regression
This is the Author's Original Manuscript of an article whose final and definitive form, the Version of Record, has been published in the Journal of New Music Research, November 2011, copyright Taylor & Francis. The published article is available online at http://www.tandfonline.com/10.1080/09298215.2011.596938
Machine Learning Outperforms ACC / AHA CVD Risk Calculator in MESA.
Background: Studies have demonstrated that the current US guidelines based on the American College of Cardiology/American Heart Association (ACC/AHA) Pooled Cohort Equations Risk Calculator may underestimate the risk of atherosclerotic cardiovascular disease (CVD) in certain high-risk individuals, missing opportunities for intensive therapy to prevent CVD events. Similarly, the guidelines may overestimate risk in low-risk populations, resulting in unnecessary statin therapy. We used machine learning (ML) to tackle this problem. Methods and Results: We developed an ML Risk Calculator based on support vector machines (SVMs) using a 13-year follow-up data set from MESA (the Multi-Ethnic Study of Atherosclerosis) of 6459 participants who were free of atherosclerotic CVD at baseline. We provided identical input to both risk calculators and compared their performance. We then used the FLEMENGHO study (the Flemish Study of Environment, Genes and Health Outcomes) to validate the model in an external cohort. The ACC/AHA Risk Calculator, based on a 7.5% 10-year risk threshold, recommended a statin to 46.0% of participants. Despite this high proportion, 23.8% of the 480 "Hard CVD" events occurred in those not recommended a statin, resulting in sensitivity 0.76, specificity 0.56, and AUC 0.71. In contrast, the ML Risk Calculator recommended a statin to only 11.4%, and only 14.4% of "Hard CVD" events occurred in those not recommended a statin, resulting in sensitivity 0.86, specificity 0.95, and AUC 0.92. Similar results were found for prediction of "All CVD" events. Conclusions: The ML Risk Calculator outperformed the ACC/AHA Risk Calculator by recommending less drug therapy yet missing fewer events. Additional studies are underway to validate the ML model in other cohorts and to explore its ability in short-term CVD risk prediction.
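The sensitivity/specificity comparison in this study reduces to a simple confusion-matrix calculation once each calculator's risk score is thresholded into a statin recommendation. The sketch below is illustrative only: the risk scores and outcomes are a made-up toy cohort, not MESA data.

```python
def sens_spec(events, recommended):
    """events, recommended: parallel lists of booleans (event occurred / statin advised)."""
    tp = sum(e and r for e, r in zip(events, recommended))
    fn = sum(e and not r for e, r in zip(events, recommended))
    tn = sum(not e and not r for e, r in zip(events, recommended))
    fp = sum(not e and r for e, r in zip(events, recommended))
    sensitivity = tp / (tp + fn)   # fraction of events that were flagged for therapy
    specificity = tn / (tn + fp)   # fraction of non-events left untreated
    return sensitivity, specificity

# Toy cohort: hypothetical 10-year risk scores, thresholded at 7.5%
risks  = [0.02, 0.09, 0.12, 0.05, 0.30, 0.01, 0.08, 0.04]
events = [False, True, True, False, True, False, False, False]
rec = [r >= 0.075 for r in risks]
sens, spec = sens_spec(events, rec)
```

A calculator that recommends therapy to fewer people while missing fewer events, as reported for the ML model above, raises both numbers at once.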
Parameter Tuning Using Gaussian Processes
Most machine learning algorithms require us to set up their parameter values before applying these algorithms to solve problems. Appropriate parameter settings will bring good performance while inappropriate parameter settings generally result in poor modelling. Hence, it is necessary to acquire the “best” parameter values for a particular algorithm before building the model. The “best” model not only reflects the “real” function and is well fitted to existing points, but also gives good performance when making predictions for new points with previously unseen values.
A number of methods have been proposed to optimize parameter values. The basic idea of all such methods is a trial-and-error process; in contrast, the work presented in this thesis employs Gaussian process (GP) regression to optimize the parameter values of a given machine learning algorithm. In this thesis, we consider the optimization of only two-parameter learning algorithms. All the possible parameter values are specified on a 2-dimensional grid. To avoid brute-force search, Gaussian Process Optimization (GPO) makes use of “expected improvement” to pick promising points rather than validating every point of the grid step by step. The point with the highest expected improvement is evaluated using cross-validation, and the resulting data point is added to the training set for the Gaussian process model. This process is repeated until a stopping criterion is met. The final model is built using the learning algorithm with the best parameter values identified in this process.
In order to test the effectiveness of this optimization method on regression and classification problems, we use it to optimize the parameters of some well-known machine learning algorithms, such as decision tree learning, support vector machines and boosting with trees. Through the analysis of experimental results obtained on datasets from the UCI repository, we find that the GPO algorithm yields competitive performance compared with a brute-force approach, while exhibiting a distinct advantage in terms of training time and number of cross-validation runs. Overall, the GPO method is a promising method for the optimization of parameter values in machine learning.
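The “expected improvement” criterion at the heart of GPO has a closed form under a Gaussian posterior. The sketch below is a minimal illustration, not the thesis code: it assumes the GP posterior mean `mu` and standard deviation `sigma` at each grid point are already available (the GP regression itself is omitted), and the grid values are hypothetical SVM parameters.

```python
import math

def expected_improvement(mu, sigma, best):
    """EI for minimisation: E[max(best - f, 0)] when f ~ N(mu, sigma^2)."""
    if sigma == 0.0:
        return max(best - mu, 0.0)
    z = (best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (best - mu) * cdf + sigma * pdf

# Hypothetical GP posterior over a 2-D parameter grid (e.g. C, gamma for an SVM)
grid  = [(1, 0.1), (1, 1.0), (10, 0.1), (10, 1.0)]
mu    = [0.20, 0.15, 0.12, 0.18]   # predicted cross-validation error at each point
sigma = [0.02, 0.05, 0.01, 0.04]   # posterior uncertainty at each point
best  = 0.14                       # best CV error observed so far
ei = [expected_improvement(m, s, best) for m, s in zip(mu, sigma)]
next_point = grid[ei.index(max(ei))]  # evaluate this point with cross-validation next
```

Note how EI balances exploitation and exploration: a point whose predicted error is already below `best` scores highly, but so can an uncertain point whose mean is mediocre.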
Weakly-supervised evidence pinpointing and description
We propose a learning method to identify which specific regions and features of images contribute to a certain classification. In the medical imaging context, these can be the evidence regions where abnormalities are most likely to appear, and the discriminative features of these regions that support the pathology classification. The learning is weakly supervised, requiring only the pathological labels and no other prior knowledge. The method can also be applied to learn a salient description of an anatomy that discriminates it from its background, in order to localise the anatomy before a classification step. We formulate evidence pinpointing as a sparse descriptor learning problem. Because of the large computational complexity, the objective function is composed stochastically and optimised by the Regularised Dual Averaging (RDA) algorithm. We demonstrate that the learnt feature descriptors contain more specific and more discriminative information than hand-crafted descriptors, contributing to superior performance on the tasks of anatomy localisation and pathology classification respectively. We apply our method to the problem of lumbar spinal stenosis, localising and classifying vertebrae in MRI images. Experimental results show that our method, when trained with only target labels, achieves better or competitive performance on both tasks compared with strongly-supervised methods requiring labels and multiple landmarks. A further improvement is achieved with training on additional weakly annotated data, which gives robust localisation with an average error within 2 mm and classification accuracies close to human performance.
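Why RDA suits sparse descriptor learning can be seen from its update rule, which soft-thresholds the running average of gradients and therefore produces exact zeros. The sketch below is a minimal illustration using the standard l1-regularised dual-averaging closed form on a toy squared loss; the loss, features, and the `lam`/`gamma` settings are assumptions, not the paper's actual stochastic objective.

```python
import math
import random

def rda_l1(grad_fn, dim, steps=200, lam=0.3, gamma=1.0, seed=0):
    """l1-regularised dual averaging: returns a sparse weight vector."""
    rng = random.Random(seed)
    w = [0.0] * dim
    gbar = [0.0] * dim                 # running average of stochastic subgradients
    for t in range(1, steps + 1):
        g = grad_fn(w, rng)
        for i in range(dim):
            gbar[i] += (g[i] - gbar[i]) / t
            if abs(gbar[i]) <= lam:    # soft-thresholding drives coordinates
                w[i] = 0.0             # to exactly zero, giving sparsity
            else:
                sign = 1.0 if gbar[i] > 0 else -1.0
                w[i] = -(math.sqrt(t) / gamma) * (gbar[i] - lam * sign)
    return w

# Toy stochastic objective: squared loss where only the first of five
# features carries signal, so RDA should zero out the rest.
def grad(w, rng):
    x = [rng.gauss(0.0, 1.0) for _ in range(5)]
    y = 2.0 * x[0]
    err = sum(wi * xi for wi, xi in zip(w, x)) - y
    return [err * xi for xi in x]

w = rda_l1(grad, dim=5)
```

The averaged gradient of an uninformative feature hovers near zero and stays below the threshold `lam`, so its weight is exactly zero in the final descriptor; the informative weight survives, shrunk by the l1 penalty.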