INTERPRETATION OF EXPERIMENTS IN PAUL TRAP
Selected aspects of laser spectroscopy in a Paul trap are discussed. Using the interpretation of experimental results for the europium ion as an example, the advantages of semiempirical atomic-structure calculations are demonstrated. The calculations were carried out with a computer package that we prepared and adapted for use on vector computers.
COMPUTATIONAL PACKAGE FOR ANALYSIS OF THE FINE STRUCTURE OF A FREE ATOM
This work describes computer programs that are available in compiled form to users of the SGI Power Challenge machine at the Poznań Supercomputing and Networking Center. These programs make it possible to solve very complex problems in the structure of free atoms using experimental data. The procedure for determining the Slater integrals, the spin-orbit parameters, and the parameters representing the effect of virtual excitations (many-body effects) is described. The program proposes spectroscopic designations for the energy levels. Moreover, the wavefunctions in the intermediate coupling scheme, in the many-configuration approximation for the selected configuration system, can be obtained.
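The core of such a semiempirical procedure can be illustrated with a minimal sketch: for fixed angular coefficients, the level energies are linear in the radial parameters (Slater integrals, spin-orbit constants), so a least-squares fit to the experimental energies recovers the parameters. All coefficients and energies below are invented for illustration and are not taken from the package itself.

```python
import numpy as np

# Hypothetical angular coefficients: each row gives the contribution of the
# radial parameters (e.g. a Slater integral F2 and a spin-orbit constant zeta)
# to one energy level of the configuration under study.
coeffs = np.array([
    [ 2.0,  1.0],
    [-1.0,  0.5],
    [ 0.5, -1.5],
    [ 1.5,  2.0],
])

# Hypothetical experimental level energies (cm^-1).
e_exp = np.array([2500.0, -300.0, -1000.0, 3200.0])

# Least-squares determination of the radial parameters from the levels.
params, residuals, rank, _ = np.linalg.lstsq(coeffs, e_exp, rcond=None)
rms = np.sqrt(np.mean((coeffs @ params - e_exp) ** 2))
print("fitted parameters:", params)
print("rms deviation:", rms)
```

In a real fine-structure analysis the same idea is iterated: fitted parameters yield wavefunctions, which refine the level assignments, which in turn refine the fit.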
Learning Interpretable Rules for Multi-label Classification
Multi-label classification (MLC) is a supervised learning problem in which,
contrary to standard multiclass classification, an instance can be associated
with several class labels simultaneously. In this chapter, we advocate a
rule-based approach to multi-label classification. Rule learning algorithms are
often employed when one is not only interested in accurate predictions, but
also requires an interpretable theory that can be understood, analyzed, and
qualitatively evaluated by domain experts. Ideally, by revealing patterns and
regularities contained in the data, a rule-based theory yields new insights in
the application domain. Recently, several authors have started to investigate
how rule-based models can be used for modeling multi-label data. Discussing
this task in detail, we highlight some of the problems that make rule learning
considerably more challenging for MLC than for conventional classification.
While mainly focusing on our own previous work, we also provide a short
overview of related work in this area.

Comment: Preprint version. To appear in: Explainable and Interpretable Models in Computer Vision and Machine Learning. The Springer Series on Challenges in Machine Learning. Springer (2018). See http://www.ke.tu-darmstadt.de/bibtex/publications/show/3077 for further information.
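As a toy illustration of the rule-based view of MLC advocated above, consider rules whose bodies test instance features and whose heads assign one or more labels. The features, labels, and rules below are invented for illustration; they are not taken from the chapter.

```python
# Each rule: (condition on the feature dict, set of labels in the head).
# A multi-label head such as {"beach", "sea"} directly expresses a local
# label dependency: these two labels tend to co-occur.
rules = [
    (lambda x: x["has_sand"] and x["has_water"], {"beach", "sea"}),
    (lambda x: x["has_water"],                   {"sea"}),
    (lambda x: x["has_trees"],                   {"forest"}),
]

def predict(x):
    """Union of the heads of all rules that fire on instance x."""
    labels = set()
    for condition, head in rules:
        if condition(x):
            labels |= head
    return labels

x = {"has_sand": True, "has_water": True, "has_trees": False}
print(predict(x))  # {'beach', 'sea'}
```

Unlike standard multiclass classification, the prediction is a *set* of labels, and each fired rule remains individually inspectable by a domain expert.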
Efficient Discovery of Expressive Multi-label Rules using Relaxed Pruning
Being able to model correlations between labels is considered crucial in
multi-label classification. Rule-based models make it possible to expose such
dependencies, e.g., implications, subsumptions, or exclusions, in an
interpretable and human-comprehensible manner. Although the number of possible
label combinations increases exponentially with the number of available labels,
it has been shown that rules with multiple labels in their heads, which are a
natural form to model local label dependencies, can be induced efficiently by
exploiting certain properties of rule evaluation measures and pruning the label
search space accordingly. However, experiments have revealed that multi-label
heads are unlikely to be learned by existing methods due to their
restrictiveness. To overcome this limitation, we propose a plug-in approach
that relaxes the search space pruning used by existing methods in order to
introduce a bias towards larger multi-label heads resulting in more expressive
rules. We further demonstrate the effectiveness of our approach empirically and
show that it does not come with drawbacks in terms of training time or
predictive performance.

Comment: Preprint version. To appear in Proceedings of the 22nd International Conference on Discovery Science, 201
Exploiting Anti-monotonicity of Multi-label Evaluation Measures for Inducing Multi-label Rules
Exploiting dependencies between labels is considered to be crucial for
multi-label classification. Rules are able to expose label dependencies such as
implications, subsumptions or exclusions in a human-comprehensible and
interpretable manner. However, the induction of rules with multiple labels in
the head is particularly challenging, as the number of label combinations which
must be taken into account for each rule grows exponentially with the number of
available labels. To overcome this limitation, algorithms for exhaustive rule
mining typically use properties such as anti-monotonicity or decomposability in
order to prune the search space. In the present paper, we examine whether
commonly used multi-label evaluation metrics satisfy these properties and
therefore are suited to prune the search space for multi-label heads.

Comment: Preprint version. To appear in: Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD) 2018. See http://www.ke.tu-darmstadt.de/bibtex/publications/show/3074 for further information. arXiv admin note: text overlap with arXiv:1812.0005
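The pruning principle discussed above can be sketched generically. If an evaluation measure is anti-monotone over label heads (adding a label to a head can never increase the score), then no superset of a head that already fails a threshold needs to be enumerated. The sketch below uses support, which is anti-monotone, with hypothetical data; the paper itself analyzes which multi-label evaluation metrics have this property.

```python
from itertools import combinations

# Hypothetical training data: the label set of each instance.
instances = [
    {"a", "b"},
    {"a", "b", "c"},
    {"a", "c"},
    {"b", "c"},
]
labels = ["a", "b", "c"]

def support(head):
    """Number of instances whose label set contains the whole head.
    Anti-monotone: support(H1) >= support(H2) whenever H1 is a subset of H2."""
    return sum(1 for y in instances if head <= y)

def frequent_heads(min_support=2):
    """Level-wise enumeration of label heads, pruned via anti-monotonicity:
    every superset of an infrequent head is infrequent as well, so candidates
    of the next size are built only from the heads that survived this level."""
    frequent = []
    current = [frozenset([l]) for l in labels]
    while current:
        current = [h for h in current if support(h) >= min_support]
        frequent.extend(current)
        nxt = {h1 | h2 for h1, h2 in combinations(current, 2)
               if len(h1 | h2) == len(h1) + 1}
        current = sorted(nxt, key=sorted)
    return frequent

print([sorted(h) for h in frequent_heads()])
```

Here the head {a, b, c} (support 1) is discarded without ever being scored against the threshold's survivors' supersets; with hundreds of labels this pruning is what makes exhaustive multi-label head search feasible.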
Multi-Target Prediction: A Unifying View on Problems and Methods
Multi-target prediction (MTP) is concerned with the simultaneous prediction
of multiple target variables of diverse type. Due to its enormous application
potential, it has developed into an active and rapidly expanding research field
that combines several subfields of machine learning, including multivariate
regression, multi-label classification, multi-task learning, dyadic prediction,
zero-shot learning, network inference, and matrix completion. In this paper, we
present a unifying view on MTP problems and methods. First, we formally discuss
commonalities and differences between existing MTP problems. To this end, we
introduce a general framework that covers the above subfields as special cases.
As a second contribution, we provide a structured overview of MTP methods. This
is accomplished by identifying a number of key properties, which distinguish
such methods and determine their suitability for different types of problems.
Finally, we also discuss a few challenges for future research.
On Aggregation in Ensembles of Multilabel Classifiers
While a variety of ensemble methods for multilabel classification have been
proposed in the literature, the question of how to aggregate the predictions of
the individual members of the ensemble has received little attention so far. In
this paper, we introduce a formal framework of ensemble multilabel
classification, in which we distinguish two principal approaches: "predict then
combine" (PTC), where the ensemble members first make loss minimizing
predictions which are subsequently combined, and "combine then predict" (CTP),
which first aggregates information such as marginal label probabilities from
the individual ensemble members, and then derives a prediction from this
aggregation. While both approaches generalize voting techniques commonly used
for multilabel ensembles, they make it possible to explicitly take the target performance
measure into account. Therefore, concrete instantiations of CTP and PTC can be
tailored to concrete loss functions. Experimentally, we show that standard
voting techniques are indeed outperformed by suitable instantiations of CTP and
PTC, and provide some evidence that CTP performs well for decomposable loss
functions, whereas PTC is the better choice for non-decomposable losses.

Comment: 14 pages, 2 figures
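The two aggregation schemes contrasted above can be sketched for label-wise (Hamming-style) prediction: in PTC each ensemble member first commits to a binary label vector and the votes are combined, while in CTP the members' estimated marginal label probabilities are averaged first and a prediction is derived from the aggregate. The probabilities and the 0.5 thresholds below are illustrative assumptions, not the paper's concrete instantiations.

```python
import numpy as np

# Hypothetical marginal label probabilities from 3 ensemble members
# for one instance with 4 labels (rows: members, columns: labels).
probs = np.array([
    [0.9, 0.6, 0.6, 0.2],
    [0.8, 0.6, 0.4, 0.1],
    [0.7, 0.2, 0.9, 0.4],
])

# PTC ("predict then combine"): each member thresholds its own
# probabilities, then the binary predictions are majority-voted.
member_preds = (probs >= 0.5).astype(int)
ptc = (member_preds.mean(axis=0) >= 0.5).astype(int)

# CTP ("combine then predict"): average the probabilities first,
# then derive a single prediction from the aggregate.
ctp = (probs.mean(axis=0) >= 0.5).astype(int)

print("PTC:", ptc)  # [1 1 1 0]
print("CTP:", ctp)  # [1 0 1 0]
```

Note the second label: two members barely favor it (0.6, 0.6) and one strongly rejects it (0.2), so majority voting (PTC) predicts it while the averaged probability (CTP) does not; this is exactly where the choice of scheme, and its fit to the target loss, matters.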
On calibration of nested dichotomies
Nested dichotomies (NDs) are used as a method of transforming a multiclass classification problem into a series of binary problems. A tree structure is induced that recursively splits the set of classes into subsets, and a binary classification model learns to discriminate between the two subsets of classes at each node. In this paper, we demonstrate that these NDs typically exhibit poor probability calibration, even when the binary base models are well-calibrated. We also show that this problem is exacerbated when the binary models are poorly calibrated. We discuss the effectiveness of different calibration strategies and show that accuracy and log-loss can be significantly improved by calibrating both the internal base models and the full ND structure, especially when the number of classes is high.
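A nested dichotomy as described above can be sketched as a binary tree over the class set: each internal node splits its classes into two subsets and holds a binary model, and the probability of a class is the product of the branch probabilities along the path to its leaf. The fixed splits and branch probabilities below are illustrative assumptions; in practice each `p_left` would come from a trained (and, per the paper, ideally calibrated) binary classifier.

```python
# A hypothetical nested dichotomy over classes {1, 2, 3, 4}:
# the root separates {1, 2} from {3, 4}; each child separates its pair.
# Each internal node stores P(left subset | reaching this node).
tree = {
    "classes": ({1, 2}, {3, 4}),
    "p_left": 0.7,
    "left":  {"classes": ({1}, {2}), "p_left": 0.6, "left": None, "right": None},
    "right": {"classes": ({3}, {4}), "p_left": 0.9, "left": None, "right": None},
}

def class_probability(node, c, p=1.0):
    """Multiply branch probabilities along the path from the root to class c."""
    if node is None:
        return p
    left_set, _right_set = node["classes"]
    if c in left_set:
        return class_probability(node["left"], c, p * node["p_left"])
    return class_probability(node["right"], c, p * (1.0 - node["p_left"]))

dist = {c: class_probability(tree, c) for c in (1, 2, 3, 4)}
print(dist)  # the four probabilities sum to 1
```

Because each class probability is a product of several estimates, miscalibration at any node compounds down the tree, which is why calibrating both the base models and the full structure helps.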