On object specificity
[W]e have demonstrated that object specificity follows from the same principle as subject specificity under the EMH. Furthermore, the semantic discrepancy between the realis and irrealis object shift constructions turns out to be a subcase of the more general indicative-modal asymmetry. Although the analysis presented here is anything but conclusive, it does suggest that the EMH is a potent candidate for explaining the indicative-modal asymmetry, as well as for building a general theory of the specificity effects in question.
Active Sampling of Pairs and Points for Large-scale Linear Bipartite Ranking
Bipartite ranking is a fundamental ranking problem that learns to order
relevant instances ahead of irrelevant ones. The pair-wise approach to
bipartite ranking constructs a quadratic number of pairs to solve the problem,
which is infeasible for large-scale data sets. The point-wise approach, albeit
more efficient, often results in inferior performance. That is, it is difficult
to conduct bipartite ranking accurately and efficiently at the same time. In
this paper, we develop a novel active sampling scheme within the pair-wise
approach to conduct bipartite ranking efficiently. The scheme is inspired by
active learning and reaches competitive ranking performance while focusing
on only a small subset of the many pairs during training. Moreover, we propose
a general Combined Ranking and Classification (CRC) framework to conduct
bipartite ranking accurately. The framework unifies the point-wise and
pair-wise approaches and is based simply on the idea of treating each instance
point as a pseudo-pair. Experiments on 14 real-world large-scale data sets
demonstrate that the proposed algorithm of Active Sampling within CRC, when
coupled with a linear Support Vector Machine, usually outperforms
state-of-the-art point-wise and pair-wise ranking approaches in terms of both
accuracy and efficiency.
Comment: a shorter version was presented in ACML 201
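The CRC construction described above can be sketched in a few lines. This is a minimal illustration under assumptions: the function name `crc_training_set`, the uniform sub-sampling of pairs (standing in for the paper's active sampling scheme), and the zero-vector pseudo-pair encoding are choices of this sketch, not details given in the abstract.

```python
import numpy as np

def crc_training_set(X, y, n_sampled_pairs=100, seed=0):
    """Sketch of the CRC idea: unify pair-wise and point-wise ranking.

    Each (relevant, irrelevant) pair (x_i, x_j) becomes a difference
    vector x_i - x_j labeled +1; each single instance x_i is kept as a
    'pseudo-pair' (x_i - 0) with its own label y_i in {+1, -1}.
    Pairs are sub-sampled uniformly at random here, which only stands in
    for the active sampling scheme of the paper.
    """
    rng = np.random.default_rng(seed)
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == -1)
    # sample a small subset of the quadratic number of pairs
    i = rng.choice(pos, size=n_sampled_pairs)
    j = rng.choice(neg, size=n_sampled_pairs)
    pair_X = X[i] - X[j]                  # pair-wise part, labeled +1
    pair_y = np.ones(n_sampled_pairs)
    # point-wise part: each point is a pseudo-pair (x - 0)
    crc_X = np.vstack([pair_X, X])
    crc_y = np.concatenate([pair_y, y])
    return crc_X, crc_y
```

The resulting set can be fed to any binary linear classifier (e.g., a linear SVM, as in the experiments), since both parts share one feature space.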
Singly Cabibbo suppressed decays of with SU(3) flavor symmetry
We analyze the weak processes of anti-triplet charmed baryons decaying to
octet baryons and mesons with the SU(3) flavor symmetry and topological quark
diagram scheme. We study the decay branching ratios without neglecting the
contributions from for the first time in the SU(3)
flavor symmetry approach. The fitting results for the Cabibbo allowed and
suppressed decays of are all consistent with the experimental
data. We predict all singly Cabibbo suppressed decays. In particular, we find
that , which is
slightly below the current experimental upper limit of and
can be tested by the ongoing experiment at BESIII as well as the future one at
Belle-II.
Comment: 11 pages, 2 figures, revised version accepted by PL
Charmed Baryon Weak Decays with SU(3) Flavor Symmetry
We study the semileptonic and non-leptonic charmed baryon decays with the
SU(3) flavor symmetry, where the charmed baryons can be , , , or . With denoted as the baryon
octet (decuplet), we find that the
decays are forbidden, while the ,
, and decays are the only existing Cabibbo-allowed modes
for , , and , respectively. We predict the rarely studied
decays, such as and . For the observation, the doubly and triply charmed baryon decays of
, ,
, and are the favored Cabibbo-allowed decays,
which are accessible to the BESIII and LHCb experiments.Comment: 29 pages, no figure, a typo in the table correcte
Soft Methodology for Cost-and-error Sensitive Classification
Many real-world data mining applications incur different costs for different
types of classification errors and thus call for cost-sensitive classification
algorithms. Existing algorithms for cost-sensitive classification are
successful in terms of minimizing the cost, but can result in a high error
rate as the trade-off. The high error rate holds back the practical use of
those algorithms. In this paper, we propose a novel cost-sensitive
classification methodology that takes both the cost and the error rate into
account. The methodology, called soft cost-sensitive classification, is
established from a multicriteria optimization problem over the cost and the
error rate, and can be viewed as regularizing cost-sensitive classification
with the error rate. The simple methodology allows immediate improvements of
existing cost-sensitive classification algorithms. Experiments on benchmark
and real-world data sets show that our proposed methodology indeed achieves
lower test error rates and similar (sometimes lower) test costs than existing
cost-sensitive classification algorithms. We also demonstrate that the
methodology can be extended to consider a weighted error rate instead of the
original error rate. This extension is useful for tackling unbalanced
classification problems.
Comment: A shorter version appeared in KDD '1