The Form is Not a Proper Part in Aristotle's Metaphysics Z.17, 1041b11–33
When Aristotle argues in Metaphysics Z.17, 1041b11–33 that a whole which is not a heap contains 'something else', i.e. the form, besides the elements, it is not clear whether or not the form is a proper part of the whole. I defend the claim that the form is not a proper part within the context of the relevant passage, since the whole is divided into elements, not into elements and the form. Different divisions determine different senses of 'part', and thus the form is not a part in the same sense as the elements are parts. I object to Koslicki's (2006) interpretation, according to which the form is a proper part alongside the elements in a single sense of 'part', although she insists that the form and the elements belong to different categories. I argue that Koslicki's reading involves a category mistake, i.e. the conjunction of items that do not belong to the same category (Goldwater 2018). Since for Aristotle parthood presupposes some kind of similarity of parts, the conjunction of form and elements requires treating these items as somehow belonging to the same category, e.g. 'being', but no such category exists.
Stationary Mixing Bandits
We study the bandit problem where arms are associated with stationary
phi-mixing processes and where rewards are therefore dependent: the question
that arises from this setting is that of recovering some independence by
ignoring the value of some rewards. As we shall see, the bandit problem we
tackle requires us to address the exploration/exploitation/independence
trade-off. To do so, we provide a UCB strategy together with a general regret
analysis for the case where the size of the independence blocks (the ignored
rewards) is fixed and we go a step beyond by providing an algorithm that is
able to compute the size of the independence blocks from the data. Finally, we
give an analysis of our bandit problem in the restless case, i.e., in the
situation where the time counters for all mixing processes simultaneously
evolve.
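The fixed-block variant described above admits a short illustrative sketch: run UCB1 but, within each block of pulls, discard every reward except the last, so that the retained samples are nearly independent once the phi-mixing process has "mixed". The reward model, block size, and function names below are illustrative assumptions, not the paper's exact algorithm.

```python
import math
import random

def blocked_ucb(arms, horizon, block=5):
    """UCB1 on mixing arms, keeping only one reward per block of size
    `block`; the ignored pulls let the process mix so the retained
    sample is nearly independent of earlier ones.  (Sketch only.)"""
    k = len(arms)
    counts = [0] * k      # number of retained samples per arm
    means = [0.0] * k
    t = 0
    while t < horizon:
        if 0 in counts:                   # play each arm once first
            a = counts.index(0)
        else:                             # standard UCB1 index
            a = max(range(k), key=lambda i: means[i]
                    + math.sqrt(2 * math.log(t) / counts[i]))
        for _ in range(block):            # pull `block` times, keep last
            r = arms[a]()
            t += 1
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]
    return max(range(k), key=lambda i: means[i])

random.seed(0)
arms = [lambda: float(random.random() < 0.2),   # suboptimal arm
        lambda: float(random.random() < 0.9)]   # best arm
best = blocked_ucb(arms, 2000, block=5)
```

With two Bernoulli arms this identifies the better arm while using only one reward in five.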
Confusion Matrix Stability Bounds for Multiclass Classification
In this paper, we provide new theoretical results on the generalization
properties of learning algorithms for multiclass classification problems. The
originality of our work is that we propose to use the confusion matrix of a
classifier as a measure of its quality; our contribution is in the line of work
which attempts to set up and study the statistical properties of new evaluation
measures such as, e.g. ROC curves. In the confusion-based learning framework we
propose, we claim that a targeted objective is to minimize the size of the
confusion matrix C, measured through its operator norm ||C||. We derive
generalization bounds on the (size of the) confusion matrix in an extended
framework of uniform stability, adapted to the case of matrix-valued loss.
Pivotal to our study is a very recent matrix concentration inequality that
generalizes McDiarmid's inequality. As an illustration of the relevance of our
theoretical results, we show how two SVM learning procedures can be proved to
be confusion-friendly. To the best of our knowledge, the present paper is the
first that focuses on the confusion matrix from a theoretical point of view.
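The quantity minimized above can be computed directly: build the confusion matrix, zero the diagonal so that only the errors remain, and take the operator (spectral) norm. The row-normalization convention below is one common choice, assumed here; the paper's exact definition of C may differ.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Row-normalized confusion matrix with the diagonal zeroed, so
    C[i, j] is the fraction of class-i examples predicted as class j
    and ||C|| measures only the confusions.  (One convention among
    several; illustrative.)"""
    C = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        C[t, p] += 1.0
    row_sums = C.sum(axis=1, keepdims=True)
    C = np.divide(C, row_sums, out=np.zeros_like(C), where=row_sums > 0)
    np.fill_diagonal(C, 0.0)           # keep only the errors
    return C

y_true = [0, 0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 0, 1, 1, 1, 2, 0, 2]
C = confusion_matrix(y_true, y_pred, 3)
norm = np.linalg.norm(C, 2)            # operator (spectral) norm ||C||
```

A perfect classifier gives the zero matrix, so ||C|| = 0; any confusion strictly increases the norm.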
Decoy Bandits Dueling on a Poset
We address the problem of dueling bandits defined on partially ordered sets,
or posets. In this setting, arms may not be comparable, and there may be
several (incomparable) optimal arms. We propose an algorithm, UnchainedBandits,
that efficiently finds the set of optimal arms of any poset even when pairs of
comparable arms cannot be distinguished from pairs of incomparable arms, with a
set of minimal assumptions. This algorithm relies on the concept of decoys,
which stems from social psychology. For the easier case where the
incomparability information may be accessible, we propose a second algorithm,
SlicingBandits, which takes advantage of this information and achieves a very
significant gain of performance compared to UnchainedBandits. We provide
theoretical guarantees and experimental evaluation for both algorithms.
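In the noise-free limit, the set of optimal arms these algorithms search for is simply the set of maximal elements of the poset. A minimal sketch, assuming a deterministic dominance oracle `beats` (an assumption: the paper's setting replaces this with noisy duels and indistinguishable incomparability):

```python
def maximal_arms(arms, beats):
    """Return the maximal elements of a poset: arms dominated by no
    other arm.  `beats(a, b)` is True iff a strictly dominates b."""
    return [a for a in arms
            if not any(beats(b, a) for b in arms if b != a)]

# Toy poset: divisibility on {2, ..., 10}; a beats b iff b properly
# divides a.  Several incomparable maximal elements exist.
arms = list(range(2, 11))
beats = lambda a, b: a != b and a % b == 0
top = maximal_arms(arms, beats)
```

Here the maximal set contains several incomparable arms, which is exactly why a poset bandit must return a set rather than a single winner.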
From Cutting Planes Algorithms to Compression Schemes and Active Learning
Cutting-plane methods are well-studied localization (and optimization)
algorithms. We show that they provide a natural framework to perform
machine learning --- and not just to solve optimization problems posed by
machine learning --- in addition to their intended optimization use. In
particular, they allow one to learn sparse classifiers and provide good
compression schemes. Moreover, we show that very little effort is required to
turn them into effective active learning methods. This last property provides a
generic way to design a whole family of active learning algorithms from existing
passive methods. We present numerical simulations testifying to the relevance
of cutting-plane methods for passive and active learning tasks.
Comment: IJCNN 2015, Jul 2015, Killarney, Ireland. http://www.ijcnn.org/
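The cutting-plane view of active learning can be illustrated in one dimension, where actively learning a threshold classifier by querying the midpoint of the current version space is exactly binary search: each queried label is a halfspace cut that halves the set of consistent hypotheses. This is a toy illustration of the idea, not the paper's general algorithm.

```python
def active_threshold(label, lo=0.0, hi=1.0, tol=1e-6):
    """Actively learn a 1-D threshold.  The version space is the
    interval [lo, hi] of thresholds consistent with the labels seen so
    far; querying the label at its midpoint cuts it in half, which is
    precisely a cutting-plane step.  `label(x)` returns +1 if x is at
    or above the true threshold, else -1."""
    queries = 0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if label(mid) > 0:      # threshold is below mid: cut the top
            hi = mid
        else:                   # threshold is above mid: cut the bottom
            lo = mid
        queries += 1
    return (lo + hi) / 2.0, queries

theta = 0.37
est, q = active_threshold(lambda x: 1 if x >= theta else -1)
```

Localizing the threshold to precision 1e-6 takes about 20 label queries, versus the roughly 10^6 labeled points a passive learner would need at the same precision.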
On Decoding Schemes for the MDPC-McEliece Cryptosystem
Recently, it has been shown how McEliece public-key cryptosystems based on
moderate-density parity-check (MDPC) codes allow for very compact keys compared
to variants based on other code families. In this paper, classical (iterative)
decoding schemes for MDPC codes are considered. The algorithms are analyzed
with respect to their error-correction capability as well as their resilience
against a recently proposed reaction-based key-recovery attack on a variant of
the MDPC-McEliece cryptosystem by Guo, Johansson and Stankovski (GJS). New
message-passing decoding algorithms are presented and analyzed. Two proposed
decoding algorithms have an improved error-correction performance compared to
existing hard-decision decoding schemes and are resilient against the GJS
reaction-based attack for an appropriate choice of the algorithm's parameters.
Finally, a modified belief propagation decoding algorithm that is resilient
against the GJS reaction-based attack is presented.
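A classical hard-decision scheme of the kind analyzed above is Gallager-style bit flipping: repeatedly flip the bits participating in the most unsatisfied parity checks. The sketch below uses a tiny (7,4) Hamming code as a stand-in for an MDPC code; the flip rule and parameters are illustrative, not the paper's tuned, attack-resilient variants.

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    """Hard-decision bit-flipping decoder: flip the bits involved in
    the largest number of unsatisfied parity checks until the syndrome
    vanishes or the iteration budget runs out.  (Illustrative sketch.)"""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H.dot(x) % 2
        if not syndrome.any():
            return x                           # all checks satisfied
        counts = syndrome.dot(H)               # unsatisfied checks per bit
        x = (x + (counts == counts.max())) % 2 # flip the worst bits
    return x

# (7,4) Hamming parity-check matrix as a toy stand-in for MDPC.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.zeros(7, dtype=int)
received = codeword.copy()
received[2] ^= 1                               # inject a single bit error
decoded = bit_flip_decode(H, received)
```

The GJS attack exploits the dependence of this decoder's failure rate on the key; the paper's point is to choose flip thresholds so that failures leak no key information.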
Unconfused Ultraconservative Multiclass Algorithms
We tackle the problem of learning linear classifiers from noisy datasets in a
multiclass setting. The two-class version of this problem was studied a few
years ago by, e.g. Bylander (1994) and Blum et al. (1996): in these
contributions, the proposed approaches to fight the noise revolve around a
Perceptron learning scheme fed with peculiar examples computed through a
weighted average of points from the noisy training set. We propose to build
upon these approaches and we introduce a new algorithm called UMA (for
Unconfused Multiclass additive Algorithm) which may be seen as a generalization
to the multiclass setting of the previous approaches. In order to characterize
the noise we use the confusion matrix as a multiclass extension of the
classification noise studied in the aforementioned literature. Theoretically
well-founded, UMA furthermore displays very good empirical noise robustness, as
evidenced by numerical simulations conducted on both synthetic and real data.
Keywords: Multiclass classification, Perceptron, Noisy labels, Confusion Matrix
Comment: ACML, Australia (2013)
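The ultraconservative additive update that UMA generalizes can be sketched as a plain multiclass perceptron: promote the true class vector, demote the mistakenly predicted one. The confusion-driven weighted averaging that makes UMA noise-tolerant is not shown; the data and names below are illustrative assumptions.

```python
import numpy as np

def multiclass_perceptron(X, y, n_classes, epochs=20):
    """Ultraconservative multiclass perceptron: on a mistake, add the
    example to the true class's weight vector and subtract it from the
    predicted one.  UMA builds on this scheme but feeds it weighted
    averages of noisy points to cancel the confusion noise."""
    W = np.zeros((n_classes, X.shape[1]))
    for _ in range(epochs):
        for x, label in zip(X, y):
            pred = int(np.argmax(W.dot(x)))
            if pred != label:
                W[label] += x     # promote the correct class
                W[pred] -= x      # demote the wrongly predicted one
    return W

rng = np.random.default_rng(0)
# Three well-separated 2-D blobs, one per class, plus a bias feature.
X = np.vstack([rng.normal(c, 0.3, size=(30, 2))
               for c in ([0, 0], [3, 0], [0, 3])])
X = np.hstack([X, np.ones((X.shape[0], 1))])
y = np.repeat([0, 1, 2], 30)
W = multiclass_perceptron(X, y, 3)
acc = (np.argmax(X.dot(W.T), axis=1) == y).mean()
```

On separable data this converges to zero training error; UMA's contribution is keeping such guarantees when the labels pass through a confusion-matrix noise process.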
Protograph-Based LDPC Code Design for Shaped Bit-Metric Decoding
A protograph-based low-density parity-check (LDPC) code design technique for
bandwidth-efficient coded modulation is presented. The approach jointly
optimizes the LDPC code node degrees and the mapping of the coded bits to the
bit-interleaved coded modulation (BICM) bit-channels. For BICM with uniform
input and for BICM with probabilistic shaping, binary-input symmetric-output
surrogate channels for the code design are used. The constructed codes for
uniform inputs perform as well as the multi-edge type codes of Zhang and
Kschischang (2013). For 8-ASK and 64-ASK with probabilistic shaping, codes of
rates 2/3 and 5/6 with blocklength 64800 are designed, which operate within
0.63 dB and 0.69 dB of continuous AWGN capacity for a target frame error rate of
1e-3 at spectral efficiencies of 1.38 and 4.25 bits/channel use, respectively.
Comment: 9 pages, 10 figures. arXiv admin note: substantial text overlap with
arXiv:1501.0559
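The protograph construction underlying such designs expands a small base matrix into a full parity-check matrix by replacing each edge with a circulant permutation block (quasi-cyclic lifting). The base matrix and lifting size below are toy values; the paper's actual contribution, optimizing the node degrees and the bit-channel mapping, is not reproduced here.

```python
import numpy as np

def lift_protograph(base, Z, rng):
    """Lift a protograph base matrix into a QC-LDPC parity-check
    matrix: each 1 in the base becomes a random Z x Z circulant
    permutation block, each 0 a Z x Z zero block."""
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=int)
    I = np.eye(Z, dtype=int)
    for i in range(m):
        for j in range(n):
            if base[i, j]:
                shift = rng.integers(Z)          # random circulant shift
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, shift, axis=1)
    return H

# Toy base matrix; real protographs are chosen by degree optimization.
base = np.array([[1, 1, 1, 0],
                 [0, 1, 1, 1]])
H = lift_protograph(base, Z=4, rng=np.random.default_rng(1))
```

Lifting preserves the protograph's degree profile: every check row of H inherits the weight of its base row, which is what makes the base-matrix optimization carry over to the full code.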
High-Throughput Random Access via Codes on Graphs
Recently, contention resolution diversity slotted ALOHA (CRDSA) has been
introduced as a simple but effective improvement to slotted ALOHA. It relies on
MAC burst repetitions and on interference cancellation to increase the
normalized throughput of a classic slotted ALOHA access scheme. CRDSA allows
achieving a larger throughput than slotted ALOHA, at the price of an increased
average transmitted power. A way to trade-off the increment of the average
transmitted power and the improvement of the throughput is presented in this
paper. Specifically, it is proposed to divide each MAC burst into k sub-bursts,
and to encode them via a (n,k) erasure correcting code. The n encoded
sub-bursts are transmitted over the MAC channel, according to specific
time/frequency-hopping patterns. Whenever n-e>=k sub-bursts (of the same burst)
are received without collisions, erasure decoding allows recovering the
remaining e sub-bursts (which were lost due to collisions). An interference
cancellation process can then take place, removing in e slots the interference
caused by the e recovered sub-bursts, possibly allowing the correct decoding of
sub-bursts related to other bursts. The process is thus iterated as for the
CRDSA case.
Comment: Presented at the Future Network and MobileSummit 2010 Conference,
Florence (Italy), June 2010
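The recovery condition n-e >= k above can be checked with a short simulation. Note the simplifying (and optimistic) assumption that sub-burst collisions are independent with a fixed probability; real slot collisions are correlated and resolved iteratively, so this is only an illustration of the erasure-coding gain.

```python
import random

def burst_recovered(n, k, p_collision, rng):
    """A burst split into k sub-bursts and encoded with an (n, k) MDS
    erasure code is recovered iff at least k of the n transmitted
    sub-bursts escape collision, i.e. n - e >= k."""
    received = sum(rng.random() > p_collision for _ in range(n))
    return received >= k

rng = random.Random(42)
trials = 10000
# (6, 3) code, 30% per-sub-burst collision probability (toy values).
rate = sum(burst_recovered(6, 3, 0.3, rng) for _ in range(trials)) / trials
```

With these toy numbers roughly 93% of bursts are recovered before any interference cancellation, versus 70% per sub-burst without coding; the iterative cancellation step then feeds these recoveries back to clean further slots.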
Caching at the Edge with Fountain Codes
We address the use of linear random fountain code caching schemes in a
heterogeneous satellite network. We consider a system composed of multiple hubs
and a geostationary Earth orbit satellite. Coded content is stored in the hubs'
caches in order to serve user requests immediately and reduce the usage of
the satellite backhaul link. We derive the analytical expression of the average
backhaul rate, as well as a tight upper bound to it with a simple expression.
Furthermore, we derive the optimal caching strategy which minimizes the average
backhaul rate and compare the performance of the linear random fountain code
scheme to that of a scheme using maximum distance separable codes. Our
simulation results indicate that the performance obtained using fountain codes
is similar to that of maximum distance separable codes.
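For a linear random fountain code, a request can be served from cached symbols iff the random binary coefficient matrix of the collected symbols has full rank over GF(2); with `extra` symbols of overhead this succeeds with probability roughly 1 - 2^-extra, which is why the performance approaches that of MDS codes. A small sketch with illustrative parameters:

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]      # move pivot row up
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]                  # eliminate the column
        rank += 1
    return rank

def decode_prob(k, trials=1000, extra=2, rng=None):
    """Fraction of trials in which k + `extra` random binary
    combinations of k source packets are decodable, i.e. the random
    coefficient matrix has full rank k over GF(2)."""
    rng = rng or np.random.default_rng(0)
    ok = 0
    for _ in range(trials):
        A = rng.integers(0, 2, size=(k + extra, k))
        ok += gf2_rank(A) == k
    return ok / trials

p = decode_prob(10)
```

An MDS code would decode from any k symbols with probability 1; the fountain code pays a small random-rank penalty, which the simulation makes visible.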