On the Disambiguation of Weighted Automata
We present a disambiguation algorithm for weighted automata. The algorithm
admits two main stages: a pre-disambiguation stage followed by a transition
removal stage. We give a detailed description of the algorithm and the proof of
its correctness. The algorithm is not applicable to all weighted automata but
we prove sufficient conditions for its applicability in the case of the
tropical semiring by introducing the *weak twins property*. In particular, the
algorithm can be used with all acyclic weighted automata, which are relevant to
many applications. While disambiguation can sometimes be achieved using
determinization, our disambiguation algorithm in some cases can return a result
that is exponentially smaller than any equivalent deterministic automaton. We
also present some empirical evidence of the space benefits of disambiguation
over determinization in speech recognition and machine translation
applications.
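The semantics in play above can be sketched concretely. The following is a hedged illustration (not the paper's algorithm): a weighted automaton over the tropical semiring (min, +) assigns to each word the minimum, over accepting paths, of the sum of transition weights; the automaton is *ambiguous* when some word has more than one accepting path. The transition encoding and toy automaton are invented for illustration.

```python
def tropical_weight(transitions, initial, final, word):
    """Tropical-semiring weight of `word`: min over accepting paths
    of the sum of transition weights along the path."""
    # current: state -> best (minimal) accumulated weight so far
    current = {q: 0.0 for q in initial}
    for symbol in word:
        nxt = {}
        for q, w in current.items():
            for (src, a, dst, wt) in transitions:
                if src == q and a == symbol:
                    cand = w + wt
                    if dst not in nxt or cand < nxt[dst]:
                        nxt[dst] = cand
        current = nxt
    weights = [w for q, w in current.items() if q in final]
    return min(weights) if weights else float("inf")

# Toy *ambiguous* automaton: the word "ab" has two accepting paths,
# 0-1-3 with weight 1+2 = 3 and 0-2-3 with weight 1+1 = 2.
T = [(0, "a", 1, 1.0), (1, "b", 3, 2.0),
     (0, "a", 2, 1.0), (2, "b", 3, 1.0)]
print(tropical_weight(T, {0}, {3}, "ab"))  # -> 2.0 (min over both paths)
```

A disambiguation algorithm produces an equivalent automaton assigning the same weight to every word but with at most one accepting path per word.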
Exceptional String: Instanton Expansions and Seiberg-Witten Curve
We investigate instanton expansions of partition functions of several toric
E-string models using local mirror symmetry and elliptic modular forms. We also
develop a method to obtain the Seiberg--Witten curve of E-string with arbitrary
Wilson lines with the help of elliptic functions. Comment: 71 pages, three Wilson lines
Off-diagonal impedance in amorphous wires and application to linear magnetic sensors
The magnetic-field behaviour of the off-diagonal impedance in Co-based
amorphous wires is investigated under the condition of sinusoidal (50 MHz) and
pulsed (5 ns rising time) current excitations. For comparison, the field
characteristics of the diagonal impedance are measured as well. In general,
when an alternating current is applied to a magnetic wire the voltage signal is
generated not only across the wire but also in the coil mounted on it. These
voltages are related to the diagonal and off-diagonal impedances,
respectively. It is demonstrated that these impedances have a different
behaviour as a function of axial magnetic field: the former is symmetrical and
the latter is antisymmetrical with a near linear portion within a certain field
interval. In the case of the off-diagonal response, the dc bias current
eliminating circular domains is necessary. The pulsed excitation that combines
both high and low frequency harmonics produces the off-diagonal voltage
response without additional bias current or field, which makes it ideally
suited to practical sensor circuit design. The principles of operation of a
linear magnetic sensor based on a C-MOS transistor circuit are
discussed. Comment: Accepted to IEEE Trans. Magn. (2004)
On the Existence of the Adversarial Bayes Classifier (Extended Version)
Adversarial robustness is a critical property in a variety of modern machine
learning applications. While it has been the subject of several recent
theoretical studies, many important questions related to adversarial robustness
are still open. In this work, we study a fundamental question regarding Bayes
optimality for adversarial robustness. We provide general sufficient conditions
under which the existence of a Bayes optimal classifier can be guaranteed for
adversarial robustness. Our results can provide a useful tool for a subsequent
study of surrogate losses in adversarial robustness and their consistency
properties. This manuscript is the extended version of the paper "On the
Existence of the Adversarial Bayes Classifier" published in NeurIPS. The
results of the original paper did not apply to some non-strictly convex norms.
Here we extend our results to all possible norms. Additionally, we clarify a
missing step in one of our proofs. Comment: 54 pages, 8 figures. Extended
version of the paper "On the Existence of the Adversarial Bayes Classifier"
published in NeurIPS
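The object under study can be stated in a toy form: the adversarial 0-1 loss of a classifier f at a point (x, y) is 1 precisely when *some* perturbation of x within radius eps flips the prediction away from y. The 1-D threshold classifier and grid search below are an invented illustration of this definition, not the paper's setting.

```python
def adversarially_misclassified(f, x, y, eps, grid=101):
    """True if some perturbed input x' with |x' - x| <= eps is
    classified differently from y (checked on a finite grid)."""
    return any(f(x + eps * (2 * k / (grid - 1) - 1)) != y
               for k in range(grid))

f = lambda x: 1 if x >= 0.0 else 0          # 1-D threshold classifier
# With a small budget no perturbation crosses the boundary; with a
# larger one, some x' in [x - eps, x + eps] is pushed below 0.
print(adversarially_misclassified(f, 0.3, 1, eps=0.1))  # -> False
print(adversarially_misclassified(f, 0.3, 1, eps=0.5))  # -> True
```

The adversarial Bayes classifier is then a minimizer of the expected value of this loss; the paper's question is when such a minimizer exists at all.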
Liveness-Based Garbage Collection for Lazy Languages
We consider the problem of reducing the memory required to run lazy
first-order functional programs. Our approach is to analyze programs for
liveness of heap-allocated data. The result of the analysis is used to preserve
only live data---a subset of reachable data---during garbage collection. The
result is an increase in the garbage reclaimed and a reduction in the peak
memory requirement of programs. While this technique has already been shown to
yield benefits for eager first-order languages, the lack of a statically
determinable execution order and the presence of closures pose new challenges
for lazy languages. These require changes both in the liveness analysis itself
and in the design of the garbage collector.
To show the effectiveness of our method, we implemented a copying collector
that uses the results of the liveness analysis to preserve live objects, both
evaluated (i.e., in WHNF) and closures. Our experiments confirm that for
programs running with a liveness-based garbage collector, there is a
significant decrease in peak memory requirements. In addition, a sizable
reduction in the number of collections ensures that in spite of using a more
complex garbage collector, the execution times of programs running with
liveness and reachability-based collectors remain comparable
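The live-versus-reachable distinction that the abstract relies on can be made concrete. In this sketch the heap graph and the liveness set are invented for illustration: a reachability-based collector keeps everything the roots can reach, while a liveness-based collector keeps only the cells the program will actually use.

```python
def reachable(heap, roots):
    """All objects reachable from the roots in a heap given as an
    adjacency map: object -> tuple of objects it points to."""
    seen, stack = set(), list(roots)
    while stack:
        obj = stack.pop()
        if obj in seen:
            continue
        seen.add(obj)
        stack.extend(heap.get(obj, ()))
    return seen

# A cons-list whose program only ever demands the first head: the
# tail cells are reachable from the root but never used, i.e. dead.
heap = {"cell1": ("head1", "cell2"), "cell2": ("head2", "cell3"),
        "cell3": ("head3",)}
roots = {"cell1"}
live = {"cell1", "head1"}   # result of a (hypothetical) liveness analysis

kept_by_reachability = reachable(heap, roots)
kept_by_liveness = kept_by_reachability & live
print(len(kept_by_reachability), len(kept_by_liveness))  # -> 6 2
```

A copying collector guided by the liveness set would evacuate only two objects here instead of six, which is exactly the source of the peak-memory reduction the abstract reports.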
Containment and equivalence of weighted automata: Probabilistic and max-plus cases
This paper surveys results on decision problems for probabilistic and max-plus automata, such as containment and equivalence. Probabilistic and max-plus automata are part of the general family of weighted automata, whose semantics are maps from words to real values. Given two weighted automata, the equivalence problem asks whether their semantics are the same, and the containment problem asks whether one is point-wise smaller than the other. These problems have been studied intensively; this paper reviews some techniques used to show (un)decidability and states a list of questions that remain open
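The semantics being compared can be sketched as follows. This is only an illustration of the definitions on a toy pair of automata, not a decision procedure (equivalence and containment for max-plus automata are undecidable in general): a max-plus automaton maps a word to the maximum, over accepting paths, of the sum of transition weights.

```python
def maxplus(transitions, initial, final, word):
    """Max-plus semantics: max over accepting paths of the sum of
    transition weights; -inf if no accepting path exists."""
    current = {q: 0.0 for q in initial}
    for a in word:
        nxt = {}
        for q, w in current.items():
            for (src, sym, dst, wt) in transitions:
                if src == q and sym == a:
                    nxt[dst] = max(nxt.get(dst, float("-inf")), w + wt)
        current = nxt
    vals = [w for q, w in current.items() if q in final]
    return max(vals, default=float("-inf"))

A = [(0, "a", 0, 1.0)]   # semantics: a^n -> n
B = [(0, "a", 0, 2.0)]   # semantics: a^n -> 2n
words = ["a" * n for n in range(1, 6)]
# On this finite sample, A is point-wise smaller than B (containment
# holds) while their semantics differ (equivalence fails).
print(all(maxplus(A, {0}, {0}, w) <= maxplus(B, {0}, {0}, w) for w in words))
```

The surveyed decision problems ask the same point-wise questions over *all* words, which is what makes them hard.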
Discounting in LTL
In recent years, there has been growing need and interest in formalizing and
reasoning about the quality of software and hardware systems. As opposed to
traditional verification, where one asks whether or not a system satisfies a
given specification, reasoning about quality addresses the
question of \emph{how well} the system satisfies the specification. One
direction in this effort is to refine the "eventually" operators of temporal
logic to {\em discounting operators}: the satisfaction value of a specification
is a value in $[0,1]$, where the longer it takes to fulfill eventuality
requirements, the smaller the satisfaction value is.
In this paper we introduce an augmentation by discounting of Linear Temporal
Logic (LTL), and study it, as well as its combination with propositional
quality operators. We show that one can augment LTL with an arbitrary set of
discounting functions, while preserving the decidability of the model-checking
problem. Further augmenting the logic with unary propositional quality
operators preserves decidability, whereas adding an average-operator makes some
problems undecidable. We also discuss the complexity of the problem, as well as
various extensions.
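The discounted "eventually" can be sketched in a few lines. This is a minimal illustration under assumed notation (the paper allows arbitrary discounting functions; here a single geometric discount factor lam is used): the satisfaction value of a discounted eventuality on a trace is the maximum over positions i at which the proposition holds of lam**i, so later fulfilment is worth less.

```python
def discounted_eventually(trace, p, lam):
    """Satisfaction value of a geometrically discounted 'eventually p'
    on a finite trace: max of lam**i over positions i where p holds,
    and 0.0 if p never holds."""
    return max((lam ** i for i, letter in enumerate(trace) if p in letter),
               default=0.0)

# p holds immediately in the first trace, only at position 3 in the
# second: the delayed fulfilment earns a smaller satisfaction value.
t1 = [{"p"}, set(), set()]
t2 = [set(), set(), set(), {"p"}]
print(discounted_eventually(t1, "p", 0.5))  # -> 1.0
print(discounted_eventually(t2, "p", 0.5))  # -> 0.125
```

Values always land in [0,1], matching the satisfaction-value range described in the abstract.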
Seiberg-Witten prepotential for E-string theory and random partitions
We find a Nekrasov-type expression for the Seiberg-Witten prepotential for
the six-dimensional non-critical E_8 string theory toroidally compactified down
to four dimensions. The prepotential represents the BPS partition function of
the E_8 strings wound around one of the circles of the toroidal
compactification with general winding numbers and momenta. We show that our
expression exhibits expected modular properties. In particular, we prove that
it obeys the modular anomaly equation known to be satisfied by the
prepotential. Comment: 14 pages
Learning with Biased Complementary Labels
In this paper, we study the classification problem in which we have access to
easily obtainable surrogate for true labels, namely complementary labels, which
specify classes that observations do \textbf{not} belong to. Let $Y$ and
$\bar{Y}$ be the true and complementary labels, respectively. We first model
the annotation of complementary labels via transition probabilities
$P(\bar{Y}=i \mid Y=j)$, $i \neq j$, $i,j \in \{1,\ldots,c\}$, where $c$ is the
number of classes. Previous methods implicitly assume that these transition
probabilities are all identical, which is not true in practice because humans are
biased toward their own experience. For example, as shown in Figure 1, if an
annotator is more familiar with monkeys than prairie dogs when providing
complementary labels for meerkats, she is more likely to employ "monkey" as a
complementary label. We therefore reason that the transition probabilities will
be different. In this paper, we propose a framework that contributes three main
innovations to learning with \textbf{biased} complementary labels: (1) It
estimates transition probabilities with no bias. (2) It provides a general
method to modify traditional loss functions and extends standard deep neural
network classifiers to learn with biased complementary labels. (3) It
theoretically ensures that the classifier learned with complementary labels
converges to the optimal one learned with true labels. Comprehensive
experiments on several benchmark datasets validate the superiority of our
method to current state-of-the-art methods. Comment: ECCV 2018 Oral
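One standard way to "modify traditional loss functions" for complementary labels can be sketched with a transition matrix. The matrix Q and probabilities below are invented for illustration, and this is not the paper's exact estimator: with Q[j][i] = P(complementary label i | true class j), the classifier's predicted class probabilities are pushed through Q and an ordinary log loss is taken against the observed complementary label.

```python
import math

def complementary_log_loss(p, Q, bar_y):
    """Log loss against a complementary label bar_y, after mapping the
    model's class probabilities p through the transition matrix Q."""
    # probability that the (biased) annotator emits bar_y
    p_bar = sum(p[j] * Q[j][bar_y] for j in range(len(p)))
    return -math.log(p_bar)

# 3 classes; a biased annotator who, given true class 0, mostly says
# "not class 1" -- the rows of Q need not be uniform off the diagonal.
Q = [[0.0, 0.8, 0.2],
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]
p = [0.7, 0.2, 0.1]        # model's predicted class probabilities
print(round(complementary_log_loss(p, Q, bar_y=1), 3))  # -> 0.494
```

Minimizing this corrected loss pushes the model's probabilities toward classes that make the observed complementary labels likely under Q, which is the mechanism behind the consistency guarantee the abstract states.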