A Coding Theoretic Study on MLL proof nets
Coding theory is very useful for real-world applications. A notable example
is digital television. Basically, coding theory studies ways of detecting
and/or correcting data that may be corrupted. Moreover, coding theory is an
area of mathematics in which there is an interplay between many branches of
mathematics, e.g., abstract algebra, combinatorics, discrete geometry,
information theory, etc. In this paper we propose a novel approach to
analyzing proof nets of Multiplicative Linear Logic (MLL) via coding theory.
We define families of proof structures and introduce a metric space for each
family. In each family, 1. an MLL proof net is a true code element; 2. a proof
structure that is not an MLL proof net is a false (or corrupted) code element.
The definition of our metrics elegantly reflects the duality of the
multiplicative connectives. We show that in this framework one-error detection
is possible but one-error correction is not. Our proof of the impossibility of
one-error correction is interesting in the sense that a proof-theoretical
property is proved using a graph-theoretical argument. In addition, we show
that affine logic and MLL + MIX are not appropriate for this framework, which
explains why MLL is better suited than such similar logics.
Comment: minor modification
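The detection/correction dichotomy mirrors the classical minimum-distance picture for block codes: a code of minimum Hamming distance 2 can detect any single error but cannot correct one, since a corrupted word may be equidistant from several codewords. A minimal sketch over binary words (a classical illustration, not the paper's proof-net metric):

```python
from itertools import product

def hamming(u, v):
    """Hamming distance between two equal-length tuples."""
    return sum(a != b for a, b in zip(u, v))

# Length-3 binary parity-check code (even-weight words): minimum distance 2.
code = [w for w in product((0, 1), repeat=3) if sum(w) % 2 == 0]
min_dist = min(hamming(u, v) for u in code for v in code if u != v)

def flip(w, i):
    """Corrupt word w by flipping bit i."""
    return tuple(b ^ (j == i) for j, b in enumerate(w))

# One-error detection: a single flip never lands on another codeword.
detects = all(flip(w, i) not in code for w in code for i in range(3))

# No one-error correction: a corrupted word can sit at distance 1 from
# several codewords, so nearest-codeword decoding is ambiguous.
corrupted = flip(code[0], 0)
candidates = [w for w in code if hamming(corrupted, w) == 1]

print(min_dist, detects, len(candidates))  # 2 True 3
```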
Inverse zero-sum problems and algebraic invariants
In this article, we study the maximal cross number of long zero-sumfree
sequences in a finite Abelian group. Regarding this inverse-type problem, we
formulate a general conjecture and prove, among other results, that this
conjecture holds true for finite cyclic groups, finite Abelian p-groups and
finite Abelian groups of rank two. The results obtained here also enable us to
improve, via the resolution of a linear integer program, a result of W. Gao and
A. Geroldinger concerning the minimal number of elements with maximal order in
a long zero-sumfree sequence of a finite Abelian group of rank two.
Comment: 17 pages, to appear in Acta Arithmetica
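To make the objects concrete: over a cyclic group Z/nZ, a sequence is zero-sumfree when no nonempty subsequence sums to 0, and its cross number is the sum of the reciprocals of the orders of its terms. A brute-force sketch (an illustration of the definitions, not the article's methods):

```python
from fractions import Fraction
from itertools import combinations
from math import gcd

def order(g, n):
    """Order of the element g in the cyclic group Z/nZ."""
    return n // gcd(g, n)

def zero_sumfree(seq, n):
    """True if no nonempty subsequence of seq sums to 0 modulo n."""
    return all(sum(c) % n != 0
               for r in range(1, len(seq) + 1)
               for c in combinations(seq, r))

def cross_number(seq, n):
    """Cross number of seq: sum of reciprocals of the orders of its terms."""
    return sum(Fraction(1, order(g, n)) for g in seq)

n = 6
s = (1, 1, 1, 1, 1)                # length n - 1, the longest possible in Z/nZ
print(zero_sumfree(s, n))          # True
print(cross_number(s, n))          # 5/6
print(zero_sumfree((1, 2, 3), n))  # False: 1 + 2 + 3 = 6 = 0 in Z/6Z
```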
Fast rate of convergence in high dimensional linear discriminant analysis
This paper gives a theoretical analysis of high-dimensional linear
discrimination of Gaussian data. We study the excess risk of linear
discriminant rules, emphasizing the poor performance of standard procedures
when the dimension p is larger than the sample size n. The corresponding
theoretical results are non-asymptotic lower bounds. On the other hand, we
propose two discrimination procedures based on dimensionality reduction and
provide associated rates of convergence, which can be O(log(p)/n) under
sparsity assumptions. Finally, all our results rely on a theorem that provides
simple sharp relations between the excess risk and an estimation error
associated with the geometric parameters defining the discrimination rule used.
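One simple instance of such a dimensionality-reduction-based rule (a hedged sketch with synthetic data and parameters of our choosing, not the authors' exact procedures) first selects the coordinates where the estimated class means differ most, then applies a nearest-centroid plug-in rule on that subspace:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n_per, k = 500, 20, 10          # dimension >> sample size; sparsity k

mu = np.zeros(p)
mu[:k] = 2.0                       # sparse mean difference between classes
X0 = rng.normal(size=(n_per, p))           # class 0 sample: N(0, I)
X1 = rng.normal(size=(n_per, p)) + mu      # class 1 sample: N(mu, I)

# Plug-in estimates of the two class means.
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)

# Reduction step: keep the k coordinates with the largest estimated gap.
S = np.argsort(-np.abs(m1 - m0))[:k]

def classify(x):
    """Nearest-centroid rule on the selected coordinates (identity
    covariance is assumed in this sketch)."""
    d0 = np.sum((x[S] - m0[S]) ** 2)
    d1 = np.sum((x[S] - m1[S]) ** 2)
    return int(d1 < d0)

T0 = rng.normal(size=(200, p))
T1 = rng.normal(size=(200, p)) + mu
acc = (sum(classify(x) == 0 for x in T0) +
       sum(classify(x) == 1 for x in T1)) / 400
print(acc)   # close to 1 for this well-separated example
```

Without the selection step, the rule would accumulate estimation noise over all p coordinates, which is what degrades the standard procedure when p > n.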
Low-Complexity Quantized Switching Controllers using Approximate Bisimulation
In this paper, we consider the problem of synthesizing low-complexity
controllers for incrementally stable switched systems. For that purpose, we
establish a new approximation result for the computation of symbolic models
that are approximately bisimilar to a given switched system. The main advantage
over existing results is that it allows us to design naturally quantized
switching controllers for safety or reachability specifications; these can be
pre-computed offline and therefore the online execution time is reduced. Then,
we present a technique to reduce the memory needed to store the control law by
borrowing ideas from algebraic decision diagrams for compact function
representation and by exploiting the non-determinism of the synthesized
controllers. We show the merits of our approach by applying it to a simple
model of temperature regulation in a building.
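The offline/online split can be conveyed on a toy thermostat (a model, grid, and constants of our own choosing, not the paper's case study): quantize the temperature, precompute for every quantized state a switching mode whose successor stays safely inside the interval, and reduce online execution to table lookups.

```python
# Toy thermostat: T' = T + dt * (a*(Te - T) + b*u), mode u in {0, 1} (heater).
a, b, Te, dt = 0.1, 1.0, 10.0, 0.5
LO, HI = 19.0, 23.0            # safety interval
ETA = 0.25                     # state quantization step
MARGIN = ETA / 2               # robustness margin covering quantization error

def step(T, u):
    return T + dt * (a * (Te - T) + b * u)

def quantize(T):
    return round(T / ETA) * ETA

# Offline synthesis on the symbolic model: for each quantized state, keep
# the modes whose successor stays safe with margin, then fix one of them
# (resolving the controller's non-determinism to shrink the table).
states = [LO + i * ETA for i in range(int((HI - LO) / ETA) + 1)]
table = {}
for q in states:
    ok = [u for u in (0, 1) if LO + MARGIN <= step(q, u) <= HI - MARGIN]
    if ok:
        table[quantize(q)] = ok[0]

# Online execution is just one table lookup per step.
T = 20.0
for _ in range(200):
    T = step(T, table[quantize(T)])
    assert LO <= T <= HI
print("safe for 200 steps")
```

The margin is what lets the controller, computed on the grid, remain valid for the real-valued state in between grid points.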
On a combinatorial problem of Erdos, Kleitman and Lemke
In this paper, we study a combinatorial problem originating in the following
conjecture of Erdos and Lemke: given any sequence of n divisors of n,
repetitions being allowed, there exists a subsequence whose elements sum
to n. This conjecture was proved by Kleitman and Lemke, who then extended the
original question to a problem on a zero-sum invariant in the framework of
finite Abelian groups. Building, among others, on earlier works by Alon and
Dubiner and by the author, our main theorem gives a new upper bound for this
invariant in the general case and provides its right order of magnitude.
Comment: 15 pages
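The original Erdos-Lemke statement is easy to verify exhaustively for small n (a brute-force check, unrelated to the proof techniques discussed):

```python
from itertools import combinations, combinations_with_replacement

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def has_subsequence_summing_to(seq, target):
    """True if some nonempty subsequence of seq sums to target."""
    return any(sum(c) == target
               for r in range(1, len(seq) + 1)
               for c in combinations(seq, r))

# Exhaustive check of the Erdos-Lemke statement for small n: every
# sequence of n divisors of n contains a subsequence summing to n.
for n in range(1, 9):
    for seq in combinations_with_replacement(divisors(n), n):
        assert has_subsequence_summing_to(seq, n)
print("verified for n <= 8")
```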
On the existence of zero-sum subsequences of distinct lengths
In this paper, we obtain a characterization of short normal sequences over a
finite Abelian p-group, thus answering positively a conjecture of Gao for a
variety of such groups. Our main result is deduced from a theorem of Alon,
Friedland and Kalai, originally proved so as to study the existence of regular
subgraphs in almost regular graphs. In the special case of elementary p-groups,
Gao's conjecture is solved using Alon's Combinatorial Nullstellensatz. To
conclude, we show that, assuming every integer satisfies Property B, this
conjecture holds in the case of finite Abelian groups of rank two.
Comment: 10 pages, to appear in Rocky Mountain Journal of Mathematics
Plugin procedure in segmentation and application to hyperspectral image segmentation
In this article we give our contribution to the problem of segmentation with
plug-in procedures. We give general sufficient conditions under which plug-in
procedures are efficient, together with an algorithm that satisfies these
conditions, and we apply this algorithm to hyperspectral image segmentation.
Hyperspectral images are images that have both spatial and spectral coherence,
with thousands of spectral bands at each pixel. In the proposed procedure we
combine a dimension reduction technique with a spatial regularisation
technique. This regularisation is based on the mixlet modelisation of
Kolaczyk et al.
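A minimal sketch of such a pipeline (synthetic data of our own making, and a crude 3x3 majority vote standing in for the mixlet-based regularisation):

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, B = 20, 20, 50                 # image height/width, spectral bands

# Synthetic cube: left half is class 0, right half class 1.
spectra = rng.normal(size=(2, B))    # one reference spectrum per class
truth = np.zeros((H, W), dtype=int)
truth[:, W // 2:] = 1
cube = spectra[truth] + rng.normal(0.0, 0.8, (H, W, B))

# Dimension reduction: project each pixel on the top-2 principal axes.
X = cube.reshape(-1, B)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = (Xc @ Vt[:2].T).reshape(H, W, 2)

# Plug-in step: estimate each class mean from a few labelled corner pixels,
# then assign every pixel to the nearest estimated mean.
m0 = Z[:3, :3].reshape(-1, 2).mean(axis=0)    # corner known to be class 0
m1 = Z[:3, -3:].reshape(-1, 2).mean(axis=0)   # corner known to be class 1
seg = (np.linalg.norm(Z - m1, axis=2)
       < np.linalg.norm(Z - m0, axis=2)).astype(int)

# Spatial regularisation: 3x3 majority vote over each pixel's neighbourhood.
pad = np.pad(seg, 1, mode="edge")
votes = sum(pad[i:i + H, j:j + W] for i in range(3) for j in range(3))
seg = (votes >= 5).astype(int)

print((seg == truth).mean())   # fraction of correctly labelled pixels
```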
Unification and Logarithmic Space
We present an algebraic characterization of the complexity classes Logspace
and NLogspace, using an algebra with a composition law based on unification.
This new bridge between unification and complexity classes is inspired by
proof theory, more specifically by linear logic and the Geometry of
Interaction. We show how unification can be used to build a model of
computation by means of specific subalgebras associated with finite
permutation groups. We then prove that whether an observation (the algebraic
counterpart of a program) accepts a word can be decided within logarithmic
space. We also show that the construction can naturally represent pointer
machines, an intuitive way of understanding logarithmic-space computing.
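Since the composition law is built on unification, a short reminder of first-order syntactic unification may help; the following implementation (an illustration of ours, with an ad hoc term encoding) computes a most general unifier or reports failure:

```python
def is_var(t):
    """Variables are uppercase-initial strings; constants are lowercase."""
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Resolve a variable through the substitution as far as possible."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Occurs check: does variable v appear inside term t?"""
    t = walk(t, subst)
    if v == t:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t)

def unify(t1, t2, subst=None):
    """Unify two terms; compound terms are tuples (functor, arg1, ...).
    Returns a substitution dict, or None on failure."""
    if subst is None:
        subst = {}
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        return None if occurs(t1, t2, subst) else {**subst, t1: t2}
    if is_var(t2):
        return unify(t2, t1, subst)
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# unify f(X, b) with f(a, Y): most general unifier {X -> a, Y -> b}
print(unify(("f", "X", "b"), ("f", "a", "Y")))  # {'X': 'a', 'Y': 'b'}
```

The occurs check is what rules out cyclic solutions such as unifying X with f(X), which would otherwise make composition ill-defined.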
Punishing Pharmaceutical Companies for Unlawful Promotion of Approved Drugs: Why the False Claims Act is the Wrong Rx
This article criticizes the shift in focus from correction and compliance to punishment of pharmaceutical companies allegedly violating the Food, Drug, & Cosmetic Act (FD&C Act) prohibitions on unlawful drug promotion. Traditionally, the Food and Drug Administration (FDA) has addressed unlawful promotional activities under the misbranding and new drug provisions of the FD&C Act. Recently though, the Justice Department (DOJ) has expanded the purview of the False Claims Act to include the same allegedly unlawful behavior on the theory that unlawful promotion “induces” physicians to prescribe drugs that result in the filing of false claims for reimbursement. Unchecked and unchallenged, the DOJ has negotiated criminal and civil settlements with individual pharmaceutical companies ranging from just under ten to hundreds of millions of dollars. In part, companies settle these cases to avoid the potential loss of revenue associated with the exclusion regime administered by the U.S. Department of Health and Human Services, under which companies risk losing the right to participate in federal health care programs. Even more disturbing, these settlements allow DOJ to circumvent judicial review of its enforcement approach, preventing any type of accountability for its legal theories or procedures. This article discusses the traditional enforcement methods employed by the FDA as well as the more recent DOJ prosecutions under the False Claims Act. Although it concludes that the FD&C Act should provide the sole means for prosecuting unlawful drug promotion, it also suggests that when prosecuting pharmaceutical companies under either Act, the government must avoid the temptation to mine companies for large settlements in lieu of developing a more coherent and responsible enforcement strategy.