Synchrotron and Inverse Compton Constraints on Lorentz Violations for Electrons
We present a method for constraining Lorentz violation in the electron
sector, based on observations of the photons emitted by high-energy
astrophysical sources. The most important Lorentz-violating operators at the
relevant energies are parameterized by a tensor c^{nu mu} with nine independent
components. If c is nonvanishing, then there may be either a maximum electron
velocity less than the speed of light or a maximum energy for subluminal
electrons; both these quantities will generally depend on the direction of an
electron's motion. From synchrotron radiation, we may infer a lower bound on
the maximum velocity, and from inverse Compton emission, a lower bound on the
maximum subluminal energy. With observational data for both these types of
emission from multiple celestial sources, we may then place bounds on all nine
of the coefficients that make up c. The most stringent bound, on a certain
combination of the coefficients, is at the 6 x 10^(-20) level, and bounds on
the coefficients individually range from the 7 x 10^(-15) level to the 2 x
10^(-17) level. For most of the coefficients, these are the most precise bounds
available, and with newly available data, we can already improve over previous
bounds obtained by the same methods. Comment: 28 pages
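To make the role of c concrete, consider a deliberately simplified isotropic sketch with a single Lorentz-violating coefficient c in place of the full nine-component tensor (this reduction is ours, for illustration only; the paper treats the direction-dependent case):

```latex
% Toy isotropic dispersion relation for the electron:
\[
  E^2 = (1 + 2c)\,p^2 + m^2, \qquad
  v = \frac{\partial E}{\partial p} = \frac{(1 + 2c)\,p}{E}
      \;\xrightarrow{\,p \to \infty\,}\; \sqrt{1 + 2c} \approx 1 + c .
\]
% For c < 0, electrons have a maximum velocity below the speed of light,
% so observing synchrotron emission that requires a velocity above 1 + c
% bounds c from below.  For c > 0, electrons turn superluminal above
\[
  E_{\max} \approx \frac{m}{\sqrt{2c}},
\]
% so inverse Compton emission by electrons of energy E requires
% E < E_{max} and bounds c from above.
```

In the full anisotropic treatment both v_max and E_max depend on the direction of the electron's motion, which is why combining sources at different sky positions can bound all nine components.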
Implementing parameterized types in Java
The goal of this project was to investigate the addition of parameterized types to the Java programming language. Two different parametric polymorphism mechanisms were developed and compared. The first was a preprocessor and the second was a compiler.
Parameterized types allow a programmer to create generic programs. Much as a function parameter allows the value of a variable to be changed each time the function is called, a type parameter allows the type of a variable to be changed. This allows the creation of classes that can have the type on which they operate specified at compile time.
The principal reason for parameterized types is code reuse. Generic, efficient, type-safe libraries are easily created, which programmers can instantiate with a type parameter when the library is needed. This research creates a mechanism that allows two similar classes that differ only in the type of value they operate on to share the same function bodies. Two of the main benefits are reducing programming time and reducing errors in the program [Stroustrup91].
Parameterized types allow for the easy creation of reusable libraries. For example, the Standard Template Library (STL) in C++ relies heavily on parameterized types.
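The mechanism the project investigates is essentially what later shipped in Java as generics. A minimal sketch of the kind of reuse described above (the class and method names here are ours, chosen for illustration, not taken from the project's implementation):

```java
// A generic, type-safe container: one class body serves every element type.
public class Box<T> {
    private T value;

    public void put(T value) { this.value = value; }
    public T get() { return value; }

    public static void main(String[] args) {
        // The same function bodies are reused for String and Integer;
        // ill-typed uses (e.g. s.put(42)) are rejected at compile time.
        Box<String> s = new Box<>();
        s.put("hello");
        Box<Integer> i = new Box<>();
        i.put(42);
        System.out.println(s.get() + " " + i.get());
    }
}
```

This is exactly the function-parameter analogy from above: `T` is bound per instantiation the way an argument is bound per call.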
Kernelization of Cycle Packing with Relaxed Disjointness Constraints
A key result in the field of kernelization, a subfield of parameterized complexity, states that the classic Disjoint Cycle Packing problem, i.e. finding k vertex-disjoint cycles in a given graph G, admits no polynomial kernel unless NP subseteq coNP/poly. However, very little is known about this problem beyond the aforementioned kernelization lower bound (within the parameterized complexity framework). In the hope of clarifying the picture and better understanding the types of "constraints" that separate "kernelizable" from "non-kernelizable" variants of Disjoint Cycle Packing, we investigate two relaxations of the problem. The first variant, which we call Almost Disjoint Cycle Packing, introduces a "global" relaxation parameter t. That is, given a graph G and integers k and t, the goal is to find at least k distinct cycles such that every vertex of G appears in at most t of the cycles. The second variant, Pairwise Disjoint Cycle Packing, introduces a "local" relaxation parameter: we seek at least k distinct cycles such that every two cycles intersect in at most t vertices. While the Pairwise Disjoint Cycle Packing problem admits a polynomial kernel for all t >= 1, the kernelization complexity of Almost Disjoint Cycle Packing reveals an interesting spectrum of upper and lower bounds. In particular, for t = k/c, where c could be a function of k, we obtain a kernel of size O(2^{c^{2}}*k^{7+c}*log^3(k)) whenever c in o(sqrt(k)). Thus the kernel size varies from being sub-exponential when c in o(sqrt(k)), to quasipolynomial when c in o(log^l(k)), l in R_+, and polynomial when c in O(1). We complement these results for Almost Disjoint Cycle Packing by showing that the problem does not admit a polynomial kernel whenever t in O(k^{epsilon}), for any 0 <= epsilon < 1.
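The "global" relaxation is easy to state operationally: a solution is a collection of at least k distinct cycles in which no vertex is used by more than t of them. A small Java sketch of that feasibility check (cycles represented as vertex lists; the class and method names are ours):

```java
import java.util.*;

public class AlmostDisjointCheck {
    // Returns true iff the candidate solution contains at least k distinct
    // cycles and every vertex appears in at most t of them.
    static boolean isAlmostDisjoint(List<List<Integer>> cycles, int k, int t) {
        if (new HashSet<>(cycles).size() < k) return false;
        Map<Integer, Integer> uses = new HashMap<>();
        for (List<Integer> cycle : cycles) {
            // Count each vertex at most once per cycle.
            for (int v : new HashSet<>(cycle)) {
                uses.merge(v, 1, Integer::sum);
            }
        }
        return uses.values().stream().allMatch(c -> c <= t);
    }

    public static void main(String[] args) {
        // Two triangles sharing vertex 0: feasible for t = 2, not for t = 1
        // (t = 1 is exactly the classic vertex-disjoint requirement).
        List<List<Integer>> cycles = List.of(
            List.of(0, 1, 2),
            List.of(0, 3, 4));
        System.out.println(isAlmostDisjoint(cycles, 2, 2)); // true
        System.out.println(isAlmostDisjoint(cycles, 2, 1)); // false
    }
}
```

Setting t = 1 recovers Disjoint Cycle Packing, which is why varying t traces the kernelizable/non-kernelizable boundary the abstract describes.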
Parameterized Concurrent Multi-Party Session Types
Session types have been proposed as a means of statically verifying
implementations of communication protocols. Although prior work has been
successful in verifying some classes of protocols, it does not cope well with
parameterized, multi-actor scenarios with inherent asynchrony. For example, the
sliding window protocol is inexpressible in previously proposed session type
systems. This paper describes System-A, a new typing language which overcomes
many of the expressiveness limitations of prior work. System-A explicitly
supports asynchrony and parallelism, as well as multiple forms of
parameterization. We define System-A and show how it can be used for the static
verification of a large class of asynchronous communication protocols. Comment: In Proceedings FOCLASA 2012, arXiv:1208.432
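To see why the sliding window protocol strains non-parameterized session types: the sender may hold up to n unacknowledged messages in flight, so the protocol's state space itself grows with the parameter n. A toy Java sketch of the sender-side invariant (this is not System-A code; all names are ours):

```java
public class SlidingWindowSender {
    private final int windowSize;   // protocol parameter n
    private int nextSeq = 0;        // next sequence number to send
    private int lastAcked = -1;     // highest acknowledged sequence number

    SlidingWindowSender(int windowSize) { this.windowSize = windowSize; }

    // The sender may transmit only while fewer than n messages are in
    // flight: this guard is the part of the protocol state a session type
    // must track, and its bound depends on the parameter n.
    boolean canSend() { return nextSeq - lastAcked - 1 < windowSize; }

    void send() {
        if (!canSend()) throw new IllegalStateException("window full");
        nextSeq++;
    }

    void ack(int seq) { lastAcked = Math.max(lastAcked, seq); }

    public static void main(String[] args) {
        SlidingWindowSender s = new SlidingWindowSender(2);
        s.send();
        s.send();
        System.out.println(s.canSend()); // false: window full
        s.ack(0);
        System.out.println(s.canSend()); // true: one slot freed
    }
}
```

With windowSize = 1 this degenerates to stop-and-wait, which fixed-arity session types can express; for general n the number of in-flight configurations depends on the parameter, which is the kind of parameterization System-A is designed to capture.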
Towards Parameterized Regular Type Inference Using Set Constraints
We propose a method for inferring \emph{parameterized regular types} for
logic programs as solutions for systems of constraints over sets of finite
ground Herbrand terms (set constraint systems). Such parameterized regular
types generalize \emph{parametric} regular types by extending the scope of the
parameters in the type definitions so that such parameters can relate the types
of different predicates. We propose a number of enhancements to the procedure
for solving the constraint systems that improve the precision of the type
descriptions inferred. The resulting algorithm, together with a procedure to
establish a set constraint system from a logic program, yields a program
analysis that infers tighter safe approximations of the success types of the
program than previous comparable work, offering a new and useful efficiency vs.
precision trade-off. This is supported by experimental results, which show the
feasibility of our analysis.
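The core solving step can be pictured as a least-fixpoint computation over inclusion constraints. A much-simplified Java sketch, assuming constraints of only two forms, X ⊇ {constants} and X ⊇ Y (the systems in the paper range over regular sets of ground Herbrand terms, not flat sets of constants; the class and method names are ours):

```java
import java.util.*;

public class SetConstraintSolver {
    // Current (growing) solution: variable -> set of constants.
    Map<String, Set<String>> solution = new HashMap<>();
    // Inclusion constraints {lhs, rhs}, meaning lhs ⊇ rhs.
    List<String[]> inclusions = new ArrayList<>();

    // Base fact: var ⊇ {consts}.
    void addBase(String var, String... consts) {
        solution.computeIfAbsent(var, k -> new HashSet<>())
                .addAll(Arrays.asList(consts));
    }

    void addInclusion(String lhs, String rhs) {
        inclusions.add(new String[]{lhs, rhs});
        solution.computeIfAbsent(lhs, k -> new HashSet<>());
        solution.computeIfAbsent(rhs, k -> new HashSet<>());
    }

    // Iterate to the least fixpoint: propagate along inclusions until
    // no set grows any further.
    void solve() {
        boolean changed = true;
        while (changed) {
            changed = false;
            for (String[] inc : inclusions) {
                if (solution.get(inc[0]).addAll(solution.get(inc[1]))) {
                    changed = true;
                }
            }
        }
    }

    public static void main(String[] args) {
        SetConstraintSolver s = new SetConstraintSolver();
        s.addBase("p", "a");       // p ⊇ {a}
        s.addBase("q", "b");       // q ⊇ {b}
        s.addInclusion("q", "p");  // q ⊇ p
        s.solve();
        System.out.println(s.solution.get("q")); // q now contains both a and b
    }
}
```

The precision enhancements the abstract mentions concern how much structure of the Herbrand terms the solved sets retain, which this flat-constant toy deliberately ignores.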