1,089 research outputs found
Computerized polar plots by a cathode ray tube/grid overlay method
Overlay is aligned with four calibration dots, so it is not affected by CRT drift or by changes in vertical or horizontal gain when producing Nyquist (frequency-response phase/amplitude) plots. The method produces over 50 plots per hour
Signing on a Postcard
We investigate the problem of signing short messages using a scheme that minimizes the total length of the original message and the appended signature. This line of research was motivated by several postal services interested in stamping machines capable of producing digital signatures. Although several message recovery schemes exist, their security is questionable. This paper proposes variants of DSA and ECDSA allowing partial recovery: the signature is appended to a truncated message, and the discarded bytes are recovered by the verification algorithm
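The bandwidth idea behind partial message recovery can be illustrated with a deliberately simplified sketch. The names below are hypothetical and an HMAC stands in for the paper's actual DSA/ECDSA variants; the point is only that the discarded bytes travel inside the signature and are reconstructed and authenticated by the verifier.

```python
import hmac, hashlib

# Toy illustration of partial message recovery (hypothetical names; a real
# scheme would use the DSA/ECDSA variants from the paper, not an HMAC).
# The signer transmits only a truncated message; the discarded bytes ride
# inside the "signature" and are recovered by the verification algorithm.

def sign_with_recovery(key: bytes, msg: bytes, keep: int):
    """Return (truncated_msg, signature); the signature carries the tail."""
    tag = hmac.new(key, msg, hashlib.sha256).digest()
    hidden = msg[keep:]                 # bytes removed from the transmitted message
    return msg[:keep], hidden + tag     # signature = recoverable bytes || tag

def verify_and_recover(key: bytes, truncated: bytes, sig: bytes):
    """Rebuild the full message from the signature, then check the tag."""
    hidden, tag = sig[:-32], sig[-32:]
    full = truncated + hidden
    if not hmac.compare_digest(tag, hmac.new(key, full, hashlib.sha256).digest()):
        raise ValueError("invalid signature")
    return full

trunc, sig = sign_with_recovery(b"secret-key", b"PAY 100 EUR TO ALICE", keep=8)
assert verify_and_recover(b"secret-key", trunc, sig) == b"PAY 100 EUR TO ALICE"
```

Note that in a real recovery scheme the hidden bytes are folded into the signature's algebraic structure rather than concatenated, which is what makes the total transmitted length shorter than message-plus-signature.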
An efficient quantum algorithm for the hidden subgroup problem in extraspecial groups
Extraspecial groups form a remarkable subclass of p-groups. They are also
present in quantum information theory, in particular in quantum error
correction. We give here a polynomial time quantum algorithm for finding hidden
subgroups in extraspecial groups. Our approach is quite different from the
recent algorithms presented in [17] and [2] for the Heisenberg group, the
extraspecial p-group of size p^3 and exponent p. Exploiting certain nice
automorphisms of the extraspecial groups we define specific group actions which
are used to reduce the problem to hidden subgroup instances in abelian groups
that can be dealt with directly.
Comment: 10 pages
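The Heisenberg group mentioned above has a very concrete description that can be checked by hand. The following sketch (illustrative only, not the paper's algorithm) represents elements as triples over Z_p and verifies the two properties the abstract cites: the group is non-abelian of size p^3, and for odd p every element has order dividing p (exponent p).

```python
# Heisenberg group mod p: triples (a, b, c) over Z_p with the product
#   (a, b, c) * (d, e, f) = (a+d, b+e, c+f+a*e)  (all mod p).
# For odd p this group has size p^3 and exponent p.
p = 5

def mul(x, y):
    a, b, c = x
    d, e, f = y
    return ((a + d) % p, (b + e) % p, (c + f + a * e) % p)

def power(x, n):
    r = (0, 0, 0)            # identity element
    for _ in range(n):
        r = mul(r, x)
    return r

g, h = (1, 0, 0), (0, 1, 0)
assert mul(g, h) != mul(h, g)      # non-abelian
assert all(power((a, b, c), p) == (0, 0, 0)
           for a in range(p) for b in range(p) for c in range(p))  # exponent p
```

Equivalently, these triples are the upper unitriangular 3x3 matrices over F_p, which is why the group shows up naturally in quantum error correction.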
Group Diffie-Hellman Key Exchange Secure against Dictionary Attacks
Group Diffie-Hellman schemes for password-based key exchange are designed to provide a pool of players, communicating over a public network and sharing just a human-memorable password, with a session key (e.g., a key used for multicast data integrity and confidentiality). The fundamental security goal to achieve in this scenario is security against dictionary attacks. While solutions have been proposed to solve this problem, no formal treatment has ever been suggested. In this paper, we define a security model and then present a protocol with its security proof in both the random oracle model and the ideal-cipher model
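What "security against dictionary attacks" rules out can be seen in a deliberately insecure strawman (this is not the paper's protocol): if any value on the wire is a deterministic function of the low-entropy password, a passive eavesdropper can test candidate passwords offline at dictionary speed.

```python
import hashlib, hmac, os

# Strawman handshake (insecure by design): a tag derived directly from the
# password is observable on the network, so an eavesdropper can replay the
# computation for every dictionary word until one matches.

def insecure_handshake(password: str, nonce: bytes) -> bytes:
    key = hashlib.sha256(password.encode()).digest()
    return hmac.new(key, nonce, hashlib.sha256).digest()   # sent in the clear

nonce = os.urandom(16)
observed = insecure_handshake("letmein", nonce)            # eavesdropped value

dictionary = ["password", "123456", "letmein", "qwerty"]
cracked = next(pw for pw in dictionary
               if insecure_handshake(pw, nonce) == observed)
assert cracked == "letmein"    # offline dictionary attack succeeds
```

Password-based key exchange protocols are constructed so that no such password-checkable value ever appears in the transcript; an attacker is forced into online guessing, one password per protocol run.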
A Machine-Checked Formalization of the Generic Model and the Random Oracle Model
Most approaches to the formal analysis of cryptographic protocols make the perfect cryptography assumption, i.e. the hypothesis that there is no way to obtain knowledge about the plaintext pertaining to a ciphertext without knowing the key. Ideally, one would prefer to rely on a weaker hypothesis about the computational cost of gaining information about the plaintext pertaining to a ciphertext without knowing the key. Such a view is permitted by the Generic Model and the Random Oracle Model, which provide non-standard computational models in which one may reason about the computational cost of breaking a cryptographic scheme. Using the proof assistant Coq, we provide a machine-checked account of the Generic Model and the Random Oracle Model
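The Random Oracle Model is usually formalized as a lazily sampled random function: each fresh query receives an independent uniform answer, and repeated queries are answered consistently from a table. The sketch below is an executable analogue of that table-based view, not the paper's Coq development.

```python
import os

# A random oracle as a lazily sampled random function: answers are drawn
# uniformly at random on first query and memoized so repeated queries are
# answered consistently.

class RandomOracle:
    def __init__(self, out_len: int = 32):
        self.table = {}          # query -> previously sampled answer
        self.out_len = out_len

    def query(self, x: bytes) -> bytes:
        if x not in self.table:
            self.table[x] = os.urandom(self.out_len)   # sample on first use
        return self.table[x]

H = RandomOracle()
assert H.query(b"m") == H.query(b"m")   # consistency across repeated queries
assert len(H.query(b"m2")) == 32        # fixed-length uniform outputs
```

Security proofs in this model count the adversary's oracle queries, which is exactly the kind of cost accounting a machine-checked formalization has to make precise.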
Self-consistent theory of reversible ligand binding to a spherical cell
In this article, we study the kinetics of reversible ligand binding to
receptors on a spherical cell surface using a self-consistent stochastic
theory. Binding, dissociation, diffusion and rebinding of ligands are
incorporated into the theory in a systematic manner. We derive explicitly the
time evolution of the ligand-bound receptor fraction p(t) in various regimes.
Contrary to the commonly accepted view, we find that the well-known
Berg-Purcell scaling for the association rate is modified as a function of
time. Specifically, the effective on-rate changes non-monotonically as a
function of time and equals the intrinsic rate at very early as well as late
times, while being approximately equal to the Berg-Purcell value at
intermediate times. The effective dissociation rate, as it appears in the
binding curve or measured in a dissociation experiment, is strongly modified by
rebinding events and assumes the Berg-Purcell value except at very late times,
where the decay is algebraic and not exponential. In equilibrium, the ligand
concentration everywhere in the solution is the same and equals its spatial
mean, thus ensuring that there is no depletion in the vicinity of the cell.
Implications of our results for binding experiments and numerical simulations
of ligand-receptor systems are also discussed.
Comment: 23 pages with 4 figures
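The baseline against which the paper's corrections are stated is well-mixed (mean-field) kinetics, where p(t) obeys dp/dt = k_on·c·(1−p) − k_off·p with the intrinsic rates. The sketch below integrates that baseline with illustrative rate values (not taken from the paper); the paper's result is that diffusion and rebinding replace these intrinsic rates with time-dependent effective ones.

```python
# Well-mixed baseline for the bound-receptor fraction p(t):
#     dp/dt = k_on * c * (1 - p) - k_off * p
# Illustrative parameter values only; this ignores the diffusion and
# rebinding effects that modify the effective rates in the paper.

k_on, k_off, c = 1.0, 0.5, 2.0      # intrinsic on/off rates, ligand concentration
dt, steps = 1e-3, 20000             # forward-Euler step and horizon
p = 0.0
for _ in range(steps):
    p += dt * (k_on * c * (1.0 - p) - k_off * p)

p_eq = k_on * c / (k_on * c + k_off)    # equilibrium bound fraction = 0.8 here
assert abs(p - p_eq) < 1e-3             # long-time limit reaches equilibrium
```

At equilibrium the bound fraction depends only on the rate ratio, consistent with the abstract's observation that the equilibrium ligand concentration is spatially uniform with no depletion near the cell.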
Milagro Constraints on Very High Energy Emission from Short Duration Gamma-Ray Bursts
Recent rapid localizations of short, hard gamma-ray bursts (GRBs) by the
Swift and HETE satellites have led to the observation of the first afterglows
and the measurement of the first redshifts from this type of burst. Detection
of >100 GeV counterparts would place powerful constraints on GRB mechanisms.
Seventeen short duration (< 5 s) GRBs detected by satellites occurred within
the field of view of the Milagro gamma-ray observatory between 2000 January and
2006 December. We have searched the Milagro data for >100 GeV counterparts to
these GRBs and find no significant emission correlated with these bursts. Due
to the absorption of high-energy gamma rays by the extragalactic background
light (EBL), detections are only expected for redshifts less than ~0.5. While
most long duration GRBs occur at redshifts higher than 0.5, the opposite is
thought to be true of short GRBs. Lack of a detected VHE signal thus allows
setting meaningful fluence limits. One GRB in the sample (050509b) has a likely
association with a galaxy at a redshift of 0.225, while another (051103) has
been tentatively linked to the nearby galaxy M81. Fluence limits are corrected
for EBL absorption, either using the known measured redshift, or computing the
corresponding absorption for a redshift of 0.1 and 0.5, as well as for the case
of z=0.
Comment: Accepted for publication in the Astrophysical Journal
Discovery of Localized Regions of Excess 10-TeV Cosmic Rays
An analysis of 7 years of Milagro data performed on a 10-degree angular scale
has found two localized regions of excess of unknown origin with greater than
12 sigma significance. Both regions are inconsistent with gamma-ray emission
with high confidence. One of the regions has a different energy spectrum than
the isotropic cosmic-ray flux at a level of 4.6 sigma, and it is consistent
with hard spectrum protons with an exponential cutoff, with the most
significant excess at ~10 TeV. Potential causes of these excesses are explored,
but no compelling explanations are found.
Comment: Submitted to Phys. Rev. Lett.
Verifiable Elections That Scale for Free
In order to guarantee a fair and transparent voting process, electronic voting schemes must be verifiable. Most of the time, however, it is important that elections also be anonymous. The notion of a verifiable shuffle describes how to satisfy both properties at the same time: ballots are submitted to a public bulletin board in encrypted form, verifiably shuffled by several mix servers (thus guaranteeing anonymity), and then verifiably decrypted by an appropriate threshold decryption mechanism. To guarantee transparency, the intermediate shuffles and decryption results, together with proofs of their correctness, are posted on the bulletin board throughout this process.
In this paper, we present a verifiable shuffle and threshold decryption scheme in which, for security parameter k, L voters, M mix servers, and N decryption servers, the proof that the end tally corresponds to the original encrypted ballots is only O(k(L + M + N)) bits long. Previous verifiable shuffle constructions had proofs of size O(kLM + kLN), which, for elections with thousands of voters, mix servers, and decryption servers, meant that verifying an election on an ordinary computer in a reasonable amount of time was out of the question.
The linchpin of the construction is a controlled-malleable proof (cm-NIZK), which allows each server, in turn, to take the current set of ciphertexts together with a proof that the computation done by the previous servers has proceeded correctly so far. After shuffling or partially decrypting these ciphertexts, the server can also update the proof of correctness, obtaining as a result a cumulative proof that the computation is correct so far. To verify the end result, it is therefore sufficient to verify just the proof produced by the last server.
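The practical gap between the two asymptotics is easy to quantify. Taking the abstract's O(kLM + kLN) versus O(k(L + M + N)) at face value (constants omitted, so these are order-of-magnitude figures, not exact proof sizes), an election with thousands of voters shows a large ratio even for modest server counts:

```python
# Order-of-magnitude proof-size comparison from the abstract's asymptotics.
# Constants are omitted, so these are rough figures, not exact sizes.
k = 256                     # security parameter, in bits
L, M, N = 10_000, 16, 16    # voters, mix servers, decryption servers

old = k * L * M + k * L * N     # previous constructions: O(kLM + kLN)
new = k * (L + M + N)           # this paper:             O(k(L + M + N))

assert old // new >= 30         # roughly a 30x reduction in this setting
print(f"old ~ {old:,} bits, new ~ {new:,} bits, ratio ~ {old / new:.0f}x")
```

Because the old proof size multiplies the voter count by the server counts while the new one only adds them, the advantage grows directly with the number of mix and decryption servers.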