1,032 research outputs found

    Tests for Establishing Security Properties

    Get PDF
    Ensuring strong security properties in some cases requires participants to carry out tests during the execution of a protocol. A classical example is electronic voting: participants are required to verify the presence of their ballots on a bulletin board, and to verify the computation of the election outcome. The notion of certificate transparency is another example, in which participants in the protocol are required to perform tests to verify the integrity of a certificate log. We present a framework for modelling systems with such "testable properties", using the applied pi calculus. We model the tests that are made by participants in order to obtain the security properties. Underlying our work is an attacker model called "malicious but cautious", which lies in between the Dolev-Yao model and the "honest but curious" model. The malicious-but-cautious model is appropriate for cloud computing providers that are potentially malicious but are assumed to be cautious about launching attacks that might cause user tests to fail.
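
    As a toy illustration of such a participant test (a hypothetical Python sketch, not the paper's applied-pi-calculus model; all names are invented): a voter commits to a ballot and later checks that the commitment appears on the bulletin board, so a malicious-but-cautious provider cannot drop the ballot without the test observably failing.

```python
import hashlib

def ballot_commitment(ballot: bytes, nonce: bytes) -> str:
    """Hash commitment a voter can later look up on the bulletin board."""
    return hashlib.sha256(ballot + nonce).hexdigest()

def voter_test(board: set, ballot: bytes, nonce: bytes) -> bool:
    """The voter's test: does the board contain my ballot's commitment?

    A malicious-but-cautious provider avoids dropping the ballot,
    because doing so makes this test fail observably."""
    return ballot_commitment(ballot, nonce) in board

# Example run: the board holds one ballot's commitment.
board = set()
board.add(ballot_commitment(b"candidate-A", b"nonce-123"))
assert voter_test(board, b"candidate-A", b"nonce-123")      # ballot present
assert not voter_test(board, b"candidate-B", b"nonce-123")  # tampering detected
```

    The point of the sketch is only the shape of the argument: the security property is obtained not by preventing misbehaviour but by making misbehaviour detectable through a cheap test the participant runs herself.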

    Fairly Allocating Contiguous Blocks of Indivisible Items

    Full text link
    In this paper, we study the classic problem of fairly allocating indivisible items with the extra feature that the items lie on a line. Our goal is to find a fair allocation that is contiguous, meaning that the bundle of each agent forms a contiguous block on the line. While allocations satisfying the classical fairness notions of proportionality, envy-freeness, and equitability are not guaranteed to exist even without the contiguity requirement, we show the existence of contiguous allocations satisfying approximate versions of these notions that do not degrade as the number of agents or items increases. We also study the efficiency loss of contiguous allocations due to fairness constraints. Comment: Appears in the 10th International Symposium on Algorithmic Game Theory (SAGT), 201
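
    A contiguous allocation on a line is just a list of cut points, which makes the fairness notions easy to state in code. The following minimal Python sketch (illustrative only, not the paper's construction) checks approximate proportionality of such an allocation, where agent i receives the i-th block.

```python
def contiguous_allocation_values(values, cuts):
    """Split items 0..m-1 on a line at the given cut indices, one contiguous
    block per agent; return each agent's valuation of every block."""
    bounds = [0] + list(cuts) + [len(values[0])]
    return [[sum(v[bounds[i]:bounds[i + 1]]) for i in range(len(bounds) - 1)]
            for v in values]

def is_proportional(values, cuts, approx=1.0):
    """approx-proportionality: agent i's own block is worth at least
    (total value / n) / approx to her, for n agents."""
    n = len(values)
    blocks = contiguous_allocation_values(values, cuts)
    return all(blocks[i][i] * approx * n >= sum(values[i]) for i in range(n))

# Two agents, four items on a line, one cut after item 1:
# each agent values her own half at 6 of a total 8, so the allocation
# is (exactly) proportional.
vals = [[3, 3, 1, 1], [1, 1, 3, 3]]
print(is_proportional(vals, cuts=[2]))  # True
```

    Envy-freeness and equitability admit the same cut-point formulation, comparing `blocks[i][i]` against `blocks[i][j]` or against other agents' own-block values.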

    Sensitive PCR method for the detection and real-time quantification of human cells in xenotransplantation systems

    Get PDF
    The sensitive detection of human cells in immunodeficient rodents is a prerequisite for the monitoring of micrometastasis of solid tumours, dissemination of leukaemic cells, or engraftment of haematological cells. We developed a universally applicable polymerase chain reaction (PCR) method for the detection of a human-specific 850-bp fragment of the α-satellite DNA on human chromosome 17. The method allows the detection of one human cell in 10⁶ murine cells and could be established both as a conventional DNA PCR assay for routine screening and as a quantitative real-time PCR assay using TaqMan methodology. It was applied to the following xenotransplantation systems in SCID and NOD/SCID mice: (1) In a limiting dilution assay, cells of the MDA-MB 435 breast carcinoma were injected into the mammary fat pad of NOD/SCID mice. Ten cells per mouse were sufficient to induce a positive PCR signal in liver and lung tissue 30 days after transplantation, an indicator of micrometastasis; at this time a palpable tumour was not yet detectable in the mammary fat pad region. (2) Cells of a newly established human acute lymphatic leukaemia were administered intraperitoneally to SCID mice. These cells disseminated and were detectable as early as day 50 in the peripheral blood of living mice, whereas the manifestation of leukaemia was delayed until day 140. (3) In a transplantation experiment using mature human lymphocytes we wanted to standardise conditions for the successful survival of these cells in NOD/SCID mice. At least 5×10⁷ cells given intravenously were necessary, and the mice had to be conditioned by 2 Gy whole-body irradiation, to obtain positive PCR bands in several organs.
    (4) Engraftment studies with blood stem cells originating from cytapheresis samples of tumour patients or from cord blood were undertaken in NOD/SCID mice in order to define conditions of successful engraftment and to use this model for further optimisation strategies. The PCR method presented allowed a reliable prediction of positive engraftment and agreed well with the results of immunohistochemical or FACS analysis. Altogether, the PCR method developed allows sensitive and reliable detection of low numbers of human cells in immunodeficient hosts. In combination with the real-time (TaqMan) technique it allows an exact quantification of human cells. As the method can be performed with accessible material from living animals, follow-up studies for the monitoring of therapeutic interventions are possible, in which the survival time of the mice as an evaluation criterion can be omitted.
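
    Real-time (TaqMan) quantification of this kind rests on a standard curve relating the threshold cycle (Ct) to the log of the input copy number. The following Python sketch shows the arithmetic; the slope and intercept are illustrative placeholders, not the paper's fitted values, and would in practice be obtained from a dilution series.

```python
def cells_from_ct(ct, slope=-3.32, intercept=38.0):
    """Estimate the input human-cell number from a TaqMan Ct value via a
    standard curve Ct = slope * log10(cells) + intercept.

    A slope of -3.32 corresponds to ~100% amplification efficiency
    (one extra cycle per factor ~2 of template); both parameters here
    are illustrative, not fitted values from the study."""
    return 10 ** ((ct - intercept) / slope)

# On this illustrative curve a Ct of 38 maps to ~1 starting cell,
# and a sample with 10^6 cells crosses threshold 6 * 3.32 cycles earlier,
# matching the reported one-in-a-million sensitivity scale.
print(round(cells_from_ct(38.0)))             # 1
print(round(cells_from_ct(38.0 - 3.32 * 6)))  # 1000000
```

    The same linear-in-log relation is why a real-time assay, unlike an end-point gel band, yields a quantitative rather than merely qualitative readout.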

    Efficient Equilibria in Polymatrix Coordination Games

    Get PDF
    We consider polymatrix coordination games with individual preferences, where every player corresponds to a node in a graph who plays with each neighbor a separate bimatrix game with non-negative symmetric payoffs. In this paper, we study α-approximate k-equilibria of these games, i.e., outcomes where no group of at most k players can deviate such that each member increases his payoff by at least a factor α. We prove that for α ≄ 2 these games have the finite coalitional improvement property (and thus α-approximate k-equilibria exist), while for α < 2 this property does not hold. Further, we derive an almost tight bound of 2α(n−1)/(k−1) on the price of anarchy, where n is the number of players; in particular, it scales from unbounded for pure Nash equilibria (k = 1) to 2α for strong equilibria (k = n). We also settle the complexity of several problems related to the verification and existence of these equilibria. Finally, we investigate natural means to reduce the inefficiency of Nash equilibria. Most promisingly, we show that by fixing the strategies of k players the price of anarchy can be reduced to n/k (and this bound is tight).
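
    The payoff structure of a polymatrix game is simply additive over the edges of the graph. A minimal Python sketch (hypothetical data, not from the paper) of a coordination instance where every edge pays both endpoints 1 exactly when they choose the same strategy:

```python
def payoff(player, strategies, graph, edge_games):
    """Payoff of `player` in a polymatrix game: the sum, over incident
    edges, of the bimatrix payoff for the chosen pair of strategies.
    Symmetric non-negative edge payoffs make it a coordination game."""
    return sum(edge_games[frozenset((player, v))][strategies[player]][strategies[v]]
               for v in graph[player])

# Triangle graph, two strategies; each edge pays 1 iff its endpoints agree.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
agree = [[1, 0], [0, 1]]
games = {frozenset(e): agree for e in [(0, 1), (0, 2), (1, 2)]}

print(payoff(0, [0, 0, 1], graph, games))  # 1: player 0 agrees with 1 only
print(payoff(0, [0, 0, 0], graph, games))  # 2: player 0 agrees with both
```

    In this instance the all-agree profile is a strong equilibrium and also socially optimal; the interesting inefficiency the abstract quantifies arises when individual preferences pull different edges toward different coordinated strategies.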

    Anharmonicities of giant dipole excitations

    Get PDF
    The role of anharmonic effects on the excitation of the double giant dipole resonance is investigated in a simple macroscopic model. Perturbation theory is used to find energies and wave functions of the anharmonic oscillator. The cross sections for the electromagnetic excitation of the one- and two-phonon giant dipole resonances in energetic heavy-ion collisions are then evaluated through a semiclassical coupled-channel calculation. It is argued that the variations of the strength of the anharmonic potential should be combined with appropriate changes in the oscillator frequency, in order to keep the giant dipole resonance energy consistent with the experimental value. When this is taken into account, the effects of anharmonicities on the double giant dipole resonance excitation probabilities are small and cannot account for the well-known discrepancy between theory and experiment.
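
    Schematically, such a perturbative treatment follows the textbook pattern: writing H = H₀ + V with H₀ the harmonic phonon Hamiltonian and V the anharmonic term, the level energies are corrected as (generic second-order perturbation theory, not the paper's specific potential)

```latex
E_n \;\approx\; E_n^{(0)} \;+\; \langle n \,|\, V \,|\, n \rangle
\;+\; \sum_{m \neq n} \frac{\bigl|\langle m \,|\, V \,|\, n \rangle\bigr|^{2}}
                           {E_n^{(0)} - E_m^{(0)}} ,
```

    so the one- and two-phonon (GDR and DGDR) levels shift and their wave functions mix, and the coupled-channel excitation amplitudes are then evaluated with these perturbed states. The abstract's point is that retuning the oscillator frequency to keep the one-phonon energy at its experimental value largely cancels the anharmonic shift of the two-phonon excitation probability.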

    The mean energy, strength and width of triple giant dipole resonances

    Get PDF
    We investigate the mean energy, strength and width of the triple giant dipole resonance using sum rules. Comment: 12 pages

    On the optimality of gluing over scales

    Full text link
    We show that for every α > 0, there exist n-point metric spaces (X, d) where every "scale" admits a Euclidean embedding with distortion at most α, but the whole space requires distortion at least Ω(√(α log n)). This shows that the scale-gluing lemma [Lee, SODA 2005] is tight, and disproves a conjecture stated there. This matching upper bound was known to be tight at both endpoints, i.e. when α = Θ(1) and α = Θ(log n), but nowhere in between. More specifically, we exhibit n-point spaces with doubling constant λ requiring Euclidean distortion Ω(√(log λ log n)), which also shows that the technique of "measured descent" [Krauthgamer et al., Geometric and Functional Analysis] is optimal. We extend this to obtain a similar tight result for L_p spaces with p > 1. Comment: minor revision
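
    For concreteness, the distortion of an embedding f is the product of its worst expansion and worst contraction over all pairs. A small Python sketch (a standard illustration, unrelated to the paper's lower-bound construction) computes it for the 4-cycle embedded on the unit square, which attains the known √2 bound for C₄ into Euclidean space:

```python
import itertools
import math

def distortion(points, embedding, dist):
    """Distortion of a map into R^k: (max pair ratio) / (min pair ratio),
    where a pair's ratio is ||f(x) - f(y)|| / d(x, y)."""
    ratios = [math.dist(embedding[x], embedding[y]) / dist(x, y)
              for x, y in itertools.combinations(points, 2)]
    return max(ratios) / min(ratios)

# Shortest-path metric on the 4-cycle, embedded at the unit square's corners.
pts = [0, 1, 2, 3]
cycle = lambda x, y: min(abs(x - y), 4 - abs(x - y))
square = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
print(distortion(pts, square, cycle))  # 1.4142... = sqrt(2)
```

    Edges keep their length 1 while the two diagonals (graph distance 2) shrink to √2, so the distortion is exactly √2.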

    Coulomb Dissociation of Âč⁷Ne

    Get PDF

    Strategic Capacity Planning Problems in Revenue‐Sharing Joint Ventures

    Full text link
    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/154244/1/poms13128_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/154244/2/poms13128.pd

    Particle emission following Coulomb excitation in ultrarelativistic heavy-ion collisions

    Get PDF
    We study nuclear reactions induced by virtual photons associated with the Lorentz-boosted Coulomb fields of ultrarelativistic heavy ions. Evaporation, fission and multifragmentation mechanisms are included in a new RELDIS code, which describes the deexcitation of residual nuclei formed after single and double photon absorption in peripheral heavy-ion collisions. Partial cross sections for different dissociation channels, including multiple neutron emission, are calculated and compared with data where available. Rapidity and transverse momentum distributions of nucleons, nuclear fragments and pions produced electromagnetically are also calculated. These results provide important information for designing large-rapidity detectors and zero-degree calorimeters at RHIC and LHC. The electromagnetic dissociation of nuclei imposes some constraints on the investigation of exotic particle production in gamma-gamma fusion reactions. Comment: 26 LaTeX pages including 8 figures, uses epsf.st
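
    In the equivalent-photon (Weizsäcker-Williams) picture underlying such calculations, the electromagnetic dissociation cross section folds the photonuclear cross section with the virtual-photon spectrum of the collision partner. Schematically, and only to logarithmic accuracy (this is the generic textbook form, not the impact-parameter-resolved spectrum a code like RELDIS would use),

```latex
\sigma_{\mathrm{ED}} \;=\; \int d\omega \; N(\omega)\,\sigma_{\gamma A}(\omega),
\qquad
N(\omega) \;\approx\; \frac{2 Z^{2} \alpha}{\pi\,\omega}\,
\ln\!\frac{\gamma \hbar c}{\omega\, b_{\min}} ,
```

    where Z and Îł are the charge and Lorentz factor of the partner ion and b_min is roughly the sum of the nuclear radii. The Z² factor and the growth of the logarithm with Îł are what make these electromagnetic channels so prominent at RHIC and LHC energies.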
    • 
