
    Space-Time Tradeoffs for Distributed Verification

    Verifying that a network configuration satisfies a given boolean predicate is a fundamental problem in distributed computing. Many variations of this problem have been studied, for example, in the context of proof labeling schemes (PLS), locally checkable proofs (LCP), and non-deterministic local decision (NLD). In all of these contexts, verification time is assumed to be constant. Korman, Kutten and Masuzawa [PODC 2011] presented a proof-labeling scheme for MST with poly-logarithmic verification time and logarithmic memory at each vertex. In this paper we introduce the notion of a $t$-PLS, which allows the verification procedure to run for super-constant time. Our work analyzes the tradeoffs of $t$-PLS between time, label size, message length, and computation space. We construct a universal $t$-PLS and prove that it uses the same amount of total communication as a known one-round universal PLS, with labels smaller by a factor of $t$. In addition, we provide a general technique to prove lower bounds on space-time tradeoffs for $t$-PLS. We use this technique to show an optimal tradeoff for testing that a network is acyclic (cycle free). Our optimal $t$-PLS for acyclicity uses label size and computation space $O((\log n)/t)$. We further describe a recursive $O(\log^* n)$-space verifier for acyclicity which does not assume prior knowledge of the run-time $t$.
    Comment: Pre-proceedings version of paper presented at the 24th International Colloquium on Structural Information and Communication Complexity (SIROCCO 2017)
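
As context for the scheme above, the classical one-round flavor of acyclicity verification can be simulated in a few lines. The sketch below implements a standard distance-labeling proof (a textbook one-round PLS, not the $t$-round scheme constructed in the paper); the graph encoding and the exact local checks are illustrative assumptions:

```python
# One-round PLS sketch for acyclicity: the prover labels each vertex with
# its claimed distance to a chosen root. Each vertex accepts iff every
# incident edge connects labels differing by exactly one, and it has
# exactly one neighbour one step closer to the root (unless its label is 0).

def verify(adj, label):
    """adj: {v: set of neighbours}; label: {v: claimed distance to root}."""
    for v, nbrs in adj.items():
        for u in nbrs:
            if abs(label[v] - label[u]) != 1:   # edge labels must differ by 1
                return False
        if label[v] > 0:
            parents = [u for u in nbrs if label[u] == label[v] - 1]
            if len(parents) != 1:               # exactly one parent
                return False
    return True

# A path (a tree) with honest distance labels is accepted...
path = {0: {1}, 1: {0, 2}, 2: {1}}
print(verify(path, {0: 0, 1: 1, 2: 2}))   # True

# ...while a 4-cycle with its true BFS distances is rejected:
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(verify(cycle, {0: 0, 1: 1, 2: 2, 3: 1}))  # False: vertex 2 has two parents
```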

    Cartographie du risque unitaire d'endommagement (CRUE) par inondations pour les résidences unifamiliales du Québec

    Currently, none of the existing so-called flood risk mapping methods makes it possible to establish the flood risk precisely and quantifiably at every point of the territory while simultaneously considering the constituent elements of risk, namely the hazard and the vulnerability. The mapping method presented here fills this need by meeting the following criteria: ease of use, consultation and application; spatially distributed results; simplicity of updating; applicability to various types of residences. The method uses a unit formulation of risk based on distributed damage rates associated with various return periods of open-water floods. These are first computed from the submersion depths deduced from the topography, from the water levels for representative return periods, and from the settlement mode of the residences (presence of a basement, mean elevation of the ground floor). The unit risk is then obtained by integrating the product of the increasing damage rate by its increment of exceedance probability. The result is a map representing the risk as a mean annual direct damage rate in %. A pilot study on a reach of the Montmorency River (Québec, Canada) showed that the maps are expressive and flexible, and can receive all the additional processing allowed by a GIS such as the MODELEUR/HYDROSIM software developed at INRS-ETE, the tool used for this research. Finally, interpreting on the Montmorency the flood maps currently in force in Canada (the 20- and 100-year flood limits) raises questions about the level of risk currently accepted in the regulations, especially when compared with municipal taxation rates.
    Public managers of flood risks need simple and precise tools to deal with this problem and to minimize its consequences, especially for land planning and management. Several methods exist that produce flood risk maps and help to restrict the building of residences in flood plains. For example, the current method in Canada is based on the delineation in flood plains of two regions corresponding to floods of 20- and 100-year return periods (CONVENTION CANADA/QUÉBEC, 1994), mostly applied to ice-free flooding conditions. The method applied by the Federal Emergency Management Agency (FEMA, 2004) is also based on the statistical structure of floods in different contexts, with a goal mostly oriented towards the determination of insurance rates. In France, the INONDABILITÉ method (GILARD and GENDREAU, 1998) seeks to match the present probability of flooding to a reduced one that the stakeholders would be willing to accept. However, considering that the commonly accepted definition of risk includes both the probability of flooding and its consequences (costs of damages), very few, if any, of the present methods can strictly be considered risk-mapping methods. The method presented hereafter addresses this gap by representing the mean annual rate of direct damage (unit value) for different residential building modes, taking into account the flood probability structure and the spatial distribution of the submersion height, which reflects the topography of the flood plain and the water stage distribution, together with the residential settlement mode (basement or not) and the first-floor elevation of the building.
    The method seeks to meet important criteria related to efficient land planning and management, including: ease of utilisation, consultation and application for managers; spatially distributed results usable in current geographical information systems (GIS maps); availability anywhere in the area under study; ease of updating; and adaptability to a wide range of residence types. The proposed method is based on a unit treatment of the risk variable that corresponds to a rate of damage, instead of an absolute value expressed in monetary units. Direct damages to the building are considered, excluding damages to furniture and other personal belongings. Damage rates are first computed as a function of the main explanatory variable, the field of submersion depths. This variable, obtained from the 2D subtraction of the terrain topography from the water stage for each reference flood event, is defined by its probability of occurrence. The mean annual rate of damage (unit risk) is obtained by integrating the field of damage rates with respect to the annual probability structure of the available flood events. The result is a series of maps corresponding to representative modes of residential settlement. The damage rate was computed with a set of empirical functional relationships developed for the Saguenay region (Québec, Canada) after the flood of 1996. These curves were presented in LECLERC et al. (2003); the set comprises four curves representing residences with or without a basement, with a value below or above $CAD 50,000, which is roughly correlated with the type of occupation (i.e., secondary or main residence). While it cannot be assumed that these curves are generic with respect to the general situation in Canada or, more specifically, in the province of Québec, the method itself can still be applied by using alternative sets of submersion damage-rate curves developed for other specific contexts.
    Moreover, as four different functional relationships were used to represent the different residential settlement modes, four different maps have to be drawn to represent the vulnerability of the residential sector depending on the type of settlement. Consequently, as the maps are designed to represent a homogeneous mode of settlement, they represent potential future development in a given region better than the current situation. They can also be used to evaluate public policies regarding urban development and building restrictions in the flood plains. A pilot study was conducted on a reach of the Montmorency River (Québec, Canada; BLIN, 2002). It was possible to verify the compliance of the method with the proposed utilisation criteria. The method proved to be simple to use, adaptive and compatible with GIS modeling environments such as MODELEUR (SECRETAN et al., 1999), a 2D finite element modeling system designed for fluvial environments. Water stages were computed with a 2D hydrodynamic simulator (HYDROSIM; HENICHE et al., 1999a) to deal with the complexity of the river reach (a braided reach with backwaters). Thanks to the availability of 2D results, a 2D graphic representation of the information layers can be configured, taking into account the specific needs of the stakeholders. In contexts where one-dimensional water stage profiles are computed (e.g., HEC-RAS by USACE, 1990; DAMBRK by FREAD, 1984), an extended 2D representation of these data needs to be developed in the lateral flood plains in order to obtain a 2D distributed submersion field. Among the interesting results, it was possible to compare the risk level for given modes of settlement (defined by the presence or absence of a basement and the elevation of the first floor with respect to the land topography) with current practices, based only on the delineation of the limits of the flood zones corresponding to 20- and 100-year return periods.
    We conclude that, at least in the particular case under study, the distributed annual rate of damage seems relatively large with respect to other financial indicators for residences, such as municipal taxation rates.
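
The unit-risk integration described above (the product of the damage rate and its increment of exceedance probability, summed over the available flood events) can be sketched numerically. The return periods and damage rates below are hypothetical illustration values, not data from the Montmorency study:

```python
# Mean annual damage rate (unit risk) by trapezoidal integration of the
# damage rate D against the annual exceedance probability P = 1/T.

def unit_risk(return_periods, damage_rates):
    """Return the mean annual damage rate (% per year)."""
    # Sort events from most frequent (small T) to rarest (large T).
    events = sorted(zip(return_periods, damage_rates))
    probs = [1.0 / t for t, _ in events]   # annual exceedance probabilities
    rates = [d for _, d in events]
    risk = 0.0
    for i in range(len(events) - 1):
        dp = probs[i] - probs[i + 1]       # probability increment
        risk += 0.5 * (rates[i] + rates[i + 1]) * dp
    return risk

# Hypothetical damage rates (%) for 2-, 20-, 100- and 1000-year floods:
print(unit_risk([2, 20, 100, 1000], [0.0, 5.0, 20.0, 45.0]))
```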

    Thermodynamic potential with correct asymptotics for PNJL model

    An attempt is made to resolve certain incongruities within the Nambu--Jona-Lasinio (NJL) and Polyakov-loop-extended NJL (PNJL) models, which are currently used to extract the thermodynamic characteristics of the quark-gluon system. It is argued that the most attractive resolution of these incongruities is to obtain the thermodynamic potential directly from the corresponding extremum conditions (gap equations) by integrating them, with the integration constant fixed in accordance with the Stefan-Boltzmann law. The advantage of the approach is that the regulator is kept finite in both divergent and finite-valued integrals at finite temperature and chemical potential. Pauli-Villars regularization is used, although a standard 3D sharp cutoff can be applied as well.
    Comment: 16 pages, 5 figures, extended version, title changed
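
Schematically, the integration step described above can be written as follows; the notation (order parameter $m$, potential $\Omega$) is an assumption for illustration, not necessarily the paper's:

```latex
% The gap equation fixes the derivative of Omega along the extremal
% trajectory, so the potential is recovered by integration:
\Omega(m; T, \mu) = \int^{m} \mathrm{d}m' \,
    \left.\frac{\partial \Omega}{\partial m'}\right|_{\mathrm{gap}} + C(T, \mu),
% with the integration constant C fixed by requiring the
% Stefan-Boltzmann limit at asymptotically high temperature:
\lim_{T \to \infty} \Omega(m; T, \mu) = \Omega_{\mathrm{SB}}(T, \mu).
```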

    Counting, generating and sampling tree alignments

    Pairwise ordered tree alignments are combinatorial objects that appear in RNA secondary structure comparison. However, the usual representation of tree alignments as supertrees is ambiguous, i.e. two distinct supertrees may induce identical sets of matches between identical pairs of trees. This ambiguity is uninformative, and detrimental to any probabilistic analysis. In this work, we consider tree alignments up to equivalence. Our first result is a precise asymptotic enumeration of tree alignments, obtained from a context-free grammar by means of basic analytic combinatorics. Our second result focuses on alignments between two given ordered trees $S$ and $T$. By refining our grammar to align specific trees, we obtain a decomposition scheme for the space of alignments, and use it to design an efficient dynamic programming algorithm for sampling alignments under the Gibbs-Boltzmann probability distribution. This generalizes existing tree alignment algorithms, and opens the door to a probabilistic analysis of the space of suboptimal RNA secondary structure alignments.
    Comment: ALCOB - 3rd International Conference on Algorithms for Computational Biology - 2016, Jun 2016, Trujillo, Spain. 201
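
The Gibbs-Boltzmann distribution mentioned above weights each alignment A by exp(-E(A)/kT)/Z. A minimal sketch with hypothetical configuration energies, independent of the paper's grammar-based dynamic programming:

```python
import math
import random

def boltzmann_probs(energies, kT=1.0):
    """Gibbs-Boltzmann probability of each configuration."""
    weights = [math.exp(-e / kT) for e in energies]
    z = sum(weights)                      # partition function
    return [w / z for w in weights]

def sample(configs, energies, kT=1.0, rng=random):
    """Draw one configuration with Gibbs-Boltzmann probability."""
    return rng.choices(configs, weights=boltzmann_probs(energies, kT), k=1)[0]

probs = boltzmann_probs([0.0, 1.0, 2.0])
print([round(p, 3) for p in probs])   # lowest-energy configuration dominates
```

Raising kT flattens the distribution toward uniform sampling; lowering it concentrates mass on near-optimal configurations, which is the usual way suboptimal-ensemble samplers are tuned.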

    Isoscalar g Factors of Even-Even and Odd-Odd Nuclei

    We consider T=0 states in even-even and odd-odd N=Z nuclei. The g factors that emerge are isoscalar. We find that the single-j shell model gives simple expressions for these g factors which, for even-even nuclei, are surprisingly close to the collective values for K=0 bands. The g factors of many 2+ states in even-even nuclei and of 1+ and 3+ states in odd-odd nuclei are close to 0.5.

    Voltage controlled terahertz transmission through GaN quantum wells

    We report measurements of radiation transmission in the 0.220-0.325 THz frequency domain through GaN quantum wells grown on sapphire substrates, at room and low temperatures. A significant enhancement of the transmitted beam intensity with the voltage applied to the devices under test is found. For a deeper understanding of the physical phenomena involved, these results are compared with a phenomenological theory of light transmission under electric bias that relates the transmission enhancement to changes in the differential mobility of the two-dimensional electron gas.

    Pion Observables in the Extended NJL Model with Vector and Axial-Vector Mesons

    The momentum-space bosonization method of a Nambu and Jona-Lasinio type model with vector and axial-vector mesons is applied to $\pi\pi$ scattering. Unlike earlier published work, we obtain the $\pi\pi$ scattering amplitude using both the linear and nonlinear realizations of chiral symmetry, fully taking into account the momentum dependence of meson vertices. We show the full physical equivalence between these two approaches. The chiral expansion procedure in this model is discussed in detail. Chiral expansions of the quark mass, the pion mass and the constant $f_\pi$ are obtained. The low-energy $\pi\pi$ phase shifts are compared to the available data. We also study the scalar form factor of the pion.
    Comment: 39 pp, LaTeX file, uses epsf, 7 figures (appended as compressed tar files in pion.uu)

    Eight-quark interactions as a chiral thermometer

    An NJL Lagrangian extended by six- and eight-quark interactions is applied to study temperature effects, both in the SU(3) flavor limit (massless case) and in the realistic massive case. The transition temperature can be considerably reduced as compared to the standard approach, in accordance with recent lattice calculations. The mesonic spectra built on the spontaneously broken vacuum induced by the 't Hooft interaction strength, as opposed to the commonly considered case driven by the four-quark coupling, undergo a rapid crossover to the unbroken phase, with a slope and at a temperature regulated by the strength of the OZI-violating eight-quark interactions. This strength can be adjusted in consonance with the four-quark coupling, leaving the spectra unchanged except for the sigma meson mass, which decreases. A first-order transition is also a possible solution within the present approach.
    Comment: 4 pages, 4 figures, prepared for the proceedings of Quark Matter 2008 - 20th International Conference on Ultra-Relativistic Nucleus Nucleus Collisions, February 4-10, Jaipur (India)

    Parity nonconservation effects in the photodisintegration of polarized deuterons

    P-odd correlations in deuteron photodisintegration are considered. The $\pi$-meson exchange is not operative in the case of unpolarized deuterons. For polarized deuterons, a P-odd correlation due to the $\pi$-meson exchange is about $3 \times 10^{-9}$. Short-distance P-odd contributions substantially exceed the contribution of the $\pi$-meson exchange.
    Comment: 12 pages, LaTeX, 3 figures

    Machine Learning-Based Event Generator for Electron-Proton Scattering

    We present a new machine learning-based Monte Carlo event generator using generative adversarial networks (GANs) that can be trained with calibrated detector simulations to construct a vertex-level event generator free of theoretical assumptions about femtometer scale physics. Our framework includes a GAN-based detector folding as a fast surrogate model that mimics detector simulators. The framework is tested and validated on simulated inclusive deep-inelastic scattering data along with existing parametrizations for detector simulation, with uncertainty quantification based on a statistical bootstrapping technique. Our results provide, for the first time, a realistic proof of concept for mitigating theory bias in inferring vertex-level event distributions needed to reconstruct physical observables.
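
The statistical bootstrapping used above for uncertainty quantification re-estimates the observable on resampled datasets. A generic sketch with synthetic data and the mean as a stand-in observable (not the paper's GAN pipeline):

```python
import random
import statistics

def bootstrap_uncertainty(data, statistic, n_resamples=1000, rng=None):
    """Standard deviation of `statistic` across bootstrap resamples."""
    rng = rng or random.Random(0)
    estimates = []
    for _ in range(n_resamples):
        resample = [rng.choice(data) for _ in data]   # sample with replacement
        estimates.append(statistic(resample))
    return statistics.stdev(estimates)

data = [1.0, 2.0, 3.0, 4.0, 5.0]
err = bootstrap_uncertainty(data, statistics.mean)
print(round(err, 2))  # roughly the standard error of the mean
```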