
    Stein's method, Malliavin calculus, Dirichlet forms and the fourth moment theorem

    The fourth moment theorem provides error bounds of the order $\sqrt{\mathbb{E}(F^4) - 3}$ in the central limit theorem for elements $F$ of Wiener chaos of any order such that $\mathbb{E}(F^2) = 1$. It was proved by Nourdin and Peccati (2009) using Stein's method and the Malliavin calculus. It was also proved by Azmoodeh, Campese and Poly (2014) using Stein's method and Dirichlet forms. This paper is an exposition on the connections between Stein's method and the Malliavin calculus and between Stein's method and Dirichlet forms, and on how these connections are exploited in proving the fourth moment theorem.
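    As a rough numerical illustration of the quantity controlling the bound (a sketch not taken from the paper): the classical second-chaos element F = (χ²_n − n)/√(2n) satisfies E(F²) = 1, and a Monte Carlo estimate of E(F⁴) − 3 shows it, and hence the error bound, shrinking as n grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def chaos_element(n, size):
    """F = (chi2_n - n) / sqrt(2n): a normalised sum of n centred squared
    standard normals, a classical element of the second Wiener chaos
    with E(F^2) = 1 that converges to N(0,1) as n grows."""
    return (rng.chisquare(n, size) - n) / np.sqrt(2.0 * n)

for n in (5, 50, 500):
    f = chaos_element(n, 1_000_000)
    gap = np.mean(f**4) - 3.0  # empirical E(F^4) - 3; for this particular F the exact value is 12/n
    print(f"n={n:3d}  E(F^2) ~ {np.mean(f**2):.3f}  "
          f"E(F^4)-3 ~ {gap:.4f}  sqrt(E(F^4)-3) ~ {np.sqrt(max(gap, 0.0)):.4f}")
```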

    Generic design of Chinese remaindering schemes

    We propose a generic design for Chinese remainder algorithms. A Chinese remainder computation consists in reconstructing an integer value from its residues modulo non-coprime integers. We also propose an efficient linear data structure, a radix ladder, for the intermediate storage and computations. Our design is structured into three main modules: a black-box residue computation in charge of computing each residue; a Chinese remaindering controller in charge of launching the computation and of the termination decision; and an integer builder in charge of the reconstruction computation. We then show that this design enables many different forms of Chinese remaindering (e.g. deterministic, early-terminated, distributed), easy comparisons between these forms, and, for instance, user-transparent parallelism at different parallel grains.
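    A minimal sketch of the three-module split described above (black-box residue computation, controller with early termination, incremental builder). It assumes pairwise coprime moduli and a plain incremental reconstruction rather than the paper's radix ladder; the names and the stability-based stopping rule are illustrative choices, not taken from the paper.

```python
def crt_pair(r1, m1, r2, m2):
    # Combine x ≡ r1 (mod m1) and x ≡ r2 (mod m2), assuming gcd(m1, m2) == 1.
    t = ((r2 - r1) * pow(m1, -1, m2)) % m2
    return r1 + m1 * t, m1 * m2

def chinese_remainder(blackbox, moduli, stable_rounds=2):
    """Controller: pull residues from the black box, feed them to the
    incremental builder, and terminate early once the reconstructed value
    has been unchanged for `stable_rounds` consecutive moduli."""
    value, modulus, stable = 0, 1, 0
    for m in moduli:
        r = blackbox(m)                                      # black-box residue computation
        new_value, modulus = crt_pair(value, modulus, r, m)  # integer-builder step
        stable = stable + 1 if new_value == value else 0
        value = new_value
        if stable >= stable_rounds:                          # early termination decision
            break
    return value

# Usage: recover a non-negative integer from its residues modulo small primes.
secret = 123456789
primes = [101, 103, 107, 109, 113, 127, 131, 137]
assert chinese_remainder(lambda m: secret % m, primes) == secret
```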

    On-the-job learning and earnings

    A simple model of informal on-the-job learning which combines learning by oneself and learning from others is proposed in this paper. It yields a closed-form solution that revises Mincer and Jovanovic's (1981) treatment of tenure in the human capital earnings function by relating earnings to the individual's learning potential from jobs and firms. We estimate the structural parameters of this non-linear model on a large French survey with matched employer-employee data. We find that, on entering the firm, workers can on average learn from others ten percent of their own human capital, and that they realize half of their learning potential in just two years. The measurement of workers' learning potential in their jobs and establishments provides a simple characterization of primary-type and secondary-type jobs and establishments. We find a strong relationship between the job-specific learning potential and tenure. Predictions of dual labor market theory regarding the positive match of primary-type firms (which offer high learning opportunities) with highly endowed workers (educated, high wages) are visible at the establishment level but seem to vanish at the job level.
    Keywords: human capital, earnings functions, informal training, learning from others, learning by oneself, returns to tenure, dualism.

    The representation of the symmetric group on m-Tamari intervals

    An m-ballot path of size n is a path on the square grid consisting of north and east unit steps, starting at (0,0), ending at (mn,n), and never going below the line {x=my}. The set of these paths can be equipped with a lattice structure, called the m-Tamari lattice and denoted by T_n^{m}, which generalizes the usual Tamari lattice T_n obtained when m=1. This lattice was introduced by F. Bergeron in connection with the study of diagonal coinvariant spaces in three sets of n variables. The representation of the symmetric group S_n on these spaces is conjectured to be closely related to the natural representation of S_n on (labelled) intervals of the m-Tamari lattice, which we study in this paper. An interval [P,Q] of T_n^{m} is labelled if the north steps of Q are labelled from 1 to n in such a way that the labels increase along any sequence of consecutive north steps. The symmetric group S_n acts on labelled intervals of T_n^{m} by permutation of the labels. We prove an explicit formula, conjectured by F. Bergeron and the third author, for the character of the associated representation of S_n. In particular, the dimension of the representation, that is, the number of labelled m-Tamari intervals of size n, is found to be (m+1)^n(mn+1)^{n-2}. These results are new, even when m=1. The form of these numbers suggests a connection with parking functions, but our proof is not bijective. The starting point is a recursive description of m-Tamari intervals. It yields an equation for an associated generating function, which is a refined version of the Frobenius series of the representation. This equation involves two additional variables x and y, a derivative with respect to y and iterated divided differences with respect to x. The hardest part of the proof consists in solving it, and we develop original techniques to do so, partly inspired by previous work on polynomial equations with "catalytic" variables.
    Comment: 29 pages --- This paper subsumes the research report arXiv:1109.2398, which will not be submitted to any journal.
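    As a quick illustrative check (not from the paper), the sketch below counts m-ballot paths of size n directly from the definition above and evaluates the labelled-interval count (m+1)^n(mn+1)^{n-2} alongside it; the two quantities are distinct, and both grow quickly even for small n.

```python
from functools import lru_cache

def ballot_paths(m, n):
    """Count m-ballot paths of size n: north/east paths from (0,0) to (mn,n)
    that never go below the line x = m*y (i.e. x <= m*y at every visited point)."""
    @lru_cache(maxsize=None)
    def count(x, y):
        if x > m * y or x > m * n or y > n:
            return 0
        if (x, y) == (m * n, n):
            return 1
        return count(x + 1, y) + count(x, y + 1)
    return count(0, 0)

def labelled_intervals(m, n):
    # Dimension formula proved in the paper (n >= 2 keeps the exponent non-negative).
    return (m + 1) ** n * (m * n + 1) ** (n - 2)

for m in (1, 2):
    for n in (2, 3, 4):
        print(f"m={m} n={n}: {ballot_paths(m, n)} ballot paths, "
              f"{labelled_intervals(m, n)} labelled m-Tamari intervals")
```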

    Analysis and modeling of green wood milling: Chip production by slabber

    During the primary transformation of wood, logs are faced with slabber heads. The chips produced are raw materials for the pulp, paper and particleboard industries. The efficiency of these industries depends partly on the particle size distribution of the chips. Controlling this distribution is no easy matter because of its strong dependence on cutting conditions and on the variability of the material. This study aimed at a better understanding and prediction of chip fragmentation. It starts with a detailed description of the cutting kinematics and of the interaction between knife and log, which leads to the numerical development of a generic slabber head. Chip fragmentation phenomena were then studied through experiments in dynamic conditions, carried out with a pendulum device (Vc = 400 m/min) instrumented with piezoelectric force sensors and a high-speed camera. The results agreed very well with previous quasi-static experiments.

    Phase Space Engineering in Optical Microcavities I: Preserving near-field uniformity while inducing far-field directionality

    Optical microcavities have received much attention over the last decade from different research fields, ranging from fundamental issues of cavity QED to specific applications such as microlasers and bio-sensors. A major issue in the latter applications is the difficulty of obtaining directional emission of light in the far-field while keeping high energy densities inside the cavity (i.e. a high quality factor). To improve our understanding of these systems, we have studied the annular cavity (a dielectric disk with a circular hole), where the distance d between the cavity and hole centers is used as a parameter to alter the properties of cavity resonances. We present results showing how one can affect the directionality of the far-field while preserving the uniformity (hence the quality factor) of the near-field simply by increasing the value of d. Interestingly, the transition from a uniform near- and far-field to a uniform near-field and directional far-field is rather abrupt. We can explain this behavior quite nicely with a simple model, supported by full numerical calculations, and we predict that the effect will also be found in a large class of eigenmodes of the cavity.
    Comment: 12th International Conference on Transparent Optical Networks

    Searching for supersymmetry using deep learning with the ATLAS detector

    The Standard Model of particle physics (SM) is a fundamental theory of nature whose validity has been extensively confirmed by experiments. However, some theoretical and experimental problems subsist, which motivates searches for alternative theories to supersede it. Supersymmetry (SUSY), which associates a new fundamental particle to each SM particle, is one of the best-motivated such theories and could solve some of the biggest outstanding problems with the SM. For example, many SUSY scenarios predict stable neutral particles that could explain observations of dark matter in the universe. The discovery of SUSY would also represent a huge step towards a unified theory of the universe. Searches for SUSY are at the heart of the experimental program of the ATLAS collaboration, which exploits a state-of-the-art particle detector installed at the Large Hadron Collider (LHC) at CERN in Geneva. The probability of observing many supersymmetric particles went up when the LHC ramped up its collision energy to 13 TeV, the highest ever achieved in a laboratory, but so far no evidence for SUSY has been recorded by current searches, which are mostly based on well-known simple techniques such as counting experiments. This thesis documents the implementation of a novel deep-learning-based approach using only the four-momenta of selected physics objects, and its application to the search for supersymmetric particles using the full ATLAS 2015-2018 dataset. Motivated by naturalness considerations as well as by the problem of dark matter, the search focuses on finding evidence for supersymmetric partners of the gluon (the gluino), third-generation quarks (the stop and the sbottom), and gauge bosons (the neutralino). Many recently introduced physics-specific machine learning developments are employed, such as directly using detector-recorded energies and momenta of produced particles instead of first deriving a restricted set of physically motivated variables, and parametrizing the classification model with the masses of the particles searched for, which allows optimal sensitivity for all mass hypotheses. This method improves the statistical significance of the search by up to 85 times that of the previous ATLAS analysis for some mass hypotheses, after accounting for the luminosity difference. No significant excesses above the SM background are recorded. Gluino masses below 2.45 TeV and neutralino masses below 1.7 TeV are excluded at the 95% confidence level, greatly extending the previous limits on two simplified models of gluino pair production with off-shell stops and sbottoms, respectively.
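    A minimal sketch (not the thesis's actual architecture) of what a mass-parametrized classifier on raw four-momenta can look like: the network input is the flattened (E, px, py, pz) of a fixed number of reconstructed objects concatenated with the hypothesized gluino and neutralino masses, so a single model can be evaluated at any mass point. The layer sizes, object count and mass values below are illustrative placeholders.

```python
import torch
import torch.nn as nn

class ParametrisedClassifier(nn.Module):
    """Signal-vs-background classifier whose input is a set of four-momenta
    plus the hypothesised (m_gluino, m_neutralino) mass parameters."""
    def __init__(self, n_objects=10, hidden=256):
        super().__init__()
        in_dim = 4 * n_objects + 2           # (E, px, py, pz) per object + 2 masses
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, four_momenta, masses):
        x = torch.cat([four_momenta.flatten(1), masses], dim=1)
        return self.net(x).squeeze(1)        # one logit per event

# Toy usage on a batch of 32 events with 10 objects each (placeholder data).
model = ParametrisedClassifier()
p4 = torch.randn(32, 10, 4)                                  # fake four-momenta
masses = torch.tensor([[2400.0, 1600.0]]).repeat(32, 1)      # one mass hypothesis, in GeV
logits = model(p4, masses)
loss = nn.functional.binary_cross_entropy_with_logits(logits, torch.zeros(32))
```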