374 research outputs found

    Analysis of airplane boarding via space-time geometry and random matrix theory

    We show that airplane boarding can be asymptotically modeled by 2-dimensional Lorentzian geometry. Boarding time is given by the maximal proper time among curves in the model. Discrepancies between the model and simulation results are closely related to random matrix theory. We then show how such models can be used to explain why some commonly practiced airline boarding policies are ineffective and even detrimental.
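    The claim about boarding policies can be played with in a toy aisle simulation. This is a sketch of my own, not the authors' Lorentzian model: passengers advance along a single aisle, block one another, and pause to stow luggage; comparing a random queue with a strict back-to-front queue illustrates why the latter, though commonly practiced, can be slower.

```python
import random

def boarding_time(queue, stow_ticks=3):
    """Ticks until everyone is seated in a single-aisle cabin.

    `queue` lists each passenger's assigned row in boarding order.
    Each tick a passenger advances one row unless blocked, and spends
    `stow_ticks` ticks standing in the aisle at their row to stow luggage.
    """
    aisle = {}                     # aisle position -> [target row, stow ticks left]
    waiting = list(queue)
    t = 0
    while waiting or aisle:
        t += 1
        for pos in sorted(aisle, reverse=True):   # deepest passengers move first
            row, stow = aisle[pos]
            if pos == row:                        # at own row: stow, then sit down
                if stow > 1:
                    aisle[pos][1] -= 1
                else:
                    del aisle[pos]
            elif pos + 1 not in aisle:            # aisle ahead clear: step forward
                aisle[pos + 1] = aisle.pop(pos)
        if waiting and 0 not in aisle:            # next passenger steps aboard
            aisle[0] = [waiting.pop(0), stow_ticks]
    return t

random.seed(0)
rows = list(range(30)) * 6                 # 30 rows, 6 passengers per row
rand_q = rows[:]
random.shuffle(rand_q)                     # random boarding order
b2f_q = sorted(rows, reverse=True)         # strict back-to-front order
print("random:", boarding_time(rand_q), "back-to-front:", boarding_time(b2f_q))
```

    In this model strict back-to-front serializes the stowing (each row's passengers queue up behind one another), while a random order lets many passengers stow in parallel, matching the paper's point that such policies can be detrimental.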

    Beam-induced Background Simulations for the CMS Experiment at the LHC

    Beam-induced background comes from interactions of the beam and beam-halo particles with either the residual gas in the vacuum chamber of the accelerator or the collimators that define the beam aperture. Beam-induced processes can potentially be a significant source of background for physics analyses at the LHC. This contribution describes the simulation software environment used for this part of the CMS experiment's activity, together with recent beam-induced background simulation results for the Phase-2 CMS operation design.

    Limit Distributions of Self-Normalized Sums

    If the X_i are i.i.d. with zero mean and arbitrary finite variance, the limiting probability distribution of S_n(2) = (∑_{i=1}^n X_i)/(∑_{j=1}^n X_j²)^{1/2} as n → ∞ has density f(t) = (2π)^{−1/2} exp(−t²/2), by the central limit theorem and the law of large numbers. If the tails of X_i are sufficiently smooth and satisfy P(X_i > t) ∼ r t^{−α} and P(X_i < −t) ∼ l t^{−α} as t → ∞, where 0 < α < 2, r > 0, l > 0, then S_n(2) still has a limiting distribution F even though X_i has infinite variance. The density f of F depends on α as well as on r/l. We also study the limiting distribution of the more general S_n(p) = (∑_{i=1}^n X_i)/(∑_{j=1}^n |X_j|^p)^{1/p}, where the X_i are i.i.d. and in the domain of attraction of a stable law G with tails as above. In the cases p = 2 (see (4.21)) and p = 1 (see (3.7)) we obtain exact, computable formulas for f(t) = f(t, α, r/l), and give graphs of f for a number of values of α and r/l. For p = 2, we find that f is always symmetric about zero on (−1, 1), even though f is symmetric on (−∞, ∞) only when r = l
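    A quick numerical illustration, not taken from the paper: for symmetric Pareto-type tails with α = 1.2 < 2 the variance is infinite, yet the self-normalized sums S_n(2) remain bounded (|S_n(2)| ≤ √n always, by Cauchy-Schwarz) and settle into a nondegenerate distribution of order one.

```python
import numpy as np

def self_normalized_sum(x, p=2.0):
    """S_n(p) = (sum_i x_i) / (sum_j |x_j|^p)^(1/p)."""
    return x.sum() / (np.abs(x) ** p).sum() ** (1.0 / p)

rng = np.random.default_rng(42)
alpha = 1.2                        # tail index: 0 < alpha < 2, so infinite variance
n, reps = 5000, 2000

# symmetric Pareto-type tails: P(X > t) ~ P(X < -t) ~ (1/2) t^(-alpha), i.e. r = l
u = rng.random((reps, n))
signs = rng.choice([-1.0, 1.0], size=(reps, n))
x = signs * u ** (-1.0 / alpha)    # |X| is Pareto(alpha)

s2 = np.array([self_normalized_sum(row, 2.0) for row in x])
print("sample std of S_n(2):", s2.std())   # stays O(1) despite infinite variance
```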

    Three embeddings of the Klein simple group into the Cremona group of rank three

    We study the action of the Klein simple group G, consisting of 168 elements, on two rational threefolds: the three-dimensional projective space and a smooth Fano threefold X of anticanonical degree 22 and index 1. We show that the Cremona group of rank three has at least three non-conjugate subgroups isomorphic to G. As a by-product, we prove that X admits a Kähler-Einstein metric, and we construct a smooth polarized K3 surface of degree 22 with an action of the group G.

    A dimensionally continued Poisson summation formula

    We generalize the standard Poisson summation formula for lattices so that it operates on the level of theta series, allowing us to introduce noninteger dimension parameters (using the dimensionally continued Fourier transform). When combined with one of the proofs of the Jacobi imaginary transformation of theta functions that does not use the Poisson summation formula, our proof of this generalized Poisson summation formula also provides a new proof of the standard Poisson summation formula for dimensions greater than 2 (with appropriate hypotheses on the function being summed). In general, our methods work to establish the (Voronoi) summation formulae associated with functions satisfying (modular) transformations of the Jacobi imaginary type by means of a density argument (as opposed to the usual Mellin transform approach). In particular, we construct a family of generalized theta series from Jacobi theta functions from which these summation formulae can be obtained. This family contains several families of modular forms, but is significantly more general than any of them. Our result also relaxes several of the hypotheses in the standard statements of these summation formulae. The density result we prove for Gaussians in the Schwartz space may be of independent interest.
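    The classical identity at the heart of this can be checked numerically. A minimal sketch (mine, not the paper's construction): for the Gaussian e^{-πtx²}, Poisson summation reduces to the Jacobi imaginary transformation θ(t) = t^{-1/2} θ(1/t), which the truncated theta sum verifies to machine precision.

```python
import math

def theta(t, terms=50):
    """theta(t) = sum over all integers n of exp(-pi * t * n^2), truncated."""
    return 1.0 + 2.0 * sum(math.exp(-math.pi * t * n * n) for n in range(1, terms))

# Poisson summation for f(x) = exp(-pi*t*x^2) is exactly the Jacobi
# imaginary transformation: theta(t) = t^(-1/2) * theta(1/t).
for t in (0.3, 1.0, 2.7):
    lhs = theta(t)
    rhs = theta(1.0 / t) / math.sqrt(t)
    print(f"t = {t}: {lhs:.12f}  vs  {rhs:.12f}")
```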

    Evaluating observed versus predicted forest biomass: R-squared, index of agreement or maximal information coefficient?

    The accurate prediction of forest above-ground biomass is nowadays key to implementing climate change mitigation policies, such as reducing emissions from deforestation and forest degradation. In this context, the coefficient of determination (R²) is widely used as a means of evaluating the proportion of variance in the dependent variable explained by a model. However, the validity of R² for comparing observed versus predicted values has been challenged in the presence of bias, for instance in remote sensing predictions of forest biomass. We tested suitable alternatives, e.g. the index of agreement (d) and the maximal information coefficient (MIC). Our results show that d renders systematically higher values than R², and may easily lead to regarding as reliable models that include an unrealistic number of predictors. Results seemed better for MIC, although MIC favoured local clustering of predictions, whether or not they corresponded to the observations. Moreover, R² was more sensitive to the use of cross-validation than d or MIC, and more robust against overfitted models. Therefore, we discourage the use of statistical measures alternative to R² for evaluating model predictions against observed values, at least in the context of assessing the reliability of modelled biomass predictions using remote sensing. For those who consider d to be conceptually superior to R², we suggest using its square d², in order to be more analogous to R² and hence facilitate comparison across studies
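    The two measures are easy to compare on synthetic data. A sketch under stated assumptions: it uses Willmott's standard form of the index of agreement d and the squared Pearson correlation for R²; the exact estimators, data, and bias structure in the study may differ.

```python
import numpy as np

def r_squared(obs, pred):
    """R^2 as the squared Pearson correlation between observed and predicted."""
    return np.corrcoef(obs, pred)[0, 1] ** 2

def index_of_agreement(obs, pred):
    """Willmott's index of agreement d in [0, 1]; 1 means perfect agreement."""
    o_bar = obs.mean()
    num = ((pred - obs) ** 2).sum()
    den = ((np.abs(pred - o_bar) + np.abs(obs - o_bar)) ** 2).sum()
    return 1.0 - num / den

rng = np.random.default_rng(1)
obs = rng.uniform(50, 400, size=200)                  # hypothetical biomass, Mg/ha
pred = 0.8 * obs + 30 + rng.normal(0, 25, size=200)   # biased, noisy predictions

print("R^2 =", round(r_squared(obs, pred), 3),
      " d =", round(index_of_agreement(obs, pred), 3))
```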

    A cross-sectional investigation of back pain beliefs and fear in physiotherapy and sport undergraduate students

    Background: Although low back pain (LBP) beliefs have been well investigated in mainstream healthcare discipline students, the beliefs of students on sports-related programmes, such as Sport and Exercise Science (SES), Sports Therapy (ST), and Sport Performance and Coaching (SPC), have yet to be explored. This study aims to identify any differences in the beliefs and fear associated with movement among students enrolled in four undergraduate programmes: physiotherapy (PT), ST, SES, and SPC. Methods: 136 undergraduate students completed an online survey. All participants completed the Tampa Scale of Kinesiophobia (TSK) and the Back Beliefs Questionnaire (BBQ). Two two-way between-subjects analyses of variance (ANOVA) were conducted, one for each outcome (TSK and BBQ), with study programme, study year (1st, 2nd, 3rd), and their interaction as independent variables. Results: There was a significant interaction between study programme and year for TSK (F(6, 124) = 4.90, P < 0.001) and BBQ (F(6, 124) = 8.18, P < 0.001). Post-hoc analysis revealed that both PT and ST students had lower TSK and higher BBQ scores than SES and SPC students, particularly in the 3rd year. Conclusions: The beliefs of clinicians and trainers managing LBP are known to transfer to patients, and more negative beliefs have been associated with greater disability. This is the first study to examine beliefs about back pain across various sports study programmes, which is timely given that the management of injured athletes typically involves a multidisciplinary team

    Assortment optimisation under a general discrete choice model: A tight analysis of revenue-ordered assortments

    The assortment problem in revenue management is the problem of deciding which subset of products to offer to consumers in order to maximise revenue. A simple and natural strategy is to select the best assortment out of all those that are constructed by fixing a threshold revenue π and then choosing all products with revenue at least π. This is known as the revenue-ordered assortments strategy. In this paper we study the approximation guarantees provided by revenue-ordered assortments when customers are rational in the following sense: the probability of selecting a specific product from the set being offered cannot increase if the set is enlarged. This rationality assumption, known as regularity, is satisfied by almost all discrete choice models considered in the revenue management and choice theory literature, and in particular by random utility models. The bounds we obtain are tight and improve on recent results in that direction, such as for the Mixed Multinomial Logit model by Rusmevichientong et al. (2014). An appealing feature of our analysis is its simplicity, as it relies only on the regularity condition. We also draw a connection between assortment optimisation and two pricing problems called unit-demand envy-free pricing and Stackelberg minimum spanning tree: these problems can be restated as assortment problems under discrete choice models satisfying the regularity condition, and revenue-ordered assortments then correspond to the well-studied uniform pricing heuristic. When specialised to that setting, the general bounds we establish for revenue-ordered assortments match and unify the best known results on uniform pricing.
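    The strategy itself is a few lines of code. A sketch of my own under the plain multinomial logit (MNL) model, one specific choice model rather than the general regular models of the paper; under MNL the optimal assortment is known to be revenue-ordered, so a brute-force search over all subsets agrees with the threshold strategy.

```python
from itertools import combinations

def mnl_revenue(assortment, r, v):
    """Expected revenue under MNL with outside-option weight 1."""
    num = sum(r[i] * v[i] for i in assortment)
    den = 1.0 + sum(v[i] for i in assortment)
    return num / den

def revenue_ordered(r, v):
    """Best assortment of the form {i : r_i >= threshold}."""
    best, best_rev = (), 0.0
    for pi in sorted(set(r)):
        s = tuple(i for i in range(len(r)) if r[i] >= pi)
        rev = mnl_revenue(s, r, v)
        if rev > best_rev:
            best, best_rev = s, rev
    return best, best_rev

r = [10.0, 8.0, 6.0, 4.0]          # hypothetical product revenues
v = [0.5, 1.0, 2.0, 3.0]           # hypothetical MNL preference weights
ro_set, ro_rev = revenue_ordered(r, v)

# brute force over all 2^n assortments for comparison
bf_rev = max(mnl_revenue(s, r, v)
             for k in range(len(r) + 1)
             for s in combinations(range(len(r)), k))
print("revenue-ordered:", ro_set, ro_rev, " brute force:", bf_rev)
```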

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory form a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now-standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. These priors encompass as popular examples sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward an understanding of the theoretical properties of the so-regularized solutions. It covers a large spectrum, including: (i) recovery guarantees and stability to noise, both in terms of ℓ²-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
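    The forward-backward scheme of item (iii) is concrete for the ℓ¹ prior, where the backward (proximal) step is soft thresholding; this is the classical ISTA iteration. A minimal sketch with a synthetic sparse-recovery problem of my own devising (dimensions, regularization weight, and noise level are illustrative, not from the chapter):

```python
import numpy as np

def ista(A, y, lam, steps=1500):
    """Forward-backward splitting for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - y) / L          # forward: explicit gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # backward: prox of lam*||.||_1
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))                 # underdetermined: 40 measurements, 100 unknowns
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 3.0]         # 3-sparse ground truth
y = A @ x_true + 0.01 * rng.normal(size=40)

x_hat = ista(A, y, lam=0.1)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))
```

    The step size 1/L with L the squared spectral norm of A guarantees convergence of the scheme; the soft-threshold level lam/L is what makes the iterates exactly sparse, illustrating the model (support) identification discussed in the review.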