
    Maximal representations of uniform complex hyperbolic lattices in exceptional Hermitian Lie groups

    We complete the classification of maximal representations of uniform complex hyperbolic lattices in Hermitian Lie groups by dealing with the exceptional groups $\mathrm{E}_6$ and $\mathrm{E}_7$. We prove that if $\rho$ is a maximal representation of a uniform complex hyperbolic lattice $\Gamma\subset\mathrm{SU}(1,n)$, $n>1$, in an exceptional Hermitian group $G$, then $n=2$ and $G=\mathrm{E}_6$, and we describe the representation $\rho$ completely. The case of classical Hermitian target groups was treated by Vincent Koziarz and the second named author (arXiv:1506.07274). However, we do not focus immediately on the exceptional cases; instead we provide a more unified perspective, as independent as possible of the classification of the simple Hermitian Lie groups. This relies on the study of the cominuscule representation of the complexification of the target group. As a by-product of our methods, when the target Hermitian group $G$ is of tube type, we obtain an inequality on the Toledo invariant of the representation $\rho:\Gamma\rightarrow G$ which is stronger than the Milnor-Wood inequality (thereby excluding maximal representations in such groups). Comment: Comments are welcome.
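
    For context, the Milnor-Wood inequality mentioned above bounds the Toledo invariant of $\rho$ by the rank of $G$; the display below recalls its shape in one common normalization (the constant depends on the chosen metrics and conventions, so this is a hedged reminder rather than the paper's exact statement), with maximality meaning equality:

% Milnor-Wood inequality for the Toledo invariant, in one common normalization;
% rho is called maximal when equality holds.
\[
  |\tau(\rho)| \;\le\; \operatorname{rk}(G)\,
  \operatorname{vol}\!\left(\Gamma \backslash \mathbb{H}^{n}_{\mathbb{C}}\right).
\]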

    Bayesian subset simulation

    We consider the problem of estimating a probability of failure $\alpha$, defined as the volume of the excursion set of a function $f:\mathbb{X}\subseteq\mathbb{R}^{d}\to\mathbb{R}$ above a given threshold, under a given probability measure on $\mathbb{X}$. In this article, we combine the popular subset simulation algorithm (Au and Beck, Probab. Eng. Mech. 2001) and our sequential Bayesian approach for the estimation of a probability of failure (Bect, Ginsbourger, Li, Picheny and Vazquez, Stat. Comput. 2012). This makes it possible to estimate $\alpha$ when the number of evaluations of $f$ is very limited and $\alpha$ is very small. The resulting algorithm is called Bayesian subset simulation (BSS). A key idea, as in the subset simulation algorithm, is to estimate the probabilities of a sequence of excursion sets of $f$ above intermediate thresholds, using a sequential Monte Carlo (SMC) approach. A Gaussian process prior on $f$ is used to define the sequence of densities targeted by the SMC algorithm, and drive the selection of evaluation points of $f$ to estimate the intermediate probabilities. Adaptive procedures are proposed to determine the intermediate thresholds and the number of evaluations to be carried out at each stage of the algorithm. Numerical experiments illustrate that BSS achieves significant savings in the number of function evaluations with respect to other Monte Carlo approaches.
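
    To make the intermediate-threshold mechanism concrete, here is a minimal, self-contained Python sketch of plain subset simulation on a toy problem. The Gaussian process surrogate and the adaptive evaluation budget of BSS are omitted; all names, tuning constants and the toy function below are illustrative assumptions, not the authors' implementation.

# Minimal subset-simulation sketch on a toy problem (illustrative only, not the
# BSS algorithm itself): alpha = P(f(X) > u) is estimated as a product of
# conditional probabilities across adaptive intermediate thresholds.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Cheap toy limit-state function; in practice f is an expensive simulator.
    return x.sum(axis=-1)

def subset_simulation(f, dim, u, n=2000, p0=0.1, n_mcmc=5, max_levels=20):
    x = rng.standard_normal((n, dim))      # standard Gaussian input distribution
    y = f(x)
    alpha = 1.0
    for _ in range(max_levels):
        u_k = np.quantile(y, 1.0 - p0)     # adaptive intermediate threshold
        if u_k >= u:                       # final threshold reached
            return alpha * np.mean(y > u)
        alpha *= np.mean(y > u_k)
        seeds = x[y > u_k]                 # keep the samples above the new threshold
        reps = int(np.ceil(n / len(seeds)))
        x = np.repeat(seeds, reps, axis=0)[:n].copy()
        # Rejuvenate with random-walk Metropolis steps conditioned on {f > u_k}.
        for _ in range(n_mcmc):
            prop = x + 0.5 * rng.standard_normal(x.shape)
            log_ratio = 0.5 * (np.sum(x**2, axis=1) - np.sum(prop**2, axis=1))
            accept = (np.log(rng.random(n)) < log_ratio) & (f(prop) > u_k)
            x[accept] = prop[accept]
        y = f(x)
    return alpha

# Toy check: for dim=2, f(X) ~ N(0, sqrt(2)), so P(f(X) > 6) is about 1.1e-5.
print(subset_simulation(f, dim=2, u=6.0))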

    Bayesian Subset Simulation: a kriging-based subset simulation algorithm for the estimation of small probabilities of failure

    The estimation of small probabilities of failure from computer simulations is a classical problem in engineering, and the Subset Simulation algorithm proposed by Au & Beck (Prob. Eng. Mech., 2001) has become one of the most popular methods to solve it. Subset Simulation has been shown to provide significant savings in the number of simulations needed to achieve a given estimation accuracy, with respect to many other Monte Carlo approaches. The number of simulations remains quite high, however, and this method can be impractical for applications where an expensive-to-evaluate computer model is involved. We propose a new algorithm, called Bayesian Subset Simulation, that takes the best from the Subset Simulation algorithm and from sequential Bayesian methods based on kriging (also known as Gaussian process modeling). The performance of this new algorithm is illustrated using a test case from the literature. We are able to report promising results. In addition, we provide a numerical study of the statistical properties of the estimator. Comment: 11th International Probabilistic Safety Assessment and Management Conference (PSAM11) and the Annual European Safety and Reliability Conference (ESREL 2012), Helsinki, Finland (2012).
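
    As a complement to the subset simulation sketch above, the snippet below illustrates the kriging ingredient only: a Gaussian process fitted to a handful of evaluations gives, at each candidate point, a posterior probability of exceeding the threshold, and the next evaluation can be placed where that probability is most ambiguous. The toy model and the simple selection rule are assumptions for illustration, not the criterion used in the paper.

# Illustrative kriging sketch (not the paper's exact criterion): the GP posterior
# gives, at each candidate point, the probability that f exceeds the threshold,
# and a new evaluation is placed where this probability is closest to 1/2.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def f(x):
    return np.sin(3 * x) + x          # cheap toy stand-in for an expensive simulator

threshold = 1.0
X_train = np.linspace(0.0, 2.0, 5).reshape(-1, 1)
y_train = f(X_train).ravel()

gp = GaussianProcessRegressor(normalize_y=True).fit(X_train, y_train)

X_cand = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
mean, std = gp.predict(X_cand, return_std=True)
p_exceed = norm.sf(threshold, loc=mean, scale=np.maximum(std, 1e-12))

# Next evaluation where the excursion is most ambiguous under the GP posterior.
x_next = X_cand[np.argmin(np.abs(p_exceed - 0.5))]
print(x_next)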

    A Bayesian approach to constrained single- and multi-objective optimization

    This article addresses the problem of derivative-free (single- or multi-objective) optimization subject to multiple inequality constraints. Both the objective and constraint functions are assumed to be smooth, non-linear and expensive to evaluate. As a consequence, the number of evaluations that can be used to carry out the optimization is very limited, as in complex industrial design optimization problems. The method we propose to overcome this difficulty has its roots in both the Bayesian and the multi-objective optimization literatures. More specifically, an extended domination rule is used to handle objectives and constraints in a unified way, and a corresponding expected hyper-volume improvement sampling criterion is proposed. This new criterion is naturally adapted to the search for a feasible point when none is available, and reduces to existing Bayesian sampling criteria, namely the classical Expected Improvement (EI) criterion and some of its constrained and multi-objective extensions, as soon as at least one feasible point is available. The calculation and optimization of the criterion are performed using Sequential Monte Carlo techniques. In particular, an algorithm similar to the subset simulation method, which is well known in the field of structural reliability, is used to estimate the criterion. The method, which we call BMOO (for Bayesian Multi-Objective Optimization), is compared to state-of-the-art algorithms for single- and multi-objective constrained optimization.
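
    The short Python sketch below shows one common way to encode such an extended domination rule, in the spirit of the description above (the paper's exact rule, and the hypervolume-improvement criterion built on it, may differ): feasible points dominate infeasible ones, infeasible points are compared through their constraint violations, and feasible points through their objectives.

# Minimal constrained-domination sketch (illustrative assumption, not the exact
# rule from the paper). Minimization is assumed, with constraints written c <= 0.
import numpy as np

def dominates(obj_a, con_a, obj_b, con_b):
    """True if candidate A dominates candidate B."""
    viol_a = np.maximum(con_a, 0.0)
    viol_b = np.maximum(con_b, 0.0)
    feas_a, feas_b = not viol_a.any(), not viol_b.any()
    if feas_a and not feas_b:
        return True
    if not feas_a and not feas_b:        # compare violation vectors Pareto-wise
        return np.all(viol_a <= viol_b) and np.any(viol_a < viol_b)
    if feas_a and feas_b:                # compare objective vectors Pareto-wise
        return np.all(obj_a <= obj_b) and np.any(obj_a < obj_b)
    return False

# Example: A is feasible, B violates its constraint, so A dominates B.
print(dominates(np.array([1.0, 2.0]), np.array([-0.1]),
                np.array([0.5, 1.0]), np.array([0.3])))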

    Stability of trajectories for N-particles dynamics with singular potential

    We study the stability in finite time of the trajectories of interacting particles. Our aim is to show that, on average and uniformly in the number of particles, two trajectories whose initial positions in phase space are close remain close enough at later times. For potentials less singular than the classical electrostatic kernel, we are able to prove such a result for initial positions and velocities distributed according to the Gibbs equilibrium of the system.

    Bayesian Robot Programming

    We propose a new method to program robots based on Bayesian inference and learning. The capacities of this programming method are demonstrated through a succession of increasingly complex experiments. Starting from the learning of simple reactive behaviors, we present instances of behavior combination, sensor fusion, hierarchical behavior composition, situation recognition and temporal sequencing. This series of experiments comprises the steps in the incremental development of a complex robot program. The advantages and drawbacks of this approach are discussed along with these different experiments and summed up in the conclusion. These different robotics programs may be seen as an illustration of probabilistic programming, applicable whenever one must deal with problems based on uncertain or incomplete knowledge. The scope of possible applications is obviously much broader than robotics.
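
    As a toy illustration of the kind of Bayesian computation such programs rest on (not the authors' framework or notation), the snippet below fuses two noisy range measurements under Gaussian noise assumptions and a flat prior, yielding a precision-weighted estimate; all numerical values are made up for the example.

# Toy Bayesian sensor fusion: two Gaussian measurements of the same distance are
# combined into a precision-weighted posterior mean (flat prior assumed).
z1, sigma1 = 2.10, 0.30    # sensor 1 reading and noise standard deviation
z2, sigma2 = 2.35, 0.20    # sensor 2 reading and noise standard deviation

w1, w2 = 1 / sigma1**2, 1 / sigma2**2
fused_mean = (w1 * z1 + w2 * z2) / (w1 + w2)
fused_std = (w1 + w2) ** -0.5
print(fused_mean, fused_std)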

    Energy cascade and the four-fifths law in superfluid turbulence

    The 4/5-law of turbulence, which characterizes the energy cascade from large to small eddies at high Reynolds numbers in classical fluids, is verified experimentally in a superfluid $^4$He wind tunnel, operated down to 1.56 K and up to $R_\lambda \sim 1640$. The result is corroborated by high-resolution simulations of Landau-Tisza's two-fluid model down to 1.15 K, corresponding to a residual normal-fluid concentration below 3%, but with a lower Reynolds number of order $R_\lambda \sim 100$. Although the Kármán-Howarth equation (including a viscous term) is not valid a priori in a superfluid, it is found that it provides an empirical description of the deviation from the ideal 4/5-law at small scales and allows us to identify an effective viscosity for the superfluid, whose value matches the kinematic viscosity of the normal fluid regardless of its concentration. Comment: 6 pages, 7 figures.
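
    For reference, the 4/5-law tested here is, in its classical (Kolmogorov 1941) form, an exact inertial-range relation between the third-order longitudinal velocity structure function and the mean energy dissipation rate per unit mass $\varepsilon$:

% Kolmogorov's four-fifths law for the third-order longitudinal structure
% function, valid for separations r in the inertial range.
\[
  S_3(r) \;=\; \Big\langle \big[\, u_{\parallel}(\mathbf{x}+\mathbf{r}) - u_{\parallel}(\mathbf{x}) \,\big]^{3} \Big\rangle
  \;=\; -\tfrac{4}{5}\, \varepsilon\, r .
\]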