
    Labour and Employment in a Globalising World. Autonomy, Collectives and Political Dilemmas.

    This collection of essays provides new insight into the complex realities of labour and employment market globalisation. Its pluridisciplinary and multi-faceted understanding of globalisation is based upon on-the-ground research in ten countries, from South to North. Its contextualisation of globalising labour and employment markets, perceived as a process, constitutes the originality of the book. Globalisation is understood as a single process of both standardisation and differentiation, which also underscores its political agenda. The globalising process incorporates convergent and somewhat undifferentiated Southern and Northern situations in labour and employment. Strong political perspectives thereby emerge that help to understand changes in current capitalism and question the longstanding North-to-South paradigm. As labour and employment markets standardise and differentiate, what other problematic threads can be pulled to strengthen the hypothesis that trends converge within a single globalising process? The comparative concepts and tools proposed in this volume help to answer these questions.
    Keywords: globalisation; employment market; labour market

    The Weight Function in the Subtree Kernel is Decisive

    Tree data are ubiquitous because they model a large variety of situations, e.g., the architecture of plants, the secondary structure of RNA, or the hierarchy of XML files. Nevertheless, the analysis of these non-Euclidean data is difficult per se. In this paper, we focus on the subtree kernel, a convolution kernel for tree data introduced by Vishwanathan and Smola in the early 2000s. More precisely, we investigate the influence of the weight function from a theoretical perspective and in real data applications. We establish on a two-class stochastic model that the performance of the subtree kernel is improved when the weight of leaves vanishes, which motivates the definition of a new weight function, learned from the data rather than fixed by the user as is usually done. To this end, we define a unified framework for computing the subtree kernel from ordered or unordered trees that is particularly suitable for tuning parameters. We show through eight real data classification problems the great efficiency of our approach, in particular for small datasets, which also underscores the high importance of the weight function. Finally, a visualization tool for the significant features is derived.
    Comment: 36 pages
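    The role of the weight function described in this abstract can be sketched with a toy subtree kernel for ordered trees (an illustrative sketch only, not the paper's implementation; trees are nested tuples, leaves are `()`, and the weight is assumed to be a function of subtree height):

    ```python
    from collections import Counter

    def collect_subtrees(tree, bag, heights):
        """Record every complete subtree of an ordered tree (a nested tuple,
        with () as a leaf) as a canonical string; return the root's string."""
        child_keys = tuple(collect_subtrees(c, bag, heights) for c in tree)
        key = "(" + ",".join(child_keys) + ")"
        heights[key] = 1 + max((heights[k] for k in child_keys), default=-1)
        bag[key] += 1
        return key

    def subtree_kernel(t1, t2, weight):
        """K(t1, t2) = sum over shared subtrees s of weight(height(s)) * n1(s) * n2(s)."""
        b1, b2, heights = Counter(), Counter(), {}
        collect_subtrees(t1, b1, heights)
        collect_subtrees(t2, b2, heights)
        return sum(weight(heights[s]) * b1[s] * b2[s] for s in b1.keys() & b2.keys())
    ```

    For `t1 = ((), ((), ()))` and `t2 = ((), ())`, a constant weight gives `subtree_kernel(t1, t2, lambda h: 1.0) == 7.0` (three leaves match two leaves, plus one shared cherry), while a weight that vanishes on leaves, `lambda h: 0.0 if h == 0 else 1.0`, gives `1.0` — the leaf contributions disappear, which is the regime the abstract argues improves performance.
    
    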

    Optimal choice among a class of nonparametric estimators of the jump rate for piecewise-deterministic Markov processes

    A piecewise-deterministic Markov process is a stochastic process whose behavior is governed by an ordinary differential equation punctuated by random jumps occurring at random times. We focus on the nonparametric estimation problem of the jump rate for such a stochastic model observed within a long time interval under an ergodicity condition. We introduce an uncountable class (indexed by the deterministic flow) of recursive kernel estimators of the jump rate, and we establish their strong pointwise consistency as well as their asymptotic normality. We propose to choose from this class the estimator with the minimal variance, which is unfortunately unknown and thus remains to be estimated. We also discuss the choice of the bandwidth parameters by cross-validation methods.
    Comment: 36 pages, 18 figures
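    The estimation problem can be illustrated on a toy PDMP (an assumption for illustration, not the paper's model or its recursive estimators): the state grows at unit speed, jumps occur at state-dependent rate lam(x) = x, and each jump resets the state to 0. A simple boxcar occupation-time estimator then recovers the jump rate from the observed trajectory:

    ```python
    import math, random

    random.seed(0)

    def simulate_cycles(n_cycles):
        """Toy PDMP: flow dx/dt = 1, jump rate lam(x) = x, reset to 0 after a jump.
        Since each cycle starts at 0, the cycle length equals the pre-jump position.
        Sampling: solve integrated hazard t^2 / 2 = E with E ~ Exp(1)."""
        return [math.sqrt(2.0 * random.expovariate(1.0)) for _ in range(n_cycles)]

    def jump_rate_estimate(taus, x, h):
        """Boxcar-kernel estimator: number of jumps observed in [x-h, x+h]
        divided by the occupation time of that band (time = distance here,
        because the flow has unit speed)."""
        jumps = sum(1 for t in taus if x - h <= t <= x + h)
        occ = sum(max(0.0, min(t, x + h) - (x - h)) for t in taus if t > x - h)
        return jumps / occ

    taus = simulate_cycles(20000)
    est = jump_rate_estimate(taus, 1.0, 0.05)   # should be close to lam(1.0) = 1
    ```

    The paper's estimators are recursive and indexed by the flow; this fixed-bandwidth ratio is only the simplest member of that family of ideas, shown to make the object being estimated concrete.
    
    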

    Multiple Testing and Variable Selection along Least Angle Regression's path

    In this article, we investigate multiple testing and variable selection using the Least Angle Regression (LARS) algorithm in high dimensions under the Gaussian noise assumption. LARS is known to produce a piecewise-affine solution path with change points referred to as the knots of the LARS path. The cornerstone of the present work is a closed-form expression for the exact joint law of K-tuples of knots conditional on the variables selected by LARS, namely the so-called post-selection joint law of the LARS knots. Numerical experiments demonstrate the perfect fit of our findings. Our main contributions are threefold. First, we build testing procedures on variables entering the model along the LARS path in the general design case when the noise level may be unknown. These testing procedures are referred to as the Generalized t-Spacing tests (GtSt), and we prove that they have exact non-asymptotic level (i.e., the Type I error is exactly controlled). In that way, we extend the work of Taylor et al. (2014), where the spacing test works for consecutive knots and known variance. Second, we introduce a new exact multiple false negatives test after model selection in the general design case when the noise level may be unknown. We prove that this testing procedure has exact non-asymptotic level for general design and unknown noise level. Last, we give an exact control of the false discovery rate (FDR) under the orthogonal design assumption. Monte Carlo simulations and a real data experiment are provided to illustrate our results in this case. Of independent interest, we introduce an equivalent formulation of the LARS algorithm based on a recursive function.
    Comment: 62 pages; new: FDR control and power comparison between Knockoff, FCD, SLOPE and our proposed method; new: the introduction has been revised and now presents a synthetic overview of the main results, which we believe brings new insights compared to the previous version
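    The orthogonal-design case mentioned in the abstract is the one where the LARS knots have a simple closed form: they are the order statistics of |X^T y|, and variables enter the path in that order. A minimal sketch of this fact (illustrative; the design, signal strength, and seed are assumptions, not from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Orthonormal design: with X^T X = I, X^T y = beta + N(0, I), so the LARS
    # knots are simply the sorted values of |X^T y|.
    n, p = 60, 20
    X, _ = np.linalg.qr(rng.standard_normal((n, p)))   # orthonormal columns
    beta = np.zeros(p)
    beta[:3] = 8.0                                     # three strong signals
    y = X @ beta + rng.standard_normal(n)

    corr = np.abs(X.T @ y)
    order = np.argsort(corr)[::-1]                     # entry order along the path
    knots = corr[order]                                # lambda_1 >= ... >= lambda_p
    ```

    With strong signals, the first three knots correspond to the three true variables, which is the setting in which the paper's FDR control and knot laws are easiest to visualize.
    
    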

    Integral estimation based on Markovian design

    Suppose that a mobile sensor describes a Markovian trajectory in the ambient space. At each time, the sensor measures an attribute of interest, e.g., the temperature. Using only the location history of the sensor and the associated measurements, the aim is to estimate the average value of the attribute over the space. In contrast to classical probabilistic integration methods, e.g., Monte Carlo, the proposed approach does not require any knowledge of the distribution of the sensor trajectory. Probabilistic bounds on the convergence rates of the estimator are established. These rates are better than the traditional "root-n" rate, where n is the sample size, achieved by other probabilistic integration methods. For finite sample sizes, the good behaviour of the procedure is demonstrated through simulations, and an application to the evaluation of the average temperature of the oceans is considered.
    Comment: 45 pages
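    The idea of integrating without knowing the trajectory's law can be sketched in one dimension (an illustrative toy under assumed choices of field, trajectory, and estimator, not the paper's procedure): the estimator only sorts the visited locations and weights each measurement by the spacing it covers, so the sampling distribution never appears.

    ```python
    import math, random

    random.seed(1)

    def f(x):
        """Measured attribute along the path; its true mean over [0, 1] is 0.5."""
        return math.sin(2.0 * math.pi * x) ** 2

    # Sensor trajectory: a clamped random walk on [0, 1]. Its law is never used
    # by the estimator -- only the visited locations and the measurements are.
    x, path = 0.3, []
    for _ in range(20000):
        x = min(1.0, max(0.0, x + random.uniform(-0.05, 0.05)))
        path.append(x)

    # Spacing-weighted (trapezoidal) estimate of the spatial average: sort the
    # visited locations and integrate f over the induced partition of [0, 1].
    pts = [0.0] + sorted(path) + [1.0]
    est = sum((b - a) * (f(a) + f(b)) / 2.0 for a, b in zip(pts, pts[1:]))
    ```

    Because the weights are spacings rather than 1/n, the error is driven by the largest gap left by the trajectory, not by Monte Carlo variance, which is the intuition behind beating the root-n rate.
    
    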

    Asymptotic formula for the tail of the maximum of smooth stationary Gaussian fields on non-locally convex sets

    In this paper we consider the distribution of the maximum of a Gaussian field defined on non-locally convex sets. Adler and Taylor, and Azaïs and Wschebor, give the expansions in the locally convex case. The present paper generalizes their results to the non-locally convex case by giving a full expansion in dimension 2 and some generalizations in higher dimensions. For a given class of sets, a Steiner formula is established, and the correspondence between this formula and the tail of the maximum is proved. The main tool is a recent result of Azaïs and Wschebor showing that, under some conditions, the excursion set is close to a ball with a random radius. Examples are given in dimension 2 and higher.
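    The object studied here, P(max over S of X > u), can be made concrete with a brute-force Monte Carlo baseline against which such expansions are usually checked (illustrative only: the covariance, the discretization, and the plain-square set S are assumptions, and a square is locally convex, unlike the sets treated in the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Smooth stationary Gaussian field on a grid over S = [0, 1]^2 with a
    # squared-exponential covariance (correlation length 0.3, assumed here).
    m = 10
    xs = np.linspace(0.0, 1.0, m)
    pts = np.array([(a, b) for a in xs for b in xs])
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
    cov = np.exp(-d2 / (2.0 * 0.3 ** 2))
    L = np.linalg.cholesky(cov + 1e-6 * np.eye(m * m))   # jitter for stability

    # Monte Carlo estimate of the tail of the maximum, P(max_S X > u).
    u, reps = 2.0, 4000
    fields = L @ rng.standard_normal((m * m, reps))
    p_hat = float(np.mean(fields.max(axis=0) > u))
    ```

    The asymptotic formulas in the paper replace such simulation with an expansion in u whose geometric coefficients come from the Steiner formula of the set.
    
    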

    Power of the Spacing test for Least-Angle Regression

    Recent advances in post-selection inference have shown that conditional testing is relevant and tractable in high dimensions. In the Gaussian linear model, further works have derived unconditional test statistics such as the Kac-Rice pivot for general penalized problems. In order to test the global null, a prominent offspring of this breakthrough is the spacing test, which accounts for the relative separation between the first two knots of the celebrated least-angle regression (LARS) algorithm. However, no results have been shown regarding the distribution of these test statistics under the alternative. For the first time, this paper addresses this important issue for the spacing test and shows that it is unconditionally unbiased. Furthermore, we provide the first extension of the spacing test to the setting of unknown noise variance. More precisely, we investigate the power of the spacing test for LARS and prove that it is unbiased: its power is always greater than or equal to the significance level α. In particular, we describe the power of this test under various scenarios: we prove that its rejection region is optimal when the predictors are orthogonal; as the level α goes to zero, we show that the probability of getting a true positive is much greater than α; and we give a detailed description of its power in the case of two predictors. Moreover, we numerically compare the spacing test for LARS with Pearson's chi-squared test (goodness of fit).
    Comment: 22 pages, 8 figures
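    In the orthogonal case with known unit variance, the spacing test has an explicit form: under the global null the first two LARS knots are the two largest |z_j| for z_j iid N(0,1), and the statistic Phi-bar(lambda_1)/Phi-bar(lambda_2) is exactly Uniform(0,1), so rejecting when it falls below α gives exact level. A quick Monte Carlo check of this null distribution (a sketch of the orthogonal special case, not the paper's general construction):

    ```python
    import math, random

    random.seed(3)

    def survival(u):
        """Standard normal survival function, Phi-bar(u)."""
        return 0.5 * math.erfc(u / math.sqrt(2.0))

    def spacing_stat(p):
        """Orthogonal design, global null, known unit variance: the first two
        LARS knots are the two largest |z_j| with z_j iid N(0, 1), and
        Phi-bar(lambda_1) / Phi-bar(lambda_2) is exactly Uniform(0, 1)."""
        z = sorted((abs(random.gauss(0.0, 1.0)) for _ in range(p)), reverse=True)
        return survival(z[0]) / survival(z[1])

    stats = [spacing_stat(10) for _ in range(5000)]
    mean = sum(stats) / len(stats)                        # near 1/2 under the null
    level = sum(s <= 0.05 for s in stats) / len(stats)    # rejection rate near alpha
    ```

    The uniformity follows from the order-statistic fact that, given the second-largest of p iid draws, the largest is a draw from the tail beyond it; the paper's contribution is what happens under the alternative, where the statistic shifts toward 0.
    
    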