
    Who gets caught for corruption when corruption is pervasive? Evidence from China’s anti-bribery blacklist

    © 2016 Informa UK Limited, trading as Taylor & Francis Group. This article empirically investigates why, in a country where corruption is pervasive, only a minority of firms are caught for bribery while the majority get away with it. By matching manufacturing firms to a blacklist of bribers in the healthcare sector of a province in China, we show that the government-led blacklisting is selective: while economically more visible firms are slightly more likely to be blacklisted, state-controlled firms are the most protected relative to their private and foreign competitors. Our findings indicate that a government can use regulations to impose its preferences when the rule of law is weak and the rule of government is strong.

    Weak solutions for forward--backward SDEs--a martingale problem approach

    In this paper, we propose a new notion of Forward--Backward Martingale Problem (FBMP) and study its relationship with weak solutions to forward--backward stochastic differential equations (FBSDEs). The FBMP extends the idea of the well-known (forward) martingale problem of Stroock and Varadhan, but it is structured specifically to fit the nature of an FBSDE. We first prove a general sufficient condition for the existence of a solution to the FBMP. In the Markovian case with uniformly continuous coefficients, we show that a weak solution to the FBSDE (or equivalently, a solution to the FBMP) does exist. Moreover, we prove that uniqueness for the FBMP (and hence uniqueness of the weak solution) is determined by the uniqueness of the viscosity solution of the corresponding quasilinear PDE.

    Comment: Published at http://dx.doi.org/10.1214/08-AOP0383 in the Annals of Probability (http://www.imstat.org/aop/) by the Institute of Mathematical Statistics (http://www.imstat.org).
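    For readers unfamiliar with the setting, a coupled FBSDE and its associated quasilinear PDE are commonly written as follows. This is standard textbook notation, not taken from the paper; b, σ, f, and g stand for the drift, diffusion, driver, and terminal functions.

```latex
% Coupled FBSDE on [0, T] driven by a Brownian motion W:
\begin{aligned}
X_t &= x + \int_0^t b(s, X_s, Y_s, Z_s)\,ds + \int_0^t \sigma(s, X_s, Y_s)\,dW_s,\\
Y_t &= g(X_T) + \int_t^T f(s, X_s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dW_s.
\end{aligned}
% In the Markovian case one expects Y_t = u(t, X_t), with u solving the
% quasilinear PDE (terminal condition u(T, x) = g(x)):
\partial_t u + b\,\partial_x u
  + \tfrac{1}{2}\,\mathrm{tr}\!\left[\sigma\sigma^{\top}\,\partial_{xx} u\right]
  + f\bigl(t, x, u, \sigma^{\top}\partial_x u\bigr) = 0.
```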

    Covariate assisted screening and estimation

    Consider a linear model Y = Xβ + z, where X = X_{n,p} and z ∼ N(0, I_n). The vector β is unknown but is sparse in the sense that most of its coordinates are 0. The main interest is to separate its nonzero coordinates from the zero ones (i.e., variable selection). Motivated by examples in long-memory time series (Fan and Yao [Nonlinear Time Series: Nonparametric and Parametric Methods (2003) Springer]) and the change-point problem (Bhattacharya [In Change-Point Problems (South Hadley, MA, 1992) (1994) 28-56 IMS]), we are primarily interested in the case where the Gram matrix G = X′X is nonsparse but sparsifiable by a finite-order linear filter. We focus on the regime where signals are both rare and weak, so that successful variable selection is very challenging but still possible. We approach this problem by a new procedure called covariate assisted screening and estimation (CASE). CASE first uses a linear filter to reduce the original setting to a new regression model whose Gram (covariance) matrix is sparse. The new covariance matrix induces a sparse graph, which guides us to conduct multivariate screening without visiting all the submodels. By interacting with the signal sparsity, the graph enables us to decompose the original problem into many separated small-size subproblems (if only we knew where they are!). Linear filtering also induces a so-called problem of information leakage, which can be overcome by the newly introduced patching technique. Together, these give rise to CASE, a two-stage screen and clean [Fan and Song Ann. Statist. 38 (2010) 3567-3604; Wasserman and Roeder Ann. Statist. 37 (2009) 2178-2201] procedure, where we first identify candidates of these submodels by patching and screening, and then re-examine each candidate to remove false positives.

    Comment: Published at http://dx.doi.org/10.1214/14-AOS1243 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
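    The sparsification idea can be illustrated on a toy change-point design, where the Gram matrix is completely dense but a first-order difference filter makes it diagonal. This is a minimal sketch under illustrative choices (step-function design, difference filter), not the paper's code.

```python
import numpy as np

n, p = 100, 10
# Change-point design: column j is the step function 1{i >= j},
# so the Gram matrix G = X'X is completely dense.
X = np.tril(np.ones((n, p)))
G = X.T @ X

# First-order difference filter D, with rows e_j - e_{j+1}.
D = np.eye(p - 1, p) - np.eye(p - 1, p, k=1)

# Filtering sparsifies the Gram matrix: here D G D' is the identity.
H = D @ G @ D.T
print(np.count_nonzero(G))  # 100: every entry of G is nonzero
print(np.count_nonzero(H))  # 9: only the diagonal survives
```

    The filtered Gram matrix induces a sparse (here, edgeless) graph, which is what lets the screening step avoid visiting all submodels.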

    A Deep and Autoregressive Approach for Topic Modeling of Multimodal Data

    Topic modeling based on latent Dirichlet allocation (LDA) has been a framework of choice to deal with multimodal data, such as in image annotation tasks. Another popular approach to model multimodal data is through deep neural networks, such as the deep Boltzmann machine (DBM). Recently, a new type of topic model called the Document Neural Autoregressive Distribution Estimator (DocNADE) was proposed and demonstrated state-of-the-art performance for text document modeling. In this work, we show how to successfully apply and extend this model to multimodal data, such as simultaneous image classification and annotation. First, we propose SupDocNADE, a supervised extension of DocNADE that increases the discriminative power of the learned hidden topic features, and show how to employ it to learn a joint representation from image visual words, annotation words, and class label information. We test our model on the LabelMe and UIUC-Sports data sets and show that it compares favorably to other topic models. Second, we propose a deep extension of our model and provide an efficient way of training the deep model. Experimental results show that our deep model outperforms its shallow version and reaches state-of-the-art performance on the Multimedia Information Retrieval (MIR) Flickr data set.

    Comment: 24 pages, 10 figures. A version has been accepted by TPAMI on Aug 4th, 2015. Added a footnote about how to train the model in practice in Section 5.1. arXiv admin note: substantial text overlap with arXiv:1305.530
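    The autoregressive factorization DocNADE builds on, p(v) = ∏_i p(v_i | v_<i), can be sketched in a few lines. This toy version uses random weights, sigmoid hidden units, and a full softmax over the vocabulary, which simplifies DocNADE's tree-structured output layer; all sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
V, H = 50, 8                      # toy vocabulary size and hidden units
W = rng.normal(0, 0.1, (H, V))    # encoder weights (one column per word)
U = rng.normal(0, 0.1, (V, H))    # decoder weights
b, c = np.zeros(V), np.zeros(H)   # output and hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def doc_log_likelihood(words):
    """log p(v) = sum_i log p(v_i | v_<i), computed autoregressively."""
    ll, acc = 0.0, np.zeros(H)    # acc accumulates W[:, w] over preceding words
    for w in words:
        h = sigmoid(c + acc)      # hidden state from the words seen so far
        logits = b + U @ h
        logits -= logits.max()    # numerical stability for the softmax
        p = np.exp(logits)
        p /= p.sum()
        ll += np.log(p[w])
        acc += W[:, w]            # word order matters only through this sum
    return ll

doc = rng.integers(0, V, size=12)
ll = doc_log_likelihood(doc)
print(ll)
```

    Because the hidden state depends on preceding words only through a running sum of weight columns, one pass over the document scores all conditionals, which is what makes NADE-style models efficient.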