
    Benchmarking of patents: An application of GAM methodology

    The present article reexamines some of the issues regarding the benchmarking of patents using the NBER database of U.S. patents, generalizing a parametric citation model and estimating it with GAM methodology. The main conclusion is that the estimated effects differ considerably from sector to sector, and that these differences can be estimated nonparametrically but not by the parametric dummy-variable approach.
    Keywords: USPTO, patent benchmarking, GAM
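
    The abstract names the method but not an implementation. Below is a minimal sketch of a sector-aware citation GAM using the Python pygam library on synthetic data; the variable names and the data-generating process are illustrative, not from the NBER data.

        import numpy as np
        from pygam import PoissonGAM, s, f

        # Illustrative citation counts by grant year and technology sector;
        # both the names and the generating process are hypothetical.
        rng = np.random.default_rng(0)
        n = 2000
        year = rng.uniform(1976, 2006, n)
        sector = rng.integers(0, 6, n)                 # six technology sectors
        citations = rng.poisson(np.exp(0.02 * (year - 1976) + 0.1 * sector))
        X = np.column_stack([year, sector])

        # A smooth of grant year plus a sector factor: the nonparametric
        # counterpart of a parametric dummy-variable citation model.
        gam = PoissonGAM(s(0) + f(1)).fit(X, citations)
        gam.summary()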

    Solving Inverse Problems with Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity

    A general framework for solving image inverse problems is introduced in this paper. The approach is based on Gaussian mixture models estimated via a computationally efficient MAP-EM algorithm. A dual mathematical interpretation of the proposed framework in terms of structured sparse estimation is described, showing that the resulting piecewise linear estimate stabilizes the estimation compared to traditional sparse inverse-problem techniques. This interpretation also suggests an effective, dictionary-motivated initialization for the MAP-EM algorithm. We demonstrate that in a number of image inverse problems, including inpainting, zooming, and deblurring, the same algorithm produces results that are either equal to, often significantly better than, or within a very small margin of the best published ones, at a lower computational cost.
    Comment: 30 pages
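
    The abstract describes the MAP-EM loop only at a high level; the following is a schematic NumPy sketch under simplifying assumptions (a single known degradation operator A, known noise variance, invertible covariances), with illustrative names throughout.

        import numpy as np

        def map_em(Y, A, sigma2, mus, Sigmas, n_iter=10):
            """Sketch of a GMM-based MAP-EM loop: alternately estimate each
            signal with its best-fitting Gaussian (a piecewise linear,
            Wiener-type step) and refit the Gaussian models.
            Y: (n, m) degraded patches; A: (m, d) degradation operator."""
            n = Y.shape[0]
            K, d = mus.shape
            X = np.zeros((n, d))
            labels = np.zeros(n, dtype=int)
            for _ in range(n_iter):
                # Precompute each Gaussian's linear (Wiener-type) estimator.
                Ps = [np.linalg.inv(A.T @ A / sigma2 + np.linalg.inv(S)) for S in Sigmas]
                # E-step: apply every estimator to each patch and keep the
                # Gaussian with the best (unnormalized) MAP objective.
                for i, y in enumerate(Y):
                    best_obj = -np.inf
                    for k in range(K):
                        x = Ps[k] @ (A.T @ y / sigma2 + np.linalg.solve(Sigmas[k], mus[k]))
                        r = y - A @ x
                        obj = -(r @ r) / sigma2 \
                              - (x - mus[k]) @ np.linalg.solve(Sigmas[k], x - mus[k]) \
                              - np.linalg.slogdet(Sigmas[k])[1]
                        if obj > best_obj:
                            best_obj, labels[i], X[i] = obj, k, x
                # M-step: refit each Gaussian from its assigned estimates.
                for k in range(K):
                    Z = X[labels == k]
                    if len(Z) > 1:
                        mus[k] = Z.mean(axis=0)
                        Sigmas[k] = np.cov(Z, rowvar=False) + 1e-6 * np.eye(d)
            return X, labels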

    Sparse vector Markov switching autoregressive models. Application to multivariate time series of temperature

    Multivariate time series are of interest in many fields, including economics and the environment. The dynamical processes occurring in these domains often exhibit regimes, so it is common to describe them using Markov-switching vector autoregressive processes. However, the estimation of such models is difficult, even when the dimension is not very high, because of the number of parameters involved. In this paper we propose to use a Smoothly Clipped Absolute Deviation (SCAD) penalization of the likelihood to shrink the parameters. The Expectation-Maximization algorithm built to maximize the penalized likelihood is described in detail and tested on daily mean temperature time series.
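
    The abstract does not spell out the penalized M-step, but the SCAD penalty itself has a standard closed form (Fan and Li, 2001), and its scalar thresholding rule is the usual building block of coordinate-wise penalized updates. A sketch follows; a = 3.7 is the conventional choice, not a value from the paper.

        import numpy as np

        def scad_penalty(t, lam, a=3.7):
            """SCAD penalty of Fan and Li (2001), applied elementwise to |t|."""
            t = np.abs(t)
            mid = (t > lam) & (t <= a * lam)
            out = np.where(t <= lam, lam * t, 0.0)
            out = np.where(mid, (2 * a * lam * t - t ** 2 - lam ** 2) / (2 * (a - 1)), out)
            return np.where(t > a * lam, lam ** 2 * (a + 1) / 2, out)

        def scad_threshold(z, lam, a=3.7):
            """Closed-form minimizer of 0.5 * (b - z)**2 + scad_penalty(b, lam, a)."""
            az = abs(z)
            if az <= 2 * lam:                        # soft-thresholding zone
                return np.sign(z) * max(az - lam, 0.0)
            if az <= a * lam:                        # linear interpolation zone
                return ((a - 1) * z - np.sign(z) * a * lam) / (a - 2)
            return z                                 # large coefficients unshrunk

        # Unlike the lasso, SCAD leaves large coefficients nearly unbiased:
        print(scad_threshold(0.5, lam=0.3))          # 0.2 (shrunk)
        print(scad_threshold(5.0, lam=0.3))          # 5.0 (untouched)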

    Evaluating Microarray-based Classifiers: An Overview

    For the last eight years, microarray-based class prediction has been the subject of numerous publications in medicine, bioinformatics and statistics journals. However, in many articles the assessment of classification accuracy is carried out with suboptimal procedures and receives little attention. In this paper, we carefully review various statistical aspects of classifier evaluation and validation from a practical point of view. The main topics addressed are accuracy measures, error-rate estimation procedures, variable selection, choice of classifiers, and validation strategy.
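
    The review itself is software-agnostic. One concrete instance of the validation pitfalls it discusses is selection bias, avoided by keeping gene selection inside every cross-validation fold; this is sketched below with scikit-learn on synthetic data (sample sizes and k are arbitrary).

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        # Synthetic stand-in for microarray data: many genes, few samples.
        X, y = make_classification(n_samples=80, n_features=2000,
                                   n_informative=20, random_state=0)

        # Gene selection lives inside the pipeline, so it is refit within
        # every fold and the test fold never influences which genes are kept;
        # selecting genes on the full data first would inflate the estimate.
        clf = make_pipeline(SelectKBest(f_classif, k=50), LinearSVC())
        scores = cross_val_score(clf, X, y, cv=10)
        print(f"CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")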

    Three essays in financial econometrics

    Sparse Weighted Norm Minimum Variance Portfolio. In this paper, I propose a weighted L1 and squared L2 norm penalty in portfolio optimization to improve portfolio performance as the number of available assets N grows large. I show that under certain conditions, the realized risk of the portfolio obtained from this strategy will asymptotically be less than that of some benchmark portfolios with high probability. An intuitive interpretation of why holding fewer assets may be beneficial in the high-dimensional situation is built on a constraint between the sparsity of the optimal weight vector and the realized risk. The theoretical results also imply that the penalty parameters for the weighted norm penalty can be specified as a function of N and the sample size n. An efficient coordinate-wise descent algorithm is then introduced to solve the penalized weighted norm portfolio optimization problem (see the sketch after this abstract). I find that the performance of the weighted norm strategy dominates other benchmarks for the Fama-French 100 size and book-to-market portfolios, but results are mixed for individual stocks. Several novel alternative penalties are also proposed, and their performance is shown to be comparable to the weighted norm strategy.

    Bond Variance Risk Premia (joint work with Philippe Mueller and Andrea Vedolin). Using data from 1983 to 2010, we propose a new fear measure for Treasury markets, akin to the VIX for equities, labeled TIV. We show that TIV explains one third of the time variation in funding liquidity and that the spread between the VIX and TIV captures flight to quality. We then construct Treasury bond variance risk premia as the difference between the implied variance and an expected variance estimate based on autoregressive models. Bond variance risk premia display pronounced spikes during crisis periods. We show that variance risk premia encompass a broad spectrum of macroeconomic uncertainty: uncertainty about the nominal and the real side of the economy increases variance risk premia, but uncertainty about monetary policy has a strongly negative effect. We document that bond variance risk premia predict excess returns on Treasuries, stocks, corporate bonds and mortgage-backed securities, both in-sample and out-of-sample. Furthermore, this predictability is not subsumed by other standard predictors.

    Testing Jumps via False Discovery Rate Control. Many recently developed nonparametric jump tests can be viewed as multiple hypothesis testing problems. For such tests, it is well known that controlling the type I error often unavoidably produces a large proportion of erroneous rejections, and the situation becomes even worse when jump occurrence is a rare event. To obtain more reliable results, we aim to control the false discovery rate (FDR), an efficient compound error measure for erroneous rejections in multiple testing problems. We perform the test with a nonparametric statistic proposed by Barndorff-Nielsen and Shephard (2006) and control the FDR with the procedure of Benjamini and Hochberg (1995). We provide asymptotic results for the FDR control. Simulations examine the relevant theoretical results and demonstrate the advantages of controlling the FDR. The hybrid approach is then applied to an empirical analysis of two benchmark stock indices with high-frequency data.
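
    For the first essay, the abstract names a coordinate-wise descent algorithm but not its exact objective or constraint handling. The sketch below minimizes w'Sw plus the weighted L1 and squared L2 penalties and renormalizes to the budget constraint afterwards; that simplification, and all parameter values, are assumptions for illustration.

        import numpy as np

        def soft(x, t):
            return np.sign(x) * max(abs(x) - t, 0.0)

        def weighted_norm_portfolio(S, d, lam1, lam2, n_iter=200, tol=1e-8):
            """Coordinate descent for min_w w'Sw + lam1*sum(d_i*|w_i|) + lam2*||w||^2.
            S: sample covariance; d: per-asset L1 weights. The budget constraint
            is imposed by renormalizing at the end (a simplification)."""
            N = S.shape[0]
            w = np.full(N, 1.0 / N)
            for _ in range(n_iter):
                w_old = w.copy()
                for i in range(N):
                    c = S[i] @ w - S[i, i] * w[i]    # cross terms, excluding asset i
                    w[i] = soft(-c, lam1 * d[i] / 2) / (S[i, i] + lam2)
                if np.max(np.abs(w - w_old)) < tol:
                    break
            return w / w.sum() if abs(w.sum()) > 1e-12 else w

        # Illustrative one-factor returns, not the paper's data: n = 250, N = 100.
        rng = np.random.default_rng(1)
        betas = rng.normal(1.0, 0.2, size=(1, 100))
        R = rng.normal(size=(250, 1)) @ betas + rng.normal(size=(250, 100))
        S = np.cov(R, rowvar=False)
        w = weighted_norm_portfolio(S, d=np.ones(100), lam1=0.01, lam2=0.1)
        print(np.count_nonzero(np.abs(w) > 1e-8), "of", w.size, "assets held")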
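
    For the third essay, the Benjamini-Hochberg (1995) step-up procedure is fully specified and easy to sketch. The Barndorff-Nielsen and Shephard (2006) statistic is omitted here; placeholder normal draws stand in for the daily test statistics.

        import numpy as np
        from scipy.stats import norm

        def benjamini_hochberg(pvals, q=0.05):
            """Benjamini-Hochberg (1995) step-up procedure: a boolean mask of
            rejections controlling the FDR at level q."""
            p = np.asarray(pvals)
            m = p.size
            order = np.argsort(p)
            below = p[order] <= q * np.arange(1, m + 1) / m
            reject = np.zeros(m, dtype=bool)
            if below.any():
                k = np.max(np.nonzero(below)[0])     # largest i with p_(i) <= q*i/m
                reject[order[:k + 1]] = True
            return reject

        # Placeholder daily jump statistics, assumed standard normal under the
        # null of no jump, converted to one-sided p-values.
        z = np.random.default_rng(2).normal(size=1000)
        pvals = norm.sf(z)
        print(benjamini_hochberg(pvals, q=0.05).sum(), "days flagged as jump days")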