
    Explicit MDS Codes with Complementary Duals

    In 1964, Massey introduced a class of codes with complementary duals, now called linear complementary dual (LCD) codes. LCD codes have applications in communication systems, side-channel attack (SCA) countermeasures, and other settings, and they have been studied extensively in the literature. MDS codes, on the other hand, form an optimal family of classical codes with wide applications in both theory and practice. The main purpose of this paper is to give an explicit construction of several classes of LCD MDS codes using tools from algebraic function fields. We exemplify this construction and obtain several classes of explicit LCD MDS codes in the odd-characteristic case.
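    A useful companion fact is Massey's criterion: a linear code with generator matrix G is LCD exactly when G·G^T is nonsingular. The sketch below checks this for a small Reed-Solomon (hence MDS) code over GF(7); the field, dimension, and evaluation points are illustrative choices, not the function-field construction from the paper.

        # Check Massey's LCD criterion for a small [3, 2] Reed-Solomon code
        # over GF(7): the code is LCD iff det(G * G^T) != 0 mod p.
        # Field and evaluation points are illustrative, not from the paper.

        p = 7                # odd characteristic
        evals = [1, 2, 3]    # distinct evaluation points in GF(7)
        k = 2                # dimension: first k rows of the Vandermonde matrix

        # Generator matrix G[i][j] = evals[j]**i (Vandermonde form).
        G = [[pow(a, i, p) for a in evals] for i in range(k)]

        # M = G * G^T mod p.
        n = len(evals)
        M = [[sum(G[i][t] * G[j][t] for t in range(n)) % p for j in range(k)]
             for i in range(k)]

        def det_mod_p(M, p):
            """Determinant mod a prime p via Gaussian elimination."""
            M = [row[:] for row in M]
            m, det = len(M), 1
            for c in range(m):
                piv = next((r for r in range(c, m) if M[r][c] % p), None)
                if piv is None:
                    return 0
                if piv != c:
                    M[c], M[piv] = M[piv], M[c]
                    det = -det
                det = det * M[c][c] % p
                inv = pow(M[c][c], p - 2, p)   # Fermat inverse
                for r in range(c + 1, m):
                    f = M[r][c] * inv % p
                    for cc in range(c, m):
                        M[r][cc] = (M[r][cc] - f * M[c][cc]) % p
            return det % p

        print("LCD" if det_mod_p(M, p) else "not LCD")   # prints: LCD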

    Feature selection when there are many influential features

    Recent discussion of the success of feature selection methods has argued that focusing on a relatively small number of features has been counterproductive. Instead, it is suggested, the number of significant features can be in the thousands or tens of thousands, rather than (as is commonly supposed at present) approximately in the range from five to fifty. This change of orders of magnitude in the number of influential features necessitates alterations to the way in which we choose features and to the manner in which the success of feature selection is assessed. In this paper, we suggest a general approach that is suited to cases where the number of relevant features is very large, and we consider particular versions of the approach in detail. We propose ways of measuring performance, and we study both theoretical and numerical properties of the proposed methodology.
    Published at http://dx.doi.org/10.3150/13-BEJ536 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
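    To make the scale argument concrete, the sketch below ranks features by absolute marginal correlation and compares ridge fits on the top 50 versus the top 500 features when 500 weak signals are present. It is a generic illustration of the regime the paper studies, not the paper's own procedure; all sizes, coefficients, and the ridge penalty are assumptions.

        # Simulated regression with many weak signals: keeping only a few
        # dozen top-ranked features discards most of the available signal.
        import numpy as np

        rng = np.random.default_rng(1)
        n, p, s = 1_000, 2_500, 500          # hundreds of weak signals
        beta = np.zeros(p)
        beta[:s] = 0.3
        Xtr = rng.standard_normal((n, p))
        Xte = rng.standard_normal((n, p))
        ytr = Xtr @ beta + rng.standard_normal(n)
        yte = Xte @ beta + rng.standard_normal(n)

        # Rank features by |marginal correlation with y| on the training set.
        corr = np.abs(Xtr.T @ ytr) / (np.linalg.norm(Xtr, axis=0) * np.linalg.norm(ytr))
        order = np.argsort(corr)[::-1]

        def ridge_test_mse(k, lam=50.0):
            """Fit ridge on the k top-ranked features; report held-out MSE."""
            idx = order[:k]
            A = Xtr[:, idx]
            w = np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ ytr)
            return np.mean((yte - Xte[:, idx] @ w) ** 2)

        for k in (50, 500):
            print(f"top {k:>3} features: test MSE = {ridge_test_mse(k):.1f}")
        # Expect markedly lower error at k = 500: most of the signal lies
        # outside any short list of features.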

    Consistent HAC Estimation and Robust Regression Testing Using Sharp Origin Kernels with No Truncation

    A new family of kernels is suggested for use in heteroskedasticity and autocorrelation consistent (HAC) and long run variance (LRV) estimation and robust regression testing. The kernels are constructed by taking powers of the Bartlett kernel and are intended to be used with no truncation (or bandwidth) parameter. As the power parameter (rho) increases, the kernels become very sharp at the origin and increasingly downweight values away from the origin, thereby achieving effects similar to a bandwidth parameter. Sharp origin kernels can be used in regression testing in much the same way as conventional kernels with no truncation, as suggested in the work of Kiefer and Vogelsang (2002a, 2002b). A unified representation of HAC limit theory for untruncated kernels is provided using a new proof based on Mercer's theorem that allows for kernels which may or may not be differentiable at the origin. This new representation helps to explain earlier findings, such as the dominance of the Bartlett kernel over quadratic kernels in test power, and yields new findings about the asymptotic properties of tests with sharp origin kernels. Analysis and simulations indicate that sharp origin kernels lead to tests with improved size properties relative to conventional tests and better power properties than other tests using Bartlett and other conventional kernels without truncation. If rho is passed to infinity with the sample size (T), the new kernels provide consistent HAC and LRV estimates as well as continued robust regression testing. Optimal choice of rho based on minimizing the asymptotic mean squared error of estimation is considered, leading to a rate of convergence of the kernel estimate of T^{1/3}, analogous to that of a conventional truncated Bartlett kernel estimate with an optimal choice of bandwidth. A data-based version of the consistent sharp origin kernel is obtained which is easily implementable in practical work. Within this new framework, untruncated kernel estimation can be regarded as a form of conventional kernel estimation in which the usual bandwidth parameter is replaced by a power parameter that serves to control the degree of downweighting. Simulations show that in regression testing with the sharp origin kernel, the power properties are better than those with simple untruncated kernels (where rho = 1) and at least as good as those with truncated kernels. Size is generally more accurate with sharp origin kernels than with truncated kernels. In practice, a simple fixed choice of the exponent parameter around rho = 16 for the sharp origin kernel produces favorable results for both size and power in regression testing with sample sizes that are typical in econometric applications.
    Keywords: Consistent HAC estimation, Data-determined kernel estimation, Long run variance, Mercer's theorem, Power parameter, Sharp origin kernel
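    The estimator itself is simple to write down: with sample autocovariances gamma_hat(j), the sharp origin estimate is a weighted sum over all T-1 lags with weights (1 - |j|/T)^rho, i.e. the Bartlett kernel raised to the power rho with bandwidth equal to the sample size. The sketch below applies this to an AR(1) series with rho = 16, the fixed choice the abstract reports as working well; the data-generating process and sample size are illustrative assumptions, not settings from the paper.

        # LRV estimation with a sharp origin kernel: Bartlett**rho, no truncation.
        import numpy as np

        def sharp_origin_lrv(u, rho=16):
            """LRV estimate: sum over |j| < T of (1 - |j|/T)**rho * gamma_hat(j)."""
            u = np.asarray(u, dtype=float) - np.mean(u)
            T = u.size
            gamma = np.array([u[: T - j] @ u[j:] / T for j in range(T)])  # gamma_hat(j), j >= 0
            w = (1.0 - np.arange(T) / T) ** rho                           # Bartlett kernel to power rho
            return w[0] * gamma[0] + 2.0 * (w[1:] @ gamma[1:])

        # Example: AR(1) with phi = 0.5; true LRV = 1 / (1 - phi)**2 = 4.
        rng = np.random.default_rng(0)
        T, phi = 500, 0.5
        e = rng.standard_normal(T)
        u = np.empty(T)
        u[0] = e[0]
        for t in range(1, T):
            u[t] = phi * u[t - 1] + e[t]
        print(f"sharp-origin LRV estimate: {sharp_origin_lrv(u):.2f} (true value 4.0)")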

    A New Approach to Robust Inference in Cointegration

    A new approach to robust testing in cointegrated systems is proposed using nonparametric HAC estimators without truncation. While such HAC estimates are inconsistent, they still produce asymptotically pivotal tests and, as in conventional regression settings, can improve testing and inference. The present contribution makes use of steep origin kernels, which are obtained by exponentiating traditional quadratic kernels. Simulations indicate that tests based on these methods have improved size properties relative to conventional tests and better power properties than other tests that use Bartlett or other traditional kernels with no truncation.
    Keywords: Cointegration, HAC estimation, long-run covariance matrix, robust inference, steep origin kernel, fully modified estimation
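    The construction mirrors the sharp origin kernels above, but starts from a quadratic kernel: k_rho(x) = k(x)^rho. The sketch below exponentiates the Parzen kernel as one common quadratic choice (using it as the base kernel is an assumption here, not a detail from the paper) and prints how larger rho downweights distant lags more steeply.

        import numpy as np

        def parzen(x):
            """Parzen kernel, a standard kernel that is quadratic near the origin."""
            x = np.abs(x)
            return np.where(x <= 0.5, 1 - 6 * x**2 + 6 * x**3,
                            np.where(x <= 1.0, 2 * (1 - x) ** 3, 0.0))

        def steep_origin(x, rho):
            """Steep origin kernel: the base kernel raised to the power rho."""
            return parzen(x) ** rho

        lags = np.array([0.02, 0.1, 0.3, 0.6])    # lags expressed as j/T
        for rho in (1, 16, 64):
            print(f"rho = {rho:>2}:", np.round(steep_origin(lags, rho), 4))
        # Weights at moderate and distant lags collapse as rho grows,
        # mimicking a shrinking bandwidth without any truncation point.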

    Jump starting GARCH: pricing and hedging options with jumps in returns and volatilities

    This paper considers the pricing of options when there are jumps in the pricing kernel and correlated jumps in asset returns and volatilities. Our model nests Duan's GARCH option models, where conditional returns are constrained to be normal, as well as mixed jump processes of the kind used by Merton. The diffusion limits of our model have been shown to include jump diffusion models, stochastic volatility models, and models with both jumps and diffusive elements in both returns and volatilities. Empirical analysis of the S&P 500 index reveals that incorporating jumps in returns and volatilities adds significantly to the description of the time series process and improves option pricing performance. In addition, we provide the first-ever hedging effectiveness tests of GARCH option models.
    Keywords: Options (Finance); Hedging (Finance)
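    As a rough illustration of the mechanics, the sketch below prices a European call by Monte Carlo under a GARCH(1,1) process with compound-Poisson jumps that enter both the return and the next-period variance. It is a schematic stand-in, not the model in the paper: the pricing-kernel adjustment the paper develops is replaced by a naive per-period martingale correction, and every parameter value is a made-up assumption.

        # Toy jump-GARCH Monte Carlo pricing of a European call (illustrative only).
        import numpy as np

        rng = np.random.default_rng(0)
        S0, K, r, T_days = 100.0, 100.0, 0.05 / 252, 60
        omega, alpha, beta_ = 1e-6, 0.08, 0.90    # GARCH(1,1) parameters (toy)
        lam, mu_j, sig_j = 0.02, -0.03, 0.05      # jump intensity and size (toy)
        phi_j = 0.05                              # jump feedback into variance (toy)
        n_paths = 50_000

        kappa = np.exp(mu_j + 0.5 * sig_j**2) - 1  # mean relative jump, for drift comp.
        logS = np.full(n_paths, np.log(S0))
        h = np.full(n_paths, 1e-4)                 # current conditional variance

        for _ in range(T_days):
            z = rng.standard_normal(n_paths)
            n_jumps = rng.poisson(lam, n_paths)
            # Compound Poisson: given N jumps, their sum is N(N*mu_j, N*sig_j**2).
            J = rng.normal(mu_j * n_jumps, sig_j * np.sqrt(n_jumps))
            # Naive martingale correction so E[exp(return)] = exp(r) each period.
            ret = r - 0.5 * h - lam * kappa + np.sqrt(h) * z + J
            logS += ret
            # Jumps also feed the variance recursion ("jumps in volatilities").
            h = omega + alpha * h * z**2 + beta_ * h + phi_j * J**2

        price = np.exp(-r * T_days) * np.mean(np.maximum(np.exp(logS) - K, 0.0))
        print(f"toy jump-GARCH call price: {price:.2f}")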