    Econometrical Modelling of Profit Tax Revenue

    The aim of this article is to forecast budget revenue from the profit tax using econometric models. Because the available time series are short, the set of applied models is restricted to very simple ones, so the regression analysis of the profit tax is carried out in two stages. In the first stage, profit tax revenue is modelled econometrically against the main profit indicators (the profit tax base), taking into account information on profit tax regulation and its changes. In the second stage, algorithms for forecasting the profit tax base are built, with the main macroeconomic indicators of the Lithuanian economy used as regressors. Cross-validation was applied to estimate the accuracy of these algorithms.
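    As a rough illustration of such a two-stage scheme (regressing revenue on the tax base, then the tax base on macroeconomic indicators, with leave-one-out cross-validation for a short series), here is a minimal sketch; the data, variable names and linear specification are hypothetical and not taken from the article.

```python
# Hypothetical sketch of a two-stage forecasting scheme:
#   stage 1: profit tax revenue ~ profit tax base
#   stage 2: profit tax base    ~ macroeconomic indicators
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n = 16                                   # a short series, as in the article's setting
macro = rng.normal(size=(n, 2))          # hypothetical macro indicators (regressors)
tax_base = 1.5 * macro[:, 0] + 0.8 * macro[:, 1] + 0.1 * rng.normal(size=n)
tax_revenue = 0.15 * tax_base + 0.05 * rng.normal(size=n)

# Stage 1: profit tax revenue regressed on the profit tax base.
stage1 = LinearRegression().fit(tax_base.reshape(-1, 1), tax_revenue)

# Stage 2: the profit tax base regressed on macroeconomic indicators.
stage2 = LinearRegression().fit(macro, tax_base)

# Cross-validation (leave-one-out suits a short series) to gauge the
# accuracy of the stage-2 forecasting algorithm.
cv = cross_val_score(LinearRegression(), macro, tax_base,
                     cv=LeaveOneOut(), scoring="neg_mean_absolute_error")
print("LOO MAE of the tax-base model:", -cv.mean())

# Chained forecast: macro scenario -> tax base -> tax revenue.
scenario = np.array([[0.5, 0.2]])
revenue_forecast = stage1.predict(stage2.predict(scenario).reshape(-1, 1))
print("Forecast revenue:", revenue_forecast[0])
```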

    Moderate deviations for the determinant of Wigner matrices

    We establish a moderate deviations principle (MDP) for the log-determinant $\log|\det(M_n)|$ of a Wigner matrix $M_n$ matching four moments with either the GUE or GOE ensemble. Further, we establish Cramér-type moderate deviations and Berry-Esseen bounds for the log-determinant for the GUE and GOE ensembles, as well as for non-symmetric and non-Hermitian Gaussian random matrices (Ginibre ensembles), respectively. Comment: 20 pages, one missing reference added; Limit Theorems in Probability, Statistics and Number Theory, Springer Proceedings in Mathematics and Statistics, 201
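    For orientation, a moderate deviations principle has the same shape as a large deviations principle, only on an intermediate scaling; a generic formulation (not the paper's specific speed or rate function) is:

```latex
% Generic form of a (moderate) deviations principle: (X_n) satisfies an MDP
% with speed a_n -> infinity and good rate function I if, for every Borel set B,
\[
-\inf_{x \in B^{\circ}} I(x)
\;\le\; \liminf_{n\to\infty} \frac{1}{a_n} \log \mathbb{P}(X_n \in B)
\;\le\; \limsup_{n\to\infty} \frac{1}{a_n} \log \mathbb{P}(X_n \in B)
\;\le\; -\inf_{x \in \overline{B}} I(x).
\]
```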

    Moderate deviations via cumulants

    The purpose of the present paper is to establish moderate deviation principles for a rather general class of random variables fulfilling certain bounds on their cumulants. We apply a celebrated lemma from the theory of large deviation probabilities due to Rudzkis, Saulis and Statulevičius. The random objects we treat include dependency graphs, subgraph-counting statistics in Erdős–Rényi random graphs, and U-statistics. Moreover, we prove moderate deviation principles for certain statistics appearing in random matrix theory, namely characteristic polynomials of random unitary matrices, as well as the number of particles in a growing box of determinantal random point processes, such as the number of eigenvalues in the GUE or the number of points in the Airy, Bessel, and sine random point fields. Comment: 24 pages
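    The cumulant bounds in question are typically of the following Statulevičius type; this is the standard shape of such a condition, quoted for orientation rather than as the paper's exact hypothesis:

```latex
% Cumulant bound of Statulevicius type for a centered random variable X with
% cumulants Gamma_j(X): there exist gamma >= 0 and Delta > 0 such that
\[
|\Gamma_j(X)| \;\le\; \frac{(j!)^{1+\gamma}}{\Delta^{\,j-2}},
\qquad j = 3, 4, \ldots
\]
% The lemma of Rudzkis, Saulis and Statulevicius then yields normal approximation
% and moderate deviation estimates on ranges governed by gamma and Delta.
```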

    Econometric models of the impact of macroeconomic processes on the stock market in the Baltic countries

    The paper is devoted to econometric modelling and prediction of the variation of sector stock-price indices on the OMX exchanges of the Baltic countries' companies. Quarterly time series for the years 2000–2011 are used to develop the regression models. The regression equations obtained in the work allow us to name the basic macroeconomic indicators that significantly influence stock market fluctuations and to quantitatively estimate their impact on the stock indices corresponding to individual sectors of the economy. A comparative analysis shows that the developed regression models make it possible to predict stock market tendencies more accurately than the vector autoregression model of the sector stock-price indices considered by the authors, which contains no macroeconomic variables.
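    A minimal sketch of the kind of comparison described above: a sector index regressed on macroeconomic indicators, contrasted with a macro-free VAR of the indices. The data, sector names and lag choices are hypothetical, not the authors' specification.

```python
# Illustrative comparison: OLS regression of a sector index on macro
# indicators vs. a VAR of the sector indices without macro variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
quarters = pd.period_range("2000Q1", "2011Q4", freq="Q")
n = len(quarters)

# Hypothetical quarterly data: two macro indicators and two sector indices.
macro = pd.DataFrame({"gdp_growth": rng.normal(size=n),
                      "inflation": rng.normal(size=n)}, index=quarters)
indices = pd.DataFrame({"industry": rng.normal(size=n).cumsum(),
                        "finance": rng.normal(size=n).cumsum()}, index=quarters)

# Regression model: sector index explained by macroeconomic indicators.
X = sm.add_constant(macro)
ols_fit = sm.OLS(indices["industry"], X).fit()
print(ols_fit.params)

# Baseline for comparison: VAR of the sector indices only (no macro terms).
var_fit = VAR(indices).fit(maxlags=2)
print(var_fit.forecast(indices.values[-2:], steps=4))
```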

    Application of Clustering in the Non-Parametric Estimation of Distribution Density

    This paper discusses the problem of estimating a multimodal density function of a random vector. A comparative accuracy analysis of some popular non-parametric estimators is made using the Monte Carlo method. The paper demonstrates that the estimation quality increases significantly if the sample is first clustered (i.e., the multimodal density is approximated by a mixture of unimodal densities) and the density estimation methods are then applied separately to each cluster. In this paper, the sample is clustered using a Gaussian mixture model and the EM algorithm. In the cases analysed, the highest efficiency was reached by using the iterative procedure proposed by Friedman to estimate the density component corresponding to each cluster after this primary clustering. The Friedman procedure is based on projection pursuit of the multivariate observations and on transforming the univariate projections into standard Gaussian random values (using density function estimates of these projections).
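    The "cluster first, then estimate each component" idea can be sketched as follows; note that the per-cluster estimator here is a plain kernel density estimator rather than the Friedman projection-pursuit procedure used in the paper, and all numbers are illustrative.

```python
# Sketch: cluster with a Gaussian mixture (EM), fit a density estimator per
# cluster, and recombine into a weighted mixture of the cluster estimates.
import numpy as np
from scipy.special import logsumexp
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(2)
# Bimodal 2-D sample: two Gaussian clouds.
X = np.vstack([rng.normal(-2.0, 1.0, size=(300, 2)),
               rng.normal(+2.0, 0.7, size=(300, 2))])

# Step 1: cluster with a Gaussian mixture fitted by EM.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)

# Step 2: fit a non-parametric estimator inside each cluster.
kdes, log_weights = [], []
for k in range(gmm.n_components):
    Xk = X[labels == k]
    kdes.append(KernelDensity(bandwidth=0.4).fit(Xk))
    log_weights.append(np.log(len(Xk) / len(X)))

# Step 3: combine into one multimodal estimate,
#   f_hat(x) = sum_k w_k * f_hat_k(x), evaluated in log space.
def log_density(points):
    per_cluster = np.column_stack([lw + kde.score_samples(points)
                                   for lw, kde in zip(log_weights, kdes)])
    return logsumexp(per_cluster, axis=1)

print(log_density(np.array([[0.0, 0.0], [2.0, 2.0]])))
```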

    Probabilistic model for the growth of thesauri

    Goodness of fit tests based on kernel density estimators

    The paper is devoted to goodness-of-fit tests based on kernel estimators of probability density functions; in particular, the univariate case is investigated. The test statistic is the maximum of the normalized deviation of the estimate from its expected value. A comparative Monte Carlo power study shows that the proposed test is a powerful competitor to the existing classical criteria for testing goodness of fit against a specific type of alternative hypothesis. An analytical way of establishing the asymptotic distribution of the test statistic is proposed, using the theory of high excursions of Gaussian random processes and fields introduced by Rudzkis [17,18]. The extension of the proposed methods to the multivariate case is discussed.
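    The paper obtains the null distribution of this maximum-deviation statistic analytically; a crude but self-contained alternative is to calibrate the same statistic by Monte Carlo simulation under the hypothesized density, as in the sketch below (standard normal null, toy sample sizes, all choices are illustrative assumptions).

```python
# Goodness-of-fit sketch: the statistic is the maximum normalized deviation
# of a kernel density estimate from its expected value under H0. The null
# expectation, pointwise standard deviation and critical value are estimated
# here by simulation from the hypothesized (standard normal) density.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
n, n_sim = 200, 500
grid = np.linspace(-3.0, 3.0, 121)

def kde_on_grid(sample):
    return gaussian_kde(sample)(grid)

# Null reference: pointwise mean/sd of the KDE over simulated H0 samples,
# plus the null distribution of the max normalized deviation.
null_curves = np.array([kde_on_grid(rng.standard_normal(n))
                        for _ in range(n_sim)])
mean0, sd0 = null_curves.mean(axis=0), null_curves.std(axis=0)
null_stats = np.abs((null_curves - mean0) / sd0).max(axis=1)
critical = np.quantile(null_stats, 0.95)

# Observed statistic for a sample that mildly violates H0 (heavy tails).
data = rng.standard_t(df=3, size=n)
T = np.abs((kde_on_grid(data) - mean0) / sd0).max()
print(f"T = {T:.2f}, 5% critical value = {critical:.2f}, reject = {T > critical}")
```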

    On Statistical Classification of Scientific Texts

    The research considers the problem of classifying scientific texts. Models and methods based on the stochastic distribution of scientific terms are discussed. Preliminary results of an experimental study on real-world data are reported.
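    The abstract does not specify the models studied; as one elementary example of classification driven by the stochastic distribution of terms, a multinomial naive Bayes classifier over word counts can be sketched as follows (toy documents and labels, purely illustrative).

```python
# Minimal illustration of classifying texts by their term distributions:
# multinomial naive Bayes over word counts (not the paper's method).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_docs = ["kernel density estimation of a distribution",
              "moderate deviations for random matrices",
              "profit tax revenue regression forecast",
              "stock market indices and macroeconomic factors"]
train_labels = ["statistics", "statistics", "econometrics", "econometrics"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_docs, train_labels)
print(model.predict(["goodness of fit test with kernel estimators",
                     "econometric model of budget revenue"]))
```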