Blind image separation based on exponentiated transmuted Weibull distribution
In recent years, blind image separation has been widely investigated, and a number of feature extraction algorithms have been developed for direct application to such image structures. For example, a crime scene may yield a mixture of two or more fingerprints, which must be separated before identification is possible. In this paper, we propose a new technique for separating multiple mixed images based on the exponentiated transmuted Weibull distribution.
To adaptively estimate the parameters of such score functions, an efficient method based on maximum likelihood and a genetic algorithm is used. We also measure the accuracy of the proposed distribution and compare the algorithmic performance of this efficient approach with that of other previously proposed generalized distributions. The numerical results show that the proposed distribution is flexible and yields efficient results.
Comment: 14 pages, 12 figures, 4 tables. International Journal of Computer Science and Information Security (IJCSIS), Vol. 14, No. 3, March 2016 (pp. 423-433).
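A minimal numerical sketch of the estimation step described in the abstract. The exponentiated transmuted Weibull (ETW) density below assumes the standard construction F(x) = [G(x)(1 + lam - lam G(x))]^alpha with G a Weibull CDF, and the toy genetic algorithm (elitist selection plus Gaussian mutation) is an illustrative stand-in, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def etw_logpdf(x, k, scale, lam, alpha):
    """ETW log-density, assuming F(x) = [G(x)(1 + lam - lam*G(x))]**alpha
    with G the Weibull(k, scale) CDF."""
    z = (x / scale) ** k
    G = 1.0 - np.exp(-z)                                    # Weibull CDF
    g = (k / scale) * (x / scale) ** (k - 1) * np.exp(-z)   # Weibull pdf
    T = G * (1.0 + lam - lam * G)                           # transmuted CDF
    t = g * (1.0 + lam - 2.0 * lam * G)                     # transmuted pdf
    return np.log(alpha) + (alpha - 1.0) * np.log(T) + np.log(t)

def loglik(params, x):
    k, scale, lam, alpha = params
    if k <= 0 or scale <= 0 or alpha <= 0 or not -1.0 <= lam <= 1.0:
        return -np.inf
    with np.errstate(divide="ignore", invalid="ignore", over="ignore"):
        vals = etw_logpdf(x, k, scale, lam, alpha)
    return vals.sum() if np.all(np.isfinite(vals)) else -np.inf

def ga_mle(x, pop=60, gens=200, sigma=0.15):
    """Toy genetic algorithm: keep the best quarter of each generation
    (elitism) and refill the population with mutated copies of it."""
    P = np.column_stack([rng.uniform(0.5, 3.0, pop),    # shape k
                         rng.uniform(0.5, 3.0, pop),    # scale
                         rng.uniform(-0.9, 0.9, pop),   # lam
                         rng.uniform(0.5, 3.0, pop)])   # alpha
    for _ in range(gens):
        fit = np.array([loglik(p, x) for p in P])
        elite = P[np.argsort(fit)[-pop // 4:]]
        children = (elite[rng.integers(len(elite), size=pop - len(elite))]
                    + rng.normal(0.0, sigma, (pop - len(elite), 4)))
        P = np.vstack([elite, children])
    fit = np.array([loglik(p, x) for p in P])
    return P[np.argmax(fit)]

x = 2.0 * rng.weibull(1.5, 2000)   # synthetic data: Weibull(k=1.5, scale=2)
k, scale, lam, alpha = ga_mle(x)
```

Setting lam = 0 and alpha = 1 reduces the ETW family back to a plain Weibull, so on synthetic Weibull data the search should settle near such a fit.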
Simulation techniques for generalized Gaussian densities
This contribution deals with Monte Carlo simulation of generalized Gaussian random variables. This parametric family of distributions has been proposed in many applications in science and engineering to describe physical phenomena, and it also appears useful for modeling economic and financial data. For values of the shape parameter a within a certain range, the distribution presents heavy tails. In particular, the cases a = 1/3 and a = 1/2 are considered. For such values of the shape parameter, different simulation methods are assessed.
Keywords: generalized Gaussian density, heavy tails, transformations of random variables, Monte Carlo simulation, Lambert W function
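One standard simulation route, sketched here under the parameterization f(x) proportional to exp(-|x/scale|^a): if G ~ Gamma(1/a, 1), then a random sign times scale * G^(1/a) has the target law. This illustrates only one of the candidate methods the paper assesses:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_gg(a, size, scale=1.0):
    """Draw from the generalized Gaussian density f(x) ~ exp(-|x/scale|**a)
    via the gamma transform: |X/scale|**a is Gamma(1/a, 1) distributed."""
    g = rng.gamma(shape=1.0 / a, scale=1.0, size=size)
    s = rng.choice([-1.0, 1.0], size=size)   # random sign for symmetry
    return scale * s * g ** (1.0 / a)

# The two heavy-tailed shape parameters considered in the paper:
x_third = sample_gg(1.0 / 3.0, 100_000)   # a = 1/3
x_half = sample_gg(0.5, 100_000)          # a = 1/2
```

For a = 1/3 and a = 1/2 the tails are heavy enough that sample moments converge slowly, which is why specialized methods (including transformations based on the Lambert W function) are worth comparing.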
Heavy-Tailed Features and Empirical Analysis of the Limit Order Book Volume Profiles in Futures Markets
This paper poses a few fundamental questions regarding the attributes of the volume profile of a Limit Order Book's stochastic structure, taking into consideration aspects of intraday and interday statistical features, the impact of different exchange features and the impact of market participants in different asset sectors. This paper aims to address the following questions:
1. Is there statistical evidence that heavy-tailed sub-exponential volume profiles occur at different levels of the Limit Order Book on the bid and ask, and if so, does this happen on intraday or interday time scales?
2. In futures exchanges, are heavy-tail features exchange (CBOT, CME, EUREX, SGX and COMEX) or asset class (government bonds, equities and precious metals) dependent, and do they occur in ultra-high (<1 sec) or mid-range (1 sec - 10 min) high-frequency data?
3. Does the presence of stochastic heavy-tailed volume profile features evolve in a manner that would inform or be indicative of market participant behaviors, such as high-frequency algorithmic trading, quote stuffing and price discovery, intra-daily?
4. Is there statistical evidence of a need to consider dynamic behavior of the parameters of models for Limit Order Book volume profiles on an intra-daily time scale?
Progress on aspects of each question is made via statistically rigorous results that verify the empirical findings on an unprecedentedly large set of futures market LOB data. The data comprise several exchanges, several futures asset classes and all trading days of 2010, using market depth (Type II) order book data to 5 levels on the bid and ask.
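As an illustration of the kind of tail diagnostic such questions involve (the paper's actual statistical tests are more elaborate), the Hill estimator below distinguishes a power-law tail from a light one; the Pareto and exponential toy "volume" samples are assumptions for demonstration only:

```python
import numpy as np

rng = np.random.default_rng(7)

def hill_estimator(data, k):
    """Hill estimate of the tail index alpha from the k largest order
    statistics; smaller alpha means a heavier (power-law) tail."""
    x = np.sort(np.asarray(data, dtype=float))[::-1]   # descending order
    return 1.0 / np.mean(np.log(x[:k]) - np.log(x[k]))

# Toy stand-ins for level volumes: a Pareto(alpha=1.5) heavy-tailed sample
# versus a light-tailed exponential one.
heavy = rng.pareto(1.5, 50_000) + 1.0
light = rng.exponential(1.0, 50_000)

a_heavy = hill_estimator(heavy, k=2_000)   # should sit near 1.5
a_light = hill_estimator(light, k=2_000)   # much larger: no power tail
```

A tail index below 2 would indicate infinite variance, the hallmark of the sub-exponential volume profiles the paper investigates.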
Sparse functional regression models: minimax rates and contamination
In functional linear regression and functional generalized linear regression models, the effect of the predictor function is usually assumed to be spread across the index space. In this dissertation we consider the sparse functional linear model and the sparse functional generalized linear model (GLM), where the impact of the predictor process on the response is only via its value at one point in the index space, defined as the sensitive point. We are particularly interested in estimating the sensitive point. The minimax rate of convergence for estimating the parameters in sparse functional linear regression is derived. It is shown that the optimal rate for estimating the sensitive point depends on the roughness of the predictor function, which is quantified by a "generalized Hurst exponent". The least squares estimator (LSE) is shown to attain the optimal rate. Also, a lower bound is given on the minimax risk of estimating the parameters in the sparse functional GLM, which also depends on the generalized Hurst exponent of the predictor process. The order of the minimax lower bound is the same as that of the weak convergence rate of the maximum likelihood estimator (MLE), given that the functional predictor behaves like a Brownian motion.
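A minimal sketch of the least squares estimator for the sensitive point, on a toy version of the sparse functional linear model; the grid size, noise level, true sensitive point and Brownian-like predictor paths are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse functional linear model: y_i = a + b * X_i(t0) + noise, where the
# predictor enters only through its value at the sensitive point t0.
n, m = 200, 100
t0_idx = 63                                    # true sensitive point (toy choice)
X = np.cumsum(rng.normal(0, 1 / np.sqrt(m), (n, m)), axis=1)  # Brownian-like paths
y = 1.0 + 2.0 * X[:, t0_idx] + rng.normal(0, 0.1, n)

def lse_sensitive_point(X, y):
    """Profile least squares: regress y on X(., t) at every grid point t
    and return the index minimizing the residual sum of squares."""
    rss = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        A = np.column_stack([np.ones(len(y)), X[:, j]])
        coef, res, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss[j] = res[0] if res.size else np.sum((y - A @ coef) ** 2)
    return int(np.argmin(rss))

t0_hat = lse_sensitive_point(X, y)
```

The rougher the predictor paths, the faster neighboring grid points decorrelate from X(t0), which is the intuition behind the dependence of the rate on the generalized Hurst exponent.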
Autoregressive Kernels For Time Series
We propose in this work a new family of kernels for variable-length time
series. Our work builds upon the vector autoregressive (VAR) model for
multivariate stochastic processes: given a multivariate time series x, we
consider the likelihood function p_{\theta}(x) of different parameters \theta
in the VAR model as features to describe x. To compare two time series x and
x', we form the product of their features p_{\theta}(x) p_{\theta}(x') which is
integrated out w.r.t. \theta using a matrix normal-inverse Wishart prior. Among
other properties, this kernel can be easily computed when the dimension d of
the time series is much larger than the lengths of the considered time series x
and x'. It can also be generalized to time series taking values in arbitrary
state spaces, as long as the state space itself is endowed with a kernel
\kappa. In that case, the kernel between x and x' is a function of the Gram
matrices produced by \kappa on observations and subsequences of observations
enumerated in x and x'. We describe a computationally efficient implementation
of this generalization that uses low-rank matrix factorization techniques.
These kernels are compared to other known kernels using a set of benchmark
classification tasks carried out with support vector machines.
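The core construction can be illustrated with a crude Monte Carlo stand-in: draw VAR coefficient matrices from a Gaussian prior and average the product of the two series' likelihoods. The paper instead evaluates this integral in closed form under a matrix normal-inverse Wishart prior; the prior scale, dimensions and simulated series below are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(3)

def var1_loglik(x, A, sigma=1.0):
    """Gaussian log-likelihood of a VAR(1) model x_t = A x_{t-1} + eps_t."""
    resid = x[1:] - x[:-1] @ A.T
    return (-0.5 * np.sum(resid ** 2) / sigma ** 2
            - 0.5 * resid.size * np.log(2.0 * np.pi * sigma ** 2))

def ar_kernel_mc(x, xp, n_samples=1000, prior_scale=0.5):
    """Monte Carlo sketch of k(x, x') = E_theta[p_theta(x) p_theta(x')]:
    average the likelihood product over coefficient matrices drawn from a
    Gaussian prior, returned on the log scale via log-sum-exp."""
    d = x.shape[1]
    lk = np.empty(n_samples)
    for i in range(n_samples):
        A = rng.normal(0.0, prior_scale, (d, d))
        lk[i] = var1_loglik(x, A) + var1_loglik(xp, A)
    m = lk.max()
    return m + np.log(np.mean(np.exp(lk - m)))

def simulate_var1(A, T):
    """Simulate T steps of the VAR(1) process with coefficient matrix A."""
    d = A.shape[0]
    x = np.zeros((T, d))
    for t in range(1, T):
        x[t] = x[t - 1] @ A.T + rng.normal(0.0, 1.0, d)
    return x

d = 3
x_a = simulate_var1(0.5 * np.eye(d), 150)    # two series with the same dynamics
x_b = simulate_var1(0.5 * np.eye(d), 120)    # (variable lengths are fine)
x_c = simulate_var1(-0.5 * np.eye(d), 150)   # different dynamics

k_same = ar_kernel_mc(x_a, x_b)
k_diff = ar_kernel_mc(x_a, x_c)
```

Series governed by the same dynamics share parameter values under which both likelihoods are large, so their (log) kernel value exceeds that of a mismatched pair; the closed-form prior integration in the paper makes this comparison exact and efficient.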