128 research outputs found

    Asymptotics for the Arc Length of a Multivariate Time Series and Its Applications as a Measure of Risk

    The need for more trustworthy methods of measuring the risk (volatility) of financial assets has become apparent with the global market downturn. This dissertation proposes the sample arc length of a time series, which measures the overall magnitude of the one-step-ahead changes over the observation period, as a new approach to quantifying risk. A Gaussian functional central limit theorem is proven under finite second moment conditions. Without loss of generality, we consider equally spaced time series whose first differences follow a variety of popular stationary models, including autoregressive moving average, generalized autoregressive conditionally heteroscedastic, and stochastic volatility models. As applications, we use a CUSUM statistic to identify changepoints in the volatility of Dow Jones Index returns from January 2005 through December 2009. We also compare asset series to determine whether they have different volatility structures when arc length is used as the tool of quantification. The idea is that processes with larger sample arc lengths exhibit larger fluctuations and hence suggest greater variability.
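    The central quantity is easy to compute directly. Below is a minimal Python sketch (not taken from the dissertation) that computes the per-step arc-length increments sqrt(1 + (y_t - y_{t-1})^2) of an equally spaced series with unit spacing and applies a standard CUSUM statistic to them to locate a volatility changepoint; the function names, the simulated returns, and the particular CUSUM normalization are illustrative assumptions, and the Dow Jones data are not reproduced.

        import numpy as np

        def sample_arc_length_increments(y):
            """Per-step arc-length increments sqrt(1 + (y_t - y_{t-1})^2)
            for an equally spaced series y (unit spacing assumed)."""
            dy = np.diff(np.asarray(y, dtype=float))
            return np.sqrt(1.0 + dy ** 2)

        def cusum_changepoint(s):
            """Classical CUSUM statistic applied to the arc-length increments s;
            returns the statistic and the index of the most likely changepoint."""
            m = len(s)
            centred = np.cumsum(s - s.mean())
            stat = np.max(np.abs(centred)) / (s.std(ddof=1) * np.sqrt(m))
            return stat, int(np.argmax(np.abs(centred))) + 1

        # Illustrative use on simulated returns whose volatility doubles halfway.
        rng = np.random.default_rng(0)
        returns = np.concatenate([rng.normal(0, 0.01, 600), rng.normal(0, 0.02, 600)])
        log_price_proxy = np.cumsum(returns)        # stand-in for an asset series
        s = sample_arc_length_increments(log_price_proxy)
        stat, k = cusum_changepoint(s)
        print(f"CUSUM statistic {stat:.2f}, estimated changepoint near t = {k}")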

    Inference for the autocovariance of a functional time series under conditional heteroscedasticity

    Most methods for analyzing functional time series rely on the estimation of lagged autocovariance operators or surfaces. As in univariate time series analysis, testing whether or not such operators are zero is an important diagnostic step that is well understood when the data, or model residuals, form a strong white noise. When functional data are constructed from dense records of, for example, asset prices or returns, a weak white noise model allowing for conditional heteroscedasticity is often more realistic. Applying inferential procedures for the autocovariance based on a strong white noise to such data often leads to the erroneous conclusion that the data exhibit significant autocorrelation. We develop methods for performing inference for the lagged autocovariance operators of stationary functional time series that are valid under general conditional heteroscedasticity conditions. These include a portmanteau test to assess the cumulative significance of empirical autocovariance operators up to a user-selected maximum lag, as well as methods for obtaining confidence bands for a functional version of the autocorrelation that are useful in model selection/validation. We analyze the efficacy of these methods through a simulation study and apply them to functional time series derived from asset price data of several representative assets. In this application, we find that strong white noise tests often suggest that such series exhibit significant autocorrelation, whereas our tests, which account for functional conditional heteroscedasticity, show that these data are in fact uncorrelated in a function space.
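    As a rough illustration of the objects involved (not the paper's procedure, whose main contribution is calibrating critical values under conditional heteroscedasticity), the following Python sketch computes empirical lag-h autocovariance surfaces for curves observed on a common grid and the cumulative statistic N * sum_{h<=H} ||C_h||^2; the toy data and the grid approximation of the squared L2 norm are assumptions.

        import numpy as np

        def autocov_surface(X, h):
            """Empirical lag-h autocovariance surface C_h(u, v) for functional data X,
            an (N, d) array of N curves evaluated on a common grid of d points."""
            N, d = X.shape
            Xc = X - X.mean(axis=0)
            return Xc[:N - h].T @ Xc[h:] / N            # (d, d) surface on the grid

        def portmanteau_statistic(X, H):
            """Cumulative statistic N * sum_{h=1}^{H} ||C_h||^2, with the squared
            L2 norm approximated by averaging over the evaluation grid."""
            N, d = X.shape
            total = 0.0
            for h in range(1, H + 1):
                C = autocov_surface(X, h)
                total += np.sum(C ** 2) / d ** 2        # approximates ||C_h||^2
            return N * total

        # Toy functional white noise: independent random-walk-type curves on a grid.
        rng = np.random.default_rng(1)
        N, d = 200, 50
        X = np.cumsum(rng.normal(size=(N, d)), axis=1) / np.sqrt(d)
        print(f"portmanteau statistic up to lag 5: {portmanteau_statistic(X, 5):.3f}")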

    Fourier-type monitoring procedures for strict stationarity

    We consider model-free monitoring procedures for strict stationarity of a given time series. The new criteria are formulated as L2-type statistics incorporating the empirical characteristic function. Asymptotic as well as Monte Carlo results are presented. The new methods are also employed to test for possible stationarity breaks in time-series data from the financial sector.
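    A minimal sketch of the underlying idea, under stated assumptions (a Gaussian weight function, simulated data, and no attempt to reproduce the paper's sequential monitoring scheme or critical values): compare the empirical characteristic functions of a historical sample and an incoming window through a weighted L2-type distance.

        import numpy as np

        def ecf(x, t):
            """Empirical characteristic function of sample x evaluated at points t."""
            return np.mean(np.exp(1j * np.outer(t, x)), axis=1)

        def cf_distance(x_ref, x_new, t_grid, weight):
            """Weighted L2-type distance between the empirical CFs of a reference
            sample and a new window (simple Riemann-sum approximation)."""
            diff = np.abs(ecf(x_ref, t_grid) - ecf(x_new, t_grid)) ** 2
            return np.sum(diff * weight(t_grid)) * (t_grid[1] - t_grid[0])

        # Monitoring idea: compare each incoming window with the historical sample.
        rng = np.random.default_rng(2)
        historical = rng.normal(0, 1, 500)
        t_grid = np.linspace(-5, 5, 201)
        w = lambda t: np.exp(-t ** 2)                   # Gaussian weight (assumption)
        for label, window in [("same law", rng.normal(0, 1, 250)),
                              ("variance break", rng.normal(0, 2, 250))]:
            print(f"{label}: weighted CF distance = "
                  f"{cf_distance(historical, window, t_grid, w):.4f}")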

    Heterogeneous Sensor Signal Processing for Inference with Nonlinear Dependence

    Inferring events of interest by fusing data from multiple heterogeneous sources has been an interesting and important topic in recent years. Several issues related to inference using heterogeneous data with complex and nonlinear dependence are investigated in this dissertation. We apply copula theory to characterize the dependence among heterogeneous data. In centralized detection, where sensor observations are available at the fusion center (FC), we study copula-based fusion. We design detection algorithms based on sample-wise copula selection and a mixture-of-copulas model for different scenarios of the true dependence. The proposed approaches are theoretically justified and perform well when applied to fusing acoustic and seismic sensor data for personnel detection. Besides traditional sensors, access to massive amounts of social media data provides a unique opportunity for extracting information about unfolding events. We further study how sensor networks and social media complement each other in facilitating the data-to-decision process. We propose a copula-based joint characterization of multiple dependent time series from sensors and social media. As a proof of concept, this model is applied to the fusion of Google Trends (GT) data and stock/flu data for prediction, where the stock/flu data serve as a surrogate for sensor data. In energy-constrained networks, local observations are compressed before they are transmitted to the FC. In these cases, conditional dependence and heterogeneity particularly complicate the system design. We consider the classification of discrete random signals in Wireless Sensor Networks (WSNs), where, for communication efficiency, only local decisions are transmitted. We derive the necessary conditions for the optimal decision rules at the sensors and the FC by introducing a hidden random variable. An iterative algorithm is designed to search for the optimal decision rules; its convergence and asymptotic optimality are also proven. The performance of the proposed scheme is illustrated for the distributed Automatic Modulation Classification (AMC) problem. Censoring is another communication-efficient strategy, in which sensors transmit only informative observations to the FC and censor those deemed uninformative. We design detectors that take into account the spatial dependence among observations. Fusion rules for censored data are proposed with continuous and discrete local messages, respectively. Their computationally efficient counterparts, based on the key idea of injecting controlled noise at the FC before fusion, are also investigated. With heterogeneous and dependent sensor observations, we consider not only inference in parallel frameworks but also collaborative inference, where collaboration exists among local sensors. Each sensor forms a coalition with other sensors and shares information within the coalition to maximize its inference performance. The collaboration strategy is investigated under a communication constraint. To characterize the influence of inter-sensor dependence on inference performance, and thus on the collaboration strategy, we quantify the gain and loss in forming a coalition by introducing copula-based definitions of diversity gain and redundancy loss for both estimation and detection problems.
    A coalition formation game is proposed for the distributed inference problem, through which the information contained in the inter-sensor dependence is fully explored and utilized for improved inference performance.
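    To make the copula idea concrete, here is a small Python sketch, not drawn from the dissertation, that characterizes the dependence between two simulated heterogeneous sensor streams with a single Gaussian copula: the margins are mapped to normal scores through their empirical CDFs, the copula correlation is estimated, and the copula log-likelihood (the term a copula-based fusion rule would combine with the marginal likelihoods) is evaluated. The Gaussian copula choice and the toy acoustic/seismic streams are assumptions; the dissertation's sample-wise copula selection and mixture-of-copulas models are more general.

        import numpy as np
        from scipy import stats

        def normal_scores(x):
            """Map a sample to approximate normal scores via its empirical CDF (ranks)."""
            ranks = stats.rankdata(x) / (len(x) + 1)    # pseudo-observations in (0, 1)
            return stats.norm.ppf(ranks)

        def gaussian_copula_loglik(u, v, rho):
            """Log-density of a bivariate Gaussian copula at normal scores (u, v)."""
            det = 1.0 - rho ** 2
            quad = (rho ** 2 * (u ** 2 + v ** 2) - 2 * rho * u * v) / (2 * det)
            return -0.5 * np.log(det) - quad

        # Toy heterogeneous sensors sharing a latent component, with a nonlinear margin.
        rng = np.random.default_rng(3)
        z = rng.normal(size=1000)
        acoustic = z + 0.5 * rng.normal(size=1000)
        seismic = np.exp(0.3 * z) + 0.5 * rng.normal(size=1000)

        u, v = normal_scores(acoustic), normal_scores(seismic)
        rho_hat = np.corrcoef(u, v)[0, 1]               # copula correlation estimate
        joint_ll = np.sum(gaussian_copula_loglik(u, v, rho_hat))
        print(f"copula correlation {rho_hat:.3f}, copula log-likelihood {joint_ll:.1f}")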

    Some asymptotic results on non-standard likelihood ratio tests, and Cox process modeling in finance

    This dissertation consists of two parts. In the first part, the subject of hypothesis testing is addressed. Here, non-standard formulations of the null hypothesis are discussed, e.g., non-stationarity under the null and boundary hypotheses. In the second part, stochastic models for financial markets are developed and studied, with particular emphasis on the application of Cox processes. Part one begins with a survey of time-series models which allow for conditional heteroscedasticity and autoregression, the AR-GARCH models. These models reduce to a white noise model when some of the conditional heteroscedasticity parameters take their boundary value of zero and the autoregressive component is in fact not present. The asymptotic distribution of the pseudo-log-likelihood ratio statistics for testing the presence of conditional heteroscedasticity and of the autoregression term is reproduced. For financial market data, the model parameters are estimated and tests for the reduction to white noise are performed. The impact of these results on risk measurement is discussed by comparing several Value-at-Risk calculations under the alternative model specifications. Furthermore, the power function of these tests is examined by a simulation study of the ARCH(1) and AR(1)-ARCH(1) models. First, the simulations are carried out assuming Gaussian innovations; then the Gaussian distribution is replaced by the heavy-tailed t-distribution. This reveals that a substantial loss of power is associated with the use of heavy-tailed innovations. A related testing problem arises in the analysis of the Ornstein-Uhlenbeck (OU) model driven by Lévy processes. This model is designed to capture mean-reverting behaviour if it exists, but the data may in fact be adequately described by a pure Lévy process with no OU (autoregressive) effect. For an appropriate discretized version of the model, likelihood methods are utilized to test for such a reduction of the OU process to Lévy motion, deriving the asymptotic distribution of the relevant pseudo-log-likelihood ratio statistics both for a refining sequence of partitions on a fixed time interval with mesh size tending to zero, and as the length of the observation window grows large. These analyses are non-standard in that the mean reversion parameter vanishes under the null of a pure Lévy process for the data. Despite this, a very general analysis is conducted with no technical restrictions on the underlying processes or parameter sets other than a finite variance assumption for the Lévy process. As a special case, for Brownian motion as the driving process, the limiting distribution is deduced in a quite explicit way, yielding results which generalise the well-known Dickey-Fuller ("unit-root") theory. Part two of this dissertation considers the application of Cox processes in mathematical finance. Here, a framework is discussed for the valuation of employee share options (ESOs) and for credit risk modeling. One popular approach to ESO valuation involves a modification of standard option pricing models, augmenting them with the possibility of departure of the executive at an exogenously given random time. Such models are called reduced form models, in contrast to structural models that require measures of the employee's utility function and other unobservable quantities. Here, an extension of the reduced form model for the valuation of ESOs is developed.
    This model incorporates and emphasises employee departure, company takeover, performance vesting and other exotic provisions specific to ESOs. The assumptions underlying the reduced form model are clarified and their implications discussed. Further, the probabilistic structure of the model is analysed, which includes an explicit characterization of the set of equivalent martingale measures as well as the computation of prominent martingale measures such as the variance-optimal martingale measure and the minimal martingale measure. Particular ESO specifications are studied, emphasizing different aspects of the proposed framework. In this context, strict no-arbitrage bounds for ESO prices are also provided by applying optimal stopping. Furthermore, possible limitations of the proposed model are explored by examining departures from the crucial assumption of no-arbitrage, i.e., by considering the effects of the employee having inside information. In a continuous-time market model, credit risk modeling and the pricing of credit derivatives are discussed. The adopted approach describes credit risk by the interest rate spread between a corporate bond and a government bond, and this spread is modeled in terms of explanatory variables. For this purpose, a specific market model consisting of four assets is considered, in which the default process of the company is incorporated into a risky money market via a Cox process. It is shown that this market model has a unique equivalent martingale measure and is complete. As a consequence, contingent claim valuation can be carried out in the usual way. This is illustrated with the valuation of a convertible bond, which fits naturally into the given setting.
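    The first testing problem can be illustrated with a short, self-contained Python sketch (not the dissertation's code): it computes the pseudo-log-likelihood ratio for the null of no ARCH effect against an ARCH(1) alternative with Gaussian innovations. Because the ARCH parameter sits on the boundary of the parameter space under the null, the statistic is not calibrated by the usual chi-squared limit; deriving the correct non-standard asymptotics is what the dissertation addresses and is not reproduced here. The simulated data and the conditional-likelihood formulation are assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def arch1_negloglik(params, x):
            """Gaussian negative log-likelihood of an ARCH(1) model
            sigma_t^2 = omega + alpha * x_{t-1}^2, conditioning on the first observation."""
            omega, alpha = params
            sigma2 = omega + alpha * x[:-1] ** 2
            r = x[1:]
            return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r ** 2 / sigma2)

        def pseudo_llr_arch_effect(x):
            """Pseudo-log-likelihood ratio for H0: alpha = 0 (constant-variance white
            noise) against ARCH(1); alpha lies on the boundary under H0."""
            omega0 = np.mean(x[1:] ** 2)                # null MLE of the variance
            ll0 = -arch1_negloglik((omega0, 0.0), x)
            res = minimize(arch1_negloglik, x0=(omega0, 0.1), args=(x,),
                           bounds=[(1e-8, None), (0.0, None)], method="L-BFGS-B")
            ll1 = -res.fun
            return 2.0 * (ll1 - ll0)

        # Simulated ARCH(1) data with omega = 1, alpha = 0.4, Gaussian innovations.
        rng = np.random.default_rng(4)
        n = 2000
        x = np.empty(n)
        x[0] = rng.normal()
        for t in range(1, n):
            x[t] = rng.normal() * np.sqrt(1.0 + 0.4 * x[t - 1] ** 2)
        print(f"pseudo-LLR statistic: {pseudo_llr_arch_effect(x):.2f}")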

    Risk analysis in the evaluation of the international investment opportunities. Advances in modelling and forecasting volatility for risk assessment purposes

    The thesis assesses the topic of risk in the context of foreign investment decisions. In identifying two main risk-related concepts, I have split risks into two categories using a single criterion: the ratio between the endogenous and exogenous content of the problem. According to it, I have built a pool of risks that the company may have entirely or partially under control (forming the endogenous part of the problem), and a pool of exogenous risks that the company cannot control at all, but can assess and build strategies to manage (forming the exogenous part of the problem). In each category I have identified one source of risk, representing the most important of all risks belonging to the same pool. For the endogenous part, credit risk (in its extended version, counterparty risk) was selected. Related to this, the topics of systemic risk and of the risk associated with the impact of the international rating agencies' activity on the firm's financing problem when it proceeds to debt issuance are additionally discussed. The other half of the problem involves the risk of the sector the company operates in. I have found that risk assessment in this category becomes an econometric problem of volatility forecasting for a portfolio of selected returns. The discussion is complicated by the following factors: 1. The scientific world has not yet reached a consensus on the superiority of a certain model or group of models for measuring volatility. As such, forecasted volatility estimates may depend on the model or methodology used, the data frequency (high or low), the selection of the error statistics, etc. Decision making regarding the opportunity of the investment therefore becomes highly dependent on the econometric choices made. 2. Multivariate models are computationally intensive due to the parameter estimation problem. If a large number of stocks are included in the portfolio, the number of estimations to be performed would be so high that the problem would be technically very difficult to undertake. 3. Due to the high correlation of stocks, the estimation problem becomes particularly imprecise and computationally difficult. As a solution to such problems, I have justified the superiority of one autoregressive heteroskedastic model (PC-GARCH), considering not only estimation performance but also its cost-saving component. For this purpose, I have run an empirical exercise with a portfolio formed of seven stocks belonging to the US IT sector (Adobe, Apple, Autodesk, Cisco, Dell, Microsoft and 3M) in order to demonstrate the advantages of this model. They may be summarized as follows. PC-GARCH: • minimizes computational effort (by transforming multivariate GARCH models into univariate ones), significantly reducing computation time and avoiding the problems that may arise from complex data manipulations; • ensures tight control of the amount of “noise” by reducing the number of variables to a few principal components, which may prove beneficial since it can result in more stable correlation estimates; • produces volatilities and correlations for all variables in the system, including those for which direct GARCH estimation is computationally difficult.
    As such, I have concluded that when using large portfolios formed of hundreds or thousands of stocks, for the purpose of volatility (and therefore risk) forecasting, PC-GARCH is the most appropriate model to use. Keywords: risk, endogeneity, exogeneity, credit risk, systemic risk, counterparty risk, rating, volatility, forecasting, GARCH, PC-GARCH, principal components, autocorrelation, heteroskedasticity, orthogonality
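    The mechanics of PC-GARCH described above can be sketched in a few lines of Python: principal components are extracted from the return panel, a univariate GARCH(1,1) is fitted to each retained component, and conditional covariance matrices for the original assets are rebuilt from the component variances and loadings. This is an illustrative sketch rather than the thesis's implementation; it assumes the third-party arch package for the univariate fits, uses simulated returns in place of the seven IT stocks, and ignores the idiosyncratic variance of the discarded components.

        import numpy as np
        from arch import arch_model      # third-party package (pip install arch)

        def pc_garch_covariances(returns, n_components):
            """PC-GARCH sketch: PCA on the return panel, univariate GARCH(1,1) on each
            retained component, reconstruction Sigma_t = W diag(h_t) W'."""
            R = returns - returns.mean(axis=0)
            eigval, eigvec = np.linalg.eigh(np.cov(R, rowvar=False))
            W = eigvec[:, np.argsort(eigval)[::-1][:n_components]]   # loadings
            pcs = R @ W                                              # component scores
            cond_var = np.empty_like(pcs)
            for j in range(n_components):
                res = arch_model(100 * pcs[:, j], mean="Zero", vol="GARCH",
                                 p=1, q=1).fit(disp="off")           # rescaled for stability
                cond_var[:, j] = (res.conditional_volatility / 100) ** 2
            # Conditional covariance of the original assets at each date.
            return np.einsum("ik,tk,jk->tij", W, cond_var, W)

        # Toy panel standing in for the seven US IT stocks discussed in the thesis.
        rng = np.random.default_rng(5)
        T, n_assets = 1000, 7
        common = rng.standard_t(df=6, size=(T, 2))
        returns = ((common @ rng.normal(size=(2, n_assets))) * 0.01
                   + 0.005 * rng.normal(size=(T, n_assets)))
        Sigma = pc_garch_covariances(returns, n_components=2)
        print("conditional covariance on the last day:\n", np.round(Sigma[-1], 6))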

    Modeling with ARIMA-ARCH/GARCH Techniques to Estimate Weekly Exchange Rate of Liberia

    The current research employs a univariate linear time series model, ARIMA, and two traditional volatility models, ARCH and GARCH, to analyze the behavior of exchange rate volatility in the Liberian economy using weekly observations spanning January 07, 2013 to December 25, 2017. The study estimates the parameters of the selected models and detects the irregular pattern the financial series exhibits in the Liberian economy. The paper finds substantial volatility and a fat-tailed distribution in the exchange rate series of Liberia; such behavior is worrisome and needs to be modeled carefully. Additionally, the ARCH and GARCH models were estimated separately to capture the volatility pattern in the series. The results show persistent volatility in the financial series, as the estimated ARCH parameter was equal to unity and the sum of the ARCH and GARCH terms was close to unity in the GARCH(1,1) specification. On the other hand, after assuming a generalized error distribution for the exchange rate series due to its fat tails, the parameters of the volatility models decreased significantly, while the means of these equations were practically zero. In addition, the two volatility models were re-estimated on the residuals of the ARIMA model to capture the noise in the univariate time series model. The results reveal that the models performed remarkably well when fitted to the residuals of the ARIMA(1,1,2) model. The recommendation from this empirical research is that, given the high persistence in the series and the risk as well as the very low returns it entails, modelers and policymakers should estimate the exchange rate parameters carefully before producing any point forecast, because knowledge of the distribution and of the calculated returns will aid better prediction. Keywords: Exchange rate volatility, ARIMA model, ARCH model, GARCH model, Volatility clustering, Liberia
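    A compact sketch of the two-step strategy described above, under stated assumptions: a simulated weekly series stands in for the Liberian exchange rate data, statsmodels is assumed for the ARIMA(1,1,2) mean equation, and the third-party arch package (with its parameter-naming conventions) for the GARCH(1,1) fitted to the ARIMA residuals; the paper's generalized-error-distribution variant would correspond to passing dist="ged" to the volatility model.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA
        from arch import arch_model      # third-party package (pip install arch)

        # Simulated weekly series as a stand-in for the Liberian exchange rate data.
        rng = np.random.default_rng(6)
        n = 260                                          # roughly five years, weekly
        level = 90 + np.cumsum(0.05 + 0.8 * rng.normal(size=n))
        rate = pd.Series(level,
                         index=pd.date_range("2013-01-07", periods=n, freq="W-MON"))

        # Step 1: mean equation, ARIMA(1,1,2) as in the paper.
        arima_res = ARIMA(rate, order=(1, 1, 2)).fit()
        resid = arima_res.resid.iloc[1:]                 # drop the initialization residual

        # Step 2: volatility equation, GARCH(1,1) fitted to the ARIMA residuals.
        garch_res = arch_model(resid, mean="Zero", vol="GARCH", p=1, q=1).fit(disp="off")
        alpha, beta = garch_res.params["alpha[1]"], garch_res.params["beta[1]"]
        print(f"ARCH + GARCH persistence: {alpha + beta:.3f}")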

    Untangling hotel industry’s inefficiency: An SFA approach applied to a renowned Portuguese hotel chain

    The present paper explores the technical efficiency of four hotels from the Teixeira Duarte Group, a renowned Portuguese hotel chain. An efficiency ranking of these four hotel units, located in Portugal, is established using Stochastic Frontier Analysis. This methodology makes it possible to discriminate between measurement error and systematic inefficiency in the estimation process, enabling investigation of the main causes of inefficiency. Several suggestions for efficiency improvement are made for each hotel studied.
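    For readers unfamiliar with the method, a minimal Python sketch of a normal/half-normal stochastic production frontier follows; it is not the paper's specification and uses simulated data rather than the Teixeira Duarte hotels. The composed error v - u separates statistical noise from inefficiency, and unit-level inefficiency is recovered with the standard Jondrow et al. estimator, from which an efficiency ranking could be formed.

        import numpy as np
        from scipy import optimize, stats

        def sfa_negloglik(params, X, logy):
            """Negative log-likelihood of a normal / half-normal stochastic frontier
            log y = X beta + v - u,  v ~ N(0, s_v^2),  u ~ |N(0, s_u^2)|."""
            k = X.shape[1]
            beta, sv, su = params[:k], np.exp(params[k]), np.exp(params[k + 1])
            sigma, lam = np.hypot(sv, su), su / sv
            eps = logy - X @ beta
            ll = (np.log(2) - np.log(sigma) + stats.norm.logpdf(eps / sigma)
                  + stats.norm.logcdf(-eps * lam / sigma))
            return -np.sum(ll)

        def jondrow_inefficiency(eps, sv, su):
            """Jondrow et al. point estimate E[u | eps] of unit-level inefficiency."""
            sigma2 = sv ** 2 + su ** 2
            sig_star = sv * su / np.sqrt(sigma2)
            mu_star = -eps * su ** 2 / sigma2
            z = mu_star / sig_star
            return sig_star * stats.norm.pdf(z) / stats.norm.cdf(z) + mu_star

        # Simulated inputs/outputs (hypothetical; not the hotel chain's data).
        rng = np.random.default_rng(7)
        n = 200
        logx = rng.normal(4, 0.5, n)                     # e.g. log labour input
        u = np.abs(rng.normal(0, 0.3, n))                # inefficiency term
        logy = 1.0 + 0.7 * logx + rng.normal(0, 0.2, n) - u
        X = np.column_stack([np.ones(n), logx])

        start = np.array([0.0, 0.5, np.log(0.2), np.log(0.3)])
        res = optimize.minimize(sfa_negloglik, start, args=(X, logy), method="BFGS")
        beta_hat, sv_hat, su_hat = res.x[:2], np.exp(res.x[2]), np.exp(res.x[3])
        efficiency = np.exp(-jondrow_inefficiency(logy - X @ beta_hat, sv_hat, su_hat))
        print(f"frontier slope {beta_hat[1]:.2f}, "
              f"mean technical efficiency {efficiency.mean():.2f}")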

    The 7th Conference of PhD Students in Computer Science
