2,582 research outputs found

    On a generalised model for time-dependent variance with long-term memory

    The ARCH process (R. F. Engle, 1982) constitutes a paradigmatic generator of stochastic time series with time-dependent variance, of the kind that appears in a broad range of systems beyond economics, the field in which ARCH was born. Although the ARCH process captures the so-called "volatility clustering" and the asymptotic power-law probability density function of the random variable, it is not capable of reproducing further statistical properties of many of these time series, such as the strong persistence of the instantaneous variance, characterised by large values of the Hurst exponent (H > 0.8), and the asymptotic power-law decay of the self-correlation function of absolute values. By considering an effective return obtained from a correlation of past returns with a q-exponential form, we are able to fix the limitations of the original model. Moreover, this improvement is obtained through the correct choice of a single additional parameter, q_m. We assess its validity and usefulness by mimicking daily fluctuations of the S&P 500 financial index.
    Comment: 6 pages, 4 figures
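    The construction above lends itself to a short simulation. The sketch below generates an ARCH(1)-like series whose variance is driven by an effective return, a q-exponentially weighted sum of past returns; the kernel shape, the parameter values (a, b, q_m) and the finite memory window are illustrative assumptions rather than the paper's calibration.

        import numpy as np

        def q_exponential(x, q):
            """Tsallis q-exponential e_q(x) = [1 + (1-q) x]_+^(1/(1-q)), q != 1."""
            base = 1.0 + (1.0 - q) * x
            return np.where(base > 0.0, base ** (1.0 / (1.0 - q)), 0.0)

        def generalised_arch(n, a=0.1, b=0.8, q_m=1.5, memory=50, seed=0):
            """ARCH(1)-like generator whose variance responds to an 'effective
            return': a q-exponentially weighted sum of past returns."""
            rng = np.random.default_rng(seed)
            kernel = q_exponential(-np.arange(1, memory + 1, dtype=float), q_m)
            kernel /= kernel.sum()                    # normalise the memory kernel
            x = np.zeros(n)
            for t in range(1, n):
                lags = x[max(0, t - memory):t][::-1]  # most recent return first
                x_eff = kernel[:lags.size] @ lags
                sigma2 = a + b * x_eff ** 2           # time-dependent variance
                x[t] = np.sqrt(sigma2) * rng.standard_normal()
            return x

    In the limit q_m -> 1 the kernel decays exponentially (short memory), while q_m > 1 yields the power-law kernel responsible for the long-range persistence the abstract targets.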

    Structure and apparent topography of TiO2 (110) surfaces

    We present self-consistent ab-initio total-energy and electronic-structure calculations on stoichiometric and non-stoichiometric TiO2 (110) surfaces. Scanning tunneling microscopy (STM) topographs are simulated by calculating the local electronic density of states over an energy window appropriate for the experimental positive-bias conditions. We find that under these conditions the STM tends to image the undercoordinated Ti atoms, in spite of the physical protrusion of the O atoms, giving an apparent reversal of topographic contrast on the stoichiometric 1x1 or missing-row 2x1 surface. We also show that both the interpretation of STM images and the direct comparison of surface energies favor an added-row structure over the missing-row structure for the oxygen-deficient 2x1 surface.
    Comment: 6 pages, two-column style with 5 postscript figures embedded. Uses REVTEX and epsf macros. Also available at http://www.physics.rutgers.edu/~dhv/preprints/index.html#ng_tio
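    The simulation recipe in the abstract is essentially the Tersoff-Hamann picture: integrate the local density of states over the bias window and trace a constant-current contour. A minimal numpy sketch, assuming a precomputed LDOS grid (the array shapes, the set-point value and the function name are placeholders, not the paper's code):

        import numpy as np

        def stm_topograph(ldos, energies, bias, setpoint=1e-4):
            """Constant-current STM image in the Tersoff-Hamann approximation.
            ldos: rho(x, y, z, E) on a grid, shape (nx, ny, nz, nE), vacuum at
            large z; energies measured from E_F in eV. Positive sample bias
            probes empty states, so integrate over [0, bias]."""
            window = (energies >= 0.0) & (energies <= bias)
            current = np.trapz(ldos[..., window], energies[window], axis=-1)
            # scan down from the vacuum side: the outermost z where the
            # integrated LDOS first reaches the set-point is the apparent height
            above = current >= setpoint
            nz = current.shape[-1]
            found = above.any(axis=-1)
            height = nz - 1 - above[..., ::-1].argmax(axis=-1)
            return np.where(found, height, 0)

    In this picture a Ti site with a large empty-state LDOS can appear higher than a physically protruding O site, which is the contrast reversal the paper reports.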

    Model-Based Clustering and Classification of Functional Data

    The problem of complex data analysis is a central topic of modern statistical science and learning systems, and is becoming of broader interest with the increasing prevalence of high-dimensional data. The challenge is to develop statistical models and autonomous algorithms that are able to acquire knowledge from raw data, whether for exploratory analysis, which can be achieved through clustering techniques, or to make predictions of future data via classification (i.e., discriminant analysis) techniques. Latent data models, including mixture-model-based approaches, are among the most popular and successful approaches in both the unsupervised context (i.e., clustering) and the supervised one (i.e., classification or discrimination). Although traditionally tools of multivariate analysis, they are growing in popularity when considered in the framework of functional data analysis (FDA). FDA is the data analysis paradigm in which the individual data units are functions (e.g., curves, surfaces) rather than simple vectors. In many areas of application, the analyzed data are indeed often available in the form of discretized values of functions or curves (e.g., time series, waveforms) and surfaces (e.g., 2d-images, spatio-temporal data). This functional aspect of the data adds difficulties compared to the case of a classical multivariate (non-functional) data analysis. We review and present approaches for model-based clustering and classification of functional data. We derive well-established statistical models along with efficient algorithmic tools to address problems regarding the clustering and the classification of these high-dimensional data, including their heterogeneity, missing information, and dynamical hidden structure. The presented models and algorithms are illustrated on real-world functional data analysis problems from several application areas.
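    One common model-based route, useful as a mental model for the approaches reviewed: summarise each discretised curve by its coefficients in a smooth basis, then fit a finite mixture to the coefficient vectors. The sketch below illustrates that two-stage idea; it is not one of the specific models of the paper, and the Gaussian-bump basis and parameter values are arbitrary choices.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def cluster_curves(curves, t, n_basis=8, n_clusters=3, seed=0):
            """Model-based clustering of curves observed on a common grid t.
            curves: (n_curves, n_points). Returns one cluster label per curve."""
            # smooth basis of Gaussian bumps spread over the grid
            centres = np.linspace(t.min(), t.max(), n_basis)
            width = (t.max() - t.min()) / n_basis
            basis = np.exp(-0.5 * ((t[:, None] - centres[None, :]) / width) ** 2)
            # least-squares projection: each curve -> n_basis coefficients
            coefs, *_ = np.linalg.lstsq(basis, curves.T, rcond=None)
            # mixture model on the low-dimensional coefficient vectors
            gmm = GaussianMixture(n_components=n_clusters, random_state=seed)
            return gmm.fit_predict(coefs.T)

    Working in coefficient space is what tames the high (in principle infinite) dimensionality of functional data that the abstract emphasises.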

    FlashCam: a fully-digital camera for the medium-sized telescopes of the Cherenkov Telescope Array

    The FlashCam group is currently preparing photomultiplier-tube-based cameras proposed for the medium-sized telescopes (MST) of the Cherenkov Telescope Array (CTA). The cameras are designed around the FlashCam readout concept, the first fully-digital readout system for Cherenkov cameras, based on commercial FADCs and FPGAs as key components of the front-end electronics modules and a high-performance camera server as back-end. This contribution describes the progress of the full-scale FlashCam camera prototype currently under construction, as well as performance results obtained with earlier demonstrator setups. Plans towards the production and implementation of FlashCams on site are also briefly presented.
    Comment: 8 pages, 6 figures. In Proceedings of the 34th International Cosmic Ray Conference (ICRC2015), The Hague, The Netherlands. All CTA contributions at arXiv:1508.0589
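    In a fully-digital readout, triggering reduces to arithmetic on FADC samples. A toy sketch of the idea (not the actual FlashCam trigger logic; the pixel grouping, window length and threshold are invented for illustration):

        import numpy as np

        def digital_sum_trigger(traces, window=4, threshold=250):
            """Toy digital trigger: sum the FADC traces of a pixel patch,
            slide a short window over the summed trace, and report the first
            sample index at which the windowed sum crosses the threshold.
            traces: (n_pixels, n_samples) baseline-subtracted ADC counts."""
            patch_sum = traces.sum(axis=0)
            sliding = np.convolve(patch_sum, np.ones(window), mode="valid")
            hits = np.flatnonzero(sliding >= threshold)
            return int(hits[0]) if hits.size else None

    Because the decision is computed from digitised samples, different trigger schemes can be hosted in firmware on the same hardware, which is part of the appeal of a fully-digital design.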

    Performance Verification of the FlashCam Prototype Camera for the Cherenkov Telescope Array

    The Cherenkov Telescope Array (CTA) is a future gamma-ray observatory that is planned to significantly improve upon the sensitivity and precision of the current generation of Cherenkov telescopes. The observatory will consist of several dozen telescopes of different sizes, equipped with different types of cameras. Of these, the FlashCam camera system is the first to implement a fully digital signal processing chain, which allows for a traceable, configurable trigger scheme and flexible signal reconstruction. As of autumn 2016, a prototype FlashCam camera for the medium-sized telescopes of CTA nears completion. First results of the ongoing system tests demonstrate that the signal chain and the readout system surpass CTA requirements. The stability of the system is shown using long-term temperature cycling.
    Comment: 5 pages, 13 figures, Proceedings of the 9th International Workshop on Ring Imaging Cherenkov Detectors (RICH 2016), Lake Bled, Slovenia
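    "Flexible signal reconstruction" on digitised traces typically means charge and arrival-time extraction in software. A minimal sketch under generic assumptions (fixed integration window, baseline from the leading samples; none of the numbers are FlashCam parameters):

        import numpy as np

        def reconstruct_pulse(trace, n_baseline=10, half_window=3):
            """Estimate charge and arrival time of a single FADC pulse:
            subtract a baseline estimated from the first samples, integrate
            around the peak, and take a centre-of-gravity time."""
            corrected = trace - trace[:n_baseline].mean()
            peak = int(np.argmax(corrected))
            lo = max(0, peak - half_window)
            hi = min(corrected.size, peak + half_window + 1)
            charge = corrected[lo:hi].sum()           # integrated ADC counts
            idx = np.arange(lo, hi)
            t_cog = (idx * corrected[lo:hi]).sum() / max(charge, 1e-12)
            return charge, float(t_cog)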

    Au/TiO2(110) interfacial reconstruction stability from ab initio

    We determine the stability and properties of interfaces of low-index Au surfaces adhered to TiO2(110), using density functional theory energy density calculations. We consider Au(100) and Au(111) epitaxies on the rutile TiO2(110) surface, as observed in experiments. For each epitaxy, we consider several different interfaces: Au(111)//TiO2(110) and Au(100)//TiO2(110), with and without bridging oxygen, Au(111) on the 1x2 added-row TiO2(110) reconstruction, and Au(111) on a proposed 1x2 TiO reconstruction. The density functional theory energy density method computes the energy change on each of the atoms upon forming the interface, and evaluates the work of adhesion to determine the equilibrium interfacial structure.
    Comment: 20 pages, 11 figures
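    The work of adhesion used to rank the interfaces follows the standard definition W_adh = (E_slab,Au + E_slab,TiO2 - E_interface) / A. A worked toy example (all energies and the area are placeholder numbers, not values from the paper):

        # Work of adhesion from slab and interface total energies.
        E_au_slab   = -250.137   # eV, isolated Au slab in the same supercell
        E_tio2_slab = -480.552   # eV, isolated TiO2(110) slab
        E_interface = -731.904   # eV, adhered Au//TiO2(110) supercell
        area        = 89.4       # Angstrom^2, interface cross-section

        # positive W_adh means the adhered interface is bound
        w_adh = (E_au_slab + E_tio2_slab - E_interface) / area
        print(f"work of adhesion: {w_adh:.4f} eV/Angstrom^2")

    With everything else equal, the candidate reconstruction giving the largest W_adh is identified as the equilibrium interfacial structure.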

    Time connectedness of fear

    This paper examines the interconnection between four implied volatility indices representative of the investors' consensus view of expected stock market volatility at different maturities during the period January 3, 2011 to May 4, 2018. To this end, we first perform a static analysis to measure the total volatility connectedness over the entire period, using the framework proposed by Diebold and Yilmaz (2014). Second, we apply a dynamic analysis to evaluate the net directional connectedness for each market, using the TVP-VAR connectedness approach developed by Antonakakis and Gabauer (2017). Our results suggest that 72.27% of the total variance of the forecast errors is explained by shocks across the examined investor time horizons, indicating that the remaining 27.73% of the variation is due to idiosyncratic shocks. Furthermore, we find that volatility connectedness varies over time, with a surge during periods of increasing economic and financial instability. Finally, we also document a superior performance of the TVP-VAR approach to connectedness with respect to the original one proposed by Diebold and Yilmaz (2014).
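    The headline 72.27% figure is the Diebold-Yilmaz total connectedness index: the off-diagonal mass of the forecast-error variance decomposition (FEVD) matrix, averaged across markets. A minimal sketch of the computation (the 4x4 FEVD matrix below is invented for illustration, not estimated from the paper's data):

        import numpy as np

        # theta[i, j]: share of market i's H-step forecast-error variance
        # attributable to shocks in market j (generalised FEVD, row-normalised)
        theta = np.array([
            [0.40, 0.25, 0.20, 0.15],
            [0.22, 0.38, 0.22, 0.18],
            [0.18, 0.24, 0.36, 0.22],
            [0.15, 0.20, 0.25, 0.40],
        ])
        theta /= theta.sum(axis=1, keepdims=True)

        n = theta.shape[0]
        off_diagonal = theta.sum() - np.trace(theta)
        total_connectedness = 100.0 * off_diagonal / n   # percent
        print(f"total connectedness: {total_connectedness:.2f}%")

    The diagonal shares play the role of the idiosyncratic (own-shock) part, which is why the two percentages in the abstract sum to 100%.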

    Search for Exotic Strange Quark Matter in High Energy Nuclear Reactions

    We report on a search for metastable positively and negatively charged states of strange quark matter in Au+Pb reactions at 11.6 A GeV/c in experiment E864. We have sampled approximately six billion of the 10% most central Au+Pb interactions and have observed no strangelet states (baryon number A < 100 droplets of strange quark matter). We thus set upper limits on the production of these exotic states at the level of 1-6 x 10^{-8} per central collision. These limits are the best and most model-independent for this colliding system. We discuss the implications of our results for strangelet production mechanisms, and also for the stability question of strange quark matter.
    Comment: 21 pages, 9 figures, to be published in Nuclear Physics A (Carl Dover memorial edition)
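    The order of magnitude of such a limit follows from simple Poisson counting: with zero observed candidates, the 90% CL upper limit on the expected number of events is -ln(0.10) ~ 2.3, divided by the effective number of sampled collisions. A back-of-the-envelope sketch (the acceptance-times-efficiency figure is an invented stand-in, not the E864 correction):

        import math

        n_sampled = 6.0e9        # 10% most central collisions sampled
        acc_eff = 0.01           # assumed acceptance x efficiency (placeholder)
        n_up = -math.log(0.10)   # ~2.303 events, 90% CL Poisson upper limit
        limit = n_up / (n_sampled * acc_eff)
        print(f"upper limit: {limit:.1e} per central collision")  # ~3.8e-08

    A range such as the quoted 1-6 x 10^{-8} presumably reflects how the acceptance varies across the assumed states and production models.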

    Pitfalls and Opportunities in the Use of Extreme Value Theory in Risk Management

    Recent literature has trumpeted the claim that extreme value theory (EVT) holds promise for accurate estimation of extreme quantiles and tail probabilities of financial asset returns, and hence holds promise for advances in the management of extreme financial risks. Our view, based on a disinterested assessment of EVT from the vantage point of financial risk management, is that the recent optimism is partly appropriate but also partly exaggerated, and that at any rate much of the potential of EVT remains latent. We substantiate this claim by sketching a number of pitfalls associated with the use of EVT techniques. More constructively, we show how certain of the pitfalls can be avoided, and we sketch a number of explicit research directions that will help the potential of EVT to be realized.
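    A concrete instance of both the promise and the pitfalls is tail-quantile estimation with the Hill estimator, where the number of order statistics to use is exactly the kind of tuning decision such pitfall discussions revolve around. A minimal sketch (the function name and defaults are illustrative; it assumes the k largest losses are positive):

        import numpy as np

        def hill_tail_quantile(returns, k=50, p=0.999):
            """Hill estimate of the loss-tail index and a Weissman-type
            extrapolated p-quantile of losses. k, the number of upper order
            statistics, is the notorious bias-variance tuning choice."""
            losses = np.sort(-np.asarray(returns, dtype=float))[::-1]
            tail, threshold = losses[:k], losses[k]
            gamma = np.mean(np.log(tail) - np.log(threshold))  # = 1 / alpha
            n = losses.size
            return threshold * (k / (n * (1.0 - p))) ** gamma

    Varying k can move the estimated 99.9% quantile substantially, which is one reason optimism about EVT deserves qualification.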