
    Valuation of Six Asian Stock Markets: Financial System Identification in Noisy Environments

    The open financial economic systems of six Asian countries - Taiwan, Malaysia, Singapore, the Philippines, Indonesia and Japan - over the period 1986 through 1995 are identified from empirical data to determine how their stock markets, economies and financial markets are interrelated. The objective is to find rational stock market valuations using a country's nominal GDP and a short-term interest rate, based on a modified version of the Dividend Discount Model. But our empirical results contradict such conventional financial economic theory. Various methods are used to analyze the 3D data covariance ellipsoids: spectral analysis, analysis of information matrices, 2D and 3D noise/signal determination and "super-filter" system identification based on 3D projections. The new "super-filter" method provides the sharpest identification of the Grassmannian invariant q of the empirical systems and the best computation of the finite boundaries of the empirical parameter ranges. All six Asian systems are high-noise environments, in which it is very difficult to separate systematic signals from noise. Because of these high noise levels, spectral analysis is not reliable. By plotting all 3D q = 2 Complete Least Squares projections we find that only Taiwan has a clear q = 2 system, i.e., Taiwan's stock market, economy and financial market are rationally coherent. In contrast, Malaysia, Singapore, the Philippines and Indonesia have q = 1 systems, in which stock markets and economies are closely related, but unrelated to the respective domestic financial markets. Several possible economic explanations are provided. We also quantitatively establish the incoherence of Japan's financial economic system. Japan's stock market operates independently from its economy and from its financial market, which are mutually unrelated.
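    The valuation approach the abstract refers to can be illustrated with a constant-growth Dividend Discount Model; the function below is a generic textbook sketch, not the authors' modified model, and the input figures are invented for illustration.

```python
def gordon_value(dividend, discount_rate, growth_rate):
    """Constant-growth Dividend Discount Model: V = D * (1 + g) / (r - g)."""
    if discount_rate <= growth_rate:
        raise ValueError("discount rate must exceed growth rate")
    return dividend * (1 + growth_rate) / (discount_rate - growth_rate)

# Invented figures: dividend proxy 100, short-term rate 8%, nominal growth 5%
value = gordon_value(100.0, 0.08, 0.05)
```

    When the discount rate barely exceeds the growth rate, the denominator is tiny and the valuation explodes, which is one reason such models are sensitive to the interest-rate input the abstract mentions.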

    Providing scientific visualisation for spatial data analysis: criteria and an assessment of SAGE

    A consistent theme in recent work on developing exploratory spatial data analysis (ESDA) has been the importance attached to visualisation techniques, particularly following the pioneering development of packages such as REGARD by Haslett et al (1990). The focus on visual techniques is often justified in two ways: (a) the power of modern graphical interfaces means that graphics is no longer a way of simply presenting results in the form of maps or graphs, but a tool for the extraction of information from data; (b) graphical, exploratory methods are felt to be more intuitive for non-specialists to use than methods of numerical spatial statistics, enabling wider participation in the process of gaining insights from data. Despite the importance attached to visualisation techniques, very little work has been done to assess their effectiveness, either in the wider scientific visualisation community or among those working with spatial data. This paper will describe a theoretical framework for developing visualisation tools for ESDA that incorporates a data model of what the analyst is looking for, based on the concepts of "rough" and "smooth" elements of a data set, and a theoretical scheme for assessing visual tools. The paper will include examples of appropriate tools and a commentary on the effectiveness of some existing packages.
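    The "rough" and "smooth" decomposition that the framework's data model rests on can be sketched as data = smooth + rough; the moving-average smoother below is an assumed stand-in for whatever smoother an ESDA tool would actually use, and the data are invented.

```python
def moving_average(values, window=3):
    """Simple centered moving average, shrinking the window at the edges."""
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

data = [2.0, 2.1, 8.0, 2.2, 2.0]               # one obvious outlier
smooth = moving_average(data)                   # the "smooth" element
rough = [d - s for d, s in zip(data, smooth)]   # the "rough" residuals
```

    The large positive residual at the outlier is exactly the kind of "rough" feature a visual ESDA tool would aim to surface.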

    Explanatory visualization of multidimensional projections


    Neural Networks for improved signal source enumeration and localization with unsteered antenna arrays

    Direction of Arrival estimation using unsteered antenna arrays, unlike mechanically scanned or phased arrays, requires complex algorithms which perform poorly with small-aperture arrays or without a large number of observations, or snapshots. In general, these algorithms compute a sample covariance matrix to obtain the direction of arrival, and some require a prior estimate of the number of signal sources. Herein, artificial neural network architectures are proposed which demonstrate improved estimation of the number of signal sources, the true signal covariance matrix, and the direction of arrival. The proposed number-of-sources estimation network demonstrates robust performance in the case of coherent signals, where conventional methods fail. For covariance matrix estimation, four different network architectures are assessed, and the best-performing architecture achieves a 20-times improvement in performance over the sample covariance matrix. Additionally, this network can achieve performance comparable to the sample covariance matrix with 1/8th the number of snapshots. For direction of arrival estimation, preliminary results are provided comparing six architectures, all of which demonstrate high levels of accuracy and demonstrate the benefits of progressively training artificial neural networks by training on a sequence of sub-problems and extending the network to encapsulate the entire process.
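    The sample covariance matrix the abstract refers to is computed from K array snapshots as R = (1/K) Σ x_k x_kᴴ. A minimal pure-Python sketch, with array size and snapshot data invented for illustration:

```python
def sample_covariance(snapshots):
    """snapshots: list of K complex vectors of length M -> M x M matrix."""
    K = len(snapshots)
    M = len(snapshots[0])
    R = [[0j] * M for _ in range(M)]
    for x in snapshots:
        for i in range(M):
            for j in range(M):
                R[i][j] += x[i] * x[j].conjugate() / K  # accumulate x x^H / K
    return R

# Two snapshots from a hypothetical 2-element array
snaps = [[1 + 0j, 0 + 1j], [1 + 0j, 0 - 1j]]
R = sample_covariance(snaps)
```

    With few snapshots this estimate is noisy and possibly rank-deficient, which is precisely the small-sample regime where the abstract's learned covariance estimator is claimed to help.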

    Doctor of Philosophy

    With the ever-increasing amount of available computing resources and sensing devices, a wide variety of high-dimensional datasets are being produced in numerous fields. The complexity and increasing popularity of these data have led to new challenges and opportunities in visualization. Since most display devices are limited to communication through two-dimensional (2D) images, many visualization methods rely on 2D projections to express high-dimensional information. Such a reduction of dimension leads to an explosion in the number of 2D representations required to visualize high-dimensional spaces, each giving a glimpse of the high-dimensional information. As a result, one of the most important challenges in visualizing high-dimensional datasets is the automatic filtration and summarization of the large exploration space consisting of all 2D projections. In this dissertation, a new type of algorithm is introduced to reduce the exploration space that identifies a small set of projections that capture the intrinsic structure of high-dimensional data. In addition, a general framework for summarizing the structure of quality measures in the space of all linear 2D projections is presented. However, identifying the representative or informative projections is only part of the challenge. Due to the high-dimensional nature of these datasets, obtaining insights and arriving at conclusions based solely on 2D representations are limited and prone to error. How to interpret the inaccuracies and resolve the ambiguity in the 2D projections is the other half of the puzzle. This dissertation introduces projection distortion error measures and interactive manipulation schemes that allow the understanding of high-dimensional structures via data manipulation in 2D projections.
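    A linear 2D projection of the kind that makes up the dissertation's exploration space maps each d-dimensional point p to (p·w1, p·w2) for a pair of projection vectors; the axis-aligned pair below is the simplest illustrative member of that family, not a method from the dissertation.

```python
def project_2d(point, w1, w2):
    """Project a point in R^d onto the plane spanned by vectors w1 and w2."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return (dot(point, w1), dot(point, w2))

# Project an invented 4-D point onto the plane spanned by axes 0 and 2.
p = [1.0, 2.0, 3.0, 4.0]
xy = project_2d(p, [1, 0, 0, 0], [0, 0, 1, 0])
```

    Even for axis-aligned projections there are d(d-1)/2 such planes, which hints at why the full space of linear 2D projections needs the automatic filtering the abstract describes.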

    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels, with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of the spectra of materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models searching for robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are first discussed. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally. This work has been accepted for publication in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
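    The linear mixing model underlying most of the surveyed algorithms writes each pixel spectrum as y = Ma + n, where the columns of M are endmember signatures and the abundances a are nonnegative and sum to one. A minimal forward-model sketch with invented three-band spectra:

```python
def mix(endmembers, abundances):
    """Linearly mix endmember spectra under the standard abundance constraints."""
    assert abs(sum(abundances) - 1.0) < 1e-9      # sum-to-one constraint
    assert all(a >= 0 for a in abundances)        # nonnegativity constraint
    bands = len(endmembers[0])
    return [sum(a * e[b] for a, e in zip(abundances, endmembers))
            for b in range(bands)]

# Invented endmember signatures over three spectral bands
soil = [0.2, 0.4, 0.6]
vegetation = [0.8, 0.3, 0.1]
pixel = mix([soil, vegetation], [0.25, 0.75])   # 25% soil, 75% vegetation
```

    Unmixing is the inverse problem: given `pixel` (plus noise), recover the signatures and abundances, which is what makes the constraints above central to the geometrical and sparse-regression methods the paper surveys.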

    Personalized Sketch-Based Brushing in Scatterplots

    Brushing is at the heart of most modern visual analytics solutions, and effective and efficient brushing is crucial for successful interactive data exploration and analysis. As the user plays a central role in brushing, several data-driven brushing tools have been designed that are based on predicting the user's brushing goal. All of these general brushing models learn users' average brushing preference, which is not optimal for every single user. In this paper, we propose an innovative framework that offers the user opportunities to improve the brushing technique while using it. We realized this framework with a CNN-based brushing technique, and the results show that with additional data from a particular user, the model can be refined (better performance in terms of accuracy), eventually converging to a personalized model based on a moderate amount of retraining.