263 research outputs found

    Does the cross-listing premium that Canadian firms gain also reflect their own default probabilities?

    1 online resource (iv, 35 p.) : col. ill. Includes abstract and appendix. Includes bibliographical references (p. 31-33).
    Cross-listing has been a popular strategy for business expansion and seems always to be followed by an appreciation in firm value. Previous theories explain this stock premium either by risk reduction (committing to and then providing better protection for minority shareholders, and improving the information environment and media coverage) or by growth opportunities (raising capital for potential growth projects and reducing the cost of capital across a larger investor base). This paper connects the stock premium with a firm-level attribute, default probability, and tests whether this relationship is statistically significant in several regression models. The sample consists of 47 Canadian firms from 10 major sectors and 38 industries that officially announced cross-listings on the NYSE or NASDAQ between 1982 and 2002. Financial data are collected from Datastream to measure firm-specific factors, and cross-sectional models are applied to capture sector-specific factors. It is reasonable to conclude that the pre-listing premium and firm size account for most of the post-listing premium, while default probability also exerts explanatory power.
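    As a rough illustration of the cross-sectional regression described above, the sketch below fits the post-listing premium on the pre-listing premium, firm size, and default probability by ordinary least squares; all variable names and data are synthetic placeholders, not the paper's Datastream sample.

        import numpy as np

        # Hypothetical cross-section: one row per cross-listed firm.
        # All columns are assumed for illustration only.
        rng = np.random.default_rng(0)
        n_firms = 47
        pre_premium = rng.normal(0.05, 0.02, n_firms)    # pre-listing premium
        log_size = rng.normal(7.0, 1.2, n_firms)         # log firm size
        default_prob = rng.uniform(0.0, 0.15, n_firms)   # estimated default probability

        # Synthetic dependent variable: post-listing premium.
        post_premium = (0.6 * pre_premium - 0.01 * log_size
                        - 0.2 * default_prob + rng.normal(0, 0.01, n_firms))

        # OLS: post_premium ~ const + pre_premium + log_size + default_prob
        X = np.column_stack([np.ones(n_firms), pre_premium, log_size, default_prob])
        coef, *_ = np.linalg.lstsq(X, post_premium, rcond=None)
        print("intercept, pre-premium, size, default-probability coefficients:", coef)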

    Unified Framework for the Effective Rate Analysis of Wireless Communication Systems over MISO Fading Channels

    This paper proposes a unified framework for effective rate analysis over arbitrarily correlated and not necessarily identical multiple-input single-output (MISO) fading channels, based on a moment generating function (MGF) approach and the H transform representation. The proposed framework has the potential to simplify the cumbersome analysis procedure compared with the probability density function (PDF) based approach. Moreover, the effective rates over two specific fading scenarios are investigated, namely independent but not necessarily identically distributed (i.n.i.d.) MISO hyper Fox's H fading channels and arbitrarily correlated generalized K fading channels. Exact analytical representations for these two scenarios are presented. By substituting the corresponding parameters, the effective rates in various practical fading scenarios, such as Rayleigh, Nakagami-m, Weibull/Gamma and generalized K fading channels, are readily available. In addition, asymptotic approximations are provided for the proposed H transform and MGF based approach as well as for the effective rate over i.n.i.d. MISO hyper Fox's H fading channels. Simulations under various fading scenarios are also presented, which support the validity of the proposed method.
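    The snippet below is only a Monte Carlo sanity check of the underlying effective rate definition, R = -(1/A) log2 E[(1 + SNR)^(-A)] with A = theta*T*B / ln 2, evaluated over an assumed i.i.d. Rayleigh MISO channel with maximum ratio transmission; it does not reproduce the paper's MGF and H transform derivations.

        import numpy as np

        # Monte Carlo estimate of the effective rate
        #   R = -(1/A) * log2( E[(1 + SNR)^(-A)] ),  A = theta*T*B / ln 2,
        # over an assumed i.i.d. Rayleigh MISO channel with maximum ratio
        # transmission (illustrative parameters, not the paper's closed forms).
        rng = np.random.default_rng(1)
        n_tx = 4           # transmit antennas
        snr_avg = 10.0     # average per-antenna SNR (linear scale)
        A = 2.0            # QoS exponent term theta*T*B / ln 2 (assumed)
        n_samples = 200_000

        # MRT over i.i.d. Rayleigh fading: post-beamforming SNR = snr_avg * ||h||^2.
        h = (rng.normal(size=(n_samples, n_tx))
             + 1j * rng.normal(size=(n_samples, n_tx))) / np.sqrt(2)
        snr = snr_avg * np.sum(np.abs(h) ** 2, axis=1)

        effective_rate = -np.log2(np.mean((1.0 + snr) ** (-A))) / A
        print("effective rate estimate (bit/s/Hz):", effective_rate)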

    Diffusion-Model-Assisted Supervised Learning of Generative Models for Density Estimation

    We present a supervised learning framework for training generative models for density estimation. Generative models, including generative adversarial networks, normalizing flows, and variational auto-encoders, are usually considered unsupervised learning models, because labeled data are usually unavailable for training. Despite the success of these generative models, there are several issues with unsupervised training, e.g., the requirement of reversible architectures, vanishing gradients, and training instability. To enable supervised learning in generative models, we utilize a score-based diffusion model to generate labeled data. Unlike existing diffusion models that train neural networks to learn the score function, we develop a training-free score estimation method. This approach uses mini-batch-based Monte Carlo estimators to directly approximate the score function at any spatio-temporal location when solving an ordinary differential equation (ODE) corresponding to the reverse-time stochastic differential equation (SDE). This approach offers both high accuracy and substantial time savings in neural network training. Once the labeled data are generated, we can train a simple fully connected neural network to learn the generative model in a supervised manner. Compared with existing normalizing flow models, our method does not require reversible neural networks and avoids computing the Jacobian matrix. Compared with existing diffusion models, our method does not need to solve the reverse-time SDE to generate new samples, so the sampling efficiency is significantly improved. We demonstrate the performance of our method by applying it to a set of 2D datasets as well as real data from the UCI repository.
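    A minimal sketch of the training-free, mini-batch Monte Carlo score estimation idea described above, assuming a standard variance-preserving forward process x_t = alpha_t * x_0 + sigma_t * eps and an empirical data distribution; the schedule and names are illustrative, not the authors' implementation.

        import numpy as np

        # Training-free score estimate: for an empirical data distribution,
        # p_t(x) is a Gaussian mixture centered at alpha_t * x_i with variance
        # sigma_t^2, so grad_x log p_t(x) is a responsibility-weighted average
        # of (alpha_t * x_i - x) / sigma_t^2 over a random mini-batch.
        def mc_score(x, t, data, rng, batch_size=256):
            alpha = np.exp(-0.5 * t)                  # assumed VP-type schedule
            sigma = np.sqrt(1.0 - np.exp(-t))
            batch = data[rng.choice(len(data), size=batch_size, replace=False)]
            logw = -np.sum((x - alpha * batch) ** 2, axis=1) / (2.0 * sigma ** 2)
            w = np.exp(logw - logw.max())
            w /= w.sum()                              # mixture responsibilities
            return (w[:, None] * (alpha * batch - x)).sum(axis=0) / sigma ** 2

        rng = np.random.default_rng(2)
        data = rng.normal(loc=[2.0, -1.0], scale=0.5, size=(10_000, 2))  # toy 2D data
        print("estimated score at the origin, t = 0.5:",
              mc_score(np.zeros(2), t=0.5, data=data, rng=rng))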

    A divergence-free constrained magnetic field interpolation method for scattered data

    An interpolation method for evaluating magnetic fields from unstructured, scattered magnetic data is presented. The method is based on reconstructing the global magnetic field as a superposition of orthogonal functions. The coefficients of the expansion are obtained by minimizing a cost function defined as the L^2 norm of the difference between the ground truth and the reconstructed magnetic field evaluated on the training data. The divergence-free condition is incorporated as a constraint in the cost function, allowing the method to achieve arbitrarily small errors in the magnetic field divergence. An exponential decay of the approximation error is observed and compared with the less favorable algebraic decay of local splines. Compared to local methods involving computationally expensive search algorithms, the proposed method exhibits a significant reduction in the computational complexity of the field evaluation, while maintaining a small divergence error even in the presence of magnetic islands and stochasticity. Applications to the computation of Poincaré sections using data obtained from numerical solutions of the magnetohydrodynamic equations in toroidal geometry are presented and compared with local methods currently in use.
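    A minimal 2D sketch of the constrained least-squares idea described above: fit a small Fourier expansion of the field to scattered samples and penalize the divergence at collocation points. The basis, penalty weight, and test field are assumptions for illustration; the paper uses its own choice of orthogonal functions and targets toroidal geometry.

        import numpy as np

        rng = np.random.default_rng(3)

        # Scattered samples of a known divergence-free field
        # B = (sin x cos y, -cos x sin y), i.e. stream function psi = sin x sin y.
        pts = rng.uniform(0, 2 * np.pi, size=(300, 2))
        bx = np.sin(pts[:, 0]) * np.cos(pts[:, 1])
        by = -np.cos(pts[:, 0]) * np.sin(pts[:, 1])

        # Low-order Fourier basis phi_k(x, y) = exp(i k . r).
        ks = np.array([(kx, ky) for kx in range(-2, 3) for ky in range(-2, 3)])

        def design(points):
            """Basis values and their x/y derivatives at the given points."""
            phi = np.exp(1j * points @ ks.T)          # (n_points, n_basis)
            return phi, 1j * ks[:, 0] * phi, 1j * ks[:, 1] * phi

        phi, _, _ = design(pts)
        colloc = rng.uniform(0, 2 * np.pi, size=(400, 2))
        _, dpx, dpy = design(colloc)

        lam = 10.0                                    # divergence penalty weight
        zeros = np.zeros_like(phi)
        A = np.block([[phi, zeros],                   # Bx data equations
                      [zeros, phi],                   # By data equations
                      [np.sqrt(lam) * dpx, np.sqrt(lam) * dpy]])  # div B ~ 0
        rhs = np.concatenate([bx, by, np.zeros(len(colloc))])
        coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)

        # Check the divergence of the reconstruction on fresh points.
        test = rng.uniform(0, 2 * np.pi, size=(200, 2))
        _, tdx, tdy = design(test)
        div = tdx @ coef[:len(ks)] + tdy @ coef[len(ks):]
        print("max |div B| on test points:", np.abs(div).max())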