181 research outputs found

    Conditional Scenario Generation with a GVAR Model

    Stress testing forms an integral part of risk-management practice. However, the underlying models for scenario generation have received comparatively little study. In past practice, users typically did not model risk factors endogenously even for portfolios of moderate size, owing to the curse of dimensionality. Moreover, it is almost impossible to impose expert views about future macroeconomic outcomes on a scenario generator without ad-hoc adjustments. In this thesis we propose a GVAR-based framework that allows efficient simulation of risk factors for a complex multi-currency portfolio spanning various asset classes, conditional on economic scenarios. Given reasonable sets of economic forecasts, the GVAR model anticipates the trend and co-dependency of the future paths of portfolio risk factors and supports the production of meaningful results from risk analytics.
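    The conditioning idea described in the abstract — pinning a macro variable to an expert-specified future path while the remaining risk factors are simulated jointly — can be sketched with a toy VAR(1). This is a minimal stand-in for the thesis's GVAR model, not its actual specification; the coefficient matrix, shock covariance, and scenario values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical VAR(1) coefficient matrix and shock covariance for three
# factors; variable 0 plays the role of the "macro" variable.
A = np.array([[0.5, 0.2, 0.0],
              [0.1, 0.6, 0.1],
              [0.0, 0.3, 0.4]])
chol = np.linalg.cholesky(0.01 * np.array([[1.0, 0.3, 0.1],
                                           [0.3, 1.0, 0.2],
                                           [0.1, 0.2, 1.0]]))

def simulate_conditional(x0, macro_path, horizon):
    """Simulate a VAR(1) path, pinning variable 0 to the macro scenario."""
    x = np.array(x0, dtype=float)
    path = []
    for t in range(horizon):
        x = A @ x + chol @ rng.standard_normal(3)
        x[0] = macro_path[t]  # impose the expert view on the macro variable
        path.append(x.copy())
    return np.array(path)

# One 12-step scenario: macro variable held at 2% throughout.
paths = simulate_conditional([0.0, 0.0, 0.0], macro_path=[0.02] * 12, horizon=12)
```

Because the pinned macro value feeds back through the coefficient matrix at every step, the other factors inherit the trend and co-dependency implied by the scenario rather than drifting independently.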

    EEGNN: edge enhanced graph neural network with a Bayesian nonparametric graph model

    Training deep graph neural networks (GNNs) is challenging, as the performance of a GNN may deteriorate as the number of hidden message-passing layers grows. The literature has focused on over-smoothing and under-reaching to explain this performance deterioration in deep GNNs. In this paper, we propose a new explanation for the phenomenon, mis-simplification: mistakenly simplifying graphs by removing self-loops and forcing edges to be unweighted. We show that such simplification can reduce the potential of message-passing layers to capture the structural information of graphs. In view of this, we propose a new framework, the edge enhanced graph neural network (EEGNN). EEGNN uses structural information extracted from the proposed Dirichlet mixture Poisson graph model (DMPGM), a Bayesian nonparametric model for graphs, to improve the performance of various deep message-passing GNNs. We propose a Markov chain Monte Carlo inference framework for DMPGM. Experiments on different datasets show that our method achieves a considerable performance increase over baselines.
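    The mis-simplification point can be made concrete with a single mean-aggregation message-passing layer applied to the same toy graph twice: once keeping edge weights and self-loops, once after the usual simplification. The layer, the three-node graph, and its weights are illustrative inventions, not the EEGNN architecture itself.

```python
import numpy as np

def mp_layer(A, H):
    """One mean-aggregation message-passing step: H' = D^{-1} A H."""
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0  # guard against isolated nodes
    return (A @ H) / deg

# Weighted adjacency with a self-loop on node 0.
A_weighted = np.array([[2.0, 3.0, 0.0],
                       [3.0, 0.0, 1.0],
                       [0.0, 1.0, 0.0]])

# The common simplification: binarize edges and drop self-loops.
A_simplified = (A_weighted > 0).astype(float)
np.fill_diagonal(A_simplified, 0.0)

H = np.eye(3)  # one-hot node features so outputs expose the aggregation
out_full = mp_layer(A_weighted, H)
out_simplified = mp_layer(A_simplified, H)
# The simplified layer has discarded the weight and self-loop structure
# that the full layer still propagates.
```

Comparing `out_full` and `out_simplified` shows the two layers aggregate different neighborhood information from the same underlying graph, which is the information loss the paper attributes to mis-simplification.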

    From sparse to dense functional data in high dimensions: Revisiting phase transitions from a non-asymptotic perspective

    Nonparametric estimation of the mean and covariance functions is ubiquitous in functional data analysis, and local linear smoothing techniques are most frequently used. Zhang and Wang (2016) explored different types of asymptotic properties of the estimation, which reveal interesting phase-transition phenomena based on the relative order of the average sampling frequency per subject T to the number of subjects n, partitioning the data into three categories: "sparse", "semi-dense" and "ultra-dense". In an increasingly available high-dimensional scenario, where the number of functional variables p is large in relation to n, we revisit this open problem from a non-asymptotic perspective by deriving comprehensive concentration inequalities for the local linear smoothers. Besides being of interest in themselves, our non-asymptotic results lead to elementwise maximum rates of L_2 convergence and uniform convergence, serving as a fundamentally important tool for further convergence analysis when p grows exponentially with n and possibly T. With the presence of extra log p terms to account for the high-dimensional effect, we then investigate the scaled phase transitions and the corresponding elementwise maximum rates from sparse to semi-dense to ultra-dense functional data in high dimensions. Finally, numerical studies are carried out to confirm our established theoretical properties.
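    The local linear smoother the abstract builds on can be sketched as follows: pool all (time, value) observations across subjects and, at each grid point, fit a weighted linear regression with a kernel centered there. This is a generic textbook estimator of the mean function, not the paper's high-dimensional machinery; the kernel choice, bandwidth, and synthetic data are illustrative.

```python
import numpy as np

def local_linear_mean(t_obs, y_obs, t_grid, h):
    """Local linear estimate of the mean function from pooled observations."""
    est = np.empty(len(t_grid))
    for i, t0 in enumerate(t_grid):
        u = (t_obs - t0) / h
        w = np.maximum(0.0, 0.75 * (1.0 - u**2))      # Epanechnikov kernel
        X = np.column_stack([np.ones_like(t_obs), t_obs - t0])
        WX = X * w[:, None]
        # Tiny ridge term keeps the 2x2 system well-conditioned.
        beta = np.linalg.solve(X.T @ WX + 1e-10 * np.eye(2), WX.T @ y_obs)
        est[i] = beta[0]                              # intercept = mean at t0
    return est

# Synthetic sparse functional data: 200 subjects, 10 observations each,
# pooled together; true mean function is sin(2*pi*t).
rng = np.random.default_rng(0)
t_obs = rng.uniform(0.0, 1.0, 2000)
y_obs = np.sin(2 * np.pi * t_obs) + rng.normal(0.0, 0.1, 2000)
grid = np.linspace(0.1, 0.9, 9)
mu_hat = local_linear_mean(t_obs, y_obs, grid, h=0.1)
```

The bandwidth h governs the bias-variance trade-off whose interaction with T, n, and p drives the phase transitions the paper studies.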

    Dvr1 transfers left–right asymmetric signals from Kupffer's vesicle to lateral plate mesoderm in zebrafish

    An early step in establishing left–right (LR) asymmetry in zebrafish is the generation of asymmetric fluid flow by Kupffer's vesicle (KV). As a result of fluid flow, a signal is generated and propagated from the KV to the left lateral plate mesoderm (LPM), activating a transcriptional response of Nodal expression in the left LPM. The mechanisms and molecules that aid in this transfer of information from the KV to the left LPM are still not clear. Here we provide several lines of evidence demonstrating a role for a member of the TGFβ family, Dvr1, a zebrafish Vg1 ortholog. Dvr1 is expressed bilaterally between the KV and the LPM. Knockdown of Dvr1 by morpholino causes dramatically reduced or absent expression of southpaw (spaw, a Nodal homolog) in the LPM, with corresponding loss of downstream Lefty (lft1 and lft2) expression and aberrant brain and heart LR patterning. Dvr1 morphant embryos have normal KV morphology and function, normal expression of southpaw (spaw) and charon (cha) in the peri-KV region, and normal expression of a variety of markers in the LPM. Additionally, Dvr1 knockdown does not alter the capability of the LPM to respond to signals that initiate and propagate spaw expression. Co-injection experiments in Xenopus and zebrafish indicate that Dvr1 and Spaw can enhance each other's ability to activate the Nodal response pathway, and co-immunoprecipitation experiments reveal differential relationships among activators and inhibitors in this pathway. These results indicate that Dvr1 is responsible for enabling the transfer of a left–right signal from the KV to the LPM.

    AdaBin: Improving Binary Neural Networks with Adaptive Binary Sets

    This paper studies Binary Neural Networks (BNNs), in which weights and activations are both binarized into 1-bit values, greatly reducing memory usage and computational complexity. Since modern deep neural networks have sophisticated designs with complex architectures for accuracy reasons, the distributions of weights and activations are highly diverse. Therefore, the conventional sign function cannot effectively binarize full-precision values in BNNs. To this end, we present a simple yet effective approach called AdaBin that adaptively obtains an optimal binary set {b_1, b_2} (b_1, b_2 ∈ ℝ) of weights and activations for each layer, instead of a fixed set (i.e., {-1, +1}). In this way, the proposed method can better fit different distributions and increase the representation ability of binarized features. In practice, we use the center position and distance of the 1-bit values to define a new binary quantization function. For the weights, we propose an equalization method to align the symmetric center of the binary distribution with that of the real-valued distribution and to minimize the Kullback-Leibler divergence between them. Meanwhile, we introduce a gradient-based optimization method to obtain these two parameters for activations, which are jointly trained in an end-to-end manner. Experimental results on benchmark models and datasets demonstrate that the proposed AdaBin achieves state-of-the-art performance. For instance, we obtain 66.4% Top-1 accuracy on ImageNet using the ResNet-18 architecture, and 69.4 mAP on PASCAL VOC using SSD300. The PyTorch code is available at https://github.com/huawei-noah/Efficient-Computing/tree/master/BinaryNetworks/AdaBin and the MindSpore code is available at https://gitee.com/mindspore/models/tree/master/research/cv/AdaBin. (Comment: ECCV 2022)
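    The center-and-distance quantization described above can be sketched as a two-level mapping: values below the center go to b_1 = center - distance, values above go to b_2 = center + distance. How AdaBin actually fits these parameters (KL minimization for weights, gradient-based training for activations) is not reproduced here; the simple moment-based estimates below are stand-ins for illustration only.

```python
import numpy as np

def adabin_quantize(x, center, dist):
    """Map each value to one of the two levels {center - dist, center + dist}."""
    return np.where(x < center, center - dist, center + dist)

# A skewed, shifted weight distribution that a plain sign() would fit poorly.
rng = np.random.default_rng(1)
w = rng.normal(loc=0.3, scale=0.8, size=1000)

# Illustrative parameter estimates (not AdaBin's fitting procedure):
center = w.mean()                  # align the binary set's symmetric center
dist = np.abs(w - center).mean()   # crude spread estimate
w_bin = adabin_quantize(w, center, dist)
```

With a fixed set {-1, +1} the quantized values ignore the 0.3 shift of this distribution entirely, while the adaptive set recenters and rescales the two levels per layer — the representational gain the paper attributes to AdaBin.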