The Effect of Blockholders on Bank Valuation
This paper examines the effect of blockholders on bank valuation. We use two measures of bank valuation, namely Tobin's Q and market to book ratio, and two measures of blockholders, namely number of blockholders and total ownership of all blockholders. Using a sample of publicly-traded bank holding companies in the U.S. from 1996 to 2001, we find a negative relationship between total ownership of all blockholders and bank valuation, but a positive relationship between number of blockholders and bank valuation.
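The regression described above can be sketched with synthetic data. This is an illustrative toy, not the paper's data or econometric specification: a valuation proxy (Tobin's Q) is regressed on the two blockholder measures via ordinary least squares, with coefficient signs chosen to match the abstract's findings.

```python
import numpy as np

# Illustrative sketch only (synthetic data, not the paper's sample):
# regress a Tobin's Q proxy on the two blockholder measures.
rng = np.random.default_rng(0)
n = 200
num_blockholders = rng.integers(0, 6, size=n).astype(float)  # count of blockholders
total_ownership = rng.uniform(0.0, 0.5, size=n)              # fraction of shares held

# Synthetic valuation consistent with the abstract's signs:
# positive on the count, negative on total ownership.
tobins_q = (1.0 + 0.05 * num_blockholders - 0.8 * total_ownership
            + rng.normal(0.0, 0.02, size=n))

# OLS via least squares: [intercept, count coefficient, ownership coefficient]
X = np.column_stack([np.ones(n), num_blockholders, total_ownership])
beta, *_ = np.linalg.lstsq(X, tobins_q, rcond=None)
print(beta)
```

On this synthetic sample the fitted coefficients recover the posited signs: positive on the number of blockholders, negative on total ownership.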
Probabilistic Reduced-Dimensional Vector Autoregressive Modeling for Dynamics Prediction and Reconstruction with Oblique Projections
In this paper, we propose a probabilistic reduced-dimensional vector
autoregressive (PredVAR) model with oblique projections. This model partitions
the measurement space into a dynamic subspace and a static subspace that do not
need to be orthogonal. The partition allows us to apply an oblique projection
to extract dynamic latent variables (DLVs) from high-dimensional data with
maximized predictability. We develop an alternating iterative PredVAR algorithm
that exploits the interaction between updating the latent VAR dynamics and
estimating the oblique projection, using expectation maximization (EM) and a
statistical constraint. In addition, the noise covariance matrices are
estimated as a natural outcome of the EM method. A simulation case study of the
nonlinear Lorenz oscillation system illustrates the advantages of the proposed
approach over two alternatives.
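Two ingredients described above can be sketched on toy data. This is a simplified illustration, not the paper's PredVAR algorithm: it uses a fixed oblique extraction matrix (the paper estimates the projection jointly with the dynamics via EM), extracts low-dimensional dynamic latent variables (DLVs), and fits VAR(1) dynamics on them by least squares.

```python
import numpy as np

# Toy sketch: oblique extraction of DLVs plus a VAR(1) fit (illustrative
# only; PredVAR alternates between these steps with EM and a statistical
# constraint, and also estimates the noise covariances).
rng = np.random.default_rng(1)
m, d, T = 5, 2, 500                      # measurement dim, latent dim, samples

# Simulate latents with stable VAR(1) dynamics, embedded in m dimensions
# through non-orthogonal loadings P.
A_true = np.array([[0.8, 0.1], [-0.2, 0.7]])
P = rng.normal(size=(m, d))
v = np.zeros((T, d))
for t in range(1, T):
    v[t] = v[t - 1] @ A_true.T + rng.normal(scale=0.1, size=d)
Y = v @ P.T + rng.normal(scale=0.01, size=(T, m))  # noisy measurements

# Oblique extraction v_hat = (R^T P)^{-1} R^T y for a direction matrix R;
# here R = P for simplicity, whereas the paper estimates the direction.
R = P
extract = np.linalg.solve(R.T @ P, R.T)  # d x m extraction matrix
V = Y @ extract.T                        # estimated DLVs

# Fit VAR(1) on the extracted latents by least squares.
A_hat, *_ = np.linalg.lstsq(V[:-1], V[1:], rcond=None)
print(np.round(A_hat.T, 2))              # close to A_true on this toy data
```

With low measurement noise the least-squares fit on the extracted latents recovers the true VAR(1) matrix closely, which is the predictability property the projection is meant to preserve.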
Effect of an equilibrium phase transition on multiphase transport in relativistic heavy ion collisions
The hadronization scheme for parton transport in relativistic heavy ion collisions is considered in detail. It is pointed out that the traditional scheme, in which particles freeze out one by one, leads to a serious problem of unreasonably long parton lifetimes. A collective phase transition following supercooling is implemented in a simple way. It turns out that the modified model with a sudden phase transition reproduces the experimental longitudinal distributions of final-state particles better than the original one does. These encouraging results indicate that the equilibrium phase transition should be properly taken into account in parton transport models for relativistic heavy ion collisions.
Does the Dirac Cone Exist in Silicene on Metal Substrates?
Absence of the Dirac cone due to a strong band hybridization is revealed to
be a common feature for epitaxial silicene on metal substrates according to our
first-principles calculations for silicene on Ir, Cu, Mg, Au, Pt, Al, and Ag
substrates. The destroyed Dirac cone of silicene, however, can be effectively
restored with linear or parabolic dispersion by intercalating alkali metal
atoms between silicene and the metal substrates, offering an opportunity to
study the intriguing properties of silicene without further transfer of
silicene from the metal substrates.
Tuning Language Models as Training Data Generators for Augmentation-Enhanced Few-Shot Learning
Recent studies have revealed the intriguing few-shot learning ability of
pretrained language models (PLMs): They can quickly adapt to a new task when
fine-tuned on a small amount of labeled data formulated as prompts, without
requiring abundant task-specific annotations. Despite their promising
performance, most existing few-shot approaches that only learn from the small
training set still underperform fully supervised training by nontrivial
margins. In this work, we study few-shot learning with PLMs from a different
perspective: We first tune an autoregressive PLM on the few-shot samples and
then use it as a generator to synthesize a large amount of novel training
samples which augment the original training set. To encourage the generator to
produce label-discriminative samples, we train it via weighted maximum
likelihood where the weight of each token is automatically adjusted based on a
discriminative meta-learning objective. A classification PLM can then be
fine-tuned on both the few-shot and the synthetic samples with regularization
for better generalization and stability. Our approach FewGen achieves an
overall better result across seven classification tasks of the GLUE benchmark
than existing few-shot learning methods, improving no-augmentation methods by
5+ average points, and outperforming augmentation methods by 3+ average points.
Comment: Code: https://github.com/yumeng5/FewGe
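The weighted maximum-likelihood idea described above can be sketched with a per-token weighted loss. This is a hedged illustration only: FewGen learns the token weights through a discriminative meta-learning objective and trains an actual autoregressive PLM, whereas here the weights and logits are toy inputs.

```python
import numpy as np

# Sketch of token-weighted maximum likelihood (illustrative; FewGen
# adjusts these weights automatically via meta-learning). Each token
# position gets its own weight, so label-discriminative tokens can
# contribute more to the generator's training loss.
def weighted_nll(token_logits, token_ids, token_weights):
    """Weighted negative log-likelihood for one sequence.

    token_logits:  (T, V) unnormalized scores over the vocabulary
    token_ids:     (T,) gold next-token ids
    token_weights: (T,) nonnegative per-token weights
    """
    # numerically stable log-softmax over the vocabulary
    z = token_logits - token_logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    gold = log_probs[np.arange(len(token_ids)), token_ids]
    return -(token_weights * gold).sum() / token_weights.sum()

# Toy check: upweighting a poorly predicted token raises the loss.
logits = np.array([[2.0, 0.0, 0.0], [0.0, 0.0, 2.0]])
ids = np.array([0, 0])              # the second token is badly predicted
uniform = weighted_nll(logits, ids, np.array([1.0, 1.0]))
upweighted = weighted_nll(logits, ids, np.array([1.0, 3.0]))
print(uniform < upweighted)  # True
```

The check shows the mechanism: shifting weight onto particular tokens changes which positions dominate the generator's objective.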
Hierarchical Topic Mining via Joint Spherical Tree and Text Embedding
Mining a set of meaningful topics organized into a hierarchy is intuitively
appealing since topic correlations are ubiquitous in massive text corpora. To
account for potential hierarchical topic structures, hierarchical topic models
generalize flat topic models by incorporating latent topic hierarchies into
their generative modeling process. However, due to their purely unsupervised
nature, the learned topic hierarchy often deviates from users' particular needs
or interests. To guide the hierarchical topic discovery process with minimal
user supervision, we propose a new task, Hierarchical Topic Mining, which takes
a category tree described by category names only, and aims to mine a set of
representative terms for each category from a text corpus to help a user
comprehend his/her interested topics. We develop a novel joint tree and text
embedding method along with a principled optimization procedure that allows
simultaneous modeling of the category tree structure and the corpus generative
process in the spherical space for effective category-representative term
discovery. Our comprehensive experiments show that our model, named JoSH, mines
a high-quality set of hierarchical topics with high efficiency and benefits
weakly-supervised hierarchical text classification tasks.
Comment: KDD 2020 Research Track. (Code: https://github.com/yumeng5/JoSH)
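One component described above, ranking category-representative terms by directional similarity in spherical space, can be sketched as follows. This is a toy illustration with made-up vectors and terms: JoSH learns the tree and text embeddings jointly, and only the spherical ranking step is shown here.

```python
import numpy as np

# Sketch of category-representative term discovery on the unit sphere
# (illustrative; JoSH jointly embeds the category tree and corpus text,
# while here the vectors are toy inputs).
def representative_terms(term_vecs, category_vec, terms, k=2):
    # project all vectors onto the unit sphere
    T = term_vecs / np.linalg.norm(term_vecs, axis=1, keepdims=True)
    c = category_vec / np.linalg.norm(category_vec)
    sims = T @ c                          # cosine similarity on the sphere
    order = np.argsort(-sims)[:k]         # most similar terms first
    return [terms[i] for i in order]

# Hypothetical 2-d embeddings: the first axis loosely means "sports",
# the second "politics".
terms = ["striker", "goalkeeper", "senate", "ballot"]
vecs = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
print(representative_terms(vecs, np.array([1.0, 0.0]), terms))
# ['striker', 'goalkeeper']
```

Ranking by cosine similarity rather than Euclidean distance is what makes the spherical formulation natural for directional text embeddings.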
…