5,569 research outputs found

    Partitions of the polytope of Doubly Substochastic Matrices

    In this paper, we provide three different ways to partition the polytope of doubly substochastic matrices into subpolytopes: via prescribed row and column sums, via the sum of all elements, and via the sub-defect, respectively. We then characterize the extreme points of each type of convex subpolytope. The relations among the extreme points of the subpolytopes in the three partitions are also given.
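    For orientation, a minimal LaTeX sketch of the objects named in the abstract; the symbols ω_n, σ(A), and s(A), and the ceiling formula for the sub-defect, are assumptions based on common usage in this literature rather than notation taken from the paper:

```latex
% Assumed notation (not taken verbatim from the paper):
%   \omega_n  -- polytope of n x n doubly substochastic matrices
%   \sigma(A) -- sum of all entries of A
%   s(A)      -- sub-defect of A (smallest k such that A embeds in an
%                (n+k) x (n+k) doubly stochastic matrix)
\[
  \omega_n = \Bigl\{ A \in \mathbb{R}^{n \times n} \;:\; a_{ij} \ge 0,\;
             \sum_{j} a_{ij} \le 1,\; \sum_{i} a_{ij} \le 1 \Bigr\},
\]
\[
  \sigma(A) = \sum_{i,j} a_{ij}, \qquad
  s(A) = \bigl\lceil\, n - \sigma(A) \,\bigr\rceil .
\]
```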

    On the maximum of the permanent of (I-A)

    Let A be an n-by-n doubly substochastic matrix and denote by σ(A) the sum of all elements of A. In this paper we give an upper bound on the permanent of (I-A) in terms of n and σ(A).
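    For completeness, the permanent referred to here is the standard one; a minimal LaTeX statement (not specific to this paper):

```latex
% Standard definition of the permanent of an n x n matrix M = (m_{ij}):
\[
  \operatorname{per}(M) \;=\; \sum_{\tau \in S_n} \prod_{i=1}^{n} m_{i,\tau(i)},
\]
% so the quantity bounded in the abstract is \operatorname{per}(I - A),
% viewed as a function of n and \sigma(A).
```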

    Star Formation Properties in Barred Galaxies (SFB). III. Statistical Study of Bar-driven Secular Evolution using a sample of nearby barred spirals

    Stellar bars are important internal drivers of secular evolution in disk galaxies. Using a sample of nearby spiral galaxies with weak and strong bars, we explore the relationships between star formation features and stellar bars in galaxies. We find that galaxies with weak bars tend to coincide with a low central concentration of star formation activity, while those with strong bars show a large scatter in the distribution of star formation activity. We find enhanced star formation activity in the bulges of galaxies with stronger bars, although not predominantly so, consistent with previous studies. Our results suggest that different stages of the secular process and many other factors may contribute to the complexity of secular evolution. In addition, barred galaxies with intense star formation in their bars tend to have active star formation in their bulges and disks, and the bulges have higher star formation densities than the bars and disks, indicating the evolutionary effects of bars. We then derive a possible criterion to quantify the different stages of the bar-driven physical process, although future work is needed because of the uncertainties. Comment: 30 single-column pages, 9 figures, accepted for publication in A
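    As a rough illustration of the kind of comparison described above (bulge versus bar versus disk star formation densities), here is a minimal sketch; it is not the authors' pipeline, and the SFR map, region masks, and pixel scale are hypothetical inputs:

```python
# Minimal sketch (not the paper's method): compare mean star formation rate
# surface densities in bulge, bar, and disk regions of a barred galaxy.
# The SFR map, region masks, and pixel scale below are made-up toy inputs.
import numpy as np

def sfr_surface_density(sfr_map, region_mask, pixel_area_kpc2):
    """Mean SFR surface density (Msun/yr/kpc^2) inside a boolean region mask."""
    total_sfr = np.nansum(sfr_map[region_mask])              # Msun/yr in the region
    area = np.count_nonzero(region_mask) * pixel_area_kpc2   # region area in kpc^2
    return total_sfr / area if area > 0 else np.nan

# Toy example: random SFR map with three schematic region masks.
rng = np.random.default_rng(0)
sfr_map = rng.exponential(scale=1e-3, size=(200, 200))       # Msun/yr per pixel (toy)
yy, xx = np.mgrid[:200, :200]
r = np.hypot(xx - 100, yy - 100)
bulge_mask = r < 15
bar_mask = (r >= 15) & (r < 60) & (np.abs(yy - 100) < 10)
disk_mask = (r >= 15) & (r < 90) & ~bar_mask

pixel_area = 0.05 ** 2  # kpc^2 per pixel, assumed plate scale
for name, mask in [("bulge", bulge_mask), ("bar", bar_mask), ("disk", disk_mask)]:
    print(name, sfr_surface_density(sfr_map, mask, pixel_area))
```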

    Tuning Co valence state in cobalt oxyhydrate superconductor by post reduction

    We report a successful tuning of the Co valence state in a cobalt oxyhydrate superconductor via a facile post-reduction using NaOH as the reducing agent. The change in Co valence was precisely determined by measuring the volume of the released oxygen. The possible hydronium incorporation was greatly suppressed in concentrated NaOH solution, making the absolute Co valence determinable. As a result, an updated superconducting phase diagram was obtained, which shows that the superconducting transition temperature increases monotonically with increasing Co valence in a narrow range from +3.58 to +3.65. Comment: 17 pages, 5 figures and 1 table. Chem. Mat., in press
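    To make the "valence change from released oxygen volume" idea concrete, here is a minimal sketch; the 4-electrons-per-O2 bookkeeping and all numerical inputs are assumptions for illustration, not the paper's exact procedure:

```python
# Minimal sketch (assumed stoichiometry, not the paper's exact bookkeeping):
# if reduction releases O2 via 4 OH- -> O2 + 2 H2O + 4 e-, each mole of O2
# corresponds to 4 moles of electrons delivered to Co, so the average Co
# valence drops by 4 * n(O2) / n(Co).
R = 8.314  # J/(mol K), ideal gas constant

def delta_co_valence(v_o2_ml, n_co_mol, temperature_k=298.15, pressure_pa=101325.0):
    """Change in average Co valence estimated from the volume of released O2 (ideal gas)."""
    n_o2 = pressure_pa * (v_o2_ml * 1e-6) / (R * temperature_k)  # mol O2 from PV = nRT
    return -4.0 * n_o2 / n_co_mol                                # negative: Co is reduced

# Toy example with made-up numbers: 0.50 mL of O2 released from 2.0 mmol of Co.
print(delta_co_valence(v_o2_ml=0.50, n_co_mol=2.0e-3))  # ~ -0.04 valence units
```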

    Memory-Efficient Topic Modeling

    As one of the simplest probabilistic topic modeling techniques, latent Dirichlet allocation (LDA) has found many important applications in text mining, computer vision and computational biology. Recent training algorithms for LDA can be interpreted within a unified message passing framework. However, message passing requires storing previous messages, which takes a large amount of memory that grows linearly with the number of documents or the number of topics. The high memory usage is therefore often a major problem for topic modeling of massive corpora containing a large number of topics. To reduce the space complexity, we propose a novel algorithm for training LDA that does not store previous messages: tiny belief propagation (TBP). The basic idea of TBP is to relate the message passing algorithms to non-negative matrix factorization (NMF) algorithms, which absorb the message update into the message passing process and thus avoid storing previous messages. Experimental results on four large data sets confirm that TBP performs comparably well or even better than current state-of-the-art training algorithms for LDA, but with much less memory consumption. TBP can do topic modeling when massive corpora cannot fit in the computer memory, for example, extracting thematic topics from 7 GB PUBMED corpora on a common desktop computer with 2 GB memory. Comment: 20 pages, 7 figures
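    The abstract's key point is the connection between message passing for LDA and NMF. A minimal sketch of the NMF side of that connection (standard multiplicative updates, not the authors' TBP algorithm) shows why no per-document messages need to be stored: only the two factor matrices are kept in memory.

```python
# Minimal sketch of the NMF connection the abstract alludes to (standard
# multiplicative updates, NOT the authors' TBP algorithm): factor a
# document-term count matrix X into document-topic weights W and
# topic-term weights H, keeping only W and H in memory.
import numpy as np

def nmf_topics(X, n_topics, n_iters=200, eps=1e-10, seed=0):
    """Factor X (docs x terms, nonnegative) as W @ H via multiplicative updates."""
    rng = np.random.default_rng(seed)
    n_docs, n_terms = X.shape
    W = rng.random((n_docs, n_topics)) + eps   # document-topic weights
    H = rng.random((n_topics, n_terms)) + eps  # topic-term weights
    for _ in range(n_iters):
        # Multiplicative updates minimizing the Frobenius reconstruction error.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy corpus: 6 documents over 8 terms with two planted "topics".
X = np.array([
    [5, 4, 3, 0, 0, 0, 1, 0],
    [4, 5, 2, 0, 1, 0, 0, 0],
    [6, 3, 4, 0, 0, 1, 0, 0],
    [0, 0, 1, 5, 4, 3, 0, 1],
    [0, 1, 0, 4, 5, 4, 1, 0],
    [1, 0, 0, 3, 4, 5, 0, 0],
], dtype=float)
W, H = nmf_topics(X, n_topics=2)
print("top terms per topic:", np.argsort(-H, axis=1)[:, :3])
```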