
    Coupling geometry on binary bipartite networks: hypotheses testing on pattern geometry and nestedness

    Given a matrix representation of a binary bipartite network, and exploiting its permutation invariance, a coupling geometry is computed to approximate the minimum-energy macrostate of the network's system. Such a macrostate is taken to constitute the intrinsic structure of the system, so that the coupling geometry serves as the information content, or even the nonparametric minimal sufficient statistic, of the network data. Pertinent null and alternative hypotheses, such as nestedness, are then formulated with respect to this macrostate; that is, any efficient test statistic must be a function of the coupling geometry. These conceptual architectures and mechanisms are by and large still missing from the community ecology literature, which has rendered misconceptions prevalent in this research area. Here the algorithmically computed coupling geometry is shown to consist of deterministic multiscale block patterns, framed by two marginal ultrametric trees on the row and column axes, together with stochastic uniform randomness within each block at the finest scale. A series of increasingly large ensembles of matrix mimicries is then derived by conforming to the multiscale block configurations, where matrix mimicking is subject to constraints on the row- and column-sum sequences. Based on such a series of ensembles, a profile of distributions becomes a natural device for checking the validity of test statistics or structural indexes. An energy-based index is used to test whether the network data indeed contain structural geometry, and a new block-based nestedness index is proposed; its validity is checked and compared against existing indexes. A computing paradigm, called Data Mechanics, and its application to one real network data set are illustrated throughout the developments and discussions in this paper.
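
    The constrained-ensemble idea above (matrix mimicries that preserve the row- and column-sum sequences) can be sketched without the paper's Data Mechanics machinery. The following is only a minimal illustration, not the authors' algorithm: it uses the standard checkerboard-swap null model for binary matrices and a toy energy-like statistic; the names checkerboard_swap_ensemble and temperature_index are hypothetical.

    ```python
    import numpy as np

    def checkerboard_swap_ensemble(A, n_samples=200, n_swaps=2000, seed=None):
        """Sample binary matrices with the same row and column sums as A
        by repeated 2x2 checkerboard swaps (a standard fixed-marginals null)."""
        rng = np.random.default_rng(seed)
        B = A.copy()
        samples = []
        for _ in range(n_samples):
            for _ in range(n_swaps):
                r = rng.choice(B.shape[0], size=2, replace=False)
                c = rng.choice(B.shape[1], size=2, replace=False)
                sub = B[np.ix_(r, c)]
                # a swap is legal only on a checkerboard [[1,0],[0,1]] or [[0,1],[1,0]]
                if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
                    B[np.ix_(r, c)] = 1 - sub  # flip the 2x2 block; marginals unchanged
            samples.append(B.copy())
        return samples

    def temperature_index(A):
        """Toy energy-like statistic: count adjacent discordant entries after
        sorting rows and columns by their sums (NOT the paper's energy index)."""
        A = A[np.argsort(-A.sum(axis=1))][:, np.argsort(-A.sum(axis=0))]
        return np.abs(np.diff(A, axis=0)).sum() + np.abs(np.diff(A, axis=1)).sum()

    # usage: compare the observed statistic against the fixed-marginals null profile
    A = (np.random.default_rng(0).random((20, 30)) < 0.3).astype(int)
    null = np.array([temperature_index(M) for M in checkerboard_swap_ensemble(A, seed=1)])
    obs = temperature_index(A)
    p_value = (np.sum(null <= obs) + 1) / (null.size + 1)  # one-sided, direction depends on the statistic
    print(f"observed={obs}, null mean={null.mean():.1f}, p≈{p_value:.3f}")
    ```

    In the paper's framework the null ensembles would instead conform to the multiscale block configurations of the computed coupling geometry, not merely to the marginal sums; the sketch only illustrates how a profile of null distributions is used to check a candidate index.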

    A Time Series Model of Multiple Structural Changes in Level, Trend and Variance

    We consider a deterministically trending dynamic time series model in which multiple changes in level, trend and error variance are modeled explicitly, and in which the number, but not the timing, of the changes is known. Estimation of the model is made possible by the use of the Gibbs sampler. The determination of the number of structural breaks and the form of structural change is treated as a problem of model selection, and we compare the use of marginal likelihoods, posterior odds ratios and Schwarz's BIC model selection criterion to select the most appropriate model from the data. We evaluate the efficacy of the Bayesian approach using a small Monte Carlo experiment. As empirical examples, we investigate structural changes in the U.S. ex-post real interest rate and in a long time series of U.S. GDP.
    Keywords: BIC, Gibbs sampling, multiple structural changes, posterior odds ratio
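
    The model-selection step described above can be illustrated with a small sketch. This is not the paper's Gibbs-sampler estimation; it is only a BIC comparison of piecewise level-and-trend fits with regime-specific error variances under assumed break dates, and the helper names are illustrative.

    ```python
    import numpy as np

    def fit_segmented_trend(y, breaks):
        """Fit a separate level + linear trend by OLS on each regime defined by
        the (assumed known) break dates; return Gaussian log-likelihood and #params."""
        edges = [0, *breaks, len(y)]
        loglik, n_params = 0.0, 0
        for a, b in zip(edges[:-1], edges[1:]):
            t = np.arange(a, b, dtype=float)
            X = np.column_stack([np.ones(b - a), t])          # regime intercept + trend
            beta, *_ = np.linalg.lstsq(X, y[a:b], rcond=None)
            resid = y[a:b] - X @ beta
            sigma2 = resid @ resid / (b - a)                  # regime-specific error variance
            loglik += -0.5 * (b - a) * (np.log(2 * np.pi * sigma2) + 1)
            n_params += 3                                     # level, slope, variance per regime
        return loglik, n_params

    def bic(y, breaks):
        loglik, k = fit_segmented_trend(y, breaks)
        return -2 * loglik + k * np.log(len(y))

    # usage: compare no break vs. one break at a candidate date (lower BIC preferred)
    rng = np.random.default_rng(1)
    y = np.concatenate([0.5 * np.arange(100), 60 + 0.1 * np.arange(100)]) + rng.normal(0, 2, 200)
    print("BIC, no break  :", round(bic(y, []), 1))
    print("BIC, break @100:", round(bic(y, [100]), 1))
    ```

    In the paper, the regime parameters are instead estimated with the Gibbs sampler and the number and form of the changes are chosen by comparing marginal likelihoods and posterior odds ratios alongside BIC.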

    Maximal inequality of Stochastic convolution driven by compensated Poisson random measures in Banach spaces

    Let $(E, \|\cdot\|)$ be a Banach space such that, for some $q \geq 2$, the function $x \mapsto \|x\|^q$ is of class $C^2$ and its first and second Fréchet derivatives are bounded by constant multiples of the $(q-1)$-th and $(q-2)$-th powers of the norm, respectively, and let $S$ be a $C_0$-semigroup of contraction type on $(E, \|\cdot\|)$. We consider the stochastic convolution process
    \begin{align*}
    u(t) = \int_0^t \int_Z S(t-s)\,\xi(s,z)\,\tilde{N}(\mathrm{d}s, \mathrm{d}z), \qquad t \geq 0,
    \end{align*}
    where $\tilde{N}$ is a compensated Poisson random measure on a measurable space $(Z, \mathcal{Z})$ and $\xi : [0,\infty) \times \Omega \times Z \to E$ is an $\mathbb{F} \otimes \mathcal{Z}$-predictable function. We prove that there exists a càdlàg modification $\tilde{u}$ of the process $u$ which satisfies the maximal inequality
    \begin{align*}
    \mathbb{E} \sup_{0 \leq s \leq t} \|\tilde{u}(s)\|^{q'} \leq C\, \mathbb{E}\left( \int_0^t \int_Z \|\xi(s,z)\|^{p}\, N(\mathrm{d}s, \mathrm{d}z) \right)^{\frac{q'}{p}},
    \end{align*}
    for all $q' \geq q$ and $1 < p \leq 2$, with $C = C(q,p)$.
    Comment: This version is only very slightly updated as compared to the one from September 201

    Deep Anchored Convolutional Neural Networks

    Convolutional Neural Networks (CNNs) have proven extremely successful at solving computer vision tasks. State-of-the-art methods favor such deep network architectures for their accuracy, at the cost of a massive number of parameters and high weight redundancy. Previous works have studied how to prune such CNN weights. In this paper, we go to the other extreme and analyze the performance of a network stacked with a single convolution kernel shared across layers, as well as other weight-sharing techniques. We name it the Deep Anchored Convolutional Neural Network (DACNN). Sharing the same kernel weights across layers reduces the model size tremendously; more precisely, the network is compressed in memory by a factor of L, where L is the desired depth of the network, disregarding the fully connected layer used for prediction. The number of parameters in a DACNN barely increases as the network grows deeper, which allows us to build deep DACNNs without any concern about memory costs. We also introduce a partially shared-weights network (DACNN-mix), as well as an easy plug-in module, coined regulators, to boost the performance of our architecture. We validated our idea on three datasets: CIFAR-10, CIFAR-100 and SVHN. Our results show that we can save massive amounts of memory with our model while maintaining high accuracy.
    Comment: This paper is accepted to the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
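
    A minimal sketch of the single shared-kernel idea described above, assuming a plain PyTorch setup. It is not the authors' DACNN implementation; the class name, the per-layer BatchNorm, and the hyperparameters are illustrative assumptions. It only shows why the parameter count is nearly independent of depth L.

    ```python
    import torch
    import torch.nn as nn

    class SharedKernelNet(nn.Module):
        """One convolution kernel reused at every depth, so the conv parameter
        count does not grow with L (only the stem, norms and head add weights)."""
        def __init__(self, channels=64, depth=8, num_classes=10):
            super().__init__()
            self.stem = nn.Conv2d(3, channels, kernel_size=3, padding=1)
            # a single "anchored" convolution, reused `depth` times
            self.anchor = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.bn = nn.ModuleList(nn.BatchNorm2d(channels) for _ in range(depth))
            self.depth = depth
            self.head = nn.Linear(channels, num_classes)

        def forward(self, x):
            x = torch.relu(self.stem(x))
            for i in range(self.depth):
                # the same self.anchor weights are applied at every layer;
                # per-layer BatchNorm gives each depth its own cheap statistics
                x = torch.relu(self.bn[i](self.anchor(x)))
            x = x.mean(dim=(2, 3))            # global average pooling
            return self.head(x)

    # usage: the parameter count barely grows with depth
    for d in (4, 16):
        n = sum(p.numel() for p in SharedKernelNet(depth=d).parameters())
        print(f"depth={d:2d}  params={n:,}")
    ```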