
    From the Hartree equation to the Vlasov-Poisson system: strong convergence for a class of mixed states

    We consider the evolution of $N$ fermions interacting through a Coulomb or gravitational potential in the mean-field limit, as governed by the nonlinear Hartree equation with Coulomb or gravitational interaction. In the limit of large $N$, we study the convergence in trace norm towards the classical Vlasov-Poisson equation for a special class of mixed quasi-free states. Comment: 21 pages. Typos corrected, references updated, and detailed proof of Lemma 2.4 added.
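
    For orientation (added here, not quoted from the paper, and with normalizations that may differ from the authors'), a standard way to write the two equations in the fermionic mean-field/semiclassical scaling is

    $i\hbar\,\partial_t \omega_{N,t} = \big[-\hbar^2\Delta + V*\rho_t,\ \omega_{N,t}\big], \qquad \rho_t(x) = \tfrac{1}{N}\,\omega_{N,t}(x;x), \qquad \hbar = N^{-1/3},$

    for the one-particle density matrix $\omega_{N,t}$, and

    $\partial_t f_t + v\cdot\nabla_x f_t - \nabla(V*\varrho_t)\cdot\nabla_v f_t = 0, \qquad \varrho_t(x) = \int f_t(x,v)\,dv,$

    for the Vlasov-Poisson phase-space density $f_t$, with $V(x) = \pm 1/|x|$ covering the Coulomb and gravitational cases.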

    On the stability of the generalized, finite deformation correspondence model of peridynamics

    A class of peridynamic material models known as constitutive correspondence models provides a bridge between classical continuum mechanics and peridynamics. These models are useful because they allow well-established local constitutive theories to be used within the nonlocal framework of peridynamics. A recent finite deformation correspondence theory (Foster and Xu, 2018) was developed and reported to improve stability properties of the original correspondence model (Silling et al., 2007). This paper presents a stability analysis that indicates the reported advantages of the new theory were overestimated. Homogeneous deformations are analyzed and shown to exhibit unstable material behavior at the continuum level. Additionally, the effects of a particle discretization on the stability of the model are reported. Numerical examples demonstrate the large errors induced by the unstable behavior. Stabilization strategies and practical applications of the new finite deformation model are discussed.

    Branch-depth: Generalizing tree-depth of graphs

    We present a concept called the branch-depth of a connectivity function that generalizes the tree-depth of graphs. Then we prove two theorems showing that this concept aligns closely with the notions of tree-depth and shrub-depth of graphs as follows. For a graph $G = (V,E)$ and a subset $A$ of $E$, we let $\lambda_G(A)$ be the number of vertices incident with an edge in $A$ and an edge in $E \setminus A$. For a subset $X$ of $V$, let $\rho_G(X)$ be the rank of the adjacency matrix between $X$ and $V \setminus X$ over the binary field. We prove that a class of graphs has bounded tree-depth if and only if the corresponding class of functions $\lambda_G$ has bounded branch-depth, and similarly a class of graphs has bounded shrub-depth if and only if the corresponding class of functions $\rho_G$ has bounded branch-depth, which we call the rank-depth of graphs. Furthermore, we investigate various potential generalizations of tree-depth to matroids and prove that matroids representable over a fixed finite field having no large circuits are well-quasi-ordered by the restriction. Comment: 34 pages, 2 figures.
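
    The two connectivity functions defined in the abstract are easy to compute directly. The following is a small illustrative sketch (not from the paper) that evaluates $\lambda_G(A)$ and the cut-rank $\rho_G(X)$ on a 4-cycle; the graph and the chosen subsets are arbitrary examples.

```python
# Illustrative sketch: evaluating lambda_G(A) and rho_G(X) for a small graph.

def lambda_G(edges, A):
    """Number of vertices incident with an edge in A and an edge in E \\ A."""
    A = set(A)
    rest = set(edges) - A
    touched_by_A = {v for e in A for v in e}
    touched_by_rest = {v for e in rest for v in e}
    return len(touched_by_A & touched_by_rest)

def rho_G(vertices, edges, X):
    """Cut-rank: rank over GF(2) of the adjacency matrix between X and V \\ X."""
    X = list(X)
    Y = [v for v in vertices if v not in X]
    adj = {frozenset(e) for e in edges}
    # Rows indexed by X, columns by Y, entries in GF(2).
    rows = [[1 if frozenset((x, y)) in adj else 0 for y in Y] for x in X]
    # Gaussian elimination over GF(2).
    rank = 0
    for col in range(len(Y)):
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank

# Example: a 4-cycle on vertices 0..3.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(lambda_G(E, [(0, 1), (1, 2)]))  # vertices 0 and 2 lie on the boundary -> 2
print(rho_G(V, E, [0, 1]))            # cut-rank of {0, 1} against {2, 3} -> 2
```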

    Collective frequency variation in network synchronization and reverse PageRank

    A wide range of natural and engineered phenomena rely on large networks of interacting units to reach a dynamical consensus state where the system collectively operates. Here we study the dynamics of self-organizing systems and show that for generic directed networks the collective frequency of the ensemble is not the same as the mean of the individuals' natural frequencies. Specifically, we show that the collective frequency equals a weighted average of the natural frequencies, where the weights are given by an out-flow centrality measure that is equivalent to a reverse PageRank centrality. Our findings uncover an intricate dependence of the collective frequency on both the structural directedness and dynamical heterogeneity of the network, and also reveal an unexplored connection between synchronization and PageRank, which opens the possibility of applying PageRank optimization to synchronization. Finally, we demonstrate the presence of collective frequency variation in real-world networks by considering the UK and Scandinavian power grids.
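
    A minimal numerical sketch of the phenomenon (illustrative only, not the paper's code or network): for a linear consensus model $\dot{x} = \omega - Lx$ on a directed graph, every node drifts at a collective frequency equal to a weighted average of the natural frequencies, with weights given by the left null vector of the Laplacian, which the paper identifies with a reverse-PageRank-style out-flow centrality.

```python
# Collective frequency of a directed consensus network as a weighted average
# of natural frequencies (weights = left null vector of the Laplacian).

import numpy as np

rng = np.random.default_rng(0)

# Small strongly connected directed network; A[i, j] = 1 if node j influences node i.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A       # Laplacian of the coupling
omega = rng.normal(size=4)           # heterogeneous natural frequencies

# Left null vector u (u^T L = 0) supplies the weights of the collective frequency.
eigvals, eigvecs = np.linalg.eig(L.T)
u = np.real(eigvecs[:, np.argmin(np.abs(eigvals))])
u = u / u.sum()

print("collective frequency:", u @ omega)      # weighted average
print("plain mean of omegas:", omega.mean())   # generally different on directed graphs

# Sanity check: after transients, every node drifts at the collective frequency.
x = np.zeros(4)
dt, T = 1e-3, 200.0
steps = int(T / dt)
for k in range(steps):
    x = x + dt * (omega - L @ x)
    if k == steps // 2:
        x_mid = x.copy()
print("empirical node drifts:", (x - x_mid) / (T / 2))
```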

    Coalgebra Learning via Duality

    Automata learning is a popular technique for inferring minimal automata through membership and equivalence queries. In this paper, we generalise learning to the theory of coalgebras. The approach relies on the use of logical formulas as tests, based on a dual adjunction between states and logical theories. This allows us to learn, e.g., labelled transition systems, using Hennessy-Milner logic. Our main contribution is an abstract learning algorithm, together with a proof of correctness and termination.

    Counting Rules of Nambu-Goldstone Modes

    When global continuous symmetries are spontaneously broken, there appear gapless collective excitations called Nambu-Goldstone modes (NGMs) that govern the low-energy properties of the system. The application of this famous theorem ranges from high-energy particle physics to condensed matter and atomic physics. When symmetry breaking occurs in systems that lack Lorentz invariance to start with, as is usually the case in condensed matter systems, the number of resulting NGMs can be fewer than the number of broken symmetry generators, and the dispersion of NGMs is not necessarily linear. In this article, we review recently established formulas for NGMs associated with broken internal symmetries that work equally for relativistic and nonrelativistic systems. We also discuss complexities of NGMs originating from space-time symmetry breaking. In the process we cover many illuminating examples from various contexts. We also present a complementary point of view from the Lieb-Schultz-Mattis theorem. Comment: 14 pages, 1 figure. Invited review for the Annual Review of Condensed Matter Physics; title changed.
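
    For orientation (added here, not quoted from the article), the widely used counting rule for broken internal symmetries, in the form established by Watanabe-Murayama and Hidaka, reads

    $n_{\mathrm{A}} + 2\,n_{\mathrm{B}} = n_{\mathrm{BS}}, \qquad n_{\mathrm{B}} = \tfrac{1}{2}\operatorname{rank}\rho, \qquad \rho_{ab} = -\frac{i}{V}\,\langle 0|[Q_a, Q_b]|0\rangle,$

    where $n_{\mathrm{BS}}$ is the number of broken symmetry generators $Q_a$, $V$ is the volume, $n_{\mathrm{A}}$ counts type-A (typically linearly dispersing) modes, and $n_{\mathrm{B}}$ counts type-B (typically quadratically dispersing) modes, so the total number of NGMs is $n_{\mathrm{A}} + n_{\mathrm{B}} = n_{\mathrm{BS}} - \tfrac{1}{2}\operatorname{rank}\rho$.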

    Further Theoretical Analysis on the $K^{-}\,{}^{3}\mathrm{He} \to \Lambda p n$ Reaction for the $\bar{K}NN$ Bound-State Search in the J-PARC E15 Experiment

    Based on the scenario that a $\bar{K}NN$ bound state is generated and eventually decays into $\Lambda p$, we calculate the cross section of the $K^{-}\,{}^{3}\mathrm{He} \to \Lambda p n$ reaction, which was recently measured in the J-PARC E15 experiment. We find that the behavior of the calculated differential cross section $d^{2}\sigma / dM_{\Lambda p}\,dq_{\Lambda p}$, where $M_{\Lambda p}$ and $q_{\Lambda p}$ are the $\Lambda p$ invariant mass and the momentum transfer in the $(K^{-}, n)$ reaction in the laboratory frame, respectively, is consistent with the experiment. Furthermore, we can reproduce almost quantitatively the experimental data of the $\Lambda p$ invariant mass spectrum in the momentum-transfer window $350~\mathrm{MeV}/c < q_{\Lambda p} < 650~\mathrm{MeV}/c$. These facts strongly suggest that the $\bar{K}NN$ bound state was indeed generated in the J-PARC E15 experiment. Comment: 4 pages, 4 EPS figures; talk given at the 8th International Conference on Quarks and Nuclear Physics (QNP2018), Tsukuba, Japan, 13-17 November 2018.

    Explain3D: Explaining Disagreements in Disjoint Datasets

    Data plays an important role in applications, analytic processes, and many aspects of human activity. As data grows in size and complexity, we are met with an imperative need for tools that promote understanding and explanations over data-related operations. Data management research on explanations has focused on the assumption that data resides in a single dataset, under one common schema. But the reality of today's data is that it is frequently un-integrated, coming from different sources with different schemas. When different datasets provide different answers to semantically similar questions, understanding the reasons for the discrepancies is challenging and cannot be handled by the existing single-dataset solutions. In this paper, we propose Explain3D, a framework for explaining the disagreements across disjoint datasets (3D). Explain3D focuses on identifying the reasons for the differences in the results of two semantically similar queries operating on two datasets with potentially different schemas. Our framework leverages the queries to perform a semantic mapping across the relevant parts of their provenance; discrepancies in this mapping point to causes of the queries' differences. Exploiting the queries gives Explain3D an edge over traditional schema matching and record linkage techniques, which are query-agnostic. Our work makes the following contributions: (1) We formalize the problem of deriving optimal explanations for the differences of the results of semantically similar queries over disjoint datasets. (2) We design a 3-stage framework for solving the optimal explanation problem. (3) We develop a smart-partitioning optimizer that improves the efficiency of the framework by orders of magnitude. (4) We experiment with real-world and synthetic data to demonstrate that Explain3D can derive precise explanations efficiently.
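
    A purely illustrative sketch of the problem setting (not the Explain3D algorithm; table names, schemas, and queries are made up): two datasets with different schemas answer a semantically similar question, disagree on one group, and a naive provenance lookup lists the source rows behind the disagreeing answer.

```python
# Two semantically similar aggregate queries over datasets with different
# schemas, plus a naive look at the source rows behind a disagreement.

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales_a(region TEXT, amount REAL);
INSERT INTO sales_a VALUES ('north', 10), ('north', 5), ('south', 7);

CREATE TABLE sales_b(city TEXT, region_name TEXT, revenue REAL);
INSERT INTO sales_b VALUES ('oslo', 'north', 10), ('rome', 'south', 7);
""")

q_a = "SELECT region, SUM(amount) AS total FROM sales_a GROUP BY region"
q_b = "SELECT region_name AS region, SUM(revenue) AS total FROM sales_b GROUP BY region_name"

res_a = dict(con.execute(q_a).fetchall())
res_b = dict(con.execute(q_b).fetchall())

for region in sorted(set(res_a) | set(res_b)):
    if res_a.get(region) != res_b.get(region):
        print(f"disagreement on {region!r}: {res_a.get(region)} vs {res_b.get(region)}")
        # Naive 'provenance': the source rows contributing to each answer.
        print("  rows in sales_a:", con.execute(
            "SELECT * FROM sales_a WHERE region = ?", (region,)).fetchall())
        print("  rows in sales_b:", con.execute(
            "SELECT * FROM sales_b WHERE region_name = ?", (region,)).fetchall())
```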

    Robust globally divergence-free weak Galerkin finite element methods for natural convection problems

    This paper proposes and analyzes a class of weak Galerkin (WG) finite element methods for stationary natural convection problems in two and three dimensions. We use piecewise polynomials of degrees $k$, $k-1$, and $k$ (with $k \ge 1$) for the velocity, pressure, and temperature approximations in the interior of elements, respectively, and piecewise polynomials of degrees $l$, $k$, and $l$ (with $l = k-1$ or $l = k$) for the numerical traces of velocity, pressure, and temperature on the interfaces of elements. The methods yield globally divergence-free velocity solutions. Well-posedness of the discrete scheme is established, optimal a priori error estimates are derived, and an unconditionally convergent iteration algorithm is presented. Numerical experiments confirm the theoretical results and show the robustness of the methods with respect to the Rayleigh number. Comment: 32 pages, 13 figures.
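
    One way to write the discrete spaces implied by these polynomial degrees (a notational sketch added for orientation; the precise definitions, including the weak differential operators, are given in the paper) is

    $V_h = \{\, v = \{v_0, v_b\} : v_0|_T \in [P_k(T)]^d,\ v_b|_e \in [P_l(e)]^d \,\}$ for the velocity,
    $Q_h = \{\, q = \{q_0, q_b\} : q_0|_T \in P_{k-1}(T),\ q_b|_e \in P_k(e) \,\}$ for the pressure, and
    $W_h = \{\, w = \{w_0, w_b\} : w_0|_T \in P_k(T),\ w_b|_e \in P_l(e) \,\}$ for the temperature,

    where $T$ ranges over mesh elements, $e$ over element interfaces, $d \in \{2, 3\}$, and $l = k-1$ or $k$.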

    A proof of convergence of multi-class logistic regression network

    This paper revisits a special type of neural network known under two names. In the statistics and machine learning community it is known as a multi-class logistic regression neural network; in the neural network community, it is simply the soft-max layer. Its importance is underscored by its role in deep learning: it serves as the last layer, whose output is the classification of the input patterns, such as images. Our exposition focuses on a mathematically rigorous derivation of the key equation expressing the gradient. A fringe benefit of our approach is a fully vectorized expression, which is the basis of an efficient implementation. The second result of this paper is the positivity of the second derivative of the cross-entropy loss function as a function of the weights. This result proves that optimization methods based on convexity may be used to train this network. As a corollary, we demonstrate that no $L^2$-regularizer is needed to guarantee convergence of gradient descent.
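
    A minimal NumPy sketch (not taken from the paper) of the standard fully vectorized gradient for a soft-max layer with cross-entropy loss, $\nabla_W = X^{\top}(\mathrm{softmax}(XW) - Y)/N$, trained by plain gradient descent with no $L^2$-regularizer; the synthetic data and step size are illustrative.

```python
# Vectorized cross-entropy gradient for multi-class logistic regression (soft-max).

import numpy as np

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)        # numerical stability
    expZ = np.exp(Z)
    return expZ / expZ.sum(axis=1, keepdims=True)

def loss_and_grad(W, X, Y):
    """X is N x d, Y is N x c one-hot, W is d x c."""
    P = softmax(X @ W)
    loss = -np.mean(np.sum(Y * np.log(P + 1e-12), axis=1))
    grad = X.T @ (P - Y) / X.shape[0]           # fully vectorized gradient
    return loss, grad

# Tiny synthetic example.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_W = rng.normal(size=(5, 3))
labels = np.argmax(X @ true_W + 0.1 * rng.normal(size=(200, 3)), axis=1)
Y = np.eye(3)[labels]

W = np.zeros((5, 3))
for step in range(500):
    loss, grad = loss_and_grad(W, X, Y)
    W -= 0.5 * grad                              # convex problem: plain GD converges
print("final cross-entropy loss:", loss)
```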