
    Do Neural Nets Learn Statistical Laws behind Natural Language?

    The performance of deep learning in natural language processing has been spectacular, but the reasons for this success remain unclear because of the inherent complexity of deep learning. This paper provides empirical evidence of both the effectiveness and a limitation of neural networks for language engineering. Specifically, we demonstrate that a neural language model based on long short-term memory (LSTM) effectively reproduces Zipf's law and Heaps' law, two representative statistical properties underlying natural language. We discuss the quality of reproducibility and the emergence of Zipf's law and Heaps' law as training progresses. We also point out that the neural language model has a limitation in reproducing long-range correlation, another statistical property of natural language. This understanding could provide a direction for improving the architectures of neural networks.
    Comment: 21 pages, 11 figures
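    Both laws are simple to state and to measure: Zipf's law says the frequency of the r-th most frequent word decays roughly as r^(-1), while Heaps' law says the vocabulary size grows sublinearly with text length, V(n) ∝ n^β with β < 1. As a minimal sketch (not code from the paper), both curves can be extracted from any token sequence like this:

```python
# Minimal sketch (not from the paper): measuring the Zipf and Heaps curves
# of a token sequence; `tokens` is any list of word strings or ids.
from collections import Counter

def zipf_curve(tokens):
    """(rank, frequency) pairs; Zipf's law predicts freq ~ rank**(-1)."""
    counts = sorted(Counter(tokens).values(), reverse=True)
    return list(enumerate(counts, start=1))

def heaps_curve(tokens, step=1000):
    """(n, V(n)) pairs; Heaps' law predicts V(n) ~ n**beta with beta < 1."""
    seen, points = set(), []
    for n, tok in enumerate(tokens, start=1):
        seen.add(tok)
        if n % step == 0:
            points.append((n, len(seen)))
    return points
```

    Plotting either curve on log-log axes and checking for a straight line is the usual diagnostic; the paper applies this kind of comparison between natural text and text generated by the LSTM language model.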

    Long-Range Correlation Underlying Childhood Language and Generative Models

    Long-range correlation, a property of time series exhibiting long-term memory, is mainly studied in the statistical physics domain and has been reported to exist in natural language. Using a state-of-the-art method for such analysis, long-range correlation is first shown to occur in long CHILDES data sets. To understand why, Bayesian generative models of language, originally proposed in the cognitive science domain, are investigated. Among representative models, the Simon model was found to exhibit surprisingly good long-range correlation, but the Pitman-Yor model was not. Since the Simon model is known not to correctly reflect the vocabulary growth of natural language, a simple new model is devised as a conjunct of the Simon and Pitman-Yor models, such that long-range correlation holds with a correct vocabulary growth rate. The investigation overall suggests that uniform sampling is one cause of long-range correlation and could thus be related to actual linguistic processes.
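    For concreteness, here is a minimal sketch of the textbook Simon process (an assumed standard form, not the paper's exact implementation): at each step a brand-new word appears with probability a, and otherwise a token is drawn uniformly from the sequence generated so far, which is equivalent to choosing a word in proportion to its past frequency.

```python
# Minimal sketch of the Simon generative process (textbook version, assumed).
import random

def simon_model(n_tokens, a=0.1, seed=0):
    rng = random.Random(seed)
    seq, next_word = [0], 1          # words are integer ids; start with one word
    while len(seq) < n_tokens:
        if rng.random() < a:
            seq.append(next_word)    # innovate: introduce a brand-new word
            next_word += 1
        else:
            seq.append(rng.choice(seq))  # uniform draw from the past sequence
    return seq
```

    The uniform draw from the past is exactly the "uniform sampling" the abstract points to as a source of long-range correlation; the cost is that the number of distinct words grows linearly (at rate a) rather than sublinearly, which is the incorrect vocabulary growth the conjunct model repairs.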

    PSBS: Practical Size-Based Scheduling

    Size-based schedulers have very desirable performance properties: optimal or near-optimal response time can be coupled with strong fairness guarantees. Despite this, such schedulers are rarely implemented in practical settings, because they require knowing a priori the amount of work needed to complete each job, an assumption that is very difficult to satisfy in concrete systems. It is far more feasible to provide the system with an estimate of job sizes, but existing studies report rather pessimistic results when existing scheduling policies are driven by imprecise job size estimates. We therefore set out to design scheduling policies that explicitly account for inexact job sizes. First, we show that existing size-based schedulers can perform badly with inexact job size information when job sizes are heavily skewed; we show that this issue, and the pessimistic results in the literature, stem from problematic behavior when large jobs are underestimated. Once the problem is identified, existing size-based schedulers can be amended to solve it. We generalize FSP, a fair and efficient size-based scheduling policy, to solve the problem highlighted above; in addition, our solution handles different job weights (which can be assigned to a job independently of its size). We provide an efficient implementation of the resulting policy, which we call Practical Size-Based Scheduler (PSBS). Through simulations on synthetic and real workloads, we show that PSBS has near-optimal performance in a large variety of cases with inaccurate size information, that it performs fairly, and that it handles job weights correctly. We believe this work shows that PSBS is indeed practical, and we maintain that it could inspire the design of schedulers in a wide array of real-world use cases.
    Comment: arXiv admin note: substantial text overlap with arXiv:1403.599
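    The failure mode described above is easy to reproduce with a plain shortest-remaining-processing-time scheduler driven by estimates (a sketch of the baseline problem, not of the PSBS policy): a large job whose size is underestimated reaches an estimated remaining size of zero, keeps the highest priority, and blocks everything queued behind it.

```python
# Minimal sketch (baseline estimated-SRPT, not PSBS): jobs are tuples
# (arrival, true_size, est_size); the queue is ordered by *estimated*
# remaining size, so an underestimated job can pin itself at the head.
import heapq

def srpt_with_estimates(jobs, dt=0.01):
    pending = sorted(jobs)               # by arrival time
    queue, t, done, i = [], 0.0, {}, 0
    while i < len(pending) or queue:
        while i < len(pending) and pending[i][0] <= t:
            arrival, true_sz, est_sz = pending[i]
            heapq.heappush(queue, (est_sz, i, true_sz))
            i += 1
        if not queue:                    # idle until the next arrival
            t = pending[i][0]
            continue
        est_rem, jid, true_rem = heapq.heappop(queue)
        run = min(dt, true_rem)          # serve the head job for one quantum
        t += run
        true_rem -= run
        if true_rem <= 1e-12:
            done[jid] = t                # record completion time
        else:
            heapq.heappush(queue, (est_rem - run, jid, true_rem))
    return done
```

    With `srpt_with_estimates([(0, 100, 1), (1, 1, 1)])` the short job finishes only after the underestimated 100-unit job, around time 101 instead of time 2; per the abstract, PSBS amends FSP precisely so that such underestimates cannot starve the rest of the queue.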

    Abacus models for parabolic quotients of affine Weyl groups

    We introduce abacus diagrams that describe minimal length coset representatives in affine Weyl groups of types B, C, and D. These abacus diagrams use a realization of the affine Weyl group of type C due to Eriksson to generalize a construction of James for the symmetric group. We also describe several combinatorial models for these parabolic quotients that generalize classical results in affine type A related to core partitions.
    Comment: 28 pages. To appear, Journal of Algebra. Version 2: Updated with referee's comments
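    As background, a hedged sketch of the classical James abacus in type A that the paper generalizes (the type B/C/D diagrams themselves are the paper's contribution and are not reproduced here): a partition with at most k parts is encoded by its beta-numbers beta_i = lambda_i + k - i, and placing a bead at each beta-number distributes the beads over r runners according to their residues mod r.

```python
# Minimal sketch of James's classical type-A abacus (background only).
def beta_numbers(partition, k=None):
    """beta_i = lambda_i + (k - i): first-column hook lengths, padded to k parts."""
    k = k or len(partition)
    parts = list(partition) + [0] * (k - len(partition))
    return [parts[i] + (k - 1 - i) for i in range(k)]

def abacus(partition, runners):
    """Bead for beta-number b sits on runner b % runners, in row b // runners."""
    layout = {r: [] for r in range(runners)}
    for b in beta_numbers(partition):
        layout[b % runners].append(b // runners)
    return layout
```

    Moving a bead up one row on its runner removes an r-rim-hook from the partition, and a partition is an r-core exactly when no bead can move; this is the connection to core partitions mentioned in the abstract.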

    Q-systems, Heaps, Paths and Cluster Positivity

    We consider the cluster algebra associated to the Q-system for A_r as a tool for relating Q-system solutions to all possible sets of initial data. We show that the conserved quantities of the Q-system are partition functions for hard particles on particular weighted target graphs, where the weights are determined by the choice of initial data. This allows us to interpret the simplest solutions of the Q-system as generating functions for Viennot's heaps on these target graphs, and equivalently as generating functions of weighted paths on suitable dual target graphs. The generating functions take the form of finite continued fractions. In this setting, the cluster mutations correspond to local rearrangements of the fractions which leave their final value unchanged. Finally, the general solutions of the Q-system are interpreted as partition functions for strongly non-intersecting families of lattice paths on target lattices. This expresses all cluster variables as manifestly positive Laurent polynomials in any initial data, thus proving the cluster positivity conjecture for the A_r Q-system. We also give an alternative formulation in terms of domino tilings of deformed Aztec diamonds with defects.
    Comment: 106 pages, 38 figures
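    A hedged sketch of the combinatorial object behind these conserved quantities (the specific target graphs and weights, which depend on the initial data, are the paper's content): a hard-particle partition function on a weighted graph is a sum over independent sets, with each occupied vertex contributing its weight.

```python
# Minimal sketch: hard particles on a weighted graph = weighted independent
# sets (no two occupied vertices adjacent). Brute force, for illustration only.
from itertools import combinations

def hard_particle_Z(vertices, edges, weight):
    """Sum over independent sets S of prod(weight[v] for v in S)."""
    adj = {frozenset(e) for e in edges}
    total = 0.0
    for k in range(len(vertices) + 1):
        for S in combinations(vertices, k):
            if all(frozenset(p) not in adj for p in combinations(S, 2)):
                prod = 1.0
                for v in S:
                    prod *= weight[v]
                total += prod
    return total
```

    For a single-edge graph with weights w1 and w2 this returns 1 + w1 + w2: the empty configuration plus the two one-particle configurations, with the doubly occupied state excluded by the hard-particle constraint.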

    Continuum modelling and simulation of granular flows through their many phases

    We propose and numerically implement a constitutive framework for granular media that allows the material to pass through its many common phases during the flow process. When dense, the material is treated as a pressure-sensitive elasto-viscoplastic solid obeying a yield criterion and a plastic flow rule given by the μ(I) inertial rheology of granular materials. When the free volume exceeds a critical level, the material is deemed to separate and is treated as disconnected, stress-free media. A Material Point Method (MPM) procedure is written for the simulation of this model, and many demonstrations are provided in different geometries. The MPM framework makes the extremely large strains and nonlinear deformations that are common in granular flows representable. The method is verified numerically and its physical predictions are validated against known results.
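    For reference, a hedged sketch of the standard μ(I) law of Jop, Forterre, and Pouliquen that such flow rules refer to, together with a simple dense/disconnected switch mirroring the critical-free-volume criterion; the constants are common illustrative values, not the paper's calibration.

```python
# Minimal sketch of the mu(I) inertial rheology (standard form; constants are
# illustrative, not the paper's calibration).
import math

def inertial_number(shear_rate, grain_diameter, pressure, grain_density):
    """I = gamma_dot * d / sqrt(p / rho_s)."""
    return shear_rate * grain_diameter / math.sqrt(pressure / grain_density)

def mu_of_I(I, mu_s=0.38, mu_2=0.64, I_0=0.28):
    """Friction coefficient interpolating from mu_s (quasi-static) to mu_2 (rapid flow)."""
    if I <= 0.0:
        return mu_s
    return mu_s + (mu_2 - mu_s) / (I_0 / I + 1.0)

def is_dense(free_volume, critical_free_volume):
    """Below the critical free volume: elasto-viscoplastic solid; above: stress-free grains."""
    return free_volume < critical_free_volume
```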