
    How Scale Affects Structure in Java Programs

    Many internal software metrics and external quality attributes of Java programs correlate strongly with program size. This knowledge has been used pervasively in quantitative studies of software through practices such as normalization on size metrics. This paper reports size-related super- and sublinear effects that have not been known before. Findings obtained on a very large collection of Java programs -- 30,911 projects hosted at Google Code as of Summer 2011 -- unveil how certain characteristics of programs vary disproportionately with program size, sometimes even non-monotonically. Many of the specific parameters of these nonlinear relations are reported. This result gives further insight into the differences between "programming in the small" and "programming in the large." The reported findings carry important consequences for OO software metrics, and for software research in general: metrics that have been known to correlate with size can now be properly normalized so that all the information that is left in them is size-independent. Comment: ACM Conference on Object-Oriented Programming, Systems, Languages and Applications (OOPSLA), October 2015. Preprint.
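    The practice the paper builds on -- fitting and removing the size dependence of a metric before comparing projects -- can be sketched in a few lines. A minimal illustration, assuming a simple power-law model and made-up size/metric values rather than anything measured in the study:

```python
import numpy as np

# Hypothetical per-project data: size (e.g. number of classes) and a raw metric
# (e.g. total coupling). Values are illustrative only, not from the paper's corpus.
size = np.array([10, 50, 200, 1000, 5000], dtype=float)
metric = np.array([30, 180, 900, 5200, 31000], dtype=float)

# Fit a power law  metric ~ a * size**b  in log-log space.
b, log_a = np.polyfit(np.log(size), np.log(metric), 1)

# Size-normalized residuals: what remains after the fitted size trend is removed.
residuals = np.log(metric) - (log_a + b * np.log(size))
print(f"exponent b = {b:.2f}")
print("size-independent residuals:", np.round(residuals, 3))
```

    An exponent b above (below) 1 would correspond to the superlinear (sublinear) effects the paper reports; dividing by size alone is only appropriate when b is close to 1.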

    Depinning with dynamic stress overshoots: A hybrid of critical and pseudohysteretic behavior

    A model of an elastic manifold driven through a random medium by an applied force F is studied, focusing on the effects of inertia and elastic waves, in particular stress overshoots, in which motion of one segment of the manifold causes a temporary stress on its neighboring segments in addition to the static stress. Such stress overshoots decrease the critical force for depinning and make the depinning transition hysteretic. We find that the steady-state velocity of the moving phase is nevertheless history independent, and that the critical behavior as the force is decreased is in the same universality class as in the absence of stress overshoots: the dissipative limit, which has been studied analytically. To reach this conclusion, finite-size scaling analyses of a variety of quantities have been supplemented by heuristic arguments. If the force is increased slowly from zero, the spectrum of avalanche sizes that occurs appears to be quite different from the dissipative limit. After the system is stopped from the moving phase, restarting involves both fractal and bubble-like nucleation. Hysteresis loops can be understood in terms of a depletion layer caused by the stress overshoots, but surprisingly, in the limit of very large samples the hysteresis loops vanish. We argue that, although there can be striking differences over a wide range of length scales, the universality class governing this pseudohysteresis is again that of the dissipative limit. Consequences of this picture for the statistics and dynamics of earthquakes on geological faults are briefly discussed. Comment: 43 pages, 57 figures (yes, that's a five followed by a seven), revtex.
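    As a rough illustration of the finite-size scaling analysis mentioned above, the sketch below rescales velocity-force curves from different system sizes onto a single master curve; the scaling form, exponent values, and synthetic data are generic placeholders, not the exponents or quantities actually measured in the paper:

```python
import numpy as np

def fss_collapse(F, v, L, F_c, beta, nu):
    """Rescale force/velocity data for a system of linear size L using the generic
    finite-size scaling ansatz  v = L**(-beta/nu) * g((F - F_c) * L**(1/nu))."""
    x = (F - F_c) * L ** (1.0 / nu)
    y = v * L ** (beta / nu)
    return x, y

# Synthetic curves built to obey the ansatz with placeholder parameters
# (F_c = 1.5, beta = 0.6, nu = 1.0); real values must come from simulation data.
F = np.linspace(1.4, 2.0, 13)
for L in (64, 256):
    x_true = (F - 1.5) * L
    v = L ** (-0.6) * np.clip(x_true, 0.0, None) ** 0.6
    x, y = fss_collapse(F, v, L, F_c=1.5, beta=0.6, nu=1.0)
    # After rescaling, y as a function of x is identical for both L: the curves collapse.
```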

    A Consistency Test of EFT Power Countings from Residual Cutoff Dependence

    A method to quantitatively assess the consistency of power-counting proposals in Effective Field Theories (EFTs) which are non-perturbative at leading order is presented. The Renormalisation Group evolution of an observable predicts the functional form of its residual cutoff dependence on the breakdown scale of an EFT, on the low-momentum scales, and on the order of the calculation. Passing this test is a necessary but not sufficient consistency criterion for a suggested power counting whose exact nature is disputed. In Chiral Effective Field Theory (ChiEFT) with more than one nucleon, a lack of universally accepted analytic solutions obfuscates the convergence pattern in results. This has led to proposals which predict different sets of Low Energy Coefficients (LECs) at the same chiral order, and at times even predict a different ordering of long-range contributions. The method may independently check whether an observable is renormalised at a given order, and it provides estimates of both the breakdown scale and the momentum-dependent order-by-order convergence pattern. Conversely, it helps identify those LECs (and long-range pieces) which ensure renormalised observables at a given order. I also discuss assumptions and the relation to Wilson's Renormalisation Group; useful observable and cutoff choices; the momentum window with the likely best signals; its dependence on the values and forms of cutoffs as well as on the EFT parameters; the impact of fitting LECs to data; and caveats as well as limitations. Since the test is designed to minimise the use of data, it quantitatively tests whether the EFT has been renormalised consistently. This complements other tests which quantify how well an EFT compares to experiment. Its application, in particular to the 3P0 and 3P2-3F2 partial waves of NN scattering in ChiEFT, may elucidate persistent power-counting issues. Comment: 15 pages LaTeX2e (pdflatex) including 5 figures as .pdf files using includegraphics. Final version to appear in the Eur. Phys. J. A topical issue "The Tower of Effective (Field) Theories and the Emergence of Nuclear Phenomena". arXiv admin note: substantial text overlap with arXiv:1511.00490. Author's note: substantial corrections in key argument and expansions. Version appearing in Eur. Phys. J. A.
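    A schematic version of the functional form such a test relies on, written here as an assumption rather than the paper's exact expression: for an observable computed at a given order with a cutoff \Lambda well above the breakdown scale,

    \mathcal{O}(k;\Lambda) \approx \mathcal{O}(k;\Lambda\to\infty)\,\left[\,1 + c(k)\,\left(\frac{k_{\mathrm{typ}}}{\Lambda}\right)^{m} + \dots\,\right],

    where k_typ stands for the low-momentum scales and the power m is set by the first order not retained in the calculation. On a double-logarithmic plot of the residual cutoff dependence against 1/\Lambda, the slope should therefore grow with the order of the calculation; a slope that does not is the quantitative signal of an inconsistent power counting.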

    An analysis of the evolving comoving number density of galaxies in hydrodynamical simulations

    The cumulative comoving number density of galaxies as a function of stellar mass or central velocity dispersion is commonly used to link galaxy populations across different epochs. By assuming that galaxies preserve their number density in time, one can infer the evolution of their properties, such as masses, sizes, and morphologies. However, this assumption does not hold in the presence of galaxy mergers or when rank ordering is broken owing to variable stellar growth rates. We present an analysis of the evolving comoving number density of galaxy populations found in the Illustris cosmological hydrodynamical simulation, focused on the redshift range 0 \leq z \leq 3. Our primary results are as follows: 1) The inferred average stellar mass evolution obtained via a constant comoving number density assumption is systematically biased compared to the merger-tree results at the factor of \sim 2 (4) level when tracking galaxies from redshift z=0 out to redshift z=2 (3); 2) The median number density evolution for galaxy populations tracked forward in time is shallower than for galaxy populations tracked backward in time; 3) A similar evolution in the median number density of tracked galaxy populations is found regardless of whether number density is assigned via stellar mass, stellar velocity dispersion, or dark matter halo mass; 4) Explicit tracking reveals a large diversity in galaxies' assembly histories that cannot be captured by constant number-density analyses; 5) The significant scatter in galaxy linking methods is only marginally reduced by considering a number of additional physical and observable galaxy properties as realized in our simulation. We provide fits for the forward and backward median evolution in stellar mass and number density and discuss implications of our analysis for interpreting multi-epoch galaxy property observations. Comment: 18 pages, 11 figures, submitted to MNRAS, comments welcome.
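    The constant comoving number density technique that the paper evaluates can be sketched directly; the mass distributions, box volume, and target density below are placeholders, not Illustris outputs:

```python
import numpy as np

def mass_at_number_density(masses, volume, n_target):
    """Stellar mass at which the cumulative comoving number density of the sample,
    counted from the most massive object downward, reaches n_target [Mpc^-3]."""
    m_sorted = np.sort(masses)[::-1]                     # descending stellar masses
    n_cum = np.arange(1, m_sorted.size + 1) / volume     # cumulative number density
    idx = min(np.searchsorted(n_cum, n_target), m_sorted.size - 1)
    return m_sorted[idx]

# Toy galaxy populations at two epochs in a (100 Mpc)^3 box (placeholder values).
rng = np.random.default_rng(0)
vol = 100.0 ** 3
m_z0 = 10 ** rng.normal(10.5, 0.5, 50_000)   # log-normal stellar masses at z = 0
m_z2 = 10 ** rng.normal(10.0, 0.5, 50_000)   # log-normal stellar masses at z = 2

n = 1e-4  # fixed cumulative comoving number density [Mpc^-3]
print(mass_at_number_density(m_z0, vol, n), mass_at_number_density(m_z2, vol, n))
```

    Comparing the two masses returned at fixed n is precisely the inference that, per result 1) above, is biased at the factor of \sim 2-4 level relative to explicit merger-tree tracking.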

    What is a Cool-Core Cluster? A Detailed Analysis of the Cores of the X-ray Flux-Limited HIFLUGCS Cluster Sample

    We use the largest complete sample of 64 galaxy clusters (HIghest X-ray FLUx Galaxy Cluster Sample, HIFLUGCS) with available high-quality X-ray data from Chandra, and apply 16 cool-core diagnostics to them, some of them new. We also correlate optical properties of brightest cluster galaxies (BCGs) with X-ray properties. To segregate cool-core and non-cool-core clusters, we find that the central cooling time, t_cool, is the best parameter for low-redshift clusters with high-quality data, and that cuspiness is the best parameter for high-redshift clusters. 72% of the clusters in our sample have a cool core (t_cool < 7.7 h_{71}^{-1/2} Gyr) and 44% have strong cool cores (t_cool < 1.0 h_{71}^{-1/2} Gyr). For the first time we show quantitatively that the discrepancy between classical and spectroscopic mass deposition rates cannot be explained by a recent formation of the cool cores, demonstrating the need for a heating mechanism to explain the cooling flow problem. [Abridged] Comment: 45 pages, 19 figures, 7 tables. Accepted for publication in A&A. Contact person: Rupal Mittal ([email protected]).
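    The cooling-time thresholds quoted above translate directly into a classification rule; the snippet below simply restates those cuts (the "weak cool core" label for the intermediate range is an assumption, and the input values are placeholders):

```python
def classify_cool_core(t_cool_gyr):
    """Classify a cluster by central cooling time t_cool, in h_71^{-1/2} Gyr,
    using the thresholds quoted in the abstract."""
    if t_cool_gyr < 1.0:
        return "strong cool core"
    if t_cool_gyr < 7.7:
        return "weak cool core"      # label assumed; threshold from the abstract
    return "non-cool core"

for t in (0.4, 3.0, 12.0):           # placeholder central cooling times
    print(t, classify_cool_core(t))
```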