
    Tearing Out the Income Tax by the (Grass)Roots

    Landscapes are increasingly fragmented, and conservation programs have started to look at network approaches for maintaining populations at a larger scale. We present an agent-based model of predator–prey dynamics where the agents (i.e. the individuals of either the predator or prey population) are able to move between different patches in a landscape network. We then analyze population levels and coexistence probability given node-centrality measures that characterize specific patches. We show that both predator and prey species benefit from living in globally well-connected patches (i.e. with high closeness centrality). However, the maximum number of prey is reached, on average, at lower closeness centrality levels than for predators. Hence, prey benefit from the constraints imposed on movement in fragmented landscapes, since they can reproduce with a lower risk of predation and their need for anti-predator strategies decreases.
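
    A minimal toy sketch of the kind of model described above, not the authors' code: predator and prey agents occupy nodes of a patch network, move along edges, and patch outcomes are compared against closeness centrality. The Watts–Strogatz landscape, all rates, and the carrying capacity are illustrative assumptions.

```python
# Toy agent-based predator-prey dynamics on a patch network (illustrative only).
import random
import networkx as nx

random.seed(1)
G = nx.connected_watts_strogatz_graph(30, 4, 0.2, seed=1)  # hypothetical landscape
closeness = nx.closeness_centrality(G)

prey = {n: random.randint(0, 5) for n in G}   # prey count per patch
pred = {n: random.randint(0, 2) for n in G}   # predator count per patch

def step():
    for n in list(G):
        eaten = min(prey[n], pred[n])          # each predator takes at most one prey
        prey[n] -= eaten
        births = sum(random.random() < 0.5 for _ in range(prey[n]))
        prey[n] = min(prey[n] + births, 20)    # assumed per-patch carrying capacity
        pred[n] = eaten + sum(random.random() < 0.3 for _ in range(eaten))
        for pop in (prey, pred):               # some individuals hop to a neighbour
            movers = sum(random.random() < 0.2 for _ in range(pop[n]))
            pop[n] -= movers
            for _ in range(movers):
                pop[random.choice(list(G[n]))] += 1

for _ in range(50):
    step()

ranked = sorted(G, key=closeness.get)          # least to most central patches
for n in ranked[:3] + ranked[-3:]:
    print(f"node {n}: closeness={closeness[n]:.2f}  prey={prey[n]}  pred={pred[n]}")
```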

    A self-parametrizing partition model approach to tomographic inverse problems

    Partition modelling is a statistical method for nonlinear regression and classification that is particularly suited to dealing with spatially variable parameters. Previous applications include disease mapping in medical statistics. Here we extend this method to the seismic tomography problem. The procedure involves a dynamic parametrization for the model which is able to adapt to an uneven spatial distribution of the information on the model parameters contained in the observed data. The approach provides a stable solution with no need for explicit regularization, i.e. there is neither a user-supplied damping term nor tuning of trade-off parameters. The method is an ensemble inference approach within a Bayesian framework: many potential solutions are generated, and information is extracted from the ensemble as a whole. In terms of choosing a single model, it is straightforward to perform Monte Carlo integration to produce the expected Earth model. The inherent model averaging naturally smooths out unwarranted structure in the Earth model but maintains local discontinuities where they are well constrained by the data. Uncertainty estimates can also be calculated from the ensemble of models, and experiments with synthetic data suggest that they are good representations of the true uncertainty.
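
    A hedged sketch of the ensemble-averaging step described above: given posterior samples of partition models (here one-dimensional Voronoi models with a variable number of cells), each model is evaluated on a fixed grid and the ensemble is averaged pointwise. The stand-in sampler below just draws random models; it is not the paper's algorithm.

```python
# Monte Carlo integration over an ensemble of partition models (stand-in samples).
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 100.0, 201)           # fixed evaluation grid (e.g. depth)

def sample_partition_model():
    """A stand-in posterior sample: k Voronoi cells with random nuclei and values."""
    k = rng.integers(2, 8)                    # variable number of cells
    nuclei = rng.uniform(0.0, 100.0, size=k)
    values = rng.normal(5.0, 0.5, size=k)     # e.g. a velocity per cell
    return nuclei, values

def evaluate(nuclei, values, x):
    """Piecewise-constant model: each point takes the value of its nearest nucleus."""
    idx = np.abs(x[:, None] - nuclei[None, :]).argmin(axis=1)
    return values[idx]

ensemble = np.array([evaluate(*sample_partition_model(), grid) for _ in range(2000)])

mean_model = ensemble.mean(axis=0)   # Monte Carlo estimate of the expected Earth model
std_model = ensemble.std(axis=0)     # pointwise uncertainty from the ensemble
print(mean_model[:3], std_model[:3])
```

    Averaging piecewise-constant models whose cell boundaries fall in different places is what smooths unwarranted structure, while a boundary that appears at the same position in every sample survives the average as a sharp discontinuity.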

    Counting reducible, powerful, and relatively irreducible multivariate polynomials over finite fields

    We present counting methods for some special classes of multivariate polynomials over a finite field, namely the reducible ones, the s-powerful ones (divisible by the s-th power of a nonconstant polynomial), and the relatively irreducible ones (irreducible but reducible over an extension field). One approach employs generating functions; another uses a combinatorial method. They yield exact formulas and approximations with relative errors that essentially decrease exponentially in the input size.
    Comment: to appear in SIAM Journal on Discrete Mathematics
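
    As a hedged aside, the univariate analogue of such counts is classical and makes the reducible/irreducible split concrete: the number of monic irreducible polynomials of degree n over F_q is given by the necklace formula, and subtracting it from q^n counts the reducible ones. This is standard material, not the paper's multivariate result.

```python
# Necklace formula: monic irreducibles of degree n over F_q (univariate analogue).
from sympy import divisors
from sympy.ntheory import mobius

def monic_irreducible_count(q: int, n: int) -> int:
    # (1/n) * sum_{d | n} mu(d) * q^(n/d)
    return sum(mobius(d) * q ** (n // d) for d in divisors(n)) // n

q, n = 2, 4
total = q ** n                           # all monic degree-n polynomials over F_q
irreducible = monic_irreducible_count(q, n)
print(f"F_{q}, degree {n}: {irreducible} irreducible, {total - irreducible} reducible")
# -> F_2, degree 4: 3 irreducible, 13 reducible
```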

    Less is More: Exploiting the Standard Compiler Optimization Levels for Better Performance and Energy Consumption

    This paper presents the interesting observation that by performing only a subset of the optimizations in a standard compiler optimization level such as -O2, while preserving their original ordering, significant savings can be achieved in both execution time and energy consumption. This observation has been validated on two embedded processors, namely the ARM Cortex-M0 and the ARM Cortex-M3, using two different versions of the LLVM compilation framework, v3.8 and v5.0. Experimental evaluation with 71 embedded benchmarks demonstrated performance gains for at least half of the benchmarks for both processors. An average execution time reduction of 2.4% and 5.3% was achieved across all the benchmarks for the Cortex-M0 and Cortex-M3 processors, respectively, with execution time improvements ranging from 1% up to 90% over -O2. These savings are in the same range as those achieved by state-of-the-art compilation approaches that use iterative compilation or machine learning to select flags or determine phase orderings that result in more efficient code. In contrast to these time-consuming and expensive-to-apply techniques, our approach only needs to test a limited number of optimization configurations, fewer than 64, to obtain similar or even better savings. Furthermore, our approach can support multi-criteria optimization, as it targets execution time, energy consumption, and code size at the same time.
    Comment: 15 pages, 3 figures, 71 benchmarks used for evaluation
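
    A sketch of the search this abstract describes: keep the -O2 pass ordering but cut it off early, compiling once per cut-off point, so the number of configurations grows linearly in the number of passes rather than exponentially. The pass list shown is a small illustrative subset in the newer pass-manager flag syntax, and the measurement harness (./measure) is a placeholder, not the paper's tooling.

```python
# Evaluate prefixes of an ordered optimization pass list (illustrative sketch).
import subprocess

O2_PASSES = ["mem2reg", "instcombine", "simplifycfg", "gvn", "licm"]  # subset only

def build_and_time(passes):
    """Compile a benchmark with an explicit ordered pass list, then time it."""
    subprocess.run(["clang", "-O0", "-emit-llvm", "-c", "bench.c", "-o", "bench.bc"],
                   check=True)
    subprocess.run(["opt", f"-passes={','.join(passes)}", "bench.bc", "-o", "opt.bc"],
                   check=True)
    subprocess.run(["clang", "opt.bc", "-o", "bench"], check=True)
    out = subprocess.run(["./measure", "./bench"],            # placeholder harness
                         check=True, capture_output=True, text=True)
    return float(out.stdout)

# One configuration per cut-off point: len(O2_PASSES) runs, not 2**len(O2_PASSES).
results = {i: build_and_time(O2_PASSES[:i]) for i in range(1, len(O2_PASSES) + 1)}
best = min(results, key=results.get)
print(f"best cut-off: first {best} passes, {results[best]:.3f}s")
```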

    Benchmarking computer platforms for lattice QCD applications

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC clusters, and QCDOC.
    Comment: 3 pages, Lattice03, machines and algorithms

    Low Frequency Tilt Seismology with a Precision Ground Rotation Sensor

    We describe measurements of the rotational component of teleseismic surface waves using an inertial high-precision ground rotation sensor installed at the LIGO Hanford Observatory (LHO). The sensor has a noise floor of 0.4 nrad/√Hz at 50 mHz and a translational coupling of less than 1 ÎŒrad/m, enabling translation-free measurement of small rotations. We present observations of the rotational motion from Rayleigh waves of six teleseismic events from varied locations and with magnitudes ranging from M6.7 to M7.9. These events were used to estimate phase dispersion curves, which show agreement with a similar analysis performed with an array of three STS-2 seismometers also located at LHO.
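
    For a plane Rayleigh wave, the tilt is the horizontal gradient of the vertical displacement, so the amplitude ratio of vertical acceleration to rotation gives the phase velocity: c(ω) = |a_z(ω)| / (ω·|Ξ(ω)|). A hedged sketch of that single-station estimate on synthetic data follows; the real analysis additionally involves windowing, instrument response, and event geometry.

```python
# Single-station Rayleigh phase velocity from collocated acceleration and tilt.
import numpy as np

fs, T = 10.0, 2000.0                        # sample rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
f0, c_true = 0.05, 3800.0                   # 50 mHz wave at 3.8 km/s (synthetic)

omega = 2 * np.pi * f0
k = omega / c_true
A = 1e-6                                    # vertical displacement amplitude (m)
acc = -A * omega**2 * np.cos(omega * t)     # vertical acceleration at x = 0
rot = A * k * np.sin(omega * t)             # tilt = du_z/dx at x = 0 (rad)

Acc, Rot = np.fft.rfft(acc), np.fft.rfft(rot)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
i = np.argmin(np.abs(freqs - f0))           # spectral bin nearest the wave frequency

c_est = np.abs(Acc[i]) / (2 * np.pi * freqs[i] * np.abs(Rot[i]))
print(f"estimated phase velocity: {c_est:.0f} m/s (true {c_true:.0f})")
```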

    Transdimensional inversion of receiver functions and surface wave dispersion

    We present a novel method for joint inversion of receiver functions and surface wave dispersion data, using a transdimensional Bayesian formulation. This class of algorithm treats the number of model parameters (e.g. the number of layers) as an unknown in the problem. The dimension of the model space is variable, and a Markov chain Monte Carlo (McMC) scheme is used to provide a parsimonious solution that fully quantifies the degree of knowledge one has about seismic structure (i.e. constraints on the model, resolution, and trade-offs). The level of data noise (i.e. the covariance matrix of data errors) effectively controls the information recoverable from the data, and here it naturally determines the complexity of the model (i.e. the number of model parameters). However, it is often difficult to quantify the data noise appropriately, particularly in the case of seismic waveform inversion where data errors are correlated. Here we address the issue of noise estimation using an extended Hierarchical Bayesian formulation, which allows both the variance and covariance of data noise to be treated as unknowns in the inversion. In this way it is possible to let the data infer the appropriate level of data fit. In the context of joint inversions, assessment of uncertainty for different data types becomes crucial in the evaluation of the misfit function. We show that the Hierarchical Bayes procedure is a powerful tool in this situation, because it is able to evaluate the level of information brought by different data types to the misfit, thus removing the arbitrary choice of weighting factors. After illustrating the method with synthetic tests, a real data application is shown in which teleseismic receiver functions and ambient noise surface wave dispersion measurements from the WOMBAT array (South-East Australia) are jointly inverted to provide a probabilistic 1D model of shear-wave velocity beneath a given station.
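
    A toy illustration of the Hierarchical Bayes idea above, with a two-parameter linear model standing in for the seismic problem (the transdimensional layer structure is omitted): two data types constrain the same model, each data type's noise level σ is itself sampled, and the normalisation term in the Gaussian likelihood is what lets the data set their own relative weights. All numbers are invented.

```python
# Hierarchical Metropolis-Hastings: noise levels sampled alongside the model.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 1, 50)
d1 = 2.0 * x + 1.0 + rng.normal(0, 0.1, x.size)   # data type 1: quiet
d2 = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)   # data type 2: noisy

def log_post(m, c, s1, s2):
    if s1 <= 0 or s2 <= 0:
        return -np.inf
    pred = m * x + c
    # The -N*log(sigma) normalisation penalises inflating sigma, so each data
    # type's inferred noise level sets its weight in the joint misfit.
    ll1 = -0.5 * np.sum((d1 - pred) ** 2) / s1**2 - d1.size * np.log(s1)
    ll2 = -0.5 * np.sum((d2 - pred) ** 2) / s2**2 - d2.size * np.log(s2)
    return ll1 + ll2

theta = np.array([0.0, 0.0, 1.0, 1.0])            # m, c, sigma1, sigma2
lp = log_post(*theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, [0.05, 0.05, 0.02, 0.02])
    lp_prop = log_post(*prop)
    if np.log(rng.random()) < lp_prop - lp:       # symmetric proposal: plain MH
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

m, c, s1, s2 = np.mean(samples[10000:], axis=0)   # posterior mean after burn-in
print(f"slope={m:.2f} intercept={c:.2f} sigma1={s1:.2f} sigma2={s2:.2f}")
```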
    • 

    corecore