
    Towards Landslide Predictions: Two Case Studies

    In a previous work [Helmstetter, 2003], we proposed a simple physical model to explain the accelerating displacements preceding some catastrophic landslides, based on a slider-block model with a state- and velocity-dependent friction law. This model predicts two regimes of sliding, stable and unstable, the unstable regime leading to a critical finite-time singularity. The model was calibrated quantitatively to the displacement and velocity data preceding two landslides, Vaiont (Italian Alps) and La Clapière (French Alps), showing that the former (resp. latter) landslide is in the unstable (resp. stable) sliding regime. Here, we test the predictive skills of the state- and velocity-dependent model on these two landslides, using a variety of techniques. For the Vaiont landslide, our model provides good predictions of the critical time of failure up to 20 days before the collapse. Tests are also presented on the predictability of the time of the change of regime for the La Clapière landslide.
    Comment: 30 pages with 12 eps figures
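The unstable regime's finite-time singularity can be illustrated with a minimal sketch. This toy equation is an assumption chosen for illustration, not the paper's full rate-and-state friction model: if slip velocity obeys dv/dt = v^2/c, then 1/v decreases linearly to zero at the critical time t_c = c/v0, which is the basis of classical inverse-velocity failure forecasting. All parameter values are hypothetical.

```python
import numpy as np

# Toy finite-time slip singularity (an illustrative assumption, not the
# paper's rate-and-state slider-block model): dv/dt = v**2 / c has the
# closed-form solution v(t) = v0 / (1 - v0 * t / c), diverging at
# the critical time t_c = c / v0.
def slip_velocity(t, v0=1.0, c=10.0):
    return v0 / (1.0 - v0 * t / c)

t = np.linspace(0.0, 9.5, 200)       # observe up to just before t_c = 10
inv_v = 1.0 / slip_velocity(t)

# Inverse velocity falls linearly; extrapolating it to zero forecasts t_c.
slope, intercept = np.polyfit(t, inv_v, 1)
t_c_predicted = -intercept / slope   # ≈ 10.0
```

The linear trend of 1/v toward zero is what makes early prediction of the failure time possible in the unstable regime.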

    Transient vibration analysis of a completely free plate using modes obtained by Gorman's superposition method

    This paper shows that the transient response of a plate undergoing flexural vibration can be calculated accurately and efficiently using the natural frequencies and modes obtained from the superposition method. The response of a completely free plate is used to demonstrate this. The case considered is one where all supports of a simply supported thin rectangular plate under self-weight are suddenly removed. The resulting motion consists of a combination of the natural modes of a completely free plate. The modal superposition method is used to determine the transient response, and the natural frequencies and mode shapes of the plates are obtained by Gorman's superposition method. These are compared with corresponding results based on modes from the Rayleigh–Ritz method using ordinary and degenerated free–free beam functions. There is excellent agreement between the results from both approaches, but the superposition method shows faster convergence, and the results may serve as benchmarks for the transient response of completely free plates.
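The modal superposition step can be sketched in miniature: a system released from rest in an initial static shape u0 responds as a sum of modes weighted by their initial modal displacements. This is a toy two-degree-of-freedom analogue of the plate problem; the matrices M and K below are illustrative assumptions, not plate data.

```python
import numpy as np

# Modal superposition for an undamped system released from rest in an
# initial static shape u0 (toy 2-DOF analogue; M, K are assumptions).
M = np.eye(2)                          # unit masses, so the generalized
K = np.array([[2.0, -1.0],             # eigenproblem reduces to eigh(K)
              [-1.0, 2.0]])

w2, Phi = np.linalg.eigh(K)            # w2 = omega^2, columns = mode shapes
omega = np.sqrt(w2)                    # natural frequencies: 1 and sqrt(3)

u0 = np.array([1.0, 0.0])              # initial deflection, zero velocity
q0 = Phi.T @ M @ u0                    # modal initial displacements

def response(t):
    """u(t) = sum_i q0_i * cos(omega_i * t) * phi_i (zero initial velocity)."""
    return Phi @ (q0 * np.cos(omega * t))
```

At t = 0 the superposition reproduces the initial shape exactly; accuracy of the transient response then hinges on how well the retained modes and frequencies are computed, which is the paper's point of comparison between the two methods.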

    Computation of atomic astrophysical opacities

    The revision of the standard Los Alamos opacities in the 1980-1990s by a group from the Lawrence Livermore National Laboratory (OPAL) and by the Opacity Project (OP) consortium was an early example of collaborative big-data science, leading to reliable data deliverables (atomic databases, monochromatic opacities, mean opacities, and radiative accelerations) widely used since then to solve a variety of important astrophysical problems. Nowadays the precision of the OPAL and OP opacities, and even of the new tables (OPLIB) by Los Alamos, is a recurrent topic in a hot debate involving stringent comparisons between theory, laboratory experiments, and solar and stellar observations in sophisticated research fields: the standard solar model (SSM), helio- and asteroseismology, non-LTE 3D hydrodynamic photospheric modeling, nuclear reaction rates, solar neutrino observations, computational atomic physics, and plasma experiments. In this context, an unexpected downward revision of the solar photospheric metal abundances in 2005 spoiled a very precise agreement between the helioseismic indicators (the radius of the convection-zone boundary, the sound-speed profile, and the helium surface abundance) and SSM benchmarks, which could be somehow reestablished with a substantial opacity increase. Recent laboratory measurements of the iron opacity in physical conditions similar to those at the boundary of the solar convection zone have indeed indicated significant increases (30-400%), although new systematic improvements and comparisons of the computed tables have not yet been able to reproduce them. We give an overview of this controversy and, within the OP approach, discuss some of the theoretical shortcomings that could be impairing a more complete and accurate opacity accounting.
    Comment: 31 pages, 10 figures. This review is originally based on a talk given at the 12th International Colloquium on Atomic Spectra and Oscillator Strengths for Astrophysical and Laboratory Plasmas, Sao Paulo, Brazil, July 2016. It has been published in the Atoms online journal.
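The mean opacities mentioned above are, in the Rosseland case, harmonic means of the monochromatic opacity weighted by the temperature derivative of the Planck function. A numerical sketch under stated assumptions (the opacity law and the temperature are hypothetical, chosen only to illustrate the weighting):

```python
import numpy as np

# Rosseland mean kappa_R of a hypothetical monochromatic opacity kappa_nu:
#   1/kappa_R = Int (1/kappa_nu)(dB_nu/dT) dnu / Int (dB_nu/dT) dnu
# Constant factors in dB_nu/dT cancel in the ratio, so only its shape is
# needed here.
h, kB, T = 6.626e-34, 1.381e-23, 5800.0

nu = np.linspace(1e12, 3e15, 200000)      # uniform frequency grid (Hz)
x = h * nu / (kB * T)
w = nu**4 * np.exp(x) / np.expm1(x)**2    # dB_nu/dT up to constants

kappa_nu = 1.0 + 5.0 * (nu / 1.5e15)**-2  # assumed toy opacity law
inv_kappa_R = np.sum(w / kappa_nu) / np.sum(w)  # uniform grid: dnu cancels
kappa_R = 1.0 / inv_kappa_R
```

Because it is a harmonic mean, kappa_R is dominated by the frequency windows where the opacity is lowest, which is why missing lines or underestimated photoionization cross sections in those windows can bias the mean.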

    Implementation of standard testbeds for numerical relativity

    We discuss results obtained from the implementation of the initial round of testbeds for numerical relativity proposed in the first paper of the Apples with Apples Alliance. We present benchmark results for various codes, which provide templates for analyzing the testbeds and allow conclusions to be drawn about various features of the codes. This allows us to sharpen the initial test specifications, design a new test, and add theoretical insight.
    Comment: Corrected version
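One standard way to analyze benchmark output of this kind is a convergence test: run the same evolution at grid spacings h and h/2 and estimate the convergence order from the ratio of the error norms. A generic sketch (the error values below are hypothetical, for illustration only):

```python
import numpy as np

# Convergence-order estimate from errors at two resolutions h and h/2
# (the error norms here are made-up numbers, not testbed data).
def convergence_order(err_h, err_h2):
    """Return p such that err ~ h**p, given errors at spacings h and h/2."""
    return np.log2(err_h / err_h2)

# A second-order-accurate code should roughly quarter its error
# when the grid spacing is halved:
p = convergence_order(4.0e-3, 1.0e-3)   # -> 2.0
```

Plotting p against time for each code is a common template for spotting where a scheme loses its nominal accuracy.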

    Pseudogap and high-temperature superconductivity from weak to strong coupling. Towards a quantitative theory

    This is a short review of the theoretical work on the two-dimensional Hubbard model performed in Sherbrooke in the last few years, written on the occasion of the twentieth anniversary of the discovery of high-temperature superconductivity. We discuss several approaches, how they were benchmarked, and how they agree sufficiently with each other that we can trust the results to be accurate solutions of the Hubbard model. Comparisons are then made with experiment. We show that the Hubbard model does exhibit d-wave superconductivity and antiferromagnetism essentially where they are observed for both hole- and electron-doped cuprates. We also show that the pseudogap phenomenon comes out of these calculations. In the case of electron-doped high-temperature superconductors, comparisons with angle-resolved photoemission experiments are nearly quantitative. The pseudogap temperature measured for these compounds in recent photoemission experiments was predicted by theory before it was observed experimentally; additional experimental confirmation would be useful. The theoretical methods surveyed include mostly the Two-Particle Self-Consistent Approach, Variational Cluster Perturbation Theory (or variational cluster approximation), and Cellular Dynamical Mean-Field Theory.
    Comment: 32 pages, 51 figures. Slight modifications to text, figures and references. A PDF file with higher-resolution figures is available at http://www.physique.usherbrooke.ca/senechal/LTP-toc.pd
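The Hubbard model itself can be made concrete on the smallest nontrivial cluster: two sites at half filling, exactly diagonalized in the Sz = 0 sector. This is only a minimal illustration of the model the review studies, not of the cluster methods it surveys; the values of the hopping t and interaction U are arbitrary.

```python
import numpy as np

# Two-site Hubbard model with one up and one down electron (Sz = 0).
# Basis: |up1 dn1>, |up1 dn2>, |up2 dn1>, |up2 dn2>.  Hopping enters
# with amplitude -t; U penalizes the doubly occupied configurations.
t, U = 1.0, 4.0
H = np.array([[ U, -t, -t,  0],
              [-t,  0,  0, -t],
              [-t,  0,  0, -t],
              [ 0, -t, -t,  U]], dtype=float)

E0 = np.linalg.eigvalsh(H)[0]
# Known exact ground-state energy of the two-site model:
E0_exact = (U - np.sqrt(U**2 + 16 * t**2)) / 2
```

Even this four-state problem shows the competition the review is about: the ground state is a singlet that mixes singly and doubly occupied configurations, with double occupancy suppressed as U/t grows.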

    A Parallel Algorithm for Solving the 3d Schrödinger Equation

    We describe a parallel algorithm for solving the time-independent 3d Schrödinger equation using the finite difference time domain (FDTD) method. We introduce an optimized parallelization scheme that reduces communication overhead between computational nodes. We demonstrate that the compute time, t, scales inversely with the number of computational nodes as t ~ N_nodes^(-0.95 +/- 0.04). This makes it possible to solve the 3d Schrödinger equation on extremely large spatial lattices using a small computing cluster. In addition, we present a new method for precisely determining the energy eigenvalues and wavefunctions of quantum states based on a symmetry constraint on the FDTD initial condition. Finally, we discuss the use of multi-resolution techniques to speed up convergence on extremely large lattices.
    Comment: 18 pages, 7 figures; published version
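The FDTD approach to the time-independent problem evolves a trial wavefunction in imaginary time, so that excited components decay as exp(-E_n tau) and the ground state survives. A serial one-dimensional sketch for the harmonic oscillator (hbar = m = omega = 1; the grid and step sizes are illustrative choices, and this omits the paper's parallelization and symmetry-constraint machinery):

```python
import numpy as np

# Imaginary-time relaxation d(psi)/d(tau) = -H psi on a finite-difference
# grid: excited states decay fastest, leaving the ground state.
n = 201
x = np.linspace(-5.0, 5.0, n)
dx = x[1] - x[0]
V = 0.5 * x**2                      # harmonic oscillator potential
dt = 0.1 * dx**2                    # well inside explicit-scheme stability

psi = np.exp(-x**2)                 # arbitrary initial guess
for _ in range(20000):
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx**2
    psi += dt * (0.5 * lap - V * psi)          # one imaginary-time step
    psi /= np.sqrt(np.sum(psi**2) * dx)        # keep psi normalized

lap[1:-1] = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx**2
E0 = np.sum(psi * (-0.5 * lap + V * psi)) * dx  # energy; exact value is 0.5
```

In 3d the update is the same stencil applied along each axis, and the near-linear scaling reported above comes from splitting the lattice across nodes so that only the stencil's boundary planes must be exchanged per step.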