
    Yield pillar design for United States longwall mining.

    Longwall mining in the United States has repeatedly set world production and safety records over the years. To ensure its continued success, continuous improvement of this mining technology is essential. This study addresses longwall pillar design by proposing a new design method based on the yield pillar concept. To simplify the geological and mining conditions, the method classifies in-situ roof and floor conditions into four categories: (1) strong roof and strong floor, (2) strong roof and weak floor, (3) weak roof and strong floor, and (4) weak roof and weak floor. According to the roof and floor conditions, the method can be used to design various types of three-entry systems for longwall panel development. Comparison among the design types indicates that the stiff-yield pillar design is the most favorable and can serve as an alternative, especially under deeper cover. The new pillar design method is developed from finite element model simulation, stability analysis, and nonlinear regression. The finite element model considers the material properties and time-dependent behavior of rock. To organize the simulations, orthogonal experimental design was used to arrange the finite element models and proved very effective. Using the simulations, the functions and mechanisms of the yield pillar were studied, and the important variables affecting the stability of the longwall entry-pillar system were identified. The new method is also compared with other available longwall pillar design methods, and an application example illustrates the basic design procedure involved.

    Empirical evidence for a celestial origin of the climate oscillations and its implications

    We investigate whether the decadal and multi-decadal climate oscillations have an astronomical origin. Several global surface temperature records since 1850 and records deduced from the orbits of the planets present very similar power spectra. Eleven frequencies with periods between 5 and 100 years closely correspond in the two sets of records. Among them, large climate oscillations with peak-to-trough amplitudes of about 0.1 °C and 0.25 °C, and periods of about 20 and 60 years, respectively, are synchronized to the orbital periods of Jupiter and Saturn. The Schwabe and Hale solar cycles are also visible in the temperature records. A 9.1-year cycle is synchronized to the Moon's orbital cycles. A phenomenological model based on these astronomical cycles can be used to reconstruct the temperature oscillations since 1850 well and to make partial forecasts for the 21st century. It is found that at least 60% of the global warming observed since 1970 has been induced by the combined effect of the above natural climate oscillations. The partial forecast indicates that the climate may stabilize or cool until 2030-2040. Possible physical mechanisms are discussed qualitatively, with an emphasis on the phenomenon of collective synchronization of coupled oscillators. Comment: 18 pages, 15 figures, 2 tables
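    The reconstruction described above amounts to a harmonic regression: fix the astronomically motivated periods (roughly 60, 20 and 9.1 years) and fit their amplitudes and phases, together with a slow background trend, by least squares. The sketch below illustrates that idea in Python on synthetic data; the quadratic background, the noise level and the synthetic series itself are assumptions for illustration, not the paper's actual model or data.

```python
# Minimal sketch of a phenomenological harmonic reconstruction, assuming fixed
# cycle periods (years) taken from the abstract and a synthetic
# temperature-anomaly series; not the paper's actual model or data.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1850, 2011, 1 / 12)            # monthly time axis, in years
periods = [60.0, 20.0, 9.1]                  # assumed cycle periods

# Synthetic "observed" anomalies: slow trend + cycles + noise (illustrative only)
true = (0.4 * ((t - 1850) / 160) ** 2
        + 0.12 * np.cos(2 * np.pi * (t - 2000) / 60.0)
        + 0.05 * np.cos(2 * np.pi * (t - 2000) / 20.0)
        + 0.03 * np.cos(2 * np.pi * (t - 2000) / 9.1))
obs = true + 0.1 * rng.standard_normal(t.size)

# Design matrix: quadratic background plus a sin/cos pair per fixed period,
# so amplitudes and phases are obtained by ordinary least squares.
cols = [np.ones_like(t), t - t.mean(), (t - t.mean()) ** 2]
for p in periods:
    cols += [np.sin(2 * np.pi * t / p), np.cos(2 * np.pi * t / p)]
X = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(X, obs, rcond=None)
recon = X @ coef

for p, a, b in zip(periods, coef[3::2], coef[4::2]):
    print(f"period {p:5.1f} yr: fitted peak amplitude {np.hypot(a, b):.3f} degC")
print("residual std:", np.round(np.std(obs - recon), 3), "degC")
```

    Extending the fitted harmonics past the last observation gives the kind of partial forecast mentioned in the abstract, under the strong assumption that the fixed-period cycles persist unchanged.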

    Characterization of Microtremor Records Using Simulated Microtremors

    The paper attempts to illustrate the potential application of simulation techniques for the interpretation and characterization of microtremors. Simulations are performed with two types of source distribution models, both involving a large number of Dirac-type wave sources randomly activated on the surface of a horizontally layered ground underlain by a half-space. An attempt is made to apply the technique to the interpretation of microtremors at the KASAI site in Chiba prefecture (Japan). The site has a deep base layer and weak impedance contrast. The parameter RF, defined as the ratio between the horizontal and vertical input forces at the source, is used as a measure of the proportion of Love wave components contained, together with Rayleigh waves, in the simulated microtremors. The microtremor records at KASAI appear to correspond to the simulation with RF = 0.1, indicating predominance of the Rayleigh wave components.
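    The simulation idea, many impulsive sources activated at random times on the surface with the horizontal input force scaled by RF relative to the vertical one, can be sketched as follows. A placeholder damped-oscillation response stands in for the layered-ground Green's functions used in the paper, and the chosen resonance frequencies, source count and H/V processing are illustrative assumptions only.

```python
# Sketch of microtremor simulation by randomly activated impulsive surface
# sources, with a placeholder damped-oscillation response standing in for the
# layered-ground Green's functions used in the paper; purely illustrative.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(1)
dt, n = 0.01, 2 ** 15                      # 0.01 s sampling, ~328 s record
t = np.arange(n) * dt
RF = 0.1                                   # horizontal / vertical input-force ratio

def response(f0, damping):
    """Placeholder receiver response to a unit impulse (stand-in Green's function)."""
    return np.exp(-damping * 2 * np.pi * f0 * t) * np.sin(2 * np.pi * f0 * t)

# Random Dirac-like activations: the same random sources drive both components,
# with the horizontal input force scaled by RF.
n_src = 400
onsets = rng.integers(0, n, n_src)         # random activation samples
amps = rng.standard_normal(n_src) / rng.uniform(50.0, 2000.0, n_src)
kick = np.zeros(n)
np.add.at(kick, onsets, amps)

resp_h = response(f0=1.2, damping=0.05)    # assumed horizontal-mode response
resp_v = response(f0=0.8, damping=0.05)    # assumed vertical-mode response
h = RF * fftconvolve(kick, resp_h)[:n]
v = fftconvolve(kick, resp_v)[:n]

# H/V spectral ratio of the synthetic microtremor record.
freq = np.fft.rfftfreq(n, dt)
hv = np.abs(np.fft.rfft(h)) / (np.abs(np.fft.rfft(v)) + 1e-12)
band = (freq > 0.2) & (freq < 5.0)
print(f"peak H/V ratio {hv[band].max():.2f} near {freq[band][hv[band].argmax()]:.2f} Hz")
```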

    A two-step approach to model precipitation extremes in California based on max-stable and marginal point processes

    In modeling spatial extremes, the dependence structure is classically inferred by assuming that block maxima derive from max-stable processes. Weather stations provide daily records rather than just block maxima. The point process approach for univariate extreme value analysis, which uses more historical data and is preferred by some practitioners, does not adapt easily to the spatial setting. We propose a two-step approach with a composite likelihood that utilizes site-wise daily records in addition to block maxima. The procedure separates the estimation of marginal parameters and dependence parameters into two steps. The first step estimates the marginal parameters with an independence likelihood from the point process approach using daily records. Given the marginal parameter estimates, the second step estimates the dependence parameters with a pairwise likelihood using block maxima. In a simulation study, the two-step approach was found to be more efficient than the pairwise likelihood approach using only block maxima. The method was applied to study the effect of the El Niño-Southern Oscillation on extreme precipitation in California, using maximum daily winter precipitation from 35 sites over 55 years. With site-specific generalized extreme value models, the two-step approach led to more sites with a detected El Niño effect, narrower confidence intervals for return levels, and tighter confidence regions for risk measures of jointly defined events. Comment: Published at http://dx.doi.org/10.1214/14-AOAS804 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
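    The two steps can be illustrated compactly: first estimate site-wise GEV parameters from daily records through the point-process (threshold-exceedance) likelihood, then transform the block maxima to the unit Fréchet scale with those estimates and maximize a pairwise likelihood for the dependence parameter. The sketch below assumes a simple symmetric logistic dependence model and synthetic stand-in data; the paper's max-stable model, covariates and precipitation data differ.

```python
# Sketch of the two-step fit, assuming a point-process marginal likelihood and
# a symmetric-logistic pairwise likelihood on synthetic data; illustrative only.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

def pp_nllh(theta, daily, u, n_years):
    """Step 1: negative point-process log-likelihood (GEV parameterisation)
    for daily exceedances of threshold u at one site."""
    mu, log_sig, xi = theta
    sig = np.exp(log_sig)
    exc = daily[daily > u]
    z = 1 + xi * (exc - mu) / sig
    zu = 1 + xi * (u - mu) / sig
    if zu <= 0 or np.any(z <= 0):
        return np.inf
    return n_years * zu ** (-1 / xi) + np.sum(np.log(sig) + (1 / xi + 1) * np.log(z))

def to_frechet(bm, mu, sig, xi):
    """Transform block maxima to the unit-Frechet scale given a marginal GEV fit."""
    return np.maximum(1 + xi * (bm - mu) / sig, 1e-9) ** (1 / xi)

def logistic_pair_nllh(alpha, z1, z2):
    """Step 2: negative log pairwise likelihood of the symmetric logistic
    max-stable model on unit-Frechet margins (0 < alpha <= 1)."""
    s = z1 ** (-1 / alpha) + z2 ** (-1 / alpha)
    return -np.sum(-s ** alpha
                   - (1 / alpha + 1) * (np.log(z1) + np.log(z2))
                   + (alpha - 2) * np.log(s)
                   + np.log(s ** alpha + (1 - alpha) / alpha))

# Synthetic stand-in data: two sites, 55 "winters" of 90 daily values each.
rng = np.random.default_rng(2)
n_years, n_days = 55, 90
common = genextreme.rvs(c=-0.1, loc=10, scale=5, size=(n_years, n_days), random_state=rng)
daily = [common + rng.gamma(2.0, 1.0, common.shape) for _ in range(2)]
block_max = [d.max(axis=1) for d in daily]

# Step 1: marginal (mu, sigma, xi) per site from the daily records.
marg = []
for d in daily:
    u = np.quantile(d, 0.95)
    fit = minimize(pp_nllh, x0=[d.max(axis=1).mean(), np.log(d.std()), 0.1],
                   args=(d.ravel(), u, n_years), method="Nelder-Mead")
    mu, log_sig, xi = fit.x
    marg.append((mu, np.exp(log_sig), xi))

# Step 2: dependence parameter alpha from paired block maxima.
z1 = to_frechet(block_max[0], *marg[0])
z2 = to_frechet(block_max[1], *marg[1])
dep = minimize(lambda a: logistic_pair_nllh(a[0], z1, z2), x0=[0.7],
               bounds=[(0.05, 1.0)], method="L-BFGS-B")
print("site-wise GEV estimates:", np.round(marg, 2))
print("estimated logistic dependence alpha:", np.round(dep.x[0], 2))
```

    In the paper the second step uses a spatial max-stable model over all site pairs together with covariates for the El Niño effect; the bivariate logistic model here only shows where each estimate enters the two-step structure.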

    A practical scheme for error control using feedback

    We describe a scheme for quantum error correction that employs feedback and weak measurement rather than the standard tools of projective measurement and fast controlled unitary gates. The advantage of this scheme over previous protocols (for example, Ahn et al., PRA 65, 042301 (2001)) is that it requires little side processing while remaining robust to measurement inefficiency, and it is therefore considerably more practical. We evaluate the performance of our scheme by simulating the correction of bit flips. We also consider implementation in a solid-state quantum computation architecture and estimate the maximal error rate that could be corrected with current technology. Comment: 12 pages, 3 figures. Minor typographic changes
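    The feedback loop itself, a weak measurement producing a noisy record, a filter estimating whether a bit flip has occurred, and a corrective operation applied when the estimate says so, can be caricatured with a deliberately classical toy. The sketch below tracks a single classical bit through a noisy measurement record and flips it back when the filtered signal crosses a threshold; measurement back-action, the multi-qubit encoding and the paper's actual feedback Hamiltonian are all omitted, and the rates, filter and threshold are assumptions for illustration.

```python
# Classical toy sketch of weak-measurement feedback against bit flips: a hidden
# bit flips at random, a noisy "measurement current" is low-pass filtered, and
# feedback flips the bit back when the estimate disagrees with the target.
# This illustrates the feedback loop only, not the paper's quantum protocol.
import numpy as np

rng = np.random.default_rng(3)
dt, steps = 1e-3, 200_000          # time step and number of steps
gamma = 0.5                        # assumed bit-flip rate (per unit time)
k = 20.0                           # measurement strength (sets signal-to-noise)
tau = 0.05                         # filter time constant for the record
delay = 0.01                       # assumed feedback latency

s = 1                              # hidden "Z eigenvalue": +1 target, -1 flipped
est = 1.0                          # filtered measurement signal
pending = None                     # time at which a queued correction fires
err_time = 0.0                     # accumulated time spent in the error state

for i in range(steps):
    t = i * dt
    if rng.random() < gamma * dt:              # random bit-flip error
        s = -s
    # Noisy weak-measurement record: dy = s*dt + dW / sqrt(4k)
    dy = s * dt + rng.normal(0.0, np.sqrt(dt / (4 * k)))
    est += (dy / dt - est) * dt / tau          # low-pass filtered current
    if est < -0.5 and pending is None:         # filtered record says "flipped"
        pending = t + delay                    # schedule a corrective flip
    if pending is not None and t >= pending:
        s = -s                                 # feedback: apply the correction
        est = 1.0                              # controller resets its estimate
        pending = None
    if s < 0:
        err_time += dt

print(f"fraction of time in the error state with feedback: {err_time / (steps * dt):.3f}")
print(f"without correction this would approach 0.5 at long times (flip rate {gamma})")
```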