159 research outputs found

    2001 Bhuj-Kachchh earthquake: surface faulting and its relation with neotectonics and regional structures, Gujarat, Western India

    Primary and secondary surface deformation related to the 2001 Bhuj-Kachchh earthquake suggests that thrusting movement took place along an E-W fault near the western extension of the South Wagad Fault, a synthetic fault of the Kachchh Mainland Fault (KMF). Despite early reconnaissance reports that concluded there was no primary surface faulting, we describe an 830 m long, 15-35 cm high, east-west-trending thrust fault scarp close to where the seismogenic fault plane would project to the surface, near Bharodiya village (between 23°34.912'N, 70°23.942'E and 23°34.304'N, 70°24.884'E). Along most of the scarp Jurassic bedrock is thrust over Quaternary deposits, but the fault scarp also displaces Holocene alluvium and an earth dam, with dips of 13° to 36° south. Secondary co-seismic features, mainly liquefaction and lateral spreading, dominate the area south of the thrust. Transverse right-lateral movement along the "Manfara Fault" and a parallel fault near Bharodiya suggests segmentation of the E-W master faults. Primary (thrust) surface rupture had a length of 0.8 km, a maximum displacement of about 35 cm, and an average displacement of about 15 cm. Secondary (strike-slip) faulting was more extensive, with a total end-to-end length of 15 km, a maximum displacement of 35 cm, and an average displacement of about 20 cm.

    Evaluating Quaternary activity versus inactivity on faults and folds using geomorphological mapping and trenching: Seismic hazard implications

    The incorporation of active faults in seismic hazard analyses may have a significant impact on the feasibility, design and cost of major engineering projects (e.g., nuclear facilities, dams), especially when located in the site vicinity. The regulatory definition of an active versus inactive fault is generally based on whether the fault has ruptured or not after a specific chronological bound (i.e. fault recency). This work presents a methodology, mainly based on geomorphological mapping and trenching, for determining whether specific faults can be considered active or inactive. The approach is illustrated through the analysis of several faults located in the Spanish Pyrenees (Loiti, Leyre, La Trinidad, Ruesta faults). The 29 km long Loiti Thrust was included in the Neotectonic Map of Spain as a probable neotectonic structure. Previous works, based on geomorphological investigations, incorporated the 28 km long Leyre Thrust as a significant seismic source in a probabilistic seismic hazard analysis, which challenged the seismic design of nearby large dams. The production of detailed geomorphological strip maps along the faults allowed the recognition of specific sites where the faults are covered by Quaternary deposits. The establishment of chronosequences (pediment-terrace sequences) and the available geochronological data helped identify the most adequate morpho-stratigraphic units for satisfactorily evaluating fault activity versus inactivity. The excavation of trenches at the selected sites provided unambiguous information on the presence or lack of deformation in the Quaternary cover overlying the fault, and on the origin of scarps (tectonic versus erosional). Trenches were also useful for collecting samples and reliably measuring the relative height of terraces overlain by thick colluvium.
    The evidence gathered by these methods was complemented with the numerical dating of non-deformed slope deposits covering a fault, the analysis of the longitudinal profiles of old pediment surfaces located in the proximity of a fault, the examination of a cave situated next to a fault in search of speleoseismological evidence, and regional geodetic and seismotectonic data (GPS measurements, earthquake focal mechanisms). The integration of all the data, and especially the trenches dug in non-deformed old terrace deposits (>100 ka) truncating the faults, indicates that the analysed faults can be considered inactive and that previous neotectonic postulations were based on invalid geomorphological interpretations.

    Tectonic geomorphology and late Quaternary deformation on the Ragged Mountain fault, Yakutat microplate, south coastal Alaska

    The 33 km-long Ragged Mountain fault (RMF) forms the northwestern corner of the Yakutat Terrane, which is colliding with the North American plate in south coastal Alaska at ~5.5 cm/yr. The fault zone contains three types of scarps in a zone up to 175 m wide: (1) antislope scarps on the lower range front, (2) a sinuous thrust scarp at the toe of the range front, and (3) a swarm of flexural-slip scarps on the footwall. Trenches across the first two scarp types reveal evidence for two Holocene surface ruptures, plus several late Pleistocene ruptures. In the antislope scarp trench, ruptures occurred at 0.5–3.9 ka; slightly younger than 8.3 ka; and at 18.1–21.8 ka (recurrence intervals 4.4–8 kyr and 9.8–13.3 kyr). Displacements per event ranged from 15 to 40 cm. In the thrust trench, ruptures are dated at 2.8–5.9 ka, 5.9–17.2 ka, and 17.2–44.9 ka (mean recurrence intervals 7.2 kyr and 19.5 kyr). Displacements per event ranged from 26 to 77 cm. We interpret the thrust fault as the primary seismogenic structure, and its largest trench displacement (77 cm) equates to the average displacement expected for a 33 km-long reverse rupture. The flexural-slip scarp, in contrast, formed rapidly ca. 4 ka, but its sag pond sediments have continued to slowly fold up to the present. The southern third of the fault is dominated by large gravitational failures of the range front (as large as 2.5 km wide, 0.6-0.7 km long, and 200–250 m thick), which head in a linear, 40 m-deep range-crest trough filled with lakes, a classic expression of deep-seated gravitational slope deformation.
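The recurrence-interval ranges quoted above follow directly from the event age windows: the minimum interval is the gap between the closest ends of two adjacent windows, the maximum between the farthest ends. A quick sketch of that arithmetic (the helper function and treatment of the ~8.3 ka event as a point age are illustrative assumptions, not from the paper, so the upper bounds differ slightly from the published 4.4–8 and 9.8–13.3 kyr):

```python
def interval_bounds(younger, older):
    """Min/max elapsed time between two events given their (min, max) age windows, in ka."""
    y_min, y_max = younger
    o_min, o_max = older
    return round(o_min - y_max, 1), round(o_max - y_min, 1)

# Antislope-scarp trench events from the abstract: 0.5-3.9 ka, ~8.3 ka, 18.1-21.8 ka.
e1, e2, e3 = (0.5, 3.9), (8.3, 8.3), (18.1, 21.8)
print(interval_bounds(e1, e2))  # (4.4, 7.8) kyr
print(interval_bounds(e2, e3))  # (9.8, 13.5) kyr
```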

    Identifying the boundaries of sinkholes and subsidence areas via trenching and establishing setback distances

    One of the most effective mitigation strategies in sinkhole areas is excluding construction from sinkholes and their vicinity. The application of this preventive measure requires precise mapping of the boundaries of the areas affected by subsidence and the establishment of adequate setback distances, which is an important policy issue with significant economic implications. Through the investigation of several buried sinkholes in the mantled evaporite karst of the Ebro Valley by trenching, this work illustrates that the actual extent of the subsidence areas may be much larger than that inferred from surface mapping and geophysical surveys. The objective and accurate subsurface information acquired from trenches on the outer edge of the deformed ground revealed sinkhole radii 2–3 times larger than initially estimated, increasing the sinkhole area by about an order of magnitude. Trenches can therefore help to reduce mapping uncertainties and the size of setbacks. Moreover, the trenching technique, in combination with geochronological data and retrodeformation analyses, provides critical information on the subsidence phenomena and the characteristics of the sinkholes relevant to hazard assessment. Since the recommended setback distances found in the existing literature are highly variable and rather arbitrary, we include a discussion here on the main factors that should be considered when defining setback zones for sinkholes.
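The jump from a 2–3x radius increase to roughly an order of magnitude more affected area is simply the quadratic scaling of area with radius (assuming, for illustration, an approximately circular sinkhole footprint):

```python
def area_ratio(radius_factor):
    """Area of a circular footprint scales with the square of its radius."""
    return radius_factor ** 2

for f in (2, 3):
    print(f"radius x{f} -> area x{area_ratio(f)}")
# radius x2 -> area x4; radius x3 -> area x9, i.e. close to one order of magnitude.
```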

    A new degree of freedom for memory allocation in clusters

    Improvements in parallel computing hardware usually involve increments in the number of available resources for a given application, such as the number of computing cores and the amount of memory. In the case of shared-memory computers, the increase in computing resources and available memory is usually constrained by the coherency protocol, whose overhead rises with system size, limiting the scalability of the final system. In this paper we propose an efficient and cost-effective way to increase the memory available for a given application by leveraging free memory in other computers in the cluster. Our proposal is based on the observation that many applications benefit from having more memory resources but do not require more computing cores, thus reducing the requirements for cache coherency and allowing a simpler implementation and better scalability. Simulation results show that, when additional mechanisms intended to hide remote memory latency are used, the execution time of applications that use our proposal is similar to the time required to execute them in a computer populated with enough local memory, thus validating the feasibility of our proposal. We are currently building a prototype that implements our ideas. The first results from real executions in this prototype demonstrate not only that our proposal works but also that it can efficiently execute applications that make use of remote memory resources. © 2011 Springer Science+Business Media, LLC. This work has been supported by PROMETEO from Generalitat Valenciana (GVA) under Grant PROMETEO/2008/060.
    Montaner Mas, H.; Silla Jiménez, F.; Fröning, H.; Duato Marín, J.F. (2012). A new degree of freedom for memory allocation in clusters. Cluster Computing 15(2), 101–123. https://doi.org/10.1007/s10586-010-0150-7
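The core idea of the abstract, allocating from another node's free memory once local memory is exhausted, can be sketched as a toy allocation policy. All names, the page-granularity model, and the "pick the node with most free pages" heuristic are illustrative assumptions, not the paper's actual mechanism:

```python
# Toy sketch: satisfy allocations locally when possible, otherwise borrow
# pages from the cluster node with the most free memory.
class ClusterAllocator:
    def __init__(self, local_pages, remote_free):
        self.local_free = local_pages          # free pages on this node
        self.remote_free = dict(remote_free)   # node name -> free pages

    def allocate(self, pages):
        """Prefer local memory; fall back to a remote donor node."""
        if pages <= self.local_free:
            self.local_free -= pages
            return ("local", None)
        donor = max(self.remote_free, key=self.remote_free.get)
        if pages <= self.remote_free[donor]:
            self.remote_free[donor] -= pages
            return ("remote", donor)           # remote pages: higher access latency
        raise MemoryError("no single node has enough free pages")

alloc = ClusterAllocator(local_pages=4, remote_free={"node1": 8, "node2": 2})
print(alloc.allocate(3))   # ('local', None)
print(alloc.allocate(3))   # ('remote', 'node1') - only 1 local page remains
```

In the paper's setting the latency cost of the remote path is what the additional latency-hiding mechanisms are meant to offset.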

    Physics-Based Earthquake Simulations in Slow-Moving Faults: A Case Study From the Eastern Betic Shear Zone (SE Iberian Peninsula)

    In regions with slow-moving faults, the incompleteness of earthquake and fault data complicates the study of seismic hazard. The instrumental and historical seismic catalogs cover a short period compared with the long time intervals between major events. Paleoseismic evidence allows us to increase the time frame of actual observations, but data are still scarce and imprecise. Physics-based earthquake simulations overcome the limitations of actual earthquake catalogs and generate long-term synthetic seismicity. The RSQSim earthquake simulator used in our study reproduces the earthquake physical processes based on a 3D fault model that contains the kinematics, the long-term slip rates and the rate-and-state friction properties of the main seismogenic sources of a region. The application of earthquake simulations to the Eastern Betic Shear Zone, a slow-moving fault system in southeastern Spain, allows the compilation of 100 kyr synthetic catalogs of MW > 4.0 events. Multisection earthquakes and complete ruptures of some faults in this region, preferentially on strike-slip dominant ruptures, are possible according to our simulations. The largest MW > 6.5 events are likely the result of jumping ruptures between the Carboneras and the Palomares faults, with recurrence times of < 20,000 years, and less frequently between the Alhama de Murcia and the Los Tollos faults. A great variability of interevent times is observed between successive synthetic seismic cycles, in addition to the occurrence of complex co-ruptures between faults. Consequently, the occurrence of larger earthquakes, even MW ≄ 7.0, cannot be ruled out, contrasting with the low to moderate magnitudes recorded in the instrumental and historical earthquake catalog.
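The rate-and-state friction properties mentioned above follow, in the standard Dieterich formulation, the law mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc). A minimal sketch of this law (parameter values are illustrative, and this is the textbook form, not necessarily RSQSim's exact parameterization):

```python
import math

def rate_state_friction(v, theta, mu0=0.6, a=0.010, b=0.015, v0=1e-6, dc=1e-2):
    """Dieterich rate-and-state friction coefficient.
    v: slip rate (m/s); theta: state variable (s); other values are illustrative."""
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)

# At steady state theta = dc / v, so mu reduces to mu0 + (a - b) * ln(v / v0).
v = 1e-4
theta_ss = 1e-2 / v
mu_ss = rate_state_friction(v, theta_ss)
print(mu_ss)  # with a < b (velocity-weakening), friction drops at higher slip rate
```

The a < b (velocity-weakening) case is what permits the stick-slip instabilities that the simulator turns into synthetic earthquakes.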

    Archaeoseismology: Methodological issues and procedure

    Archaeoseismic research contributes important data on past earthquakes. The usefulness of archaeoseismology is limited, however, by the lack of sustained discussion of its methodology. The methodological issues are particularly important because archaeoseismological investigations of past earthquakes make use of a large variety of methods. Typical in situ investigations include: (1) reconstruction of the local archaeological stratigraphy, aimed at defining the correct position and chronology of a destruction layer presumably related to an earthquake; (2) analysis of the deformations potentially due to seismic shaking or secondary earthquake effects, detectable on walls; (3) analysis of the depositional characteristics of the collapsed material; (4) investigations of the local geology and geomorphology to define possible natural cause(s) of the destruction; (5) investigations of the local factors affecting ground motion amplification; and (6) estimation of the dynamic excitation that affected the site under investigation. Subsequently, a 'territorial' approach testing evidence of synchronous destruction in a certain region may delineate the extent of the area struck by the earthquake. The most reliable results of an archaeoseismological investigation are obtained by application of modern geoarchaeological practice (archaeological stratigraphy plus geological-geomorphological data), with the addition of a geophysical-engineering quantitative approach and (if available) historical information. This gives a basic dataset necessary to perform quantitative analyses which, in turn, corroborate the archaeoseismic hypothesis. Since archaeoseismological investigations can reveal the possible natural causes of destruction at a site, they contribute to the wider field of environmental archaeology, which seeks to define the history of the relationship between humans and the environment.
    Finally, by improving knowledge of past seismicity, these studies can contribute to regional seismic hazard estimation.
    • 
