25 research outputs found

    Traditional and modified Newmark displacement methods after the 2022 Ms 6.8 Luding earthquake (Eastern Tibetan Plateau)

    The Newmark displacement (ND) method, which reproduces the interactions between waves, solids, and fluids during an earthquake, has undergone numerous modifications. We compare the performance of a traditional and a modified version of the ND method through the analysis of co-seismic landslides triggered by the 2022 Ms 6.8 Luding earthquake (Sichuan, China). We implemented 23 ND scenarios with each equation, assuming different landslide depths as well as various soil-rock geomechanical properties derived from previous studies in regions of similar lithology. These scenarios allowed us to verify the presence or absence of such landslides and to predict the likely locations of occurrence. We also evaluated the effects of topographic and slope-aspect amplification on both equations. The older equation has better landslide predictive ability, as it considers both slope stability and earthquake intensity. By contrast, the newer version of the ND method places greater emphasis on slope stability than on earthquake intensity and hence tends to give high ND values only when the critical acceleration is low. Topographic amplification does not improve the predictive capacity of either equation, most likely because few or no massive landslides were triggered from mountain peaks. This approach allows structural, focal-mechanism, and site effects to be considered when designing ND models, which could help to explain and predict new landslide distribution patterns, such as the abundance of landslides on the NE-, E-, S-, and SE-facing slopes observed in the Luding case.
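
    To make the mechanics concrete, the following is a minimal Python sketch of the classic Newmark rigid-block calculation that underlies ND methods: permanent displacement accumulates by double-integrating the portion of the ground acceleration that exceeds the critical acceleration. The function name, the synthetic accelerogram, and the critical-acceleration value are illustrative only and are not taken from the paper's equations.

```python
import numpy as np

def newmark_displacement(accel, dt, a_c):
    """Classic Newmark rigid-block displacement (m) from a ground-motion
    record `accel` (m/s^2) sampled every `dt` seconds, given a critical
    acceleration `a_c` (m/s^2) derived from the static factor of safety."""
    vel = 0.0    # sliding-block velocity relative to the slope
    disp = 0.0   # accumulated permanent displacement
    for a in accel:
        if vel > 0.0 or a > a_c:
            vel += (a - a_c) * dt   # integrate the excess acceleration
            vel = max(vel, 0.0)     # the block cannot slide back uphill
            disp += vel * dt
    return disp

# Toy usage with a synthetic pulse; real studies use recorded or simulated
# accelerograms and a_c obtained from slope geometry and strength parameters.
t = np.arange(0.0, 10.0, 0.01)
accel = 2.0 * np.sin(2 * np.pi * 1.0 * t)   # m/s^2
print(newmark_displacement(accel, 0.01, a_c=0.5))
```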

    County comprehensive geohazard modelling based on the grid maximum method

    Sichuan Province is characterized by great differences in topography and lithologic structure and by the frequent occurrence of various local disasters, so it is of great significance to evaluate susceptibility to geological disasters there. Rockfalls and debris flows are landslides in the broad sense. Taking Danba County, Sichuan Province, as a case study, the spatial probability distributions of collapses, landslides, and debris flows are considered jointly to assess regional susceptibility to different types of geological hazards. Within ArcGIS, 10 key controlling factors of geological hazards, such as elevation and slope, were extracted with the help of a high-precision digital elevation model, and comprehensive geological hazard susceptibility was evaluated with an information content model. Finally, the Cell Statistics function of ArcGIS was used to combine the multiple raster layers through the cell-wise (grid) maximum method into a comprehensive susceptibility map, and the ROC curve was further used to verify the accuracy of the single-hazard susceptibility models. Using the natural break point method, very low-, low-, medium-, high-, and very high-susceptibility areas were delineated; the high- and very high-susceptibility areas are mainly concentrated in Zhanggu Town, Taipingqiao Township, and Jiaju Town. This paper shows that the information content model can evaluate a single type of geological hazard and that the grid maximum method is an effective way to obtain a comprehensive susceptibility assessment.
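
    The grid maximum step itself is simple; below is a minimal sketch, assuming three hypothetical single-hazard susceptibility rasters already aligned on the same grid, of the cell-wise maximum and a five-class split (fixed thresholds stand in for the Jenks natural breaks used in the paper).

```python
import numpy as np

# Hypothetical susceptibility rasters (same grid and extent), values in [0, 1],
# one per hazard type, e.g. produced by an information content model.
rockfall  = np.random.rand(500, 500)
landslide = np.random.rand(500, 500)
debris    = np.random.rand(500, 500)

# Grid (cell-wise) maximum: each cell keeps the highest single-hazard
# susceptibility, mirroring ArcGIS Cell Statistics with the MAXIMUM statistic.
combined = np.maximum.reduce([rockfall, landslide, debris])

# Classify into five zones; fixed breaks stand in for natural (Jenks) breaks.
breaks = [0.2, 0.4, 0.6, 0.8]
zones = np.digitize(combined, breaks)   # 0 = very low ... 4 = very high
```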

    A novel historical landslide detection approach based on LiDAR and lightweight attention U-Net

    Rapid and accurate identification of landslides is an essential part of landslide hazard assessment and is particularly useful for land use planning, disaster prevention, and risk control. Recent alternatives to manual landslide mapping are moving in the direction of artificial intelligence-aided recognition of these surface processes. So far, however, the technological advancements have not produced robust automated mapping tools whose domain of validity holds in any area across the globe. For instance, capturing historical landslides in densely vegetated areas is still a challenge. This study proposes a deep learning method based on Light Detection and Ranging (LiDAR) data for the automatic identification of historical landslides and tests it in the Jiuzhaigou earthquake-hit region of Sichuan Province (China). Specifically, we generated a Red Relief Image Map (RRIM) from high-precision airborne LiDAR data and, on the basis of this information, trained a Lightweight Attention U-Net (LAU-Net) to map a total of 1949 historical landslides. Overall, our model recognized these landslides with high accuracy and relatively low computational cost. We compared multiple performance indexes across several deep learning routines and different data types. The results showed that the multi-class mean intersection over union (MIoU) and F1 score of the LAU-Net with RRIM input reached 82.29% and 87.45%, respectively, the best performance among the methods we tested.
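
    The exact LAU-Net architecture is not reproduced here; as an illustration of the general idea, the sketch below shows a generic additive attention gate of the kind used in attention U-Nets (after Oktay et al.), which re-weights encoder skip-connection features before they reach the decoder. The channel sizes and tile dimensions are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Generic additive attention gate for U-Net skip connections;
    a stand-in illustration, not the paper's exact LAU-Net module."""
    def __init__(self, skip_channels, gate_channels, inter_channels):
        super().__init__()
        self.theta = nn.Conv2d(skip_channels, inter_channels, kernel_size=1)
        self.phi = nn.Conv2d(gate_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, skip, gate):
        # skip: encoder features; gate: decoder features at the same resolution
        attn = torch.relu(self.theta(skip) + self.phi(gate))
        attn = torch.sigmoid(self.psi(attn))   # per-pixel weight in [0, 1]
        return skip * attn                     # suppress irrelevant background

# Example: re-weight a 64-channel skip connection on a 128x128 RRIM tile.
gate = AttentionGate(skip_channels=64, gate_channels=128, inter_channels=32)
skip = torch.randn(1, 64, 128, 128)
dec = torch.randn(1, 128, 128, 128)
out = gate(skip, dec)   # shape (1, 64, 128, 128)
```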

    Periodically twinned nanotowers and nanodendrites of mercury selenide synthesized via a solution-liquid-solid route

    Two types of mercury selenide nanostructures, nanotowers and nanoscale dendrites, have been created with good control and in high yield by a solution-liquid-solid process. Alternating twinned structures have been achieved in both the nanotowers and the nanodendrites; these originate from self-oscillations of local reaction variables sustained by the competition between the rates of supply and deposition of HgSe in the liquid mercury droplets.

    In Vitro Vascular-Protective Effects of a Tilapia By-Product Oligopeptide on Angiotensin II-Induced Hypertensive Endothelial Injury in HUVEC by Nrf2/NF-κB Pathways

    Angiotensin II (Ang II) is closely involved in endothelial injury during the development of hypertension. In this study, the protective effects of the tilapia by-product oligopeptide Leu-Ser-Gly-Tyr-Gly-Pro (LSGYGP) against oxidative stress and endothelial injury in Ang II-stimulated human umbilical vein endothelial cells (HUVEC) were evaluated. LSGYGP dose-dependently suppressed the fluorescence intensities of nitric oxide (NO) and reactive oxygen species (ROS), inhibited the nuclear factor-kappa B (NF-κB) pathway, and reduced inducible nitric oxide synthase (iNOS), cyclooxygenase-2 (COX-2), and endothelin-1 (ET-1) expression, as shown by western blotting. In addition, it attenuated the expression of gamma-glutamyltransferase (GGT) and heme oxygenase 1 (HO-1) while increasing superoxide dismutase (SOD) and glutathione (GSH) expression through the nuclear factor erythroid 2-related factor 2 (Nrf2) pathway. Further experiments revealed that LSGYGP increased the inhibition of apoptosis, as reflected by the cleaved caspase-3/procaspase-3 ratio, reduced the pro-apoptotic Bax/Bcl-2 expression ratio, inhibited phosphorylation of mitogen-activated protein kinases (MAPK), and increased phosphorylation in the serine/threonine kinase (Akt) pathway. Furthermore, LSGYGP significantly decreased Ang II-induced DNA damage in a comet assay, and molecular docking results suggested that the stable interaction between LSGYGP and NF-κB may be attributed to hydrogen bonds. These results suggest that this oligopeptide is effective in protecting against Ang II-induced HUVEC injury by reducing oxidative stress and alleviating endothelial damage, and thus it has potential for the therapeutic treatment of hypertension-associated diseases.

    Deconstructing iterative optimization

    Iterative optimization is a popular compiler optimization approach that has been studied extensively over the past decade. In this article, we deconstruct iterative optimization by evaluating whether it works across datasets and by analyzing why it works. Up to now, most iterative optimization studies have been based on a premise that was never truly evaluated: that it is possible to learn the best compiler optimizations across datasets. In this article, we evaluate this question for the first time with a very large number of datasets. We compose KDataSets, a dataset suite with 1000 datasets for 32 programs, which we release to the public. We characterize the diversity of KDataSets and subsequently use it to evaluate iterative optimization. For all 32 programs, we find that there exists at least one combination of compiler optimizations that achieves at least 83% of the best possible speedup across all datasets on two widely used compilers (Intel's ICC and GNU's GCC). This optimal combination is program-specific and yields speedups of up to 3.75x (averaged across the datasets of a program) over the highest optimization level of the compilers (-O3 for GCC and -fast for ICC). This finding suggests that optimizing programs across datasets might be much easier than previously anticipated. In addition, we evaluate the idea of introducing compiler choice as part of iterative optimization, and we find that it can further improve performance because different programs favor different compilers. We also investigate why iterative optimization works by analyzing the optimal combinations and find that only a handful of optimizations yield most of the speedup. Finally, we show through two case studies that optimizations interact in complex and sometimes counterintuitive ways, which confirms that iterative optimization is an irreplaceable and important compiler strategy.
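
    As a rough illustration of what an iterative-optimization experiment looks like in practice, the sketch below randomly samples combinations of GCC flags, measures the mean runtime of a program across several datasets, and keeps the combination with the best speedup over a plain -O2 baseline. The program name, input files, flag subset, and sample count are hypothetical; the study's actual search covers a far larger space and both ICC and GCC.

```python
import random
import subprocess
import time

FLAGS = ["-funroll-loops", "-ftree-vectorize", "-finline-functions",
         "-fomit-frame-pointer", "-ffast-math"]          # illustrative subset
DATASETS = ["in1.txt", "in2.txt", "in3.txt"]             # hypothetical inputs

def build(flags):
    # Rebuild the benchmark with the candidate flag combination.
    subprocess.run(["gcc", "-O2", *flags, "prog.c", "-o", "prog"], check=True)

def run_time(dataset):
    start = time.perf_counter()
    subprocess.run(["./prog", dataset], check=True)
    return time.perf_counter() - start

def mean_runtime(flags):
    build(flags)
    return sum(run_time(d) for d in DATASETS) / len(DATASETS)

baseline = mean_runtime([])            # plain -O2 as the reference point
best_flags, best_speedup = [], 1.0
for _ in range(50):                    # random sampling of the flag space
    combo = [f for f in FLAGS if random.random() < 0.5]
    speedup = baseline / mean_runtime(combo)
    if speedup > best_speedup:
        best_flags, best_speedup = combo, speedup
print(best_flags, best_speedup)
```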

    Preventive Effect of YGDEY from Tilapia Fish Skin Gelatin Hydrolysates against Alcohol-Induced Damage in HepG2 Cells through ROS-Mediated Signaling Pathways

    According to a previous study, YGDEY from tilapia fish skin gelatin hydrolysates has strong free radical scavenging activity. In the present study, the protective effect of YGDEY against oxidative stress induced by ethanol in HepG2 cells was investigated. First, cells were incubated with YGDEY (10, 20, 50, and 100 μM) to assess cytotoxicity, and no significant change in cell viability was observed. Next, it was established that YGDEY decreased the production of reactive oxygen species (ROS). Western blot results indicated that YGDEY increased the levels of superoxide dismutase (SOD) and glutathione (GSH) and decreased the expression of gamma-glutamyltransferase (GGT) in HepG2 cells. YGDEY also markedly reduced the expression of Bax and cleaved caspase-3 (c-caspase-3); inhibited the phosphorylation of Akt, IκB-α, p65, and p38; and increased the level of Bcl-2. Moreover, the comet assay showed that YGDEY effectively decreased ethanol-induced DNA damage. Thus, YGDEY protected HepG2 cells from alcohol-induced injury by inhibiting oxidative stress, and this effect may be associated with the Akt/nuclear factor-κB (NF-κB)/mitogen-activated protein kinase (MAPK) signal transduction pathways. These results demonstrate that YGDEY from tilapia fish skin gelatin hydrolysates protects HepG2 cells from oxidative stress, making it a potential functional food ingredient.

    Performance portability across heterogeneous SoCs using a generalized library-based approach

    Because of tight power and energy constraints, industry is progressively shifting toward heterogeneous system-on-chip (SoC) architectures composed of a mix of general-purpose cores and a number of accelerators. However, such SoC architectures can be very challenging for the vast majority of programmers to program efficiently, owing to the multitude of programming approaches and languages involved. Libraries, on the other hand, provide a simple way to let programmers take advantage of complex architectures without requiring them to learn accelerator-specific or domain-specific languages. Increasingly, library-based, also called algorithm-centric, programming approaches propose to generalize the use of libraries and to compose programs around them, instead of using libraries as mere complements. In this article, we present a software framework for achieving performance portability by leveraging such a generalized library-based approach. Inspired by the notion of a component, as employed in software engineering and HW/SW codesign, we advocate that nonexpert programmers write simple wrapper code around existing libraries to provide simple but necessary semantic information to the runtime. To achieve performance portability, the runtime employs machine learning (simulated annealing) to select the most appropriate accelerator and its parameters for a given algorithm. This selection factors in the possibly complex composition of algorithms used in the application, the communication among the various accelerators, and the tradeoff between different objectives (i.e., accuracy, performance, and energy). Using a set of benchmarks run on a real heterogeneous SoC composed of a multicore processor and a GPU, we show that the runtime overhead is fairly small: 5.1% for the GPU and 6.4% for the multicore. We then apply our accelerator selection approach to a simulated SoC platform containing multiple inexact accelerators and show that accelerator selection together with hardware parameter tuning achieves an average 46.2% energy reduction and a speedup of 2.1× while meeting the desired application error target.
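
    The simulated-annealing selection loop can be sketched generically as below. The accelerator list, parameters, and the toy cost model combining estimated time and energy are assumptions standing in for the runtime's real measurements and objectives; only the annealing structure (accept improvements, accept worse moves with temperature-dependent probability) reflects the technique named in the article.

```python
import math
import random

# Hypothetical (accelerator, parameter) configurations and a stand-in cost
# model; a real runtime would measure time, energy, and accuracy on the SoC.
CONFIGS = [("cpu", t) for t in (1, 2, 4)] + [("gpu", b) for b in (64, 128, 256)]

def cost(config):
    device, param = config
    time_est = 1.0 / param if device == "gpu" else 2.0 / param
    energy_est = 0.5 * param if device == "gpu" else 0.2 * param
    return time_est + 0.01 * energy_est     # weighted multi-objective cost

def anneal(steps=200, temp=1.0, cooling=0.98):
    current = random.choice(CONFIGS)
    best = current
    for _ in range(steps):
        candidate = random.choice(CONFIGS)
        delta = cost(candidate) - cost(current)
        # Accept improvements outright; accept worse moves with a
        # probability that shrinks as the temperature cools.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate
            if cost(current) < cost(best):
                best = current
        temp *= cooling
    return best

print(anneal())   # lowest-cost (accelerator, parameter) pair found
```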

    Practical iterative optimization for the data center

    Iterative optimization is a simple but powerful approach that searches for the best possible combination of compiler optimizations for a given workload. However, iterative optimization is plagued by several practical issues that prevent it from being widely used in practice: a large number of runs are required to find the best combination, the optimal combination is dataset dependent, and the exploration process incurs significant overhead that must be compensated for by performance benefits. Therefore, although iterative optimization has been shown to have significant performance potential, it is seldom used in production compilers. In this article, we propose iterative optimization for the data center (IODC): we show that the data center offers a context in which all of the preceding hurdles can be overcome. The basic idea is to spawn different combinations across workers and collect performance statistics at the master, which then evolves toward the optimum combination of compiler optimizations. IODC carefully manages costs and benefits, and it is transparent to the end user. To bring IODC to practice, we evaluate it in the presence of co-runners to better reflect real-life data center operation with multiple applications co-running per server. We enhance IODC with the capability to find compatible co-runners, along with a mechanism to dynamically adjust its level of aggressiveness, to improve its robustness in the presence of co-running applications. We evaluate IODC using both MapReduce and compute-intensive throughput server applications. To reflect the large number of users interacting with the system, we gather a very large collection of datasets (up to hundreds of millions of unique datasets per program), for a total storage of 16.4TB and 850 days of CPU time. We report an average performance improvement of 1.48x and up to 2.08x for five MapReduce applications, and 1.12x and up to 1.39x for nine server applications. Furthermore, our experiments demonstrate that IODC is effective in the presence of co-runners, improving performance by more than 13% compared to the worst possible co-runner schedule.
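
    A much-simplified, single-machine stand-in for the IODC master/worker loop is sketched below: the master mutates the incumbent best flag combination, fans the candidates out to workers (a local process pool here), collects the reported runtimes, and keeps the best. The flag list and the synthetic measure function are hypothetical; a real deployment would compile and time actual user requests on data center servers.

```python
import random
from concurrent.futures import ProcessPoolExecutor

FLAGS = ["-funroll-loops", "-ftree-vectorize", "-finline-functions",
         "-fipa-pta", "-fomit-frame-pointer"]            # illustrative subset

def measure(combo):
    # Stand-in for a worker compiling and timing a request with `combo`;
    # a real deployment reports measured runtimes back to the master.
    return 10.0 * (1.0 - 0.05 * len(combo)) + random.uniform(-0.3, 0.3)

def mutate(combo):
    # Toggle one randomly chosen flag on or off.
    flag = random.choice(FLAGS)
    return sorted(set(combo) ^ {flag})

if __name__ == "__main__":
    best, best_time = [], measure([])
    with ProcessPoolExecutor() as pool:                    # the "workers"
        for generation in range(20):
            candidates = [mutate(best) for _ in range(8)]  # spawn candidates
            times = list(pool.map(measure, candidates))
            for combo, t in zip(candidates, times):
                if t < best_time:                          # master keeps the best
                    best, best_time = combo, t
    print(best, best_time)
```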