25 research outputs found

    Holocene climate change and anthropogenic activity records in Svalbard: a unique perspective based on Chinese research from Ny-Ålesund

    Climate change in the Arctic region is more rapid than in other areas owing to Arctic amplification. To better understand this change and its driving mechanisms, long-term reconstructions spanning the Holocene and high-resolution records of the past few hundred years are required. Intense anthropogenic activities in the Arctic have also had a great impact on the local environment. Here, we review the Holocene climate change record, the responses of ecosystems to climate change, and the anthropogenic impacts on the environment, based mainly on Chinese research from Ny-Ålesund. Climate reconstruction studies from Svalbard have revealed several cold episodes during the Holocene, which are consistent with ice-rafting events in the North Atlantic region and glacier activity in Greenland, Iceland, and Svalbard. The ecosystem showed corresponding responses to climate change, especially during the late Holocene. Over recent decades, anthropogenic activities have caused serious pollution and environmental deterioration in the areas of Svalbard frequented by people. Greater environmental protection is therefore needed to reduce these impacts.

    Primary and potential secondary risks of landslide outburst floods

    Outburst floods triggered by the breaching of landslide dams can cause severe loss of life and property downstream. Accurate identification and assessment of such floods, especially when they lead to secondary impacts, are critical. In 2018, the Baige landslide on the Tibetan Plateau twice blocked the Jinsha River, eventually resulting in a severe outburst flood. The Baige landslide remains active, and a further breach could occur. Based on numerical simulation with a hydrodynamic model, remote sensing, and field investigation, we reproduce the outburst flood process and assess the hazard associated with future floods. The results show that the hydrodynamic model accurately simulated the outburst flood process, with an overall accuracy of 0.956 and a Kappa coefficient of 0.911 for the flood extent. Three future dam-break scenarios were considered, with landslide dam heights of 30 m, 35 m, and 51 m. The corresponding impounded storage capacities and backwater lengths in the upstream valley were 142 × 10⁶ m³ over 32 km, 182 × 10⁶ m³ over 40 km, and 331 × 10⁶ m³ over 50 km. Failure of these three dams would lead to maximum inundation extents of 0.18 km², 0.34 km², and 0.43 km², implying significant out-of-bank flow and serious impacts on infrastructure. These results demonstrate the seriousness of the secondary hazards associated with this region.
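    As a hedged illustration of the accuracy figures quoted above, the following minimal sketch computes overall accuracy and Cohen's kappa for a simulated flood extent compared against an observed one; the two binary masks are synthetic stand-ins, not the study's data.

        import numpy as np

        def overall_accuracy_and_kappa(predicted, observed):
            """Overall accuracy and Cohen's kappa for two binary masks."""
            predicted = predicted.astype(bool).ravel()
            observed = observed.astype(bool).ravel()
            po = (predicted == observed).mean()  # observed agreement
            # Expected agreement by chance, from the marginal class frequencies
            pe = (predicted.mean() * observed.mean()
                  + (1 - predicted.mean()) * (1 - observed.mean()))
            return po, (po - pe) / (1 - pe)

        rng = np.random.default_rng(0)
        observed = rng.random((500, 500)) < 0.2                  # stand-in observed extent
        predicted = observed ^ (rng.random((500, 500)) < 0.03)   # a few flipped cells
        acc, kappa = overall_accuracy_and_kappa(predicted, observed)
        print(f"overall accuracy = {acc:.3f}, kappa = {kappa:.3f}")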

    Quantitative risk analysis of toppling slope considering seismic risk

    Risk analysis and assessment are important tools for addressing the inherent uncertainty of slopes. At present, few studies have carried out systematic quantitative risk analysis of slopes that simultaneously considers the uncertainty of external loads and of internal geotechnical parameters. This paper takes the toppling slope behind the power plant of the Zhala hydropower station in Tibet as an example. Based on the probability density function (PDF) of the site seismic peak acceleration and a fitted function for the slope failure probability under different seismic peak accelerations, the overall slope failure probability is calculated by numerical integration, and the influence range of the slope is simulated with the discrete element method (DEM). Vulnerability analysis and quantitative risk calculation for the elements at risk are then carried out, and the ALARP criterion is used for risk assessment. The results show that, considering the seismic risk, the failure probability of the slope is 0.0619 over the 50-year design reference period. The slope poses a great threat to the ground powerhouse of the hydropower station, with a corresponding economic risk of 54.82 million RMB. According to the ALARP criterion, the slope risk is in the unacceptable region, and measures should be taken to prevent or avoid it. These results provide guidance for decision-making and risk management in slope treatment engineering.
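    As a hedged sketch of the numerical integration step described above, the code below combines an assumed lognormal PDF of site peak acceleration with an assumed logistic fit of conditional failure probability; both functional forms and all parameter values are illustrative placeholders, since the paper's fitted functions are not reproduced here.

        import numpy as np

        def pga_pdf(a, mu=-2.0, sigma=0.8):
            """Assumed lognormal PDF of peak ground acceleration a (in g)."""
            return (np.exp(-(np.log(a) - mu) ** 2 / (2 * sigma ** 2))
                    / (a * sigma * np.sqrt(2 * np.pi)))

        def failure_prob_given_pga(a, a50=0.35, k=12.0):
            """Assumed logistic fit of slope failure probability given PGA = a."""
            return 1.0 / (1.0 + np.exp(-k * (a - a50)))

        # Overall failure probability: P_f = integral of P(failure | a) * f(a) da,
        # evaluated with a trapezoidal sum over a fine PGA grid.
        a = np.linspace(1e-4, 2.0, 20000)
        integrand = failure_prob_given_pga(a) * pga_pdf(a)
        p_f = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(a))
        print(f"overall failure probability over the design period: {p_f:.4f}")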

    An Improved Method for the Evaluation and Local Multi-Scale Optimization of the Automatic Extraction of Slope Units in Complex Terrains

    Slope units (SUs) are sub-watersheds bounded by ridge and valley lines. A slope unit reflects the physical relationship between landslides and geomorphological features and is especially useful for landslide susceptibility modeling. There have been significant algorithmic advances in the automatic delineation of SUs, but the intrinsic difficulties of determining input parameters and correcting unreasonable SUs have hindered their wide application. We propose an improved method for the evaluation and local multi-scale optimization of automatically extracted SUs, in which groups of SUs more consistent with the topographic features are obtained through a stepwise approach that moves from a global optimum to local refinement. First, preliminary subdivisions of multiple SUs were obtained with the r.slopeunits software. The optimal subdivision scale was selected by a collaborative evaluation approach capable of simultaneously measuring objective minimum discrepancies and seeking a global optimum. Second, at the selected optimal scale, unreasonable SUs, namely over-subdivided slope units (OSSUs) and under-subdivided slope units (USSUs), were distinguished. A local average similarity (LS) metric was designed for each SU based on its area, common boundary, and neighborhood variability, and the inflection points of the cumulative frequency curve of LS were taken as the intervals distinguishing these unrealistic SUs, using the maximum interclass variance (Otsu) threshold. Third, an optimization mechanism comprising the re-subdivision of USSUs and the merging of OSSUs was applied. We thus obtained SUs composed of terrain subdivisions at multiple scales, one of the few available approaches that is not restricted to a single scale. The statistical distributions of density, size, and shape demonstrate the excellent performance of the refined SUs in capturing the variability of complex terrains. Because diverse features are integrated for each object, a significant advantage is that processing can shift from the terrain as a whole to each precise individual unit.
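    The distinguishing step above relies on the maximum interclass variance criterion; the minimal sketch below applies an Otsu threshold to a set of LS values to separate unreasonable SUs from well-formed ones. The LS values are synthetic stand-ins; the paper derives them from each SU's area, common boundary, and neighborhood variability.

        import numpy as np

        def otsu_threshold(values, bins=256):
            """Threshold maximizing the between-class (interclass) variance."""
            hist, edges = np.histogram(values, bins=bins)
            centers = (edges[:-1] + edges[1:]) / 2
            w = hist / hist.sum()
            cum_w = np.cumsum(w)             # cumulative weight of the low class
            cum_mu = np.cumsum(w * centers)  # cumulative mean
            mu_total = cum_mu[-1]
            # Between-class variance for every candidate split point
            with np.errstate(divide="ignore", invalid="ignore"):
                sigma_b = (mu_total * cum_w - cum_mu) ** 2 / (cum_w * (1 - cum_w))
            return centers[np.argmax(np.nan_to_num(sigma_b))]

        rng = np.random.default_rng(0)
        ls = np.concatenate([rng.normal(0.3, 0.05, 800),    # assumed unreasonable SUs
                             rng.normal(0.7, 0.08, 2000)])  # assumed well-formed SUs
        print(f"LS threshold separating unreasonable SUs: {otsu_threshold(ls):.3f}")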

    Surface Detection of Solid Wood Defects Based on SSD Improved with ResNet

    Owing to the scarcity of forest resources in China and the low efficiency of wood surface defect detection, the output of solid wood panels is low. This paper therefore proposes a method for detecting typical surface defects of solid wood panels based on the Single Shot MultiBox Detector (SSD) algorithm. The wood panel images are acquired by an independently designed image acquisition system. The SSD model comprises the first five layers of the VGG16 network, the SSD feature mapping layer, the feature detection layer, and a Non-Maximum Suppression (NMS) module. We used TensorFlow to train the network and further improved the SSD network structure: as the base network of the improved model, a deep residual network (ResNet) replaced the VGG part of the original SSD to optimize the input features for the bounding box regression and classification tasks. The solid wood panels studied are Chinese fir and pine, with defects including live knots, dead knots, decay, mildew, cracks, and pinholes. More than 5,000 samples were collected, and the data set was expanded to 100,000 images through data augmentation. With the improved SSD model, the average detection accuracy was 89.7% and the average detection time was 90 ms, improving both detection accuracy and detection speed.
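    The NMS module named above discards overlapping duplicate detections; the following minimal sketch shows the standard greedy NMS pass on (x1, y1, x2, y2) defect boxes, with an illustrative IoU threshold rather than the paper's setting.

        import numpy as np

        def nms(boxes, scores, iou_thresh=0.5):
            """Greedy non-maximum suppression; returns indices of kept boxes."""
            x1, y1, x2, y2 = boxes.T
            areas = (x2 - x1) * (y2 - y1)
            order = scores.argsort()[::-1]  # indices sorted by descending score
            keep = []
            while order.size > 0:
                i = order[0]
                keep.append(i)
                # Intersection of the top box with every remaining box
                xx1 = np.maximum(x1[i], x1[order[1:]])
                yy1 = np.maximum(y1[i], y1[order[1:]])
                xx2 = np.minimum(x2[i], x2[order[1:]])
                yy2 = np.minimum(y2[i], y2[order[1:]])
                inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
                iou = inter / (areas[i] + areas[order[1:]] - inter)
                order = order[1:][iou <= iou_thresh]  # drop heavily overlapping boxes
            return keep

        boxes = np.array([[10, 10, 60, 60], [12, 12, 62, 62], [100, 100, 150, 150]], float)
        scores = np.array([0.9, 0.8, 0.7])
        print(nms(boxes, scores))  # the second box overlaps the first and is dropped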

    Wood Defect Detection Based on Depth Extreme Learning Machine

    Deep learning feature extraction and extreme learning machine (ELM) classification are combined to establish a depth extreme learning machine model for wood image defect detection. A convolutional neural network (CNN) alone tends to give inaccurate defect locations, incomplete defect contour and boundary information, and inaccurate recognition of defect types. The nonsubsampled shearlet transform (NSST) is first used to preprocess the wood images, which reduces the complexity and computational cost of image processing. A CNN is then applied for deep feature extraction from the wood images, the simple linear iterative clustering algorithm is used to improve the initial model, and the resulting image features serve as the ELM classification inputs. The ELM has a faster training speed and stronger generalization ability than comparable neural networks, but the random selection of its input weights and thresholds degrades classification accuracy, so a genetic algorithm is used to optimize the initial ELM parameters and stabilize the network's classification performance. The depth extreme learning machine can extract high-level abstract information from the data, requires no iterative adjustment of the network weights, and is computationally efficient, while the CNN effectively extracts the wood defect contours; the distributed input-data features are automatically expressed in layered form by deep learning pre-training. The wood defect recognition accuracy reached 96.72% with a test time of only 187 ms.
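    As a hedged sketch of the ELM classifier at the core of this model, the code below draws random hidden-layer weights and solves the output weights in closed form with a pseudo-inverse, so no iterative weight adjustment is needed. The random inputs stand in for CNN-extracted defect features, and the genetic algorithm optimization of the random weights is omitted.

        import numpy as np

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        class ELM:
            def __init__(self, n_in, n_hidden, n_out, seed=0):
                rng = np.random.default_rng(seed)
                self.W = rng.normal(size=(n_in, n_hidden))  # random input weights
                self.b = rng.normal(size=n_hidden)          # random hidden thresholds
                self.n_out = n_out

            def fit(self, X, y):
                H = sigmoid(X @ self.W + self.b)   # hidden-layer activations
                T = np.eye(self.n_out)[y]          # one-hot targets
                self.beta = np.linalg.pinv(H) @ T  # closed-form output weights
                return self

            def predict(self, X):
                return (sigmoid(X @ self.W + self.b) @ self.beta).argmax(axis=1)

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 32))           # stand-in CNN features
        y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in defect labels
        model = ELM(32, 100, 2).fit(X, y)
        print("training accuracy:", (model.predict(X) == y).mean())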

    Detection System for U-Shaped Bellows Convolution Pitches Based on a Laser Line Scanner

    An expansion joint is mainly composed of bellows and other components; it is attached to a vessel shell or pipe to compensate for the additional stress caused by temperature differences and mechanical vibration. In China, fatigue tests are often used to assess the quality of expansion joints. Fatigue testing changes the convolution pitch, and the amount of change is an important index for evaluating bellows expansion joints. However, convolution pitch inspection is mainly performed manually, on random samples, by inspection agencies before products are shipped to end users; this common practice is inefficient and often subjective. This paper introduces a novel method for automatically detecting the change in convolution pitch based on a laser line scanner and data processing. The laser line scanner is combined with a precision motorized stage to obtain point cloud data of the bellows. After denoising and fitting, a peak-finding algorithm searches for the crests of the convolutions. The procedure for determining the convolution pitch and the decision rule used to judge product eligibility are described in detail. A DN500 expansion joint is used as a sample to illustrate the efficiency of the system. The technique enables higher precision and greater efficiency in the quality inspection of bellows expansion joints, and it has been implemented with great success at the Special Equipment Safety Supervision and Inspection Institute of Jiangsu Province.
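    The peak-finding step above can be illustrated with a short sketch: a synthetic axial profile with a nominal 20 mm pitch stands in for the denoised, fitted scanner data, and the pitch is measured as the spacing between adjacent crests. The height and separation settings are illustrative, not the system's parameters.

        import numpy as np
        from scipy.signal import find_peaks

        # Synthetic profile: 5 mm amplitude convolutions, 20 mm pitch, light noise
        x = np.linspace(0, 200, 4000)  # axial position along the bellows, mm
        rng = np.random.default_rng(0)
        profile = 5 * np.sin(2 * np.pi * x / 20) + 0.05 * rng.normal(size=x.size)

        # Require a minimum height and separation so noise bumps are not counted
        peaks, _ = find_peaks(profile, height=3, distance=200)
        pitches = np.diff(x[peaks])  # pitch = spacing between adjacent crests

        print("measured pitches (mm):", np.round(pitches, 2))
        print("mean pitch (mm):", round(pitches.mean(), 2))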

    Mapping Outburst Floods Using a Collaborative Learning Method Based on Temporally Dense Optical and SAR Data: A Case Study with the Baige Landslide Dam on the Jinsha River, Tibet

    Outburst floods resulting from giant landslide dams can cause devastating damage along hundreds or thousands of kilometres of a river. Accurate and timely delineation of flood-inundated areas is essential for disaster assessment and mitigation. There have been significant advances in flood mapping using remote sensing images in recent years, but little attention has been devoted to outburst flood mapping, where the short duration of the events and observation constraints from cloud cover pose significant challenges. This study used the outburst flood of the Baige landslide dam on the Jinsha River on 3 November 2018 as an example to propose a new flood mapping method that combines optical images from Sentinel-2, synthetic aperture radar (SAR) images from Sentinel-1, and a digital elevation model (DEM). First, in the cloud-free region, a comparison of four spectral indexes calculated from a time series of Sentinel-2 images indicated that the normalized difference vegetation index (NDVI) with a threshold of 0.15 provided the best separation of the flooded area. Second, in the cloud-covered region, analysis of dual-polarization RGB false-color composite images and backscattering coefficient differences from Sentinel-1 SAR data revealed a clear response to the changes in ground roughness caused by the flood. We then built a flood extent prediction model based on the random forest algorithm, with training samples consisting of 13 feature vectors obtained from the Hue-Saturation-Value color space, backscattering coefficient differences and ratios, and DEM data, together with a label set derived from the flood extent mapped with the Sentinel-2 images. Finally, a field investigation and a confusion matrix were used to test the prediction accuracy of the end-of-flood map: the overall accuracy and Kappa coefficient were 92.3% and 0.89, respectively. The full extent of the outburst flood was obtained within five days of its occurrence. The multi-source data merging framework and the large-scale sample preparation method with SAR images proposed in this paper provide a practical demonstration for similar machine learning applications using remote sensing.
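    A hedged sketch of the two mapping steps follows: NDVI thresholding for the cloud-free region and a random forest for the cloud-covered region. All arrays are synthetic stand-ins; only the NDVI formula, the 0.15 threshold, and the 13-feature input length come from the abstract.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        red = rng.uniform(0.02, 0.3, (100, 100))  # stand-in surface reflectance
        nir = rng.uniform(0.02, 0.6, (100, 100))

        # Step 1 (cloud-free): flood where NDVI falls below the 0.15 threshold
        ndvi = (nir - red) / (nir + red + 1e-9)
        flood_mask = ndvi < 0.15

        # Step 2 (cloud-covered): random forest on per-pixel 13-feature vectors
        # (the paper's features come from HSV color space, SAR backscatter
        # differences/ratio, and DEM data), labeled from the Sentinel-2 mask
        features = rng.normal(size=(100 * 100, 13))
        labels = flood_mask.ravel().astype(int)

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(features, labels)
        pred = clf.predict(features).reshape(100, 100)
        print("predicted flooded fraction:", pred.mean())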

    Low Bit Rate Video Quality Assessment Based on Perceptual Characteristics

    Peak signal-to-noise ratio (PSNR) is not a good measure of perceived picture quality, especially at the low coding bit rates typical of mobile communications. This paper proposes a new approach for computing the perceptual distortion of a visual signal in order to provide an objective measure of perceptual quality at low bit rate coding. The regions exhibiting three major perceptually disturbing artefacts, namely damaged edges, blockiness, and ringing, are detected as the basis of the assessment. The correlation of the metric with human perception has been demonstrated with low bit rate CIF test data.
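    For reference, the baseline metric the paper argues against is easy to state: PSNR weights every pixel error equally, which is why it tracks perceived quality poorly when edge damage, blockiness, and ringing dominate. A minimal sketch on a CIF-sized frame follows.

        import numpy as np

        def psnr(reference, coded, max_val=255.0):
            """PSNR in dB between a reference frame and a coded frame."""
            mse = np.mean((reference.astype(float) - coded.astype(float)) ** 2)
            return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

        rng = np.random.default_rng(0)
        ref = rng.integers(0, 256, (288, 352), dtype=np.uint8)  # CIF frame (352x288)
        coded = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
        print(f"PSNR = {psnr(ref, coded):.2f} dB")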