
    Natural and Forced North Atlantic Hurricane Potential Intensity Change in CMIP5 Models

    Possible future changes of North Atlantic hurricane intensity, and the attribution of past hurricane intensity changes in the historical period, are investigated using multimodel, multiensemble simulations from phase 5 of the Coupled Model Intercomparison Project (CMIP5). For this purpose, the potential intensity (PI), the theoretical upper limit of tropical cyclone intensity given the large-scale environment, is used. The CMIP5 models indicate that the PI change as a function of sea surface temperature (SST) variations associated with the Atlantic multidecadal variability (AMV) is more effective than that associated with climate change. Thus, relatively small changes in SST due to natural multidecadal variability can lead to large changes in PI, and the model-simulated multidecadal PI change during the historical period has been largely dominated by the AMV. That said, the multimodel mean PI for the Atlantic main development region shows a significant increase toward the end of the twenty-first century under both the RCP4.5 and RCP8.5 emission scenarios. This is because of enhanced surface warming, which would place the North Atlantic PI largely above the historical mean by the mid-twenty-first century, based on CMIP5 model projections. The authors further attribute the historical PI changes to aerosol and greenhouse gas (GHG) forcing using CMIP5 historical single-forcing simulations. The model simulations indicate that aerosol forcing has been more effective in causing PI changes than the corresponding GHG forcing; the decrease in PI due to aerosols and the increase due to GHG largely cancel each other. Thus, the PI increase in the recent 30 years appears to be dominated by multidecadal natural variability associated with the positive phase of the AMV.
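    The core idea above — that PI rises with the thermodynamic efficiency set by the SST-to-outflow temperature contrast, so small SST changes move PI — can be illustrated with a minimal sketch of an Emanuel-style PI relation. The constant air-sea enthalpy disequilibrium and the coefficient ratio below are illustrative assumptions, not values from the paper:

    ```python
    import numpy as np

    def potential_intensity(sst_k, t_out_k=200.0, delta_k=15e3, ck_over_cd=0.9):
        """Simplified Emanuel-style potential intensity (m/s).

        sst_k      : sea surface temperature (K)
        t_out_k    : outflow temperature near the tropopause (K)
        delta_k    : air-sea enthalpy disequilibrium (J/kg) -- held constant here,
                     a simplifying assumption for illustration only
        ck_over_cd : ratio of enthalpy to drag exchange coefficients

        The Carnot-like efficiency (sst - t_out) / t_out grows with SST, so a
        modest SST increase translates directly into a higher intensity ceiling.
        """
        efficiency = (sst_k - t_out_k) / t_out_k
        return np.sqrt(ck_over_cd * efficiency * delta_k)

    # Warming the surface by 1 K raises the theoretical intensity ceiling:
    pi_base = potential_intensity(300.0)
    pi_warm = potential_intensity(301.0)
    ```

    The monotone dependence on SST is the point of the sketch: both AMV-driven and GHG-driven SST changes act through the same efficiency term, which is why natural multidecadal SST variability can dominate the historical PI signal.
    
    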

    Weakly Supervised Semantic Segmentation for Large-Scale Point Cloud

    Existing methods for large-scale point cloud semantic segmentation require expensive, tedious, and error-prone manual point-wise annotations. Intuitively, weakly supervised training is a direct way to reduce the cost of labeling. However, for weakly supervised large-scale point cloud semantic segmentation, too few annotations inevitably lead to ineffective learning of the network. We propose an effective weakly supervised method with two components to solve this problem. First, we construct a pretext task, i.e., point cloud colorization, with self-supervised learning to transfer prior knowledge learned from a large amount of unlabeled point clouds to the weakly supervised network. In this way, the representation capability of the weakly supervised network is improved by guidance from a heterogeneous task. Second, to generate pseudo labels for unlabeled data, a sparse label propagation mechanism is proposed with the help of generated class prototypes, which are used to measure the classification confidence of unlabeled points. Our method is evaluated on large-scale point cloud datasets covering both indoor and outdoor scenarios. The experimental results show large gains over existing weakly supervised methods and results comparable to fully supervised methods (code based on MindSpore: https://github.com/dmcv-ecnu/MindSpore_ModelZoo/tree/main/WS3_MindSpore).
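    The prototype-based propagation step can be sketched as follows. This is a hypothetical illustration of the general idea, not the paper's implementation: class prototypes are the mean features of the sparsely labeled points, and each unlabeled point receives the label of its most similar prototype only when the cosine similarity exceeds a confidence threshold (`tau` is an assumed parameter name):

    ```python
    import numpy as np

    def propagate_labels(feats_labeled, labels, feats_unlabeled, tau=0.8):
        """Sketch of prototype-based sparse label propagation.

        feats_labeled   : (N, D) features of sparsely labeled points
        labels          : (N,)   integer class labels
        feats_unlabeled : (M, D) features of unlabeled points
        tau             : confidence threshold on cosine similarity (assumed)

        Returns (M,) pseudo labels; -1 marks low-confidence points that are
        ignored during training.
        """
        classes = np.unique(labels)
        # Class prototype = mean feature of each class, unit-normalized.
        protos = np.stack([feats_labeled[labels == c].mean(axis=0) for c in classes])
        protos /= np.linalg.norm(protos, axis=1, keepdims=True)
        f = feats_unlabeled / np.linalg.norm(feats_unlabeled, axis=1, keepdims=True)
        sim = f @ protos.T                 # cosine similarity to every prototype
        conf = sim.max(axis=1)             # confidence = best similarity
        best = classes[sim.argmax(axis=1)]
        return np.where(conf >= tau, best, -1)
    ```

    Thresholding on prototype similarity keeps the propagation sparse: only unlabeled points that clearly resemble one class contribute pseudo labels, which limits the noise fed back into the weakly supervised network.
    
    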