    The 2010 spring drought reduced primary productivity in southwestern China

    Many parts of the world experience frequent and severe droughts. Summer drought can significantly reduce primary productivity and carbon sequestration capacity. The impacts of spring droughts, however, have received much less attention. A severe and sustained spring drought occurred in southwestern China in 2010. Here we examine the influence of this spring drought on the primary productivity of terrestrial ecosystems using data on climate, vegetation greenness and productivity. We first assess the spatial extent, duration and severity of the drought using precipitation data and the Palmer drought severity index. We then examine the impacts of the drought on terrestrial ecosystems using satellite data for the period 2000–2010. Our results show that the spring drought substantially reduced the enhanced vegetation index (EVI) and gross primary productivity (GPP) during spring 2010 (March–May). Both EVI and GPP also substantially declined in the summer and did not fully recover from the drought stress until August. The drought reduced regional annual GPP and net primary productivity (NPP) in 2010 by 65 and 46 Tg C yr⁻¹, respectively. Both annual GPP and NPP in 2010 were the lowest over the period 2000–2010. The negative effects of the drought on annual primary productivity were partly offset by the remarkably high productivity in August and September, caused by the exceptionally wet conditions in late summer and early fall and by the farming practices adopted to mitigate drought effects. Our results show that, like summer droughts, spring droughts can also have significant impacts on vegetation productivity and terrestrial carbon cycling.
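
    As a rough illustration of the anomaly analysis this abstract describes (not the authors' code), the sketch below compares a 2010 monthly GPP series against a 2000–2009 baseline to estimate the drought-driven deficit; the `gpp` array and its values are placeholder assumptions.

```python
# Hypothetical sketch of a baseline-anomaly calculation: 2010 monthly GPP
# compared against a 2000-2009 climatology, as in the abstract above.
import numpy as np

rng = np.random.default_rng(0)
# Placeholder data: rows = years 2000-2010, cols = Jan-Dec, in Tg C per month.
gpp = rng.uniform(40.0, 120.0, size=(11, 12))

baseline = gpp[:10].mean(axis=0)     # 2000-2009 mean seasonal cycle
anomaly = gpp[10] - baseline         # 2010 monthly departures from the baseline

spring_deficit = anomaly[2:5].sum()  # March-May (MAM) shortfall
annual_deficit = anomaly.sum()       # net annual change, cf. the 65 Tg C GPP figure

print(f"Spring (MAM) GPP anomaly: {spring_deficit:+.1f} Tg C")
print(f"Annual GPP anomaly:       {annual_deficit:+.1f} Tg C")
```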

    Tem-adapter: Adapting Image-Text Pretraining for Video Question Answer

    Video-language pre-trained models have shown remarkable success in guiding video question-answering (VideoQA) tasks. However, due to the length of video sequences, training large-scale video-based models incurs considerably higher costs than training image-based ones. This motivates us to leverage the knowledge from image-based pretraining, despite the obvious gaps between image and video domains. To bridge these gaps, in this paper, we propose Tem-Adapter, which enables the learning of temporal dynamics and complex semantics by a visual Temporal Aligner and a textual Semantic Aligner. Unlike conventional pretrained knowledge adaptation methods that only concentrate on the downstream task objective, the Temporal Aligner introduces an extra language-guided autoregressive task aimed at facilitating the learning of temporal dependencies, with the objective of predicting future states based on historical clues and language guidance that describes event progression. Besides, to reduce the semantic gap and adapt the textual representation for better event description, we introduce a Semantic Aligner that first designs a template to fuse question and answer pairs as event descriptions and then learns a Transformer decoder with the whole video sequence as guidance for refinement. We evaluate Tem-Adapter and different pre-train transferring methods on two VideoQA benchmarks, and the significant performance improvement demonstrates the effectiveness of our method. Comment: ICCV 202
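
    The language-guided autoregressive objective can be pictured with a minimal sketch. The following is a hedged illustration, not the released Tem-Adapter code: the module name, dimensions, and the choice of a causal Transformer over frozen image features are all assumptions.

```python
# Hypothetical sketch of a language-guided autoregressive objective:
# a causal temporal module predicts the next frame feature from past
# frame features plus a text embedding describing event progression.
import torch
import torch.nn as nn

class TemporalAligner(nn.Module):
    def __init__(self, dim=512, heads=8, layers=2):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)
        self.head = nn.Linear(dim, dim)

    def forward(self, frames, text):
        # Prepend the sentence embedding so every prediction can attend
        # to the event description ("language guidance").
        x = torch.cat([text.unsqueeze(1), frames], dim=1)
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        h = self.encoder(x, mask=mask)   # causal self-attention over history
        return self.head(h[:, :-1])      # output at step i predicts frame i

frames = torch.randn(4, 16, 512)         # (batch, time, dim) frozen image features
text = torch.randn(4, 512)               # event-description embedding
pred = TemporalAligner()(frames, text)
loss = nn.functional.mse_loss(pred, frames)  # predict future states from clues
```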

    catena-Poly[[bis(nitrato-κO)cadmium]bis[μ-1,3-bis[(1H-1,2,4-triazol-1-yl)methyl]benzene-κ²N⁴:N⁴′]]

    In the title compound, [Cd(NO3)2(C12H12N6)2]n, the CdII cation is located on an inversion center and is six-coordinated by four N atoms from four 1,3-bis[(1H-1,2,4-triazol-1-yl)methyl]benzene (L) ligands and two O atoms from two nitrate anions in a slightly distorted octahedral geometry. The ligands link different CdII ions into a ribbon-like structure along [001]. Two O atoms of the nitrate anion are disordered over two sets of sites with site occupancies of 0.575 (8) and 0.425 (8).

    Label Adversarial Learning for Skeleton-level to Pixel-level Adjustable Vessel Segmentation

    You can have your cake and eat it too. Microvessel segmentation in optical coherence tomography angiography (OCTA) images remains challenging. Skeleton-level segmentation shows clear topology but carries no diameter information, while pixel-level segmentation shows clear vessel calibers but weak topology. To close this gap, we propose a novel label adversarial learning (LAL) approach for skeleton-level to pixel-level adjustable vessel segmentation. LAL mainly consists of two designs: a label adversarial loss and an embeddable adjustment layer. The label adversarial loss establishes an adversarial relationship between the two label supervisions, while the adjustment layer adjusts the network parameters to match the different adversarial weights. This design efficiently captures the variation between the two supervisions, making the segmentation continuous and tunable. The continuous process allows us to recommend high-quality vessel segmentations with clear calibers and topology. Experimental results show that our results outperform the manual annotations of current public datasets and conventional filtering methods. Furthermore, the same continuous process can be used to generate an uncertainty map representing weak vessel boundaries and noise.
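
    The core idea of an adjustable supervision can be pictured with a small sketch. The following is an assumption-laden illustration, not the paper's implementation: a single prediction is pulled toward both a skeleton-level and a pixel-level label, with a weight alpha that makes the recovered caliber tunable (the paper additionally conditions the network on this weight through an embeddable adjustment layer).

```python
# Hypothetical sketch of a two-supervision loss with a tunable balance,
# in the spirit of the label adversarial loss described above.
import torch
import torch.nn.functional as F

def label_adversarial_loss(pred, skeleton_label, pixel_label, alpha):
    """alpha=0 favors thin skeleton labels; alpha=1 favors full-caliber pixel labels."""
    l_skel = F.binary_cross_entropy(pred, skeleton_label)
    l_pix = F.binary_cross_entropy(pred, pixel_label)
    # The two supervisions pull against each other; alpha sets the balance,
    # making the segmentation caliber continuously adjustable.
    return (1 - alpha) * l_skel + alpha * l_pix

pred = torch.rand(1, 1, 64, 64, requires_grad=True)   # stand-in network output
skel = (torch.rand(1, 1, 64, 64) > 0.9).float()       # thin centerline mask
pix = (torch.rand(1, 1, 64, 64) > 0.7).float()        # full-caliber vessel mask
loss = label_adversarial_loss(pred, skel, pix, alpha=0.5)
loss.backward()
```

    Sweeping alpha across [0, 1] at inference is what yields the continuous skeleton-to-pixel family of segmentations, and the disagreement across that sweep is one plausible source of the uncertainty map the abstract mentions.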