
    A Dark Target Algorithm for the GOSAT TANSO-CAI Sensor in Aerosol Optical Depth Retrieval over Land

    The Cloud and Aerosol Imager (CAI) onboard the Greenhouse Gases Observing Satellite (GOSAT) is a multi-band sensor designed to observe clouds and aerosols. To retrieve aerosol optical depth (AOD) over land from the CAI sensor, a Dark Target (DT) algorithm for GOSAT CAI was developed following the strategy of the Moderate Resolution Imaging Spectroradiometer (MODIS) DT algorithm. When retrieving AOD from satellite platforms, determining the surface contribution is a major challenge. In the MODIS DT algorithm, surface signals in the visible wavelengths are estimated from the relationships between the visible channels and the shortwave infrared (SWIR) channel near 2.1 µm. The CAI, however, has only a 1.6 µm band covering the SWIR wavelengths. To overcome the difficulty in determining surface reflectance caused by the lack of a 2.1 µm band, we analyzed the relationship between reflectances at 1.6 µm and 2.1 µm using the MODIS surface reflectance product, and then connected the 1.6 µm reflectance to the visible bands through the empirical relationship between the 2.1 µm reflectance and the visible bands. We found that the relationship between the 1.6 µm and 2.1 µm reflectances depends chiefly on vegetation conditions, and that the 2.1 µm reflectance can be parameterized as a function of the 1.6 µm reflectance and a Vegetation Index (VI). Based on our experimental results, a regression function based on the Aerosol Free Vegetation Index (AFRI2.1) connecting the 1.6 µm and 2.1 µm bands was derived. Under light aerosol loading (AOD at 0.55 µm < 0.1), the 2.1 µm reflectance derived by our method correlates strongly with the true 2.1 µm reflectance (r-value = 0.928). Similar to the MODIS DT algorithms (Collection 5 and Collection 6), a CAI-applicable approach that uses AFRI2.1 and the scattering angle to account for the visible surface signals was proposed.
It was then applied to the CAI sensor for AOD retrieval, and the retrievals were validated against ground-level measurements from Aerosol Robotic Network (AERONET) sites. The validation shows that the CAI retrievals agree well with the AERONET measurements, with an r-value of 0.922 and 69.2% of the retrieved AOD values falling within the expected error envelope of ±(0.1 + 15% × AOD_AERONET).
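The core surface-reflectance step described above — parameterizing the 2.1 µm reflectance as a function of the 1.6 µm reflectance and a vegetation index — can be sketched as a simple least-squares fit. This is an illustrative sketch only: the linear form, the synthetic data, and all coefficient values below are assumptions for demonstration, not the paper's fitted AFRI2.1 regression.

```python
import numpy as np

def fit_swir_relation(rho_16, vi, rho_21):
    """Fit rho_21 ≈ a*rho_16 + b*vi + c by ordinary least squares.
    (Hypothetical linear form; the paper derives its own AFRI2.1-based function.)"""
    X = np.column_stack([rho_16, vi, np.ones_like(rho_16)])
    coef, *_ = np.linalg.lstsq(X, rho_21, rcond=None)
    return coef  # (a, b, c)

def predict_rho_21(coef, rho_16, vi):
    """Estimate the 2.1 µm reflectance from 1.6 µm reflectance and a VI."""
    a, b, c = coef
    return a * rho_16 + b * vi + c

# Synthetic demonstration data standing in for MODIS surface reflectances
rng = np.random.default_rng(0)
rho_16 = rng.uniform(0.05, 0.35, 500)           # 1.6 µm reflectance
vi = rng.uniform(0.1, 0.8, 500)                 # vegetation index
rho_21 = 0.7 * rho_16 - 0.05 * vi + 0.01 + rng.normal(0, 0.003, 500)

coef = fit_swir_relation(rho_16, vi, rho_21)
pred = predict_rho_21(coef, rho_16, vi)
r = np.corrcoef(pred, rho_21)[0, 1]             # correlation with "true" 2.1 µm
```

In the paper's setting, `rho_21` would come from the MODIS surface reflectance product and the fit would be stratified by vegetation condition; the high r-value reported (0.928) refers to that real-data regression, not this toy fit.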

    Numerical simulation of the non-uniform flow in a full-annulus multi-stage axial compressor with the harmonic balance method

    To improve understanding of the unsteady flow in modern advanced axial compressors, unsteady simulations of a full-annulus multi-stage axial compressor are carried out with the harmonic balance method. Since the internal flow in turbomachinery is naturally periodic, the harmonic balance method can be used to reduce the computational cost. To verify the accuracy of the harmonic balance method, the numerical results are first compared with experimental results. The comparison shows that the internal flow field and the operating characteristics of the multi-stage axial compressor obtained by the harmonic balance method agree with the experimental results, with relative errors within 3%. Analysis of the internal flow field of the axial compressor shows that the airflow in the clearance between adjacent blade rows gradually changes from axisymmetric to non-axisymmetric and then returns to an almost completely axisymmetric distribution before the downstream blade inlet, leaving only a slight non-axisymmetric distribution. However, this slight non-axisymmetry accumulates as the flow develops and finally forms a distinct circumferentially non-uniform flow field in the latter stages, which may explain why the traditional single-passage numerical method introduces errors in multi-stage axial compressor simulations.
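The cost saving the abstract attributes to the harmonic balance method comes from representing a time-periodic flow quantity by a truncated Fourier series, so only 2N+1 coupled time levels per period need to be solved instead of marching a full time-accurate simulation. A minimal numerical illustration of that representation (the signal and harmonic count are illustrative, not from the paper):

```python
import numpy as np

def harmonic_balance_levels(n_harmonics):
    """Equally spaced time levels over one period: 2N+1 samples resolve N harmonics."""
    m = 2 * n_harmonics + 1
    return np.arange(m) / m  # fractions of the period

def reconstruct(samples, t):
    """Reconstruct the periodic signal at times t (fractions of a period)
    from its 2N+1 samples via the discrete Fourier series."""
    m = len(samples)
    coeffs = np.fft.fft(samples) / m
    k = np.fft.fftfreq(m, d=1.0 / m)  # signed harmonic indices, e.g. [0,1,2,-2,-1]
    return np.real(sum(c * np.exp(2j * np.pi * ki * t) for c, ki in zip(coeffs, k)))

def flow_quantity(t):
    """A periodic 'flow quantity' containing two harmonics (illustrative)."""
    return 1.0 + 0.3 * np.sin(2 * np.pi * t) + 0.1 * np.cos(4 * np.pi * t)

t_levels = harmonic_balance_levels(2)            # 5 time levels suffice for 2 harmonics
samples = flow_quantity(t_levels)
t_fine = np.linspace(0, 1, 200)
recon = reconstruct(samples, t_fine)
max_err = np.max(np.abs(recon - flow_quantity(t_fine)))  # near machine precision
```

In an actual harmonic balance solver the sampled states are coupled through the governing equations and solved simultaneously as steady problems; the sketch only shows why a handful of time levels can capture a band-limited periodic signal exactly.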

    Test-Time Training for Semantic Segmentation with Output Contrastive Loss

    Although deep learning-based segmentation models have achieved impressive performance on public benchmarks, generalizing well to unseen environments remains a major challenge. To improve a model's generalization to a new domain during evaluation, test-time training (TTT) adapts the source-pretrained model in an online fashion. Early efforts on TTT mainly focus on the image classification task; directly extending these methods to semantic segmentation easily leads to unstable adaptation due to segmentation's inherent characteristics, such as extreme class imbalance and complex decision spaces. To stabilize the adaptation process, we introduce the contrastive loss (CL), known for its ability to learn robust and generalized representations. However, the traditional CL operates in the representation space and cannot directly improve predictions. In this paper, we resolve this limitation by adapting the CL to the output space, employing a high temperature, and simplifying the formulation, resulting in a straightforward yet effective loss function called Output Contrastive Loss (OCL). Comprehensive experiments validate the efficacy of our approach across diverse evaluation scenarios. Notably, our method excels even when applied to models initially pre-trained on test-domain data using domain adaptation methods, showcasing its resilience and adaptability. Code and more information can be found at https://github.com/dazhangyu123/OCL
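The key move the abstract describes — computing a contrastive loss on prediction vectors rather than intermediate features, with a high temperature — can be sketched as follows. This is an illustrative re-creation under our own assumptions (InfoNCE form, cosine similarity, matched pixels as positives), not the authors' exact formulation; the linked repository holds the real one.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def output_contrastive_loss(logits_a, logits_b, temperature=10.0):
    """InfoNCE-style loss in the output space: pixel i of view A should match
    pixel i of view B. logits_*: (n_pixels, n_classes). A high temperature
    softens the objective, in the spirit of the abstract's description."""
    p_a = softmax(logits_a)
    p_b = softmax(logits_b)
    # cosine similarity between prediction (probability) vectors, not features
    p_a = p_a / np.linalg.norm(p_a, axis=1, keepdims=True)
    p_b = p_b / np.linalg.norm(p_b, axis=1, keepdims=True)
    sim = p_a @ p_b.T / temperature               # (n, n) similarity matrix
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives lie on the diagonal

rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 19))                 # e.g. 19 classes as in Cityscapes
loss_same = output_contrastive_loss(logits, logits)
loss_rand = output_contrastive_loss(logits, rng.normal(size=(8, 19)))
```

Operating on softmax outputs means the gradient acts directly on the predictions the segmentation head emits, which is the limitation of representation-space CL the paper says it resolves; the specific positive/negative sampling over pixels here is our assumption.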