
    Intercomparison of Vaisala RS92 and RS41 Radiosonde Temperature Sensors under Controlled Laboratory Conditions

    Radiosoundings are essential for weather and climate applications, as well as for the calibration and validation of remote sensing observations. Vaisala RS92 radiosondes were widely used on a global scale until 2016, although in the fall of 2013 Vaisala had already introduced the RS41 model to progressively replace the RS92. To ensure the highest quality and homogeneity of measurements following the transition from RS92 to RS41, intercomparisons of the two radiosonde models are needed. A methodology was introduced to simultaneously test and compare the two radiosonde models inside climatic chambers in terms of noise, calibration accuracy, and bias in temperature measurements. A pair of RS41 and RS92 radiosondes was tested at ambient pressure under very different temperature and humidity conditions, reproducing the atmospheric conditions that a radiosonde can encounter at the ground before launch. The radiosondes were also tested before and after fast (within ≈ 10 s) temperature changes of about ±20 °C, simulating the steep thermal changes that radiosondes can experience when passing from an indoor to an outdoor environment during the pre-launch phase. The results show that the temperature sensor of the RS41 is less affected by noise and more accurate than that of the RS92, with noise values below 0.06 °C for the RS41 and below 0.1 °C for the RS92. The deviation from the reference value, referred to as the calibration error, is within ±0.1 °C for the RS41, with a related uncertainty (hereafter with coverage factor k = 1) of less than 0.06 °C; the RS92, by contrast, is affected by a cold calibration bias ranging from 0.1 °C up to a few tenths of a degree, with a calibration uncertainty of less than 0.1 °C. The temperature bias between the RS41 and RS92 is within ±0.1 °C, with an uncertainty of less than 0.1 °C.
The fast and steep thermal changes that radiosondes can experience during the pre-launch phase may increase the noise of the temperature sensors during radiosoundings, by up to 0.1 °C for the RS41 and up to 0.3 °C for the RS92, with a similar increase in their calibration uncertainty, as well as an increase in the uncertainty of their mutual bias of up to 0.3 °C.
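As a rough illustration (not the authors' code), the three laboratory quantities above, namely noise, calibration error, and the inter-sonde bias, can be computed from chamber records as follows; the records are simulated with hypothetical noise levels and bias chosen to be consistent with the figures quoted in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 10-minute chamber records at 1 Hz (hypothetical data): a stable
# reference at 20.00 degC, RS41-like noise of 0.04 degC, and RS92-like noise
# of 0.08 degC with a -0.15 degC cold calibration bias.
t_ref = 20.00
rs41 = t_ref + rng.normal(0.0, 0.04, 600)
rs92 = t_ref - 0.15 + rng.normal(0.0, 0.08, 600)

def noise(record):
    """Noise as the standard deviation about the record mean (k = 1)."""
    return np.std(record, ddof=1)

def calibration_error(record, reference):
    """Mean deviation of the sensor reading from the chamber reference."""
    return np.mean(record) - reference

print(f"RS41 noise: {noise(rs41):.3f} degC, RS92 noise: {noise(rs92):.3f} degC")
print(f"RS41 calibration error: {calibration_error(rs41, t_ref):+.3f} degC")
print(f"RS92 calibration error: {calibration_error(rs92, t_ref):+.3f} degC")
print(f"RS41 - RS92 bias: {np.mean(rs41) - np.mean(rs92):+.3f} degC")
```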

    Use of automatic radiosonde launchers to measure temperature and humidity profiles from the GRUAN perspective

    In the last two decades, technological progress has not only improved the quality of atmospheric upper-air observations but also provided the opportunity to design and implement automated systems able to replace measurement procedures typically performed manually. Radiosoundings, which remain one of the primary data sources for weather and climate applications, are still largely performed manually around the world, although fully automated upper-air observations are increasingly used, from urban areas to the remotest locations, minimizing the operating costs and challenges of performing radiosounding launches. This analysis presents a first step towards demonstrating the reliability of the automatic radiosonde launchers (ARLs) provided by Vaisala, Meteomodem and Meisei. The metadata and datasets collected by a few existing ARLs operated by Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) certified or candidate sites (SodankylĂ€, Payerne, Trappes, Potenza) have been investigated, and a comparative analysis of the technical performance (i.e. manual versus ARL) is reported. The performance of ARLs is evaluated as similar or superior to that achieved with traditional manual launches in terms of the percentage of successful launches, balloon burst and ascent speed. For both temperature and relative humidity, the ground-check comparisons showed a negative bias of a few tenths of a degree and of a few tenths of a percent RH, respectively. Two datasets of parallel soundings between manual and ARL-based measurements, using identical sonde models, provided by the SodankylĂ€ and Faa'a stations, showed mean differences between the ARL and manual launches smaller than ±0.2 K up to 10 hPa for the temperature profiles. For relative humidity, differences were smaller than 1 % RH for the SodankylĂ€ dataset up to 300 hPa, while they were smaller than 0.7 % RH for the Faa'a station.
Finally, the observation-minus-background (O–B) mean and root mean square (rms) statistics for German RS92 and RS41 stations, which operate a mix of manual and ARL launch protocols, calculated using the European Centre for Medium-Range Weather Forecasts (ECMWF) forecast model, are very similar, although the RS41 shows larger rms(O–B) differences for ARL stations, in particular for temperature and wind. A discussion of the potential next steps proposed by the GRUAN community and other parties is provided, with the aim of laying the basis for a strategy to fully demonstrate the value of ARLs and guarantee that the provided products are traceable and suitable for the creation of GRUAN data products.
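The O–B mean and rms statistics used above are straightforward to compute; the sketch below uses hypothetical values (not ECMWF data) to show the calculation of observation-minus-background departures.

```python
import numpy as np

def ob_statistics(obs, background):
    """Mean and root-mean-square of observation-minus-background departures."""
    d = np.asarray(obs) - np.asarray(background)
    return d.mean(), np.sqrt(np.mean(d**2))

# Hypothetical 100 hPa temperatures (K) and co-located forecast values
obs = np.array([210.3, 211.1, 209.8, 210.6, 210.9])
bkg = np.array([210.0, 211.4, 210.1, 210.4, 210.7])
mean_ob, rms_ob = ob_statistics(obs, bkg)
print(f"mean(O-B) = {mean_ob:+.2f} K, rms(O-B) = {rms_ob:.2f} K")
```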


    Study of Droplet Activation in Thin Clouds Using Ground-Based Raman Lidar and Ancillary Remote Sensors

    A methodology for the study of cloud droplet activation, based on measurements performed with ground-based multi-wavelength Raman lidars and ancillary remote sensors at the CNR-IMAA observatory in Potenza, southern Italy, is presented. The study focuses on the observation of thin warm clouds. Thin clouds are often also optically thin: this allows cloud-top detection and the full profiling of cloud layers using ground-based Raman lidar. Moreover, broken clouds are inspected to take advantage of their discontinuous structure in order to study the variability of optical properties and water vapor content in the transition from cloudy to cloudless regions close to the cloud boundaries. A statistical study of this variability leads to the identification of threshold values for the optical properties, enabling the discrimination between cloudy and cloudless regions. These values can be used to evaluate and improve parameterizations of droplet activation within numerical models. A statistical study of the co-located Doppler radar moments allows the retrieval of droplet size and vertical velocities close to the cloud base. First evidence of a correlation between droplet vertical velocities measured at the cloud base and the aerosol effective radius observed in the cloud-free regions of broken clouds is found.
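The threshold-based discrimination between cloudy and cloudless regions can be illustrated with a minimal sketch; the profile values and the threshold below are hypothetical, not the study's retrieved values.

```python
import numpy as np

def classify_bins(backscatter, threshold):
    """Flag range bins as cloudy (True) where the backscatter ratio exceeds
    a statistically derived threshold; remaining bins are cloud-free."""
    return np.asarray(backscatter) > threshold

# Hypothetical backscatter-ratio profile through a broken cloud layer
profile = np.array([1.1, 1.2, 1.3, 4.5, 6.0, 5.2, 1.4, 1.1])
cloud_mask = classify_bins(profile, threshold=2.0)
print(cloud_mask)  # cloudy bins at indices 3-5
```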


    Sensitivity of trends to estimation methods and quantification of subsampling effects in global radiosounding temperature and humidity time series

    Climate trends estimated using historical radiosounding time series may be significantly affected by the choice of the regression method, as well as by the subsampling of the dataset often adopted in specific applications. These contributions to the uncertainty of trend estimates have been quantified in the literature, although only for specific pairs of regression methods and for earlier periods characterized by smaller temperature trends than those observed over the last two decades. This paper investigates the sensitivity of trend estimates to four linear regression methods (parametric and nonparametric) and to the artificial subsampling of the same dataset, using historical radiosounding time series from 1978 onwards available in version 2 of the Integrated Global Radiosonde Archive (IGRA). Results show that long-term decadal trends may have non-negligible uncertainties related to the choice of the regression method, the percentage of data available, the amount of missing data and the number of stations selected in the dataset. The choice of the regression method introduces uncertainties in the decadal trends ranging from −0.10 to −0.01 K decade⁻¹ for temperature in the lower stratosphere at 100 hPa and from 0.2 % to 0.8 % decade⁻¹ for relative humidity (RH) in the middle troposphere at 300 hPa. Differences can also increase up to 0.4 K decade⁻¹ at 300 hPa when the amount of missing data exceeds 50 % of the original dataset for temperature, while for RH significant differences are observed in the lower troposphere at 925 hPa for almost all datasets. Finally, subsampling effects on trend estimation are quantified by artificially reducing the size of the IGRA dataset: results show that these effects become negligible when at least 60 stations, with up to 76 % of data available, are considered for temperature, and at least 40 stations for RH.
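To illustrate why the choice of regression method matters, the sketch below compares a parametric (ordinary least squares) and a nonparametric (Theil–Sen) slope estimator on a synthetic annual series with a single outlier; the data and trend magnitude are hypothetical, and the four methods used in the paper are not reproduced here.

```python
import numpy as np
from itertools import combinations

def ols_trend(t, y):
    """Parametric ordinary least-squares slope."""
    return np.polyfit(t, y, 1)[0]

def theil_sen_trend(t, y):
    """Nonparametric Theil-Sen slope: the median of all pairwise slopes,
    robust to outliers and gaps common in historical radiosounding records."""
    slopes = [(y[j] - y[i]) / (t[j] - t[i])
              for i, j in combinations(range(len(t)), 2)]
    return float(np.median(slopes))

# Hypothetical annual 100 hPa temperature anomalies (K): a -0.05 K/yr
# cooling with one +2 K outlier mimicking a bad sounding.
years = np.arange(2000, 2020)
anoms = -0.05 * (years - 2000)
anoms[10] += 2.0

# Decadal trends: the outlier pulls OLS away from the true slope,
# while Theil-Sen recovers it.
print(f"OLS trend:       {10 * ols_trend(years, anoms):+.3f} K per decade")
print(f"Theil-Sen trend: {10 * theil_sen_trend(years, anoms):+.3f} K per decade")
```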

    Uncertainties on Climate Extreme Indices Estimated From U.S. Climate Reference Network (USCRN) Near-Surface Temperatures

    Changes in the frequency of temperature extremes are often attributed to global warming. The recent availability of near-surface temperature records from reference networks, such as the U.S. Climate Reference Network (USCRN), enables the quantification of measurement uncertainties. Within an activity of the Copernicus Climate Change Service, an estimate of the measurement uncertainty has been provided for USCRN temperature data, using metadata made available by the National Oceanic and Atmospheric Administration (NOAA). In this paper, four climate extreme indices (Frost Days, Summer Days, Ice Days, Tropical Nights) and the related uncertainties are calculated for the period 2006-2020 from the USCRN dataset and compared with traditional indices. Moreover, the asymmetric USCRN measurement uncertainties are propagated to estimate the uncertainties of the climate indices. The comparison shows expanded uncertainties distributed homogeneously with latitude and typically within 15 days per year for Frost Days and within 10 days for Ice Days, while smaller uncertainties are estimated for Summer Days and Tropical Nights, with values typically within six to seven days per year. Positive uncertainties are typically larger than negative ones for all the indices. The values of Frost and Ice Days, with the related uncertainties, for USCRN have also been compared with the corresponding values calculated from reanalysis data, showing differences typically within 60 days for median values, with the reanalysis values quite often smaller than those from USCRN and inconsistent within the related uncertainties. Overall, the results show that USCRN measurement uncertainties increase confidence in the estimation of climate extreme indices and in decisions for adaptation.
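As an illustration of how asymmetric measurement uncertainties propagate into a count-based index, the sketch below counts Frost Days (daily minimum temperature below 0 °C) and the days that change classification when the record is shifted by hypothetical negative and positive uncertainties; this is a simplified stand-in for the propagation used in the paper, with invented data.

```python
import numpy as np

def frost_days(t_min, u_minus=0.0, u_plus=0.0):
    """Count days with daily minimum temperature below 0 degC, plus the
    asymmetric index uncertainty: extra frost days gained when the record
    is shifted down by u_minus, and frost days lost when shifted up by u_plus."""
    t = np.asarray(t_min)
    n = np.sum(t < 0.0)
    n_plus = np.sum(t - u_minus < 0.0) - n   # gained at the lower bound
    n_minus = n - np.sum(t + u_plus < 0.0)   # lost at the upper bound
    return int(n), int(n_plus), int(n_minus)

# Hypothetical daily minima (degC) with a 0.1 degC uncertainty either side
t_min = np.array([-3.2, -0.1, 0.05, 1.4, -0.02, 2.5])
n, up, down = frost_days(t_min, u_minus=0.1, u_plus=0.1)
print(f"Frost Days: {n} (+{up}/-{down})")
```

Days with minima very close to 0 °C dominate the uncertainty, which is why the index uncertainty can be asymmetric even for a symmetric measurement uncertainty.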