
    Relationship between the longwave cloud radiative forcing at the surface and the top of the atmosphere

    In order to achieve global coverage, any surface radiation climatology has to be based on satellite observations. In the last decade, several schemes have been devised to obtain the surface solar insolation from the solar radiation reflected at the top of the atmosphere. More recently, attempts have been made to infer the components of longwave radiation at the surface from satellite sounder data using a radiative transfer model. In addition to the radiative transfer scheme, these methods require assumptions about the effective emitting temperature of cloud tops and bases. Modeling studies have shown that although there are strong correlations between the upwelling solar flux at the top of the atmosphere and the solar flux at the surface, the same is not true of the longwave. However, if the clear sky component is considered separately, such that the cloud longwave forcing at the top and at the surface are compared, a slightly different picture emerges. During the FIRE Cirrus IFO, surface radiation measurements were made at several sites and coincident satellite overpass data were also collected. It may be possible to extract the longwave cloud radiative forcing at the top and at the surface from these data. If these relationships can be verified by observations, the information can be useful for extracting the surface longwave radiation from satellite data. The radiative transfer schemes used to convert upwelling spectral radiances into downwelling longwave radiation can provide the clear sky component. The cloud radiative forcing at the top of the atmosphere can then be used to modify the surface fluxes according to the relationships shown. It should be noted that this procedure may be considered only for temporal averages and not for instantaneous deductions of surface fluxes. It would be most useful in compiling monthly mean regional climatologies of the surface longwave fluxes.
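    For reference, the sketch below (illustrative Python; the sign conventions and the coefficients a and b are assumptions for illustration, not values from the study) shows how longwave cloud radiative forcing is commonly defined at the top of the atmosphere and at the surface, and how a fitted relationship between the two could be applied to temporal averages.

        # One common convention: longwave cloud forcing is defined so that clouds
        # give a positive forcing both at the top of the atmosphere (TOA) and at
        # the surface.  All fluxes are in W m-2.

        def lw_crf_toa(olr_clear, olr_allsky):
            # Clouds reduce outgoing longwave radiation at the TOA.
            return olr_clear - olr_allsky

        def lw_crf_surface(dlw_allsky, dlw_clear):
            # Clouds increase downwelling longwave radiation at the surface.
            return dlw_allsky - dlw_clear

        def surface_dlw_monthly(dlw_clear_from_rt, crf_toa_monthly, a=0.0, b=0.5):
            # Hypothetical linear relationship CRF_sfc ~= a + b * CRF_toa applied
            # to monthly means; a and b would come from the observed fits, the
            # values here are placeholders only.
            return dlw_clear_from_rt + a + b * crf_toa_monthly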

    The role of global cloud climatologies in validating numerical models

    The net upward longwave surface radiation is exceedingly difficult to measure from space. A hybrid method using General Circulation Model (GCM) simulations and satellite data from the Earth Radiation Budget Experiment (ERBE) and the International Satellite Cloud Climatology Project (ISCCP) was used to produce global maps of this quantity over oceanic areas. An advantage of this technique is that no independent knowledge or assumptions regarding cloud cover for a particular month are required. The only information required is a relationship between the cloud radiation forcing (CRF) at the top of the atmosphere and that at the surface, which is obtained from the GCM simulation. A flow diagram of the technique and results are given.
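    As an illustration of the two-step use of that relationship (a hedged Python sketch, not the actual hybrid method; the fit is assumed linear and the sign convention for the surface forcing is an assumption):

        import numpy as np

        def fit_crf_relationship(crf_toa_gcm, crf_sfc_gcm):
            # Step 1 (GCM): regress simulated surface LW cloud forcing on the
            # simulated TOA LW cloud forcing for a given region and month.
            slope, intercept = np.polyfit(crf_toa_gcm, crf_sfc_gcm, 1)
            return intercept, slope

        def net_upward_surface_lw(net_lw_clear, crf_toa_obs, intercept, slope):
            # Step 2 (satellite): apply the GCM-derived relationship to observed
            # TOA forcing (e.g. from ERBE) and subtract the implied surface
            # forcing from a clear-sky estimate of the net upward longwave flux.
            crf_sfc = intercept + slope * np.asarray(crf_toa_obs)
            return np.asarray(net_lw_clear) - crf_sfc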

    Perturbation of the zonal radiation balance by a stratospheric aerosol layer

    The effect of stratospheric aerosols on the earth's monthly zonal radiation balance is investigated using a model layer consisting of 75% H2SO4, which is the primary constituent of the background aerosol layer.

    Longwave radiation parameterization for UCLA/GLAS GCM

    This document describes the parameterization of longwave radiation in the UCLA/GLAS general circulation model. Transmittances for water vapor and carbon dioxide have been computed from the work of Arking and Chou, and ozone absorptances are computed using a formula due to Rodgers. Cloudiness has been introduced into the code in a manner that accommodates fractional cover with either random or maximal overlap. The entire code has been written in a form that is amenable to vectorization on CYBER and CRAY computers. Sample clear sky computations for five standard profiles using the 15- and 9-level versions of the model have been included.
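    To make the cloud-cover bookkeeping concrete, here is a generic sketch (Python, not the UCLA/GLAS code itself) of the total cloud cover implied by a column of layer cloud fractions under the two overlap assumptions mentioned above.

        def total_cover_random(layer_fractions):
            # Random overlap: cloud layers are treated as statistically independent.
            clear = 1.0
            for c in layer_fractions:
                clear *= (1.0 - c)
            return 1.0 - clear

        def total_cover_maximal(layer_fractions):
            # Maximal overlap: clouds in different layers stack above one another.
            return max(layer_fractions, default=0.0)

        # Example: layer fractions 0.3, 0.5 and 0.2 give
        #   random  -> 1 - 0.7 * 0.5 * 0.8 = 0.72
        #   maximal -> 0.5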

    Infrared radiative transfer through a regular array of cuboidal clouds

    Infrared radiative transfer through a regular array of cuboidal clouds is studied, and the interaction of the sides of the clouds with each other and with the ground is considered. The theory is developed for black clouds and is extended to scattering clouds using a variable azimuth two-stream approximation. It is shown that geometrical considerations often dominate over the microphysical aspects of radiative transfer through the clouds. For example, the difference in simulated 10 micron brightness temperature between black isothermal cubic clouds and cubic clouds of optical depth 10 is less than 2 deg for zenith angles less than 50 deg for all cloud fractions when viewed parallel to the array. The results show that serious errors are made in flux and cooling rate computations if broken clouds are modeled as planiform. Radiances computed by the usual practice of area-weighting cloudy and clear sky radiances are in error by 2 to 8 K in brightness temperature for cubic clouds over a wide range of cloud fractions and zenith angles. It is also shown that the lapse rate does not markedly affect the exiting radiances for cuboidal clouds of unit aspect ratio and optical depth 10.
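    The "usual practice" referred to above can be written out explicitly (an illustrative Python sketch; the wavelength, temperatures and cloud fraction are hypothetical and not taken from the study): area-weight the cloudy and clear-sky radiances and convert the result back to a brightness temperature with the inverse Planck function. The study's point is that, for cuboidal clouds, radiances obtained this way can differ from the true exiting radiances by several kelvin.

        import math

        H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # SI: Planck, light speed, Boltzmann

        def planck_radiance(temp, wavelength):
            # Spectral radiance B(T) at a wavelength given in meters.
            return (2 * H * C**2 / wavelength**5) / (math.exp(H * C / (wavelength * KB * temp)) - 1.0)

        def brightness_temperature(radiance, wavelength):
            # Invert the Planck function for the equivalent blackbody temperature.
            return H * C / (wavelength * KB * math.log(1.0 + 2 * H * C**2 / (wavelength**5 * radiance)))

        wl = 10e-6                                  # 10 micron window channel
        n, t_cloud, t_ground = 0.4, 230.0, 290.0    # hypothetical scene
        i_mix = n * planck_radiance(t_cloud, wl) + (1 - n) * planck_radiance(t_ground, wl)
        tb = brightness_temperature(i_mix, wl)      # area-weighted brightness temperature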

    Comparative accuracy of the albedo, transmission and absorption for selected radiative transfer approximations

    Illustrations of both the relative and absolute accuracy of eight different radiative transfer approximations as a function of optical thickness, solar zenith angle and single scattering albedo are given. Computational results for the plane albedo, total transmission and fractional absorption were obtained for plane-parallel atmospheres composed of cloud particles. These computations, which were obtained using the doubling method, are compared with results obtained using the selected radiative transfer approximations. Comparisons were made between asymptotic theory for thick layers and the following widely used two-stream approximations: Coakley-Chylek's models 1 and 2, Meador-Weaver, Eddington, delta-Eddington, PIFM and delta-discrete ordinates.
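    As a rough illustration of the kind of closed-form approximation being evaluated (a sketch only: it uses the Eddington coefficients with the standard delta-Eddington scaling and a diffuse-incidence layer solution, whereas the comparisons in the work treat direct solar incidence as a function of zenith angle):

        import math

        def delta_eddington_scale(tau, omega, g):
            # Standard delta-Eddington similarity scaling of (tau, omega, g).
            f = g * g
            return (1.0 - omega * f) * tau, (1.0 - f) * omega / (1.0 - omega * f), g / (1.0 + g)

        def eddington_diffuse_layer(tau, omega, g):
            # Plane albedo, total transmission and fractional absorption of one
            # homogeneous layer for diffuse incidence, using Eddington two-stream
            # coefficients; requires omega < 1.
            g1 = (7.0 - omega * (4.0 + 3.0 * g)) / 4.0
            g2 = -(1.0 - omega * (4.0 - 3.0 * g)) / 4.0
            k = math.sqrt(g1 * g1 - g2 * g2)
            e = math.exp(-2.0 * k * tau)
            denom = k + g1 + (k - g1) * e
            albedo = g2 * (1.0 - e) / denom
            transmission = 2.0 * k * math.exp(-k * tau) / denom
            return albedo, transmission, 1.0 - albedo - transmission

        # Example for a hypothetical cloud layer: tau = 10, omega = 0.99, g = 0.85.
        r, t, a = eddington_diffuse_layer(*delta_eddington_scale(10.0, 0.99, 0.85))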

    Using Long Short-Term Memory Networks to Make and Train Neural Network Based Pseudo Random Number Generator

    Neural networks have been used in many decision-making models and have been employed in computer vision and natural language processing. Several works have also used neural networks for developing Pseudo-Random Number Generators [2, 4, 5, 7, 8]. However, despite great performance in the National Institute of Standards and Technology (NIST) statistical test suite for randomness, these works fail to discuss how the complexity of a neural network affects such statistical results. This work introduces: 1) a series of new Long Short-Term Memory Network (LSTM) based and Fully Connected Neural Network (FCNN – baseline [2] + variations) Pseudo Random Number Generators (PRNG) and 2) an LSTM-based predictor. The thesis also performs adversarial training to determine two things: 1) how the use of sequence models such as LSTMs after adversarial training affects performance on the NIST tests, and 2) how the complexity of the fully connected network-based generator in [2] and the LSTM-based generator affects NIST results. Experiments were done on four different combinations of generators and predictors: i) Fully Connected Neural Network Generator (FC NN Gen) – Convolutional Neural Network Predictor (CNN Pred), ii) FC NN Gen – LSTM Pred, iii) LSTM-based Gen – CNN Pred, iv) LSTM-based Gen – LSTM Pred, where FC NN Gen and CNN Pred were taken as the baseline from [2] while LSTM-based Gen and LSTM Pred were proposed here. Based on the experiments, the LSTM Predictor overall gave more consistent and even better results on the NIST test suite than the CNN Predictor from [2]. It was observed that the LSTM generator showed a higher average pass rate on the NIST tests when paired with the LSTM Predictor, but with a low, fluctuating trend. On the other hand, an increasing trend was observed in the average NIST pass rate when the same generator was trained against the CNN Predictor in an adversarial environment. The baseline [2] and its variations, however, displayed only a fluctuating trend, though with better results when adversarially trained against the LSTM-based Predictor than against the CNN Predictor.
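    A minimal sketch of the generator-predictor adversarial setup described above, assuming a PyTorch environment (the architectures, sizes and update rule are illustrative placeholders, not the thesis's actual models or training procedure):

        import torch
        import torch.nn as nn

        class LSTMGenerator(nn.Module):
            # Maps a random seed sequence to a sequence of values in [0, 1].
            def __init__(self, hidden=64):
                super().__init__()
                self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
                self.out = nn.Linear(hidden, 1)

            def forward(self, seed):                       # (batch, seq_len, 1)
                h, _ = self.lstm(seed)
                return torch.sigmoid(self.out(h))          # (batch, seq_len, 1)

        class LSTMPredictor(nn.Module):
            # Tries to predict the final value of a sequence from the earlier values.
            def __init__(self, hidden=64):
                super().__init__()
                self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
                self.out = nn.Linear(hidden, 1)

            def forward(self, prefix):                     # (batch, seq_len - 1, 1)
                h, _ = self.lstm(prefix)
                return torch.sigmoid(self.out(h[:, -1, :]))

        def adversarial_step(gen, pred, opt_g, opt_p, batch=32, seq_len=64):
            bce = nn.BCELoss()
            seed = torch.rand(batch, seq_len, 1)

            # Predictor update: minimize the error in predicting the last value.
            with torch.no_grad():
                values = gen(seed)
            loss_p = bce(pred(values[:, :-1, :]), values[:, -1, :])
            opt_p.zero_grad(); loss_p.backward(); opt_p.step()

            # Generator update: maximize the predictor's error on fresh output.
            values = gen(seed)
            target = values[:, -1, :].detach()             # treated as a fixed label
            loss_g = -bce(pred(values[:, :-1, :]), target)
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()
            return loss_p.item(), loss_g.item()

        # Usage (illustrative): gen, pred = LSTMGenerator(), LSTMPredictor()
        # opt_g = torch.optim.Adam(gen.parameters()); opt_p = torch.optim.Adam(pred.parameters())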

    Algorithm-Level Optimizations for Scalable Parallel Graph Processing

    Efficiently processing large graphs is challenging, since parallel graph algorithms suffer from poor scalability and performance due to many factors, including heavy communication and load imbalance. Furthermore, it is difficult to express graph algorithms, as users need to understand and effectively utilize the underlying execution of the algorithm on the distributed system. The performance of graph algorithms depends not only on the characteristics of the system (such as latency, available RAM, etc.), but also on the characteristics of the input graph (small-world scale-free, mesh, long-diameter, etc.) and the characteristics of the algorithm (sparse computation vs. dense communication). The best execution strategy, therefore, often heavily depends on the combination of input graph, system and algorithm. Fine-grained expression exposes maximum parallelism in the algorithm and allows the user to concentrate on a single vertex, making it easier to express parallel graph algorithms. However, it often loses information about the machine, making it difficult to extract performance and scalability from fine-grained algorithms. To address these issues, we present a model for expressing parallel graph algorithms using a fine-grained expression. Our model decouples the algorithm writer from the underlying details of the system, the graph, and the execution and tuning of the algorithm. We also present various graph paradigms that optimize the execution of graph algorithms for various types of input graphs and systems. We show that our model is general enough to allow graph algorithms to use the various graph paradigms for the best/fastest execution, and we demonstrate good performance and scalability for a variety of graphs, algorithms, and systems at up to 100,000+ cores.
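    The idea of a fine-grained, vertex-centric expression can be illustrated with a small generic sketch (Python; an illustration of the concept, not the model or runtime presented in this work): the user writes only the per-edge visit logic, while a simple serial driver stands in for whatever execution strategy the system would choose.

        from collections import defaultdict

        def bfs_vertex_program(graph, source):
            # graph: dict mapping a vertex to its list of neighbors.
            dist = defaultdict(lambda: float("inf"))
            dist[source] = 0
            frontier = [source]

            def visit(u, v, level):
                # Fine-grained unit of work: a single edge relaxation.  A runtime
                # could schedule these across cores or machines; the user's logic
                # does not change.
                if dist[v] == float("inf"):
                    dist[v] = level
                    return True          # v joins the next frontier
                return False

            level = 1
            while frontier:
                frontier = [v for u in frontier for v in graph.get(u, []) if visit(u, v, level)]
                level += 1
            return dict(dist)

        # Example: the same visit() could be executed push- or pull-based,
        # synchronously or asynchronously, without rewriting the algorithm.
        print(bfs_vertex_program({0: [1, 2], 1: [3], 2: [3], 3: []}, 0))   # {0: 0, 1: 1, 2: 1, 3: 2}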

    Transport of infrared radiation in cuboidal clouds

    The transport of infrared radiation in a single cuboidal cloud was modeled using a vertical two-stream approximation. The emittance of the top face of the model cloud is always less than that for a plane parallel cloud of the same optical depth. The hemispheric flux escaping from the cloud top has a gradient from the center to the edges, which brighten when the cloud is over warmer ground. Cooling rate calculations in the 8 to 13.6 micrometer region show that there is cooling from the sides of the cloud at all levels even when there is heating of the core from the ground below. The radiances exiting from model cuboidal clouds were computed by path integration over the source function obtained with the two-stream approximation. It is suggested that the brightness temperature measured from finite clouds will overestimate the cloud top temperature.
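    A simplified illustration of that last point (an idealization, not taken from the abstract): for an isothermal, non-scattering finite cloud at temperature T_c with effective top-face emittance e < 1 over warmer ground at temperature T_g, the nadir radiance is roughly I = e*B(T_c) + (1 - e)*B(T_g), where B is the Planck function; the inferred brightness temperature B^-1(I) then lies above T_c whenever T_g > T_c, consistent with the overestimate noted above.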