
    O SEGUNDO FAUSTO: UM POEMA ENCICLOPÉDICO-BARROQUIZANTE

    Data Assimilation by Artificial Neural Networks for an Atmospheric General Circulation Model

    Numerical weather prediction (NWP) uses atmospheric general circulation models (AGCMs) to predict weather based on current weather conditions. The process of entering observational data into the mathematical model to generate accurate initial conditions is called data assimilation (DA); it combines observation, forecasting, and filtering steps. This paper presents an approach that employs artificial neural networks (NNs) to emulate the local ensemble transform Kalman filter (LETKF) as a data assimilation method. The assimilation experiment uses the Simplified Parameterizations PrimitivE-Equation Dynamics (SPEEDY) model, an atmospheric general circulation model (AGCM), with synthetic observational data simulating the locations of meteorological balloons. For the data assimilation scheme, supervised NNs, namely multilayer perceptron (MLP) networks, are applied. After training, the method, hereafter called MLP-DA, acts as a data assimilation function. The NNs were trained with data from the first three months of 1982, 1983, and 1984. The experiment is performed for January 1985, running one data assimilation cycle with MLP-DA and synthetic observations. The numerical results demonstrate the effectiveness of the NN technique for atmospheric data assimilation: the NN analyses are very close to the LETKF analyses, and the differences in the monthly averages of the absolute temperature analyses are of order 10⁻². The simulations show that the major advantage of MLP-DA is its computational performance, since the analyses have similar quality: one assimilation cycle with MLP-DA is 90 times faster in CPU time than one LETKF cycle, with the mean analyses used to run the forecast experiment.
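
    The abstract describes the MLP-DA scheme only at a high level. The following is a minimal sketch, not the authors' code, of the underlying idea: a multilayer perceptron is trained on archived pairs of (model background + observations) mapped to LETKF analyses, and then replaces the filter at analysis time. The array shapes, the use of scikit-learn's MLPRegressor, and the toy "analysis" formula used to generate training targets are all assumptions made for illustration.

```python
# Minimal sketch of the MLP-DA idea, under the assumptions stated above.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

n_samples = 2000   # assimilation cycles available for training (assumption)
n_state   = 16     # grid points of one model variable at one level (assumption)
n_obs     = 4      # synthetic "balloon" observation locations (assumption)

# Synthetic stand-ins for the training archive:
#   background   -> the model's short-range forecast on the grid
#   observations -> synthetic obs at the balloon locations
#   analysis     -> the LETKF analysis the network must learn to emulate
background   = rng.normal(size=(n_samples, n_state))
observations = background[:, :n_obs] + 0.1 * rng.normal(size=(n_samples, n_obs))
analysis     = 0.7 * background + 0.3 * np.pad(
    observations, ((0, 0), (0, n_state - n_obs)), mode="edge")

# Network input: background state concatenated with the observations.
X = np.hstack([background, observations])
y = analysis

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
mlp.fit(X[:1500], y[:1500])

# At assimilation time, the trained network plays the role of the LETKF:
x_analysis = mlp.predict(X[1500:])
print("emulation RMSE:", np.sqrt(np.mean((x_analysis - y[1500:]) ** 2)))
```

    In this toy setup the trained network returns the analysis directly from the current background and observations, without running the ensemble filter, which is where the reported CPU-time advantage of MLP-DA comes from.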

    Turbulent Parameterization for CCATT-BRAMS by GPU

    One strategy for increasing the performance of computational codes is hybrid computing, which combines CPU with GPU and/or FPGA. Two turbulence parameterizations, Smagorinsky and Mellor-Yamada, were coded for GPU in the CATT-BRAMS code. The CUDA standard was used in the implementation. The results show a significant speed-up of the model.
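
    As a rough illustration of the kind of GPU offload the abstract describes (and not the CATT-BRAMS implementation itself), the sketch below expresses a simplified two-dimensional Smagorinsky eddy-viscosity calculation as a CUDA kernel written in Python with Numba. The grid size, the Smagorinsky constant, the filter width, and the host-side driver are assumptions for illustration only.

```python
# Illustrative sketch only: Smagorinsky eddy viscosity on the GPU via Numba CUDA.
import math
import numpy as np
from numba import cuda

CS    = 0.17     # Smagorinsky constant (assumed value)
DELTA = 1000.0   # filter width / grid spacing in metres (assumed value)

@cuda.jit
def smagorinsky_kernel(u, v, km):
    # km = (Cs * Delta)^2 * |S|, with |S| from centred finite differences.
    i, j = cuda.grid(2)
    nx, ny = u.shape
    if i >= 1 and i < nx - 1 and j >= 1 and j < ny - 1:
        dudx = (u[i + 1, j] - u[i - 1, j]) / (2.0 * DELTA)
        dvdy = (v[i, j + 1] - v[i, j - 1]) / (2.0 * DELTA)
        dudy = (u[i, j + 1] - u[i, j - 1]) / (2.0 * DELTA)
        dvdx = (v[i + 1, j] - v[i - 1, j]) / (2.0 * DELTA)
        shear = dudy + dvdx
        strain = math.sqrt(2.0 * (dudx * dudx + dvdy * dvdy) + shear * shear)
        km[i, j] = (CS * DELTA) ** 2 * strain

# Host side: copy wind fields to the device, launch the kernel, copy back.
nx, ny = 256, 256
u = cuda.to_device(np.random.rand(nx, ny))
v = cuda.to_device(np.random.rand(nx, ny))
km = cuda.device_array((nx, ny), dtype=np.float64)

threads = (16, 16)
blocks = ((nx + threads[0] - 1) // threads[0], (ny + threads[1] - 1) // threads[1])
smagorinsky_kernel[blocks, threads](u, v, km)
result = km.copy_to_host()
```

    The point carries over from the abstract: each grid point's viscosity is computed independently of the others, so the parameterization maps naturally onto a CUDA thread grid, which is what makes this kind of offload attractive for hybrid CPU/GPU execution.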