45 research outputs found

    Building Ensemble-Based Data Assimilation Systems for High-Dimensional Models

    Different strategies for implementing ensemble-based data assimilation systems are discussed. Ensemble filters such as ensemble Kalman filters and particle filters can be implemented so that they are nearly independent of the model into which they assimilate observations. This allows one to develop implementations that clearly separate the data assimilation algorithm from the numerical model. One possibility for coupling the model with data assimilation software is to use disk files to exchange the model state information between the model and the ensemble data assimilation methods. This offline coupling requires no changes to the model code, except possibly a component to simulate model error during the ensemble integration. However, using disk files can be inefficient, in particular when the time for the model integrations is not significantly larger than the time needed to restart the model for each ensemble member and to read and write the ensemble state information with the data assimilation program. In contrast, an online coupling strategy can be computationally much more efficient. In this strategy, subroutine calls for the data assimilation are inserted directly into the source code of an existing numerical model, augmenting it to become a data-assimilative model. This avoids model restarts as well as excessive writing of ensemble information to disk files. To allow for ensemble integrations, one of the subroutines modifies the parallelization of the model, or adds one if the model is not already parallelized. The data assimilation can then be performed efficiently on parallel computers. As the required modifications to the model code are very limited, this strategy allows one to quickly extend a model into a data assimilation system. In particular, the numerics of the model do not need to be changed, and the model itself does not need to be a subroutine.
The online coupling shows excellent computational scalability on supercomputers and is well suited for high-dimensional numerical models. Further, a clear separation of the model and data assimilation components allows both components to be developed independently, so new data assimilation methods can easily be added to the data assimilation system. Using the example of the Parallel Data Assimilation Framework (PDAF, http://pdaf.awi.de) and the ocean model NEMO, it is demonstrated how the online coupling can be achieved with minimal changes to the numerical model.
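As a hedged illustration of the online-coupling idea described above: the assimilation call sits directly inside the model's time loop and operates on the in-memory ensemble, with no restarts and no state files. All names here are hypothetical, and a simple stochastic ensemble Kalman filter stands in for the assimilation library (PDAF itself is Fortran and uses different interfaces):

```python
import numpy as np

def step_model(x, dt=0.1):
    """Toy dynamics standing in for the numerical model's time step."""
    return x + dt * np.sin(x)

def assimilate(ensemble, y, obs_idx, r=0.5, rng=None):
    """Minimal stochastic EnKF analysis acting on the in-memory ensemble.

    ensemble: (dim, m) array of member states; y: observations at obs_idx.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    dim, m = ensemble.shape
    H = np.zeros((len(obs_idx), dim))
    H[np.arange(len(obs_idx)), obs_idx] = 1.0           # observation operator
    Xp = ensemble - ensemble.mean(axis=1, keepdims=True)  # perturbations
    S = H @ Xp                                          # observed perturbations
    C = S @ S.T / (m - 1) + r * np.eye(len(obs_idx))    # innovation covariance
    K = (Xp @ S.T / (m - 1)) @ np.linalg.inv(C)         # Kalman gain
    yp = y[:, None] + rng.normal(0, np.sqrt(r), (len(obs_idx), m))
    return ensemble + K @ (yp - H @ ensemble)           # perturbed-obs update

# Online-coupled loop: the analysis is called between time steps,
# avoiding model restarts and disk-file exchange of ensemble states.
rng = np.random.default_rng(1)
ens = rng.normal(0.0, 1.0, (8, 20))        # 8-dim state, 20 members
truth = np.zeros(8)
for t in range(50):
    ens = step_model(ens)
    truth = step_model(truth)
    if t % 10 == 9:                        # assimilate every 10 steps
        y = truth[[0, 3, 6]] + rng.normal(0, 0.5, 3)
        ens = assimilate(ens, y, [0, 3, 6], rng=rng)
```

The key point matching the abstract: the model state never leaves memory between the forecast and the analysis step.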

    Efficient Ensemble-Based Data Assimilation for High-Dimensional Models with the Parallel Data Assimilation Framework PDAF

    We discuss how to build a data-assimilative model by augmenting a forecast model with data assimilation functionality for efficient ensemble data assimilation. The implementation strategy uses a direct connection between a coupled simulation model and the ensemble data assimilation software provided by the open-source Parallel Data Assimilation Framework (PDAF, http://pdaf.awi.de), which also provides fully implemented and parallelized ensemble filters. Combining a model with PDAF yields a data assimilation program with high flexibility and parallel scalability, requiring only small changes to the model. The direct connection is obtained by first extending the source code of the coupled model so that it can run an ensemble of model states. In addition, a filtering step is added using a combination of in-memory access and parallel communication to create an online-coupled ensemble assimilation program. The direct connection avoids the common need to stop and restart a whole forecast model to perform the assimilation of observations in the analysis step of ensemble-based filter methods such as ensemble Kalman or particle filters. Instead, the analysis step is performed between time steps and is independent of the actual model coupler. This strategy can be applied with forced uncoupled models or coupled Earth system models, where it even allows for cross-domain data assimilation. The structure, features, and performance of the data assimilation systems are discussed using the examples of the ocean circulation models MITgcm and NEMO.
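Running an ensemble requires splitting the set of model processes into independent groups, one per ensemble member. A minimal sketch of that rank-to-member mapping (pure arithmetic standing in for an MPI communicator split such as PDAF performs; the function name and layout are illustrative):

```python
def split_ensemble(world_size, n_members):
    """Assign each global process rank to (member_index, rank_within_member).

    Mimics splitting a global communicator into one sub-communicator per
    ensemble member, so each member integrates its model copy in parallel.
    """
    if world_size % n_members != 0:
        raise ValueError("world size must be a multiple of the ensemble size")
    procs_per_member = world_size // n_members
    return [(rank // procs_per_member, rank % procs_per_member)
            for rank in range(world_size)]

# 8 processes running a 4-member ensemble: 2 model processes per member.
layout = split_ensemble(8, 4)
# e.g. global rank 5 becomes process 1 of ensemble member 2
```

In a real MPI setting this corresponds to an `MPI_Comm_split` by member index; the analysis step then gathers the observed parts of the states across these groups.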

    The smoother extension of the nonlinear ensemble transform filter

    The recently proposed nonlinear ensemble transform filter (NETF) is extended to a fixed-lag smoother. The NETF approximates Bayes' theorem by applying a square-root update. The smoother (NETS) is derived and formulated in a joint framework with the filter. The new smoother method is evaluated using the low-dimensional, highly nonlinear Lorenz-96 model and a square-box configuration of the NEMO ocean model, which is nonlinear and of higher dimensionality. The new smoother is evaluated within the same assimilation framework against the local error subspace transform Kalman filter (LESTKF) and its smoother extension (LESTKS), which are state-of-the-art ensemble square-root Kalman techniques. For the Lorenz-96 model, both the filter NETF and its smoother extension NETS yield lower errors than the LESTKF and LESTKS for sufficiently large ensembles. In addition, the NETS shows a distinct dependence on the smoother lag, which results in a stronger error increase beyond the optimal lag of minimum error. For the experiment using NEMO, the smoothing in the NETS effectively reduces the errors in the state estimates compared to the filter. Very similar optimal smoothing lags are found for different state variables, which allows the lag to be tuned simultaneously. In comparison to the LESTKS, the smoothing with the NETS yields a smaller relative error reduction with respect to the filter result, and the optimal lag of the NETS is shorter in both experiments. This is explained by the distinct update mechanisms of the two filters. The comparison of both experiments shows that the NETS can provide better state estimates with similar smoother lags if the model exhibits a sufficiently high degree of nonlinearity, or if the observations are not restricted to be Gaussian with a linear observation operator.
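To make the "square-root update" of the NETF concrete: particle weights computed from the observation likelihood update the ensemble mean, and a matrix square root of the weight covariance transforms the perturbations. The sketch below follows one common formulation of this idea; normalization conventions and the optional random rotation of the published NETF may differ in detail:

```python
import numpy as np

def netf_analysis(ensemble, y, H, R_var):
    """Hedged sketch of a nonlinear ensemble transform filter analysis.

    ensemble: (dim, m) member states; y: observations; H: linear obs
    operator used here for simplicity; R_var: scalar obs error variance.
    """
    dim, m = ensemble.shape
    d = y[:, None] - H @ ensemble                 # innovation per member
    logw = -0.5 * np.sum(d * d, axis=0) / R_var   # Gaussian log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()                                  # normalized particle weights
    mean_a = ensemble @ w                         # weighted analysis mean
    Xp = ensemble - ensemble.mean(axis=1, keepdims=True)
    A = np.diag(w) - np.outer(w, w)               # weight covariance matrix
    vals, vecs = np.linalg.eigh(A)                # symmetric square root of A
    T = vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T
    return mean_a[:, None] + np.sqrt(m) * (Xp @ T)
```

Because the weight covariance annihilates the vector of ones, the transformed perturbations have zero mean, so the analysis ensemble is centered exactly on the weighted mean.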

    Building Ensemble-Based Data Assimilation Systems for High-Dimensional Models with the Parallel Data Assimilation Framework PDAF

    Data assimilation applications with high-dimensional numerical models place extreme demands on computational resources. Good scalability of the assimilation system is therefore necessary to make these applications feasible. Sequential data assimilation methods based on ensemble forecasts, like ensemble-based Kalman filters and particle filters, provide such scalability because the forecast of each ensemble member can be performed independently. This parallelism has to be combined with the parallelization of both the numerical model and the data assimilation algorithm. While the filter algorithms can be implemented so that they are nearly independent of the model into which they assimilate observations, they need to be coupled to the numerical model. Running separate programs for the model and the data assimilation step, coupled through disk files that exchange the model state information, can be inefficient for high-dimensional models. More efficient is an online coupling strategy in which subroutine calls for the data assimilation are inserted directly into the model source code, augmenting the numerical model to become a data-assimilative model. This strategy avoids model restarts as well as excessive writing of ensemble information to disk files and can hence achieve excellent computational scalability on supercomputers. The required modifications to the model code are very limited, so this strategy allows one to quickly extend a model into a data assimilation system. The online coupling is provided by the Parallel Data Assimilation Framework (PDAF, http://pdaf.awi.de), which is designed to simplify the implementation of scalable data assimilation systems based on existing numerical models. It further includes several optimized parallel filter algorithms. We will discuss the coupling strategy, features, and scalability of data assimilation systems based on PDAF.

    Genetic regulation of mouse liver metabolite levels.

    We profiled and analyzed 283 metabolites representing eight major classes of molecules, including Lipids, Carbohydrates, Amino Acids, Peptides, Xenobiotics, Vitamins and Cofactors, Energy Metabolism, and Nucleotides, in mouse liver from 104 inbred and recombinant inbred strains. We find that metabolites exhibit a wide range of variation, as has previously been observed for metabolites in blood serum. Using genome-wide association analysis, we mapped 40% of the quantified metabolites to at least one locus in the genome, and for 75% of the mapped loci we identified at least one candidate gene by local expression QTL analysis of the transcripts. Moreover, we validated two of the three significant loci examined by adenoviral overexpression of the genes in mice. In our GWAS results, we find that at significant loci the peak markers explained on average between 20% and 40% of the variation in the metabolites. Moreover, 39% of the loci found to regulate liver metabolites in mice were also found in human GWAS results for serum metabolites, supporting a similarity in the genetic regulation of metabolites between mice and humans. We also integrated the metabolomic data with transcriptomic and clinical phenotypic data to evaluate the extent of co-variation across biological scales.
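The "fraction of variation explained by a peak marker" reported above is, in the single-marker setting, the R² of a linear regression of metabolite level on genotype. A minimal sketch on simulated data (the effect size, noise level, and strain count are illustrative, not the study's values or pipeline):

```python
import numpy as np

def variance_explained(genotype, metabolite):
    """R^2 of a single-marker linear regression: the fraction of
    metabolite variance explained by genotype at that marker."""
    g = genotype - genotype.mean()
    y = metabolite - metabolite.mean()
    beta = (g @ y) / (g @ g)            # least-squares slope
    resid = y - beta * g
    return 1.0 - (resid @ resid) / (y @ y)

# Simulated strains: allele counts (0/1/2) plus environmental noise.
rng = np.random.default_rng(2)
geno = rng.integers(0, 3, 104).astype(float)
metab = 0.6 * geno + rng.normal(0, 1.0, 104)
r2 = variance_explained(geno, metab)    # fraction of variance explained
```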

    2006-2007 Mostly Mozart

    Date & Time: Thursday, March 1, 2007 at 7:30 pm & Friday, March 2, 2007 at 7:30 pm