
    Parameterization of Convective Transport in a Lagrangian Particle Dispersion Model and Its Evaluation

    This paper presents the revision and evaluation of the interface between the convective parameterization by Emanuel and Ćœivković-Rothman and the Lagrangian particle dispersion model “FLEXPART”, based on meteorological data from the European Centre for Medium-Range Weather Forecasts (ECMWF). The convection scheme relies on the ECMWF grid-scale temperature and humidity and provides a matrix necessary for the vertical convective particle displacement. The benefits of the revised interface relative to its previous version are presented. It is shown that, apart from minor fluctuations caused by the stochastic convective redistribution of the particles, the well-mixed criterion is fulfilled in simulations that include convection. Although for technical reasons the calculation of the displacement matrix differs somewhat between the forward and the backward simulations in time, the mean relative difference between the convective mass fluxes in forward and backward simulations is below 3% and can therefore be tolerated. A comparison of the convective mass fluxes and precipitation rates with those archived in the 40-yr ECMWF Reanalysis (ERA-40) data reveals that the convection scheme in FLEXPART produces upward mass fluxes and precipitation rates that are generally about 25% smaller than those from ERA-40. This result is interpreted as positive, because precipitation is known to be overestimated by the ECMWF model. Tracer transport simulations with and without convection are compared with surface and aircraft measurements from two tracer experiments and with 222Rn measurements from two aircraft campaigns. At the surface, no substantial differences between the model runs with and without convection are found, but at higher altitudes the model runs with convection produced better agreement with the measurements in most cases and indifferent results in the others. However, for the tracer experiments only a few measurements at higher altitudes are available, and for the aircraft campaigns the 222Rn emissions are highly uncertain. Other datasets better suited for the validation of convective transport in models are not available. Thus, there is a clear need for reliable datasets suitable for validating vertical transport in models.
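
    The role of the displacement matrix and the well-mixed criterion mentioned above can be illustrated with a small toy computation. The following sketch (plain NumPy, with an invented doubly stochastic 5-level matrix; it is not FLEXPART code) redistributes particles among vertical levels according to such a matrix and checks that an initially well-mixed particle population remains approximately uniform, apart from stochastic fluctuations.

```python
# Toy sketch (not FLEXPART code): stochastic vertical redistribution of
# Lagrangian particles with a convective displacement matrix, plus a simple
# check of the well-mixed criterion. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n_levels = 5

# Hypothetical displacement matrix: M[j, k] is the probability that a particle
# currently in level k is placed into level j. Columns sum to 1 (particles are
# conserved). The matrix is doubly stochastic here, so with equal air mass per
# level a uniform (well-mixed) particle distribution should stay uniform.
M = np.array([
    [0.70, 0.15, 0.08, 0.05, 0.02],
    [0.15, 0.60, 0.12, 0.08, 0.05],
    [0.08, 0.12, 0.60, 0.12, 0.08],
    [0.05, 0.08, 0.12, 0.60, 0.15],
    [0.02, 0.05, 0.08, 0.15, 0.70],
])
assert np.allclose(M.sum(axis=0), 1.0)

# Well-mixed initial state: particles spread uniformly over the levels.
n_particles = 100_000
levels = rng.integers(0, n_levels, size=n_particles)

# Draw each particle's new level from the column of M for its current level
# (vectorised inverse-CDF sampling).
cdf = np.cumsum(M, axis=0)                 # per-column CDFs over destinations
u = rng.random(n_particles)
new_levels = (u[:, None] > cdf.T[levels]).sum(axis=1)
new_levels = np.minimum(new_levels, n_levels - 1)   # guard against rounding

# Well-mixed criterion: the level occupation should remain close to uniform,
# apart from stochastic fluctuations.
frac = np.bincount(new_levels, minlength=n_levels) / n_particles
print(np.round(frac, 3))                   # each entry should be near 0.2
```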

    Near-Optimal Approximate Shortest Paths and Transshipment in Distributed and Streaming Models

    We present a method for solving the transshipment problem - also known as uncapacitated minimum cost flow - up to a multiplicative error of $1 + \varepsilon$ in undirected graphs with non-negative edge weights using a tailored gradient descent algorithm. Using $\tilde{O}(\cdot)$ to hide polylogarithmic factors in $n$ (the number of nodes in the graph), our gradient descent algorithm takes $\tilde{O}(\varepsilon^{-2})$ iterations, and in each iteration it solves an instance of the transshipment problem up to a multiplicative error of $\operatorname{polylog} n$. In particular, this allows us to perform a single iteration by computing a solution on a sparse spanner of logarithmic stretch. Using a randomized rounding scheme, we can further extend the method to finding approximate solutions for the single-source shortest paths (SSSP) problem. As a consequence, we improve upon prior work by obtaining the following results: (1) Broadcast CONGEST model: $(1 + \varepsilon)$-approximate SSSP using $\tilde{O}((\sqrt{n} + D)\varepsilon^{-3})$ rounds, where $D$ is the (hop) diameter of the network. (2) Broadcast congested clique model: $(1 + \varepsilon)$-approximate transshipment and SSSP using $\tilde{O}(\varepsilon^{-2})$ rounds. (3) Multipass streaming model: $(1 + \varepsilon)$-approximate transshipment and SSSP using $\tilde{O}(n)$ space and $\tilde{O}(\varepsilon^{-2})$ passes. The previously fastest SSSP algorithms for these models leverage sparse hop sets. We bypass the hop set construction; computing a spanner is sufficient with our method. The above bounds assume non-negative edge weights that are polynomially bounded in $n$; for general non-negative weights, running times scale with the logarithm of the maximum ratio between non-zero weights. Comment: Accepted to SIAM Journal on Computing. Preliminary version in DISC 2017. Abstract shortened to fit arXiv's limit of 1920 characters.
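
    To make the object being approximated concrete, the following small, centralised sketch states the transshipment (uncapacitated minimum cost flow) problem as a linear program on an invented four-node graph and solves it with SciPy; it illustrates only the problem definition, not the paper's gradient-descent, CONGEST, congested-clique, or streaming algorithms.

```python
# Small centralised illustration of the transshipment problem (uncapacitated
# minimum cost flow) that is being approximated above. This is NOT the paper's
# gradient-descent / CONGEST / streaming algorithm; graph and demands invented.
import numpy as np
from scipy.optimize import linprog

# Undirected graph as (u, v, weight); node 0 must ship 2 units to node 3.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 4.0), (1, 3, 2.5)]
n_nodes = 4
b = np.array([2.0, 0.0, 0.0, -2.0])       # demands, must sum to zero

# Replace every undirected edge by two opposite arcs with non-negative flow.
arcs = edges + [(v, u, w) for (u, v, w) in edges]
cost = np.array([w for (_, _, w) in arcs])

# Flow conservation: (outflow - inflow)(node) = b(node).
A_eq = np.zeros((n_nodes, len(arcs)))
for j, (u, v, _) in enumerate(arcs):
    A_eq[u, j] += 1.0
    A_eq[v, j] -= 1.0

res = linprog(cost, A_eq=A_eq, b_eq=b,
              bounds=[(0, None)] * len(arcs), method="highs")
print("optimal transshipment cost:", res.fun)   # 6.0 via the path 0-1-2-3
```

    The paper approximates exactly this objective to within a factor of $1 + \varepsilon$, with each of the $\tilde{O}(\varepsilon^{-2})$ gradient-descent iterations answered only polylog-approximately, which is what allows a sparse logarithmic-stretch spanner to be used in place of the full graph.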

    Resilience of Organic versus Conventional Farming Systems in Tropical Africa: The Kenyan Experience

    In Kenya, agriculture is largely carried out by smallholder farmers in a mixed, non-commercialised farming setting where the application of synthetic fertilisers and pesticides is minimal. Agricultural production is low and constrained by declining soil fertility, pests and diseases, and increasingly unpredictable weather due to global warming. This calls for more resilient farming systems.

    Empirically Analyzing the Effect of Dataset Biases on Deep Face Recognition Systems

    It is unknown what kinds of biases modern in-the-wild face datasets have because of their lack of annotation. A direct consequence is that total recognition rates alone provide only limited insight into the generalization ability of deep convolutional neural networks (DCNNs). We propose to empirically study the effect of different types of dataset bias on the generalization ability of DCNNs. Using synthetically generated face images, we study the face recognition rate as a function of interpretable parameters such as face pose and light. The proposed method allows valuable details about the generalization performance of different DCNN architectures to be observed and compared. In our experiments, we find that: 1) dataset bias indeed has a significant influence on the generalization performance of DCNNs; 2) DCNNs can generalize surprisingly well to unseen illumination conditions and to large sampling gaps in the pose variation; 3) using the presented methodology, we reveal that the VGG-16 architecture outperforms the AlexNet architecture at face recognition tasks because it generalizes much better to unseen face poses, although it has significantly more parameters; 4) we uncover a main limitation of current DCNN architectures, namely the difficulty of generalizing when different identities do not share the same pose variation; 5) we demonstrate that our findings on synthetic data also apply when learning from real-world data. Our face image generator is publicly available to enable the community to benchmark other DCNN architectures. Comment: Accepted to CVPR 2018 Workshop on Analysis and Modeling of Faces and Gestures (AMFG).
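
    The kind of analysis described above can be sketched as follows. In this hypothetical example (NumPy only), the trained DCNN is replaced by a placeholder embedding function so that the snippet is self-contained; with a real network and annotated synthetic images, the same loop would report the rank-1 recognition rate as a function of the pose parameter.

```python
# Hypothetical sketch of the analysis above: rank-1 identification rate as a
# function of an interpretable parameter (yaw angle). The trained DCNN is
# replaced by a placeholder embedding so the snippet is self-contained.
import numpy as np

rng = np.random.default_rng(0)
n_ids, dim = 50, 128
id_vectors = rng.standard_normal((n_ids, dim))      # placeholder identity codes

def embed(identity, yaw):
    """Placeholder embedding: identity signal plus pose-dependent perturbation."""
    v = id_vectors[identity] + 0.02 * abs(yaw) * rng.standard_normal(dim)
    return v / np.linalg.norm(v)

# Gallery: one frontal (yaw = 0) image per identity.
gallery = np.stack([embed(i, 0) for i in range(n_ids)])

# Probes at increasing yaw; with annotated synthetic images the recognition
# rate can be reported per pose bin in exactly this way.
for yaw in [0, 15, 30, 45, 60, 75]:
    probes = np.stack([embed(i, yaw) for i in range(n_ids)])
    sims = probes @ gallery.T            # cosine similarity (unit-norm vectors)
    acc = (sims.argmax(axis=1) == np.arange(n_ids)).mean()
    print(f"yaw {yaw:2d} deg: rank-1 accuracy = {acc:.2f}")
```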

    Informed MCMC with Bayesian Neural Networks for Facial Image Analysis

    Computer vision tasks are difficult because of the large variability in the data that is induced by changes in light, background, and partial occlusion, as well as by the varying pose, texture, and shape of objects. Generative approaches to computer vision allow us to overcome this difficulty by explicitly modeling the physical image formation process. Using generative object models, the analysis of an observed image is performed via Bayesian inference of the posterior distribution. This conceptually simple approach tends to fail in practice because of several difficulties stemming from sampling the posterior distribution: the high dimensionality and multi-modality of the posterior, as well as the expensive simulation of the rendering process. The main difficulty of sampling approaches in a computer vision context is choosing the proposal distribution accurately, so that maxima of the posterior are explored early and the algorithm quickly converges to a valid image interpretation. In this work, we propose to use a Bayesian Neural Network for estimating an image-dependent proposal distribution. Compared to a standard Gaussian random-walk proposal, this accelerates the sampler in finding regions of high posterior value. In this way, we can significantly reduce the number of samples needed to perform facial image analysis. Comment: Accepted to the Bayesian Deep Learning Workshop at NeurIPS 2018.
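
    The contrast between a plain random-walk proposal and an informed, image-dependent proposal can be illustrated with a generic Metropolis-Hastings sampler on a toy two-dimensional target. The "informed" independence proposal below merely stands in for the distribution that the Bayesian neural network would predict; the target, proposal parameters, and step sizes are all invented for illustration.

```python
# Generic Metropolis-Hastings sketch on a toy 2-D target: a plain Gaussian
# random-walk proposal versus an "informed" independence proposal standing in
# for the image-dependent distribution predicted by the Bayesian neural
# network. Target, proposal parameters and step sizes are invented.
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """Toy unnormalised log-posterior: a narrow Gaussian away from the origin."""
    mu = np.array([3.0, -2.0])
    return -0.5 * np.sum((x - mu) ** 2) / 0.1

def mh(n_steps, propose, log_q=None):
    """Metropolis-Hastings; log_q is needed for non-symmetric proposals."""
    x, samples, accepted = np.zeros(2), [], 0
    for _ in range(n_steps):
        x_new = propose(x)
        log_alpha = log_target(x_new) - log_target(x)
        if log_q is not None:                        # Hastings correction
            log_alpha += log_q(x) - log_q(x_new)
        if np.log(rng.random()) < log_alpha:
            x, accepted = x_new, accepted + 1
        samples.append(x)
    return np.array(samples), accepted / n_steps

# 1) Gaussian random walk (symmetric, so no correction term is needed).
rw_samples, rw_rate = mh(5000, lambda x: x + 0.3 * rng.standard_normal(2))

# 2) "Informed" independence proposal, roughly centred on the right region
#    (the role played by the Bayesian neural network's prediction).
q_mu, q_sigma = np.array([2.8, -1.8]), 0.5
propose_informed = lambda x: q_mu + q_sigma * rng.standard_normal(2)
log_q = lambda x: -0.5 * np.sum((x - q_mu) ** 2) / q_sigma**2
inf_samples, inf_rate = mh(5000, propose_informed, log_q)

print("acceptance rates:", rw_rate, inf_rate)
print("posterior mean estimates:", rw_samples.mean(0), inf_samples.mean(0))
```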

    Morphable Face Models - An Open Framework

    In this paper, we present a novel open-source pipeline for face registration based on Gaussian processes, as well as an application to face image analysis. Non-rigid registration of faces is important for many applications in computer vision, such as the construction of 3D Morphable face models (3DMMs). Gaussian Process Morphable Models (GPMMs) unify a variety of non-rigid deformation models, with B-splines and PCA models as examples. GPMMs separate problem-specific requirements from the registration algorithm by incorporating domain-specific adaptations as a prior model. The novelties of this paper are the following: (i) We present a strategy and modeling technique for face registration that considers symmetry, multi-scale and spatially-varying details. The registration is applied to neutral faces and facial expressions. (ii) We release an open-source software framework for registration and model-building, demonstrated on the publicly available BU3D-FE database. The released pipeline also contains an implementation of Analysis-by-Synthesis model adaptation to 2D face images, tested on the Multi-PIE and LFW databases. This enables the community to reproduce, evaluate and compare the individual steps from registration to model-building and 3D/2D model fitting. (iii) Along with the framework release, we publish a new version of the Basel Face Model (BFM-2017) with an improved age distribution and an additional facial expression model.
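
    A minimal sketch of the multi-scale Gaussian-process prior idea behind GPMMs is given below, assuming a toy one-dimensional point set and a kernel built as a sum of squared-exponential components at three invented scales; the released registration and model-building pipeline itself is not used here.

```python
# Minimal sketch of a multi-scale Gaussian-process deformation prior in the
# spirit of GPMMs: a sum of squared-exponential kernels at three invented
# scales, sampled on a toy 1-D point set (NumPy only, not the released
# registration/model-building pipeline).
import numpy as np

rng = np.random.default_rng(0)

def se_kernel(x, y, scale, length):
    """Squared-exponential kernel with amplitude `scale` and lengthscale `length`."""
    d2 = (x[:, None] - y[None, :]) ** 2
    return scale**2 * np.exp(-0.5 * d2 / length**2)

def multi_scale_kernel(x, y):
    return (se_kernel(x, y, scale=10.0, length=50.0)   # coarse, global shape
            + se_kernel(x, y, scale=3.0, length=10.0)  # medium-scale detail
            + se_kernel(x, y, scale=1.0, length=2.0))  # fine detail

# Reference "shape": points on a line (stand-in for mesh vertex positions).
pts = np.linspace(0.0, 100.0, 200)
K = multi_scale_kernel(pts, pts) + 1e-4 * np.eye(len(pts))  # jitter for stability

# Draw random deformation fields from the zero-mean GP prior and apply them.
L = np.linalg.cholesky(K)
deformations = L @ rng.standard_normal((len(pts), 3))       # three samples
deformed_shapes = pts[:, None] + deformations
print(deformed_shapes.shape)    # (200, 3): three randomly deformed shapes
```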

    Markov Chain Monte Carlo for Automated Face Image Analysis

    We present a novel, fully probabilistic method to interpret a single face image with the 3D Morphable Model. The new method is based on Bayesian inference and makes use of unreliable image-based information. Rather than searching for a single optimal solution, we infer the posterior distribution of the model parameters given the target image. The method is a stochastic sampling algorithm with a propose-and-verify architecture based on the Metropolis–Hastings algorithm. The stochastic method can robustly integrate unreliable information and therefore does not rely on feed-forward initialization. The integrative concept is based on two ideas: a separation of proposal moves and their verification with the model (Data-Driven Markov Chain Monte Carlo), and filtering with the Metropolis acceptance rule. It does not need gradients and is less prone to local optima than standard fitters. We also introduce a new collective likelihood which models the average difference between the model and the target image rather than individual pixel differences. The average value shows a natural tendency towards a normal distribution, even when the individual pixel-wise differences are not Gaussian. We employ the new fitting method to calculate posterior models of 3D face reconstructions from single real-world images. A direct application of the algorithm with the 3D Morphable Model leads to a fully automatic face recognition system with competitive performance on the Multi-PIE database without any database adaptation.
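
    The collective likelihood described above can be sketched as a Gaussian likelihood on the average per-pixel difference, as opposed to a product of per-pixel Gaussians. The noise parameters and toy images below are invented; the snippet only illustrates why averaging makes the likelihood less sensitive to a few grossly wrong pixels.

```python
# Sketch of the collective-likelihood idea: a Gaussian likelihood on the
# average per-pixel difference rather than a product of per-pixel Gaussians.
# Noise parameters and toy "images" are invented for illustration only.
import numpy as np

def collective_log_likelihood(rendered, target, mean_diff=0.0, sigma=0.05):
    """Gaussian log-likelihood of the mean absolute per-pixel difference.

    The average of many per-pixel differences tends towards a normal
    distribution even when the individual differences are not Gaussian.
    """
    d = np.mean(np.abs(rendered - target))             # one scalar per image
    return -0.5 * ((d - mean_diff) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def per_pixel_log_likelihood(rendered, target, sigma=0.05):
    """Independent per-pixel Gaussian likelihood, for comparison."""
    r = (rendered - target) / sigma
    return -0.5 * np.sum(r**2) - r.size * np.log(sigma * np.sqrt(2 * np.pi))

# Toy comparison with a few grossly wrong pixels (e.g. an occluded patch):
# the collective likelihood is far less dominated by them.
rng = np.random.default_rng(0)
target = rng.random((64, 64, 3))
rendered = target + 0.02 * rng.standard_normal(target.shape)
rendered[:4, :4] += 1.0                                 # simulated occlusion
print(collective_log_likelihood(rendered, target))
print(per_pixel_log_likelihood(rendered, target))
```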

    Analyzing and Reducing the Damage of Dataset Bias to Face Recognition With Synthetic Data

    It is well known that deep learning approaches to face recognition suffer from various biases in the available training data. In this work, we demonstrate the large potential of synthetic data for analyzing and reducing the negative effects of dataset bias on deep face recognition systems. In particular, we explore two complementary application areas for synthetic face images: 1) Using fully annotated synthetic face images, we can study the face recognition rate as a function of interpretable parameters such as face pose. This enables us to systematically analyze the effect of different types of dataset biases on the generalization ability of neural network architectures. Our analysis reveals that deeper neural network architectures can generalize better to unseen face poses. Furthermore, our study shows that current neural network architectures cannot disentangle face pose and facial identity, which limits their generalization ability. 2) We pre-train neural networks with large-scale synthetic data that is highly variable in face pose and the number of facial identities. After a subsequent fine-tuning with real-world data, we observe that the damage of dataset bias in the real-world data is largely reduced. Furthermore, we demonstrate that the size of real-world datasets can be reduced by 75% while maintaining competitive face recognition performance. The data and software used in this work are publicly available.
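
    The pre-train-then-fine-tune recipe from point 2) can be sketched as follows, assuming a PyTorch setup in which both the synthetic and the real datasets are replaced by random tensors and the network by a tiny placeholder; only the training structure (synthetic pre-training, then fine-tuning on a 25% subset of the real data with a new classification head) is illustrated, not the paper's actual models or data.

```python
# Hedged sketch of the recipe in point 2): pre-train on synthetic identities,
# then fine-tune on a 25% subset of the real data with a new classification
# head. Datasets are random tensors and the backbone is a tiny placeholder;
# only the training structure is illustrated, not the paper's actual setup.
import torch
from torch import nn
from torch.utils.data import DataLoader, Subset, TensorDataset

def make_dataset(n_images, n_classes):
    x = torch.randn(n_images, 3, 32, 32)        # stand-in face crops
    y = torch.randint(0, n_classes, (n_images,))
    return TensorDataset(x, y)

def train(model, dataset, epochs=1, lr=1e-3):
    loader = DataLoader(dataset, batch_size=64, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

backbone = nn.Sequential(                        # placeholder "DCNN"
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

# 1) Pre-train on large, highly variable synthetic data.
n_synth_ids = 100
model = nn.Sequential(backbone, nn.Linear(16, n_synth_ids))
train(model, make_dataset(2000, n_synth_ids))

# 2) Fine-tune the same backbone on only 25% of the real-world data.
n_real_ids = 50
real = make_dataset(1000, n_real_ids)
quarter = Subset(real, torch.randperm(len(real))[: len(real) // 4].tolist())
model = nn.Sequential(backbone, nn.Linear(16, n_real_ids))   # new head
train(model, quarter)
```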