
    Using an Arbitrary Moment Predictor to Investigate the Optimal Choice of Prognostic Moments in Bulk Cloud Microphysics Schemes

    Most bulk cloud microphysics schemes predict up to three standard properties of hydrometeor size distributions, namely the mass mixing ratio, number concentration, and reflectivity factor, in order of increasing scheme complexity. However, it is unclear whether this combination of properties is optimal for obtaining the best simulation of clouds and precipitation in models. In this study, a bin microphysics scheme has been modified to act like a bulk microphysics scheme. The new scheme can predict an arbitrary combination of two or three moments of the hydrometeor size distributions. As a first test of the arbitrary moment predictor (AMP), box model simulations of condensation, evaporation, and collision-coalescence are conducted for a variety of initial cloud droplet distributions and for a variety of configurations of AMP. The performance of AMP is assessed relative to the bin scheme from which it was built. The results show that no double- or triple-moment configuration of AMP can simultaneously minimize the error of all cloud droplet distribution moments. In general, predicting low-order moments helps to minimize errors in the cloud droplet number concentration, but predicting high-order moments tends to minimize errors in the cloud mass mixing ratio. The results have implications for which moments should be predicted by bulk microphysics schemes for the cloud droplet category.
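    For context, the three standard prognostic properties correspond to powers of the droplet diameter: the number concentration is the zeroth moment of the size distribution, the mass mixing ratio is proportional to the third, and the radar reflectivity factor to the sixth. Below is a minimal Python sketch of how arbitrary moments can be computed from a binned (bin-scheme-like) droplet size distribution; the bin layout and numerical values are made-up illustrations, not taken from the study.

```python
import numpy as np

def distribution_moment(diameters, number_density, order):
    """k-th moment of a binned size distribution: M_k = sum_i n_i * D_i**k."""
    return np.sum(number_density * diameters ** order)

# Hypothetical binned cloud droplet distribution: bin-centre diameters in metres
# and per-bin number densities; the values are invented for illustration.
D = np.linspace(2e-6, 50e-6, 40)
n = 1e8 * np.exp(-((D - 15e-6) / 8e-6) ** 2)

M0 = distribution_moment(D, n, 0)  # ~ droplet number concentration
M3 = distribution_moment(D, n, 3)  # ~ mass mixing ratio, up to pi * rho_w / (6 * rho_air)
M6 = distribution_moment(D, n, 6)  # ~ radar reflectivity factor
print(M0, M3, M6)
```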

    Neutrality: A Necessity for Self-Adaptation

    Self-adaptation is used in all main paradigms of evolutionary computation to increase efficiency. We claim that the basis of self-adaptation is the use of neutrality. In the absence of external control, neutrality allows a variation of the search distribution without the risk of fitness loss.
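    As a concrete textbook instance of this idea (not taken from the paper): in a (1+1) evolution strategy with log-normal step-size self-adaptation, the step size is carried in the genotype but has no direct effect on fitness, so mutating it is a neutral move that only reshapes the search distribution of the offspring. A minimal Python sketch on a toy objective:

```python
import numpy as np

def sphere(x):
    """Toy objective to minimise."""
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
dim = 10
tau = 1.0 / np.sqrt(2 * dim)          # learning rate for the step size
x = rng.normal(size=dim)              # current solution
sigma = 1.0                           # self-adapted step size (neutral parameter)

for _ in range(2000):
    # Mutating sigma alone does not change fitness; it only alters the
    # distribution from which the offspring is drawn.
    child_sigma = sigma * np.exp(tau * rng.normal())
    child_x = x + child_sigma * rng.normal(size=dim)
    if sphere(child_x) <= sphere(x):  # (1+1) plus-selection
        x, sigma = child_x, child_sigma

print(sphere(x))
```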

    On Classes of Functions for which No Free Lunch Results Hold

    In a recent paper it was shown that No Free Lunch results hold for any subset F of the set of all possible functions from a finite set X to a finite set Y iff F is closed under permutation of X. In this article, we prove that the number of those subsets can be neglected compared to the overall number of possible subsets. Further, we present some arguments as to why problem classes relevant in practice are not likely to be closed under permutation.
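    To make the closure condition concrete, here is a toy enumeration (not from the article): for |X| = 3 and |Y| = 2 there are 2^8 = 256 subsets of functions from X to Y, but only 2^4 = 16 of them, the unions of the four orbits under permutation of X, are closed under permutation and hence subject to No Free Lunch.

```python
from itertools import combinations, permutations, product

X, Y = range(3), range(2)                  # tiny domain and codomain
funcs = list(product(Y, repeat=len(X)))    # all |Y|**|X| = 8 functions f: X -> Y as tuples

def closed_under_permutation(subset):
    """True if permuting the inputs of every f in the subset stays inside the subset."""
    s = set(subset)
    return all(tuple(f[p[i]] for i in range(len(X))) in s
               for f in s for p in permutations(X))

closed = total = 0
for r in range(len(funcs) + 1):
    for subset in combinations(funcs, r):
        total += 1
        closed += closed_under_permutation(subset)

print(closed, "of", total, "subsets are closed under permutation")  # 16 of 256
```

    Already at this size only a 1/16 fraction of subsets is permutation-closed, and the fraction shrinks rapidly as |X| and |Y| grow, since the number of orbits grows polynomially while the number of functions grows exponentially.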

    On PAC-Bayesian Bounds for Random Forests

    Existing guarantees in terms of rigorous upper bounds on the generalization error for the original random forest algorithm, one of the most frequently used machine learning methods, are unsatisfying. We discuss and evaluate various PAC-Bayesian approaches to derive such bounds. The bounds do not require additional hold-out data, because the out-of-bag samples from the bagging in the training process can be exploited. A random forest predicts by taking a majority vote of an ensemble of decision trees. The first approach is to bound the error of the vote by twice the error of the corresponding Gibbs classifier (classifying with a single member of the ensemble selected at random). However, this approach does not take into account the effect of errors of individual classifiers averaging out when taking the majority vote. This effect provides a significant boost in performance when the errors are independent or negatively correlated, but when the correlations are strong the advantage from taking the majority vote is small. The second approach, based on PAC-Bayesian C-bounds, takes dependencies between ensemble members into account, but it requires estimating correlations between the errors of the individual classifiers. When the correlations are high or the estimation is poor, the bounds degrade. In our experiments, we compute generalization bounds for random forests on various benchmark data sets. Because the individual decision trees already perform well, their predictions are highly correlated and the C-bounds do not lead to satisfactory results. For the same reason, the bounds based on the analysis of Gibbs classifiers are typically superior and often reasonably tight. Bounds based on a validation set, which come at the cost of a smaller training set, gave better performance guarantees but worse performance in most experiments.
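    A minimal sketch of the empirical quantities behind the first approach, not of the PAC-Bayesian bound itself: the out-of-bag error of each tree is averaged to estimate the Gibbs risk, and the majority-vote risk is then at most twice that value. The dataset, tree settings, and helper code below are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n, n_trees = len(y), 100

trees, oob_masks = [], []
for _ in range(n_trees):
    idx = rng.integers(0, n, n)                 # bootstrap sample with replacement
    oob = np.ones(n, dtype=bool)
    oob[idx] = False                            # points never drawn are out-of-bag
    trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    oob_masks.append(oob)

# Empirical Gibbs error: average out-of-bag error of the individual trees.
gibbs = np.mean([np.mean(t.predict(X[m]) != y[m]) for t, m in zip(trees, oob_masks)])

# Out-of-bag error of the majority vote (the random forest prediction).
votes = np.zeros((n, 2))
for t, m in zip(trees, oob_masks):
    votes[m] += np.eye(2)[t.predict(X[m])]
covered = votes.sum(axis=1) > 0                 # points that were out-of-bag at least once
vote_err = np.mean(votes[covered].argmax(axis=1) != y[covered])

print(f"Gibbs OOB error {gibbs:.3f}, majority vote {vote_err:.3f}, "
      f"factor-of-two bound {2 * gibbs:.3f}")
```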

    Rate Coefficients for the Collisional Excitation of Molecules: Estimates from an Artificial Neural Network

    An artificial neural network (ANN) is investigated as a tool for estimating rate coefficients for the collisional excitation of molecules. The performance of such a tool can be evaluated by testing it on a dataset of collisionally-induced transitions for which rate coefficients are already known: the network is trained on a subset of that dataset and tested on the remainder. Results obtained by this method are typically accurate to within a factor ~ 2.1 (median value) for transitions with low excitation rates and ~ 1.7 for those with medium or high excitation rates, although 4% of the ANN outputs are discrepant by a factor of 10 or more. The results suggest that ANNs will be valuable in extrapolating a dataset of collisional rate coefficients to include high-lying transitions that have not yet been calculated. For the asymmetric top molecules considered in this paper, the favored architecture is a cascade-correlation network that creates 16 hidden neurons during the course of training, with 3 input neurons to characterize the nature of the transition and one output neuron to provide the logarithm of the rate coefficient.
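    The sketch below mirrors that input/output layout with a fixed 16-hidden-unit feedforward regressor rather than a cascade-correlation network (which instead grows its hidden units one at a time during training), and it is trained on synthetic stand-in data; the descriptors, target function, and accuracy metric are assumptions for illustration only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data: three descriptors of a transition -> log10(rate coefficient).
# The study trains on known collisional rate coefficients; these values are invented.
rng = np.random.default_rng(1)
X = rng.uniform(size=(2000, 3))
y = -11.0 + 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fixed 16-hidden-unit network standing in for the cascade-correlation architecture.
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(X_train, y_train)

# Express accuracy as the multiplicative factor between predicted and true rates.
factor = 10.0 ** np.abs(net.predict(X_test) - y_test)
print("median factor error:", np.median(factor))
```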

    Brazilian Jewish Writers: A Journey in Progress


    Smooth Monotonic Networks

    Monotonicity constraints are powerful regularizers in statistical modelling. They can support fairness in computer-supported decision making and increase plausibility in data-driven scientific models. The seminal min-max (MM) neural network architecture ensures monotonicity, but often gets stuck in undesired local optima during training because of vanishing gradients. We propose a simple modification of the MM network using strictly increasing smooth non-linearities that alleviates this problem. The resulting smooth min-max (SMM) network module inherits the asymptotic approximation properties from the MM architecture. It can be used within larger deep learning systems trained end-to-end. The SMM module is considerably simpler and less computationally demanding than state-of-the-art neural networks for monotonic modelling. Still, in our experiments, it compared favorably to alternative neural and non-neural approaches in terms of generalization performance.
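    A minimal NumPy sketch of the underlying min-max idea with one possible smooth surrogate, a temperature-scaled log-sum-exp for the max and min; the paper's exact SMM parameterisation and choice of smooth non-linearity may differ. Non-negative weights make each linear unit, and hence the whole module, non-decreasing in every input.

```python
import numpy as np
from scipy.special import logsumexp

def smooth_min_max(x, W, b, beta=10.0):
    """Monotone scalar output for an input vector x.

    W has shape (groups, units, features) with non-negative entries, so every
    linear unit is non-decreasing in every input; the hard max within groups and
    min across groups of the MM architecture are replaced by log-sum-exp surrogates.
    """
    z = np.einsum('guf,f->gu', W, x) + b             # linear units, grouped
    group_max = logsumexp(beta * z, axis=1) / beta   # smooth max within each group
    return -logsumexp(-beta * group_max) / beta      # smooth min across groups

# Toy monotonicity check with random non-negative weights (illustrative only).
rng = np.random.default_rng(0)
W = np.exp(rng.normal(size=(3, 4, 2)))   # positivity via exponential reparameterisation
b = rng.normal(size=(3, 4))
print([round(float(smooth_min_max(np.array([t, 0.0]), W, b)), 3)
       for t in np.linspace(-2.0, 2.0, 5)])          # values should be non-decreasing
```

    Smoothing the hard max and min keeps gradients flowing to all units, which is the motivation the abstract gives for the SMM modification.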