
    Regional wave propagation using the discontinuous Galerkin method

    We present an application of the discontinuous Galerkin (DG) method to regional wave propagation. The method makes use of unstructured tetrahedral meshes, combined with a time integration scheme solving the arbitrary high-order derivative (ADER) Riemann problem. This ADER-DG method is high-order accurate in space and time, which is beneficial for reliable simulations of high-frequency wavefields over long propagation distances. Because tetrahedral grids are easily adapted to complex geometries, undulating topography of the Earth's surface and interior interfaces can be readily implemented in the computational domain. The ADER-DG method is benchmarked for the accurate radiation of elastic waves excited by an explosive and a shear dislocation source. We compare real data measurements with synthetics of the 2009 L'Aquila event (central Italy). We take advantage of the geometrical flexibility of the approach to generate a European model composed of the 3-D <i>EPcrust</i> model combined with the depth-dependent <i>ak135</i> velocity model in the upper mantle. The results confirm the applicability of the ADER-DG method for regional-scale earthquake simulations, providing an alternative to existing methodologies.
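    To illustrate the kind of depth-dependent profile (such as ak135) that the abstract combines with a 3-D crustal model, here is a minimal sketch of a vertical P-wave travel time through a 1-D layered velocity model. This is not the ADER-DG solver itself; the layer thicknesses and velocities are approximate, illustrative values.

```python
# Illustrative sketch: one-way travel time of a vertically incident P wave
# through a 1-D layered velocity model (ak135-like values, approximate only).
layers = [  # (thickness_km, vp_km_per_s)
    (20.0, 5.8),   # upper crust
    (15.0, 6.5),   # lower crust
    (65.0, 8.04),  # uppermost mantle
]

def vertical_travel_time(layers):
    """t = sum(h_i / v_i) over the layers pierced by the ray."""
    return sum(h / v for h, v in layers)

t = vertical_travel_time(layers)
print(f"one-way vertical travel time to 100 km depth: {t:.2f} s")
```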

    Second Order PAC-Bayesian Bounds for the Weighted Majority Vote

    We present a novel analysis of the expected risk of the weighted majority vote in multiclass classification. The analysis takes correlation of predictions by ensemble members into account and provides a bound that is amenable to efficient minimization, which yields improved weighting for the majority vote. We also provide a specialized version of our bound for binary classification, which allows us to exploit additional unlabeled data for tighter risk estimation. In experiments, we apply the bound to improve weighting of trees in random forests and show that, in contrast to the commonly used first order bound, minimization of the new bound typically does not lead to degradation of the test error of the ensemble.
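    The predictor whose risk the PAC-Bayesian bound controls is the weighted majority vote itself, which can be sketched in a few lines. The predictions and weights below are toy values, not outputs of the bound minimization.

```python
import numpy as np

def weighted_majority_vote(predictions, weights, n_classes):
    """predictions: (n_members, n_samples) integer class labels;
    weights: (n_members,) nonnegative weights (the posterior rho)."""
    n_samples = predictions.shape[1]
    scores = np.zeros((n_samples, n_classes))
    for member_preds, w in zip(predictions, weights):
        # each member adds its weight to the class it predicts, per sample
        scores[np.arange(n_samples), member_preds] += w
    return scores.argmax(axis=1)

preds = np.array([[0, 1, 2],
                  [0, 1, 1],
                  [1, 1, 2]])
rho = np.array([0.1, 0.3, 0.6])
print(weighted_majority_vote(preds, rho, n_classes=3))  # -> [1 1 2]
```

    Note that the third member's larger weight overturns the first sample, where the other two members agree on class 0.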

    The role of the gamma function shape parameter in determining differences between condensation rates in bin and bulk microphysics schemes

    The condensation and evaporation rates predicted by bin and bulk microphysics schemes in the same model framework are compared in a statistical way using simulations of non-precipitating shallow cumulus clouds. Despite other fundamental disparities between the bin and bulk condensation parameterizations, the differences in condensation rates are predominantly explained by accounting for the width of the cloud droplet size distributions simulated by the bin scheme. While the bin scheme does not always predict a cloud droplet size distribution that is well represented by a gamma distribution function (which is assumed by bulk schemes), this fact appears to be of secondary importance for explaining why the two schemes predict different condensation and evaporation rates. The width of the cloud droplet size distribution is not well constrained by observations, and thus it is difficult to know how to appropriately specify it in bulk microphysics schemes. However, this study shows that enhancing our observations of this width and its behavior in clouds is important for accurately predicting condensation and evaporation rates.
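    The abstract's central point can be sketched numerically: for a gamma droplet-size distribution with shape k and scale theta, the relative dispersion is 1/sqrt(k), and holding the droplet number and the third moment (proportional to liquid water) fixed, the mean diameter E[D] = k*theta still depends on the assumed shape, so a first-moment-dependent condensation rate does too. The moment value below is arbitrary and for illustration only.

```python
import math

# Fixed third moment E[D^3] = k*(k+1)*(k+2)*theta^3 (arbitrary units),
# mimicking fixed liquid water at fixed droplet number.
M3 = 1000.0
for k in (1.0, 2.0, 4.0, 15.0):
    theta = (M3 / (k * (k + 1) * (k + 2))) ** (1.0 / 3.0)
    mean_d = k * theta                      # first moment E[D]
    rel_dispersion = 1.0 / math.sqrt(k)     # width relative to the mean
    print(f"shape k={k:5.1f}: relative dispersion {rel_dispersion:.2f}, "
          f"mean diameter {mean_d:.2f}")
```

    Narrower distributions (larger k) give a larger mean diameter at fixed number and mass, which is why the assumed width changes the predicted condensation rate.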

    The geometry of nonlinear least squares with applications to sloppy models and optimization

    Parameter estimation by nonlinear least squares minimization is a common problem with an elegant geometric interpretation: the possible parameter values of a model induce a manifold in the space of data predictions. The minimization problem is then to find the point on the manifold closest to the data. We show that the model manifolds of a large class of models, known as sloppy models, have many universal features; they are characterized by a geometric series of widths, extrinsic curvatures, and parameter-effects curvatures. A number of common difficulties in optimizing least squares problems are due to this common structure. First, algorithms tend to run into the boundaries of the model manifold, causing parameters to diverge or become unphysical. We introduce the model graph as an extension of the model manifold to remedy this problem. We argue that appropriate priors can remove the boundaries and improve convergence rates. We show that typical fits will have many evaporated parameters. Second, bare model parameters are usually ill-suited to describing model behavior; cost contours in parameter space tend to form hierarchies of plateaus and canyons. Geometrically, we understand this inconvenient parametrization as an extremely skewed coordinate basis and show that it induces a large parameter-effects curvature on the manifold. Using coordinates based on geodesic motion, these narrow canyons are in many cases transformed into a single quadratic, isotropic basin. We interpret the modified Gauss-Newton and Levenberg-Marquardt fitting algorithms as an Euler approximation to geodesic motion in these natural coordinates on the model manifold and the model graph, respectively. By adding a geodesic acceleration adjustment to these algorithms, we alleviate the difficulties from parameter-effects curvature, improving both efficiency and success rates at finding good fits. Comment: 40 pages, 29 figures
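    A minimal sketch of a Levenberg-Marquardt step with the geodesic acceleration correction described above, applied to a classic sloppy model: a sum of two exponentials. Finite differences stand in for analytic derivatives, and the damping control is deliberately simple; this is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

# Toy sloppy-model fit: y(t) = exp(-theta1*t) + exp(-theta2*t)
t = np.linspace(0, 3, 20)
theta_true = np.array([1.0, 0.3])
data = np.exp(-theta_true[0] * t) + np.exp(-theta_true[1] * t)

def residuals(theta):
    return np.exp(-theta[0] * t) + np.exp(-theta[1] * t) - data

def jacobian(theta, h=1e-6):
    r0 = residuals(theta)
    return np.column_stack([(residuals(theta + h * e) - r0) / h
                            for e in np.eye(len(theta))])

def lm_step(theta, lam, h=0.1):
    r = residuals(theta)
    J = jacobian(theta)
    A = J.T @ J + lam * np.eye(len(theta))
    v = np.linalg.solve(A, -J.T @ r)     # first-order ("velocity") step
    # second directional derivative of the residuals along v
    rvv = (residuals(theta + h * v) - 2 * r + residuals(theta - h * v)) / h**2
    a = np.linalg.solve(A, -J.T @ rvv)   # geodesic acceleration correction
    return theta + v + 0.5 * a

cost = lambda th: np.sum(residuals(th) ** 2)
theta, lam = np.array([2.0, 1.0]), 1.0
for _ in range(50):
    trial = lm_step(theta, lam)
    if cost(trial) < cost(theta):
        theta, lam = trial, lam * 0.7    # accept step, relax damping
    else:
        lam *= 2.0                       # reject step, increase damping
print(theta)  # parameters after fitting
```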

    The International Workshop on Osteoarthritis Imaging Knee MRI Segmentation Challenge: A Multi-Institute Evaluation and Analysis Framework on a Standardized Dataset

    Purpose: To organize a knee MRI segmentation challenge for characterizing the semantic and clinical efficacy of automatic segmentation methods relevant for monitoring osteoarthritis progression. Methods: A dataset partition consisting of 3D knee MRI from 88 subjects at two timepoints with ground-truth articular (femoral, tibial, patellar) cartilage and meniscus segmentations was standardized. Challenge submissions and a majority-vote ensemble were evaluated using Dice score, average symmetric surface distance, volumetric overlap error, and coefficient of variation on a hold-out test set. Similarities in network segmentations were evaluated using pairwise Dice correlations. Articular cartilage thickness was computed per-scan and longitudinally. Correlation between thickness error and segmentation metrics was measured using Pearson's coefficient. Two empirical upper bounds for ensemble performance were computed using combinations of model outputs that consolidated true positives and true negatives. Results: Six teams (T1-T6) submitted entries for the challenge. No significant differences were observed across all segmentation metrics for all tissues (p=1.0) among the four top-performing networks (T2, T3, T4, T6). Dice correlations between network pairs were high (>0.85). Per-scan thickness errors were negligible among T1-T4 (p=0.99) and longitudinal changes showed minimal bias (<0.03mm). Low correlations (<0.41) were observed between segmentation metrics and thickness error. The majority-vote ensemble was comparable to top performing networks (p=1.0). Empirical upper bound performances were similar for both combinations (p=1.0). Conclusion: Diverse networks learned to segment the knee similarly, and high segmentation accuracy did not correlate with cartilage thickness accuracy. Voting ensembles did not outperform individual networks but may help regularize individual models. Comment: Submitted to Radiology: Artificial Intelligence; fixed typo
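    Two quantities at the heart of the evaluation are the Dice score between binary segmentation masks and a pixel-wise majority-vote ensemble of several masks. A minimal sketch of both, on toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice score 2*|A intersect B| / (|A| + |B|) for boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def majority_vote(masks):
    """Pixel-wise strict-majority vote over a stack of boolean masks."""
    stack = np.stack([m.astype(bool) for m in masks])
    return stack.sum(axis=0) * 2 > len(masks)

m1 = np.array([[1, 1, 0], [0, 1, 0]])
m2 = np.array([[1, 0, 0], [0, 1, 1]])
m3 = np.array([[1, 1, 0], [0, 1, 0]])
ens = majority_vote([m1, m2, m3])
print(dice(m1, m2))       # pairwise overlap of two individual masks
print(ens.astype(int))    # the voted mask
```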

    Using Comparative Preference Statements in Hypervolume-Based Interactive Multiobjective Optimization

    The objective functions in multiobjective optimization problems are often non-linear, noisy, or not available in a closed form, and evolutionary multiobjective optimization (EMO) algorithms have been shown to be well applicable in this case. Here, our objective is to facilitate interactive decision making by saving function evaluations outside the "interesting" regions of the search space within a hypervolume-based EMO algorithm. We focus on a basic model where the Decision Maker (DM) is always asked to pick the most desirable solution among a set. In addition to the scenario where this solution is chosen directly, we present the alternative to specify preferences via a set of so-called comparative preference statements. Examples on standard test problems show the working principles, the competitiveness, and the drawbacks of the proposed algorithm in comparison with the recent iTDEA algorithm.
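    The hypervolume indicator that drives such an EMO algorithm is, for two objectives, just the area dominated by the nondominated points and bounded by a reference point. A minimal sketch with toy values (minimization in both objectives):

```python
def hypervolume_2d(front, ref):
    """front: list of (f1, f2) mutually nondominated points (minimization);
    ref: reference point dominated by all of them."""
    pts = sorted(front)            # ascending f1 implies descending f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        # each point contributes the rectangle it alone dominates
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(5.0, 5.0)))  # -> 12.0
```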

    Advancing Model-Building for Many-Objective Optimization Estimation of Distribution Algorithms

    Proceedings of: 3rd European Event on Bio-Inspired Algorithms for Continuous Parameter Optimisation (EvoNUM 2010) [associated to: EvoApplications 2010, European Conference on the Applications of Evolutionary Computation], Istanbul, Turkey, April 7-9, 2010. In order to achieve a substantial improvement of MOEDAs with respect to MOEAs, it is necessary to adapt their model-building algorithms. Most current model-building schemes use off-the-shelf machine learning methods, which are mostly error-based learning algorithms. However, the model-building problem has specific requirements that those methods do not meet. In this work we dissect this issue and propose a set of algorithms that can be used to bridge the gap in MOEDA application. A set of experiments is carried out in order to sustain our assertions. This work was supported by projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, SINPROB, CAM CONTEXTS S2009/TIC-1485 and DPS2008-07029-C02-0

    Measurement of p + d -> 3He + eta in S(11) Resonance

    We have measured the reaction p + d -> 3He + eta at a proton beam energy of 980 MeV, 88.5 MeV above threshold, using the new "germanium wall" detector system. A missing-mass resolution of the detector system of 2.6% was achieved. The angular distribution of the meson is forward peaked. We found a total cross section of (573 +- 83(stat.) +- 69(syst.)) nb. The excitation function for the present reaction is described by a Breit-Wigner form with parameters from photoproduction. Comment: 8 pages, 2 figures, corrected typos in header
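    The Breit-Wigner excitation function mentioned above has a simple nonrelativistic form, sigma(E) = sigma_peak * (Gamma/2)^2 / ((E - E_R)^2 + (Gamma/2)^2). The resonance parameters below (S11(1535): E_R ≈ 1535 MeV, Gamma ≈ 150 MeV) and the unit peak cross section are illustrative textbook values, not the fitted photoproduction parameters used in the paper.

```python
def breit_wigner(E, E_R=1535.0, Gamma=150.0, sigma_peak=1.0):
    """Nonrelativistic Breit-Wigner line shape (energies in MeV)."""
    half = Gamma / 2.0
    return sigma_peak * half**2 / ((E - E_R)**2 + half**2)

for E in (1460.0, 1535.0, 1610.0):
    print(f"E = {E:.0f} MeV: sigma/sigma_peak = {breit_wigner(E):.3f}")
```

    By construction the cross section peaks at E_R and falls to half its maximum at E_R +- Gamma/2.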

    Electron correlations for ground state properties of group IV semiconductors

    Valence energies for crystalline C, Si, Ge, and Sn with diamond structure have been determined using an ab-initio approach based on information from cluster calculations. Correlation contributions, in particular, have been evaluated in the coupled electron pair approximation (CEPA), by means of increments obtained for localized bond orbitals and for pairs and triples of such bonds. Combining these results with corresponding Hartree-Fock (HF) data, we recover about 95 % of the experimental cohesive energies. Lattice constants are overestimated at the HF level by about 1.5 %; correlation effects reduce these deviations to values which are within the error bounds of this method. A similar behavior is found for the bulk modulus: the HF values, which are significantly too high, are reduced by correlation effects to about 97 % of the experimental values. Comment: 22 pages, LaTeX, 2 figures
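    The incremental expansion used above builds the correlation energy from contributions of localized bonds, E_corr ≈ sum_i eps(i) + sum_{i<j} d_eps(i,j) + sum_{i<j<k} d_eps(i,j,k), which is then added to the Hartree-Fock energy. A sketch of the bookkeeping; the numerical increments and HF energy below are made-up placeholders, not CEPA results.

```python
# Method-of-increments bookkeeping with hypothetical values (hartree per cell).
one_body = {"bond1": -0.040, "bond2": -0.040}   # eps(i): single-bond increments
two_body = {("bond1", "bond2"): -0.010}         # d_eps(i,j): pair corrections
three_body = {}                                 # d_eps(i,j,k): usually small

E_hf = -7.800                                   # placeholder HF energy
E_corr = (sum(one_body.values())
          + sum(two_body.values())
          + sum(three_body.values()))
print(f"E_corr = {E_corr:.3f} hartree, E_total = {E_hf + E_corr:.3f} hartree")
```

    The expansion is useful because the pair and triple corrections decay quickly with bond separation, so a short sum converges toward the full correlation energy.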

    Quantum computation with trapped polar molecules

    We propose a novel physical realization of a quantum computer. The qubits are electric dipole moments of ultracold diatomic molecules, oriented along or against an external electric field. Individual molecules are held in a 1-D trap array, with an electric field gradient allowing spectroscopic addressing of each site. Bits are coupled via the electric dipole-dipole interaction. Using technologies similar to those already demonstrated, this design can plausibly lead to a quantum computer with ≳10^4 qubits, which can perform ~10^5 CNOT gates in the anticipated decoherence time of ~5 s. Comment: 4 pages, RevTeX 4, 2 figures. Edited for length and converted to RevTeX, but no substantial changes from earlier pdf version
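    A back-of-the-envelope estimate of the dipole-dipole coupling that mediates the two-qubit gates: for parallel dipoles separated perpendicular to their orientation, U = d^2 / (4*pi*eps0 * r^3). The dipole moment and trap spacing below are illustrative assumptions, not the paper's design values.

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
h = 6.62607015e-34        # Planck constant, J*s
debye = 3.33564e-30       # 1 Debye in C*m

d = 1.0 * debye           # assumed molecular dipole moment
r = 1.0e-6                # assumed trap-site spacing, 1 micron

# dipole-dipole interaction energy for this geometry
U = d**2 / (4 * math.pi * eps0 * r**3)
print(f"coupling frequency U/h: {U / h:.0f} Hz")  # sets the gate-rate scale
```

    Since the coupling scales as d^2 / r^3, a few-Debye dipole or tighter spacing raises the gate rate by orders of magnitude.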