
    Removing EU milk quotas, soft landing versus hard landing

    This paper analyses EU dairy policy reforms and mainly focuses on EU milk quota removal scenarios. The model used to evaluate the scenarios is a spatial equilibrium model of the dairy sector. It integrates the main competitor of the EU on world markets, Oceania, as well as the main importing regions in the rest of the world. The paper first assesses the impact of the Luxembourg scenario in the prospect of a future WTO agreement. It then provides a quantitative assessment of the impact of the abolition of EU milk quotas on the EU dairy sector, either through a gradual phasing out or through an abrupt abolition of milk quotas. Compared to a status-quo policy, the Luxembourg policy leads to a 7.6 percent milk price decrease and a 1.9 percent milk production increase. A gradual increase of milk quotas as recently proposed by the European Commission (+7% over 6 years) generates a 9% drop in the EU milk price (compared to the Luxembourg scenario) and an increase in production of 3.5%. A complete elimination of quotas leads to an additional 1% increase in production and an additional 3% drop in the EU milk price. As compared to the baseline scenario, in the Luxembourg scenario in 2014-15, producers gain 1.3 billion €, whereas in the same year they lose 2.6 billion € in the soft landing scenario. As such, the direct payments are more than sufficient to compensate producers for the loss of producer surplus in the Luxembourg scenario, but fall short of achieving full compensation in the soft landing scenario.
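
    A minimal sketch of the mechanism the paper quantifies, using a single-market linear supply and demand toy model with a production quota. All parameter values below are hypothetical and are not taken from the paper's spatial equilibrium model; they only illustrate why expanding or abolishing a binding quota pushes the price down and production up.

```python
# Toy partial-equilibrium milk market with a production quota.
# All parameters are illustrative, not taken from the paper.

def equilibrium(quota=None, a_d=100.0, b_d=2.0, a_s=10.0, b_s=1.5):
    """Linear demand Q_d = a_d - b_d*p and supply Q_s = a_s + b_s*p.
    If the quota binds, quantity is capped and price clears demand at the cap."""
    p_free = (a_d - a_s) / (b_d + b_s)      # unconstrained equilibrium price
    q_free = a_d - b_d * p_free
    if quota is None or quota >= q_free:
        return p_free, q_free
    p_quota = (a_d - quota) / b_d           # inverse demand at the quota level
    return p_quota, quota

p0, q0 = equilibrium(quota=30.0)            # status quo with a binding quota
p1, q1 = equilibrium(quota=30.0 * 1.07)     # quota expanded by 7%
p2, q2 = equilibrium(quota=None)            # quota abolished

for label, (p, q) in [("quota", (p0, q0)), ("quota +7%", (p1, q1)), ("no quota", (p2, q2))]:
    print(f"{label:10s} price={p:6.2f}  quantity={q:6.2f}")
```

    Moving down the demand curve as the quota is relaxed is the qualitative price and production response the paper measures region by region in its multi-market model.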

    SpECTRE: A Task-based Discontinuous Galerkin Code for Relativistic Astrophysics

    We introduce a new relativistic astrophysics code, SpECTRE, that combines a discontinuous Galerkin method with a task-based parallelism model. SpECTRE's goal is to achieve more accurate solutions for challenging relativistic astrophysics problems such as core-collapse supernovae and binary neutron star mergers. The robustness of the discontinuous Galerkin method allows for the use of high-resolution shock capturing methods in regions where (relativistic) shocks are found, while exploiting high-order accuracy in smooth regions. A task-based parallelism model allows efficient use of the largest supercomputers for problems with a heterogeneous workload over disparate spatial and temporal scales. We argue that the locality and algorithmic structure of discontinuous Galerkin methods will exhibit good scalability within a task-based parallelism framework. We demonstrate the code on a wide variety of challenging benchmark problems in (non)-relativistic (magneto)-hydrodynamics. We demonstrate the code's scalability including its strong scaling on the NCSA Blue Waters supercomputer up to the machine's full capacity of 22,380 nodes using 671,400 threads.Comment: 41 pages, 13 figures, and 7 tables. Ancillary data contains simulation input file
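
    The element-local structure that makes discontinuous Galerkin methods attractive for task-based parallelism can be seen in a toy setting. Below is a minimal sketch, not SpECTRE's algorithm: a modal P1 discontinuous Galerkin discretization of 1D linear advection with an upwind flux and SSP-RK2 time stepping. Each element's update depends only on its own coefficients and its two face fluxes, so in a task-based framework each element (or block of elements) could be scheduled as an independent task.

```python
import numpy as np

# Toy problem: u_t + a u_x = 0 on [0, 1], periodic, P1 modal DG, upwind flux.
a, nelem = 1.0, 64
h = 1.0 / nelem
x = (np.arange(nelem) + 0.5) * h                 # element centres

def rhs(U):
    """U has shape (nelem, 2): cell mean u0 and Legendre-P1 coefficient u1."""
    u0, u1 = U[:, 0], U[:, 1]
    flux_right = a * (u0 + u1)                   # upwind value at each right face
    flux_left = np.roll(flux_right, 1)           # periodic neighbour's right face
    du0 = -(flux_right - flux_left) / h          # mass matrix is diag(h, h/3)
    du1 = 3.0 * (2.0 * a * u0 - flux_right - flux_left) / h
    return np.stack([du0, du1], axis=1)

U = np.stack([np.sin(2 * np.pi * x), np.zeros(nelem)], axis=1)
dt, t_end, t = 0.15 * h / a, 1.0, 0.0
while t < t_end - 1e-12:
    step = min(dt, t_end - t)
    U1 = U + step * rhs(U)                       # SSP-RK2 (Heun) stage 1
    U = 0.5 * (U + U1 + step * rhs(U1))          # stage 2
    t += step

print("max error in cell means after one period:",
      np.max(np.abs(U[:, 0] - np.sin(2 * np.pi * x))))
```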

    Massively parallel approximate Gaussian process regression

    We explore how the big-three computing paradigms -- symmetric multi-processor (SMP), graphical processing units (GPUs), and cluster computing -- can together be brought to bear on large-data Gaussian process (GP) regression problems via a careful implementation of a newly developed local approximation scheme. Our methodological contribution focuses primarily on GPU computation, as this requires the most care and also provides the largest performance boost. However, in our empirical work we study the relative merits of all three paradigms to determine how best to combine them. The paper concludes with two case studies. One is a real-data fluid-dynamics computer experiment which benefits from the local nature of our approximation; the second is a synthetic-data example designed to find the largest design for which (accurate) GP emulation can be performed on a commensurate predictive set in under an hour. Comment: 24 pages, 6 figures, 1 table.
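
    A minimal sketch of the local approximation idea: each predictive location gets its own small exact GP fit conditioned only on nearby training points, so the cubic cost applies to the small local design rather than the full data set. The paper's scheme builds the local designs sequentially and estimates hyperparameters; the fixed squared-exponential kernel, lengthscale, nugget, and plain nearest-neighbour selection below are illustrative simplifications.

```python
import numpy as np

def sq_dists(A, B):
    """Pairwise squared Euclidean distances between rows of A and B."""
    return ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)

def local_gp_predict(X, y, Xstar, n=50, lengthscale=0.2, nugget=1e-4):
    """Predictive mean at each row of Xstar from a local GP on n nearest points."""
    preds = np.empty(len(Xstar))
    for i, xs in enumerate(Xstar):
        idx = np.argsort(((X - xs) ** 2).sum(-1))[:n]       # n nearest neighbours
        Xl, yl = X[idx], y[idx]
        K = np.exp(-sq_dists(Xl, Xl) / (2 * lengthscale**2))
        K[np.diag_indices_from(K)] += nugget                 # jitter / noise term
        k = np.exp(-((Xl - xs) ** 2).sum(-1) / (2 * lengthscale**2))
        preds[i] = k @ np.linalg.solve(K, yl)                # zero-mean GP predictive mean
    return preds

rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 2))
y = np.sin(5 * X[:, 0]) * np.cos(3 * X[:, 1]) + 0.01 * rng.standard_normal(2000)
Xstar = rng.uniform(size=(5, 2))
print(local_gp_predict(X, y, Xstar))
print(np.sin(5 * Xstar[:, 0]) * np.cos(3 * Xstar[:, 1]))     # truth, for comparison
```

    Because each test point is handled independently, the loop is embarrassingly parallel, which is what makes the scheme a natural fit for GPU and cluster computation.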

    End-to-end learning of brain tissue segmentation from imperfect labeling

    Segmenting a structural magnetic resonance imaging (MRI) scan is an important pre-processing step for analytic procedures and subsequent inferences about longitudinal tissue changes. Manual segmentation defines the current gold standard in quality but is prohibitively expensive. Automatic approaches are computationally intensive, slow at scale, and error prone because they usually involve many potentially faulty intermediate steps. In order to streamline the segmentation, we introduce a deep learning model based on volumetric dilated convolutions, which reduces both processing time and errors. Compared to its competitors, the model has a reduced set of parameters and thus is easier to train and much faster to execute. The contrast in performance between the dilated network and its competitors becomes apparent when both are tested on a large dataset of unprocessed human brain volumes. The dilated network consistently outperforms not only another state-of-the-art deep learning approach, the up-convolutional network, but also the ground truth on which it was trained. Not only can the speed of our model make large-scale analyses much easier, but we also believe it has great potential in a clinical setting where, with little to no substantial delay, a patient and provider can go over test results. Comment: Published as a conference paper at IJCNN 2017. Preprint version.
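
    A minimal PyTorch sketch of a volumetric dilated-convolution segmenter of the general kind described above. The layer count, channel width, dilation schedule, and class count are illustrative assumptions, not the paper's architecture verbatim; the point is that dilation grows the receptive field without pooling, so the output keeps the input's voxel grid and parameter counts stay small.

```python
import torch
import torch.nn as nn

class DilatedSegmenter(nn.Module):
    """Per-voxel classifier built from 3D dilated convolutions (illustrative)."""
    def __init__(self, in_ch=1, n_classes=3, width=21):
        super().__init__()
        dilations = [1, 1, 1, 2, 4, 8, 1]        # growing receptive field, no pooling
        layers, ch = [], in_ch
        for d in dilations:
            layers += [nn.Conv3d(ch, width, kernel_size=3, padding=d, dilation=d),
                       nn.BatchNorm3d(width), nn.ReLU(inplace=True)]
            ch = width
        layers += [nn.Conv3d(ch, n_classes, kernel_size=1)]   # per-voxel class scores
        self.net = nn.Sequential(*layers)

    def forward(self, x):                         # x: (batch, channels, D, H, W)
        return self.net(x)

model = DilatedSegmenter()
scan = torch.randn(1, 1, 64, 64, 64)              # random subvolume stand-in for an MRI crop
logits = model(scan)
print(logits.shape)                               # (1, 3, 64, 64, 64): one score map per class
```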

    Upscaling and Development of Linear Array Focused Laser Differential Interferometry for Simultaneous 1D Velocimetry and Spectral Profiling in High-Speed Flows

    In this research, a new configuration of linear array focused laser differential interferometry (LA-FLDI) is described. This configuration expands on previous implementations of LA-FLDI through the use of an additional Wollaston prism, which splits the typical single LA-FLDI column into two columns of FLDI point pairs. The additional column of probed locations allows for increased spatial sampling of frequency spectra as well as the addition of simultaneous wall-normal velocimetry measurements. The new configuration is used to measure the velocity profile and frequency content across a Mach 2 turbulent boundary layer at six wall-normal locations simultaneously. Features of the measured spectra are shown to agree with expectations, and the obtained boundary-layer velocity profile is compared with previously obtained PIV measurements. The increase in simultaneously probed points provided by LA-FLDI is ideal for impulse facilities, where spatial scanning via measurement-system translation is not possible within a single run and techniques such as PIV may not be feasible. Initial testing was also carried out to determine whether FLDI-based velocimetry can provide reasonable velocity profiles in adverse pressure gradients and over distributed roughness. Finally, a prototype photodiode array is proposed to simplify the optical setup for LA-FLDI, and initial test results are provided comparing the impulse response of the prototype array to that of the amplified photodetectors currently in use.
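
    A common way to extract a velocity from two closely spaced FLDI measurement locations is time-of-flight cross-correlation: both locations see nearly the same convecting density fluctuations, offset by the convection delay, so the lag of the cross-correlation peak gives the velocity. The sketch below demonstrates that estimate on synthetic signals; the spacing, sample rate, signal model, and noise level are hypothetical, and the paper's actual processing may differ.

```python
import numpy as np

fs, dx, u_true = 10e6, 1.0e-3, 500.0             # 10 MHz sampling, 1 mm spacing, 500 m/s (hypothetical)
n = 8192
rng = np.random.default_rng(1)

base = np.convolve(rng.standard_normal(n), np.ones(10) / 10, mode="same")  # low-pass "turbulence"
shift = int(round(dx / u_true * fs))             # convection delay in samples
upstream = base + 0.05 * rng.standard_normal(n)
downstream = np.roll(base, shift) + 0.05 * rng.standard_normal(n)           # same signal, delayed

xc = np.correlate(downstream - downstream.mean(),
                  upstream - upstream.mean(), mode="full")
lag = int(np.argmax(xc)) - (n - 1)               # samples by which downstream lags upstream
u_est = dx * fs / lag
print(f"imposed delay {shift} samples, recovered lag {lag} samples, "
      f"estimated velocity ~{u_est:.0f} m/s (true {u_true:.0f} m/s)")
```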

    Fourier analysis of finite element preconditioned collocation schemes

    The spectrum of the iteration operator of some finite element preconditioned Fourier collocation schemes is investigated. The first part of the paper analyses one-dimensional elliptic and hyperbolic model problems and the advection-diffusion equation. Analytical expressions of the eigenvalues are obtained with the use of symbolic computation. The second part of the paper considers the set of one-dimensional differential equations resulting from Fourier analysis (in the transverse direction) of the 2-D Stokes problem. All results agree with previous conclusions on the numerical efficiency of finite element preconditioning schemes.
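
    A numerical sketch of the kind of eigenvalue analysis described above, for a 1D periodic model problem -u'' + u = f: the Fourier collocation operator is preconditioned by a linear finite element discretization weighted by its consistent mass matrix. The model problem and this particular preconditioning form are illustrative assumptions; the script simply prints the spectrum of the preconditioned operator, which clusters near 1.

```python
import numpy as np

N = 32
h = 2 * np.pi / N
k = np.fft.fftfreq(N, d=1.0 / N)                     # integer wavenumbers

# Fourier collocation matrix for L = -d^2/dx^2 + 1 (diagonal in Fourier space).
F = np.fft.fft(np.eye(N), axis=0)
L_col = np.real(np.fft.ifft((k**2 + 1)[:, None] * F, axis=0))

def circulant(first_col):
    """Symmetric circulant matrix from its first column."""
    return np.array([np.roll(first_col, i) for i in range(N)]).T

K = circulant(np.r_[2.0, -1.0, np.zeros(N - 3), -1.0]) / h       # P1 stiffness matrix
M = circulant(np.r_[4.0, 1.0, np.zeros(N - 3), 1.0]) * h / 6.0   # P1 consistent mass matrix

P = np.linalg.solve(K + M, M @ L_col)                # preconditioned collocation operator
eig = np.sort(np.linalg.eigvals(P).real)
print(f"eigenvalues of the preconditioned operator lie in [{eig[0]:.4f}, {eig[-1]:.4f}]")
```

    Because all of these matrices are circulant, the eigenvalues could equally be written in closed form mode by mode, which is the kind of symbolic analysis the paper performs for its model problems.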

    Toward connecting core-collapse supernova theory with observations: I. Shock revival in a 15 Msun blue supergiant progenitor with SN 1987A energetics

    We study the evolution of the collapsing core of a 15 Msun blue supergiant supernova progenitor from the core bounce until 1.5 seconds later. We present a sample of hydrodynamic models parameterized to match the explosion energetics of SN 1987A. We find the spatial model dimensionality to be an important contributing factor in the explosion process. Compared to two-dimensional simulations, our three-dimensional models require lower neutrino luminosities to produce equally energetic explosions. We estimate that the convective engine in our models is 4% more efficient in three dimensions than in two dimensions. We propose that the greater efficiency of the convective engine found in three-dimensional simulations might be due to the larger surface-to-volume ratio of convective plumes, which aids in distributing energy deposited by neutrinos. We do not find evidence of either the standing accretion shock instability or turbulence being a key factor in powering the explosion in our models. Instead, the analysis of the energy transport in the post-shock region reveals characteristics of penetrative convection. The explosion energy decreases dramatically once the resolution is inadequate to capture the morphology of convection on large scales. This shows that the role of dimensionality is secondary to correctly accounting for the basic physics of the explosion. We also analyze information provided by particle tracers embedded in the flow, and find that the unbound material has relatively long residency times in two-dimensional models, while in three dimensions a significant fraction of the explosion energy is carried by particles with relatively short residency times. Comment: accepted for publication in the Astrophysical Journal.
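
    An illustrative sketch of a tracer residency-time measurement of the sort mentioned above: for each tracer trajectory, sum the time it spends inside a gain region bounded by a gain radius and a shock radius. The trajectories, radii, and the exact definition of residency time below are synthetic stand-ins and assumptions, not the paper's data or definition.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.5, 1500)                      # seconds after bounce
dt = t[1] - t[0]
r_gain, r_shock = 1.0e7, 4.0e7                       # cm, hypothetical fixed radii

def residency_time(radius):
    """Total time a tracer spends between the gain radius and the shock radius."""
    inside = (radius > r_gain) & (radius < r_shock)
    return inside.sum() * dt

# Synthetic tracers: each falls in, dwells in the gain region, then is ejected.
times = []
for _ in range(100):
    dwell = rng.uniform(0.05, 0.6)                   # seconds spent dwelling
    r = np.where(t < 0.2, 8e7 * (1 - t / 0.25),      # infall
         np.where(t < 0.2 + dwell, 2e7,              # dwell in the gain region
                  2e7 + 3e8 * (t - 0.2 - dwell)))    # ejection
    times.append(residency_time(r))

print(f"mean residency time {np.mean(times):.2f} s, spread {np.std(times):.2f} s")
```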