995 research outputs found
Characteristics of Swine Farm Sales Data under a De Facto Moratorium
Agribusiness, Farm Management, Livestock Production/Industries
Determinants of Swine Farm Sale Prices Under a de facto Moratorium
farm sale prices, constrained supply, premiums, appraised value, determinants of premium, Agricultural Finance, Farm Management, Land Economics/Use, Livestock Production/Industries
Faster integer multiplication using Plain Vanilla FFT primes
Assuming a conjectural upper bound for the least prime in an arithmetic progression, we show that n-bit integers may be multiplied in $O(n \log n \, 4^{\log^* n})$ bit operations.
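The "FFT prime" ingredient can be illustrated with a small sketch. The code below is not the paper's algorithm; it is a minimal demonstration of multiplying integers via a number-theoretic transform modulo a prime p for which p − 1 is divisible by a large power of two. The prime 998244353, its primitive root 3, and the 8-bit digit base are illustrative choices, not values from the paper.

```python
# Minimal sketch: integer multiplication via a number-theoretic transform (NTT)
# modulo an "FFT prime". Illustrative only; not the algorithm of the paper.

P = 998244353   # prime p = 119 * 2^23 + 1, so length-2^k NTTs exist for k <= 23
G = 3           # a primitive root modulo P

def ntt(a, invert=False):
    """In-place radix-2 NTT of a list of residues mod P (len(a) a power of two)."""
    n = len(a)
    # bit-reversal permutation
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    # butterfly passes
    length = 2
    while length <= n:
        w_len = pow(G, (P - 1) // length, P)
        if invert:
            w_len = pow(w_len, P - 2, P)
        for start in range(0, n, length):
            w = 1
            for k in range(start, start + length // 2):
                u = a[k]
                v = a[k + length // 2] * w % P
                a[k] = (u + v) % P
                a[k + length // 2] = (u - v) % P
                w = w * w_len % P
        length <<= 1
    if invert:
        n_inv = pow(n, P - 2, P)
        for i in range(n):
            a[i] = a[i] * n_inv % P
    return a

def multiply(x, y, base_bits=8):
    """Multiply non-negative integers x, y by convolving their 8-bit digits mod P.
    With 8-bit digits the convolution coefficients stay below P for operands up
    to a few thousand bytes; larger inputs would need several primes plus CRT."""
    base = 1 << base_bits
    ax, ay = [], []
    while x:
        ax.append(x % base)
        x //= base
    while y:
        ay.append(y % base)
        y //= base
    if not ax or not ay:
        return 0
    n = 1
    while n < len(ax) + len(ay):
        n <<= 1
    ax += [0] * (n - len(ax))
    ay += [0] * (n - len(ay))
    ntt(ax)
    ntt(ay)
    c = [ax[i] * ay[i] % P for i in range(n)]
    ntt(c, invert=True)
    result = 0
    for i, d in enumerate(c):
        result += d << (base_bits * i)   # big-int addition handles the carries
    return result

assert multiply(12345678901234567890, 98765432109876543210) == \
    12345678901234567890 * 98765432109876543210
```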
The occurrence of adverse events in low-risk non-survivors in pediatric intensive care patients: an exploratory study
We studied the occurrence of adverse events (AEs) in low-risk non-survivors (LNs), compared to low-risk survivors (LSs), high-risk non-survivors (HNs), and high-risk survivors (HSs) in two pediatric intensive care units (PICUs). The study was performed as a retrospective patient record review study, using a PICU-trigger tool. A random sample of 48 PICU patients (0–18 years) was chosen, stratified into four subgroups of 12 patients: LNs, LSs, HNs, and HSs. Primary outcome was the occurrence of AEs. The severity, preventability, and nature of the identified AEs were determined. In total, 45 AEs were found in 20 patients. The occurrence of AEs in the LN group was significantly higher compared to that in the LS group and HN group (AE occurrence: LN 10/12 patients, LS 1/12 patients; HN 2/12 patients; HS 7/12 patients; LN-LS difference, p < 0.001; LN-HN difference, p < 0.01). The AE rate in the LN group was significantly higher compared to that in the LS and HN groups (median [IQR]: LN 0.12 [0.07–0.29], LS 0 [0–0], HN 0 [0–0], and HS 0.03 [0.0–0.17] AE/PICU day; LN-LS difference, p < 0.001; LN-HN difference, p < 0.01). The distribution of the AEs among the four groups was as follows: 25 AEs (LN), 2 AEs (LS), 8 AEs (HN), and 10 AEs (HS). Fifteen of forty-five AEs were preventable. In 2/12 LN patients, death occurred after a preventable AE. Conclusion: The occurrence of AEs in LNs was higher compared to that in LSs and HNs. Some AEs were severe and preventable and contributed to mortality.
Distributed Online Learning for Joint Regret with Communication Constraints
We consider distributed online learning for joint regret with communication constraints. In this setting, there are multiple agents that are connected in a graph. In each round, an adversary first activates one of the agents to issue a prediction and provides a corresponding gradient, and then the agents are allowed to send a b-bit message to their neighbors in the graph. All agents cooperate to control the joint regret, which is the sum of the losses of the activated agents minus the losses evaluated at the best fixed common comparator parameters u. We observe that it is suboptimal for agents to wait for gradients that take too long to arrive. Instead, the graph should be partitioned into local clusters that communicate among themselves. Our main result is a new method that can adapt to the optimal graph partition for the adversarial activations and gradients, where the graph partition is selected from a set of candidate partitions. A crucial building block along the way is a new algorithm for online convex optimization with delayed gradient information that is comparator-adaptive, meaning that its joint regret scales with the norm of the comparator ||u||. We further provide near-optimal gradient compression schemes depending on the ratio of b and the dimension times the diameter of the graph.
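To make the joint-regret objective concrete, here is a toy simulation of the setting, not of the paper's partition-adaptive method: each agent holds its own iterate, an adversary activates one agent per round, and the joint regret is measured against a single fixed comparator u. The quadratic losses, step size, and no-communication baseline are all illustrative assumptions.

```python
import numpy as np

# Toy simulation of the joint-regret setting (baseline without communication);
# this is NOT the paper's adaptive method.
rng = np.random.default_rng(0)
n_agents, dim, T = 4, 5, 1000
eta = 0.1                                  # illustrative fixed step size
u = rng.normal(size=dim)                   # fixed comparator the losses favour
x = np.zeros((n_agents, dim))              # each agent's current prediction

total_loss, comparator_loss = 0.0, 0.0
for t in range(T):
    i = rng.integers(n_agents)             # adversary activates agent i
    a = rng.normal(size=dim)               # loss f_t(w) = 0.5 * (a.w - b)^2
    b = a @ u + 0.1 * rng.normal()         # noisy target, so u is a good comparator
    total_loss += 0.5 * (a @ x[i] - b) ** 2
    comparator_loss += 0.5 * (a @ u - b) ** 2
    grad = (a @ x[i] - b) * a              # gradient of f_t at the agent's point
    x[i] -= eta * grad                     # purely local online gradient descent

print("joint regret of the no-communication baseline:",
      total_loss - comparator_loss)
```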
MetaGrad: Adaptation using Multiple Learning Rates in Online Learning
We provide a new adaptive method for online convex optimization, MetaGrad, that is robust to general convex losses but achieves faster rates for a broad class of special functions, including exp-concave and strongly convex functions, but also various types of stochastic and non-stochastic functions without any curvature. We prove this by drawing a connection to the Bernstein condition, which is known to imply fast rates in offline statistical learning. MetaGrad further adapts automatically to the size of the gradients. Its main feature is that it simultaneously considers multiple learning rates, which are weighted in direct proportion to their empirical performance on the data by a new meta-algorithm. We provide three versions of MetaGrad. The full matrix version maintains a full covariance matrix and is applicable to learning tasks for which we can afford update time quadratic in the dimension. The other two versions provide speed-ups for high-dimensional learning tasks with an update time that is linear in the dimension: one is based on sketching, the other on running a separate copy of the basic algorithm per coordinate. We evaluate all versions of MetaGrad on benchmark online classification and regression tasks, on which they consistently outperform both online gradient descent and AdaGrad.
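As a rough sketch of the "multiple learning rates combined by a meta-algorithm" idea, and emphatically not the actual MetaGrad update (which weights the rates via tilted exponential weights over quadratic surrogate losses), one can run one gradient-descent copy per learning rate and let a simple exponential-weights meta-layer mix their predictions according to empirical performance:

```python
import numpy as np

# Sketch only: one online-gradient-descent copy per learning rate, mixed by an
# exponential-weights meta-layer. Not the actual MetaGrad algorithm.
class MultiEtaLearner:
    def __init__(self, dim, etas):
        self.etas = np.asarray(etas, dtype=float)   # grid of candidate learning rates
        self.x = np.zeros((len(etas), dim))         # one iterate per learning rate
        self.log_w = np.zeros(len(etas))            # meta-weights, kept in log space

    def predict(self):
        w = np.exp(self.log_w - self.log_w.max())
        w /= w.sum()
        return w @ self.x                           # performance-weighted combination

    def update(self, grad, losses):
        self.log_w -= losses                        # reward copies that did well
        self.x -= self.etas[:, None] * grad         # per-rate gradient step

# Toy usage with linear losses <g_t, x>.
rng = np.random.default_rng(1)
learner = MultiEtaLearner(dim=3, etas=[0.01, 0.1, 1.0])
for t in range(100):
    x_t = learner.predict()
    g_t = rng.normal(size=3)                        # gradient revealed by the adversary
    learner.update(g_t, losses=learner.x @ g_t)     # each copy's loss this round
```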
Notitie Denktank Overlijdensschade: a new direction for approaching and calculating damages in fatal cases
In 2009, a working group named the Denktank Overlijdensschade began studying a different model, adapted to the present day, for calculating damages in fatal cases. The aim was to arrive at a methodology that is transparent, also for surviving relatives, and that does justice to the relatives' entitlement to claim. In 2014, the Denktank Overlijdensschade completed its work by delivering a new calculation method. This Notitie describes how the Denktank arrived at this new approach to calculating damages in fatal cases, which studies underpin it, and what the proposed final calculation rule is. The core of the new methodology is the principle that the family is treated as a single economic unit, both before and after the death.
Entire solutions of hydrodynamical equations with exponential dissipation
We consider a modification of the three-dimensional Navier–Stokes equations and other hydrodynamical evolution equations with space-periodic initial conditions in which the usual Laplacian of the dissipation operator is replaced by an operator whose Fourier symbol grows exponentially as $\mathrm{e}^{|k|/k_{\mathrm d}}$ at high wavenumbers $|k|$. Using estimates in suitable classes of analytic functions, we show that the solutions with initially finite energy become immediately entire in the space variables and that the Fourier coefficients decay faster than $\mathrm{e}^{-C(|k|/k_{\mathrm d})\ln(|k|/k_{\mathrm d})}$ for any $C < 1/(2\ln 2)$. The same result holds for the one-dimensional Burgers equation with exponential dissipation but can be improved: heuristic arguments and very precise simulations, analyzed by the method of asymptotic extrapolation of van der Hoeven, indicate that the leading-order asymptotics is precisely of the above form with $C = C_\star = 1/(2\ln 2)$. The same behavior with a universal constant $C_\star$ is conjectured for the Navier–Stokes equations with exponential dissipation in any space dimension. This universality prevents the strong growth of intermittency in the far dissipation range which is obtained for ordinary Navier–Stokes turbulence. Possible applications to improved spectral simulations are briefly discussed.
Comment: 29 pages, 3 figures, Comm. Math. Phys., in press.
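For readability, the setup and the decay claim can be restated compactly. The operator name $D$ and the prefactor $C_1$ below are notational conveniences introduced here, not symbols taken from the abstract; the statement is only a paraphrase of the claim above.

```latex
% Dissipation operator defined through its Fourier symbol, and the decay claim.
\widehat{Du}(k) = \mathrm{e}^{|k|/k_{\mathrm d}}\,\widehat{u}(k),
\qquad
|\widehat{u}(k,t)| \le C_1\,
\mathrm{e}^{-C\,(|k|/k_{\mathrm d})\,\ln\!\left(|k|/k_{\mathrm d}\right)}
\quad \text{for every } C < \tfrac{1}{2\ln 2},\ t > 0 .
```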
