Real-World Repetition Estimation by Div, Grad and Curl
We consider the problem of estimating repetition in video, such as performing
push-ups, cutting a melon or playing violin. Existing work shows good results
under the assumption of static and stationary periodicity. As realistic video
is rarely perfectly static and stationary, the often preferred Fourier-based
measurements are inapt. Instead, we adopt the wavelet transform to better handle
non-static and non-stationary video dynamics. From the flow field and its
differentials, we derive three fundamental motion types and three motion
continuities of intrinsic periodicity in 3D. On top of this, the 2D perception
of 3D periodicity considers two extreme viewpoints. What follows are 18
fundamental cases of recurrent perception in 2D. In practice, to deal with the
variety of repetitive appearance, our theory implies measuring time-varying
flow and its differentials (gradient, divergence and curl) over segmented
foreground motion. For experiments, we introduce the new QUVA Repetition
dataset, reflecting reality by including non-static and non-stationary videos.
On the task of counting repetitions in video, we obtain favorable results
compared to a deep learning alternative.
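As a rough illustration of the quantities named above (a minimal sketch, not the authors' implementation; the function and the example flow are our own), the flow differentials can be estimated by finite differences over a dense optical-flow field:

    import numpy as np

    def flow_differentials(u, v):
        # u, v: 2D arrays holding the horizontal (u) and vertical (v)
        # components of a dense flow field on a regular pixel grid.
        du_dy, du_dx = np.gradient(u)   # derivatives along (rows=y, cols=x)
        dv_dy, dv_dx = np.gradient(v)
        div = du_dx + dv_dy             # divergence: expansion / contraction
        curl = dv_dx - du_dy            # scalar curl: in-plane rotation
        return du_dx, du_dy, dv_dx, dv_dy, div, curl

    # Sanity check: a purely rotational flow has zero divergence, constant curl.
    y, x = np.mgrid[0:64, 0:64].astype(float)
    _, _, _, _, div, curl = flow_differentials(-(y - 32), x - 32)

Tracking such signals over time, per segmented foreground region, yields the one-dimensional series to which a wavelet transform can then be applied.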
Parameterized model order reduction of delayed systems using an interpolation approach with amplitude and frequency scaling coefficients
When the geometric dimensions become electrically large or signal waveform rise times decrease, time delays must be included in the modeling. We present an innovative parameterized model order reduction (PMOR) technique for neutral delayed differential systems, based on an efficient and reliable combination of univariate model order reduction methods, amplitude and frequency scaling coefficients, and positive interpolation schemes. It provides parameterized reduced-order models that are passive by construction over the design space of interest. Pertinent numerical examples validate the proposed PMOR approach.
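As a sketch of the general shape such a technique can take (illustrative notation, not the paper's exact construction): given univariate reduced-order models H_k(s) built at design-space points g_k, a parameterized model may be assembled as

    H(s; g) ≈ sum_k w_k(g) alpha_k(g) H_k(beta_k(g) s),    w_k(g) >= 0,    sum_k w_k(g) = 1,

where alpha_k and beta_k are the amplitude and frequency scaling coefficients that align the individual responses before blending, and the nonnegative interpolation kernels w_k ensure that passivity of the scaled models carries over to the combined one.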
Quantifying the value of ecosystem services: a case study of honeybee pollination in the UK
There is concern that insect pollinators, such as honey bees, are currently declining in abundance and are under serious threat from environmental changes such as habitat loss and climate change, the use of pesticides in intensive agriculture, and emerging diseases. This paper aims to evaluate how much public support there would be for preventing further decline, so as to maintain the current number of bee colonies in the UK. The contingent valuation method (CVM) was used to obtain the willingness to pay (WTP) for a theoretical pollinator protection policy. Respondents were asked whether they would be willing to pay to support such a policy and, if so, how much they would pay. Results show that the mean WTP to support the bee protection policy was £1.37/week/household. Based on there being 24.9 million households in the UK, this is equivalent to £1.77 billion per year. This total value can show the importance of maintaining the overall pollination service to policy makers. We compare this total with estimates obtained using a simple market valuation of pollination for the UK.
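The aggregation behind the headline figure is a single multiplication, which is easy to verify:

    £1.37 per household per week × 52 weeks/year × 24.9 million households ≈ £1.77 billion per year.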
Moving Beyond Concentrations: The Challenge of Limiting Temperature Change
The UN Framework Convention on Climate Change shifted the attention of the policy community from stabilizing greenhouse gas emissions to stabilizing atmospheric greenhouse gas concentrations. While this represents a step forward, it does not go far enough. We find that, given the uncertainty in the climate system, focusing on atmospheric concentrations is likely to convey a false sense of precision. The causal chain between human activity and impacts is laden with uncertainty. From a benefit-cost perspective, it would be desirable to minimize the sum of mitigation costs and damages. Unfortunately, our ability to quantify and value impacts is limited. For the time being, we must rely on a surrogate. Focusing on temperature rather than on concentrations provides much more information on what constitutes an ample margin of safety. Concentrations mask too many uncertainties that are crucial for policy making.
Bounds for stop-loss premiums of stochastic sums (with applications to life contingencies).
In this paper we present, in a general setting, lower and upper bounds for the stop-loss premium of a (stochastic) sum of dependent random variables. To this end, use is made of the methodology of comonotonic variables and the convex ordering of risks, introduced by Kaas et al. (2000) and Dhaene et al. (2002a, 2002b), combined with actuarial conditioning. The lower bound approximates the real value of the stop-loss premium very accurately. However, the comonotonic upper bounds perform rather badly for some retentions. Therefore, we construct sharper upper bounds based upon the traditional comonotonic bounds. Making use of the ideas of Rogers and Shi (1995), the first upper bound is obtained as the comonotonic lower bound plus an error term. Next, this bound is refined by making the error term dependent on the retention in the stop-loss premium. Further, we study the case in which the stop-loss premium can be decomposed into two parts: one part which can be evaluated exactly, and another part to which comonotonic bounds are applied. As an application we study the bounds for the stop-loss premium of a random variable representing the stochastically discounted value of a series of cash flows with a fixed and stochastic time horizon. The paper ends with numerical examples illustrating the presented approximations. We apply the proposed bounds to life annuities, using Makeham's law, when the stochastic nature of interest rates is also taken into account by means of a Brownian motion.
Keywords: Comonotonicity; Life annuity; Stochastic time horizon; Stop-loss premium
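For orientation, the standard comonotonic bounds from this literature (our summary, not a quotation of the paper) read, for S = X_1 + ... + X_n and retention d,

    pi(d) = E[(S - d)_+],
    E[(E[S | Lambda] - d)_+]  <=  pi(d)  <=  E[(S^c - d)_+],
    S^c = F_{X_1}^{-1}(U) + ... + F_{X_n}^{-1}(U),

with U uniform on (0,1) and Lambda a conditioning random variable; both bounds follow from the convex ordering E[S | Lambda] <=_cx S <=_cx S^c.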
Robust Charge-based Qubit Encoding
We propose a simple encoding of charge-based quantum dot qubits which
protects against fluctuating electric fields by charge symmetry. We analyse the
reduction of coupling to noise due to nearby charge traps and present single
qubit gates. The relative advantage of the encoding increases with lower charge
trap density.
Comment: 6 Pages, 7 Figures. Published Version
Young Red Spheroidal Galaxies in the Hubble Deep Fields: Evidence for a Truncated IMF at ~2M_solar and a Constant Space Density to z~2
The optical-IR images of the Northern and Southern Hubble Deep Fields are
used to measure the spectral and density evolution of early-type galaxies. The
mean optical SED is found to evolve passively towards a mid F-star dominated
spectrum by z ~ 2. We demonstrate with realistic simulations that hotter
ellipticals would be readily visible if evolution progressed blueward and
brightward at z > 2, following a standard IMF. The colour distributions are
best fitted by a `red' IMF, deficient above ~2 M_solar and with a spread of
formation in the range 1.5 < z_f < 2.5. Traditional age dating is spurious in
this context: a distant elliptical can be young but appear red, with an
apparent age >3 Gyr independent of its formation redshift. Regarding density
evolution, we demonstrate that the sharp decline in numbers claimed at z > 1
results from a selection bias against distant red galaxies in the optical,
where the flux is too weak for morphological classification, but is remedied
with relatively modest IR exposures revealing a roughly constant space density
to z ~ 2. We point out that the lack of high-mass star formation inferred here
and the requirement of metals implicate cooling flows of pre-enriched gas in
the creation of the stellar content of spheroidal galaxies. Deep-field X-ray
images will be very helpful to examine this possibility.
Comment: 6 pages, 3 figures, submitted to Astrophysical Journal Letters,
typographical errors corrected, simulated images with different IMFs
illustrated at http://astro.berkeley.edu/~bouwens/ellip.htm
Metal Cooling in Simulations of Cosmic Structure Formation
The addition of metals to any gas can significantly alter its evolution by
increasing the rate of radiative cooling. In star-forming environments,
enhanced cooling can potentially lead to fragmentation and the formation of
low-mass stars, where metal-free gas clouds have been shown not to fragment.
Adding metal cooling to numerical simulations has traditionally required a
choice between speed and accuracy. We introduce a method that uses the
sophisticated chemical network of the photoionization software, Cloudy, to
include radiative cooling from a complete set of metals up to atomic number 30
(Zn) that can be used with large-scale three-dimensional hydrodynamic
simulations. Our method is valid over an extremely large temperature range (10
K < T < 10^8 K), up to hydrogen number densities of 10^12 cm^-3. At this
density, a sphere of 1 Msun has a radius of roughly 40 AU. We implement our
method in the adaptive mesh refinement (AMR) hydrodynamic/N-body code, Enzo.
Using cooling rates generated with this method, we study the physical
conditions that led to the transition from Population III to Population II star
formation. While C, O, Fe, and Si have been previously shown to make the
strongest contribution to the cooling in low-metallicity gas, we find that up
to 40% of the metal cooling comes from fine-structure emission by S, when solar
abundance patterns are present. At metallicities Z > 10^-4 Zsun, regions of
density and temperature exist where the gas is thermally unstable and also has a
cooling time shorter than its dynamical time. We identify these doubly unstable
regions as the most conducive to fragmentation. At high redshifts, the CMB
inhibits efficient cooling at low temperatures and, thus, reduces the size of
the doubly unstable regions, making fragmentation more difficult.
Comment: 19 pages, 12 figures, significant revision, including new figure
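For context, the two timescales compared above have the standard forms (our summary; the paper's exact definitions may differ):

    t_cool = (3/2) n k_B T / (n_H^2 Lambda(T, Z)),    t_dyn = sqrt(3 pi / (32 G rho)),

so a doubly unstable region is one in which the gas is thermally unstable and additionally satisfies t_cool < t_dyn, i.e. it radiates away its pressure support faster than it can dynamically respond.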
Photon Conserving Radiative Transfer around Point Sources in multi-dimensional Numerical Cosmology
Many questions in physical cosmology regarding the thermal and ionization
history of the intergalactic medium are now successfully studied with the help
of cosmological hydrodynamical simulations. Here we present a numerical method
that solves the radiative transfer around point sources within a
three-dimensional Cartesian grid. The method is energy conserving independently of
resolution: this ensures the correct propagation speeds of ionization fronts.
We describe the details of the algorithm and compute, as a first numerical
application, the ionized region surrounding a mini-quasar in a cosmological
density field at z=7.
Comment: 5 pages, 4 figures, submitted to ApJ
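For orientation, the photon-counting argument behind that claim is the textbook ionization-front equation (not a quotation of the paper): a source emitting Ndot_gamma ionizing photons per second into uniform neutral hydrogen of density n_H drives a front of radius r_I obeying

    4 pi r_I^2 n_H (dr_I/dt) = Ndot_gamma - (4/3) pi r_I^3 alpha_B n_H^2,

where alpha_B is the case-B recombination coefficient; a scheme that conserves photons by construction therefore reproduces the correct dr_I/dt even at coarse resolution.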
Dynamics of semi-flexible polymer solutions in the highly entangled regime
We present experimental evidence that the effective medium approximation
(EMA), developed by D.C. Morse [Phys. Rev. E 63, 031502 (2001)],
provides the correct scaling law of the macroscopic plateau modulus
G ~ rho^{4/3} L_p^{-1/3} (where rho is the contour length per
unit volume and L_p is the persistence length) of semi-flexible polymer
solutions, in the highly entangled concentration regime. Competing theories,
including a self-consistent binary collision approximation (BCA), have instead
predicted G ~ rho^{7/5} L_p^{-1/5}. We have tested both the EMA and
BCA scaling predictions using actin filament (F-actin) solutions which permit
experimental control of L_p independently of other parameters. A combination
of passive video particle tracking microrheology and dynamic light scattering
yields independent measurements of the elastic modulus G and L_p,
respectively. Thus we can distinguish between the two proposed laws, in
contrast to previous experimental studies, which focus on the (less
discriminating) concentration functionality of G.
Comment: 4 pages, 6 figures, Phys. Rev. Lett. (accepted)
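A quick numerical comparison (our illustration, based on the scalings quoted above) shows why controlling L_p is the sharper test: a tenfold change in rho at fixed L_p changes G by 10^{4/3} ≈ 21.5 (EMA) versus 10^{7/5} ≈ 25.1 (BCA), barely 17% apart, whereas a tenfold change in L_p at fixed rho changes G by 10^{-1/3} ≈ 0.46 (EMA) versus 10^{-1/5} ≈ 0.63 (BCA), a difference of roughly 36%.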