2,648 research outputs found
Drop rebound in clouds and precipitation
The possibility of rebound for colliding cloud drops was investigated by measuring the collection efficiency. The collection efficiency for 17 size pairs of relatively uncharged drops was measured in over 500 experimental runs using two techniques. The collection efficiencies fall in a narrow range of 0.60 to 0.70 even though the collector drop was varied between 63 and 326 microns and the size ratio from 0.05 to 0.33. In addition, the measured values of the collection efficiency (Epsilon) were below the computed values of the collision efficiency (E) for rigid spheres. It was therefore concluded that rebound was occurring for these sizes, since the inferred coalescence efficiencies (epsilon = Epsilon/E) are about 0.6 to 0.8. At a very small size ratio (r/R = p = 0.05, R = 326 microns) the inferred coalescence efficiency is in good agreement with the experimental findings for a supported collector drop. At somewhat larger size ratios the inferred values of epsilon are well above the results of supported-drop experiments, but show a slight correspondence in their dependence on collected drop size to two models of drop rebound. At a large size ratio (p = 0.73, R = 275 microns) the inferred coalescence efficiency differs significantly from all previous results.
A Monte Carlo Study on the Dynamical Fluctuations Inside Quark and Antiquark Jets
The dynamical fluctuations inside quark and antiquark jets are studied
using the Monte Carlo method. Quark and antiquark jets are identified from the
2-jet events in e+e- collisions at 91.2 GeV by checking them at the parton level.
It is found that a transition point exists inside both of these two kinds of
jets. At this point the jets are circular in the transverse plane with respect
to the property of dynamical fluctuations. The results are consistent with the
fact that the third jet (gluon jet) was historically first discovered in e+e-
collisions in the energy region 17-30 GeV.
Comment: 9 pages, 4 figures
On the high order multiplicity moments
The description of multiplicity distributions in terms of the ratios of
cumulants to factorial moments is analyzed both for data and for Monte
Carlo generated events. For the PYTHIA-generated events the moments are
investigated for a restricted range of phase space and for jets
reconstructed from single-particle momenta. The results cast doubt on the
validity of extended local parton-hadron duality and suggest the possibility of
more effective experimental investigations of the origin of the
observed structure in the dependence of the moments on their order.
Comment: 10 pages, 5 figures; corrected version to be published in JP
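The cumulant-to-factorial-moment ratios analyzed in this abstract can be estimated directly from an event sample. The sketch below is a minimal illustration, assuming the standard recursion K_q = F_q - sum_m C(q-1,m) K_{q-m} F_m that relates cumulants K_q to unnormalized factorial moments F_q; the function names are my own, not from the paper.

```python
# Sketch: ratios H_q = K_q / F_q of cumulants to factorial moments,
# computed from a list of per-event multiplicities. Names are hypothetical.
from math import comb

def factorial_moments(counts, qmax):
    """Unnormalized factorial moments F_q = <n(n-1)...(n-q+1)>, q = 1..qmax."""
    n_events = len(counts)
    F = [0.0] * (qmax + 1)  # F[0] unused
    for q in range(1, qmax + 1):
        total = 0.0
        for n in counts:
            prod = 1.0
            for k in range(q):
                prod *= (n - k)
            total += prod
        F[q] = total / n_events
    return F

def cumulant_ratios(counts, qmax):
    """H_q = K_q / F_q via K_q = F_q - sum_{m=1}^{q-1} C(q-1,m) K_{q-m} F_m."""
    F = factorial_moments(counts, qmax)
    K = [0.0] * (qmax + 1)
    for q in range(1, qmax + 1):
        K[q] = F[q] - sum(comb(q - 1, m) * K[q - m] * F[m]
                          for m in range(1, q))
    return [K[q] / F[q] if F[q] else float("nan")
            for q in range(1, qmax + 1)]
```

For a Poisson-distributed sample the H_q with q >= 2 fluctuate around zero, so deviations of these ratios from zero probe the non-Poissonian structure the abstract refers to.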
Deep Bilevel Learning
We present a novel regularization approach for training neural networks that
enjoys better generalization and lower test error than standard stochastic gradient
descent. Our approach is based on the principles of cross-validation, where a
validation set is used to limit model overfitting. We formulate these
principles as a bilevel optimization problem. This formulation allows us to
define the optimization of a cost on the validation set subject to another
optimization on the training set. Overfitting is controlled by introducing
weights on each mini-batch in the training set and by choosing their values so
that they minimize the error on the validation set. In practice, these weights
define mini-batch learning rates in a gradient descent update equation that
favor gradients with better generalization capabilities. Because of its
simplicity, this approach can be integrated with other regularization methods
and training schemes. We extensively evaluate our proposed algorithm on several
neural network architectures and datasets, and find that it consistently
improves the generalization of the model, especially when labels are noisy.
Comment: ECCV 201
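The mini-batch reweighting idea can be sketched in a few lines. The snippet below is my own simplification, not the paper's bilevel solver: instead of solving the inner optimization, it sets each batch weight from the alignment of that batch's gradient with a validation-set gradient, which captures the spirit of favoring gradients that also reduce validation error.

```python
# Hypothetical sketch of mini-batch reweighting (my simplification, not the
# paper's exact algorithm): weight each training-batch gradient by its
# positive alignment with a validation gradient, then take one SGD step.
import numpy as np

def weighted_sgd_step(w, train_grads, val_grad, lr=0.1):
    """One update: batch weights favor gradients aligned with validation."""
    sims = np.array([max(0.0, g @ val_grad) for g in train_grads])
    if sims.sum() == 0:
        # no batch agrees with the validation direction: fall back to uniform
        weights = np.ones(len(train_grads)) / len(train_grads)
    else:
        weights = sims / sims.sum()
    g = sum(wi * gi for wi, gi in zip(weights, train_grads))
    return w - lr * g
```

A batch whose gradient points against the validation gradient (e.g. one dominated by noisy labels) receives weight zero and does not influence the update.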
Criticality, Fractality and Intermittency in Strong Interactions
Assuming a second-order phase transition for the hadronization process, we
attempt to associate intermittency patterns in high-energy hadronic collisions
with fractal structures in configuration space, and the corresponding intermittency
indices with the isothermal critical exponent at the transition temperature. In
this approach, the most general multidimensional intermittency pattern
associated with a second-order phase transition of the strongly interacting
system is determined, and its relevance to present and future experiments is
discussed.
Comment: 15 pages + 2 figures (available on request), CERN-TH.6990/93, UA/NPPS-5-9
Novel Scaling Behavior for the Multiplicity Distribution under Second-Order Quark-Hadron Phase Transition
The deviation of the multiplicity distribution in a small bin from its
Poisson counterpart is studied within the Ginzburg-Landau description of the
second-order quark-hadron phase transition. A dynamical factor for the distribution
and a corresponding ratio are defined, and novel scaling behaviors between them are
found which can be used to detect the formation of quark-gluon plasma. The study
of these quantities is also very interesting for other multiparticle production
processes without a phase transition.
Comment: 4 pages in revtex, 5 figures in eps format, to appear in Phys. Rev.
QuaSI: Quantile Sparse Image Prior for Spatio-Temporal Denoising of Retinal OCT Data
Optical coherence tomography (OCT) enables high-resolution and non-invasive
3D imaging of the human retina but is inherently impaired by speckle noise.
This paper introduces a spatio-temporal denoising algorithm for OCT data at the
B-scan level using a novel quantile sparse image (QuaSI) prior. To remove
speckle noise while preserving image structures of diagnostic relevance, we
implement our QuaSI prior via median filter regularization coupled with a Huber
data fidelity model in a variational approach. For efficient energy
minimization, we develop an alternating direction method of multipliers (ADMM)
scheme using a linearization of median filtering. Our spatio-temporal method
can handle denoising of both single B-scans and temporally consecutive
B-scans, to obtain volumetric OCT data with an enhanced signal-to-noise ratio. Our
algorithm based on only 4 B-scans achieved performance comparable to averaging
13 B-scans and outperformed other current denoising methods.
Comment: submitted to MICCAI'1
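For intuition only, here is a deliberately crude 1-D sketch of the underlying idea: alternate a data-fidelity pull toward the noisy observations with median filtering acting as the prior step. This replaces the paper's Huber fidelity and linearized-median ADMM with a plain fixed-point alternation, and every name in it is hypothetical.

```python
# Crude sketch (NOT the paper's ADMM): alternate a data-fidelity step with
# median filtering as the prior step, on a 1-D signal with impulse noise.
def median_filter(x, k=3):
    """Simple 1-D median filter with window size k (edges use shorter windows)."""
    r = k // 2
    out = []
    for i in range(len(x)):
        window = sorted(x[max(0, i - r):i + r + 1])
        out.append(window[len(window) // 2])
    return out

def denoise(y, n_iter=10, step=0.5):
    """Alternate: pull the estimate toward the data, then median-filter it."""
    x = list(y)
    for _ in range(n_iter):
        x = [xi + step * (yi - xi) for xi, yi in zip(x, y)]  # data term
        x = median_filter(x)                                  # prior term
    return x
```

Isolated speckle-like spikes are suppressed by the median step each iteration, while the data term keeps the estimate anchored to the measurements; the actual method additionally handles the temporal stacking of consecutive B-scans.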
The random case of Conley's theorem: III. Random semiflow case and Morse decomposition
In the first part of this paper, we generalize the results of the author
\cite{Liu,Liu2} from the random flow case to the random semiflow case, i.e. we
obtain the Conley decomposition theorem for infinite-dimensional random dynamical
systems. In the second part, by introducing the backward orbit for a random
semiflow, we are able to decompose an invariant random compact set (e.g. a global
random attractor) into random Morse sets and connecting orbits between them,
which generalizes the Morse decomposition of invariant sets originated by
Conley \cite{Con} to the random semiflow setting and gives a positive answer
to an open problem put forward by Caraballo and Langa \cite{CL}.
Comment: 21 pages, no figures
Finite size scaling analysis of intermittency moments in the two dimensional Ising model
Finite size scaling is shown to work very well for the block variables used
in intermittency studies on a 2-d Ising lattice. The intermittency exponents so
derived exhibit the expected relations to the magnetic critical exponent of the
model.
Comment: Saclay-T93/063
The potential natural vegetation of large river floodplains - from dynamic to static equilibrium
The potential natural vegetation (PNV) is a useful benchmark for the restoration of large river floodplains because
very few natural reference reaches exist. Expert-based approaches and different types of ecological models
(static and dynamic) are commonly used for its estimation despite the conceptual differences they imply. For
natural floodplains a static concept of PNV is not reasonable, as natural disturbances cause a constant resetting of
succession. However, various forms of river regulation have disrupted the natural dynamics of most large
European rivers for centuries. Therefore, we asked whether the consideration of succession dynamics and time-dependent
habitat turnover are still relevant factors for the reconstruction of the PNV.
To answer this we compared the results of a simulation of the vegetation succession (1872–2016) of a segment
of the upper Rhine river after regulation (damming, straightening and bank protection) to different statistical
and expert-based modelling approaches for PNV reconstruction. The validation of the different PNV estimation
methods against a set of independent reference plots and the direct comparison of their results revealed very
similar performances. We therefore conclude that due to a lack of large disturbances, the vegetation of regulated
large rivers has reached a near-equilibrium state with the altered hydrologic regime and that a static perception
of its PNV may be justified. Consequently, statistical models seem to be the best option for its reconstruction
since they need relatively few resources (data, time, expert knowledge) and are reproducible.