Stochastic growth equations on growing domains
The dynamics of linear stochastic growth equations on growing substrates is
studied. The substrate is assumed to grow in time following the power law
$t^\gamma$, where the growth index $\gamma$ is an arbitrary positive number.
Two different regimes are clearly identified: for small $\gamma$ the interface
becomes correlated, and the dynamics is dominated by diffusion; for large
$\gamma$ the interface stays uncorrelated, and the dynamics is dominated by
dilution. In this second regime, for short time intervals and spatial scales
the critical exponents corresponding to the non-growing substrate situation are
recovered. For long time differences or large spatial scales the situation is
different. Large spatial scales show the uncorrelated character of the growing
interface. Long time intervals are studied by means of the auto-correlation and
persistence exponents. It becomes apparent that dilution is the mechanism by
which correlations are propagated in this second case. Comment: Published version.
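As a minimal illustration of the class of models described above, the sketch below integrates the 1D Edwards-Wilkinson equation on a fixed (non-growing) substrate, the baseline whose exponents the abstract says are recovered at short times and small scales. The choice of equation, lattice size, and noise amplitude are all illustrative assumptions, not the paper's setup; the growing-domain case would additionally rescale the lattice as $L(t)\sim t^\gamma$ and include the dilution term.

```python
import numpy as np

# Toy Euler-Maruyama integration of the 1D Edwards-Wilkinson equation,
#   dh/dt = nu * d2h/dx2 + eta(x, t),
# on a *fixed* periodic substrate (illustrative parameters, not the paper's).
rng = np.random.default_rng(0)
L, nu, dt, dx, steps = 256, 1.0, 0.01, 1.0, 2000
h = np.zeros(L)
for _ in range(steps):
    lap = np.roll(h, 1) - 2 * h + np.roll(h, -1)      # periodic Laplacian
    noise = rng.normal(0.0, np.sqrt(dt / dx), size=L)  # discretized white noise
    h += nu * dt / dx**2 * lap + noise
width = h.std()  # interface width W(L, t), the usual roughness observable
print(f"interface width after {steps} steps: {width:.3f}")
```

On a growing domain one would also stretch the lattice between noise kicks, which is precisely the dilution mechanism the abstract identifies as propagating correlations in the large-$\gamma$ regime.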
Specific heat studies of pure Nb3Sn single crystals at low temperature
Specific heat measurements performed on high purity vapor-grown Nb3Sn
crystals show clear features related to both the martensitic and
superconducting transitions. Our measurements indicate that the martensitic
anomaly does not display hysteresis, meaning that the martensitic transition
could be a weak first or a second order thermodynamic transition. Careful
measurements of the two transition temperatures display an inverse correlation
between both temperatures. At low temperature specific heat measurements show
the existence of a single superconducting energy gap feature. Comment: Accepted in Journal of Physics: Condensed Matter.
Globular Clusters: DNA of Early-Type galaxies?
This paper explores if the mean properties of Early-Type Galaxies (ETG) can
be reconstructed from "genetic" information stored in their GCs (i.e., in their
chemical abundances, spatial distributions and ages). This approach implies
that the formation of each globular cluster occurs in very massive stellar
environments, as suggested by some models that aim at explaining the presence
of multi-populations in these systems. The assumption that the relative number
of globular clusters to diffuse stellar mass depends exponentially on chemical
abundance, [Z/H], and the presence of two dominant GC sub-populations, blue and
red, allows the mapping of low-metallicity halos and of higher-metallicity (and
more heterogeneous) bulges. In particular, the masses of the low-metallicity
halos seem to scale up with dark matter mass through a constant. We also find a
dependence of the globular cluster formation efficiency on the mean projected
stellar mass density of the galaxies within their effective radii. The analysis
is based on a selected sub-sample of galaxies observed within the ACS Virgo
Cluster Survey of the {\it Hubble Space Telescope}. These systems were grouped,
according to their absolute magnitudes, in order to define composite fiducial
galaxies and look for a quantitative connection with their (also composite)
globular clusters systems. The results strengthen the idea that globular
clusters are good quantitative tracers of both baryonic and dark matter in
ETGs. Comment: 20 pages, 28 figures and 5 tables.
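The assumed exponential dependence of the GC-to-diffuse-stellar-mass ratio on metallicity can be sketched as follows. The normalization `A` and slope `alpha` below are made-up placeholder values, not the paper's calibration; the point is only the shape of the inversion from GC counts to stellar mass.

```python
import numpy as np

# Illustrative sketch (hypothetical calibration): if the number of GCs per
# unit diffuse stellar mass scales exponentially with metallicity,
#   N_GC / M_star = A * exp(alpha * [Z/H]),
# then counting the blue (metal-poor) and red (metal-rich) sub-populations
# lets one map the halo and bulge stellar masses separately.
A, alpha = 5e-9, -1.0  # per solar mass; placeholder values for illustration

def stellar_mass(n_gc, z_h):
    """Diffuse stellar mass implied by n_gc clusters at metallicity [Z/H]."""
    return n_gc / (A * np.exp(alpha * z_h))

halo_mass = stellar_mass(150, -1.5)    # blue, metal-poor sub-population
bulge_mass = stellar_mass(100, -0.3)   # red, metal-rich sub-population
print(f"halo ~ {halo_mass:.2e} Msun, bulge ~ {bulge_mass:.2e} Msun")
```

With a negative `alpha`, metal-poor environments form more clusters per unit stellar mass, so a modest blue GC count can still trace a massive halo, which is the sense in which the GCs act as quantitative tracers.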
A parallel computation approach for solving multistage stochastic network problems
The original publication is available at www.springerlink.com.

This paper presents a parallel computation approach for the efficient solution of very
large multistage linear and nonlinear network problems with random parameters. These
problems result from particular instances of models for the robust optimization of network
problems with uncertainty in the values of the right-hand side and the objective function
coefficients. The methodology considered here models the uncertainty using scenarios to
characterize the random parameters. A scenario tree is generated and, through the use of
full-recourse techniques, an implementable solution is obtained for each group of scenarios
at each stage along the planning horizon.
As a consequence of the size of the resulting problems, and the special structure of their
constraints, these models are particularly well-suited for the application of decomposition
techniques, and the solution of the corresponding subproblems in a parallel computation
environment. An augmented Lagrangian decomposition algorithm has been implemented
on a distributed computation environment, and a static load balancing approach has been
chosen for the parallelization scheme, given the subproblem structure of the model. Large
problems – 9000 scenarios and 14 stages with a deterministic equivalent nonlinear model
having 166000 constraints and 230000 variables – are solved in 45 minutes on a cluster of
four small (11 Mflops) workstations. An extensive set of computational experiments is
reported; the numerical results and running times obtained for our test set, composed of
large-scale real-life problems, confirm the efficiency of this procedure.
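The scenario-tree grouping that underlies the full-recourse technique can be sketched in a few lines. In the toy below, scenarios sharing the same history up to stage t form one group, and it is per group, not per scenario, that an implementable decision is produced. The branching factor and stage count are illustrative, far smaller than the 9000-scenario, 14-stage problems reported above.

```python
from itertools import product

# Toy scenario-tree construction: with b branches per stage and T stages,
# each leaf is one scenario, and scenarios sharing a prefix of length t
# belong to the same group at stage t -- the unit for which full-recourse
# techniques yield a single implementable decision.
def scenario_groups(branches, stages):
    scenarios = list(product(range(branches), repeat=stages))
    groups = {}  # (stage, prefix) -> list of scenario indices
    for idx, s in enumerate(scenarios):
        for t in range(stages + 1):
            groups.setdefault((t, s[:t]), []).append(idx)
    return scenarios, groups

scenarios, groups = scenario_groups(branches=3, stages=4)
print(len(scenarios))            # 3**4 = 81 scenarios
print(len(groups[(0, ())]))      # root group: all 81 scenarios share stage 0
print(len(groups[(2, (0, 1))]))  # one stage-2 group: 3**2 = 9 scenarios
```

Each group's subproblem is what a decomposition method (here, the augmented Lagrangian scheme of the paper) would hand to a worker; since the groups at a given stage partition the scenario set, a static assignment of subproblems to processors is natural, matching the static load balancing choice described above.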