Discretizing Gravity in Warped Spacetime
We investigate the discretized version of the compact Randall-Sundrum model.
By studying the mass eigenstates of the lattice theory, we demonstrate that for
warped space, unlike for flat space, the strong coupling scale does not depend
on the IR scale and lattice size. However, strong coupling does prevent us from
taking the continuum limit of the lattice theory. Nonetheless, the lattice
theory works in the manifestly holographic regime and successfully reproduces
the most significant features of the warped theory. It is even in some respects
better than the KK theory, which must be carefully regulated to obtain the
correct physical results. Because it is easier to construct lattice theories
than to find exact solutions to GR, we expect lattice gravity to be a useful
tool for exploring field theory in curved space. (Comment: 17 pages, 4 figures; references added)
Population extremal optimisation for discrete multi-objective optimisation problems
The power to solve intractable optimisation problems is often found through population-based evolutionary methods. These include, but are not limited to, genetic algorithms, particle swarm optimisation, differential evolution and ant colony optimisation. While showing much promise as an effective optimiser, extremal optimisation uses only a single solution in its canonical form, and there are no standard population mechanics for it. In this paper, two population models for extremal optimisation are proposed and applied to a multi-objective version of the generalised assignment problem. These models use novel intervention/interaction strategies as well as collective memory in order to allow individual population members to work together. Additionally, a general non-dominated local search algorithm is developed and tested. Overall, the results show that population-based interactions produce better attainment surfaces than are obtained without them. The new EO approach is also shown to be highly competitive with an implementation of NSGA-II.
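For orientation, a minimal sketch of canonical single-solution extremal optimisation, the baseline that the proposed population models build on, is given below. The problem interface (component_fitness, random_value, objective) and the tau value are illustrative assumptions, not the paper's implementation.

import random

def extremal_optimisation(init_solution, component_fitness, random_value,
                          objective, tau=1.4, iterations=10_000):
    """tau-EO sketch: repeatedly replace a poorly performing component,
    chosen with rank-based probability proportional to rank**(-tau)."""
    current = list(init_solution)
    best, best_cost = list(current), objective(current)
    n = len(current)
    for _ in range(iterations):
        # Rank components from worst (rank 1) to best (rank n).
        ranked = sorted(range(n), key=lambda i: component_fitness(current, i))
        # Power-law selection strongly favours the worst-ranked components.
        weights = [(r + 1) ** (-tau) for r in range(n)]
        idx = random.choices(ranked, weights=weights, k=1)[0]
        # The chosen component is replaced unconditionally (no acceptance test).
        current[idx] = random_value(idx)
        cost = objective(current)
        if cost < best_cost:
            best, best_cost = list(current), cost
    return best, best_cost

The population models described in the abstract would then coordinate several such searches through shared interactions and collective memory rather than running them independently.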
A parallel implementation of ant colony optimization
Ant Colony Optimization is a relatively new class of meta-heuristic search techniques for optimization problems. As it is a population-based technique that examines numerous solution options at each step of the algorithm, there are a variety of parallelization opportunities. In this paper, several parallel decomposition strategies are examined. These techniques are applied to a specific problem, namely the travelling salesman problem, with encouraging speedup and efficiency results.
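One natural decomposition of this kind, farming tour construction out to worker processes while keeping the pheromone update on the master, might look roughly like the sketch below. It is not necessarily one of the specific strategies in the paper; the parameter values and helper names are assumptions for illustration.

import math, random
from multiprocessing import Pool

def tour_length(tour, dist):
    # Total length of a closed tour over the distance matrix.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def construct_tour(args):
    # One ant builds one tour; runs independently in a worker process.
    pheromone, dist, alpha, beta, seed = args
    rng = random.Random(seed)
    n = len(dist)
    start = rng.randrange(n)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        i = tour[-1]
        weights = [(j, (pheromone[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta))
                   for j in unvisited]
        total = sum(w for _, w in weights)
        r, acc, chosen = rng.random() * total, 0.0, None
        for j, w in weights:            # roulette-wheel selection
            acc += w
            if acc >= r:
                chosen = j
                break
        if chosen is None:              # guard against floating-point round-off
            chosen = weights[-1][0]
        tour.append(chosen)
        unvisited.remove(chosen)
    return tour

def parallel_aco(dist, n_ants=16, iterations=100, alpha=1.0, beta=2.0, rho=0.5):
    n = len(dist)
    pheromone = [[1.0] * n for _ in range(n)]
    best, best_len = None, math.inf
    with Pool() as pool:
        for it in range(iterations):
            args = [(pheromone, dist, alpha, beta, it * n_ants + k)
                    for k in range(n_ants)]
            tours = pool.map(construct_tour, args)   # tours built in parallel
            for i in range(n):                       # evaporation (on master)
                for j in range(n):
                    pheromone[i][j] *= (1.0 - rho)
            for tour in tours:                       # pheromone deposit
                length = tour_length(tour, dist)
                if length < best_len:
                    best, best_len = tour, length
                for k in range(n):
                    a, b = tour[k], tour[(k + 1) % n]
                    pheromone[a][b] += 1.0 / length
                    pheromone[b][a] += 1.0 / length
    return best, best_len

On platforms that spawn rather than fork worker processes, parallel_aco should be called from under an if __name__ == "__main__": guard.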
A unified view of convective transports by stratocumulus clouds, shallow cumulus clouds, and deep convection
A bulk planetary boundary layer (PBL) model was developed with a simple internal vertical structure and a simple second-order closure, designed for use as a PBL parameterization in a large-scale model. The model allows the mean fields to vary with height within the PBL, and so must address the vertical profiles of the turbulent fluxes, going beyond the usual mixed-layer assumption that the fluxes of conservative variables are linear with height. This is accomplished using the same convective mass flux approach that has also been used in cumulus parameterizations. The purpose is to show that such a mass flux model can include, in a single framework, the compensating subsidence concept, downgradient mixing, and well-mixed layers
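As a point of reference, a standard form of the mass-flux closure invoked here can be written as follows; the notation is generic and may differ from the paper's own.

\[
\overline{w'\psi'} \;\approx\; \frac{M_c}{\rho}\,\bigl(\psi_u - \bar{\psi}\bigr),
\qquad
M_c = \rho\,\sigma_u\,\bigl(w_u - \bar{w}\bigr),
\]

where \(\psi\) is any conservative variable, \(M_c\) is the convective mass flux, \(\sigma_u\) is the fractional area covered by updrafts, and the subscript \(u\) denotes updraft properties. Because \(M_c\), \(\psi_u\), and \(\bar{\psi}\) all vary with height, the implied flux profile is not restricted to the linear-with-height form of the mixed-layer assumption.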
Fractional cloudiness in shallow cumulus layers
Fractional cloudiness influences the planetary boundary layer (PBL) by controlling the cloud-top radiative cooling rate, and regulating the buoyant production and consumption of turbulence kinetic energy. Betts, Hanson, and Albrecht have modeled partly cloudy PBLs by assuming a single family of convective circulations. The same idealized model has been used in observational studies, based on conditional sampling and/or joint distribution functions, by Lenschow, Albrecht, and others. This approach is extended. None of these authors has proposed a method to determine the fractional area covered by rising motion; finding such a method was a key objective of the present study
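The single-family-of-circulations idealization referred to above is usually formalized as a top-hat (two-stream) decomposition; in generic notation, not necessarily the authors' own, the layer mean and vertical flux of a variable \(\psi\) then satisfy

\[
\bar{\psi} = \sigma\,\psi_u + (1-\sigma)\,\psi_d,
\qquad
\overline{w'\psi'} = \sigma(1-\sigma)\,(w_u - w_d)\,(\psi_u - \psi_d),
\]

where \(\sigma\) is the fractional area covered by rising motion and the subscripts \(u\) and \(d\) denote the rising and sinking branches. These relations by themselves do not determine \(\sigma\), which is why a separate method for fixing it, as sought in this study, is needed.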
The effects of clouds on CO2 forcing
The cloud radiative forcing (CRF) is the difference between the radiative flux (at the top of the atmosphere) which actually occurs in the presence of clouds, and that which would occur if the clouds were removed but the atmospheric state were otherwise unchanged. The CO2 forcing is defined, in analogy with the cloud forcing, as the difference in fluxes and/or infrared heating rates obtained by instantaneously changing the CO2 concentration (doubling it) without changing anything else, i.e., without allowing any feedback. An increased CO2 concentration leads to a reduced net upward longwave flux at the Earth's surface. This reduction in the net upward flux is due to increased downward emission by the CO2 in the atmosphere above. The negative increment to the net upward flux becomes more intense at higher levels in the troposphere, reaching a peak intensity roughly at the tropopause, and then weakens with height in the stratosphere. This profile implies a warming of the troposphere and a cooling of the stratosphere. The CSU GCM was recently used to make some preliminary CO2 forcing calculations for a single simulation under July conditions. The longwave radiation routine was called twice, to determine the radiative fluxes and heating rates for both 2 x CO2 and 1 x CO2. As diagnostics, the 2-D distributions of the longwave fluxes at the surface and the top of the atmosphere, as well as the 3-D distribution of the longwave cooling in the interior, were saved. In addition, the pressure (near the tropopause) at which the difference in the longwave flux due to CO2 doubling has its largest magnitude was saved; for convenience, this level is referred to as the CO2 tropopause. The actual difference in the flux at that level was also saved. Finally, all of these fields were duplicated for the hypothetical case of no cloudiness (clear sky), so that the effects of the clouds can be isolated.
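Written out, the two forcings defined above are simple flux differences; the symbols here are generic shorthand rather than the report's notation:

\[
\mathrm{CRF} = F_{\text{cloudy}} - F_{\text{clear}},
\qquad
\Delta F_{\mathrm{CO_2}}(p) = F_{2\times\mathrm{CO_2}}(p) - F_{1\times\mathrm{CO_2}}(p),
\]

with the atmospheric state held fixed in both differences. The "CO2 tropopause" is then the pressure level at which \(\lvert \Delta F_{\mathrm{CO_2}} \rvert\) reaches its maximum.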
The effects of cloud radiative forcing on an ocean-covered planet
Cumulus anvil clouds, whose importance has been emphasized by observationalists in recent years, exert a very powerful influence on deep tropical convection by tending to radiatively destabilize the troposphere. In addition, they radiatively warm the column in which they reside. Their strong influence on the simulated climate argues for a much more refined parameterization in the General Circulation Model (GCM). For Seaworld, the atmospheric cloud radiative forcing (ACRF) has a powerful influence on such basic climate parameters as the strength of the Hadley circulation, the existence of a single narrow InterTropical Convergence Zone (ITCZ), and the precipitable water content of the atmosphere. It seems likely, however, that in the real world the surface CRF feeds back negatively to suppress moist convection and the associated cloudiness, and so tends to counteract the effects of the ACRF. Many current climate models have fixed sea surface temperatures but variable land-surface temperatures. The tropical circulations of such models may experience a positive feedback due to ACRF over the oceans, and a negative or weak feedback due to surface CRF over the land. The overall effects of the CRF on the climate system can only be firmly established through much further analysis, which can benefit greatly from the use of a coupled ocean-atmosphere model.
Highly Robust Error Correction by Convex Programming
This paper discusses a stylized communications problem where one wishes to transmit a real-valued signal x ∈ ℝ^n (a block of n pieces of information) to a remote receiver. We ask whether it is possible to transmit this information reliably when a fraction of the transmitted codeword is corrupted by arbitrary gross errors, and when in addition, all the entries of the codeword are contaminated by smaller errors (e.g., quantization errors).
We show that if one encodes the information as Ax, where A ∈ ℝ^(m × n) (m ≥ n) is a suitable coding matrix, there are two decoding schemes that allow the recovery of the block of n pieces of information x with nearly the same accuracy as if no gross errors occurred upon transmission (or equivalently as if one had an oracle supplying perfect information about the sites and amplitudes of the gross errors). Moreover, both decoding strategies are very concrete and only involve solving simple convex optimization programs, either a linear program or a second-order cone program. We complement our study with numerical simulations showing that the encoder/decoder pair performs remarkably well.
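A minimal numerical sketch of an l1 (linear-programming) decoder of the kind described above, assuming the cvxpy package and an illustrative random Gaussian coding matrix; the paper's exact constructions and parameters may differ.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m = 128, 512                      # message length, codeword length (m >= n)
A = rng.standard_normal((m, n))      # illustrative random coding matrix
x_true = rng.standard_normal(n)

y = A @ x_true
corrupted = rng.choice(m, size=m // 10, replace=False)    # ~10% gross errors
y[corrupted] += 50.0 * rng.standard_normal(corrupted.size)
y += 1e-3 * rng.standard_normal(m)   # small quantization-like noise everywhere

# l1 decoding: minimizing the l1 norm of the residual y - Ax tolerates the
# sparse gross corruptions while fitting the uncorrupted entries closely.
x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm(y - A @ x, 1))).solve()
print("relative error:", np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))

The second-order cone variant mentioned in the abstract would additionally constrain the residual on the uncorrupted entries to lie within the small-noise level rather than relying on the l1 objective alone.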