Gravitational waves from quasi-spherical black holes
A quasi-spherical approximation scheme, intended to apply to coalescing black
holes, allows the waveforms of gravitational radiation to be computed by
integrating ordinary differential equations.
Comment: 4 revtex pages, 2 eps figures
Bondian frames to couple matter with radiation
A study is presented of the nonlinear evolution of a self-gravitating
distribution of matter coupled to a massless scalar field. The characteristic
formulation for numerical relativity is used to follow the evolution by a
sequence of light cones open to the future. Bondian frames are used to give
physical meaning to the matter variables and to the massless scalar field.
Asymptotic approaches to the origin and to infinity are achieved; at the
boundary surface, interior and exterior solutions are matched, guaranteeing the
Darmois--Lichnerowicz conditions. To show how the scheme works, some numerical
models are discussed. We exemplify evolving scalar waves on the following fixed
backgrounds: A) an atmosphere between the boundary surface of an incompressible
mixed fluid and infinity; B) a polytropic distribution matched to a
Schwarzschild exterior; C) a Schwarzschild-Schwarzschild spacetime. The
conservation of energy, the preservation of the Newman--Penrose constant, and
other expected features are observed.
Comment: 20 pages, 6 figures; to appear in General Relativity and Gravitation
An assessment of Evans' unified field theory I
Evans developed a classical unified field theory of gravitation and
electromagnetism on the background of a spacetime obeying a Riemann-Cartan
geometry. This geometry can be characterized by an orthonormal coframe theta
and a (metric compatible) Lorentz connection Gamma. These two potentials yield
the field strengths torsion T and curvature R. Evans tried to infuse
electromagnetic properties into this geometrical framework by postulating the
coframe theta to be proportional to four extended electromagnetic potentials A;
these are assumed to encompass the conventional Maxwellian potential in a
suitable limit. The viable Einstein-Cartan(-Sciama-Kibble) theory of gravity
was adopted by Evans to describe the gravitational sector of his theory.
Including also the results of an accompanying paper by Obukhov and the author,
we show that Evans' ansatz for electromagnetism is untenable beyond repair
from both a geometrical and a physical point of view. As a consequence,
his unified theory is obsolete.
Comment: 39 pages of LaTeX; modified in response to the referee report, mistakes
and typos removed, partly reformulated; takes account of M.W. Evans' rebuttal
Onset of Superfluidity in 4He Films Adsorbed on Disordered Substrates
We have studied 4He films adsorbed in two porous glasses, aerogel and Vycor,
using high precision torsional oscillator and DC calorimetry techniques. Our
investigation focused on the onset of superfluidity at low temperatures as the
4He coverage is increased. Torsional oscillator measurements of the 4He-aerogel
system were used to determine the superfluid density of films with transition
temperatures as low as 20 mK. Heat capacity measurements of the 4He-Vycor
system probed the excitation spectrum of both non-superfluid and superfluid
films for temperatures down to 10 mK. Both sets of measurements suggest that
the critical coverage for the onset of superfluidity corresponds to a mobility
edge in the chemical potential, so that the onset transition is the bosonic
analog of a superconductor-insulator transition. The superfluid density
measurements, however, are not in agreement with the scaling theory of an onset
transition from a gapless, Bose glass phase to a superfluid. The heat capacity
measurements show that the non-superfluid phase is better characterized as an
insulator with a gap.
Comment: 15 pages (RevTeX), 21 figures (postscript)
Software engineering techniques for the development of systems of systems
This paper investigates how existing software engineering techniques can be employed, adapted and integrated for the development of systems of systems. Starting from existing system-of-systems (SoS) studies, we identify computing paradigms and techniques that have the potential to help address the challenges associated with SoS development, and propose an SoS development framework that combines these techniques in a novel way. This framework addresses the development of a class of IT systems of systems characterised by high variability in the types of interactions between their component systems, and by relatively small numbers of such interactions. We describe how the framework supports the dynamic, automated generation of the system interfaces required to achieve these interactions, and present a case study illustrating the development of a data-centre SoS using the new framework.
VERTIGO (VERtical Transport In the Global Ocean) : a study of particle sources and flux attenuation in the North Pacific
Author Posting. © Elsevier B.V., 2008. This is the author's version of the work. It is posted here by permission of Elsevier B.V. for personal use, not for redistribution. The definitive version was published in Deep Sea Research Part II: Topical Studies in Oceanography 55 (2008): 1522-1539, doi:10.1016/j.dsr2.2008.04.024.
The VERtical Transport In the Global Ocean (VERTIGO) study examined particle sources and
fluxes through the ocean’s “twilight zone” (defined here as depths below the euphotic zone to
1000 m). Interdisciplinary process studies were conducted at contrasting sites off Hawaii
(ALOHA) and in the NW Pacific (K2) during 3 week occupations in 2004 and 2005, respectively.
We examine in this overview paper the contrasting physical, chemical and biological settings and
how these conditions impact the source characteristics of the sinking material and the transport
efficiency through the twilight zone. A major finding in VERTIGO is the considerably lower
transfer efficiency (Teff) of particulate organic carbon (POC), defined as the
ratio of POC flux at 500 m to that at 150 m, at ALOHA
(20%) vs. K2 (50%). This efficiency is higher in the diatom-dominated setting at K2, where
silica-rich particles dominate the flux at the end of a diatom bloom, and where zooplankton and
their pellets are larger. At K2, the drawdown of macronutrients is used to assess export and
suggests that shallow remineralization above our 150 m trap is significant, especially for N
relative to Si. We explore here also surface export ratios (POC flux/primary production) and
possible reasons why this ratio is higher at K2, especially during the first trap deployment. When
we compare the 500 m fluxes to deep moored traps, both sites lose about half of the sinking POC
by >4000 m, but this comparison is limited in that fluxes at depth may have both a local and
distant component. Certainly, the greatest difference in particle flux attenuation is in the
mesopelagic, and we highlight other VERTIGO papers that provide a more detailed examination
of the particle sources, flux and processes that attenuate the flux of sinking particles. Ultimately,
we contend that at least three types of processes need to be considered: heterotrophic degradation
of sinking particles, zooplankton migration and surface feeding, and lateral sources of suspended
and sinking materials. We have evidence that all of these processes impacted the net attenuation
of particle flux vs. depth measured in VERTIGO and would therefore need to be considered and
quantified in order to understand the magnitude and efficiency of the ocean’s biological pump.
Funding for VERTIGO was provided primarily by research grants
from the US National Science Foundation Programs in Chemical and Biological Oceanography
(KOB, CHL, MWS, DKS, DAS). Additional US and non-US grants included: US Department
of Energy, Office of Science, Biological and Environmental Research Program (JKBB); the
Gordon and Betty Moore Foundation (DMK); the Australian Cooperative Research Centre
program and Australian Antarctic Division (TWT); Chinese NSFC and MOST programs (NZJ);
Research Foundation Flanders and Vrije Universiteit Brussel (FD, ME); JAMSTEC (MCH); New
Zealand Public Good Science Foundation (PWB); and internal WHOI sources and a contribution
from the John Aure and Cathryn Ann Hansen Buesseler Foundation (KOB)
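The transfer-efficiency ratio quoted in this abstract is simple to make concrete. The sketch below uses hypothetical trap fluxes, not VERTIGO data, chosen only to reproduce the reported ~20% (ALOHA) and ~50% (K2) figures:

```python
# Illustrative sketch of the transfer efficiency (Teff) defined above:
# the ratio of sinking POC flux at 500 m to the flux at 150 m. The flux
# values are hypothetical placeholders, not VERTIGO measurements.

def transfer_efficiency(flux_500_m, flux_150_m):
    """Teff = POC flux at 500 m divided by POC flux at 150 m."""
    return flux_500_m / flux_150_m

# Hypothetical trap fluxes (e.g. mmol C m^-2 d^-1).
teff_aloha = transfer_efficiency(flux_500_m=0.6, flux_150_m=3.0)
teff_k2 = transfer_efficiency(flux_500_m=2.5, flux_150_m=5.0)

print(f"ALOHA Teff: {teff_aloha:.0%}")  # ALOHA Teff: 20%
print(f"K2 Teff: {teff_k2:.0%}")        # K2 Teff: 50%
```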
Geodesic motion in the neighbourhood of submanifolds embedded in warped product spaces
We study the classical geodesic motions of nonzero rest mass test particles
and photons in (3+1+n)-dimensional warped product spaces. An important feature
of these spaces is that they allow a natural decoupling between the motions in
the (3+1)-dimensional spacetime and those in the extra n dimensions. Using this
decoupling and employing phase space analysis we investigate the conditions for
confinement of particles and photons to the (3+1)-dimensional spacetime submanifold. In
addition to providing information regarding the motion of photons, we also show
that these motions are not constrained by the value of the extrinsic curvature.
We obtain the general conditions for the confinement of geodesics in the case
of pseudo-Riemannian manifolds as well as establishing the conditions for the
stability of such confinement. These results also generalise a recent result of
the authors concerning the embeddings of hypersurfaces with codimension one.
Comment: 8 pages, 1 figure. To appear in General Relativity and Gravitation as
a contributed paper to the Mashhoon Festschrift
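The natural decoupling this abstract refers to can be illustrated with a generic warped-product line element. The notation below is a common convention and an assumption on my part, not necessarily the paper's exact ansatz:

```latex
% (3+1+n)-dimensional warped-product metric: the warp factor f depends
% only on the extra coordinates y^a, which is what lets the geodesic
% equations separate into a (3+1)-dimensional part and a part in the
% extra n dimensions.
ds^2 = f(y)\, g_{\mu\nu}(x)\, dx^\mu\, dx^\nu + h_{ab}(y)\, dy^a\, dy^b,
\qquad \mu,\nu = 0,\dots,3, \quad a,b = 1,\dots,n.
```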
b-Jet Identification in the D0 Experiment
Algorithms distinguishing jets originating from b quarks from other jet
flavors are important tools in the physics program of the D0 experiment at the
Fermilab Tevatron p-pbar collider. This article describes the methods that have
been used to identify b-quark jets, exploiting in particular the long lifetimes
of b-flavored hadrons, and the calibration of the performance of these
algorithms based on collider data.
Comment: submitted to Nuclear Instruments and Methods in Physics Research
Direct Learning of Sparse Changes in Markov Networks by Density Ratio Estimation
We propose a new method for detecting changes in Markov network structure between two sets of samples. Instead of naively fitting two Markov network models separately to the two data sets and figuring out their difference, we directly learn the network structure change by estimating the ratio of the two Markov network models. This density-ratio formulation naturally allows us to introduce sparsity in the network structure change, which greatly enhances interpretability. Furthermore, computation of the normalization term, a critical computational bottleneck of the naive approach, can be remarkably mitigated. Through experiments on gene expression and Twitter data analysis, we demonstrate the usefulness of our method.
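For intuition about what a "sparse change" in Markov network structure means, the sketch below runs the naive baseline that this abstract argues against: fit each Gaussian Markov network separately and difference the two precision matrices. All data, dimensions, and parameter values are synthetic choices of mine; the paper's method instead estimates the density ratio directly, avoiding the two separate fits and the normalization bottleneck.

```python
import numpy as np

# For Gaussian Markov networks, the structure is encoded in the
# precision (inverse covariance) matrix: a nonzero off-diagonal entry
# is an edge. Here only edge (0, 1) differs between the two models, so
# the difference of the two precision matrices should be sparse, with
# its dominant off-diagonal entry at (0, 1).

rng = np.random.default_rng(0)
d, n = 4, 5000

theta1 = np.eye(d)
theta1[0, 1] = theta1[1, 0] = 0.4   # edge (0, 1) exists only in model 1
theta2 = np.eye(d)                  # model 2: no edges

x1 = rng.multivariate_normal(np.zeros(d), np.linalg.inv(theta1), size=n)
x2 = rng.multivariate_normal(np.zeros(d), np.linalg.inv(theta2), size=n)

# Naive approach: two separate fits (precision = inverse sample
# covariance), then take their difference.
diff = np.linalg.inv(np.cov(x1.T)) - np.linalg.inv(np.cov(x2.T))

# Locate the largest off-diagonal change.
off = np.abs(diff - np.diag(np.diag(diff)))
i, j = np.unravel_index(np.argmax(off), off.shape)
edge = tuple(sorted((int(i), int(j))))
print(edge)  # (0, 1)
```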