Regularization of fields for self-force problems in curved spacetime: foundations and a time-domain application
We propose an approach for the calculation of self-forces, energy fluxes and
waveforms arising from moving point charges in curved spacetimes. As opposed to
mode-sum schemes that regularize the self-force derived from the singular
retarded field, this approach regularizes the retarded field itself. The
singular part of the retarded field is first analytically identified and
removed, yielding a finite, differentiable remainder from which the self-force
is easily calculated. This regular remainder solves a wave equation which
enjoys the benefit of having a non-singular source. Solving this wave equation
for the remainder completely avoids the calculation of the singular retarded
field along with the attendant difficulties associated with numerically
modeling a delta function source. From this differentiable remainder one may
compute the self-force, the energy flux, and also a waveform which reflects the
effects of the self-force. As a test of principle, we implement this method
using a 4th-order (1+1) code, and calculate the self-force for the simple case
of a scalar charge moving in a circular orbit around a Schwarzschild black
hole. We achieve agreement with frequency-domain results to ~ 0.1% or better.
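As an illustrative aside (not the authors' code), the sketch below shows the general structure described above in a deliberately simplified 1+1D flat-space setting: subtract an analytically chosen singular piece from the field, evolve the regular remainder against the resulting smooth effective source, and read the force off the gradient of the remainder at the particle. The flat-space wave equation, the form of the singular piece, the smoothing scale and the second-order scheme are all assumptions made purely for illustration; the paper itself uses a 4th-order (1+1) code in Schwarzschild spacetime.

```python
import numpy as np

# Toy 1+1D illustration: split the field into a chosen "singular" piece psi_S
# and a regular remainder psi_R, and evolve psi_R with a smooth effective
# source instead of a delta-function source. Everything here is an
# illustrative assumption, not the scheme used in the paper.

L, N = 10.0, 1001
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
dt = 0.5 * dx                                   # CFL-stable time step
q, eps = 1.0, 0.2                               # toy charge and smoothing scale

# Point-like source, smoothed for reference: S(x) ~ q * delta(x).
S = q * np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

# Chosen singular piece: a static approximate solution of -psi_S'' = S near x = 0.
psi_S = -(q / 2) * np.sqrt(x**2 + eps**2)
psi_S_xx = -(q / 2) * eps**2 / (x**2 + eps**2) ** 1.5

# Effective source for the remainder psi_R = psi - psi_S (psi_S is static):
#   psi_R_tt - psi_R_xx = S + psi_S_xx =: S_eff,
# which is smooth and nearly cancels around the particle.
S_eff = S + psi_S_xx

# Second-order leapfrog evolution of psi_R from trivial initial data.
psi_prev = np.zeros(N)
psi = np.zeros(N)
for _ in range(6000):
    lap = np.zeros(N)
    lap[1:-1] = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx**2
    psi_next = 2 * psi - psi_prev + dt**2 * (lap + S_eff)
    psi_next[0] = psi_next[-1] = 0.0            # crude fixed boundaries
    psi_prev, psi = psi, psi_next

# Analogue of the self-force: gradient of the regular remainder at the particle.
i0 = N // 2
print("regular-field gradient at the particle:", (psi[i0 + 1] - psi[i0 - 1]) / (2 * dx))
```

The point of the construction is visible in the source term: because the singular behaviour is removed analytically, nothing resembling a delta function ever has to be represented on the grid.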
Displaying 3D images: algorithms for single-image random-dot stereograms
A new, simple, and symmetric algorithm can be implemented that results in higher levels of detail in solid objects than previously possible with autostereograms. In a stereoscope, an optical instrument similar to binoculars, each eye views a different picture and thereby receives the specific image that would have arisen naturally. An early suggestion for a color stereo computer display involved a rotating filter wheel held in front of the eyes. In contrast, this article describes a method for viewing on paper or on an ordinary computer screen without special equipment, although it is limited to the display of 3D monochromatic objects. (The image can be colored, say, for artistic reasons, but the method we describe does not allow colors to be allocated in a way that corresponds to an arbitrary coloring of the solid object depicted.) The image can easily be constructed by computer from any 3D scene or solid object description.
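For concreteness, here is a minimal Python sketch of the general single-image random-dot stereogram idea the abstract describes: within each row, the two pixels that the eyes would fuse for a given depth are constrained to share a colour, and the row is then filled with random dots subject to those constraints. The separation formula, the eye-separation and depth-of-field parameters, and the omission of hidden-surface removal are simplifications assumed here, not necessarily the article's exact algorithm.

```python
import numpy as np

def autostereogram(depth, E=90, mu=0.33, rng=np.random.default_rng(0)):
    """Render a single-image random-dot stereogram from a depth map.

    depth: 2D array with values in [0, 1], where 1 is nearest to the viewer.
    E, mu: eye separation in pixels and depth-of-field factor (assumed values).
    Hidden-surface removal is omitted for brevity.
    """
    h, w = depth.shape
    img = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        same = np.arange(w)   # constraint links; each class's root is its rightmost pixel
        for x in range(w):
            # Stereo separation for this depth: nearer points need a smaller separation.
            s = int(round(E * (1 - mu * depth[y, x]) / (2 - mu * depth[y, x])))
            left, right = x - s // 2, x - s // 2 + s
            if left < 0 or right >= w:
                continue
            # Union the two constrained pixels, keeping the larger index as the root.
            while same[left] != left:
                left = same[left]
            while same[right] != right:
                right = same[right]
            if left != right:
                same[min(left, right)] = max(left, right)
        # Fill the row right-to-left with random dots, honouring the constraints.
        for x in range(w - 1, -1, -1):
            p = x
            while same[p] != p:
                p = same[p]
            img[y, x] = img[y, p] if p != x else rng.integers(0, 2) * 255
    return img

# Example: a raised square floating above a flat background.
z = np.zeros((200, 256))
z[70:130, 100:160] = 0.6
image = autostereogram(z)
```

Viewing the resulting image with diverged eyes fuses the constrained dots, and the raised square appears to float above the background.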
Structure variation and evolution in microphase-separated grafted diblock copolymer films
The phase behavior of grafted d-polystyrene-block-poly(methyl methacrylate) diblock copolymer films is examined, with particular focus on the effect of solvent and annealing time. It was observed that the films undergo a two-step transformation from an initially disordered state, through an ordered metastable state, to the final equilibrium configuration. It was also found that altering the solvent used to wash the films, or complete removal of the solvent prior to thermal annealing using supercritical CO2, could influence the structure of the films in the metastable state, though the final equilibrium state was unaffected. To aid in the understanding of these experimental results, a series of self-consistent field theory calculations was performed on a model diblock copolymer brush containing solvent. Of the different models examined, those which contained a solvent selective for the grafted polymer block most accurately matched the observed experimental behavior. We hypothesize that the structure of the films in the metastable state results from solvent enrichment of the film near the film/substrate interface in the case of films washed with solvent, or from faster relaxation of the nongrafted block for supercritical CO2-treated (solvent-free) films. The persistence of the metastable structures was attributed to the slow reorganization of the polymer chains in the absence of solvent.
Transverse Momentum Dependent Fragmentation and Quark Distribution Functions from the NJL-jet Model
Using the model of Nambu and Jona-Lasinio to provide a microscopic
description of both the structure of the nucleon and of the quark to hadron
elementary fragmentation functions, we investigate the transverse momentum
dependence of the unpolarized quark distributions in the nucleon and of the
quark to pion and kaon fragmentation functions. The transverse momentum
dependence of the fragmentation functions is determined within a Monte Carlo
framework, with the notable result that the average $P_\perp^2$ of the
produced kaons is significantly larger than that of the pions. We also find
that $\langle P_\perp^2 \rangle$ has a sizable $z$ dependence, in contrast with the naive Gaussian
ansatz for the fragmentation functions. Diquark correlations in the nucleon
give rise to a non-trivial flavor dependence in the unpolarized transverse
momentum dependent quark distribution functions. The $\langle k_\perp^2 \rangle$ of the quarks in
the nucleon are also found to have a sizable $x$ dependence. Finally, these
results are used as input to a Monte Carlo event generator for semi-inclusive
deep inelastic scattering (SIDIS), which is used to determine the average
transverse momentum squared of the produced hadrons measured in SIDIS, namely
$\langle P_T^2 \rangle$. Again we find that the average $\langle P_T^2 \rangle$ of the produced kaons in
SIDIS is significantly larger than that of the pions and in each case
$\langle P_T^2 \rangle$ has a sizable $z$ dependence.
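As a hedged illustration of how a Monte Carlo fragmentation framework can generate a $z$-dependent $\langle P_\perp^2 \rangle$, the toy sketch below hadronizes a quark through a chain of emissions with Gaussian transverse kicks and bins the resulting hadron $\langle P_\perp^2 \rangle$ in $z$. The splitting distribution, kick width and chain length are invented for illustration only; the NJL-jet model itself derives the elementary splitting functions from the Nambu and Jona-Lasinio model.

```python
import numpy as np

rng = np.random.default_rng(0)

def hadronize(n_links=8, kT_width=0.35):
    """One toy quark-jet chain: the quark emits hadrons one after another.

    At each link the hadron takes a random light-cone fraction eta of the
    remaining quark momentum and a Gaussian transverse kick; the remnant quark
    recoils. n_links and kT_width are illustrative parameters, not fitted values.
    """
    hadrons = []
    z_quark = 1.0                      # remaining light-cone fraction of the initial quark
    kT_quark = np.zeros(2)             # accumulated transverse momentum of the remnant
    for _ in range(n_links):
        eta = rng.uniform(0.05, 0.95)  # fraction of the remnant momentum taken by the hadron
        kick = rng.normal(0.0, kT_width, size=2)
        pT_hadron = eta * kT_quark + kick      # share of the remnant's kT plus the new kick
        z_hadron = eta * z_quark               # fraction of the *initial* quark momentum
        hadrons.append((z_hadron, pT_hadron @ pT_hadron))
        kT_quark = (1 - eta) * kT_quark - kick # remnant quark recoils
        z_quark *= (1 - eta)
    return hadrons

# Accumulate <P_perp^2> in bins of z over many simulated chains.
z_edges = np.linspace(0, 1, 11)
sum_pT2 = np.zeros(10)
counts = np.zeros(10)
for _ in range(20000):
    for z, pT2 in hadronize():
        b = min(int(z * 10), 9)
        sum_pT2[b] += pT2
        counts[b] += 1

with np.errstate(invalid="ignore"):
    mean_pT2 = sum_pT2 / counts
for lo, hi, v in zip(z_edges[:-1], z_edges[1:], mean_pT2):
    print(f"z in [{lo:.1f}, {hi:.1f}):  <P_perp^2> = {v:.3f}")
```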
Frequency division multiplexing for interferometric planar Doppler velocimetry
A new method of acquiring simultaneously the signal and reference channels used for interferometric
planar Doppler velocimetry is proposed and demonstrated. The technique uses frequency division multiplexing
(FDM) to facilitate the capture of the requisite images on a single camera, and is suitable for
time-averaged flow measurements. Furthermore, the approach has the potential to be expanded to allow
the multiplexing of additional measurement channels for multicomponent velocity measurement. The
use of FDM for interferometric referencing is demonstrated experimentally with measurements
of a single velocity component of a seeded axisymmetric air jet. The expansion of the technique to
include multiple velocity components was then investigated theoretically and experimentally,
accounting for bandwidth, crosstalk, and dynamic range limitations. The technique offers reduced
camera noise, automatic background light suppression, and crosstalk levels of typically <10%.
Furthermore, as this crosstalk is dependent upon the channel modulations applied, it can be corrected for in postprocessing.
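As a rough illustration of the frequency division multiplexing idea (not the authors' implementation), the sketch below demodulates two channels that share a single per-pixel intensity record: each channel is amplitude-modulated at its own carrier frequency and recovered by lock-in style demodulation, i.e. mixing with the carrier and low-pass filtering. The frame rate, carrier frequencies, modulation depths and filter length are all assumed values chosen only to make the example run.

```python
import numpy as np

fs = 1000.0                          # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)        # 2 s of samples for one pixel
f_sig, f_ref = 120.0, 200.0          # carrier frequencies of the two channels (assumed)

# True, slowly varying channel intensities for one pixel, plus background and noise.
I_sig = 1.0 + 0.2 * np.sin(2 * np.pi * 0.5 * t)
I_ref = 0.8 + 0.1 * np.cos(2 * np.pi * 0.3 * t)
background = 0.5
pixel = (I_sig * (1 + np.cos(2 * np.pi * f_sig * t)) / 2
         + I_ref * (1 + np.cos(2 * np.pi * f_ref * t)) / 2
         + background
         + 0.02 * np.random.default_rng(1).normal(size=t.size))

def lock_in(x, f, fs, tau=0.1):
    """Recover the amplitude at carrier f by I/Q demodulation followed by a
    moving-average low-pass filter of length tau seconds."""
    n_samples = np.arange(x.size)
    i = x * np.cos(2 * np.pi * f * n_samples / fs)
    q = x * np.sin(2 * np.pi * f * n_samples / fs)
    kernel = np.ones(max(int(tau * fs), 1)) / max(int(tau * fs), 1)
    i_lp = np.convolve(i, kernel, mode="same")
    q_lp = np.convolve(q, kernel, mode="same")
    return 2 * np.hypot(i_lp, q_lp)   # factor 2 undoes the 1/2 from mixing

sig_hat = 2 * lock_in(pixel, f_sig, fs)   # extra factor 2 undoes the (1+cos)/2 modulation depth
ref_hat = 2 * lock_in(pixel, f_ref, fs)
print("recovered signal channel mean:", sig_hat.mean())
print("recovered reference channel mean:", ref_hat.mean())
```

Because the background and the other channel end up far from the demodulated baseband, they are strongly suppressed by the low-pass step, which is the mechanism behind the background suppression and limited crosstalk mentioned above.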
A user evaluation of hierarchical phrase browsing
Phrase browsing interfaces based on hierarchies of phrases extracted automatically from document collections offer a useful compromise between automatic full-text searching and manually created subject indexes. The literature contains descriptions of such systems that many find compelling and persuasive. However, evaluation studies have either been anecdotal, or focused on objective measures of the quality of automatically extracted index terms, or restricted to questions of computational efficiency and feasibility. This paper reports on an empirical, controlled user study that compares hierarchical phrase browsing with full-text searching over a range of information seeking tasks. Users found the results located via phrase browsing to be relevant and useful but preferred keyword searching for certain types of queries. Users' experiences were marred by interface details, including inconsistencies between the phrase browser and the surrounding digital library interface.
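To make the idea of a phrase hierarchy concrete, the sketch below builds a toy one: frequent word n-grams are extracted from a small document collection, each phrase is attached to the shorter phrases it extends, and a phrase can then be expanded into the longer phrases that contain it, which is what a phrase-browsing interface exposes to the user. The extraction rules, thresholds and sample documents are assumptions for illustration and are far simpler than the system evaluated in the paper.

```python
import re
from collections import Counter, defaultdict

def extract_phrases(texts, max_len=3, min_count=2):
    """Count all word n-grams up to max_len that occur at least min_count times."""
    counts = Counter()
    for text in texts:
        words = re.findall(r"[a-z]+", text.lower())
        for n in range(1, max_len + 1):
            for i in range(len(words) - n + 1):
                counts[tuple(words[i:i + n])] += 1
    return {p: c for p, c in counts.items() if c >= min_count}

def build_hierarchy(phrases):
    """Attach each phrase to every shorter retained phrase it extends by one word."""
    children = defaultdict(list)
    for phrase in phrases:
        if len(phrase) > 1:
            for parent in (phrase[:-1], phrase[1:]):
                if parent in phrases:
                    children[parent].append(phrase)
    return children

def browse(phrase, phrases, children, depth=0):
    """Print the subtree rooted at a phrase, as a phrase browser would expand it."""
    print("  " * depth + " ".join(phrase) + f"  ({phrases[phrase]})")
    for child in sorted(children.get(phrase, []), key=phrases.get, reverse=True):
        browse(child, phrases, children, depth + 1)

docs = [
    "digital library interfaces for phrase browsing",
    "phrase browsing interfaces for digital library collections",
    "full text searching in a digital library",
]
phrases = extract_phrases(docs)
children = build_hierarchy(phrases)
browse(("digital",), phrases, children)
```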
Incipient ferralization and weathering indices along a soil chronosequence in Taiwan
The low hilly topography of Green Island, a volcanic island off southeastern Taiwan, includes an altitudinal sequence of sub-horizontal benches. We examined eight profiles along this sequence, ranging from pale brown loamy coral sand on the lowest bench that fringes the coast at an elevation of about 10 m to deep, intensely red and acid clay on the highest bench at about 240 m. Chemical analyses, differential Fe extractions, thin sections, X-ray diffraction of the clay minerals and indices of pedochemical weathering and strain indicated that soil development progressed by weathering of primary and secondary phyllosilicates through argilluviation in the intermediate stages to the generation of increasing quantities of free Fe. The Fe accumulates as free sesquioxides, which crystallize with age. Taxonomically the soil types progress from sandy coral Arenosol, through Eutric Cambisol, Hypereutric Lixisol and Acrisol to incipient Ferralsol (Udipsamment → Eutrudept → Udalf → Udultisol → Udox in Soil Taxonomy). The profiles are interpreted as a chronosequence, although this is complicated by minor and upwardly diminishing contributions of reef coral to the mainly igneous parent materials. There are also variations in the andesitic-basaltic bedrock, and minor aeolian inputs in the higher and older soil types. Regional eustatic sea-level correlations, ¹⁴C dating of carbonates on the two lowest benches and estimates of local tectonic uplift indicate that the incipient Ferralsols on the upper bench might date from about 150 ka. The transition through argilluvial Acrisols to incipient sesquioxide-dominated Ferralsols appears, therefore, to develop within 100–200 ka on Green Island, which is faster than usual.
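The study relies on indices of pedochemical weathering; the abstract does not name them, but as a concrete example of the genre, the widely used Chemical Index of Alteration (CIA) can be computed from molar oxide proportions as sketched below. The oxide values in the example are hypothetical and only illustrate how such an index rises from weakly weathered to ferralitic material.

```python
# Molar masses of the relevant oxides (g/mol).
MOLAR_MASS = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}

def cia(wt_percent):
    """Chemical Index of Alteration from oxide weight percentages.

    CIA = 100 * Al2O3 / (Al2O3 + CaO* + Na2O + K2O) in molar proportions,
    where CaO* is the silicate-bound CaO only; the CaO value passed in is
    assumed to be already corrected for carbonate.
    """
    mol = {ox: wt_percent[ox] / MOLAR_MASS[ox] for ox in MOLAR_MASS}
    return 100 * mol["Al2O3"] / (mol["Al2O3"] + mol["CaO"] + mol["Na2O"] + mol["K2O"])

# Hypothetical oxide data for a weakly and a strongly weathered horizon.
print(cia({"Al2O3": 15.0, "CaO": 8.0, "Na2O": 3.0, "K2O": 2.5}))   # lower CIA
print(cia({"Al2O3": 28.0, "CaO": 0.5, "Na2O": 0.3, "K2O": 0.8}))   # higher CIA, ferralitic
```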
Estimating conditional volatility with neural networks
It is well known that one of the obstacles to effective forecasting of exchange rates is heteroscedasticity (non-stationary conditional variance). The autoregressive conditional heteroscedastic (ARCH) model and its variants have been used to estimate a time dependent variance for many financial time series. However, such models are essentially linear in form and we can ask whether a non-linear model for variance can improve results just as non-linear models (such as neural networks) for the mean have done. In this paper we consider two neural network models for variance estimation. Mixture Density Networks (Bishop 1994, Nix and Weigend 1994) combine a Multi-Layer Perceptron (MLP) and a mixture model to estimate the conditional data density. They are trained using a maximum likelihood approach. However, it is known that maximum likelihood estimates are biased and lead to a systematic under-estimate of variance. More recently, a Bayesian approach to parameter estimation has been developed (Bishop and Qazaz 1996) that shows promise in removing the maximum likelihood bias. However, up to now, this model has not been used for time series prediction. Here we compare these algorithms with two other models to provide benchmark results: a linear model (from the ARIMA family), and a conventional neural network trained with a sum-of-squares error function (which estimates the conditional mean of the time series with a constant variance noise model). This comparison is carried out on daily exchange rate data for five currencies.
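As a minimal sketch of the underlying idea (not the specific models compared in the paper), the example below trains a small MLP that outputs both a conditional mean and a conditional log-variance and is fitted by minimizing the Gaussian negative log-likelihood, i.e. a single-component special case of a Mixture Density Network. The synthetic ARCH-like data, network size and training settings are assumptions made only so the example runs end to end.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Synthetic returns with time-varying volatility (an ARCH-like toy series),
# standing in for the exchange-rate data used in the paper.
T = 2000
r = torch.zeros(T)
sigma2 = torch.full((T,), 0.1)
for t in range(1, T):
    sigma2[t] = 0.05 + 0.85 * r[t - 1] ** 2        # toy conditional variance recursion
    r[t] = torch.randn(1).item() * sigma2[t].sqrt()

# Inputs: a window of past returns; targets: the next return.
window = 5
X = torch.stack([r[t - window:t] for t in range(window, T)])
y = r[window:T].unsqueeze(1)

class HeteroscedasticMLP(nn.Module):
    """MLP with two heads: conditional mean and log of conditional variance."""
    def __init__(self, n_in, n_hidden=16):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Tanh())
        self.mean_head = nn.Linear(n_hidden, 1)
        self.logvar_head = nn.Linear(n_hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

model = HeteroscedasticMLP(window)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Minimize the Gaussian negative log-likelihood, so the network learns a
# conditional variance instead of assuming a constant noise level.
for epoch in range(200):
    mean, logvar = model(X)
    nll = 0.5 * (logvar + (y - mean) ** 2 / logvar.exp()).mean()
    opt.zero_grad()
    nll.backward()
    opt.step()

with torch.no_grad():
    mean, logvar = model(X)
    print("mean predicted conditional std:", logvar.exp().sqrt().mean().item())
```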