16,864 research outputs found

    Stochastic simulation algorithm for the quantum linear Boltzmann equation

    We develop a Monte Carlo wave function algorithm for the quantum linear Boltzmann equation, a Markovian master equation describing the quantum motion of a test particle interacting with the particles of an environmental background gas. The algorithm leads to a numerically efficient stochastic simulation procedure for the most general form of this integro-differential equation, which involves a five-dimensional integral over microscopically defined scattering amplitudes that account for the gas interactions in a non-perturbative fashion. The simulation technique is used to assess various limiting forms of the quantum linear Boltzmann equation, such as the limits of pure collisional decoherence and quantum Brownian motion, the Born approximation and the classical limit. Moreover, we extend the method to allow for the simulation of the dissipative and decohering dynamics of superpositions of spatially localized wave packets, which enables the study of many physically relevant quantum phenomena, occurring e.g. in the interferometry of massive particles. Comment: 21 pages, 9 figures; v2 corresponds to the published version.
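    The Monte Carlo wave function idea underlying such a simulation can be illustrated with a minimal sketch: evolve a state vector with a non-Hermitian effective Hamiltonian and interrupt the evolution with stochastic quantum jumps. The sketch below uses a generic two-level system with a single decay-type jump operator as a stand-in; the momentum-resolved jump operators of the quantum linear Boltzmann equation, built from the scattering amplitudes described above, are not reproduced here, and all parameter values are placeholders.

```python
import numpy as np

# Minimal Monte Carlo wave function (quantum jump) sketch for a generic
# Lindblad master equation with a single jump operator L.  This only
# illustrates the stochastic unravelling step, not the QLBE kernel itself.

rng = np.random.default_rng(0)

# Two-level test system: H = 0.5*omega*sigma_z, decay via L = sqrt(gamma)*sigma_-
omega, gamma = 1.0, 0.2
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)    # lowering operator
H = 0.5 * omega * sz
L = np.sqrt(gamma) * sm
H_eff = H - 0.5j * (L.conj().T @ L)               # non-Hermitian effective Hamiltonian

def trajectory(psi0, dt=1e-3, steps=5000):
    """Propagate one stochastic trajectory; return the excited-state population."""
    psi = psi0.copy()
    pops = np.empty(steps)
    for n in range(steps):
        # probability of a jump during this time step
        dp = dt * np.real(psi.conj() @ (L.conj().T @ L) @ psi)
        if rng.random() < dp:
            psi = L @ psi                          # quantum jump
        else:
            psi = psi - 1j * dt * (H_eff @ psi)    # deterministic non-Hermitian step
        psi = psi / np.linalg.norm(psi)            # renormalise
        pops[n] = np.abs(psi[0]) ** 2
    return pops

# Average over trajectories to approximate the master-equation solution.
psi0 = np.array([1.0, 0.0], dtype=complex)         # start in the excited state
avg = np.mean([trajectory(psi0) for _ in range(200)], axis=0)
print("final excited-state population ~", avg[-1])
```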

    Fourier Analysis of Stochastic Sampling Strategies for Assessing Bias and Variance in Integration


    The Uniformly Most Powerful Invariant Test for the Shoulder Condition in Point Transect Sampling

    Estimating population abundance is of primary interest in wildlife population studies. Point transect sampling is a well established methodology for this purpose. The usual approach for estimating the density or the size of the population of interest is to assume a particular model for the detection function (the conditional probability of detecting an animal given that it is at a given distance from the observer). The two most popular models for this function are the half-normal model and the negative exponential model. However, the estimates appear to be extremely sensitive to the shape of the detection function, particularly to the so-called shoulder condition, which ensures that an animal is almost certain to be detected if it is at a small distance from the observer. The half-normal model satisfies this condition whereas the negative exponential does not. Therefore, testing whether such a hypothesis is consistent with the data at hand should be a primary concern in any study estimating animal abundance. In this paper we propose a test for this purpose. It is the uniformly most powerful test in the class of scale invariant tests. The asymptotic distribution of the test statistic is calculated under both the half-normal and the negative exponential models, while the critical values and the power are tabulated via Monte Carlo simulations for small samples. Finally, the procedure is applied to two datasets of chipping sparrows collected at the Rocky Mountain Bird Observatory, Colorado.
    Keywords: Point Transect Sampling, Shoulder Condition, Uniformly Most Powerful Invariant Test, Asymptotic Critical Values, Monte Carlo Critical Values
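    A rough sketch of how small-sample Monte Carlo critical values of a scale-invariant statistic can be tabulated under the half-normal (shoulder-condition) null is given below. The statistic used here, the coefficient of variation of the detection distances, is only a placeholder and is not the uniformly most powerful invariant statistic of the paper; the simulation relies on the standard fact that, under a half-normal detection function, detected point-transect distances follow a Rayleigh distribution.

```python
import numpy as np

# Sketch: tabulate Monte Carlo critical values of a scale-invariant statistic
# under the half-normal shoulder-condition null for small sample sizes.
# The statistic below is a placeholder, not the UMP invariant statistic.

rng = np.random.default_rng(1)

def simulate_distances_half_normal(n, sigma=1.0):
    """Detected distances under a half-normal detection function:
    the density is proportional to r*g(r), i.e. Rayleigh."""
    return rng.rayleigh(scale=sigma, size=n)

def statistic(r):
    """Placeholder scale-invariant statistic (coefficient of variation)."""
    return np.std(r, ddof=1) / np.mean(r)

def mc_critical_value(n, alpha=0.05, reps=20000):
    """Upper-tail critical value of the statistic under the half-normal null."""
    stats = np.array([statistic(simulate_distances_half_normal(n)) for _ in range(reps)])
    return np.quantile(stats, 1.0 - alpha)

for n in (10, 20, 40):
    print(f"n={n:3d}  5% critical value ~ {mc_critical_value(n):.3f}")
```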

    Hierarchical Variance Reduction Techniques for Monte Carlo Rendering

    Ever since the first three-dimensional computer graphics appeared half a century ago, the goal has been to model and simulate how light interacts with materials and objects to form an image. The ultimate goal is photorealistic rendering, where the created images reach a level of accuracy that makes them indistinguishable from photographs of the real world. There are many applications: visualization of products and architectural designs yet to be built, special effects, computer-generated films, virtual reality, and video games, to name a few. However, the problem has proven tremendously complex; the illumination at any point is described by a recursive integral to which a closed-form solution seldom exists. Instead, computer simulation and Monte Carlo methods are commonly used to statistically estimate the result. This introduces undesirable noise, or variance, and a large body of research has been devoted to finding ways to reduce the variance.

    I continue along this line of research, and present several novel techniques for variance reduction in Monte Carlo rendering, as well as a few related tools. The research in this dissertation focuses on using importance sampling to pick a small set of well-distributed point samples. As the primary contribution, I have developed the first methods to explicitly draw samples from the product of distant high-frequency lighting and complex reflectance functions. By sampling the product, low-noise results can be achieved using a very small number of samples, which is important to minimize rendering times. Several different hierarchical representations are explored to allow efficient product sampling. In the first publication, the key idea is to work in a compressed wavelet basis, which allows fast evaluation of the product. Many of the initial restrictions of this technique were removed in follow-up work, allowing higher-resolution uncompressed lighting and avoiding precomputation of reflectance functions. My second main contribution is one of the first techniques to take the triple product of lighting, visibility and reflectance into account to further reduce the variance in Monte Carlo rendering. For this purpose, control variates are combined with importance sampling to solve the problem in a novel way. A large part of the technique also focuses on analysis and approximation of the visibility function.

    To further refine the above techniques, several useful tools are introduced. These include a fast, low-distortion map to represent (hemi)spherical functions, a method to create high-quality quasi-random points, and an optimizing compiler for analyzing shaders using interval arithmetic. The latter automatically extracts bounds for importance sampling of arbitrary shaders, as opposed to using a priori known reflectance functions.

    In summary, the work presented here takes the field of computer graphics one step further towards making photorealistic rendering practical for a wide range of uses. By introducing several novel Monte Carlo methods, more sophisticated lighting and materials can be used without increasing the computation times. The research is aimed at domain-specific solutions to the rendering problem, but I believe that much of the new theory is applicable in other parts of computer graphics, as well as in other fields.
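    The two variance-reduction ideas that recur above, importance sampling and control variates, can be illustrated on a toy one-dimensional integral resembling a lighting term multiplied by a cosine factor. The sketch below compares a crude estimator, an importance-sampled estimator and a control-variate estimator; it does not reproduce the hierarchical product-sampling or visibility machinery of the dissertation, and the integrand is an arbitrary placeholder.

```python
import numpy as np

# Compare crude Monte Carlo, importance sampling and a control variate on
# the toy integral of exp(-theta)*cos(theta) over [0, pi/2].

rng = np.random.default_rng(2)
a, b = 0.0, np.pi / 2

def f(theta):
    # toy integrand: smooth "lighting" term times a cosine foreshortening factor
    return np.exp(-theta) * np.cos(theta)

exact = 0.5 * (1.0 + np.exp(-np.pi / 2))           # closed form of the toy integral

def crude(n):
    theta = rng.uniform(a, b, n)
    return (b - a) * np.mean(f(theta))

def importance(n):
    # sample p(theta) = cos(theta) on [0, pi/2] by inverting its CDF sin(theta)
    theta = np.arcsin(rng.random(n))
    return np.mean(f(theta) / np.cos(theta))        # weights reduce to exp(-theta)

def control_variate(n):
    # uniform sampling with g(theta) = cos(theta) as control (E[g] = 2/pi known)
    theta = rng.uniform(a, b, n)
    fx, gx = f(theta), np.cos(theta)
    c = np.cov(fx, gx)[0, 1] / np.var(gx)           # estimated optimal coefficient
    return (b - a) * (np.mean(fx) - c * (np.mean(gx) - 2.0 / np.pi))

n, runs = 256, 2000
for name, est in [("crude", crude), ("importance", importance), ("control variate", control_variate)]:
    vals = np.array([est(n) for _ in range(runs)])
    print(f"{name:16s} bias={vals.mean() - exact:+.1e}  std={vals.std():.1e}")
```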

    Azimuthal Anisotropy in High Energy Nuclear Collision - An Approach based on Complex Network Analysis

    Recently, a complex network based method, the Visibility Graph, has been applied to confirm the scale-freeness and the presence of fractal properties in the process of multiplicity fluctuation. Analysis of data obtained from experiments on hadron-nucleus and nucleus-nucleus interactions yields values of the Power of Scale-freeness of Visibility Graph (PSVG) parameter extracted from the visibility graphs. Here, relativistic nucleus-nucleus interaction data have been analysed to detect azimuthal anisotropy by extending the Visibility Graph method and extracting the average clustering coefficient, one of the important topological parameters, from the graph. Azimuthal distributions corresponding to different pseudorapidity regions around the central pseudorapidity value are analysed using this parameter. We attempt to relate the conventional physical significance of this coefficient in complex-network systems to basic notions of particle production phenomenology, such as clustering and correlation. Earlier methods for detecting anisotropy in the azimuthal distribution were mostly based on the analysis of statistical fluctuations. In this work, we attempt to extract deterministic information on the azimuthal anisotropy through a precise determination of a topological parameter from a complex network perspective.
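    A minimal sketch of the underlying construction is given below: build the natural visibility graph of a binned azimuthal distribution and compute its average clustering coefficient, assuming the networkx package is available. The input distribution here is synthetic, with a weak second-harmonic modulation standing in for azimuthal anisotropy; a real analysis would use the measured multiplicity in each azimuthal bin of the chosen pseudorapidity window.

```python
import numpy as np
import networkx as nx

# Visibility-graph sketch for a binned azimuthal distribution, followed by the
# average clustering coefficient used as the topological observable.

def visibility_graph(y):
    """Natural visibility graph of a series y[0..N-1] (Lacasa et al. criterion)."""
    n = len(y)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            # i and j "see" each other if every intermediate bin lies below
            # the straight line joining (i, y[i]) and (j, y[j])
            visible = all(
                y[k] < y[j] + (y[i] - y[j]) * (j - k) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                g.add_edge(i, j)
    return g

# synthetic azimuthal multiplicity distribution in 36 bins of 10 degrees,
# with a weak second-harmonic modulation standing in for azimuthal anisotropy
rng = np.random.default_rng(3)
phi = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
counts = rng.poisson(100 * (1.0 + 0.1 * np.cos(2.0 * phi)))

g = visibility_graph(counts)
print("average clustering coefficient:", nx.average_clustering(g))
```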

    Subtracting Foregrounds from Interferometric Measurements of the Redshifted 21 cm Emission

    The ability to subtract foreground contamination from low-frequency observations is crucial to reveal the underlying 21 cm signal. The traditional line-of-sight methods can deal with the removal of diffuse emission and unresolved point sources, but not bright point sources. In this paper, we introduce a foreground cleaning technique in Fourier space, which allows us to handle all such foregrounds simultaneously and thus sidestep any special treatment of bright point sources. Its feasibility is tested with a simulated data cube for the 21 CentiMeter Array experiment. This data cube includes more realistic models for the 21 cm signal, continuum foregrounds, detector noise and the frequency-dependent instrumental response. We find that a combination of two weighting schemes can be used to protect the frequency coherence of foregrounds: uniform weighting in the uv plane and inverse-variance weighting in the spectral fitting. The visibility spectrum is then well approximated by a quartic polynomial along the line of sight. With this method, we demonstrate that the huge foreground contamination can be cleaned out effectively, with residuals on the order of ~10 mK, while the spectrally smooth component of the cosmological signal is also removed, bringing about a systematic underestimate in the extracted power spectrum, primarily on large scales. Comment: 9 pages, 9 figures. Final published version.
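    The line-of-sight cleaning step described above can be sketched as follows: for each uv cell, fit a quartic polynomial in frequency to the visibility spectrum with inverse-variance weights and keep the residual as the candidate 21 cm signal. The spectra, frequency band, noise levels and foreground amplitudes below are synthetic placeholders; the full pipeline (uv gridding, uniform weighting, instrumental response) is not reproduced.

```python
import numpy as np

# Sketch of line-of-sight foreground cleaning: per uv cell, fit a quartic
# polynomial along frequency with inverse-variance weights and subtract it.

rng = np.random.default_rng(4)

nfreq, ncell = 64, 200
freq = np.linspace(150.0, 190.0, nfreq)                       # MHz, placeholder band
x = (freq - freq.mean()) / (freq.max() - freq.min())          # centred, scaled frequency

# smooth power-law foregrounds per cell, weak ~10 mK "signal", sky-like noise
amps = rng.uniform(50.0, 200.0, ncell)                        # foreground amplitudes (K)
foreground = amps[None, :] * (freq[:, None] / 150.0) ** -2.6
signal = 0.01 * rng.standard_normal((nfreq, ncell))           # K
sigma = np.broadcast_to((0.01 * (160.0 / freq) ** 2.6)[:, None], (nfreq, ncell))
noise = sigma * rng.standard_normal((nfreq, ncell))
spectra = foreground + signal + noise

def clean(spec, sig, order=4):
    """Fit and subtract a polynomial of the given order with 1/sigma weights."""
    coeffs = np.polyfit(x, spec, order, w=1.0 / sig)
    return spec - np.polyval(coeffs, x)

residual = np.column_stack([clean(spectra[:, c], sigma[:, c]) for c in range(ncell)])
print("rms residual after cleaning: %.1f mK" % (1e3 * residual.std()))
```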