A pilgrimage to gravity on GPUs
In this short review we present the developments over the last five decades that
have led to the use of Graphics Processing Units (GPUs) for astrophysical
simulations. Since the introduction of NVIDIA's Compute Unified Device
Architecture (CUDA) in 2007, the GPU has become a valuable tool for N-body
simulations, so much so that nearly all recent papers on high-precision N-body
simulations use GPU-accelerated methods. With GPU hardware becoming more
advanced and being applied to more sophisticated algorithms such as
gravitational tree codes, we see a bright future for GPU-like hardware in
computational astrophysics.
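The abstract does not include code; as a minimal sketch of why the N-body problem maps so well onto GPUs (our own illustration, not taken from the review), the snippet below computes the O(N^2) pairwise gravitational accelerations that direct-summation codes evaluate. Every pairwise interaction is independent, which is exactly the data parallelism GPU hardware exploits.

```python
import numpy as np

def direct_accelerations(pos, mass, G=1.0, eps=1e-3):
    """O(N^2) direct-summation gravitational accelerations.

    Each pairwise force is independent of all others, which is why
    this kernel parallelizes so well on GPUs. `eps` is a softening
    length that avoids the singularity at zero separation.
    """
    # r[i, j] = pos[j] - pos[i], shape (N, N, 3)
    r = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]
    d2 = (r ** 2).sum(axis=-1) + eps ** 2
    inv_d3 = d2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)  # no self-interaction
    return G * (r * (mass[np.newaxis, :] * inv_d3)[..., np.newaxis]).sum(axis=1)

# Toy usage: 1024 random particles of equal mass.
rng = np.random.default_rng(1)
pos = rng.standard_normal((1024, 3))
acc = direct_accelerations(pos, np.full(1024, 1.0 / 1024))
```

A tree code replaces the inner sum over all N bodies with a sum over O(log N) multipole cells, trading exactness for an O(N log N) total cost.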
An OFDM Signal Identification Method for Wireless Communications Systems
Distinguishing OFDM signals from single-carrier signals is highly important
for adaptive receiver algorithms and signal identification applications. OFDM
signals exhibit Gaussian characteristics in the time domain, and the fourth-order
cumulants of Gaussian-distributed signals vanish, in contrast to the cumulants
of other signals. Thus, fourth-order cumulants can be utilized for OFDM signal
identification. In this paper, first, formulations of the estimates of the
fourth-order cumulants for OFDM signals are provided. Then it is shown that these
estimates are affected significantly by wireless channel impairments,
frequency offset, phase offset, and sampling mismatch. To overcome these
problems, a general chi-square constant false alarm rate Gaussianity test, which
employs estimates of the cumulants and their covariances, is adapted to the specific
case of wireless OFDM signals. Estimation of the covariance matrix of the
fourth-order cumulants is greatly simplified for OFDM signals in particular. A
measurement setup is developed to analyze the performance of the identification
method and for comparison purposes. A parametric measurement analysis is
provided as a function of modulation order, signal-to-noise ratio, number of
symbols, and degrees of freedom of the underlying test. The proposed method
outperforms statistical tests that are based on fixed thresholds or empirical
values, while the a priori information requirement and complexity of the proposed
method are lower than those of coherent identification techniques.
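As a minimal sketch of the underlying idea (our own illustration, not the paper's estimator or test), the Python snippet below computes the standard sample estimate of the fourth-order cumulant C42 of a zero-mean complex signal and shows that it nearly vanishes for an OFDM signal while remaining large for single-carrier QPSK:

```python
import numpy as np

def c42_estimate(x):
    """Sample estimate of the fourth-order cumulant C42 of a
    zero-mean complex signal: E|x|^4 - |E x^2|^2 - 2 (E|x|^2)^2.
    It vanishes for Gaussian signals, so |C42| ~ 0 flags OFDM."""
    x = x - x.mean()
    m2 = np.mean(np.abs(x) ** 2)    # E[|x|^2]
    m2c = np.mean(x ** 2)           # E[x^2]
    m4 = np.mean(np.abs(x) ** 4)    # E[|x|^4]
    return m4 - np.abs(m2c) ** 2 - 2 * m2 ** 2

rng = np.random.default_rng(0)
n_sym, n_sub = 64, 64
qpsk = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2),
                  size=(n_sym, n_sub))

# OFDM: IFFT each block of subcarrier symbols (near-Gaussian in time
# by the central limit theorem, since each sample sums 64 symbols).
ofdm = np.fft.ifft(qpsk, axis=1).ravel() * np.sqrt(n_sub)
single = qpsk.ravel()               # single-carrier QPSK baseline

for name, sig in [("OFDM", ofdm), ("QPSK", single)]:
    c42 = c42_estimate(sig) / np.mean(np.abs(sig) ** 2) ** 2  # normalize
    print(f"{name}: |C42| = {abs(c42):.3f}")   # ~0.02 vs ~1.0
```

The paper's actual test goes further: it whitens the cumulant estimates with their covariance matrix to form a chi-square statistic with a constant false alarm rate, rather than thresholding the raw cumulant as done here.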
Comparison of Current Gravity Estimation and Determination Models
This paper will discuss the history of gravity estimation and determination models while analyzing methods that are in development. Some fundamental methods for calculating the gravity field include spherical harmonics solutions, local weighted interpolation, and global point mascon (PMC) modeling. Recently, high-accuracy measurements have become more accessible, and the requirements for high-order geopotential modeling have become more stringent. Interest in irregular bodies, accurate models of the hydrological system, and on-board processing has demanded a comprehensive model that can quickly and accurately compute the geopotential with low memory costs. This trade study of current geopotential modeling techniques will reveal that each modeling technique has a unique use case. It is notable that the spherical harmonics model is relatively accurate but poses a cumbersome inversion problem. PMC and interpolation models, on the other hand, are computationally efficient but require more research to become robust models with high levels of accuracy. Considerations of the trade study will suggest further research for the point mascon model. The PMC model should be improved through mascon refinement, direct solutions that stem from geodetic measurements, and further validation of the gravity gradient. Finally, the potential for each model to be implemented with parallel computation will be shown to lead to large improvements in computing time while reducing the memory cost for each technique.
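As a brief illustration of the point mascon idea (our own sketch, not drawn from the paper), a PMC model represents a body as discrete point masses and sums their Newtonian attractions. Each term in the sum is independent, which is what makes the approach attractive for the parallel computation the abstract mentions:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def pmc_acceleration(r, mascon_pos, mascon_mass):
    """Gravitational acceleration at field point r from point mascons:
    a = -G * sum_i m_i (r - r_i) / |r - r_i|^3.
    Every term is independent, so the sum parallelizes trivially."""
    d = r - mascon_pos                      # (N, 3) separations
    dist3 = np.linalg.norm(d, axis=1) ** 3  # (N,)
    return -G * (mascon_mass[:, None] * d / dist3[:, None]).sum(axis=0)

# Toy usage: a crude 8-mascon model of a 1e16 kg irregular body.
rng = np.random.default_rng(2)
pos = rng.uniform(-500.0, 500.0, size=(8, 3))  # mascon positions, m
mass = np.full(8, 1e16 / 8)                    # equal mass split, kg
print(pmc_acceleration(np.array([2000.0, 0.0, 0.0]), pos, mass))
```

Unlike a spherical harmonics expansion, this evaluation stays valid arbitrarily close to an irregular surface; its accuracy is governed by how finely the mass distribution is discretized, which is the mascon-refinement question the paper raises.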
Rank-based camera spectral sensitivity estimation
In order to accurately predict a digital camera's response to spectral stimuli, the spectral sensitivity functions of its sensor need to be known. These functions can be determined by direct measurement in the lab (a difficult and lengthy procedure) or through simple statistical inference. Statistical inference methods are based on the observation that when a camera responds linearly to spectral stimuli, the device spectral sensitivities are linearly related to the camera RGB response values, and so can be found through regression. However, for rendered images, such as the JPEG images taken by a mobile phone, this assumption of linearity is violated. Even small departures from linearity can negatively impact the accuracy of the recovered spectral sensitivities when a regression method is used. In our work, we develop a novel camera spectral sensitivity estimation technique that can recover the linear device spectral sensitivities from linear images and the effective linear sensitivities from rendered images. According to our method, the rank order of a pair of responses imposes a constraint on the shape of the underlying spectral sensitivity curve of the sensor. Technically, each rank-pair splits the space where the underlying sensor might lie into two parts (a feasible region and an infeasible region). By intersecting the feasible regions from all the rank-pairs, we can find a feasible region of sensor space. Experiments demonstrate that using rank orders delivers estimation accuracy equal to that of the prior art. However, the rank-based method delivers a step change in estimation performance when the data is not linear and, for the first time, allows for the estimation of the effective sensitivities of devices that may not even have a "raw mode." Experiments validate our method.
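As a minimal sketch of the half-space intuition (our own illustration; the helper name and the use of a linear program to pick a feasible point are assumptions, not the authors' implementation), each rank-pair with response_i > response_j yields a linear constraint (c_i - c_j) . s > 0 on the sensor vector s, and any point satisfying all such constraints lies in the intersected feasible region:

```python
import numpy as np
from scipy.optimize import linprog

def feasible_sensor(stimuli, ranked_pairs, margin=1e-6):
    """Find one sensor vector s consistent with all rank-pairs.

    stimuli:      (P, W) matrix; row p is the spectral stimulus
                  (illuminant x reflectance) of patch p at W wavelengths.
    ranked_pairs: list of (i, j) with observed response_i > response_j,
                  each imposing (stimuli[i] - stimuli[j]) @ s >= margin.
    """
    # linprog uses A_ub @ s <= b_ub, so negate each constraint.
    A = np.array([stimuli[j] - stimuli[i] for i, j in ranked_pairs])
    b = np.full(len(ranked_pairs), -margin)
    W = stimuli.shape[1]
    res = linprog(np.zeros(W), A_ub=A, b_ub=b,
                  bounds=[(0.0, 1.0)] * W, method="highs")
    return res.x if res.success else None  # None: region is empty

# Toy usage: recover a smooth 31-sample sensor from synthetic ranks.
rng = np.random.default_rng(3)
true_s = np.exp(-0.5 * ((np.arange(31) - 15) / 5.0) ** 2)
stimuli = rng.uniform(size=(40, 31))
responses = stimuli @ true_s
pairs = [(i, j) for i in range(40) for j in range(40)
         if responses[i] > responses[j]]
s_hat = feasible_sensor(stimuli, pairs)
```

Because any monotone rendering nonlinearity preserves the rank order of responses, the same constraints hold for JPEG responses, which is the property that lets the method handle rendered images where regression fails.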
Analogue algorithm for parallel factorization of an exponential number of large integers I. Theoretical description
We describe a novel analogue algorithm that allows the simultaneous
factorization of an exponential number of large integers with a polynomial
number of experimental runs. It is the interference-induced periodicity of
"factoring" interferograms, measured at the output of an analogue computer, that
allows the selection of the factors of each integer [1,2,3,4]. At the present
stage the algorithm manifests an exponential scaling, which may be overcome by
an extension of this method to correlated qubits emerging from nth-order quantum
correlation measurements. We describe the conditions for a generic physical
system to compute such an analogue algorithm. A particular example, given by an
"optical computer" based on optical interference, will be addressed in the
second paper of this series [5].
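The abstract does not spell out how interference singles out factors. A well-known example of the same principle (our own illustration, assuming the truncated Gauss-sum flavor of interference-based factoring rather than this paper's specific scheme) is that the sum A_N(l) = (1/M) sum_m exp(2 pi i m^2 N / l) has magnitude 1 exactly when l divides N, while the quadratic phases of non-factors interfere destructively:

```python
import numpy as np

def gauss_sum_mag(N, l, M=20):
    """|A_N(l)| for the truncated Gauss sum with M terms.
    Every term equals 1 when l divides N (so |A| = 1); otherwise
    the quadratic phases interfere destructively and |A| stays small."""
    m = np.arange(M)
    return np.abs(np.exp(2j * np.pi * m**2 * N / l).mean())

N = 1155  # = 3 * 5 * 7 * 11
for l in range(2, 20):
    mag = gauss_sum_mag(N, l)
    print(f"l={l:2d}  |A|={mag:.3f}" + ("  <- factor" if mag > 0.9 else ""))
```

In an analogue implementation, such a sum can appear directly as the intensity at an interferometer output, which is the sense in which an interferogram "selects" the factors of an integer.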