133 research outputs found
Difference Sets and Positive Exponential Sums I. General Properties
We describe general connections between intersective properties of sets in Abelian groups and positive exponential sums. In particular, given a set A, the maximal size of a set whose difference set avoids A will be related to positive exponential sums using frequencies from A. © 2013 Springer Science+Business Media New York
Systems of mutually unbiased Hadamard matrices containing real and complex matrices
We use combinatorial and Fourier analytic arguments to prove various non-existence results on systems of real and complex unbiased Hadamard matrices. In particular, we prove that a complete system of complex mutually unbiased Hadamard matrices (MUHs) in any dimension cannot contain more than one real Hadamard matrix. We also give new proofs of several known structural results in low dimensions.
Squares and difference sets in finite fields
For infinitely many primes p = 4k + 1 we give a slightly improved upper bound for the maximal cardinality of a set B ⊂ Z_p such that the difference set B − B contains only quadratic residues. Namely, instead of the "trivial" bound |B| ≤ √p we prove |B| ≤ √p − 1, under suitable conditions on p. The new bound is valid for approximately three quarters of the primes p = 4k + 1.
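The object in this abstract is easy to explore at very small scale. The sketch below (illustrative only, not from the paper; brute force is feasible only for tiny p, far from the asymptotic regime the bound addresses) finds the largest B ⊆ Z_p all of whose differences of distinct elements are quadratic residues:

```python
from itertools import combinations

def max_qr_difference_set(p):
    """Largest |B| for B a subset of Z_p such that every difference of two
    distinct elements of B is a quadratic residue mod p (brute force)."""
    qr = {(x * x) % p for x in range(1, p)}  # nonzero quadratic residues
    best = 1
    # The property is translation invariant, so we may assume 0 is in B.
    for size in range(1, p):
        if any(all((a - b) % p in qr and (b - a) % p in qr
                   for a, b in combinations((0,) + rest, 2))
               for rest in combinations(range(1, p), size)):
            best = size + 1
        else:
            break
    return best

print(max_qr_difference_set(13))  # -> 3, consistent with |B| <= sqrt(13)
```

For p = 13 the residues are {1, 3, 4, 9, 10, 12} and B = {0, 1, 4} works, while no 4-element set does, matching the √p-scale bound discussed above.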
Triangulations and a discrete Brunn-Minkowski inequality in the plane
For a set A of points in the plane, not all collinear, we denote by tr(A) the number of triangles in any triangulation of A; that is, tr(A) = 2i + b − 2, where b and i are the numbers of points of A in the boundary and the interior of conv(A) (we use conv(A) to denote the convex hull of A). We conjecture the following analogue of the Brunn-Minkowski inequality: for any two point sets A and B one has tr(A + B)^(1/2) ≥ tr(A)^(1/2) + tr(B)^(1/2). We prove this conjecture in several cases: if conv(A) = conv(B), if B = A ∪ {b'}, if |B| = 3, or if neither A nor B has interior points. Comment: 30 pages
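The triangle count this abstract works with is a standard consequence of Euler's formula: every triangulation of a planar point set with b boundary points and i interior points has exactly 2i + b − 2 triangles. A minimal sanity check (illustrative, not from the paper):

```python
def tr(b, i):
    """Number of triangles in any triangulation of a planar point set
    with b boundary points and i interior points (Euler's formula)."""
    return 2 * i + b - 2

# Four corners of a square: any triangulation has two triangles.
print(tr(4, 0))  # -> 2
# Square corners plus the centre point: four triangles.
print(tr(4, 1))  # -> 4
```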
High precision 89Y(α,α)89Y scattering at low energies
Elastic scattering cross sections of the 89Y(α,α)89Y reaction have been measured at energies E = 15.51 and 18.63 MeV. The high precision data for the semi-magic nucleus 89Y are used to derive a local potential and to evaluate the predictions of global and regional α-nucleus potentials. The variation of the elastic alpha scattering cross sections along the N = 50 isotonic chain is investigated by a study of the ratios of angular distributions for 89Y(α,α)89Y and 92Mo(α,α)92Mo at E ≈ 15.51 and 18.63 MeV. This ratio is a very sensitive probe at energies close to the Coulomb barrier, where scattering data alone are usually not enough to characterize the different potentials. Furthermore, α-cluster states in 93Nb = 89Y ⊗ α are investigated.
Editorial: Developments in cardiac implantable electronic device therapy: how can we improve clinical implementation?
Validation and verification of a 2D lattice Boltzmann solver for incompressible fluid flow
The lattice Boltzmann method (LBM) is becoming increasingly popular in the fluid mechanics community because it provides a relatively easy implementation of an incompressible fluid flow solver. Furthermore, the particle-based LBM can be applied in microscale flows where continuum-based Navier-Stokes solvers fail. Here we present the validation and verification of a two-dimensional in-house lattice Boltzmann solver with two different collision models, namely the BGKW and the MRT models [1]. Five different cases were studied: (i) a channel flow was investigated, the results were compared to the analytical solution, and the convergence properties of the collision models were determined; (ii) the lid-driven cavity problem was examined [2], and the flow features and the velocity profiles were compared to existing simulation results at three different Reynolds numbers; (iii) the flow in a backward-facing step geometry was validated against experimental data [3]; (iv) the flow in a sudden expansion geometry was compared to experimental data at two different Reynolds numbers [4]; and finally (v) the flow around a cylinder was studied at a higher Reynolds number in the turbulent regime. The first four test cases showed that both the BGKW and the MRT models were capable of giving qualitatively and quantitatively good results for these laminar flow cases. The simulations around a cylinder highlighted that the BGKW model becomes unstable at high Reynolds numbers, but the MRT model remains suitable for capturing the turbulent von Karman vortex street. The in-house LBM code has been developed in C and has also been parallelised for GPU architectures using CUDA [5] and for CPU architectures using the Partitioned Global Address Space model with UPC [6].
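To make the collision-model terminology concrete, a single-cell D2Q9 BGK collision step can be sketched as below. This is an illustrative Python sketch, not the in-house C/CUDA/UPC solver the abstract describes; the MRT model differs by replacing the scalar relaxation rate 1/τ with a relaxation matrix applied in moment space.

```python
# D2Q9 lattice: discrete velocities and the standard weights.
E = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
     (1, 1), (-1, 1), (-1, -1), (1, -1)]
W = [4/9] + [1/9] * 4 + [1/36] * 4

def equilibrium(rho, ux, uy):
    """Second-order equilibrium distribution for density rho, velocity u."""
    usq = ux * ux + uy * uy
    return [w * rho * (1 + 3 * (ex * ux + ey * uy)
                       + 4.5 * (ex * ux + ey * uy) ** 2
                       - 1.5 * usq)
            for (ex, ey), w in zip(E, W)]

def bgk_collide(f, tau):
    """One BGK collision: relax f towards its local equilibrium."""
    rho = sum(f)
    ux = sum(fi * ex for fi, (ex, ey) in zip(f, E)) / rho
    uy = sum(fi * ey for fi, (ex, ey) in zip(f, E)) / rho
    feq = equilibrium(rho, ux, uy)
    return [fi - (fi - fe) / tau for fi, fe in zip(f, feq)]
```

A quick check of the fixed point: if f is already the equilibrium for (rho, u), a collision leaves it unchanged, and mass and momentum are conserved by construction.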
A superadditivity and submultiplicativity property for cardinalities of sumsets
For finite sets of integers A1, . . . , An we study the cardinality of the n-fold sumset A1 + · · · + An compared to those of the (n − 1)-fold sumsets A1 + · · · + Ai−1 + Ai+1 + · · · + An. We prove a superadditivity and a submultiplicativity property for these quantities. We also examine the case when the addition of elements is restricted to an addition graph between the sets.
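The quantities compared in this abstract are easy to compute for small examples. The sketch below (illustrative only; the paper's precise inequalities are stated there) evaluates the full sumset alongside the (n − 1)-fold sumsets obtained by omitting one set at a time:

```python
from itertools import product

def sumset(*sets):
    """A1 + ... + An = {a1 + ... + an : ai in Ai}."""
    return {sum(t) for t in product(*sets)}

A = [{0, 1}, {0, 2}, {0, 4}]
full = sumset(*A)                        # A1 + A2 + A3
partial = [sumset(*(A[:i] + A[i + 1:]))  # omit Ai
           for i in range(len(A))]
print(len(full), [len(s) for s in partial])  # -> 8 [4, 4, 4]
```

For this example |A1 + A2 + A3|^2 = 64 = 4 · 4 · 4, i.e. the full sumset is as large as a submultiplicative bound of this type allows relative to the three partial sumsets.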
Performance evaluation of a two-dimensional lattice Boltzmann solver using CUDA and PGAS UPC based parallelisation
The Unified Parallel C (UPC) language from the Partitioned Global Address Space (PGAS) family unifies the advantages of shared and local memory spaces and offers a relatively straightforward code parallelisation on the Central Processing Unit (CPU). In contrast, the Compute Unified Device Architecture (CUDA) development kit gives a tool to make use of the Graphics Processing Unit (GPU). We provide a detailed comparison between these novel techniques through the parallelisation of a two-dimensional lattice Boltzmann method based fluid flow solver. Our comparison between the CUDA and UPC parallelisations takes into account the required conceptual effort, the performance gain, and the limitations of the approaches from the application-oriented developers' point of view. We demonstrated that UPC led to competitive efficiency with the local memory implementation. However, the performance of the shared memory code fell behind our expectations, and we concluded that the investigated UPC compilers could not efficiently treat the shared memory space. The CUDA implementation proved to be more complex than the UPC approach, mainly because of the complicated memory structure of the graphics card, which also makes GPUs suitable for the parallelisation of the lattice Boltzmann method.
- …