16,071 research outputs found
A correction to the enhanced bottom drag parameterisation of tidal turbines
Hydrodynamic modelling is an important tool for the development of tidal
stream energy projects. Many hydrodynamic models incorporate the effect of
tidal turbines through an enhanced bottom drag. In this paper we show that
although for coarse grid resolutions (kilometre scale) the resulting force
exerted on the flow agrees well with the theoretical value, the force starts
decreasing with decreasing grid sizes when these become smaller than the length
scale of the wake recovery. This is because the assumption that the upstream
velocity can be approximated by the local model velocity is no longer valid.
Using linear momentum actuator disc theory, however, we derive a relationship
between these two velocities and formulate a correction to the enhanced bottom
drag formulation that consistently applies a force that remains close to the
theoretical value for all grid sizes down to the turbine scale. In addition, a
better understanding of the relation between the model, upstream, and actual
turbine velocity, as predicted by actuator disc theory, leads to an improved
estimate of the usefully extractable energy. We show how the corrections can be
applied (demonstrated here for the models MIKE 21 and Fluidity) by a simple
modification of the drag coefficient.
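The velocity relationship behind such a correction can be sketched from 1D linear momentum actuator disc theory. The function names and the simple (u_upstream/u_disc)^2 rescaling below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def induction_factor(c_t):
    """Invert C_T = 4a(1 - a) from 1D actuator disc theory (branch a <= 1/2)."""
    return 0.5 * (1.0 - math.sqrt(1.0 - c_t))

def upstream_velocity(u_disc, c_t):
    """Actuator disc theory gives u_disc = (1 - a) * u_upstream."""
    return u_disc / (1.0 - induction_factor(c_t))

def corrected_drag(c_d_enhanced, c_t):
    # When the grid resolves the turbine scale, the local model velocity is
    # closer to the disc velocity than to the upstream velocity; rescaling
    # the enhanced drag by (u_upstream / u_disc)^2 keeps the applied force
    # at the theoretical thrust 0.5 * rho * C_T * A * u_upstream^2.
    return c_d_enhanced / (1.0 - induction_factor(c_t)) ** 2
```

For example, with C_T = 0.8 the induction factor is a = (1 - sqrt(0.2))/2, so the naive local-velocity drag underestimates the force by the factor (1 - a)^2 that the correction restores.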
Distributed Exact Shortest Paths in Sublinear Time
The distributed single-source shortest paths problem is one of the most
fundamental and central problems in message-passing distributed computing.
The classical Bellman-Ford algorithm solves it in O(n) time, where n is the
number of vertices in the input graph G. Peleg and Rubinovich (FOCS'99)
showed a lower bound of Ω̃(√n + D) for this problem, where D
is the hop-diameter of G.
Whether or not this problem can be solved in o(n) time when D is
relatively small is a major notorious open question. Despite intensive research
\cite{LP13,N14,HKN15,EN16,BKKL16} that yielded near-optimal algorithms for the
approximate variant of this problem, no progress was reported for the original
problem.
In this paper we answer this question in the affirmative. We devise an
algorithm that requires O((n log n)^{5/6}) time for D = O(√(n log n)), and
O(D^{1/3} (n log n)^{2/3}) time for larger D. This
running time is sublinear in n in almost the entire range of parameters,
specifically, for D = o(n / log^2 n). For the all-pairs shortest paths
problem, our algorithm requires O(n^{5/3} log^{2/3} n) time, regardless of
the value of D.
We also devise the first algorithm with non-trivial complexity guarantees for
computing exact shortest paths in the multipass semi-streaming model of
computation.
From the technical viewpoint, our algorithm computes a hopset of a
skeleton graph G' of G without first computing G' itself. We then conduct
a Bellman-Ford exploration in G' augmented with the hopset, while computing the required edges
of G' on the fly. As a result, our algorithm computes exactly those edges of
G' that it really needs, rather than approximately computing the entire
skeleton graph.
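The role of a hopset can be sketched with ordinary sequential Bellman-Ford: merging distance-preserving shortcut edges into the graph lets a bounded number of relaxation rounds reach exact distances. This is only an illustration of the idea, not the paper's distributed construction:

```python
def bellman_ford(n, edges, source, rounds):
    """Relax every edge for a fixed number of rounds. If a beta-hopset
    (shortcut edges that preserve shortest-path distances) is merged into
    `edges`, beta rounds already give exact distances from `source`."""
    dist = [float("inf")] * n
    dist[source] = 0.0
    for _ in range(rounds):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

# Path 0-1-2-3 with unit weights, edges listed far-to-near so that one
# sequential pass mimics one synchronized round.
path = [(2, 3, 1.0), (1, 2, 1.0), (0, 1, 1.0)]
# Hypothetical hopset: direct shortcuts from the source, same distances.
shortcuts = [(0, 2, 2.0), (0, 3, 3.0)]
```

Without the shortcuts, three rounds are needed to settle vertex 3; with them, a single round suffices, which is the effect the hopset buys in the distributed setting.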
Conduction mechanism and magnetotransport in multi-walled carbon nanotubes
We report on a numerical study of quantum diffusion over micron lengths in
defect-free multi-walled nanotubes. The intershell coupling allows the electron
spreading over several shells, and when their periodicities along the nanotube
axis are incommensurate, which is likely in real materials, the electronic
propagation is shown to be non-ballistic. This results in magnetotransport
properties that are exceptional for a disorder-free system, and provides a new
scenario for understanding the experiments (A. Bachtold et al., Nature 397, 673
(1999)).
Comment: 4 pages
Stellar intensity interferometry: Optimizing air Cherenkov telescope array layouts
Kilometric-scale optical imagers seem feasible to realize by intensity
interferometry, using telescopes primarily erected for measuring Cherenkov
light induced by gamma rays. Planned arrays envision 50--100 telescopes,
distributed over some 1--4 km. Although array layouts and telescope sizes
will primarily be chosen for gamma-ray observations, also their interferometric
performance may be optimized. Observations of stellar objects were numerically
simulated for different array geometries, yielding signal-to-noise ratios for
different Fourier components of the source images in the interferometric
(u,v)-plane. Simulations were made for layouts actually proposed for future
Cherenkov telescope arrays, and for subsets with only a fraction of the
telescopes. All large arrays provide dense sampling of the (u,v)-plane due to
the sheer number of telescopes, irrespective of their geographic orientation or
stellar coordinates. However, for improved coverage of the (u,v)-plane and a
wider variety of baselines (enabling better image reconstruction), an exact
east-west grid should be avoided for the numerous smaller telescopes, and
repetitive geometric patterns avoided for the few large ones. Sparse arrays
become severely limited by a lack of short baselines, and to cover
astrophysically relevant dimensions between 0.1--3 milliarcseconds in visible
wavelengths, baselines between pairs of telescopes should cover the whole
interval 30--2000 m.
Comment: 12 pages, 10 figures; presented at the SPIE conference "Optical and
Infrared Interferometry II", San Diego, CA, USA (June 2010)
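The baseline-coverage requirement can be checked mechanically for a candidate layout. The gap criterion `max_gap` below is a hypothetical sampling threshold, not a value from the paper:

```python
import itertools
import math

def baseline_lengths(positions):
    """All pairwise separations between telescope positions (metres)."""
    return sorted(math.dist(p, q)
                  for p, q in itertools.combinations(positions, 2))

def covers_interval(lengths, lo=30.0, hi=2000.0, max_gap=100.0):
    """Rough check that the baselines sample [lo, hi] metres without
    leaving gaps larger than max_gap (an illustrative criterion)."""
    usable = [b for b in lengths if lo <= b <= hi]
    if not usable or usable[0] - lo > max_gap or hi - usable[-1] > max_gap:
        return False
    return all(b - a <= max_gap for a, b in zip(usable, usable[1:]))
```

A dense grid of telescopes passes such a check over its own extent, while a sparse pair separated by a long baseline fails it for lack of short baselines, mirroring the limitation noted above.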
- …