
    A correction to the enhanced bottom drag parameterisation of tidal turbines

    Hydrodynamic modelling is an important tool for the development of tidal stream energy projects. Many hydrodynamic models incorporate the effect of tidal turbines through an enhanced bottom drag. In this paper we show that although for coarse grid resolutions (kilometre scale) the resulting force exerted on the flow agrees well with the theoretical value, the force starts decreasing with decreasing grid sizes once these become smaller than the length scale of the wake recovery. This is because the assumption that the upstream velocity can be approximated by the local model velocity is no longer valid. Using linear momentum actuator disc theory, however, we derive a relationship between these two velocities and formulate a correction to the enhanced bottom drag formulation that consistently applies a force that remains close to the theoretical value for all grid sizes down to the turbine scale. In addition, a better understanding of the relation between the model, upstream, and actual turbine velocity, as predicted by actuator disc theory, leads to an improved estimate of the usefully extractable energy. We show how the corrections can be applied (demonstrated here for the models MIKE 21 and Fluidity) by a simple modification of the drag coefficient.
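    The flavour of such a correction can be sketched from standard linear momentum actuator disc theory. This is a minimal illustration, not the paper's exact formulation: the function names are ours, and we assume the fine-grid model velocity approximates the disc velocity, so the quadratic drag law needs the factor (u_upstream / u_disc)^2 = 1/(1 - a)^2.

    ```python
    import math

    def induction_factor(ct: float) -> float:
        """Solve C_T = 4a(1 - a) for the axial induction factor a,
        taking the physical root a <= 1/2 of actuator disc theory."""
        if not 0.0 <= ct <= 1.0:
            raise ValueError("thrust coefficient must lie in [0, 1]")
        return 0.5 * (1.0 - math.sqrt(1.0 - ct))

    def corrected_drag_coefficient(cd: float, ct: float) -> float:
        """Rescale an enhanced bottom-drag coefficient so that a force
        computed from the local (disc-scale) model velocity matches the
        force actuator disc theory predicts from the upstream velocity,
        using u_disc = u_upstream * (1 - a)."""
        a = induction_factor(ct)
        return cd / (1.0 - a) ** 2
    ```

    For example, a turbine with thrust coefficient C_T = 0.75 has induction factor a = 0.25, so the drag coefficient applied to the local velocity must be inflated by 1/(0.75)^2 ≈ 1.78 to recover the theoretical thrust.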

    Distributed Exact Shortest Paths in Sublinear Time

    The distributed single-source shortest paths problem is one of the most fundamental and central problems in message-passing distributed computing. The classical Bellman-Ford algorithm solves it in $O(n)$ time, where $n$ is the number of vertices in the input graph $G$. Peleg and Rubinovich (FOCS'99) showed a lower bound of $\tilde{\Omega}(D + \sqrt{n})$ for this problem, where $D$ is the hop-diameter of $G$. Whether or not this problem can be solved in $o(n)$ time when $D$ is relatively small is a major notorious open question. Despite intensive research \cite{LP13,N14,HKN15,EN16,BKKL16} that yielded near-optimal algorithms for the approximate variant of this problem, no progress was reported for the original problem. In this paper we answer this question in the affirmative. We devise an algorithm that requires $O((n \log n)^{5/6})$ time for $D = O(\sqrt{n \log n})$, and $O(D^{1/3} \cdot (n \log n)^{2/3})$ time for larger $D$. This running time is sublinear in $n$ in almost the entire range of parameters, specifically for $D = o(n/\log^2 n)$. For the all-pairs shortest paths problem, our algorithm requires $O(n^{5/3} \log^{2/3} n)$ time, regardless of the value of $D$. We also devise the first algorithm with non-trivial complexity guarantees for computing exact shortest paths in the multipass semi-streaming model of computation. From the technical viewpoint, our algorithm computes a hopset $G''$ of a skeleton graph $G'$ of $G$ without first computing $G'$ itself. We then conduct a Bellman-Ford exploration in $G' \cup G''$, while computing the required edges of $G'$ on the fly. As a result, our algorithm computes exactly those edges of $G'$ that it really needs, rather than computing approximately the entire $G'$.
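    The classical Bellman-Ford baseline that the abstract takes as its starting point can be sketched sequentially as follows; in the distributed CONGEST model each outer round corresponds to every vertex broadcasting its current tentative distance, hence the $O(n)$ round bound. This is only the textbook algorithm, not the paper's hopset-based construction.

    ```python
    def bellman_ford(n, edges, source):
        """Classical Bellman-Ford: up to n-1 rounds of relaxing every edge.
        n      -- number of vertices, labelled 0..n-1
        edges  -- list of directed weighted edges (u, v, w)
        source -- source vertex
        Returns the list of shortest-path distances from source."""
        INF = float("inf")
        dist = [INF] * n
        dist[source] = 0
        for _ in range(n - 1):
            updated = False
            for u, v, w in edges:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
                    updated = True
            if not updated:
                break  # distances stabilised early; no further rounds needed
        return dist
    ```

    After round $k$, every vertex holds the exact distance over paths of at most $k$ hops, which is why bounding the hop count (via hopsets, as in the paper) directly bounds the number of rounds.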

    Conduction mechanism and magnetotransport in multi-walled carbon nanotubes

    We report on a numerical study of quantum diffusion over micron lengths in defect-free multi-walled nanotubes. The intershell coupling allows the electron to spread over several shells, and when their periodicities along the nanotube axis are incommensurate, which is likely in real materials, the electronic propagation is shown to be non-ballistic. This results in magnetotransport properties which are exceptional for a disorder-free system, and provides a new scenario to understand the experiments (A. Bachtold et al., Nature 397, 673 (1999)). Comment: 4 pages

    Stellar intensity interferometry: Optimizing air Cherenkov telescope array layouts

    Kilometric-scale optical imagers seem feasible to realize by intensity interferometry, using telescopes primarily erected for measuring Cherenkov light induced by gamma rays. Planned arrays envision 50--100 telescopes, distributed over some 1--4 km$^2$. Although array layouts and telescope sizes will primarily be chosen for gamma-ray observations, their interferometric performance may also be optimized. Observations of stellar objects were numerically simulated for different array geometries, yielding signal-to-noise ratios for different Fourier components of the source images in the interferometric $(u,v)$-plane. Simulations were made for layouts actually proposed for future Cherenkov telescope arrays, and for subsets with only a fraction of the telescopes. All large arrays provide dense sampling of the $(u,v)$-plane due to the sheer number of telescopes, irrespective of their geographic orientation or stellar coordinates. However, for improved coverage of the $(u,v)$-plane and a wider variety of baselines (enabling better image reconstruction), an exact east-west grid should be avoided for the numerous smaller telescopes, and repetitive geometric patterns avoided for the few large ones. Sparse arrays become severely limited by a lack of short baselines, and to cover astrophysically relevant dimensions between 0.1--3 milliarcseconds in visible wavelengths, baselines between pairs of telescopes should cover the whole interval 30--2000 m. Comment: 12 pages, 10 figures; presented at the SPIE conference "Optical and Infrared Interferometry II", San Diego, CA, USA (June 2010)
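    The baseline-coverage criterion above can be illustrated with a small sketch that enumerates pairwise telescope baselines on the ground. This is our own simplified illustration: the function names are hypothetical, and real $(u,v)$ coverage additionally involves projecting each baseline onto the plane perpendicular to the star's direction and tracking it as the Earth rotates, which this sketch ignores.

    ```python
    import itertools
    import math

    def baseline_lengths(positions):
        """Sorted pairwise baseline lengths (metres) for telescopes at
        (east, north) ground coordinates."""
        return sorted(
            math.hypot(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in itertools.combinations(positions, 2)
        )

    def covers_interval(baselines, lo=30.0, hi=2000.0, max_gap_ratio=2.0):
        """Rough check that the sampled baselines span [lo, hi] metres
        without any gap larger than max_gap_ratio between consecutive
        lengths -- a crude stand-in for 'dense (u,v)-plane sampling'."""
        inside = [b for b in baselines if lo <= b <= hi]
        if not inside:
            return False
        if inside[0] > lo * max_gap_ratio or inside[-1] < hi / max_gap_ratio:
            return False
        return all(b2 / b1 <= max_gap_ratio
                   for b1, b2 in zip(inside, inside[1:]))
    ```

    A compact square of four telescopes 100 m apart, for instance, yields only baselines of about 100 m and 141 m, failing the 30--2000 m coverage criterion, which mirrors the abstract's point that sparse or regular layouts lack the short and varied baselines needed for image reconstruction.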