
    On 3rd and 4th moments of finite upper half plane graphs

    Terras [A. Terras, Fourier Analysis on Finite Groups and Applications, Cambridge Univ. Press, 1999] gave a conjecture on the distribution of the eigenvalues of finite upper half plane graphs, known as a finite analogue of the Sato–Tate conjecture. Several modified versions of the conjecture exist. In this paper, we show that the conjecture is not correct in its original form (i.e., Conjecture 1.1), by calculating the 3rd and 4th moments of the distribution of the eigenvalues. We remark that a weaker version of the conjecture (i.e., Conjecture 1.2) may still hold.
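    Illustrative aside (not from the paper): the m-th moment of a graph's adjacency-eigenvalue distribution is the average of the m-th powers of the eigenvalues, equivalently (1/n)·tr(A^m). A minimal numpy sketch, shown on an arbitrary 5-cycle rather than a finite upper half plane graph:

```python
import numpy as np

def spectral_moment(adjacency, m):
    """m-th moment of the adjacency eigenvalue distribution:
    the mean of eigenvalues**m, equal to (1/n) * trace(A^m)."""
    eigenvalues = np.linalg.eigvalsh(adjacency)
    return np.mean(eigenvalues ** m)

# Example on a small arbitrary graph (a 5-cycle), not one of the
# finite upper half plane graphs studied in the paper.
A = np.zeros((5, 5))
for i in range(5):
    A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1

print(spectral_moment(A, 3))  # 3rd moment: closed 3-walks per vertex (0, no triangles)
print(spectral_moment(A, 4))  # 4th moment: closed 4-walks per vertex
```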

    Tensile buckling of advanced turboprops

    Theoretical studies were conducted to determine analytically the tensile buckling of advanced propeller blades (turboprops) in centrifugal fields, as well as the effects of tensile buckling on other types of structural behavior, such as resonant frequencies and flutter. Theoretical studies were also conducted to establish the advantages of using high-performance composite turboprops as compared to titanium. Results show that the vibration frequencies are not affected appreciably below 80 percent of the tensile buckling speed. Some frequencies approach zero as the tensile buckling speed is approached. Composites provide a substantial advantage over titanium on a buckling-speed-to-weight basis. Vibration modes change as the rotor speed is increased, and substantial geometric coupling is present.

    Z(3) Interfaces in Lattice Gauge Theory

    A study is made of properties of the Z(3) interface which forms between the different ordered phases of pure SU(3) gauge theory above a critical temperature. The theory is simulated on a (2+1)-D lattice at various temperatures above this critical point. At high temperatures, the interface tension is shown to agree well with the prediction of perturbation theory. Near the critical temperature, the interface behaviour is characterised by various displacement moments, and modelled by an interacting scalar field theory. This thesis is provided for reference, as it gives full details of the computational and statistical methods outlined only briefly in preprints hep-lat/9605040 and hep-lat/9607005. Comment: TeX, 143 pages, 52 figures.
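    Illustrative aside (not from the thesis): "displacement moments" of an interface can be read as central moments of the interface height profile about its mean. A minimal sketch with a made-up 1D profile; the actual observables and lattice setup in the thesis may differ:

```python
import numpy as np

def displacement_moments(height, orders=(2, 4)):
    """Central moments <(h - <h>)^k> of an interface height profile h(x)."""
    h = np.asarray(height, dtype=float)
    d = h - h.mean()
    return {k: np.mean(d ** k) for k in orders}

# Toy rough interface standing in for a measured profile on a 1D slice.
rng = np.random.default_rng(0)
h = np.cumsum(rng.normal(size=256))  # random-walk-like roughness
print(displacement_moments(h))
```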

    Systematic Study of Accuracy of Wall-Modeled Large Eddy Simulation using Uncertainty Quantification Techniques

    The predictive accuracy of wall-modeled large eddy simulation is studied by systematic simulation campaigns of turbulent channel flow. The effects of the wall model, grid resolution and anisotropy, numerical convective scheme, and subgrid-scale modeling are investigated. All of these factors affect the resulting accuracy, and their action is to a large extent intertwined. The wall model is of the wall-stress type, and its sensitivity to the location of velocity sampling, as well as to the parameters of the law of the wall, is assessed. For efficient exploration of the model parameter space (anisotropic grid resolution and wall model parameter values), generalized polynomial chaos expansions are used to construct metamodels for the responses, which are taken to be measures of the predictive error in quantities of interest (QoIs). The QoIs include the mean wall shear stress and profiles of the mean velocity, the turbulent kinetic energy, and the Reynolds shear stress. DNS data is used as reference. Within the tested framework (a particular second-order accurate CFD code, OpenFOAM), the results provide ample support for the grid and method parameter recommendations proposed in the present paper, which give good results for the QoIs. Notably, good results are obtained with a grid of isotropic (cubic) hexahedral cells, with 15,000 cells per δ³, where δ is the channel half-height (or thickness of the turbulent boundary layer). The importance of providing enough numerical dissipation to obtain accurate QoIs is demonstrated. The main channel flow case investigated is Re_τ = 5200, but extension to a wide range of Re numbers is considered. Use of other numerical methods and software would likely modify these recommendations, at least slightly, but the proposed framework is fully applicable to investigate this as well.
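    Illustrative aside (not from the paper): the surrogate idea behind generalized polynomial chaos can be sketched in one dimension as a least-squares projection of a response onto Legendre polynomials of a rescaled input parameter. The parameter name and error values below are made up, and the paper's metamodels are multi-dimensional:

```python
import numpy as np

def legendre_surrogate(x_samples, y_samples, degree=4):
    """Fit a 1D polynomial-chaos-style surrogate: least-squares projection
    of the response y onto Legendre polynomials in x rescaled to [-1, 1]."""
    x = np.asarray(x_samples, dtype=float)
    xs = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0   # map to [-1, 1]
    V = np.polynomial.legendre.legvander(xs, degree)        # basis matrix
    coeffs, *_ = np.linalg.lstsq(V, np.asarray(y_samples, float), rcond=None)

    def surrogate(x_new):
        xn = 2.0 * (np.asarray(x_new, float) - x.min()) / (x.max() - x.min()) - 1.0
        return np.polynomial.legendre.legvander(xn, degree) @ coeffs

    return surrogate

# Hypothetical example: error in mean wall shear stress versus a wall-model
# sampling-height parameter (made-up numbers, not the paper's data).
h_sample = np.linspace(0.05, 0.3, 9)
error = 0.02 + 0.1 * (h_sample - 0.15) ** 2
model = legendre_surrogate(h_sample, error)
print(model(np.array([0.1, 0.2])))
```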

    Foundational principles for large scale inference: Illustrations through correlation mining

    When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics, the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far smaller than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of the recent work has focused on understanding the computational complexity of proposed methods for "Big Data." Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche, but only the latter regime applies to exa-scale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
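    Illustrative aside (not from the paper): the sample-starved regime (n far smaller than p) is easy to reproduce numerically. The sketch below screens a synthetic n = 20, p = 500 dataset for large sample correlations; because the data are pure noise, every pair it reports is a false discovery, which is the kind of effect a sample-complexity analysis must control. The threshold is arbitrary, and this is not the paper's procedure:

```python
import numpy as np

def screen_correlations(X, threshold):
    """Sample correlation screening: given an n x p data matrix (rows are
    samples, columns are variables), return index pairs whose sample
    correlation exceeds the threshold in magnitude."""
    R = np.corrcoef(X, rowvar=False)          # p x p sample correlation matrix
    iu = np.triu_indices_from(R, k=1)         # upper-triangular pairs
    hits = np.abs(R[iu]) > threshold
    return list(zip(iu[0][hits], iu[1][hits]))

# Sample-starved regime: n = 20 samples, p = 500 independent noise variables.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 500))
print(len(screen_correlations(X, threshold=0.8)))  # false discoveries from noise alone
```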

    Conceptual design of an airborne laser Doppler velocimeter system for studying wind fields associated with severe local storms

    An airborne laser Doppler velocimeter was evaluated for diagnostics of the wind field associated with an isolated severe thunderstorm. Two scanning configurations were identified. The first is a long-range (out to 10-20 km), roughly horizontal-plane mode intended to allow probing of the velocity field around the storm at the higher altitudes (4-10 km). The other is a shorter-range (out to 1-3 km) mode in which a vertical or horizontal plane is scanned for velocity (and possibly turbulence), intended for diagnostics of the lower-altitude region below the storm and in the outflow region. It was concluded that aircraft flight velocities are high enough, and severe storm lifetimes long enough, that a single airborne Doppler system, operating at a range of less than about 20 km, can view the storm area from two or more different aspects before the storm characteristics change appreciably.

    Confocal Laser Induced Fluorescence with Comparable Spatial Localization to the Conventional Method

    We present measurements of ion velocity distributions obtained by laser induced fluorescence (LIF) using a single viewport in an argon plasma. A patent-pending design, which we refer to as the confocal fluorescence telescope, combines large objective lenses with a large central obscuration and a spatial filter to achieve high spatial localization along the laser injection direction. Models of the injection and collection optics of the two assemblies are used to provide a theoretical estimate of the spatial localization of the confocal arrangement, which is taken to be the full width at half maximum of the spatial optical response. The new design achieves approximately 1.4 mm localization at a focal length of 148.7 mm, improving on previously published designs by an order of magnitude and approaching the localization achieved by the conventional method. The confocal method, however, does so without requiring a pair of separated, perpendicular optical paths. The confocal technique therefore eases the two-window access requirement of the conventional method, extending the application of LIF to experiments where conventional LIF measurements have been impossible or difficult, or where viewports are scarce.
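    Illustrative aside (not from the paper): the localization figure of merit is the full width at half maximum (FWHM) of the spatial optical response along the injection direction. A minimal sketch that extracts the FWHM from a sampled response curve, using a made-up Gaussian profile whose FWHM comes out near 1.4 mm:

```python
import numpy as np

def fwhm(z, response):
    """Full width at half maximum of a sampled response curve,
    found by linearly interpolating the two half-maximum crossings."""
    z = np.asarray(z, float)
    r = np.asarray(response, float)
    half = r.max() / 2.0
    above = np.where(r >= half)[0]
    i0, i1 = above[0], above[-1]
    left = np.interp(half, [r[i0 - 1], r[i0]], [z[i0 - 1], z[i0]])
    right = np.interp(half, [r[i1 + 1], r[i1]], [z[i1 + 1], z[i1]])
    return right - left

# Toy Gaussian response along the injection direction (mm); sigma = 0.6 mm
# gives FWHM ~ 1.4 mm, the localization quoted in the abstract.
z = np.linspace(-5, 5, 1001)
resp = np.exp(-z**2 / (2 * 0.6**2))
print(fwhm(z, resp))
```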