
    High-efficiency 20 GHz traveling wave tube development for space communications

    A 75 watt CW high efficiency helix TWT operating at 20 GHz was developed for satellite communication systems. The purpose was to extend the performance capabilities of helix TWTs by using recent technology developments. The TWT described is a unique design because high overall efficiency is obtained with a low perveance beam. In the past, low perveance designs resulted in low beam efficiencies. However, due to recent breakthroughs in diamond rod technology and in collector electrode materials, high efficiencies can now be achieved with low perveance beams. The advantage of a low perveance beam is a reduction in space charge within the beam, which translates to more efficient collector operation. In addition, this design incorporates textured graphite electrodes, which further enhance collector operation by suppressing backstreaming secondaries. The diamond supported helix circuit features low RF losses, high interaction impedance, and good thermal handling capability, and has been designed to compensate for the low perveance beam. A further feature of the tube is the use of a velocity taper in the output helix, which achieves low signal distortion while maintaining high efficiency.
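Beam perveance, the quantity this abstract turns on, is conventionally defined as K = I / V^(3/2). A minimal sketch of that calculation follows; the operating-point values are assumed for illustration and are not taken from the abstract.

```python
# Standard beam-perveance calculation: K = I / V**1.5, usually quoted in
# microperveance (K * 1e6, with I in amperes and V in volts).
# The operating point below is hypothetical, not the tube's actual design.

def perveance(beam_current_a: float, beam_voltage_v: float) -> float:
    """Return beam perveance K = I / V^(3/2) in A/V^1.5."""
    return beam_current_a / beam_voltage_v ** 1.5

I_beam = 0.025      # beam current, A (assumed)
V_beam = 11_000.0   # beam voltage, V (assumed)

micro_k = perveance(I_beam, V_beam) * 1e6
print(f"perveance = {micro_k:.3f} microperv")
```

Lower K means less space charge for a given power level, which is the mechanism the abstract credits for efficient collector operation.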

    Performance of a parallel code for the Euler equations on hypercube computers

    The performance of hypercubes was evaluated on a computational fluid dynamics problem, and the parallel-environment issues that must be addressed were considered, such as algorithm changes, implementation choices, programming effort, and programming environment. The evaluation focuses on a widely used fluid dynamics code, FLO52, which solves the two dimensional steady Euler equations describing flow around an airfoil. The code development experience is described, including interacting with the operating system, utilizing the message-passing communication system, and code modifications necessary to increase parallel efficiency. Results from two hypercube parallel computers (a 16-node iPSC/2 and a 512-node NCUBE/ten) are discussed and compared. In addition, a mathematical model of the execution time was developed as a function of several machine and algorithm parameters. This model accurately predicts the actual run times obtained and is used to explore the performance of the code in interesting but physically realizable regions of the parameter space. Based on this model, predictions about future hypercubes are made.
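An execution-time model of the kind the abstract describes typically splits each iteration into local compute work that scales as N/p and communication overhead that grows with the processor count. The sketch below uses invented coefficients, not values fitted to FLO52, to show the shape of such a model.

```python
import math

# Illustrative execution-time model for a 2-D grid code on p processors:
# compute time scales as N/p; communication adds a per-message startup
# term plus data proportional to the subdomain perimeter (~sqrt(N/p)).
# All coefficients are made up for illustration, not fitted to FLO52.

def run_time(n_cells: int, p: int, t_calc=1e-6, t_msg=1e-4, t_byte=1e-8):
    """Predicted time per iteration on p processors."""
    compute = t_calc * n_cells / p
    comm = (t_msg * math.log2(p) + t_byte * math.sqrt(n_cells / p)) if p > 1 else 0.0
    return compute + comm

n = 160 * 32  # cells in a FLO52-like 2-D mesh (assumed)
for p in (1, 16, 512):
    print(p, run_time(n, p))
```

With these coefficients the model exhibits the behavior such studies probe: speedup at moderate p, then communication dominating at large p, which is exactly the regime where "predictions about future hypercubes" become interesting.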

    Parametric test of a zirconium (4) oxide-polyacrylic acid dual layer hyperfiltration membrane with spacecraft washwater

    Performance data, consisting of solute rejections and product flux, were measured as functions of the operating parameters. These parameters and their ranges were pressure (500,000 N/m² to 700,000 N/m²), temperature (74 °C to 95 °C), velocity (1.6 m/sec to 10 m/sec), and concentration (up to 14x). Tests were carried out on analog washwater. Data presented include rejections of organic materials, ammonia, urea, and an assortment of ions. The membrane used was deposited in situ on a porcelain ceramic substrate.
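The solute rejections reported in such parametric tests follow the standard definition R = 1 − (product concentration / feed concentration). A quick sketch with made-up concentrations, not the paper's measured data:

```python
# Standard observed-rejection calculation for a membrane test:
# R = 1 - c_product / c_feed. Concentrations below are hypothetical.

def rejection(c_feed: float, c_product: float) -> float:
    """Observed solute rejection as a fraction (1.0 = complete rejection)."""
    return 1.0 - c_product / c_feed

# Hypothetical urea measurement: 100 mg/L in feed, 40 mg/L in product.
r = rejection(100.0, 40.0)
print(f"urea rejection: {r:.0%}")
```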

    The decision tree approach to classification

    A class of multistage decision tree classifiers is proposed and studied relative to the classification of multispectral remotely sensed data. The decision tree classifiers are shown to have the potential for improving both classification accuracy and computational efficiency. Dimensionality in pattern recognition is discussed, and two theorems on the lower bound of logic computation for multiclass classification are derived. The automatic or optimization approach is emphasized. Experimental results on real data are reported, which clearly demonstrate the usefulness of decision tree classifiers.
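The efficiency argument in the abstract rests on the multistage structure: each internal node tests only one feature, so a pixel's classification evaluates just the features along its path rather than all bands at once. A minimal two-stage sketch, with bands, thresholds, and class names invented for illustration:

```python
# Minimal two-stage binary decision tree for multispectral pixels.
# Each node tests a single band against a threshold, so only the
# features on the taken path are evaluated. All values are invented.

def classify(pixel):
    """pixel = (band1, band2) reflectances; returns a class label."""
    band1, band2 = pixel
    if band1 < 0.3:            # stage 1: separate water from land
        return "water"
    if band2 > 0.5:            # stage 2: split land by near-IR response
        return "vegetation"
    return "bare soil"

pixels = [(0.1, 0.2), (0.6, 0.7), (0.5, 0.3)]
print([classify(p) for p in pixels])
```

A single-stage classifier would compare every pixel against every class using all bands; the tree amortizes that work across stages, which is the computational gain the abstract claims.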

    Weighted False Discovery Rate Control in Large-Scale Multiple Testing

    The use of weights provides an effective strategy to incorporate prior domain knowledge in large-scale inference. This paper studies weighted multiple testing in a decision-theoretic framework. We develop oracle and data-driven procedures that aim to maximize the expected number of true positives subject to a constraint on the weighted false discovery rate. The asymptotic validity and optimality of the proposed methods are established. The results demonstrate that incorporating informative domain knowledge enhances the interpretability of results and the precision of inference. Simulation studies show that the proposed method controls the error rate at the nominal level, and the gain in power over existing methods is substantial in many settings. An application to a genome-wide association study is discussed.
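A simpler relative of the decision-theoretic procedure studied here is the classical p-value-weighted Benjamini–Hochberg step-up rule, in which each p-value is divided by its (mean-one) weight before the usual BH comparison. The sketch below implements that classical rule, not the paper's oracle procedure, with invented p-values and weights:

```python
# Classical weighted Benjamini-Hochberg procedure: divide each p-value
# by its mean-one normalized weight, then apply the BH step-up rule.
# This is a well-known baseline, not the paper's proposed procedure.

def weighted_bh(pvals, weights, alpha=0.05):
    m = len(pvals)
    mean_w = sum(weights) / m
    q = [p / (w / mean_w) for p, w in zip(pvals, weights)]  # weighted p-values
    order = sorted(range(m), key=lambda i: q[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if q[i] <= rank * alpha / m:
            k = rank                        # largest rank passing the step-up test
    return {order[i] for i in range(k)}     # indices of rejected hypotheses

pvals = [0.001, 0.009, 0.04, 0.2, 0.6]
weights = [2.0, 2.0, 0.5, 0.5, 1.0]        # prior knowledge favors the first two
print(weighted_bh(pvals, weights))
```

Up-weighting hypotheses the prior favors lowers their effective p-values, which is the basic mechanism by which informative domain knowledge buys power.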

    Optimal Dorfman Group Testing For Symmetric Distributions

    Full text link
    We study Dorfman's classical group testing protocol in a novel setting where individual specimen statuses are modeled as exchangeable random variables. We are motivated by infectious disease screening. In that case, specimens which arrive together for testing often originate from the same community, and so their statuses may exhibit positive correlation. Dorfman's protocol screens a population of n specimens for a binary trait by partitioning it into nonoverlapping groups, testing these, and only individually retesting the specimens of each positive group. The partition is chosen to minimize the expected number of tests under a probabilistic model of specimen statuses. We relax the typical assumption that these are independent and identically distributed and instead model them as exchangeable random variables. In this case, their joint distribution is symmetric in the sense that it is invariant under permutations. We give a characterization of such distributions in terms of a function q, where q(h) is the marginal probability that any group of size h tests negative. We use this interpretable representation to show that the set partitioning problem arising in Dorfman's protocol can be reduced to an integer partitioning problem and efficiently solved. We apply these tools to an empirical dataset from the COVID-19 pandemic. The methodology helps explain the unexpectedly high empirical efficiency reported by the original investigators.
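The objective being minimized has a simple closed form in terms of q: a group of size h > 1 costs one pooled test, plus h individual retests with probability 1 − q(h). A sketch of that expected-test count, using the i.i.d. special case q(h) = (1 − prev)^h with a made-up prevalence purely for illustration:

```python
# Expected test count for Dorfman's protocol given a partition into
# groups and the function q(h) = P(a group of h specimens tests negative).
# The q below is the i.i.d. special case with an assumed prevalence;
# the paper's point is that q can encode correlated (exchangeable) statuses.

def expected_tests(group_sizes, q):
    total = 0.0
    for h in group_sizes:
        if h == 1:
            total += 1.0                     # singletons are tested directly
        else:
            total += 1.0 + h * (1.0 - q(h))  # pooled test + possible retests
    return total

prev = 0.02                                  # assumed marginal prevalence
q = lambda h: (1.0 - prev) ** h              # i.i.d. special case of q

n = 100
for h in (5, 10, 25):
    print(h, round(expected_tests([h] * (n // h), q), 1))
```

Minimizing this sum over partitions of n is the integer partitioning problem the abstract reduces to; positive correlation raises q(h) above the i.i.d. value, lowering the expected count and helping explain the high empirical efficiency observed.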

    Estimating the Energy Loss in Pelton Turbine Casings by Transient CFD and Experimental Analysis

    Many consider the Pelton turbine a mature technology; nevertheless, the advent of Computational Fluid Dynamics (CFD) in recent decades has been a key driver of continued design development. Impulse turbine casings play a very important role, and experience dictates that the efficiency of a Pelton turbine is closely dependent on the success of keeping vagrant spray water away from the turbine runner and the water jet. Despite this overarching purpose, there are no standard design guidelines, and casing styles vary from manufacturer to manufacturer, often incorporating a considerable number of shrouds and baffles to direct the flow of water into the tailrace with minimal interference with the runner and jet. The present work incorporates the Reynolds-averaged Navier-Stokes (RANS) k-ɛ turbulence model and a two-phase Volume of Fluid (VOF) model, using the ANSYS® FLUENT® code to simulate the casing flow in a 2-jet horizontal axis Pelton turbine. The results of the simulation of two casing configurations are compared against flow visualisations and measurements obtained from a model established at the National Technical University of Athens. Further investigations were carried out in order to compare the absolute difference between the numerical runner efficiency and the experimental efficiency. In doing so, the various losses that occur during operation of the turbine can be appraised and a prediction of casing losses can be made. Firstly, the mechanical losses of the test rig are estimated to determine the experimental hydraulic efficiency. Following this, the numerical efficiency of the runner is ascertained by considering the upstream pipework losses and the aforementioned runner simulations, which are combined with previously published results of the 3D velocity profiles obtained from simulating the injectors. The results indicate that, of all the experimental cases tested, the casing losses can be approximated as negligible in the best case and ≈3% in the worst case.

    On the impact of communication complexity in the design of parallel numerical algorithms

    Get PDF
    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques from VLSI complexity theory are also applied, and algorithm-independent upper bounds on system performance are derived for several problems that are important to scientific computation.
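Hockney's approach characterizes a data transfer by an asymptotic rate r_inf and a half-performance size n_half, the transfer length at which half of r_inf is achieved. A sketch of that two-parameter cost model, with illustrative parameter values only:

```python
# Hockney-style data-movement cost model: moving n words costs a fixed
# startup plus n at the asymptotic rate, t = (n_half + n) / r_inf, so the
# achieved bandwidth is r(n) = r_inf / (1 + n_half / n).
# Parameter values below are illustrative, not from the paper.

def transfer_time(n_words, r_inf=1e8, n_half=1000):
    """Time to move n_words: t = (n_half + n_words) / r_inf."""
    return (n_half + n_words) / r_inf

def effective_rate(n_words, r_inf=1e8, n_half=1000):
    """Achieved bandwidth r(n) = n / t(n)."""
    return n_words / transfer_time(n_words, r_inf, n_half)

for n in (100, 1000, 100_000):
    print(n, effective_rate(n))
```

Short transfers are startup-dominated and achieve far less than r_inf, which is why communication granularity, not just volume, shapes the algorithm-level bounds such models support.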