
    Implementation of the LANS-alpha turbulence model in a primitive equation ocean model

    This paper presents the first numerical implementation and tests of the Lagrangian-averaged Navier-Stokes-alpha (LANS-alpha) turbulence model in a primitive equation ocean model. The ocean model in which we work is the Los Alamos Parallel Ocean Program (POP); we refer to POP with our implementation of LANS-alpha as POP-alpha. Two versions of POP-alpha are presented: the full POP-alpha algorithm is derived from the LANS-alpha primitive equations, but requires a nested iteration that makes it too slow for practical simulations; a reduced POP-alpha algorithm is proposed, which lacks the nested iteration and is two to three times faster than the full algorithm. The reduced algorithm does not follow from a formal derivation of the LANS-alpha model equations. Despite this, simulations with the reduced algorithm are nearly identical to those with the full algorithm, as judged by globally averaged temperature and kinetic energy and by snapshots of temperature and velocity fields. Both POP-alpha algorithms can run stably with longer timesteps than standard POP. Comparisons of the full and reduced POP-alpha algorithms are made within an idealized test problem that captures some aspects of the Antarctic Circumpolar Current, a problem in which baroclinic instability is prominent. Both POP-alpha algorithms produce statistics that resemble higher-resolution simulations of standard POP. A linear stability analysis shows that both the full and reduced POP-alpha algorithms benefit from the way the LANS-alpha equations take into account the effects of the small scales on the large: both algorithms (1) are stable; (2) effectively enlarge the Rossby radius; and (3) slow down Rossby and gravity waves. Comment: Submitted to J. Computational Physics March 21, 200
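    The operation at the heart of both POP-alpha algorithms is the relation between the rough (momentum-carrying) velocity and the smoothed (advecting) velocity through an inverse Helmholtz operator. As a rough illustration only (not the POP-alpha code, whose grids, boundary conditions, and iteration strategy differ), the sketch below applies the filter u = (1 - alpha^2 grad^2)^(-1) v spectrally to a 1-D periodic velocity field; the function and parameter names are hypothetical.

```python
import numpy as np

def helmholtz_smooth(v, dx, alpha):
    """Smoothed velocity u = (1 - alpha^2 d^2/dx^2)^{-1} v on a periodic 1-D grid.

    Illustrative sketch of the LANS-alpha filtering step only; the actual
    POP-alpha implementation works on the primitive-equation grid with
    boundary conditions and a nested (or reduced) iteration.
    """
    n = v.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)   # angular wavenumbers
    v_hat = np.fft.fft(v)
    u_hat = v_hat / (1.0 + (alpha * k) ** 2)    # invert the Helmholtz operator spectrally
    return np.real(np.fft.ifft(u_hat))

# Example: scales shorter than alpha are damped, large scales pass almost unchanged
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
v = np.sin(x) + 0.3 * np.sin(20.0 * x)          # large-scale + small-scale components
u = helmholtz_smooth(v, dx=x[1] - x[0], alpha=0.5)
```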

    Preparation and characterisation of irradiated waste eggshells as oil adsorbent

    An adsorption method was developed using a natural organic adsorbent for oil removal, exploiting its ability to bind oil molecules onto the adsorbent surface. In this study, waste chicken eggshells were irradiated at four doses (0.5 kGy, 1.0 kGy, 1.5 kGy, and 2.0 kGy) using a Gamma Cell irradiator. Three instruments were used for characterisation: scanning electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDX), and Fourier-transform infrared spectroscopy (FTIR). Adsorption experiments were conducted to calculate the sorption efficiency using different sample masses. The results showed that chicken eggshell powder irradiated at 2.0 kGy performed best as an oil adsorbent
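    The abstract does not state the sorption-efficiency formula; a common mass-based definition is the fraction of oil taken up relative to the oil initially present. The sketch below, with hypothetical variable names and example values, illustrates that kind of calculation for different adsorbent masses; it is an assumption, not the paper's procedure.

```python
def sorption_efficiency(oil_initial_g, oil_residual_g):
    """Percentage of oil removed, a commonly used definition (assumed here,
    not taken from the paper): (initial - residual) / initial * 100."""
    return 100.0 * (oil_initial_g - oil_residual_g) / oil_initial_g

# Hypothetical example values for several adsorbent masses (grams of eggshell powder)
for adsorbent_g, residual_g in [(0.5, 6.2), (1.0, 4.1), (2.0, 1.8)]:
    eff = sorption_efficiency(oil_initial_g=10.0, oil_residual_g=residual_g)
    print(f"{adsorbent_g:.1f} g adsorbent: {eff:.1f}% oil removed")
```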

    Building an Emulation Environment for Cyber Security Analyses of Complex Networked Systems

    Computer networks are undergoing phenomenal growth, driven by the rapidly increasing number of nodes constituting them. At the same time, the number of security threats on Internet and intranet networks is constantly growing, and the testing and experimentation of cyber defense solutions requires separate test environments that best emulate the complexity of a real system. Such environments support the deployment and monitoring of complex mission-driven network scenarios, thus enabling the study of cyber defense strategies under realistic and controllable traffic and attack scenarios. In this paper, we propose a methodology that combines network and security assessment techniques with cloud technologies to build an emulation environment with an adjustable degree of affinity with respect to actual reference networks or planned systems. As a byproduct, starting from a specific case study, we collected a dataset consisting of complete network traces comprising benign and malicious traffic, which is feature-rich and publicly available
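    The abstract describes the methodology only at a high level. Purely as an illustration of the kind of scenario description such an environment might consume (not the authors' actual tooling, format, or affinity metric), the sketch below encodes a small reference topology with per-node roles and a benign/malicious traffic mix as plain Python data.

```python
# Hypothetical scenario description; all names and fields are illustrative only.
scenario = {
    "nodes": [
        {"name": "web-srv",  "role": "server",      "subnet": "dmz"},
        {"name": "db-srv",   "role": "server",      "subnet": "internal"},
        {"name": "client-1", "role": "workstation", "subnet": "office"},
        {"name": "attacker", "role": "adversary",   "subnet": "external"},
    ],
    "links": [("external", "dmz"), ("dmz", "internal"), ("office", "internal")],
    "traffic": {
        "benign":    ["http: client-1 -> web-srv", "sql: web-srv -> db-srv"],
        "malicious": ["scan: attacker -> dmz", "exploit: attacker -> web-srv"],
    },
}

def affinity(scenario, reference_node_count):
    """Toy 'degree of affinity' placeholder: fraction of reference nodes emulated.
    The paper's actual notion of affinity is richer than this."""
    return min(1.0, len(scenario["nodes"]) / reference_node_count)

print(f"affinity vs. a 10-node reference network: {affinity(scenario, 10):.2f}")
```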

    Three regularization models of the Navier-Stokes equations

    We determine how differences in the treatment of the subfilter-scale physics affect the properties of the flow for three closely related regularizations of Navier-Stokes. The consequences for the applicability of the regularizations as SGS models are also shown by examining their effects on superfilter-scale properties. Numerical solutions of the Clark-alpha model are compared to two previously employed regularizations, LANS-alpha and Leray-alpha (at Re ~ 3300, Taylor Re ~ 790), and to a DNS. We derive the Karman-Howarth equation for both the Clark-alpha and Leray-alpha models. We confirm one of two possible scalings resulting from this equation for Clark as well as its associated k^(-1) energy spectrum. At subfilter scales, Clark-alpha possesses a similar total dissipation and characteristic time to reach a statistical turbulent steady state as Navier-Stokes, but exhibits greater intermittency. As an SGS model, Clark reproduces the energy spectrum and intermittency properties of the DNS. For the Leray model, increasing the filter width decreases the nonlinearity and the effective Re is substantially decreased. Even for the smallest value of alpha studied, Leray-alpha was inadequate as an SGS model. The LANS k^(+1) energy spectrum, consistent with its so-called "rigid bodies," precludes a reproduction of the large-scale energy spectrum of the DNS at high Re while achieving a large reduction in resolution. However, this same feature reduces its intermittency compared to Clark-alpha (which shares a similar Karman-Howarth equation). Clark is found to be the best approximation for reproducing the total dissipation rate and the energy spectrum at scales larger than alpha, whereas high-order intermittency properties for larger values of alpha are best reproduced by LANS-alpha. Comment: 21 pages, 8 figure
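    All three regularizations are built on the same Helmholtz smoothing of the velocity. Written in the standard forms from the alpha-model literature (which may differ in notation and minor terms from the paper's conventions), the shared filter and, for instance, the incompressible LANS-alpha and Leray-alpha momentum equations read:

```latex
% Helmholtz relation between the smoothed velocity u and the rough velocity v
% (shared by all three models)
v = (1 - \alpha^{2}\nabla^{2})\, u , \qquad \nabla\cdot u = 0 .

% LANS-alpha momentum equation (standard form)
\partial_t v + (u\cdot\nabla)\, v + \sum_j v_j \nabla u_j = -\nabla p + \nu \nabla^{2} v ,

% Leray-alpha momentum equation: advection by the smoothed velocity,
% without the extra LANS term
\partial_t v + (u\cdot\nabla)\, v = -\nabla p + \nu \nabla^{2} v .
```

    Clark-alpha instead closes the subfilter stress with a tensor-diffusivity (gradient-model) term built from the velocity gradients; see the paper for its exact form.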

    A Delayed-ACK Scheme for Performance Enhancement of Wireless LANs

    The IEEE 802.11 MAC protocol provides a reliable link layer using Stop & Wait ARQ. The cost of this high reliability is the overhead due to acknowledgement packets flowing in the direction opposite to the actual data flow. In this paper, the design of a new protocol is proposed as an enhancement of IEEE 802.11, with the aim of reducing this supplementary traffic overhead in order to increase the bandwidth available for actual data transmission. The performance of the proposed protocol is evaluated through comparison with IEEE 802.11 as well as with an SSCOP-based protocol. Results underline significant advantages of the proposed protocol over existing ones, thus confirming the value and potential of the approach
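    The protocol details are not given in the abstract. As a generic illustration of the delayed-ACK idea it builds on, acknowledging a block of frames rather than every frame, the receiver-side sketch below uses hypothetical names and thresholds and is not the paper's actual scheme.

```python
class DelayedAckReceiver:
    """Generic delayed-ACK receiver sketch: instead of acknowledging every data
    frame (as in 802.11 Stop & Wait ARQ), acknowledge once per block of frames
    or when a timeout expires. Illustrative only."""

    def __init__(self, block_size=4):
        self.block_size = block_size
        self.pending = []           # sequence numbers received but not yet acknowledged

    def on_data_frame(self, seq_no):
        self.pending.append(seq_no)
        if len(self.pending) >= self.block_size:
            return self._make_ack()
        return None                 # hold the ACK; this cuts reverse-direction overhead

    def on_timeout(self):
        return self._make_ack() if self.pending else None

    def _make_ack(self):
        ack = {"acked": list(self.pending)}   # one block ACK frame covers many data frames
        self.pending.clear()
        return ack

# Usage: one ACK frame covers four data frames instead of four separate ACKs
rx = DelayedAckReceiver(block_size=4)
acks = [rx.on_data_frame(n) for n in range(1, 6)]   # ACK emitted only on the fourth frame
```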

    Numerical Aerodynamic Simulation (NAS)

    The history of the Numerical Aerodynamic Simulation Program, which is designed to provide a leading-edge capability to computational aerodynamicists, is traced back to its origin in 1975. Factors motivating its development and examples of solutions to successively refined forms of the governing equations are presented. The NAS Processing System Network and each of its eight subsystems are described in terms of function and initial performance goals. A proposed usage allocation policy is discussed and some initial problems being readied for solution on the NAS system are identified

    A turbulence model for smoothed particle hydrodynamics

    The aim of this paper is to devise a turbulence model for the particle method Smoothed Particle Hydrodynamics (SPH) which makes few assumptions, conserves linear and angular momentum, satisfies a discrete version of Kelvin's circulation theorem, and is computationally efficient. These aims are achieved. Furthermore, the results from the model are in good agreement with the experimental and computational results of Clercx and Heijst for two-dimensional turbulence inside a box with no-slip walls. The model is based on a Lagrangian similar to that used for the Lagrangian averaged Navier Stokes (LANS) turbulence model, but with a different smoothed velocity. The smoothed velocity preserves the shape of the spectrum of the unsmoothed velocity, but reduces the magnitude at short length scales by an amount which depends on a parameter epsilon. We call this the SPH-epsilon model. The effectiveness of the model is indicated by the fact that the second-order velocity correlation function calculated using the smoothed velocity and a coarse resolution is in good agreement with a calculation using a resolution which is finer by a factor of 2, and therefore requires 8 times as much work to integrate to the same time. Comment: 34 pages, 11 figure
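    The abstract does not give the smoothed velocity explicitly. The sketch below illustrates an XSPH-style particle velocity smoothing of the kind commonly used in SPH (an assumption here, not necessarily the paper's exact definition), in which each particle's velocity is nudged toward a kernel-weighted average of its neighbours by an amount controlled by epsilon.

```python
import numpy as np

def xsph_smoothed_velocity(pos, vel, mass, rho, h, eps):
    """XSPH-style smoothed velocity (illustrative assumption, 1-D, O(N^2) neighbour loop):
        u_a = v_a + eps * sum_b m_b / rho_ab_bar * (v_b - v_a) * W(|x_a - x_b|, h)
    with a Gaussian kernel W and rho_ab_bar the mean density of the pair."""
    n = len(pos)
    u = vel.copy()
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            r = abs(pos[a] - pos[b])
            w = np.exp(-(r / h) ** 2) / (h * np.sqrt(np.pi))   # 1-D Gaussian kernel
            rho_ab = 0.5 * (rho[a] + rho[b])
            u[a] += eps * mass[b] / rho_ab * (vel[b] - vel[a]) * w
    return u

# Example: modest eps leaves large-scale motion intact while damping particle-scale noise
pos = np.linspace(0.0, 1.0, 50)
vel = np.sin(2 * np.pi * pos) + 0.1 * np.random.default_rng(0).standard_normal(50)
u = xsph_smoothed_velocity(pos, vel, mass=np.full(50, 0.02), rho=np.full(50, 1.0),
                           h=0.05, eps=0.5)
```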