    Viscoelastic Suppression of Gravity-Driven Counterflow Instability

    Attempts to achieve "top kill" of actively flowing oil wells by insertion of dense drilling "muds", i.e., slurries of dense minerals, from above will fail if the Kelvin-Helmholtz instability in the gravity-driven counterflow produces turbulence that breaks the denser fluid up into small droplets. Here we estimate the droplet size to be sub-mm for fast flows and suggest the addition of a shear-thickening polymer to suppress turbulence. Laboratory experiments show a progression from droplet formation to complete turbulence suppression at the relevant high velocities, illustrating rich new physics made accessible by using a shear-thickening liquid in gravity-driven counter-streaming flows. (Comment: 11 pages, 2 figures; revised in response to referees' comments)
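    The sub-mm droplet estimate can be illustrated with a standard critical-Weber-number argument for turbulent breakup (a hedged sketch; the plug-in values below are illustrative assumptions, not figures from the paper). Breakup proceeds until the turbulent dynamic pressure no longer overcomes the capillary pressure:

\[
\mathrm{We}_c \;=\; \frac{\rho\, u'^2\, d_{\max}}{\sigma} \;\sim\; 1
\qquad\Longrightarrow\qquad
d_{\max} \;\sim\; \frac{\sigma}{\rho\, u'^2}.
\]

    Taking, purely for illustration, an oil-mud interfacial tension of sigma ~ 0.03 N/m, a slurry density of rho ~ 2x10^3 kg/m^3, and turbulent velocity fluctuations of u' ~ 1 m/s gives d_max ~ 15 microns, comfortably sub-mm.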

    Homotopy methods for constraint relaxation in unilevel reliability based design optimization

    Reliability based design optimization is a methodology for finding optimized designs that are characterized by a low probability of failure. The main objective in reliability based design optimization is to minimize a merit function while satisfying the reliability constraints. The reliability constraints are constraints on the probability of failure corresponding to each of the failure modes of the system, or a single constraint on the system probability of failure. The probability of failure is usually estimated by performing a reliability analysis. During the last few years, a variety of techniques have been developed for reliability based design optimization. Traditionally, these have been formulated as a double-loop (nested) optimization problem. The upper level optimization loop generally involves optimizing a merit function subject to reliability constraints, and the lower level optimization loop(s) compute the probabilities of failure corresponding to the failure mode(s) that govern the system failure. This formulation is, by nature, computationally intensive. A new efficient unilevel formulation for reliability based design optimization was developed by the authors in earlier studies. In this formulation, the lower level optimization (evaluation of reliability constraints in the double loop formulation) was replaced by its corresponding first order Karush-Kuhn-Tucker (KKT) necessary optimality conditions at the upper level optimization. It was shown that the unilevel formulation is computationally equivalent to solving the original nested optimization if the lower level optimization is solved by numerically satisfying the KKT conditions (which is typically the case), and that the two formulations are mathematically equivalent under constraint qualification and generalized convexity assumptions. In the unilevel formulation, the KKT conditions of the inner optimization for each probabilistic constraint evaluation are imposed at the system level as equality constraints. Most commercial optimizers are numerically unreliable when applied to problems with many equality constraints. In this investigation an optimization framework for reliability based design using the unilevel formulation is developed. Homotopy methods are used for constraint relaxation and to obtain a relaxed feasible design. A series of optimization problems is solved as the relaxed optimization problem is transformed via a homotopy to the original problem. A heuristic scheme is employed in this paper to update the homotopy parameter. The proposed algorithm is illustrated with example problems.
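    A minimal sketch of the homotopy relaxation idea in Python: the equality constraints (standing in here for the imposed KKT residuals) are relaxed so that the initial design is trivially feasible, then tightened toward the original problem as the homotopy parameter runs from 0 to 1, warm-starting each solve. The merit function f, residual h, and the fixed-step parameter update are illustrative assumptions, not the authors' formulation or test problems.

```python
# Hypothetical toy problem: minimize f subject to h(x) = 0, reached via the
# homotopy h(x) = (1 - t) * h(x0), which the starting point satisfies at t = 0.
import numpy as np
from scipy.optimize import minimize

def f(x):                                   # merit function (illustrative)
    return (x[0] - 2.0)**2 + (x[1] - 1.0)**2

def h(x):                                   # equality residual (illustrative)
    return np.array([x[0]**2 + x[1]**2 - 1.0])

def homotopy_solve(x0, n_steps=10):
    x, h0 = np.asarray(x0, dtype=float), h(x0)
    # fixed homotopy steps stand in for the paper's heuristic update scheme
    for t in np.linspace(0.0, 1.0, n_steps + 1)[1:]:
        cons = {"type": "eq", "fun": lambda x, t=t: h(x) - (1.0 - t) * h0}
        x = minimize(f, x, method="SLSQP", constraints=[cons]).x  # warm start
    return x

print(homotopy_solve([0.0, 0.0]))           # -> a point on the unit circle
```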

    Feature Reduction using a Singular Value Decomposition for the Iterative Guided Spectral Class Rejection Hybrid Classifier

    Feature reduction in a remote sensing dataset is often desirable to decrease the processing time required to perform a classification and to improve overall classification accuracy. This work introduces a feature reduction method based on the singular value decomposition (SVD). The technique was applied to training data from two multitemporal datasets of Landsat TM/ETM+ imagery acquired over forested areas in Virginia, USA and Rondonia, Brazil. Subsequent parallel iterative guided spectral class rejection (pIGSCR) forest/nonforest classifications were performed to determine the quality of the feature reduction. The classifications of the Virginia data were five times faster using SVD-based feature reduction without affecting the classification accuracy. Feature reduction using the SVD was also compared to feature reduction using principal components analysis (PCA). The highest average accuracies for the Virginia dataset (88.34%) and for the Rondonia dataset (93.31%) were achieved using the SVD. The results presented here indicate that SVD-based feature reduction can produce statistically significantly better classifications than PCA.
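    A minimal sketch of SVD-based feature reduction, assuming the reduced features are projections of each pixel's band vector onto the leading right-singular vectors of the training matrix; svd_reduce, X, and k are illustrative names, not the paper's implementation.

```python
import numpy as np

def svd_reduce(X, k):
    """X: (n_samples, n_bands) training data; returns (n_samples, k) features."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T        # project onto the top-k right singular vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))    # e.g., stacked multitemporal Landsat bands
Z = svd_reduce(X, k=4)
print(Z.shape)                     # (1000, 4)
```

    Note that centering the bands before the decomposition would make this projection coincide with PCA; applying the SVD to the raw (uncentered) matrix is one way the two reductions can differ.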

    An Adaptive Noise Filtering Algorithm for AVIRIS Data with Implications for Classification Accuracy

    This paper describes a new algorithm used to adaptively filter a remote sensing dataset based on signal-to-noise ratios (SNRs) once the maximum noise fraction (MNF) has been applied. The algorithm uses Hermite splines to calculate the approximate area underneath the SNR curve as a function of band number, and that area is used to place bands into “bins” with other bands having similar SNRs. A median filter with a variable-sized kernel is then applied to each band, with the same kernel size used for every band in a particular bin. The proposed adaptive filters are applied to a hyperspectral image generated by the AVIRIS sensor, and results are given for the identification of three different pine species located within the study area. The adaptive filtering scheme improves image quality, as shown by estimated SNRs, and classification accuracies improved by more than 10% on the sample study area, indicating that the proposed methods improve image quality and thereby aid in species discrimination.
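    A minimal sketch of one plausible reading of the SNR-binning step: fit a Hermite spline to SNR versus band number, integrate it to get the area under the curve, bin bands by that area, and median-filter each band with a kernel size shared across its bin. The binning rule, kernel sizes, and all names are illustrative assumptions, not the paper's calibrated choices.

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline
from scipy.ndimage import median_filter

def bin_bands_by_snr(snr, n_bins=4):
    bands = np.arange(len(snr))
    dydx = np.gradient(snr, bands)                 # slopes for the Hermite fit
    spline = CubicHermiteSpline(bands, snr, dydx)
    # approximate area under the SNR curve up to each band
    area = np.array([spline.integrate(0, b) for b in bands])
    edges = np.linspace(area.min(), area.max(), n_bins + 1)
    return np.clip(np.digitize(area, edges) - 1, 0, n_bins - 1)

def adaptive_filter(cube, snr, kernels=(7, 5, 3, 1)):
    """cube: (bands, rows, cols); one kernel size per SNR bin (1 = no-op)."""
    bins = bin_bands_by_snr(snr, n_bins=len(kernels))
    out = np.empty_like(cube)
    for b in range(cube.shape[0]):
        k = kernels[bins[b]]                       # illustrative bin-to-kernel map
        out[b] = cube[b] if k == 1 else median_filter(cube[b], size=k)
    return out
```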

    Reduced Sampling for Construction of Quadratic Response Surface Approximations Using Adaptive Experimental Design

    The purpose of this paper is to reduce the computational complexity per step from O(n^2) to O(n) for optimization based on quadratic surrogates, where n is the number of design variables. Applying nonlinear optimization strategies directly to complex multidisciplinary systems can be prohibitively expensive when the simulation codes are costly to run. Increasingly, response surface approximations, and specifically quadratic approximations, are being integrated with nonlinear optimizers in order to reduce the CPU time required for the optimization of complex multidisciplinary systems. For evaluation by the optimizer, response surface approximations provide a computationally inexpensive, lower fidelity representation of the system performance. The curse of dimensionality is a major drawback in the implementation of these approximations, as the amount of required data grows quadratically with the number n of design variables. In this paper a novel technique to reduce the required sampling from O(n^2) to O(n) points is presented. The technique uses prior information to approximate the eigenvectors of the Hessian matrix of the response surface approximation, so that only the eigenvalues need to be computed by response surface techniques. The technique is implemented in a sequential approximate optimization algorithm and applied to engineering problems of varying size and characteristics. Results demonstrate that the data required per step can be reduced from O(n^2) to O(n) points without significantly compromising the performance of the optimization algorithm. A reduction in the number of system analyses required per step from O(n^2) to O(n) is significant, and increasingly so as n grows. The novelty lies in how only O(n) system analyses can be used to approximate a Hessian matrix whose estimation normally requires O(n^2) system analyses.
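    A minimal sketch of the O(n) idea: given approximate Hessian eigenvectors V (the prior information), estimate each eigenvalue with a directional second difference, which costs two function evaluations per direction, then assemble H ≈ V diag(λ) Vᵀ. The function names and the toy quadratic are illustrative, not the paper's test problems.

```python
import numpy as np

def hessian_from_eigvecs(f, x, V, step=1e-3):
    """O(n) directional samples instead of O(n^2) full finite differences."""
    fx = f(x)
    lam = np.empty(V.shape[1])
    for i in range(V.shape[1]):
        v = V[:, i]                            # approximate eigenvector
        lam[i] = (f(x + step * v) - 2.0 * fx + f(x - step * v)) / step**2
    return V @ np.diag(lam) @ V.T              # reassembled Hessian estimate

# toy check: a quadratic with known Hessian, exact eigenvectors as the "prior"
A = np.array([[4.0, 1.0], [1.0, 3.0]])
f = lambda x: 0.5 * x @ A @ x
_, V = np.linalg.eigh(A)
print(hessian_from_eigvecs(f, np.zeros(2), V))   # ~ A
```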

    KKT conditions satisfied using adaptive neighboring in hybrid cellular automata for topology optimization

    The hybrid cellular automaton (HCA) method is a biologically inspired algorithm capable of topology synthesis that was developed to simulate the behavior of the bone functional adaptation process. In this algorithm, the design domain is divided into cells with some communication property among neighbors. Local evolutionary rules, obtained from classical control theory, iteratively set the values of the design variables in order to minimize the local error between a field variable and a corresponding target value. Karush-Kuhn-Tucker (KKT) optimality conditions have been derived to determine the expressions for the field variable and its target. While averaging techniques mimicking intercellular communication have been used to mitigate numerical instabilities such as checkerboard patterns and mesh dependency, questions have been raised as to whether the KKT conditions are fully satisfied in the final topologies. Furthermore, the averaging procedure might cancel or attenuate the error between the field variable and its target. Several examples are presented showing that HCA converges to different final designs for different neighborhood configurations or averaging schemes. Although it has been claimed that these final designs are optimal, this might not be true in a precise mathematical sense: the averaging procedure introduces a mathematical inconsistency that has to be addressed. In this work, a new adaptive neighboring scheme is employed that weights the influence of a cell's neighbors with a function that decreases to zero over time. When the weighting function reaches zero, the algorithm satisfies the aforementioned optimality conditions. Thus, the HCA algorithm retains the benefits of utilizing neighborhood information while also obtaining an optimal solution.
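    A minimal sketch of the decaying-neighborhood idea in an HCA-style update: each cell moves its design variable toward the target field value, averaging the field over its neighbors with a weight that decays to zero so that late iterations enforce the purely local rule. The 1-D setting, the exponential decay, and all names are illustrative assumptions.

```python
import numpy as np

def hca_step(x, field, target, it, gain=0.1, decay=0.05):
    w = np.exp(-decay * it)                     # neighbor weight -> 0 over time
    left, right = np.roll(field, 1), np.roll(field, -1)
    field_avg = (field + w * (left + right)) / (1.0 + 2.0 * w)
    x_new = x + gain * (target - field_avg)     # local proportional control rule
    return np.clip(x_new, 0.0, 1.0)             # keep densities in [0, 1]

# toy usage: drive a 1-D design field toward a uniform target value
x = np.full(50, 0.5)
for it in range(300):
    x = hca_step(x, field=x, target=0.3, it=it)
```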

    Postoperative Delirium Prevention in the Older Adult: An Evidence-Based Process Improvement Project

    Postoperative delirium is a major complication in hospitalized older adults. Implementation of a screening tool and an evidence-based delirium-prevention protocol on a surgical unit increased nurses’ knowledge regarding delirium, improved identification of delirium, and produced alterations in medical treatment leading to positive patient outcomes.

    Development of an Advanced Recycle Filter Tank Assembly for the ISS Urine Processor Assembly

    Recovering water from urine is a process that is critical to supporting larger crews for extended missions aboard the International Space Station. Urine is collected, preserved, and stored for processing into water and a concentrated brine solution that is highly toxic and must be contained to avoid exposing the crew. The brine solution is collected in an accumulator tank, called a Recycle Filter Tank Assembly (RFTA), that must be replaced monthly and disposed of in order to continue urine processing operations. To reduce resupply requirements, a new accumulator tank is being developed that can be emptied on orbit into existing ISS waste tanks. The new tank, called the Advanced Recycle Filter Tank Assembly (ARFTA), is a metal bellows tank that is designed to collect concentrated brine solution and to be emptied by applying pressure to the bellows. This paper discusses the requirements and design of the ARFTA as well as its integration into the urine processor assembly.

    Pb isotopic variability in melt inclusions from the EMI–EMII–HIMU mantle end-members and the role of the oceanic lithosphere

    Melt inclusions from four individual lava samples representing the HIMU (Mangaia Island), EMI (Pitcairn Island) and EMII (Tahaa Island) end-member components have Pb isotopic compositions that are more heterogeneous than those defined by the erupted lavas on each island. The broad linear trend in ^(207)Pb/^(206)Pb–^(208)Pb/^(206)Pb space produced by the melt inclusions from the Mangaia, Tahaa and Pitcairn samples reproduces the entire trend defined by the Austral chain, the Society islands and the Pitcairn island and seamount groups. The inclusions preserve a record of melt compositions of far greater isotopic diversity than that sampled in whole rock basalts. These results can be explained by mixing of a common depleted component with the HIMU, EMI and EMII lavas, respectively. We favor a model that considers the oceanic lithosphere to be that common component. We suggest that the Pb isotopic compositions of the melt inclusions reflect wall rock reaction of HIMU, EMI and EMII melts during their percolation through the oceanic lithosphere. Under these conditions, the localized rapid crystallization of olivine from primitive basalt near the reaction zone would allow the entrapment of melt inclusions with distinct isotopic compositions.
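    The mixing interpretation can be made concrete with the standard two-component isotope mixing relation (generic symbols, not values from this study): for a mass fraction f of an end-member melt A mixed with a lithosphere-derived melt B, with Pb concentrations C_A, C_B and isotope ratios R_A, R_B,

\[
R_{\mathrm{mix}} \;=\; \frac{f\,C_A\,R_A + (1-f)\,C_B\,R_B}{f\,C_A + (1-f)\,C_B},
\]

    which traces a hyperbola between the end-member ratios; inclusions trapped at different stages of wall-rock reaction would sample different values of f.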