47 research outputs found

    Solution of 3-dimensional time-dependent viscous flows. Part 3: Application to turbulent and unsteady flows

    A numerical scheme is developed for solving the time-dependent, three-dimensional compressible viscous flow equations, to be used as an aid in the design of helicopter rotors. To further investigate the numerical procedure, the computer code developed to solve an approximate form of the three-dimensional unsteady Navier-Stokes equations, employing a linearized block implicit technique in conjunction with a QR operator scheme, is tested. Results of calculations are presented for several two-dimensional boundary layer flows, including steady turbulent and unsteady laminar cases. A comparison of fourth-order and second-order solutions indicates that increased accuracy can be obtained without any significant increase in cost (run time). The results of the computations also indicate that the computer code can be applied to more complex flows, such as those encountered on rotating airfoils. The geometry of a symmetric NACA four-digit airfoil is considered and the appropriate geometrical properties are computed.
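
    As an illustration of the airfoil geometry mentioned above, the sketch below evaluates the standard NACA four-digit half-thickness distribution for a symmetric section. It is a minimal, self-contained example, not the report's actual code, and the NACA 0012 thickness ratio is only an assumed sample value.

```python
import numpy as np

def naca4_symmetric(t_c, n_points=101):
    """Half-thickness distribution of a symmetric NACA four-digit airfoil.

    t_c : maximum thickness as a fraction of chord (e.g. 0.12 for NACA 0012).
    Returns chordwise stations x/c and the half-thickness y_t/c.
    """
    x = np.linspace(0.0, 1.0, n_points)
    y_t = 5.0 * t_c * (0.2969 * np.sqrt(x)
                       - 0.1260 * x
                       - 0.3516 * x**2
                       + 0.2843 * x**3
                       - 0.1015 * x**4)
    return x, y_t

# Example: upper and lower surfaces of a (hypothetical) NACA 0012 section
x, y_t = naca4_symmetric(0.12)
upper, lower = y_t, -y_t
print(2 * y_t.max())   # maximum thickness, approximately 0.12 of chord
```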

    An Adaptive Method for Calculating Blow-Up Solutions

    Reactive-diffusive systems modeling physical phenomena in certain situations develop a singularity at a finite value of the independent variable, referred to as blow-up. Finding the blow-up time analytically is most often impossible, thus requiring a numerical determination of the value. The numerical methods often use a priori knowledge of the blow-up solution, such as monotonicity or self-similarity. For equations where such a priori knowledge is unavailable, ad hoc methods have been constructed. The object of this research is to develop a simple and consistent approach for finding the blow-up solution numerically without a priori knowledge and without resorting to other ad hoc methods. The proposed method allows the investigator to distinguish whether a singular or a non-singular solution exists on a given interval. The step size in the vicinity of a singular solution is adjusted automatically. The programming of the proposed method is simple and uses well-developed software for most of the auxiliary routines. The proposed numerical method is mainly concerned with the integration of nonlinear integral equations with Abel-type kernels arising from combustion problems, but may be used on similar equations from other fields. To demonstrate the flexibility of the proposed method, it is applied to ordinary differential equations with blow-up solutions and to ordinary differential equations which exhibit extremely stiff structure.
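
    The abstract does not give the scheme itself; the sketch below is only a hypothetical illustration of the general idea of automatic step-size reduction near a singularity, applied to the classic blow-up ODE y' = y^2, y(0) = 1 (exact blow-up time t = 1). It is not the thesis's method for Abel-kernel integral equations.

```python
def estimate_blowup(f, y0, t0=0.0, h0=1e-2, y_max=1e8, h_min=1e-14):
    """Crude adaptive-step Euler integration of y' = f(y) toward a blow-up.

    The step size shrinks as |f(y)| grows, so grid points accumulate near the
    singularity; integration stops once y exceeds y_max or the step size
    underflows, and the accumulated time approximates the blow-up time.
    Hypothetical illustration only.
    """
    t, y, h = t0, y0, h0
    while abs(y) < y_max and h > h_min:
        h = min(h0, 1.0 / (1.0 + abs(f(y))))   # smaller steps where the slope is large
        y += h * f(y)                          # explicit Euler step
        t += h
    return t

# y' = y**2 with y(0) = 1 blows up at the exact time t = 1
print(estimate_blowup(lambda y: y**2, 1.0))    # prints a value close to 1.0
```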

    On the dynamics of debris flows: case study Fjærland, Western Norway - a debris flow triggered by a natural dam breach

    Debris flows represent a major threat to human life and property. Due to their ability to entrain material and reach long run-outs, they have the potential to cause massive damage. There are many studies on debris flow dynamics; still, the topic presents a number of challenges in understanding the forces involved and in representing them numerically. In this respect the recent debris flow in Fjærland, Western Norway, is looked upon as a unique full-scale experiment, with its 1000 m height drop and 3000 m long run-out. The Fjærland case includes the breach of a moraine ridge damming a glacial lake, the additional water coming from the glacier, and the resulting flood. The study of this case illustrates how a water flood can evolve into a full debris flow through bulking. Even though the event started out as a flood of water, the study has revealed that the deposited materials and the degree of erosion also result from significant grain-to-grain contact. It is seen that treating such events as floods is not appropriate where the height drop is large and the bed material erodible. The primary objective of the thesis has been to provide a thorough documentation and description of the event. Through this work, several eyewitnesses have been interviewed, and detailed pre- and post-flow terrain models have been developed through laser scanning and photogrammetry for the purpose of estimating the volume involved in the debris flow. The same terrain data have been employed in a numerical model (BING) to simulate the dynamics of the flow. The model uses a Bingham rheology, but has been modified by Dr. Fabio DeBlasio of ICG to include Coulomb friction and entrainment. The expanded BING model predicts a run-out, erosion depth and volume encouragingly similar to what is seen in nature. However, many physical simplifications have had to be introduced, and several challenges remain for further study. The study reveals the need for a dynamical model which, depending on the contents and properties of the material involved, includes viscous, plastic and frictional forces, allows forces and properties to vary in time and space, and takes material entrainment into account. A most interesting topic for further study is entrainment; this phenomenon is not yet fully understood. The thesis tries to illustrate the phenomenon with the recently accumulated data. One of the findings of the thesis is the recognition of a feedback mechanism, where volume growth increases the entrainment of bed material, which in turn increases the volume of the flow. For erosion, the volume of the mass flow seems more important than slope angle, at least on slopes steeper than a certain value. This was also recognised in the numerical modelling, shown by an exponential increase in volume. This capacity for volume growth may explain debris flows with very far-reaching run-outs. There is historical evidence that similar debris flow events have happened in Fjærland twice during the last century. An event like the one that occurred on 8 May 2004 is also likely to happen again - in Fjærland as well as any other place where large volumes of water are released at high altitude. This study is therefore also important in the evaluation of hazards related to other glaciers, lakes and dams, as well as to landslide-triggered debris flows.
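
    The volume-entrainment feedback described above can be caricatured as erosion proportional to the current flow volume, which gives exponential growth with run-out distance. The sketch below is only that caricature, with hypothetical numbers; it is not the modified BING model.

```python
import numpy as np

def volume_growth(V0, k, x_end, n=1000):
    """Toy illustration of the volume-entrainment feedback.

    dV/dx = k * V : entrainment per metre of run-out is taken proportional to
    the current flow volume, so the volume grows exponentially. V0 is the
    initial volume (m^3), k a hypothetical entrainment coefficient (1/m),
    x_end the run-out distance (m).
    """
    x = np.linspace(0.0, x_end, n)
    V = V0 * np.exp(k * x)
    return x, V

x, V = volume_growth(V0=20_000, k=5e-4, x_end=3000)   # hypothetical numbers
print(f"volume after 3000 m of run-out: {V[-1]:.0f} m^3")
```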

    Determination of Optimal Air Pollution Control Strategies

    One of the important environmental problems facing urban officials today is the selection and enforcement of air pollutant emission control measures. These measures take two forms: long-term controls (multi-year legislation, such as the Federal new car emission standards through 1976) and short-term controls (action taken over a period of hours to days to avoid an air pollution episode). What is required for each form of control is a methodology for the systematic determination of the "best" strategy from among all those possible. In this thesis, a general theoretical framework for the determination of optimal air pollution control strategies is presented for both long-term and real-time controls. For the long-term control problem, it is assumed that emission control procedures are changed on a year-to-year basis. The problem considered is to determine the set of control measures that minimizes the total cost of control while maintaining specified levels of air quality each year. It is assumed that an airshed model exists which is capable of predicting pollutant concentrations as a function of source emissions in the airshed. Both single-year and multi-year problems are treated. Computational methods are developed based on mathematical programming techniques. The theory and computational methods developed are applied to the evaluation of long-term air pollution control strategies for the Los Angeles basin. Optimal strategies for the control of carbon monoxide, nitrogen dioxide and ozone for 1973 to 1975 in the Los Angeles basin have been obtained. The problem of determining real-time (short-term) air pollution control strategies for an urban airshed is posed as selecting those control measures, from among all possible, such that air quality is maintained at a certain level over a given time period and the total control imposed is a minimum. The real-time control is based on meteorological predictions made over a period of several hours to several days. A computational algorithm is developed for solving the class of control problems that results. Typical control measures include restrictions on the number of motor vehicles allowed on a freeway, reduced operation of power plants, and substitution of low-emission fuel (e.g. natural gas) for high-emission fuel (e.g. coal) in power plants. The control strategy is assumed to be enforced over a certain period, say one hour, based on meteorological predictions made at the beginning of the period. The strategy for each time period could be determined by an air pollution control agency by means of a computer implementing the algorithm presented. The theory is applied to a hypothetical implementation of the optimal control on September 29, 1969 in the Los Angeles basin.
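
    The single-year long-term problem described above has the structure of a mathematical program: minimize total control cost subject to the airshed model keeping concentrations within standards. The sketch below poses a toy linear-programming version with hypothetical costs and response coefficients; it is not the thesis's formulation or data.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical single-year problem with 3 candidate control measures and
# 2 monitored pollutants. cost[i] is the annual cost of measure i at full
# strength; reduction[p, i] is the predicted concentration reduction of
# pollutant p per unit of measure i (from an airshed model); needed[p] is
# the reduction required to meet the air-quality standard.
cost = np.array([4.0, 2.5, 6.0])            # $M per year, hypothetical
reduction = np.array([[0.10, 0.04, 0.20],   # pollutant 1 (e.g. CO)
                      [0.02, 0.08, 0.05]])  # pollutant 2 (e.g. NO2)
needed = np.array([0.15, 0.06])             # required concentration reductions

# Minimize cost @ x  subject to  reduction @ x >= needed  and  0 <= x <= 1.
res = linprog(c=cost,
              A_ub=-reduction, b_ub=-needed,   # sign flip turns >= into <= form
              bounds=[(0.0, 1.0)] * len(cost),
              method="highs")
print(res.x, res.fun)   # optimal control levels and minimum total cost
```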

    Colloquium numerical treatment of integral equations


    Real time control of nonlinear dynamic systems using neuro-fuzzy controllers

    The problem of real-time control of a nonlinear dynamic system using intelligent control techniques is considered. The current trend is to incorporate neural networks and fuzzy logic into adaptive control strategies. The focus of this work is to investigate the current neuro-fuzzy approaches from the literature and adapt them for a specific application. In order to achieve this objective, an experimental nonlinear dynamic system is considered. The motivation for this comes from the desire to solve practical problems and to create a test-bed which can be used to test various control strategies. The nonlinear dynamic system considered here is an unstable balance beam system that contains two fluid tanks, one at each end; balance is achieved by pumping the fluid back and forth between the tanks. A popular approach called ANFIS (Adaptive Network-based Fuzzy Inference Systems), which combines the structure of fuzzy logic controllers with the learning aspects of neural networks, is taken as the basis for developing novel techniques, because it is considered to be one of the most general frameworks for developing adaptive controllers. However, in the proposed new method, called Generalized Network-based Fuzzy Inferencing Systems (GeNFIS), more conventional fuzzy schemes are used for the consequent part instead of Sugeno-type rules. Moreover, in contrast to ANFIS, which uses a full set of rules, GeNFIS uses only a limited number of rules based on certain expert knowledge. GeNFIS is tested on the balance beam system, both in a real-time experiment and in simulation, and is found to perform better than a comparable ANFIS under supervised learning. Based on these results, several modifications of GeNFIS are considered, for example, synchronous defuzzification through triangular as well as bell-shaped membership functions. Another modification involves the simultaneous use of Sugeno-type as well as conventional fuzzy schemes for the consequent part, in an effort to create a more flexible framework. Results of testing different versions of GeNFIS on the balance beam system are presented.
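
    For readers unfamiliar with the Sugeno-type inference that ANFIS builds on, the sketch below shows one forward pass of a zero-order Sugeno fuzzy system with Gaussian memberships and product firing strengths. It is a generic illustration with hypothetical rules, not GeNFIS or the controller used in the thesis.

```python
import numpy as np

def gauss_mf(x, c, sigma):
    """Gaussian membership function."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def sugeno_infer(inputs, rules):
    """One forward pass of a zero-order Sugeno-type fuzzy inference system.

    inputs : array of crisp input values (e.g. beam angle, angular rate).
    rules  : list of (centers, sigmas, consequent) tuples; centers/sigmas give
             one Gaussian membership per input, consequent is a crisp output.
    The output is the firing-strength-weighted average of the consequents.
    """
    strengths, consequents = [], []
    for centers, sigmas, consequent in rules:
        mu = gauss_mf(inputs, centers, sigmas)   # membership degree of each input
        strengths.append(np.prod(mu))            # product T-norm firing strength
        consequents.append(consequent)
    strengths = np.array(strengths)
    return np.dot(strengths, consequents) / np.sum(strengths)

# Hypothetical two-rule controller for a beam-balance error signal
rules = [(np.array([-1.0, 0.0]), np.array([0.5, 0.5]), +1.0),   # tilted left  -> pump right
         (np.array([+1.0, 0.0]), np.array([0.5, 0.5]), -1.0)]   # tilted right -> pump left
print(sugeno_infer(np.array([-0.4, 0.1]), rules))
```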

    Quadrature, Interpolation and Observability

    Methods of interpolation and quadrature have been used for over 300 years. Improvements in the techniques have been made by many, most notably by Gauss, whose technique applied to polynomials is referred to as Gaussian Quadrature. Stieltjes extended Gauss's method to certain non-polynomial functions as early as 1884. Conditions that guarantee the existence of quadrature formulas for certain collections of functions were studied by Tchebycheff, and his work was extended by others. Today, a class of functions which satisfies these conditions is called a Tchebycheff System. This thesis contains the definition of a Tchebycheff System, along with the theorems, proofs, and definitions necessary to guarantee the existence of quadrature formulas for such systems. Solutions of discretely observable linear control systems are of particular interest, and observability with respect to a given output function is defined. The output function is written as a linear combination of a collection of orthonormal functions. Orthonormal functions are defined, and their properties are discussed. The technique for evaluating the coefficients in the output function involves evaluating the definite integral of functions which can be shown to form a Tchebycheff system. Therefore, quadrature formulas for these integrals exist, and in many cases are known. The technique given is useful in cases where the method of direct calculation is unstable. The condition number of a matrix is defined and shown to be an indication of the degree to which perturbations in the data affect the accuracy of the solution. In special cases, the number of data points required for direct calculation is the same as the number required by the method presented in this thesis, but in other cases the method is shown to require more data points. A lower bound for the number of data points required is given.
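
    As a concrete (if much simpler) instance of recovering output-function coefficients by quadrature, the sketch below expands a function in orthonormal Legendre polynomials on [-1, 1] and evaluates the coefficient integrals by Gauss-Legendre quadrature. It illustrates the generic idea only and is not the thesis's Tchebycheff-system construction.

```python
import numpy as np
from numpy.polynomial import legendre

def orthonormal_legendre_coeffs(y, degree, n_nodes=None):
    """Coefficients of y(t) in the orthonormal Legendre basis on [-1, 1].

    c_k = integral over [-1, 1] of y(t) * Phat_k(t), where
    Phat_k(t) = sqrt((2k + 1) / 2) * P_k(t). The integrals are approximated
    by Gauss-Legendre quadrature, which is exact when y is a polynomial of
    sufficiently low degree.
    """
    n_nodes = n_nodes or degree + 1
    nodes, weights = legendre.leggauss(n_nodes)
    samples = y(nodes)
    coeffs = []
    for k in range(degree + 1):
        p_k = legendre.Legendre.basis(k)(nodes)        # P_k evaluated at the nodes
        p_k_hat = np.sqrt((2 * k + 1) / 2.0) * p_k     # orthonormal scaling
        coeffs.append(np.sum(weights * samples * p_k_hat))
    return np.array(coeffs)

# Example: y(t) = t**2 has nonzero coefficients only for k = 0 and k = 2
print(orthonormal_legendre_coeffs(lambda t: t**2, degree=3))
```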

    MATLAB

    This excellent book represents the final part of a three-volume series on MATLAB-based applications in almost every branch of science. The book consists of 19 insightful articles, and readers will find the results very useful to their work. It is organized in three parts: the first is devoted to mathematical methods in the applied sciences using MATLAB, the second to MATLAB applications of general interest, and the third to MATLAB for educational purposes. This collection of high-quality articles covers a large range of professional fields and can be used for science as well as for various educational purposes.

    Strategies for teaching engineering mathematics

    This thesis is an account of experiments in the teaching of mathematics to engineering undergraduates conducted over twenty years against a background of changing intake ability, varying output requirements and increasing restrictions on the formal contact time available. The aim has been to improve the efficiency of the teaching-learning process. The main areas of experimentation have been the integration of numerical and analytical methods in the syllabus, the incorporation of case studies into the curriculum and the use of micro-based software to enhance the teaching process. Special attention is paid to courses in Mathematical Engineering and their position in the spectrum of engineering disciplines. A core curriculum in mathematics for undergraduate engineers is proposed and details of its implementation are provided. The roles of case studies and micro-based software are highlighted. The provision of a mathematics learning resource centre is considered a necessary feature of the implementation of the proposed course. Finally, suggestions for further research are made.