71 research outputs found

    Real-time fault identification for developmental turbine engine testing

    Hundreds of individual sensors produce an enormous amount of data during developmental turbine engine testing. The challenge is to ensure the validity of the data and to identify data and engine anomalies in a timely manner. An automated data validation, engine condition monitoring, and fault identification process that emulates typical engineering techniques has been developed for developmental engine testing. An automated data validation and fault identification approach employing engine cycle-matching principles is described. Engine cycle-matching is automated by using an adaptive nonlinear component-level computer model capable of simulating both steady-state and transient engine operation. Automated steady-state, transient, and real-time model calibration processes are also described. The model enables automation of traditional data validation, engine condition monitoring, and fault identification procedures. A distributed parallel computing approach enables the entire process to operate in real time. The result is a capability to detect data and engine anomalies in real time during developmental engine testing. The approach is shown to be successful in detecting and identifying sensor anomalies as they occur and distinguishing these anomalies from variations in component and overall engine aerothermodynamic performance.
    The component-level model-based engine performance and fault identification technique of the present research is capable of: identifying measurement errors on the order of 0.5 percent (e.g., sensor bias, drift, level shift, noise, or poor response) in facility fuel flow, airflow, and thrust measurements; identifying measurement errors in engine aerothermodynamic measurements (rotor speeds, gas path pressures and temperatures); identifying measurement errors in engine control sensors (e.g., leaking/biased pressure sensor, slowly responding pressure measurement) and variable geometry rigging (e.g., misset guide vanes or nozzle area) that would invalidate a test or series of tests; identifying abrupt faults (e.g., faults due to domestic object damage, foreign object damage, and control anomalies); and identifying slow faults (e.g., component or overall engine degradation, and sensor drift). Specifically, the technique is capable of identifying small changes in compressor (or fan) performance on the order of 0.5 percent, and of being easily extended to diagnose secondary failure modes and to verify any modeling assumptions that may arise for developmental engine tests (e.g., increase in turbine flow capacity, inaccurate measurement of facility bleed flows, horsepower extraction, etc.). The component-level model-based engine performance and fault identification method developed in the present work brings together features which individually and collectively advance the state of the art. These features fall into three categories: advancements to effectively quantify off-nominal behavior; advancements to provide a fault detection capability that is practical from the viewpoint of analysis, implementation, tuning, and design; and advancements to provide a real-time fault detection capability that is reliable and efficient.
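
    The cycle-matching approach above flags anomalies by comparing measurements against an adaptive model's predictions. A minimal sketch of such a residual check, with hypothetical sensor names and the 0.5 percent figure reported above used as an illustrative threshold (this is not the actual implementation):

```python
# Hypothetical sketch of model-based residual checking: compare each
# measured value against the model prediction and flag sensors whose
# relative residual exceeds a tolerance. Names and values are invented.

def detect_sensor_faults(measured, predicted, rel_threshold=0.005):
    """Return sensors whose reading deviates from the model prediction
    by more than rel_threshold (0.5 percent here, the error magnitude
    the abstract reports the technique can identify)."""
    faults = {}
    for name, value in measured.items():
        ref = predicted[name]
        residual = abs(value - ref) / abs(ref)
        if residual > rel_threshold:
            faults[name] = residual
    return faults

# Toy usage: fuel flow biased by 1 percent, thrust within tolerance.
measured = {"fuel_flow": 1.010, "thrust": 1.002}
predicted = {"fuel_flow": 1.000, "thrust": 1.000}
print(detect_sensor_faults(measured, predicted))  # flags fuel_flow only
```

    In the real system this comparison would run continuously against the transient component-level model, rather than on a single snapshot.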

    A graphical system for parallel software development

    PhD Thesis. Parallel architectures have become popular in the race to meet an increasing demand for computational power. While the benefits of parallel computing - the performance improvements due to simultaneous computations - are clear, achieving these benefits has proved difficult. The wide variety of parallel architectures has led to a similarly diverse range of parallel languages and methods for parallel programming, many of which feature complicated architecture-specific language mechanisms. The lack of good tools to assist in the development of parallel software has compounded the problem of parallel programming being limited to a field which is both specialist and fragmented. This thesis investigates techniques for the graphical specification of parallel programs, using an architecture-independent graph-based notation representing the design of the program, combined with conventional sequential languages. Automatic code generation is used to translate the graph program into executable code suitable for different parallel architectures. To overcome the differing performance characteristics of parallel architectures, methods for the graphical adjustment of granularity are proposed and investigated, and an encompassing parallel design environment is presented. Engineering and Physical Sciences Research Council.
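
    The graph-based notation and granularity adjustment described above can be illustrated with a toy task graph. The class, method names, and merge rule below are assumptions for illustration, not the thesis's actual notation:

```python
# Illustrative sketch: a parallel program as an architecture-independent
# task graph, plus a simple granularity adjustment that fuses two tasks
# into one coarser sequential task. All names here are hypothetical.

class TaskGraph:
    def __init__(self):
        self.tasks = {}   # task name -> callable body
        self.edges = []   # (producer, consumer) dependencies

    def add_task(self, name, fn):
        self.tasks[name] = fn

    def add_edge(self, src, dst):
        self.edges.append((src, dst))

    def merge(self, a, b, merged_name):
        """Coarsen granularity: fuse tasks a and b into one sequential
        task and redirect their edges, dropping the internal edge."""
        fa, fb = self.tasks.pop(a), self.tasks.pop(b)
        self.tasks[merged_name] = lambda: (fa(), fb())
        self.edges = [(merged_name if s in (a, b) else s,
                       merged_name if d in (a, b) else d)
                      for (s, d) in self.edges
                      if not (s in (a, b) and d in (a, b))]

g = TaskGraph()
g.add_task("load", lambda: "data")
g.add_task("filter", lambda: "filtered")
g.add_edge("load", "filter")
g.merge("load", "filter", "load_filter")
print(list(g.tasks))  # the two fine-grained tasks are now one node
```

    A code generator targeting a given architecture would then map each remaining node to a process or thread, which is why merging nodes trades parallelism for reduced communication overhead.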

    Toward Optimizing Distributed Programs Directed by Configurations

    Networks of workstations are now viable environments for running distributed and parallel applications. Recent advances in software interconnection technology enable programmers to prepare applications to run in dynamically changing environments, because module interconnection is regarded as an intellectual activity essentially distinct from that of implementing individual modules. But there remains the question of how to optimize the performance of those applications for a given execution environment: how can developers realize performance gains without paying a high programming cost to specialize their application for the target environment? Interconnection technology has allowed programmers to tailor and tune their applications on distributed environments, but the traditional approach to this process has ignored the performance issue in favor of graceful, seamless integration of the various software components.

    PolyAPM: Vergleichende Parallelprogrammierung mit Abstrakten Parallelen Maschinen

    A parallelising compilation consists of many translation and optimisation stages. The programmer may steer the compiler through these stages by supplying directives with the source code or setting compiler switches. However, for an evaluation of the effects of individual stages, their selection and their best order, this approach is not optimal. To solve this problem, we propose the following method. The compilation is cast as a sequence of program transformations. Each intermediate program runs on an Abstract Parallel Machine (APM), while the program generated by the final transformation runs on the target architecture. Our intermediate programs are all in the same language, Haskell. Thus, each program is executable and still abstract enough to be legible, which enables the evaluation of the transformation that generated it. This evaluation is supported by a cost model, which makes a performance prediction of the abstract program for a real machine. Our project, PolyAPM, provides an acyclic directed graph -- usually a tree -- of APMs whose traversal specifies different combinations and orders of transformations. From one source program, several target programs can be constructed. Their run time characteristics can be evaluated and compared. The goal of PolyAPM is not to support the one-off construction of parallel application programs. For the method's overhead to pay off, the project aims rather at supporting the construction and comparison of many similar variations of a parallel program and a comparative evaluation of parallelisation techniques. With the automation of transformations, PolyAPM can also be used to construct semi-automatic compilation systems.
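
    The core PolyAPM idea -- compilation as a chain of transformations whose every intermediate stays executable and can be scored by a cost model -- can be sketched roughly as follows. The sketch is in Python rather than the project's Haskell, and all stage names and the cost formula are illustrative assumptions:

```python
# Toy sketch of transformation-chain compilation with a cost model.
# Each "program" is a small dict; each stage returns a new, still
# runnable intermediate that the cost model can score for a machine.

def normalise(prog):
    # Stage 1 (placeholder): e.g. flatten nested expressions.
    return dict(prog)

def parallelise(prog):
    # Stage 2: mark the program's work as parallelisable.
    out = dict(prog)
    out["parallel"] = True
    return out

def cost(prog, machine):
    """Toy cost model: predicted run time on `machine` processors."""
    work = prog["work"]
    return work / machine["procs"] if prog.get("parallel") else work

pipeline = [normalise, parallelise]
prog = {"work": 100.0}
for stage in pipeline:
    prog = stage(prog)  # every intermediate is itself executable
    print(stage.__name__, cost(prog, {"procs": 4}))
```

    Evaluating the cost after each stage is what lets different transformation orders (the different traversals of the APM tree) be compared on the same source program.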

    Design and analysis of low voltage smart feeder protection using PhotoMOS and a proficient monitoring model for real-time IoT applications.

    Master's Degree. University of KwaZulu-Natal, Durban. Abstract available in PDF.

    A parallel functional language compiler for message-passing multicomputers

    The research presented in this thesis is about the design and implementation of Naira, a parallel, parallelising compiler for a rich, purely functional programming language. The source language of the compiler is a subset of Haskell 1.2. The front end of Naira is written entirely in the Haskell subset being compiled. Naira has been successfully parallelised and is the largest successfully parallelised Haskell program, having achieved good absolute speedups on a network of SUN workstations. Having the same basic structure as other production compilers of functional languages, Naira's parallelisation technology should carry forward to other functional language compilers. The back end of Naira is written in C and generates parallel code in the C language, which is envisioned to run on distributed-memory machines. The code generator is based on a novel compilation scheme specified using a restricted form of Milner's π-calculus which achieves asynchronous communication. We present the first working implementation of this scheme on distributed-memory message-passing multicomputers with split-phase transactions. Simulated assessment of the generated parallel code indicates good parallel behaviour. Parallelism is introduced using explicit, advisory user annotations in the source program, and there are two major aspects of the use of annotations in the compiler. First, the front end of the compiler is parallelised so as to improve its efficiency at compilation time when it is compiling input programs. Secondly, the input programs to the compiler can themselves contain annotations based on which the compiler generates the multi-threaded parallel code. These, therefore, make Naira, unusually and uniquely, both a parallel and a parallelising compiler. We adopt a medium-grained approach to granularity where function applications form the unit of parallelism and load distribution.
    We have experimented with two different task distribution strategies, deterministic and random, and have also experimented with thread-based and quantum-based scheduling policies. Our experiments show that there is little efficiency difference for regular programs but the quantum-based scheduler is the best in programs with irregular parallelism. The compiler has been successfully built, parallelised and assessed using both idealised and realistic measurement tools: we obtained significant compilation speed-ups on a variety of simulated parallel architectures. The simulated results are supported by the best results obtained on real hardware for such a large program: we measured an absolute speedup of 2.5 on a network of 5 SUN workstations. The compiler has also been shown to have good parallelising potential, based on popular test programs. Results of assessing Naira's generated unoptimised parallel code are comparable to those produced by other successful parallel implementation projects.
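
    The deterministic versus random task-distribution comparison mentioned above can be sketched as follows; round-robin stands in for a deterministic strategy, and all workload figures are invented for illustration:

```python
# Toy comparison of two task-distribution strategies over a set of
# function applications (the unit of parallelism above). The resulting
# per-worker loads show how placement affects the makespan (max load).

import random

def round_robin(tasks, n_workers):
    """Deterministic placement: task i goes to worker i mod n_workers."""
    loads = [0.0] * n_workers
    for i, t in enumerate(tasks):
        loads[i % n_workers] += t
    return loads

def random_placement(tasks, n_workers, seed=0):
    """Random placement with a fixed seed for reproducibility."""
    rng = random.Random(seed)
    loads = [0.0] * n_workers
    for t in tasks:
        loads[rng.randrange(n_workers)] += t
    return loads

tasks = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0]  # cost of each function application
print(max(round_robin(tasks, 3)))        # makespan under round-robin
print(max(random_placement(tasks, 3)))   # makespan under random placement
```

    For regular workloads both strategies balance similarly, which is consistent with the abstract's observation that the efficiency difference is small for regular programs.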

    Institute of Safety Research, Annual Report 1995

    The report gives an overview of the scientific work of the Institute of Safety Research in 1995.