
    Advanced computer architecture specification for automated weld systems

    This report describes the requirements for an advanced automated weld system and the associated computer architecture, and defines the overall system specification from a broad perspective. Driven by the requirements of welding procedures as they relate to an integrated multiaxis motion control and sensor architecture, the computer system specification builds on a proven multiple-processor design: an expandable, distributed-memory architecture with a single global bus, in which individual processors are assigned to specific tasks that support sensor or control processes. The specified architecture is sufficiently flexible to integrate previously developed equipment, to be upgraded, and to allow on-site modifications.
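
    A minimal C sketch of the kind of organization described above, assuming a static table that maps each processor on the global bus to one sensor-support or control-support task. The node count and task names are invented for illustration; the report itself does not enumerate them.

        /* Hypothetical sketch of the task-to-processor assignment: a single
         * global bus connects processors with local (distributed) memory, and
         * each processor is statically assigned one sensor or control task.
         * All names here are illustrative assumptions. */
        #include <stdio.h>

        enum task_kind { SENSOR_TASK, CONTROL_TASK };

        struct node {
            int id;                 /* position on the global bus        */
            enum task_kind kind;    /* sensor-support or control-support */
            const char *task;       /* assigned welding-related task     */
        };

        int main(void)
        {
            /* Static assignment: one task per processor; the system is
             * expandable simply by adding rows to this table. */
            struct node bus[] = {
                { 0, SENSOR_TASK,  "seam-tracking vision"      },
                { 1, SENSOR_TASK,  "arc/temperature sensing"   },
                { 2, CONTROL_TASK, "multiaxis motion control"  },
                { 3, CONTROL_TASK, "weld power-supply control" },
            };
            int n = (int)(sizeof bus / sizeof bus[0]);

            for (int i = 0; i < n; i++)
                printf("node %d [%s]: %s\n", bus[i].id,
                       bus[i].kind == SENSOR_TASK ? "sensor" : "control",
                       bus[i].task);
            return 0;
        }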

    Development of a Computer Architecture to Support the Optical Plume Anomaly Detection (OPAD) System

    The NASA OPAD spectrometer system relies heavily on extensive software which repetitively extracts spectral information from the engine plume and reports the amounts of metals which are present in the plume. The development of this software is at a sufficiently advanced stage that it can be used in actual engine tests to provide valuable data on engine operation and health. This activity will continue and, in addition, the OPAD system is planned for use in flight aboard space vehicles. The two implementations, test-stand and in-flight, may have some differing requirements. For example, the data stored during a test-stand experiment are much more extensive than in the in-flight case. In both cases, though, the majority of the requirements are similar. New data from the spectrograph are generated at a rate of once every 0.5 sec or faster. All processing must be completed within this period to maintain real-time performance. Every 0.5 sec, the OPAD system must report the amounts of specific metals within the engine plume, given the spectral data. At present, the software in the OPAD system performs this function by solving the inverse problem. It uses powerful physics-based computational models (the SPECTRA code), which receive amounts of metals as inputs and produce the spectral data that would have been observed had the same metal amounts been present in the engine plume. During the experiment, for every spectrum that is observed, an initial approximation is performed using neural networks to establish an initial metal composition which approximates the real one as accurately as possible. Then, using optimization techniques, the SPECTRA code is repeatedly used to produce a fit to the data, adjusting the metal input amounts until the produced spectrum matches the observed one to within a given tolerance. This iterative solution to the original problem of determining the metal composition in the plume takes a relatively long time to execute on a modern single-processor workstation, so real-time operation is currently not possible. The number of iterations required to fit the data may also vary from one spectral sample to the next, yet the OPAD system must be designed to maintain real-time performance in all cases. Although faster single-processor workstations are available for execution of the fitting and SPECTRA software, this option is unattractive due to the excessive cost of very fast workstations and because such hardware is not easily expandable to accommodate future versions of the software which may require more processing power. Initial research has already demonstrated that the OPAD software can take advantage of a parallel computer architecture to achieve the necessary speedup. Current work has improved the software by converting it into a form which is easily parallelizable. Timing experiments have been performed to establish the computational complexity and execution speed of major components of the software. This work provides the foundation of future work which will create a fully parallel version of the software executing on a shared-memory multiprocessor system.
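
    The fitting loop described above can be sketched as follows, with a toy linear model standing in for the SPECTRA code, a fixed starting vector standing in for the neural-network initial guess, and a simple coordinate-descent search standing in for the optimization technique. All of these stand-ins are assumptions made for illustration, not the OPAD implementation.

        /* Minimal sketch of the iterative spectral fit: adjust metal amounts
         * until the modeled spectrum matches the "observed" one within a
         * tolerance. The forward model, sizes, and search are toy choices. */
        #include <stdio.h>

        #define N_METALS 3
        #define N_BINS   8

        /* Toy forward model: spectrum = sum of fixed per-metal profiles. */
        static void spectra_model(const double m[N_METALS], double out[N_BINS])
        {
            static const double profile[N_METALS][N_BINS] = {
                {1,2,3,2,1,0,0,0}, {0,1,2,3,2,1,0,0}, {0,0,1,2,3,2,1,0}
            };
            for (int b = 0; b < N_BINS; b++) {
                out[b] = 0.0;
                for (int k = 0; k < N_METALS; k++)
                    out[b] += m[k] * profile[k][b];
            }
        }

        /* Squared misfit between modeled and observed spectra. */
        static double misfit(const double m[N_METALS], const double obs[N_BINS])
        {
            double s[N_BINS], e = 0.0;
            spectra_model(m, s);
            for (int b = 0; b < N_BINS; b++)
                e += (s[b] - obs[b]) * (s[b] - obs[b]);
            return e;
        }

        int main(void)
        {
            double truth[N_METALS] = {0.7, 0.2, 0.5}, obs[N_BINS];
            spectra_model(truth, obs);            /* the "observed" spectrum */

            double m[N_METALS] = {0.5, 0.5, 0.5}; /* stand-in for NN guess */
            double step = 0.1, tol = 1e-9;

            /* Coordinate descent: nudge one metal amount at a time while the
             * misfit keeps decreasing; halve the step when stuck. */
            while (misfit(m, obs) > tol && step > 1e-6) {
                int improved = 0;
                for (int k = 0; k < N_METALS; k++) {
                    double base = misfit(m, obs);
                    m[k] += step;
                    if (misfit(m, obs) < base) { improved = 1; continue; }
                    m[k] -= 2 * step;
                    if (misfit(m, obs) < base) { improved = 1; continue; }
                    m[k] += step;          /* neither direction helped */
                }
                if (!improved) step *= 0.5;
            }

            for (int k = 0; k < N_METALS; k++)
                printf("metal %d: fitted %.3f (true %.3f)\n", k, m[k], truth[k]);
            return 0;
        }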

    Spectrum Preprocessing in the OPAD System

    To determine the readiness of a rocket engine, and to facilitate decisions on continued use of the engine before servicing is required, high-resolution optical spectrometers are used to acquire spectra from the exhaust plume. Such instruments are in place at the Technology Test Bed (TTB) stand at Marshall Space Flight Center (MSFC) and the A1 stand at Stennis Space Center (SSC). The optical spectrometers in place have a wide wavelength range covering the visible and near-infrared regions, taking approximately 8000 measurements at about one Angstrom spacing every half second. In the early stages of this work, data analysis was done manually: a simple spectral model produced a theoretical spectrum for given amounts of elements, and an operator visually matched the theoretical and real spectra. Currently, extensive software is being developed to receive data from the spectrometer and automatically generate an estimate of element amounts in the plume. It will result in fast and reliable analysis, with the capability of real-time performance. This software is the result of the efforts of several groups, but it has mainly been developed and used by scientists and combustion engineers in their effort to understand the underlying physical processes and phenomena and to create visualization and report-generation facilities. Most of the software has been developed using the IDL language and programming environment, which allows for extensive data visualization. Although this environment has been very beneficial, the resulting programs execute very slowly and are not easily portable to more popular, real-time environments. The need for portability and high execution speed is becoming more apparent as the software matures, moving out of the experimentation stage and into the production stage, where ease of use and short response time are the most desirable features. The purpose of the work described here is to assist the scientists who developed the original IDL-based version in converting the software into the real-time, production version. Specifically, a section of the software devoted to the preprocessing of the spectra has been converted into the C language. In addition, parts of this software which may be improved have been identified, and recommendations are given to improve the functionality and ease of use of the new version.
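
    As an illustration of the kind of per-frame preprocessing ported from IDL to C, the sketch below applies background subtraction and a five-point moving-average smooth to a synthetic 8000-sample frame. The specific preprocessing steps are assumptions; the abstract does not detail the actual chain.

        /* Illustrative C preprocessing of one spectrometer frame. The steps
         * (background subtraction, moving-average smoothing) are assumed. */
        #include <stdio.h>

        #define N_SAMPLES 8000  /* ~8000 measurements at ~1 Angstrom spacing */

        /* Subtract a per-channel background, clamping negatives to zero. */
        static void subtract_background(double *s, const double *bg, int n)
        {
            for (int i = 0; i < n; i++) {
                s[i] -= bg[i];
                if (s[i] < 0.0) s[i] = 0.0;
            }
        }

        /* 5-point moving average to suppress detector noise (edge samples
         * are copied unchanged). */
        static void smooth5(const double *in, double *out, int n)
        {
            for (int i = 0; i < n; i++) {
                if (i < 2 || i >= n - 2) { out[i] = in[i]; continue; }
                out[i] = (in[i-2] + in[i-1] + in[i] + in[i+1] + in[i+2]) / 5.0;
            }
        }

        int main(void)
        {
            static double raw[N_SAMPLES], bg[N_SAMPLES], clean[N_SAMPLES];

            for (int i = 0; i < N_SAMPLES; i++) {  /* synthetic frame */
                bg[i]  = 10.0;
                raw[i] = bg[i] + (i % 7);          /* toy signal + background */
            }

            subtract_background(raw, bg, N_SAMPLES);
            smooth5(raw, clean, N_SAMPLES);

            printf("sample 100 after preprocessing: %.3f\n", clean[100]);
            return 0;
        }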

    Task scheduling in dataflow computer architectures

    Dataflow computers provide a platform for the solution of a large class of computational problems, which includes digital signal processing and image processing. Many typical applications are represented by a set of tasks which can be repetitively executed in parallel as specified by an associated dataflow graph. Research in this area aims to model these architectures, develop scheduling procedures, and predict transient and steady-state performance. Researchers at NASA have created a model and developed associated software tools which are capable of analyzing a dataflow graph and predicting its runtime performance under various resource and timing constraints. These models and tools were extended and used in this work. Experiments using these tools revealed certain properties of such graphs that require further study. Specifically, the transient behavior at the beginning of the execution of a graph can have a significant effect on the steady-state performance. Transformation and retiming of the application algorithm and its initial conditions can produce a different transient behavior and consequently different steady-state performance. The effect of such transformations on the resource requirements, or under resource constraints, requires extensive study. Task scheduling to obtain maximum performance (based on user-defined criteria), or to satisfy a set of resource constraints, can also be significantly affected by a transformation of the application algorithm. Since task scheduling is performed by heuristic algorithms, further research is needed to determine whether new scheduling heuristics can be developed that exploit such transformations. This work has provided the initial development for further long-term research efforts. A simulation tool was completed to provide insight into the transient and steady-state execution of a dataflow graph. A set of scheduling algorithms was completed which can operate in conjunction with the modeling and performance tools previously developed. Initial studies of the performance of these algorithms were done to examine the effects of application-algorithm transformations as measured by such quantities as number of processors, time between outputs, time between input and output, communication time, and memory size.
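
    The following C sketch shows the flavor of scheduling studied here: a greedy list scheduler that starts each task of a small dataflow graph once its predecessors finish, on the processor that frees up first, and reports start/finish times and the makespan. The graph, durations, and processor count are invented for illustration and do not reflect the NASA tools themselves.

        /* Greedy list scheduling of a toy dataflow graph on two processors. */
        #include <stdio.h>

        #define N_TASKS 5
        #define N_PROCS 2

        int main(void)
        {
            /* Durations and dependencies: dep[i][j] = 1 if task i needs j. */
            int dur[N_TASKS] = {2, 3, 2, 4, 1};
            int dep[N_TASKS][N_TASKS] = {
                {0,0,0,0,0},   /* t0: source         */
                {1,0,0,0,0},   /* t1 after t0        */
                {1,0,0,0,0},   /* t2 after t0        */
                {0,1,1,0,0},   /* t3 after t1 and t2 */
                {0,0,0,1,0},   /* t4: sink, after t3 */
            };
            int finish[N_TASKS] = {0}, done[N_TASKS] = {0};
            int proc_free[N_PROCS] = {0};
            int scheduled = 0;

            while (scheduled < N_TASKS) {
                for (int i = 0; i < N_TASKS; i++) {
                    if (done[i]) continue;
                    int ready = 1, earliest = 0;
                    for (int j = 0; j < N_TASKS; j++)
                        if (dep[i][j]) {
                            if (!done[j]) { ready = 0; break; }
                            if (finish[j] > earliest) earliest = finish[j];
                        }
                    if (!ready) continue;
                    /* Place the ready task on the first-free processor. */
                    int p = 0;
                    for (int q = 1; q < N_PROCS; q++)
                        if (proc_free[q] < proc_free[p]) p = q;
                    int start = earliest > proc_free[p] ? earliest : proc_free[p];
                    finish[i] = start + dur[i];
                    proc_free[p] = finish[i];
                    done[i] = 1;
                    scheduled++;
                    printf("task %d on proc %d: start %d, finish %d\n",
                           i, p, start, finish[i]);
                }
            }
            printf("makespan: %d\n", finish[N_TASKS - 1]);
            return 0;
        }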

    Large-scale computations on histology images reveal grade-differentiating parameters for breast cancer

    BACKGROUND: Tumor classification is inexact and largely dependent on the qualitative pathological examination of images of the tumor tissue slides. In this study, our aim was to develop an automated computational method to classify Hematoxylin and Eosin (H&E) stained tissue sections based on cancer tissue texture features. METHODS: Image processing of histology slide images was used to detect and identify adipose tissue, extracellular matrix, morphologically distinct cell nuclei types, and the tubular architecture. The texture parameters derived from image analysis were then applied to classify images in a supervised classification scheme using the histologic grade of a training set as guidance. RESULTS: The histologic grade assigned by pathologists to invasive breast carcinoma images strongly correlated with both the presence and extent of cell nuclei with dispersed chromatin and the architecture, specifically the extent of the presence of tubular cross sections. The two parameters that differentiated tumor grade in this study were (1) the number density of cell nuclei with dispersed chromatin and (2) the number density of tubular cross sections, identified through image processing as white blobs surrounded by a continuous string of cell nuclei. Classification based on subdivisions of a whole-slide image containing a high concentration of cancer cell nuclei consistently agreed with the grade classification of the entire slide. CONCLUSION: The automated image analysis and classification presented in this study demonstrate the feasibility of developing clinically relevant classification of histology images based on micro-texture. This method provides pathologists with a quantitative tool for evaluating the components of the Nottingham system for breast tumor grading and for avoiding intra-observer variability, thus increasing the consistency of the decision-making process.
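
    A hedged sketch of how the two grade-differentiating parameters could drive a decision rule: each image tile carries the number density of dispersed-chromatin nuclei and of tubular cross sections, and a toy threshold rule assigns a grade. The threshold values are invented; the study derives its classification from graded slides, not from fixed cutoffs.

        /* Toy two-feature grading rule. The densities and cutoffs are
         * illustrative assumptions, not values from the study. */
        #include <stdio.h>

        struct tile {
            const char *id;
            double nuclei_density;  /* dispersed-chromatin nuclei per mm^2 */
            double tubule_density;  /* tubular cross sections per mm^2     */
        };

        /* Many tubules -> low grade; few tubules plus many dispersed-
         * chromatin nuclei -> high grade; otherwise intermediate. */
        static int grade(const struct tile *t)
        {
            if (t->tubule_density > 20.0) return 1;
            if (t->nuclei_density > 150.0 && t->tubule_density < 5.0) return 3;
            return 2;
        }

        int main(void)
        {
            struct tile tiles[] = {
                { "tile-A",  40.0, 35.0 },
                { "tile-B", 120.0, 10.0 },
                { "tile-C", 210.0,  2.0 },
            };
            for (int i = 0; i < 3; i++)
                printf("%s -> grade %d\n", tiles[i].id, grade(&tiles[i]));
            return 0;
        }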

    Block Migration in Broadcast-based Multiprocessor Architectures


    A comparison of Broadcast-based and Switch-based Networks of Workstations

    Networks of Workstations have mostly been designed using switch-based architectures and programming based on message passing. This paper describes a network of workstations based on the Simultaneous Optical Multiprocessor Exchange Bus (SOME-Bus), a low-latency, high-bandwidth interconnection network that directly links arbitrary pairs of processor nodes without contention and can efficiently interconnect several hundred nodes. Each node has a dedicated output channel and an array of receivers, with one receiver dedicated to every other node's output channel. The SOME-Bus eliminates the need for global arbitration and provides bandwidth that scales directly with the number of nodes in the system. Under the Distributed Shared Memory (DSM) paradigm, the SOME-Bus allows strong integration of the transmitter, receiver, and cache-controller hardware to produce a highly integrated system-wide cache-coherence mechanism. Backward Error Recovery fault-tolerance techniques can exploit DSM data replication and SOME-Bus broadcasts with little additional network traffic and correspondingly little performance degradation. This paper examines switch-based networks that maintain high performance under varying degrees of application locality and compares them to the SOME-Bus in terms of latency and processor utilization. In addition, the effect of fault-tolerant DSM is examined on all networks.
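
    A structural sketch of the SOME-Bus organization described above: every node owns a dedicated output channel, every node snoops all other channels through its receiver array, and a broadcast therefore needs no global arbitration. The message representation and the invalidate-on-write action are illustrative assumptions, not the paper's coherence protocol.

        /* Dedicated-channel broadcast with per-node receiver arrays. */
        #include <stdio.h>

        #define N_NODES 4

        /* One slot per sender channel: channel[s] holds the block address
         * node s most recently broadcast (-1 if idle). */
        static int channel[N_NODES];

        /* Each node snoops every other channel through its receiver array
         * and invalidates a cached copy of any broadcast block (toy DSM
         * coherence action). */
        static void snoop(int node, int cached_block)
        {
            for (int s = 0; s < N_NODES; s++) {
                if (s == node) continue;         /* own channel: skip */
                if (channel[s] == cached_block)
                    printf("node %d: invalidate block %d (write by node %d)\n",
                           node, cached_block, s);
            }
        }

        int main(void)
        {
            for (int s = 0; s < N_NODES; s++) channel[s] = -1;

            /* Two nodes broadcast writes at once: no contention, since
             * each transmits on its own dedicated channel. */
            channel[0] = 42;
            channel[2] = 7;

            for (int n = 0; n < N_NODES; n++) snoop(n, 42);
            return 0;
        }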