
    Effect of pressure on the flow behavior of polybutene

    The rheology of a submicron-thick polymer melt is examined under high normal pressure by a recently developed photobleached-fluorescence imaging velocimetry technique. In particular, the validity and limitations of the Reynolds equation solution, which predicts a linear through-thickness velocity profile, are investigated. Polybutene (PB) is sheared between two surfaces in a point contact. The results presented in this work suggest the existence of a critical pressure below which the through-thickness velocity profile is close to linear. At higher pressures, however, the profile assumes a sigmoidal shape resembling partial plug flow. The departure of the sigmoidal profile from the linear profile increases with pressure, which is indicative of a second-order phase/glass transition. The nature of the transition is confirmed independently by examining the pressure-dependent dynamics of PB squeeze films. The critical pressure for the flow profile transition varies with molecular weight, which is consistent with a pressure-induced glass transition of the polymer melt.
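    The two profile shapes described above can be contrasted with a short sketch. The linear profile is the Reynolds-equation (Couette) solution; the sigmoidal shape is approximated here with a hyperbolic tangent, which is an assumed functional form chosen only to mimic partial plug flow, not the paper's fitted profile. The sharpness parameter k is likewise illustrative.

    ```python
    import numpy as np

    z = np.linspace(0.0, 1.0, 101)  # normalized through-thickness coordinate
    u_wall = 1.0                    # velocity of the moving surface (normalized)

    # Below the critical pressure: linear (Couette) profile from the
    # Reynolds equation solution.
    u_linear = u_wall * z

    # Above the critical pressure: sigmoidal profile resembling partial
    # plug flow. The tanh form and k = 8 are illustrative assumptions.
    k = 8.0
    u_sigmoid = u_wall * 0.5 * (1.0 + np.tanh(k * (z - 0.5)) / np.tanh(k / 2.0))

    # Both profiles satisfy the no-slip boundary conditions
    # u(0) = 0 and u(1) = u_wall; they differ only in between.
    print(u_sigmoid[0], u_sigmoid[-1])
    ```

    Increasing k flattens the center of the sigmoidal curve, which is one simple way to visualize the growing departure from linearity that the abstract associates with increasing pressure.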

    Implementation of an intelligent control system

    A laboratory testbed facility constructed at NASA LeRC for the development of an Intelligent Control System (ICS) for reusable rocket engines is described. The framework of the ICS consists of a hierarchy of control and diagnostic functions. The traditional high-speed, closed-loop controller resides at the lowest level of the ICS hierarchy. Above this level reside the diagnostic functions which identify engine faults. The ICS top level consists of the coordination function, which manages the interaction between an expert system and a traditional control system. The purpose of the testbed is to demonstrate the feasibility of the ICS concept by implementing the ICS as the primary controller in a simulation of the Space Shuttle Main Engine (SSME). The functions of the ICS implemented in the testbed are as follows: an SSME dynamic simulation with selected fault mode models, a reconfigurable controller, a neural network for sensor validation, a model-based failure detection algorithm, a rule-based failure detection algorithm, a diagnostic expert system, an intelligent coordinator, and a user interface which provides a graphical representation of the events occurring within the testbed. The diverse nature of the ICS has led to the development of a distributed architecture consisting of specialized hardware and software for the implementation of the various functions. The testbed is made up of five different computer systems. These individual computers are discussed along with the schemes used to implement the various ICS components. The communication between computers and the timing and synchronization between components are also addressed.

    Logic analysis of complex systems by characterizing failure phenomena to achieve diagnosis and fault-isolation

    A recent result shows that, for a certain class of systems, the interdependency among the elements of such a system, together with the elements themselves, constitutes a mathematical structure known as a partially ordered set. This is called a loop-free logic model of the system. On the basis of an intrinsic property of this mathematical structure, a characterization of system component failure in terms of maximal subsets of bad test signals of the system was obtained. As a consequence, information concerning the total number of failed components in the system was also deduced. Detailed examples are given to show how to restructure real systems containing loops into loop-free models to which the result is applicable.

    Matrix Transfer Function Design for Flexible Structures: An Application

    The application of matrix transfer function design techniques to the problem of disturbance rejection on a flexible space structure is demonstrated. The design approach is based on parameterizing a class of stabilizing compensators for the plant and formulating the design specifications as a constrained minimization problem in terms of these parameters. The solution yields a matrix transfer function representation of the compensator. A state space realization of the compensator is constructed to investigate performance and stability on the nominal and perturbed models. The application is made to the ACOSSA (Active Control of Space Structures) optical structure.

    Possible Implication of a Single Nonextensive $p_T$ Distribution for Hadron Production in High-Energy $pp$ Collisions

    Multiparticle production processes in $pp$ collisions at the central rapidity region are usually considered to be divided into independent "soft" and "hard" components. The first is described by exponential (thermal-like) transverse momentum spectra in the low-$p_T$ region with a scale parameter $T$ associated with the temperature of the hadronizing system. The second is governed by power-like distributions of transverse momenta with power index $n$ at high $p_T$, associated with the hard scattering between partons. We show that the hard-scattering integral can be approximated as a nonextensive distribution of a quasi-power-law containing a scale parameter $T$ and a power index $n = 1/(q - 1)$, where $q$ is the nonextensivity parameter. We demonstrate that the whole region of transverse momenta presently measurable at LHC experiments at central rapidity (in which the observed cross section varies by 14 orders of magnitude down to the low-$p_T$ region) can be adequately described by a single nonextensive distribution. These results suggest the dominance of the hard-scattering hadron-production process and the approximate validity of a "no-hair" statistical-mechanical description of the $p_T$ spectra for the whole $p_T$ region at central rapidity for $pp$ collisions at high energies. Comment: 10 pages, 3 figures; presented by G. Wilk at the XLIV International Symposium on Multiparticle Dynamics, 8-12 September 2014, Bologna, Italy
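    The quasi-power-law described above is the Tsallis-like form $f(p_T) \propto (1 + p_T/(nT))^{-n}$ with $n = 1/(q-1)$, which interpolates between the exponential (thermal) shape at low $p_T$ and a pure power law at high $p_T$. A minimal sketch, with illustrative (not fitted) parameter values:

    ```python
    import numpy as np

    def tsallis(pt, T=0.15, q=1.11):
        """Unnormalized nonextensive (Tsallis-like) p_T distribution.

        T and q defaults are illustrative assumptions, not the paper's
        fitted values. n = 1/(q - 1) is the power index.
        """
        n = 1.0 / (q - 1.0)
        return (1.0 + pt / (n * T)) ** (-n)

    # As q -> 1, n -> infinity and the form reduces to the thermal
    # exponential exp(-p_T / T); at p_T >> n*T it falls off as p_T**(-n),
    # so one expression spans both the "soft" and "hard" regimes.
    pt = np.array([0.1, 1.0, 10.0, 100.0])
    print(tsallis(pt))
    ```

    Evaluating the same expression over many decades of $p_T$ is what allows a single parameter pair $(T, q)$ to describe a cross section varying by 14 orders of magnitude.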

    An Investigation Into an Effective Method of Automatically Analysing Oracle Applications to Count Function Points

    Function Point Analysis (FPA) is a synthetic software estimation metric used for computing the size and complexity of applications. It was first introduced by Allan J. Albrecht during the mid-seventies, as the result of lengthy research based on applications developed in the COBOL and PL/1 programming languages. The purpose of this research is to investigate the possibility, and the most effective method, of automatically performing a Function Point Analysis on Oracle applications that consist of Oracle Forms and Oracle Reports. The research revealed a seeming lack of other research on this topic. As FPA was invented a few years prior to the birth of Oracle, and consequently that of fourth-generation languages, it had to be tailored to suit the fourth-generation-language Oracle tools used to develop the Oracle applications. This experiment provided a proof of concept and resulted in software that achieved its objective of automatically counting function points for Oracle applications, consisting of Oracle Forms and Oracle Reports, in an a posteriori manner.
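    The counting step that such a tool automates can be sketched with the standard IFPUG-style unadjusted function point calculation: each of the five component types is counted at three complexity levels and multiplied by a fixed weight. The component counts below are invented for a hypothetical Forms/Reports application; the paper's tool derives such counts from the application itself.

    ```python
    # Standard IFPUG weights per component type, as (low, average, high)
    # complexity. The example counts are hypothetical.
    WEIGHTS = {
        "EI":  (3, 4, 6),    # external inputs
        "EO":  (4, 5, 7),    # external outputs
        "EQ":  (3, 4, 6),    # external inquiries
        "ILF": (7, 10, 15),  # internal logical files
        "EIF": (5, 7, 10),   # external interface files
    }

    def unadjusted_fp(counts):
        """counts: {type: (n_low, n_avg, n_high)} -> unadjusted function points."""
        return sum(
            n * w
            for ctype, ns in counts.items()
            for n, w in zip(ns, WEIGHTS[ctype])
        )

    # Hypothetical counts for a small Oracle Forms/Reports application:
    example = {
        "EI":  (4, 2, 0),
        "EO":  (3, 1, 0),
        "EQ":  (2, 0, 0),
        "ILF": (1, 1, 0),
        "EIF": (0, 0, 0),
    }
    print(unadjusted_fp(example))  # -> 60
    ```

    The tailoring the abstract mentions amounts to mapping fourth-generation-language artifacts (Forms blocks, Reports queries, tables) onto these five classical component types before applying the weights.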