15,120 research outputs found

    Analysis of the fluidization behaviour and application of a novel spouted bed apparatus for spray granulation and coating

    Spouted beds are well known for their good mixing of the solid phase and for the intensive heat and mass transfer between the fluid and solid phases. Nearly isothermal conditions are achieved, which is advantageous for treating granular solid materials in granulation, agglomeration, or coating processes. In this work the hydrodynamic behaviour of a novel spouted bed apparatus with two horizontal, slit-shaped gas inlets is investigated by high-frequency recordings of the gas-phase pressure fluctuations over the entire bed. The hydrodynamically stable operating domain, which is essential for operating the apparatus, is identified and depicted in the Re-G-Ar diagram of Mitev [1]. A further focus of this work is the simulation of the spouting process using a continuum approach in FLUENT 6.2. The effect of frictional stresses on the hydrodynamic behaviour is examined by performing simulations with and without consideration of friction. The angle of internal friction φ in Schaeffer's [10] model is varied and the simulation results are compared with experiments. The influence of friction was found to be small when applying Schaeffer's [10] rather simple, empirical frictional viscosity model, which is based on soil-mechanics principles; simulation results that neglected friction were similar to those that included it. Another part of this work is the industrial application of the novel spouted bed in granulation and coating processes. Compared to classical fluidized beds, a much narrower particle size distribution, a higher yield, and a higher product quality were obtained in the novel spouted bed.
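A common way to characterise fluidization regimes from high-frequency pressure recordings like those described above is to look at the standard deviation and dominant frequency of the fluctuation signal. The sketch below is a minimal, assumed illustration of that idea (it is not the analysis pipeline used in the study): a naive DFT over a zero-mean pressure record picks out the strongest oscillation frequency.

```python
import math

def dominant_frequency(samples, sample_rate_hz):
    """Return (peak_freq_hz, std_dev) for a pressure-fluctuation record.

    Naive O(n^2) DFT over the zero-mean signal; adequate for short
    records and for illustrating the idea.
    """
    n = len(samples)
    mean = sum(samples) / n
    x = [s - mean for s in samples]
    std = math.sqrt(sum(v * v for v in x) / n)  # fluctuation intensity
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2):  # skip DC, stop below Nyquist
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return best_k * sample_rate_hz / n, std

# Synthetic 5 Hz oscillation sampled at 100 Hz for 2 s
sig = [math.sin(2 * math.pi * 5 * t / 100) for t in range(200)]
freq, std = dominant_frequency(sig, 100)
```

In practice one would use an FFT and a proper power-spectral-density estimate, but the regime-identification logic (intensity plus dominant frequency) is the same.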

    Application of expert systems in project management decision aiding

    The feasibility of developing an expert-systems-based project management decision aid to enhance the performance of NASA project managers was assessed. The research effort included extensive literature reviews in the areas of project management, project management decision aiding, expert systems technology, and human-computer interface engineering. Literature reviews were augmented by focused interviews with NASA managers. Time estimation for project scheduling was identified as the target activity for decision augmentation, and a design was developed for an Integrated NASA System for Intelligent Time Estimation (INSITE). The proposed INSITE design was judged feasible with a low level of risk. A partial proof-of-concept experiment was performed and was successful. Specific conclusions drawn from the research and analyses are included. The INSITE concept is potentially applicable in any management sphere, commercial or government, where time estimation is required for project scheduling. As project scheduling is a nearly universal management activity, the range of possibilities is considerable. The INSITE concept also holds potential for enhancing other management tasks, especially in areas such as cost estimation, where estimation-by-analogy is already a proven method.

    Magic-State Functional Units: Mapping and Scheduling Multi-Level Distillation Circuits for Fault-Tolerant Quantum Architectures

    Quantum computers have recently made great strides and are on a long-term path towards useful fault-tolerant computation. A dominant overhead in fault-tolerant quantum computation is the production of high-fidelity encoded qubits, called magic states, which enable reliable error-corrected computation. We present the first detailed designs of hardware functional units that implement space-time optimized magic-state factories for surface code error-corrected machines. Interactions among distant qubits require surface code braids (physical pathways on chip) which must be routed. Magic-state factories are circuits comprised of a complex set of braids that is more difficult to route than the quantum circuits considered in previous work [1]. This paper explores the impact of scheduling techniques, such as gate reordering and qubit renaming, and we propose two novel mapping techniques: braid repulsion and dipole moment braid rotation. We combine these techniques with graph partitioning and community detection algorithms, and further introduce a stitching algorithm for mapping subgraphs onto a physical machine. Our results show a factor of 5.64 reduction in space-time volume compared to the best-known previous designs for magic-state factories. Comment: 13 pages, 10 figures
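The graph-partitioning step the abstract mentions can be illustrated with a toy greedy bisection: split an interaction graph in two and move nodes across the cut whenever that reduces the number of crossing edges. This is a hypothetical stand-in for the partitioning and community-detection algorithms referenced in the paper, not their actual method.

```python
def greedy_bisect(edges, nodes):
    """Split `nodes` into two parts, then greedily toggle nodes across
    the cut while doing so reduces the number of crossing edges.
    Toy illustration of graph partitioning; no balance constraint."""
    half = set(nodes[: len(nodes) // 2])  # arbitrary initial split

    def cut(part):
        return sum(1 for a, b in edges if (a in part) != (b in part))

    improved = True
    while improved:
        improved = False
        for v in nodes:
            trial = half ^ {v}  # move v to the other side
            if 0 < len(trial) < len(nodes) and cut(trial) < cut(half):
                half = trial
                improved = True
    return sorted(half), sorted(set(nodes) - half)

# Two triangles joined by a single bridge edge: the optimal cut
# separates the triangles and cuts only the bridge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
left, right = greedy_bisect(edges, [0, 1, 2, 3, 4, 5])
```

Production tools would use multilevel partitioners or modularity-based community detection; this sketch only shows the objective (minimising crossing edges) that makes partitioning useful for braid mapping.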

    Solar Sources of Interplanetary Magnetic Clouds Leading to Helicity Prediction

    This study identifies the solar origins of magnetic clouds that are observed at 1 AU and predicts the helical handedness of these clouds from the solar surface magnetic fields. We started with the magnetic clouds listed by the Magnetic Field Investigation (MFI) team supporting NASA's WIND spacecraft in what is known as the MFI table and worked backwards in time to identify solar events that produced these clouds. Our methods utilize magnetograms from the Helioseismic and Magnetic Imager (HMI) instrument on the Solar Dynamics Observatory (SDO) spacecraft, so we could only analyze MFI entries after the beginning of 2011. This start date and the end date of the MFI table gave us 37 cases to study. Of these we were able to associate only eight surface events with clouds detected by WIND at 1 AU. We developed a simple algorithm for predicting the cloud helicity which gave the correct handedness in all eight cases. The algorithm is based on the conceptual model that an ejected flux tube has two magnetic origination points at the positions of the strongest radial magnetic field regions of opposite polarity, near the places where the ejected arches end at the solar surface. We were unable to find source events for the remaining 29 cases, owing to the lack of a halo or partial halo CME in an appropriate time window, the lack of magnetic and/or filament activity in the proper part of the solar disk, or the event being too far from disk center. The occurrence of a flare was not a requirement for making the identification, but in fact flares, often weak, did occur for seven of the eight cases. Comment: 18 pages, 8 figures, 2 tables
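The geometric flavour of such a handedness rule can be sketched with a toy chirality indicator: given the two opposite-polarity footpoint positions and the axis direction of the overlying structure, the sign of a cross product distinguishes the two handednesses. This is a hypothetical illustration only; it is not the algorithm used in the study, and the function name and inputs are assumptions.

```python
def handedness_sign(pos_fp, neg_fp, axis_dir):
    """Toy chirality indicator (hypothetical, not the study's rule).

    pos_fp, neg_fp: (x, y) positions of the positive- and
        negative-polarity footpoints on the solar surface.
    axis_dir: (x, y) direction of the overlying structure's axis.
    Returns 'R' (right-handed) or 'L' (left-handed) from the sign of
    the z-component of (footpoint separation) x (axis direction).
    """
    sep = (pos_fp[0] - neg_fp[0], pos_fp[1] - neg_fp[1])
    cross_z = sep[0] * axis_dir[1] - sep[1] * axis_dir[0]
    return "R" if cross_z > 0 else "L"

# Footpoints separated along x, axis pointing along +y -> right-handed
h1 = handedness_sign((1.0, 0.0), (0.0, 0.0), (0.0, 1.0))
# Reversing the polarity positions flips the sign
h2 = handedness_sign((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
```

The point is only that a two-footpoint model reduces helicity prediction to a sign computation on surface-field geometry.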

    An interaction paradigm for impact analysis

    The aerospace industry is concerned with huge software projects. Software development is an evolving process resulting in larger and larger software systems. As systems grow in size, they become more complex and hence harder to maintain. The maintenance of software systems is thus the most expensive part of the software life-cycle, often consuming 50-90% of a project's total budget. Yet while there has been much research carried out on the problems of program and system development, very little work has been done on the problem of maintaining developed programs. It is therefore essential to improve the software maintenance process and the environment for maintenance. Historically, the term Software Maintenance has been applied to the process of modifying a software program after it has been delivered and during its lifetime. The high cost of software during its life cycle can be attributed largely to software maintenance activities, and a major part of these activities is dealing with modifications of the software. These modifications may involve changes at any level of abstraction of a software system (i.e. design, specification, code, ...). Software Maintenance has to deal with modifications which can have severe Ripple Effects at other points in the software system. Impact Analysis addresses this problem and attempts to localize these Ripple Effects. In this thesis the Software Maintenance process, and more specifically the Impact Analysis process, is examined. The different parts of the implementation of the Impact Analysis System are explained. The main results of the thesis are the generation of dependencies and the graph tool used to visualize these dependencies, as well as the impacts on the general dependency graph for impact analysis purposes.
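The core of ripple-effect localization on a dependency graph is a transitive closure of reverse dependencies: everything that (directly or indirectly) depends on a changed artifact is potentially impacted. A minimal sketch, with hypothetical artifact names, of that traversal:

```python
from collections import deque

def impact_set(depends_on, changed):
    """Return every artifact that may feel a ripple effect of a change
    to `changed`, by BFS over the reversed dependency graph.

    depends_on: dict mapping artifact -> list of artifacts it depends on.
    Illustrative sketch only; real impact analysis would also weight
    edges by the kind of dependency (data, control, call, ...).
    """
    reverse = {}
    for node, deps in depends_on.items():
        for d in deps:
            reverse.setdefault(d, []).append(node)  # d is depended on by node
    impacted, queue = set(), deque([changed])
    while queue:
        cur = queue.popleft()
        for dependant in reverse.get(cur, []):
            if dependant not in impacted:
                impacted.add(dependant)
                queue.append(dependant)
    return impacted

# ui -> api -> core, report -> core: changing core ripples everywhere
deps = {"ui": ["api"], "api": ["core"], "report": ["core"], "core": []}
hit = impact_set(deps, "core")
```

The same traversal works at any abstraction level (design, specification, code), as long as the dependencies between artifacts are recorded.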

    Autonomous Architectural Assembly And Adaptation

    An increasingly common solution for systems which are deployed in unpredictable or dangerous environments is to provide the system with an autonomous or self-managing capability. This capability permits the software of the system to adapt to the environmental conditions encountered at runtime by deciding what changes need to be made to the system's behaviour in order to continue meeting the requirements imposed by the designer. The chief advantage of this approach comes from a reduced reliance on the brittle assumptions made at design time. In this work, we describe mechanisms for adapting the software architecture of a system using a declarative expression of the functional requirements (derived from goals), structural constraints, and preferences over the space of non-functional properties possessed by the components of the system. The declarative approach places this work in contrast to existing schemes which require more fine-grained, often procedural, specifications of how to perform adaptations. Our algorithm for assembling and re-assembling configurations chooses between solutions that meet both the functional requirements and the structural constraints by comparing the non-functional properties of the selected components against the designer's preferences between, for example, a high-performance or a highly reliable solution. In addition to the centralised algorithm, we show how the approach can be applied to a distributed system with no central or master node that is aware of the full space of solutions. We use a gossip protocol as a mechanism by which peer nodes can propose what they think the component configuration is (or should be). Gossip ensures that the nodes will reach agreement on a solution, and will do so in a logarithmic number of steps. This latter property ensures the approach can scale to very large systems. Finally, the work is validated on a number of case studies.
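The logarithmic-rounds property of gossip dissemination is easy to see in a simulation: in push gossip, every informed node tells one random peer per round, so the informed set roughly doubles until saturation, giving O(log n) rounds with high probability. The sketch below is an assumed toy model of dissemination, not the configuration-agreement protocol of the paper.

```python
import random

def push_gossip_rounds(n_nodes, seed=0):
    """Simulate push gossip: each round, every informed node sends the
    value to one uniformly random peer (possibly already informed, or
    itself -- harmless for the round count). Returns the number of
    rounds until all nodes are informed."""
    random.seed(seed)  # fixed seed: deterministic toy run
    informed = {0}
    rounds = 0
    while len(informed) < n_nodes:
        for _ in list(informed):
            informed.add(random.randrange(n_nodes))
        rounds += 1
    return rounds

r = push_gossip_rounds(1024)
```

Since the informed set can at most double each round, at least log2(1024) = 10 rounds are needed, and random collisions add only a few more; agreement protocols built on gossip inherit this scaling.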