Designing Resilient Manufacturing Systems In the Presence of Change
Economic and technical changes force manufacturers to redesign and enhance their operational systems. The implications of such changes within a complex system such as manufacturing and the supply chain can be very challenging. In particular, where the number of system elements and their connections results in a high level of complexity, the potential effects of a change can be costly with respect to delivery time and cost targets, since a change to one part or element of a design requires additional changes throughout the system.
Companies need to understand the characteristics of their manufacturing systems that make them resilient to change. Considered from a system perspective, the structure of the system, with its elements and connections, contributes greatly to the characteristics and behaviour of the system and hence to its potential resilience. A change prediction method can help to analyse these change properties and improve complex systems by focusing on the underlying structural elements and dependencies.
This thesis proposes a novel system change method that enables a review of the current manufacturing system and shows how to design a more robust or adaptable system that addresses resilience. The method combines matrix-based approaches with methods that assess the interactions between elements of the product and its manufacturing process, in order to understand the risk of changes propagating through the system. Risk assessment across the layers of a system can give valuable insight into how an element change interacts with the rest of the system. The goal of this thesis is to contribute to a fundamental understanding of manufacturing system resilience by developing a method to evaluate change capability, performance robustness, and adaptability, and thereby achieve high resilience.
Funders: Universal Oil Products (a Honeywell Company); Laing O'Rourke
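The matrix-based reasoning behind such a change prediction method can be illustrated with a minimal sketch, assuming hypothetical system elements and likelihood/impact values (none of which come from the thesis itself): direct change risks between elements are combined with one-step indirect risks propagated through the dependency structure.

# Minimal sketch: change-risk propagation over a dependency structure
# (hypothetical elements and values; not the thesis's actual method).
import numpy as np

elements = ["fixture", "spindle", "controller"]
# likelihood[i, j]: probability that a change to element j forces a change to i;
# impact[i, j]: fraction of redesign work such a change would cause.
likelihood = np.array([[0.0, 0.6, 0.1],
                       [0.3, 0.0, 0.5],
                       [0.1, 0.4, 0.0]])
impact = np.array([[0.0, 0.5, 0.2],
                   [0.4, 0.0, 0.6],
                   [0.2, 0.3, 0.0]])

direct_risk = likelihood * impact          # risk of direct propagation
indirect_risk = direct_risk @ direct_risk  # change reaches i via an intermediate element
combined_risk = 1 - (1 - direct_risk) * (1 - indirect_risk)  # either path may fire

for i, receiver in enumerate(elements):
    for j, source in enumerate(elements):
        if i != j:
            print(f"change to {source} -> risk to {receiver}: {combined_risk[i, j]:.2f}")

In a full change prediction method the propagation would be computed over paths of all lengths; the two-step combination here is only meant to show the underlying matrix mechanics.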
Analysis of the fluidization behaviour and application of a novel spouted bed apparatus for spray granulation and coating
Spouted beds are well known for their good mixing of the solid phase and for their intensive heat and mass transfer between the fluid phase and the solid phase. Nearly isothermal conditions are achieved, which is advantageous for the treatment of granular solid materials in granulation, agglomeration, or coating processes. In this work the hydrodynamic behaviour of a novel spouted bed apparatus with two horizontal, slit-shaped gas inlets is investigated by high-frequency recordings of the gas-phase pressure fluctuations over the entire bed. The hydrodynamically stable operating domain, which is important for operating the apparatus, is identified and depicted in the Re-G-Ar diagram of Mitev [1]. Another focus of this work is the simulation of the spouting process by application of a continuum approach in FLUENT 6.2. The effect of frictional stresses on the hydrodynamic behaviour is examined by performing simulations with and without consideration of friction. The angle of internal friction φ in Schaeffer's [10] model is varied and the simulation results are compared with experiments. The influence of friction was found to be small when applying the rather simple, empirical frictional viscosity model of Schaeffer [10], which is based on soil-mechanics principles; the simulation results obtained while neglecting friction were similar to those obtained with friction included. Another part of this work is the industrial application of the novel spouted bed in granulation and coating processes. Compared to classical fluidized beds, a much narrower particle size distribution, a higher yield, and a higher product quality were obtained in the novel spouted bed.
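For reference, the frictional viscosity in Schaeffer's model, as commonly implemented in two-fluid CFD codes, relates the frictional stress to the solids pressure p_s and the angle of internal friction φ; this is a sketch of the standard form, not necessarily the exact variant used in this work:

\mu_{s,\mathrm{fr}} = \frac{p_s \sin\phi}{2\sqrt{I_{2D}}}

where I_{2D} is the second invariant of the deviatoric strain-rate tensor of the solid phase. Varying φ, as done in this work, scales the frictional contribution directly through the sin φ factor.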
Application of expert systems in project management decision aiding
The feasibility of developing an expert-system-based project management decision aid to enhance the performance of NASA project managers was assessed. The research effort included extensive literature reviews in the areas of project management, project management decision aiding, expert systems technology, and human-computer interface engineering. The literature reviews were augmented by focused interviews with NASA managers. Time estimation for project scheduling was identified as the target activity for decision augmentation, and a design was developed for an Integrated NASA System for Intelligent Time Estimation (INSITE). The proposed INSITE design was judged feasible with a low level of risk. A partial proof-of-concept experiment was performed and was successful. Specific conclusions drawn from the research and analyses are included. The INSITE concept is potentially applicable in any management sphere, commercial or government, where time estimation is required for project scheduling. As project scheduling is a nearly universal management activity, the range of possibilities is considerable. The INSITE concept also holds potential for enhancing other management tasks, especially in areas such as cost estimation, where estimation-by-analogy is already a proven method.
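Estimation-by-analogy, which the abstract cites as a proven method, can be illustrated with a minimal sketch (hypothetical project features and durations; not the actual INSITE design): retrieve the k most similar past projects and average their durations.

# Minimal sketch of estimation-by-analogy for project duration
# (hypothetical data; not the actual INSITE design).
from math import sqrt

# Each past project: feature vector (team size, scope, complexity) and actual duration in weeks.
history = [
    ((5, 12, 0.4), 30),
    ((3, 8, 0.7), 22),
    ((8, 20, 0.9), 55),
    ((4, 10, 0.5), 26),
]

def estimate_duration(features, k=2):
    """Average the durations of the k most similar past projects."""
    def distance(a, b):
        return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(history, key=lambda rec: distance(rec[0], features))[:k]
    return sum(duration for _, duration in nearest) / k

print(estimate_duration((4, 11, 0.5)))  # averages the two nearest projects -> 28.0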
Magic-State Functional Units: Mapping and Scheduling Multi-Level Distillation Circuits for Fault-Tolerant Quantum Architectures
Quantum computers have recently made great strides and are on a long-term path towards useful fault-tolerant computation. A dominant overhead in fault-tolerant quantum computation is the production of high-fidelity encoded qubits, called magic states, which enable reliable error-corrected computation. We present the first detailed designs of hardware functional units that implement space-time optimized magic-state factories for surface code error-corrected machines. Interactions among distant qubits require surface code braids (physical pathways on chip) which must be routed. Magic-state factories are circuits composed of a complex set of braids that is more difficult to route than the quantum circuits considered in previous work [1]. This paper explores the impact of scheduling techniques, such as gate reordering and qubit renaming, and we propose two novel mapping techniques: braid repulsion and dipole moment braid rotation. We combine these techniques with graph partitioning and community detection algorithms, and further introduce a stitching algorithm for mapping subgraphs onto a physical machine. Our results show a factor of 5.64 reduction in space-time volume compared to the best-known previous designs for magic-state factories.
Comment: 13 pages, 10 figures
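The role of community detection in such a mapping flow can be illustrated with a minimal sketch (hypothetical qubit-interaction data; this is not the paper's mapping algorithm): qubits that interact frequently are grouped so that most braids stay within one on-chip region.

# Minimal sketch: group frequently-interacting qubits with community detection
# (hypothetical interaction data; not the paper's mapping algorithm).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Nodes are logical qubits; edge weights count two-qubit interactions.
g = nx.Graph()
g.add_weighted_edges_from([
    ("q0", "q1", 9), ("q1", "q2", 7), ("q0", "q2", 5),  # tightly coupled cluster
    ("q3", "q4", 8), ("q4", "q5", 6),                   # second cluster
    ("q2", "q3", 1),                                    # weak inter-cluster link
])

# Each community becomes a candidate region to co-locate on the chip, so most
# braids stay within a region and routing congestion drops.
for i, community in enumerate(greedy_modularity_communities(g, weight="weight")):
    print(f"region {i}: {sorted(community)}")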
Solar Sources of Interplanetary Magnetic Clouds Leading to Helicity Prediction
This study identifies the solar origins of magnetic clouds that are observed at 1 AU and predicts the helical handedness of these clouds from solar surface magnetic fields. We started with the magnetic clouds listed by the Magnetic Field Investigation (MFI) team supporting NASA's WIND spacecraft, in what is known as the MFI table, and worked backwards in time to identify the solar events that produced these clouds. Our methods utilize magnetograms from the Helioseismic and Magnetic Imager (HMI) instrument on the Solar Dynamics Observatory (SDO) spacecraft, so we could only analyze MFI entries after the beginning of 2011. This start date and the end date of the MFI table gave us 37 cases to study. Of these we were able to associate only eight surface events with clouds detected by WIND at 1 AU. We developed a simple algorithm for predicting the cloud helicity which gave the correct handedness in all eight cases. The algorithm is based on the conceptual model that an ejected flux tube has two magnetic origination points at the positions of the strongest radial magnetic field regions of opposite polarity, near the places where the ejected arches end at the solar surface. We were unable to find source events for the remaining 29 cases, for one of the following reasons: lack of a halo or partial-halo CME in an appropriate time window; lack of magnetic and/or filament activity in the proper part of the solar disk; or the event was too far from disk center. The occurrence of a flare was not a requirement for making an identification, but in fact flares, often weak, did occur in seven of the eight cases.
Comment: 18 pages, 8 figures, 2 tables
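The footpoint-identification step of the described conceptual model amounts to locating the strongest opposite-polarity radial-field regions in a magnetogram. A minimal sketch on a synthetic array (not the authors' pipeline):

# Minimal sketch: find the strongest positive and negative radial-field
# pixels in a magnetogram (synthetic data; not the authors' pipeline).
import numpy as np

rng = np.random.default_rng(0)
b_radial = rng.normal(0.0, 50.0, size=(128, 128))  # synthetic B_r map, in gauss

# Positions of the strongest opposite-polarity fields: candidate
# origination points of the ejected flux tube's two legs.
pos_footpoint = np.unravel_index(np.argmax(b_radial), b_radial.shape)
neg_footpoint = np.unravel_index(np.argmin(b_radial), b_radial.shape)
print("positive-polarity footpoint:", pos_footpoint)
print("negative-polarity footpoint:", neg_footpoint)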
Design Space Exploration in Cyber-Physical Systems
Cyber-physical systems (CPS) integrate a variety of engineering areas such as control, mechanical, and computer engineering in a holistic design effort. While interdependencies between the different disciplines are key attributes of CPS design science, little is known about the impact of design decisions in the cyber part on the overall system qualities. To investigate these interdependencies, this paper proposes a simulation-based Design Space Exploration (DSE) framework that considers detailed cyber system parameters such as cache size, bus width, and voltage levels in addition to the physical and control parameters of the CPS. We propose an exploration algorithm that traverses the parameter configurations of the cyber-physical sub-systems in order to approximate the Pareto-optimal design points with regard to the trade-offs among the design objectives, such as energy consumption and control stability. We apply the proposed framework to a networked control system for an inverted-pendulum application. The presented holistic evaluation of the identified Pareto points reveals the presence of non-trivial trade-offs imposed by the control, physical, and detailed cyber parameters. For instance, the identified energy- and control-optimal design points comprise configurations with a wide range of CPU speeds, sample times, and cache configurations following non-trivial zig-zag patterns. The proposed framework can identify and manage these trade-offs and is, as a result, an imperative first step towards automating the search for superior CPS configurations.
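Approximating Pareto-optimal design points, as the proposed DSE framework does, reduces at its core to filtering out dominated configurations. A minimal sketch with two objectives to minimise, using hypothetical energy and settling-time values rather than the paper's results:

# Minimal sketch: keep the Pareto-optimal design points when minimising
# two objectives (hypothetical values; not the paper's DSE algorithm).

# Each candidate: (configuration name, energy in mJ, control settling time in ms).
candidates = [
    ("slow-cpu/small-cache", 12.0, 40.0),
    ("fast-cpu/small-cache", 20.0, 18.0),
    ("fast-cpu/big-cache", 25.0, 17.0),
    ("slow-cpu/big-cache", 14.0, 45.0),  # dominated by slow-cpu/small-cache
]

def dominates(a, b):
    """a dominates b if a is no worse in both objectives and better in one."""
    return a[1] <= b[1] and a[2] <= b[2] and (a[1] < b[1] or a[2] < b[2])

pareto = [c for c in candidates
          if not any(dominates(other, c) for other in candidates)]
print([name for name, _, _ in pareto])  # the non-dominated zig-zag front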
An interaction paradigm for impact analysis
The aerospace industry is concerned with huge software projects. Software development is an evolving process resulting in larger and larger software systems. As systems grow in size, they become more complex and hence harder to maintain. It thus appears that the maintenance of software systems is the most expensive part of the software life-cycle, often consuming 50-90% of a project's total budget. Yet while much research has been carried out on the problems of program and system development, very little work has been done on the problem of maintaining developed programs. It is therefore essential to improve the software maintenance process and the environment for maintenance. Historically, the term Software Maintenance has been applied to the process of modifying a software program after it has been delivered and during its lifetime. The high cost of software over its life cycle can be attributed largely to software maintenance activities, and a major part of these activities is dealing with modifications of the software. These modifications may involve changes at any level of abstraction of a software system (i.e., design, specification, code, ...). Software Maintenance has to deal with modifications which can have severe Ripple Effects at other points in the software system; Impact Analysis addresses this problem and attempts to localize these Ripple Effects.
In this thesis the Software Maintenance process, and more specifically the Impact Analysis process, is examined. The different parts of the implementation of the Impact Analysis System are explained. The main results of the thesis are the generation of dependencies and the graph tool used to visualize these dependencies, as well as the impacts on the general dependency graph for impact analysis purposes.
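The core computation in such impact analysis is determining which parts of the system are reachable from a changed element in the dependency graph. A minimal sketch with hypothetical module names (not the thesis's dependency generator or graph tool):

# Minimal sketch: transitive impact of a change over a dependency graph
# (hypothetical modules; not the thesis's dependency generator).
from collections import deque

# dependents[x] = modules that depend on x, i.e. edges along which a change ripples.
dependents = {
    "parser": ["type_checker", "formatter"],
    "type_checker": ["code_gen"],
    "formatter": [],
    "code_gen": [],
}

def impact_set(changed):
    """Breadth-first traversal collecting everything a change can ripple to."""
    impacted, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dep in dependents.get(node, []):
            if dep not in impacted:
                impacted.add(dep)
                queue.append(dep)
    return impacted

print(impact_set("parser"))  # -> {'type_checker', 'formatter', 'code_gen'}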
Autonomous Architectural Assembly And Adaptation
An increasingly common solution for systems which are deployed in unpredictable or dangerous environments is to provide the system with an autonomous or self-managing capability. This capability permits the software of the system to adapt to the environmental conditions encountered at runtime by deciding what changes need to be made to the system's behaviour in order to continue meeting the requirements imposed by the designer. The chief advantage of this approach comes from a reduced reliance on the brittle assumptions made at design time.
In this work, we describe mechanisms for adapting the software architecture of a system using a declarative expression of the functional requirements (derived from goals), structural constraints, and preferences over the space of non-functional properties possessed by the components of the system. The declarative approach places this work in contrast to existing schemes which require more fine-grained, often procedural, specifications of how to perform adaptations. Our algorithm for assembling and re-assembling configurations chooses between solutions that meet both the functional requirements and the structural constraints by comparing the non-functional properties of the selected components against the designer's preferences between, for example, a high-performance or a highly reliable solution.
In addition to the centralised algorithm, we show how the approach can be applied to a distributed system with no central or master node that is aware of the full space of solutions. We use a gossip protocol as a mechanism by which peer nodes can propose what they think the component configuration is (or should be). Gossip ensures that the nodes will reach agreement on a solution, and will do so in a logarithmic number of steps. This latter property ensures the approach can scale to very large systems. Finally, the work is validated on a number of case studies …
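The gossip-based agreement described above can be illustrated with a minimal sketch (hypothetical configuration scores; not the thesis's actual protocol): every node repeatedly exchanges its best-known configuration with a random peer, so the best-scoring proposal spreads epidemically and agreement is reached in roughly log2(n) rounds.

# Minimal sketch: gossip rounds in which nodes adopt the better-scoring
# configuration of a random peer (hypothetical scores; not the thesis protocol).
import math
import random

random.seed(1)
n = 64
# Each node starts by proposing its own candidate configuration, scored by
# how well it satisfies the designer's preferences (higher is better).
beliefs = [("config-%d" % i, random.random()) for i in range(n)]

rounds = 0
while len({name for name, _ in beliefs}) > 1:
    rounds += 1
    for node in range(n):
        peer = random.randrange(n)
        # Both parties keep whichever configuration scores higher.
        best = max(beliefs[node], beliefs[peer], key=lambda b: b[1])
        beliefs[node] = beliefs[peer] = best

print("agreed on", beliefs[0][0], "after", rounds, "rounds",
      "(log2(n) =", math.log2(n), ")")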