    Semi-optimal Practicable Algorithmic Cooling

    Algorithmic cooling (AC) of spins applies entropy-manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. AC of nuclear spins has been demonstrated experimentally and may contribute to nuclear magnetic resonance (NMR) spectroscopy. Several cooling algorithms have been suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semi-optimal practicable AC (SOPAC), wherein a few cycles (typically 2-6) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. The new algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
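
    To make the recursion concrete, here is a minimal Python sketch of the ideal polarization boost per recursive level, assuming the standard 3-spin basic compression step from the algorithmic cooling literature (three spins of polarization ε yield one spin of polarization (3ε − ε³)/2). It deliberately ignores the spin-count bookkeeping and reset cycles in which PAC, SOPAC and exhaustive AC actually differ, so its level counts are not the paper's spin counts.

```python
# Toy illustration of recursive polarization boosting in algorithmic cooling.
# Assumes the standard 3-spin basic compression step, where three spins of
# polarization eps yield one spin of polarization (3*eps - eps**3) / 2.
# This sketch ignores spin-count accounting and reset (heat-bath) cycles,
# which is exactly where PAC, SOPAC, and exhaustive AC differ.

def compress3(eps: float) -> float:
    """Polarization of the target spin after one 3-spin compression."""
    return (3 * eps - eps**3) / 2

def levels_to_reach(eps0: float, target: float) -> int:
    """Number of ideal recursion levels needed to boost eps0 up to target."""
    eps, levels = eps0, 0
    while eps < target:
        eps = compress3(eps)
        levels += 1
    return levels

if __name__ == "__main__":
    for eps0 in (0.01, 0.10):
        print(f"start {eps0:.0%}: {levels_to_reach(eps0, 0.60)} levels to reach 60%")
```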

    Thermomechanical modelling and simulation of laser powder bed fusion processes

    The present work deals with a novel micromechanically motivated framework for the modelling and simulation of laser powder bed fusion (LPBF) processes. LPBF processes belong to additive manufacturing (AM), which allows the layer-wise manufacturing of components. (Metallic) particles of a powder layer are selectively molten by a laser beam to construct a part. This opens up innovative possibilities in terms of design, structure, material combinations and custom-made parts. Due to the high temperature input, complex thermal, mechanical and metallurgical phenomena occur, including phase changes from powder to molten to re-solidified material. These high-temperature cycles of rapid heating and cooling cause diverse defects such as voids, warpage and residual stresses. New approaches are necessary in order to better predict the various defects of a workpiece manufactured with LPBF. The first focus of this work is set on developing a physically well-motivated material model that is thermodynamically consistent, based on the minimisation of the free energy density. This model is then applied to the small scale of a single melt track.
Secondly, a multiscale approach is developed combining the phase transformation model with the inherent strain (IS) method to simulate a complete part. This represents a reasonable compromise between physical accuracy and computational time. For this purpose, a fully thermomechanically coupled framework is employed using the commercial finite element programme Abaqus. The material used for the simulations is the α-β titanium alloy Ti6Al4V, which develops a different microstructure composition depending on the cooling rate. Therefore, in the last part of the work, a solid-state phase transformation approach with a novel dissipation function is presented in order to be able to model the respective continuous cooling transformation diagram. The thermodynamically and physically sound model is then applied to LPBF temperature profiles at the local scale.
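
As a loose illustration of cooling-rate-dependent microstructure evolution, the sketch below uses classical JMAK (Johnson-Mehl-Avrami-Kolmogorov) kinetics with a critical-cooling-rate switch. This is a textbook caricature, not the thesis's dissipation-function model, and all parameter values are invented for illustration.

```python
# Toy JMAK (Johnson-Mehl-Avrami-Kolmogorov) kinetics for solid-state phase
# transformation as a function of cooling rate. This is a standard textbook
# caricature, NOT the dissipation-function model of the thesis; the rate
# constant K, Avrami exponent N, and critical cooling rate are illustrative.
import math

K = 0.05          # rate constant [1/s^N], hypothetical
N = 2.5           # Avrami exponent, hypothetical
CRIT_RATE = 400.0 # [K/s] cooling rate above which diffusional transformation
                  # is assumed suppressed in favour of martensite (illustrative)

def transformed_fraction(hold_time_s: float) -> float:
    """JMAK fraction X(t) = 1 - exp(-K * t^N) for an isothermal hold."""
    return 1.0 - math.exp(-K * hold_time_s**N)

def microstructure(cooling_rate: float, hold_time_s: float) -> str:
    if cooling_rate > CRIT_RATE:
        return "martensitic alpha' (diffusional transformation suppressed)"
    x = transformed_fraction(hold_time_s)
    return f"diffusional alpha fraction ~ {x:.2f}"

for rate in (10.0, 100.0, 1000.0):  # cooling rates [K/s]
    print(f"cooling at {rate:6.0f} K/s ->", microstructure(rate, hold_time_s=5.0))
```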

    Algorithm Selection in Auction-based Allocation of Cloud Computing Resources


    Coherent control of nuclear and electron spins for quantum information processing

    The ability to perform quantum error correction (QEC) for arbitrarily many cycles is a significant challenge for scalable quantum information processing (QIP). Key requirements for multiple rounds of QEC are a high degree of quantum control, the ability to efficiently characterize both intrinsic and extrinsic noise, and the ability to dynamically and efficiently extract entropy from ancilla qubits. Nuclear magnetic resonance (NMR) based quantum devices have demonstrated high control fidelity with up to 12 qubits, and noise characterization can be performed using an efficient protocol known as randomized benchmarking. One of the remaining challenges with NMR systems is that qubit initialization is normally only attainable via thermal equilibration, which results in very low polarizations under reasonable experimental conditions. Moving to electron-nuclear coupled spin systems in a single crystal is a promising solution to the ancilla qubit preparation problem. One obvious advantage of incorporating electron spins comes from the higher gyromagnetic ratio of the electron, which yields a thermal spin polarization about three orders of magnitude larger than that of nuclear spins under the same experimental conditions. In addition, fast control of nuclear spins is possible provided an appropriate level of anisotropic hyperfine interaction strength. The nuclear spins can even be polarized beyond the thermal electron spin polarization using a technique called heat-bath algorithmic cooling (HBAC). With these theoretical ideas in hand, the next step is to develop classical instrumentation to control electron-nuclear coupled systems and accomplish high-fidelity coherent control. Noise characterization is also necessary for benchmarking the quality of control over the electron-nuclear spin system. I first present example applications of NMR QIP with a small number of qubits: testing a foundational question in quantum mechanics and measuring the spectral density of noise in a quantum system. Then I report on our home-built X-band electron spin resonance (ESR) spectrometer and progress in achieving high-fidelity coherent control of electron and nuclear spins for QIP. We focus on implementing nuclear spin manipulation via the anisotropic hyperfine interaction and microwave (mw) control, but the discussion also includes electron nuclear double resonance (ENDOR) control techniques. We perform realistic algorithmic simulations to show that experimental cooling of nuclear spins below the thermal electron spin temperature is feasible, and to present electron-nuclear spin systems as promising testbeds for scalable QIP.
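
    The claimed three-orders-of-magnitude advantage of electron spins can be checked with a back-of-envelope calculation: for a spin-1/2 at field B and temperature T, the thermal polarization is tanh(ħγB / 2k_BT). The sketch below assumes illustrative X-band conditions (B ≈ 0.35 T, room temperature).

```python
# Back-of-envelope comparison of thermal spin polarizations, illustrating why
# electron spins make better ancilla reservoirs than nuclear spins: for a
# spin-1/2 at field B and temperature T, P = tanh(hbar * gamma * B / (2 kB T)).
# X-band conditions (B ~ 0.35 T, room temperature) are assumed for illustration.
import math

HBAR = 1.054571817e-34  # reduced Planck constant [J*s]
KB = 1.380649e-23       # Boltzmann constant [J/K]
GAMMA_E = 2 * math.pi * 28.024951e9  # electron gyromagnetic ratio [rad/s/T]
GAMMA_H = 2 * math.pi * 42.577478e6  # proton gyromagnetic ratio [rad/s/T]

def polarization(gamma: float, b_field: float, temp: float) -> float:
    """Thermal polarization of a spin-1/2 with gyromagnetic ratio gamma."""
    return math.tanh(HBAR * gamma * b_field / (2 * KB * temp))

B, T = 0.35, 300.0  # X-band ESR field [T], room temperature [K]
pe = polarization(GAMMA_E, B, T)
ph = polarization(GAMMA_H, B, T)
print(f"electron: {pe:.2e}, proton: {ph:.2e}, ratio ~ {pe/ph:.0f}")
```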

    Representation, Characterization, and Mitigation of Noise in Quantum Processors

    Quantum computers have the potential to outperform classical computers on several families of important problems and to revolutionize our understanding of computational models. However, the presence of noise deteriorates the output quality of near-term quantum computers and may even offset their advantage over classical computers. The study of noise in near-term quantum devices has thus become an important field of research in the past years. This thesis addresses several topics related to this subject, including representing, quantifying, and mitigating noise in quantum processors. To study noise in quantum processors, it is first necessary to ask how noise can be accurately represented. This is the subject of Chapter 2. The conventional way is to use a gate-set, which includes mathematical objects assigned to each component of a quantum processor, and to compare individual gate-set elements to their ideal images. Here, we present some clarifications on this approach, pointing out that a gauge freedom exists in this representation. We demonstrate with experimentally relevant examples that there exist equally valid descriptions of the same experiment which distribute errors differently among the objects in a gate-set, leading to different error rates. This leads us to rethink the operational meaning of figures of merit for individual gate-set elements. We propose an alternative operational figure of merit for a gate-set, the mean variation error, and develop a protocol for measuring this figure. We performed numerical simulations of the mean variation error, illustrating how it suggests a potential issue with conventional randomized benchmarking approaches. Next, we study whether there exist sufficient assumptions under which the gauge ambiguity can be removed, allowing one to obtain error rates of individual gate-set elements in a more conventional manner. We focus on the subset of errors comprising state preparation and measurement (SPAM) errors, both subject to the gauge ambiguity issue. In Chapter 3, we provide a sufficient assumption that allows a separate SPAM error characterization and propose a protocol that achieves this in the case of ideal quantum gates. In reality, where quantum gates are imperfect, we derive bounds on the estimated SPAM error rates based on gate error measures which can be estimated independently of SPAM processes. We tested the protocol on a publicly available quantum processor and demonstrated its validity by comparing our results with simulations. In Chapter 4, we present another protocol capable of separately characterizing SPAM errors, based on a different principle, algorithmic cooling (AC). We propose an alternative AC method called measurement-based algorithmic cooling (MBAC), which assumes the ability to perform (potentially imperfect) projective measurements on individual qubits and is available on various modern quantum computing platforms. Cooling reduces the error on initial states while keeping the measurement operations untouched, thereby breaking the gauge symmetry between the two. We demonstrate that MBAC can significantly reduce state preparation error under realistic assumptions, with a small overhead that can be upper bounded by measurable quantities. Thus, our results can be a valuable tool not only for benchmarking near-term quantum processors, but also for improving the quality of state preparation processes in an algorithmic manner.
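
    A hedged toy model of the measurement-based cooling idea (not the thesis's actual MBAC protocol): classically simulating a projective measurement followed by a conditional flip shows how the effective preparation error drops from the raw thermal value to roughly the measurement error. Both error rates below are invented for illustration.

```python
# Toy model of measurement-based state-preparation cooling: a projective
# single-qubit measurement, followed by a conditional bit flip, replaces a
# poorly prepared |0> with a state limited mainly by the measurement error.
# Error rates are illustrative assumptions, not characterized device values.
import random

P_PREP = 0.10  # probability the raw initial state is |1> instead of |0>
P_MEAS = 0.01  # probability the projective measurement reports the wrong bit

def prepare_raw() -> int:
    """Thermal-like preparation: returns the actual bit value (1 = error)."""
    return 1 if random.random() < P_PREP else 0

def cooled_preparation() -> int:
    """Measure the qubit, then flip it to 0 whenever the outcome reads 1."""
    actual = prepare_raw()
    outcome = actual if random.random() >= P_MEAS else 1 - actual
    if outcome == 1:
        actual = 1 - actual  # conditional X gate (assumed perfect here)
    return actual

trials = 100_000
raw_err = sum(prepare_raw() for _ in range(trials)) / trials
cooled_err = sum(cooled_preparation() for _ in range(trials)) / trials
print(f"raw preparation error ~ {raw_err:.3f}, after cooling ~ {cooled_err:.3f}")
```
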
The capability of AC to improve initial state quality inspired us to perform a parallel study on the thermodynamic cost of AC protocols. The motivation is that, since cooling a subset of qubits may result in a finite energy increase in its environment, applying these protocols on temperature-sensitive platforms could negatively impact overall stability. Meanwhile, previous studies on AC have largely focused on subjects like cooling limits, without paying attention to thermodynamics. Understanding the thermodynamic cost of AC is thus of both theoretical and practical interest. These results are presented in Chapter 5. After reviewing the procedures, cooling limits, and target-state evolution of various AC protocols, we propose two efficiency measures based on the amount of work required or the amount of heat released. We show how these measures are related to each other and how they can be computed for a given protocol. We then compare the previously studied protocols using both measures, providing suggestions on which ones to use when these protocols are to be carried out experimentally. We also propose improved protocols that are energetically more favorable than the original proposals. Finally, in Chapter 6, we present a study on a different family of methods, called quantum error mitigation (QEM), aimed at reducing the effective noise level in near-term hardware. The principle behind various QEM approaches is to mimic the outputs of the ideal circuit one wants to implement using noisy hardware. These methods have recently become popular because many near-term hybrid quantum-classical algorithms involve only relatively shallow circuits and limited types of local measurements, implying a manageable cost of the data processing performed to alleviate the effect of noise. Using intuitions built upon classical and quantum communication scenarios, we clarify some fundamental distinctions between quantum error correction (QEC) and QEM. We then discuss the implications of noise invertibility for QEM, and give an explicit construction, called the Drazin inverse, for non-invertible noise, which is trace preserving while the commonly used Moore-Penrose pseudoinverse may not be. Finally, we study the consequences of having imperfect knowledge of the noise, and derive conditions under which noise can be reduced using QEM.
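
To illustrate QEM by noise inversion in the simplest invertible case, the sketch below mitigates a single-qubit depolarizing channel by applying the inverse of its Pauli transfer matrix in classical post-processing; the Drazin-inverse construction described above is what would replace the plain matrix inverse when the noise matrix is singular. The channel strength and expectation values are illustrative assumptions.

```python
# Toy quantum error mitigation by noise inversion: if the noise channel's
# Pauli transfer matrix N is invertible, ideal expectation values can be
# recovered by applying N^-1 in classical post-processing. This sketch covers
# only the invertible case (a depolarizing channel); a Drazin-type inverse is
# needed when N is singular.
import numpy as np

p = 0.2  # depolarizing probability, illustrative
# Single-qubit depolarizing channel in the Pauli basis (I, X, Y, Z)
N = np.diag([1.0, 1 - p, 1 - p, 1 - p])

ideal = np.array([1.0, 0.0, 0.0, 0.8])  # ideal (I, X, Y, Z) expectations
noisy = N @ ideal                       # what the noisy hardware reports
mitigated = np.linalg.inv(N) @ noisy    # classical post-processing

print("noisy Z expectation:    ", noisy[3])      # 0.64
print("mitigated Z expectation:", mitigated[3])  # 0.8 recovered
```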

    A study of the design expertise for plants handling hazardous materials


    Services on Multinationals Operating in Different Countries in Automation and Performance in Organizations as A New Way of Increasing Profit and Cutting Costs

    The thesis's main purpose is to focus on shared services in multinationals operating in different countries and on the automation process as a new way of increasing profit and cutting costs. At the same time, the effect of automation on employment is examined. The thesis project draws on papers that detail the above measures; these are combined, and the primary goal of the analysis is to illustrate that technology cannot substitute for people. The research also covers the methodology of study reports: what a study report is and what its various kinds are. Finally, it is shown that automation is efficient for businesses but cannot replace people, because creativity and the ability to develop new processes can never be automated. We chose AZADEA to support the research, conducting semi-structured interviews with the operations manager and HR team to show that, although the shared service process is being implemented, it remains important to retain staff.

    Towards Sustainable Freight Energy Management - Development of a Strategic Decision Support Tool

    Freight transportation, in its current shape and form, is on a highly unsustainable trajectory. Global demand for freight is ever increasing, while this demand is predominantly serviced by inefficient, fossil-fuel-dependent transportation options. The management of energy use in freight transportation has been identified as a significant opportunity to improve the sustainability of the freight sector. Given the vast number of energy mitigation measures and policies to choose from, decision-makers need support and guidance in selecting which policies to adopt; they are faced with a complex and demanding problem. These complexities result, in part, from the vast range, scope and extent of measures to be considered. A support tool therefore needs to encompass a suitable methodology for comparing proverbial apples to oranges in a fair and unbiased manner, even though developing one consistent assessment metric that can accommodate this level of diversity is problematic. Further to this, decision-makers need insight into the extent of implementation that is required for each measure. Because the level of implementation of each measure is variable, and the extent to which each adopted measure will be implemented in the network needs to be specified, the number of potential implementation combinations that decision-makers need to consider is infinite, adding further complexity to the problem. Freight energy management measures cannot, and should not, be evaluated in isolation: the knock-on effects of adopting one measure on the performance of other measures need to be considered. Measures are not all independent, and decision-makers need to take these dependencies and their ramifications into account. In addition, there is dimensionality to be accounted for in terms of each measure, because one measure can be applied in a variable manner across different components of the freight network; a unique and independent decision needs to be made on the application of a measure for each of these network components (for example, for each mode). Decisions on freight transportation impact all three traditional pillars of sustainability: social, environmental and economic. Measure impacts thus need to be assessed over multiple criteria. Decisions will affect a variety of stakeholders, and outcomes must be acceptable to a range of interested parties. Sustainability criteria are often in conflict with one another, implying that there are trade-offs to be negotiated by the decision-makers. Decision-makers thus need to propose system alterations, or a portfolio of system alterations, that achieve improvements in some sustainability respects whilst maintaining a balance between all other sustainability aspects. Moreover, the magnitude of a measure's impacts (positive or negative) on the sustainability criteria is variable, adding additional dimensionality to the problem. The aim of the research presented in this dissertation was to develop a decision support tool which addresses the complexities involved in the formulation of freight transport energy management strategies on behalf of the decision-makers, facilitating the development of holistic, sustainable and comprehensive freight management policy by government-level decision-makers. The Freight Transport Energy Management Tool (FTEMT) was developed in response to this research objective, using a standardised operations research approach as a roadmap for its development.
Following a standardised operations research approach to model development provides a structure in which stakeholder participation can be encouraged at all the key stages of the decision-making process; it offers a logical basis for proposing solutions and for assessing suggestions proposed by others; it ensures that the appraisal of alternative solutions is conducted in a logical, consistent and comprehensive manner against the full set of objectives; and it provides a means for assessing whether the implemented instruments have performed as predicted, enabling the improvement of the model being developed. The FTEMT can be classified as a simulation optimisation model, a combination of multi-objective optimisation and simulation. The simulation component provides a suitably accurate representation of the freight system and affords the ability to approximate the effect that measure implementation will have on the sustainability objectives, whilst the optimisation component provides the ability to effectively explore the decision space and reduces the number of alternative options (and, therefore, the complexity) that decision-makers need to consider. It is this simulation optimisation backbone that enables the FTEMT to address all the complexities surrounding the problem, so that the decision support it produces provides the information necessary for decision-makers to steer the freight transport sector towards true sustainability. Although this problem originates from the domain of sustainable transportation planning, the combination of operations research and transport modelling knowledge applied proved essential in developing a decision support tool able to generate adequate decision support for the problem. To demonstrate the use and usefulness of the decision support system developed, a fictitious case study version of the FTEMT was modelled and is discussed throughout this dissertation. Results from the case study implementation were used to verify and validate the tool, to demonstrate the decision support generated, and to illustrate how this decision support can be interpreted and incorporated into a decision-making process. Outputs from the case study proved the tool to be operationally valid, as it successfully achieved its stated objectives: the FTEMT unearths a Pareto set of solutions close to the true efficient frontier through the exploration of different energy management measure combinations. In short, the value of using the FTEMT to generate decision support is that it explores the decision space and reduces the number of decision alternatives that decision-makers need to consider to a manageable number of solutions, all of which represent harmonious measure combinations geared toward optimal performance across the entire spectrum of the problem objectives. These solutions are developed taking all the complexity issues surrounding the problem into account. Decision-makers can thus have confidence that accepting any one of the solutions proposed by the FTEMT will be a responsible and sound decision. As an additional benefit, the preferences and strategic priorities of the decision-makers can be factored in when selecting a preferred decision alternative for implementation.
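
As a caricature of this simulation optimisation idea, the hedged Python sketch below brute-force enumerates portfolios of invented energy management measures, scores each with a toy surrogate model of two conflicting objectives, and filters out the dominated portfolios. The measure names, scores and interaction term are illustrative assumptions, not outputs of the FTEMT, which uses multi-objective optimisation rather than exhaustive enumeration.

```python
# Toy version of the simulation-optimisation idea behind the FTEMT: enumerate
# combinations of energy management measures, score each with a simple
# surrogate "simulation" of two conflicting objectives (cost vs. emissions
# reduction), and keep only the Pareto-efficient portfolios. All numbers are
# invented for illustration.
from itertools import product

# (cost increase, emissions reduction) per measure -- hypothetical values
MEASURES = {"modal shift": (4.0, 5.0), "fleet renewal": (6.0, 4.0),
            "eco-driving": (1.0, 1.5), "electrification": (9.0, 8.0)}

def evaluate(portfolio):
    """Surrogate simulation: additive cost, diminishing-returns emissions."""
    cost = sum(MEASURES[m][0] for m in portfolio)
    emis = sum(MEASURES[m][1] for m in portfolio)
    # crude interaction effect: each extra measure slightly erodes the others
    return cost, emis * (0.9 ** max(0, len(portfolio) - 1))

portfolios = [tuple(m for m, used in zip(MEASURES, bits) if used)
              for bits in product([0, 1], repeat=len(MEASURES))]
scored = [(evaluate(pf), pf) for pf in portfolios]

# Pareto filter: drop portfolios dominated on both objectives
pareto = [((c, e), pf) for (c, e), pf in scored
          if not any(c2 <= c and e2 >= e and (c2, e2) != (c, e)
                     for (c2, e2), _ in scored)]
for (c, e), pf in sorted(pareto):
    print(f"cost {c:5.1f}, emissions reduction {e:5.2f}: {pf or ('do nothing',)}")
```
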
Decision-makers must debate the trade-offs between solutions and determine what they are willing to sacrifice to realise which gains, but they are afforded the opportunity to select solutions that show the greatest alignment with their official mandates. The structure of the FTEMT developed and described in this dissertation presents a practical methodology for producing decision support for the development of sound freight energy management policy. This work serves as a basis to stimulate further scholarship and expands the collective knowledge on the topic by proposing an approach that is able to address the full scale of complexities involved in the production of such decision support.