
    Numerical methods for shape optimization of photonic nanostructures


    Analysis of Reactor Simulations Using Surrogate Models.

    The relatively recent abundance of computing resources has driven computational scientists to build more complex and approximation-free computer models of physical phenomena. Oftentimes, multiple high-fidelity computer codes are coupled together in the hope of improving the predictive power of simulations with respect to experimental data. To improve the predictive capacity of computer codes, experimental data should be folded back into the parameters processed by the codes through optimization and calibration algorithms. However, applying such algorithms may be prohibitive since they generally require thousands of evaluations of computationally expensive, coupled, multiphysics codes. Surrogate models for expensive computer codes have shown promise toward making optimization and calibration feasible. In this thesis, non-intrusive surrogate-building techniques are investigated for their applicability in nuclear engineering applications. Specifically, Kriging and the coupling of the anchored-ANOVA decomposition with collocation are utilized as surrogate-building approaches. Initially, these approaches are applied and naively tested on simple reactor applications with analytic solutions. Ultimately, Kriging is applied to construct a surrogate for analyzing fission gas release during the Risø AN3 power ramp experiment using the fuel performance modeling code Bison. To this end, Kriging is extended from building surrogates for scalar quantities to entire time series using principal component analysis. A surrogate model is built for fission gas kinetics time series, and the true values of relevant parameters are inferred by folding experimental data with the surrogate. Sensitivity analysis is also performed on the fission gas release parameters to gain insight into the underlying physics.
    PhD, Nuclear Engineering and Radiological Sciences, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/111485/1/yankovai_1.pd
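
    A minimal sketch of the general approach described above, not the Bison/Risø AN3 workflow itself: it assumes scikit-learn's Gaussian-process regressor as the Kriging implementation, uses a cheap synthetic simulator in place of an expensive multiphysics code, and shows how principal component analysis extends a scalar surrogate to an entire time series.

```python
# Minimal sketch: Kriging (Gaussian-process) surrogate for a time series via
# principal component analysis. A toy "simulator" stands in for an expensive
# code such as Bison; this is an illustration, not the thesis workflow.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)                  # time grid of the output series

def toy_simulator(theta):
    """Cheap stand-in for an expensive transient simulation."""
    rate, onset = theta
    return 1.0 / (1.0 + np.exp(-rate * (t - onset)))

# 1) Sample the parameter space and run the "simulator".
thetas = rng.uniform([5.0, 0.2], [20.0, 0.8], size=(50, 2))
Y = np.array([toy_simulator(th) for th in thetas])    # shape (50, 200)

# 2) Compress each time series to a few principal-component scores.
pca = PCA(n_components=3)
scores = pca.fit_transform(Y)                         # shape (50, 3)

# 3) Fit one Kriging model per retained component score.
gps = [GaussianProcessRegressor(kernel=RBF(length_scale=[5.0, 0.2]),
                                normalize_y=True).fit(thetas, scores[:, k])
       for k in range(scores.shape[1])]

def surrogate(theta):
    """Predict the full time series at a new parameter point."""
    s = np.array([gp.predict(np.atleast_2d(theta))[0] for gp in gps])
    return pca.inverse_transform(s.reshape(1, -1)).ravel()

# Quick check against the true toy model at an unseen point.
theta_new = np.array([12.0, 0.5])
print(np.max(np.abs(surrogate(theta_new) - toy_simulator(theta_new))))
```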

    Fundamental Carrier-Envelope Phase Noise Limitations during Pulse Formation and Detection

    The difference between the position of the maximum of the carrier wave of a laser pulse and that of its intensity envelope is termed the carrier-envelope phase (CEP). In recent decades, the control and stabilization of this parameter have greatly improved, enabling many applications in research fields that rely on CEP-stable pulses, such as attosecond science and optical frequency metrology. Further progress in these fields depends strongly on minimizing the CEP noise that restricts stabilization performance. While the CEP of most high-repetition-rate, low-energy laser oscillators has been stabilized to remarkable precision, some types of oscillators show extensive noise that inhibits precise stabilization. The CEP stabilization performance of low-repetition-rate, high-peak-power amplified laser systems also remains limited by noise, which is believed to stem mainly from the CEP detection process. In this thesis, the origins of the CEP noise within four oscillators, as well as the noise induced by the measurement of the CEP of amplified pulses, are investigated. In the first part, the properties of the CEP noise of one Ti:sapphire oscillator and three different fiber oscillators are extracted by analyzing the unstabilized CEP traces by means of time-resolved correlation analysis of carrier-envelope amplitude and phase noise, as well as by methods that reveal the underlying statistical noise properties. In the second part, the origin of CEP noise induced by the measurement of the CEP of amplified pulses is investigated by comparing several different CEP detection designs based on f-2f interferometry. These detection setups differ in the employed sources of spectral broadening as well as the frequency-doubling media, both necessary steps to measure the CEP. The results in both parts of this thesis show that white quantum noise dominates most CEP measurements. In one particular fiber oscillator, the strong white noise is found to be a result of a correlating mechanism within the employed SESAM. During amplifier CEP detection, the CEP noise is found to originate only to a marginal degree from the number of photons detected during the measurement, which excludes shot noise as a limiting source. Instead, the analysis reveals that the origin of the observed strong white noise can be interpreted as a loss of coherence during detection. This type of coherence is termed here intra-pulse coherence and describes the phase transfer within f-2f interferometry. Its degradation is a result of amplitude-to-phase coupling during the spectral broadening process, which leads to pulse-to-pulse fluctuations of the phases at the edges of the extended spectrum. Numerical simulations support the concept of intra-pulse coherence degradation and show that the degradation is substantially stronger during plasma-driven spectral broadening than during self-phase-modulation-dominated spectral broadening. This difference in degradation also explains the much stronger CEP noise typically observed in amplified systems as compared to oscillators, as the former typically rely on filamentation-based and hence plasma-dominated spectral broadening for CEP detection.
The concept of intra-pulse coherence constitutes a novel measure to assess the suitability of a spectral broadening mechanism for application in active as well as passive CEP stabilization schemes and provides new strategies to reduce the impact of CEP detection on the overall stabilization performance of most lasers.
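
    As a generic, hedged illustration of quantifying CEP noise of the kind discussed above (this is not the thesis's time-resolved correlation analysis), the sketch below estimates the integrated rms CEP jitter of a synthetic, white-noise-dominated pulse-to-pulse CEP trace from its phase-noise power spectral density; the repetition rate, carrier wavelength, and noise level are assumed values.

```python
# Minimal sketch: integrated CEP jitter from a (here synthetic) pulse-to-pulse
# CEP trace via its phase-noise power spectral density. Generic white-noise
# analysis only; repetition rate, wavelength, and noise level are assumptions.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

f_rep = 100e3                                  # assumed repetition rate (Hz)
n_pulses = 2**18
rng = np.random.default_rng(1)

# Synthetic unstabilized CEP trace: white phase noise, 50 mrad rms per pulse.
cep = rng.normal(0.0, 0.05, n_pulses)          # radians

# One-sided phase-noise PSD (rad^2/Hz) estimated with Welch's method.
freq, psd = welch(cep, fs=f_rep, nperseg=4096)

# Integrate the PSD from f_lo to Nyquist to get the rms phase jitter (rad),
# then convert to a timing jitter by dividing by the assumed 800 nm carrier
# angular frequency.
f_lo = 10.0
mask = freq >= f_lo
rms_phase = np.sqrt(trapezoid(psd[mask], freq[mask]))
carrier_omega = 2 * np.pi * 3e8 / 800e-9       # rad/s
print(f"integrated CEP jitter: {rms_phase * 1e3:.1f} mrad "
      f"({rms_phase / carrier_omega * 1e18:.1f} as)")
```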

    2022 Review of Data-Driven Plasma Science

    Data-driven science and technology offer transformative tools and methods to science. This review article highlights the latest developments and progress in the interdisciplinary field of data-driven plasma science (DDPS), i.e., plasma science whose progress is driven strongly by data and data analyses. Plasma is considered to be the most ubiquitous form of observable matter in the universe. Data associated with plasmas can, therefore, cover extremely large spatial and temporal scales and often provide essential information for other scientific disciplines. Thanks to the latest technological developments, plasma experiments, observations, and computation now produce a large amount of data that can no longer be analyzed or interpreted manually. This trend necessitates a highly sophisticated use of high-performance computers for data analyses, making artificial intelligence and machine learning vital components of DDPS. This article contains seven primary sections in addition to the introduction and summary. Following an overview of fundamental data-driven science, five sections cover widely studied topics of plasma science and technology: basic plasma physics and laboratory experiments, magnetic confinement fusion, inertial confinement fusion and high-energy-density physics, space and astronomical plasmas, and plasma technologies for industrial and other applications. The final section before the summary discusses plasma-related databases that could significantly contribute to DDPS. Each primary section starts with a brief introduction to the topic, discusses the state-of-the-art developments in the use of data and/or data-scientific approaches, and presents a summary and outlook. Despite recent impressive progress, DDPS is still in its infancy. This article attempts to offer a broad perspective on the development of this field and to identify where further innovations are required.

    Simulation Intelligence: Towards a New Generation of Scientific Methods

    The original "Seven Motifs" set forth a roadmap of essential methods for the field of scientific computing, where a motif is an algorithmic method that captures a pattern of computation and data movement. We present the "Nine Motifs of Simulation Intelligence", a roadmap for the development and integration of the essential algorithms necessary for a merger of scientific computing, scientific simulation, and artificial intelligence. We call this merger simulation intelligence (SI), for short. We argue the motifs of simulation intelligence are interconnected and interdependent, much like the components within the layers of an operating system. Using this metaphor, we explore the nature of each layer of the simulation intelligence operating system stack (SI-stack) and the motifs therein: (1) Multi-physics and multi-scale modeling; (2) Surrogate modeling and emulation; (3) Simulation-based inference; (4) Causal modeling and inference; (5) Agent-based modeling; (6) Probabilistic programming; (7) Differentiable programming; (8) Open-ended optimization; (9) Machine programming. We believe coordinated efforts between motifs offers immense opportunity to accelerate scientific discovery, from solving inverse problems in synthetic biology and climate science, to directing nuclear energy experiments and predicting emergent behavior in socioeconomic settings. We elaborate on each layer of the SI-stack, detailing the state-of-art methods, presenting examples to highlight challenges and opportunities, and advocating for specific ways to advance the motifs and the synergies from their combinations. Advancing and integrating these technologies can enable a robust and efficient hypothesis-simulation-analysis type of scientific method, which we introduce with several use-cases for human-machine teaming and automated science

    A systematic approach for integrated product, materials, and design-process design

    Designers are challenged to manage customer, technology, and socio-economic uncertainty that places dynamic, unquenchable demands on limited resources. In this context, increased concept flexibility, referring to a designer's ability to generate concepts, is crucial. Concept flexibility can be significantly increased through the integrated design of product and material concepts. Hence, the challenge is to leverage knowledge of material structure-property relations that significantly affect system concepts for function-based, systematic design of product and material concepts in an integrated fashion. However, having selected an integrated product and material system concept, managing complexity in embodiment design-processes is important. Facing a complex network of decisions and evolving analysis models, a designer needs the flexibility to systematically generate and evaluate embodiment design-process alternatives. In order to address these challenges and respond to the primary research question of how to increase a designer's concept and design-process flexibility to enhance product creation in the conceptual and early embodiment design phases, the primary hypothesis of this dissertation is embodied as a systematic approach for integrated product, materials, and design-process design. The systematic approach consists of two components: i) a function-based, systematic approach to the integrated design of product and material concepts from a systems perspective, and ii) a systematic strategy for design-process generation and selection based on a decision-centric perspective and a value-of-information-based Process Performance Indicator. The systematic approach is validated using the validation-square approach, which consists of theoretical and empirical validation. Empirical validation of the framework is carried out using various examples, including: i) design of a reactive material containment system, and ii) design of an optoelectronic communication system.
    Ph.D. Committee Chair: Allen, Janet K.; Committee Member: Aidun, Cyrus K.; Committee Member: Klein, Benjamin; Committee Member: McDowell, David L.; Committee Member: Mistree, Farrokh; Committee Member: Yoder, Douglas P
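
    As a hedged illustration of what a value-of-information metric computes in general (this is not the dissertation's Process Performance Indicator), the sketch below evaluates the expected value of perfect information for a two-alternative design decision under parameter uncertainty, using made-up payoff functions; a positive value bounds how much refining the analysis model could be worth.

```python
# Minimal sketch of a value-of-information calculation: the expected value of
# perfect information (EVPI) for a toy two-alternative design decision. The
# payoffs and uncertainty model are made up for illustration; this is NOT the
# dissertation's specific Process Performance Indicator.
import numpy as np

rng = np.random.default_rng(7)

# Uncertain state of the world (e.g., an unknown material property), sampled
# from the designer's current belief.
theta = rng.normal(1.0, 0.3, 100_000)

# Payoff of each design alternative as a function of the uncertain state.
def payoff(action, th):
    return {"conservative": 1.0 + 0.0 * th,        # insensitive to theta
            "aggressive":   0.5 + 1.2 * th}[action]

actions = ["conservative", "aggressive"]

# Decide now, under current uncertainty: pick the action with the best
# expected payoff.
value_now = max(payoff(a, theta).mean() for a in actions)

# Decide with perfect information: for each sampled state, pick the best
# action, then average the resulting payoffs.
value_perfect = np.maximum.reduce([payoff(a, theta) for a in actions]).mean()

evpi = value_perfect - value_now
print(f"EVPI = {evpi:.3f} (upper bound on what further analysis or "
      "simulation effort is worth for this decision)")
```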

    Optical imaging and spectroscopy for the study of the human brain: status report

    This report is the second part of a comprehensive two-part series aimed at reviewing an extensive and diverse toolkit of novel methods to explore brain health and function. While the first report focused on neurophotonic tools mostly applicable to animal studies, here we highlight optical spectroscopy and imaging methods relevant to noninvasive human brain studies. We outline the current state-of-the-art technologies and software advances, explore the most recent impact of these technologies on neuroscience and clinical applications, identify the areas where innovation is needed, and provide an outlook on future directions.

