
    All electromagnetic scattering bodies are matrix-valued oscillators

    Full text link
    In this article, we introduce a new viewpoint on electromagnetic scattering. Tailoring spectral electromagnetic response underpins important applications ranging from sensing to energy conversion, and is flourishing with new ideas from non-Hermitian physics. There exist excellent theoretical tools for modeling such responses, particularly coupled-mode theories and quasinormal-mode expansions. Yet these approaches offer little insight into the outer limits of what is possible when broadband light interacts with any designable nanophotonic pattern. We show that a special scattering matrix, the "$\mathbb{T}$" matrix, can always be decomposed into a set of fictitious Drude-Lorentz oscillators with matrix-valued (spatially nonlocal) coefficients. For any application and any scatterer, the only designable degrees of freedom are these matrix coefficients, implying strong constraints on lineshapes and response functions that had previously been "hidden." To demonstrate the power of this approach, we apply it to near-field radiative heat transfer, where there has been a long-standing gap between the best known designs and theoretical limits to maximum energy exchange. Our new framework identifies upper bounds that come quite close to the current state-of-the-art, and explains why unconventional plasmonic materials should be superior to conventional plasmonic materials. More generally, this approach can be seamlessly applied to high-interest applications across nanophotonics -- including for metasurfaces, imaging, and photovoltaics -- and may be generalizable to unique challenges that arise in acoustic and/or quantum scattering theory.
    Comment: 6 pages of main text, 33 pages of Supplementary Material
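    As an illustration of the oscillator decomposition described above, the sketch below assembles a frequency response from a few Drude-Lorentz lineshapes with matrix-valued coefficients. The specific lineshape normalization and the constraints the paper places on the coefficient matrices are not reproduced here, so this is only a hedged sketch of the general idea, with made-up numbers.

```python
import numpy as np

def t_matrix(omega, poles, gammas, weights):
    """Sum of fictitious Drude-Lorentz oscillators with matrix-valued
    coefficients, evaluated at angular frequency omega (illustrative form)."""
    T = np.zeros_like(weights[0], dtype=complex)
    for w0, g, X in zip(poles, gammas, weights):
        T += X / (w0**2 - omega**2 - 1j * g * omega)
    return T

# toy example: two oscillators acting on a 2-channel scatterer
rng = np.random.default_rng(0)
X1, X2 = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
print(t_matrix(1.0, poles=[0.8, 1.5], gammas=[0.05, 0.1], weights=[X1, X2]))
```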

    Advances in Atomic Time Scale imaging with a Fine Intrinsic Spatial Resolution

    Full text link
    Atomic time scale imaging, which opens a new era for studying dynamics in the microcosmos, is presently attracting immense research interest worldwide because of its capabilities. At the atomic level, physics, chemistry, and biology converge on the same questions of atomic motion and atomic state change. Light plays a dual role here, as both the information carrier and the research resource. The most fundamental principle of this imaging is that light records the event-modulated light field by itself, so-called all-optical imaging. This paper addresses what the essential standard is for developing and evaluating atomic time scale imaging, what the optimal imaging system is, and what the typical techniques implemented to date are. At present, the best experimental record, achieved by multistage optical parametric amplification (MOPA), is 50 fs resolved optical imaging with a spatial resolution of ~83 lp/mm at an effective framing rate of 10^13 fps, recording an ultrafast optical lattice rotating at up to 10^13 rad/s.
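    As a rough sanity check on these figures (not taken from the paper), a framing rate of 10^13 fps corresponds to 100 fs between frames, so a lattice rotating at 10^13 rad/s turns by about one radian from one frame to the next:

```python
# Back-of-the-envelope check on the quoted figures (not from the paper itself).
framing_rate = 1e13                 # frames per second
rotation_speed = 1e13               # rad/s, quoted speed of the optical lattice

frame_interval = 1.0 / framing_rate                 # seconds between frames
angle_per_frame = rotation_speed * frame_interval   # radians of rotation per frame

print(f"frame interval: {frame_interval * 1e15:.0f} fs")       # 100 fs
print(f"rotation between frames: {angle_per_frame:.1f} rad")   # ~1 rad
```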

    Methods and Apparatus for Autonomous Robotic Control

    Get PDF
    Sensory processing of visual, auditory, and other sensor information (e.g., visual imagery, LIDAR, RADAR) is conventionally based on "stovepiped," or isolated, processing, with little interaction between modules. Biological systems, on the other hand, fuse multi-sensory information to identify nearby objects of interest more quickly, more efficiently, and with higher signal-to-noise ratios. Similarly, examples of the OpenSense technology disclosed herein use neurally inspired processing to identify and locate objects in a robot's environment. This enables the robot to navigate its environment more quickly and with lower computational and power requirements.
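    The claim that fusing modalities raises the signal-to-noise ratio can be illustrated with a minimal statistical example; the sketch below is not the OpenSense algorithm, just inverse-variance weighting of two hypothetical sensors measuring the same bearing.

```python
import numpy as np

# Minimal fusion illustration (not the OpenSense method): combining two noisy
# estimates of an object's bearing by inverse-variance weighting yields a fused
# estimate whose spread is smaller than either sensor's alone.
rng = np.random.default_rng(1)
true_bearing = 30.0                        # degrees, hypothetical ground truth
sigma_cam, sigma_lidar = 4.0, 2.0          # assumed per-sensor noise levels

cam = true_bearing + rng.normal(0, sigma_cam, 10_000)
lidar = true_bearing + rng.normal(0, sigma_lidar, 10_000)

w_cam, w_lidar = 1 / sigma_cam**2, 1 / sigma_lidar**2
fused = (w_cam * cam + w_lidar * lidar) / (w_cam + w_lidar)

print(cam.std(), lidar.std(), fused.std())   # fused spread < either sensor's
```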

    Investigation of nonlinear dynamics in and via femtosecond filaments in gases

    Get PDF
    Intense, ultrashort laser pulses are required for the study of many nonlinear optical effects and are of utmost relevance for various applications, from ultrafast X-ray radiography in medicine to remote sensing of the atmosphere. Increasing their intensity while interacting with matter eventually leads to the generation of laser-induced plasma. This plasma has fascinating optical properties, such as a negative refractive index contribution proportional to the free electron density or the lack of a damage threshold, giving the prospect of a multitude of new applications based on the manipulation of light with plasma. The realization of such plasma-based applications requires a precise knowledge of the plasma's properties and temporal evolution, as the plasma persists for much longer than its generation event. A method to generate and investigate ultrashort laser pulses as well as laser-induced plasma is femtosecond filamentation: the formation of an intense self-guided light channel in a medium over distances much longer than the Rayleigh range of the same beam focused in vacuum. It is formed by a dynamic balance of Kerr-induced self-focusing and plasma-induced defocusing. In this thesis, it is demonstrated that femtosecond filamentation can be employed as a tool to investigate the temporal evolution of laser-induced plasma. The study is carried out in various atomic and molecular gas atmospheres by measuring the temporal evolution of the enhancement of third harmonic radiation generated by a femtosecond filament which is intercepted by a laser-induced plasma spot. Significant differences in the lifetime of the plasma in atomic and molecular gas atmospheres are found. Further, a novel method for the complete spatio-temporal characterization of a femtosecond filament along its length is presented. It is based on controlled filament termination at various positions along its length, in combination with spatio-temporal pulse characterization and numerical backpropagation of the filament pulses to the termination point. The capabilities of the method are illustrated by revealing complex spatio-temporal dynamics and couplings during filament propagation.
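    Numerical backpropagation of a measured field is commonly implemented with an angular-spectrum propagator. The following is a generic sketch of that idea rather than the code used in the thesis; the grid spacing, wavelength, beam size, and propagation distance are made-up values.

```python
import numpy as np

def backpropagate(field, dz, wavelength, dx):
    """Propagate a sampled complex field backward by a distance dz using the
    angular spectrum method (generic sketch, monochromatic and scalar)."""
    k0 = 2 * np.pi / wavelength
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx)
    kz = np.sqrt((k0**2 - kx**2 - ky**2).astype(complex))
    spectrum = np.fft.fft2(field)
    return np.fft.ifft2(spectrum * np.exp(-1j * kz * dz))  # minus sign: backward

# usage: a Gaussian field "measured" at the termination point, propagated back 5 mm
n, dx, w0 = 256, 2e-6, 50e-6
x = dx * (np.arange(n) - n // 2)
X, Y = np.meshgrid(x, x)
field = np.exp(-(X**2 + Y**2) / w0**2).astype(complex)
field_back = backpropagate(field, dz=5e-3, wavelength=800e-9, dx=dx)
```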

    Methodology for Correlating Experimental and Finite Element Modal Analyses on Valve Trains

    Get PDF
    The widespread use of finite element models in assessing system dynamics for noise, vibration, and harshness (NVH) evaluation has led to recognition of the need for improved procedures for correlating models with experimental results. This study develops and applies a methodology to correlate an experimental modal analysis with a finite element modal analysis of valve trains in IC engines. A pre-test analysis procedure is employed to guide the execution of tests used in the correlation process. This approach improves the efficiency of the test process, ensuring that the test article is neither under- nor over-instrumented. The test-analysis model (TAM) that results from the pre-test simulation provides a means to compare the test and the model both during the experimental campaign and during the model updating process. The validity of the correlation methodology is demonstrated through its application to the valve train of a single overhead cam (SOHC) engine.
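    A common way to quantify this kind of test/model correlation (though not necessarily the metric used in this study) is the Modal Assurance Criterion (MAC) between measured and predicted mode shapes; a minimal sketch with made-up mode shapes:

```python
import numpy as np

def mac(phi_test, phi_fem):
    """Modal Assurance Criterion between a measured and a predicted mode shape.
    Values near 1 indicate well-correlated shapes; values near 0 do not."""
    num = np.abs(phi_test.conj() @ phi_fem) ** 2
    den = (phi_test.conj() @ phi_test) * (phi_fem.conj() @ phi_fem)
    return float(np.real(num / den))

# toy comparison of two slightly different mode shapes at five sensor locations
phi_exp = np.array([0.00, 0.31, 0.59, 0.81, 0.95])
phi_fe = np.array([0.00, 0.29, 0.57, 0.83, 1.00])
print(mac(phi_exp, phi_fe))   # close to 1.0 for well-matched shapes
```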

    A Segmented aperture space telescope modeling tool and its application to remote sensing as understood through image quality and image utility

    Get PDF
    The remote sensing community is constantly pushing technology forward to achieve better system performance; this is often done by improving signal-to-noise ratio and spatial and spectral resolution. However, improving one design parameter (such as spatial resolution) can detract from another (such as signal-to-noise ratio). A flexible imaging system simulation tool capable of modeling the effects of changes in system parameters would be a great asset to design engineers. In other words, this tool would manipulate a perfect image and produce an output image identical to one physically created by the imaging system. Having such a tool available would make it possible to fully understand a design's potential. In addition, this tool can be used to understand the importance of changes in system parameters. Modern space-based remote sensing systems are taking on new forms, using sparse and segmented apertures with lightweight mirrors. The driving force for this is that systems are constrained by the size and weight tolerances of the launch vehicles. The new designs come with new problems, many of which are related to the geometry and aberrations of the aperture. The tool developed in this effort will be able to examine the effects of different amounts and types of aperture aberrations. The task is to build an imaging system simulation tool, based on linear systems and standard radiometry, capable of accurately displaying the performance of a plausible design. Using this tool, several designs will be tested using image quality analysis and image utility. Image quality/utility will be determined using three techniques. The first is an image quality prediction technique called the Generalized Image Quality Equation (GIQE), which relates system characteristics to the National Imagery Interpretability Rating Scale (NIIRS). However, due to the unusual aperture geometry of sparse aperture systems, Fiete et al. (2002) showed that the GIQE is unable to accurately predict image quality. The other two approaches are therefore somewhat unorthodox. These approaches do not actually define an image quality but allow systems to be ranked by their performance in a test of motion detection and a test of spatial target detection. A multispectral motion detection algorithm developed and implemented by Adams (2008), combined with motion truth, shows a given imaging system's ability to track motion. A similar experimental design is evaluated using the spatial target detection algorithm. The tests reveal how changes in parameters such as GSD; SNR; spectral band selection; piston, tip, and tilt aberrations; and lightweight optic aberrations affect a system's NIIRS estimate, ability to detect motion, or ability to detect objects.
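    The linear-systems backbone of such a tool can be sketched in a few lines: form a pupil mask for a (crudely) segmented aperture, compute the point spread function by Fourier optics, blur a perfect input image with it, and add detector noise. This is only a hedged illustration of the modeling chain, not the tool described above; the aperture geometry, sampling, and noise level are made up.

```python
import numpy as np

n = 256
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pupil = (np.hypot(x, y) < 60).astype(float)   # circular aperture (made-up size)
pupil[np.abs(x) < 2] = 0.0                    # crude gaps between mirror segments
pupil[np.abs(y) < 2] = 0.0

# point spread function from the pupil, then the optical transfer function
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
psf /= psf.sum()
otf = np.fft.fft2(np.fft.ifftshift(psf))

scene = np.zeros((n, n))
scene[96:160, 96:160] = 1.0                   # "perfect" input image (a square target)
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * otf))
image = blurred + np.random.default_rng(0).normal(0.0, 0.01, (n, n))  # detector noise
```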

    Bio-inspired log-polar based color image pattern analysis in multiple frequency channels

    Get PDF
    The main topic addressed in this thesis is to implement color image pattern recognition based on the lateral inhibition subtraction phenomenon combined with a complex log-polar mapping in multiple spatial frequency channels. It is shown that the individual red, green and blue channels have different recognition performances when put in the context of former work done by Dragan Vidacic. It is observed that the green channel performs better than the other two channels, with the blue channel having the poorest performance. Following the application of a contrast stretching function, the object recognition performance is improved in all channels. Multiple spatial frequency filters were designed to simulate the filtering channels that occur in the human visual system. Following these preprocessing steps, Dragan Vidacic's methodology is followed in order to determine the benefits that are obtained from the preprocessing steps being investigated. It is shown that performance gains are realized by using such preprocessing steps.
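    The complex log-polar mapping at the core of this approach turns rotations and scalings about the image center into simple shifts. A nearest-neighbor sketch of the resampling (not the thesis implementation; grid sizes are made up):

```python
import numpy as np

def to_log_polar(img, n_r=64, n_theta=64):
    """Resample a square grayscale image onto a log-polar grid centered on the
    image, so that rotations/scalings of img become translations of the output."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cx, cy)
    radii = np.exp(np.linspace(0.0, np.log(r_max), n_r))       # log-spaced radii
    thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(radii, thetas, indexing="ij")
    ys = np.clip(np.round(cy + R * np.sin(T)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + R * np.cos(T)).astype(int), 0, w - 1)
    return img[ys, xs]

# usage on a toy image
img = np.random.default_rng(0).random((128, 128))
lp = to_log_polar(img)   # shape (64, 64): rows = log-radius, columns = angle
```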

    A Monte Carlo framework for noise removal and missing wedge restoration in cryo-electron tomography

    Get PDF
    In this paper, we describe a statistical method to address an important issue in cryo-electron tomography image analysis: reduction of the high amount of noise and of the artifacts due to the presence of a missing wedge (MW) in the spectral domain. The method takes as an input a 3D tomogram derived from limited-angle tomography, and gives as an output a 3D denoised and artifact-compensated volume. The artifact compensation is achieved by filling up the MW with meaningful information. To address this inverse problem, we compute a Minimum Mean Square Error (MMSE) estimator of the uncorrupted image. The underlying high-dimensional integral is computed by applying a dedicated Markov Chain Monte Carlo (MCMC) sampling procedure based on the Metropolis-Hastings (MH) algorithm. The proposed computational method can be used to enhance visualization or as a pre-processing step for image analysis, including segmentation and classification of macromolecules. Results are presented for both synthetic data and real 3D cryo-electron images.
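    The MMSE-via-MCMC idea can be illustrated on a scalar toy problem; the sketch below is not the paper's sampler, just a random-walk Metropolis-Hastings chain whose sample mean approximates the posterior-mean (MMSE) estimate of a signal observed in Gaussian noise under a Gaussian prior.

```python
import numpy as np

rng = np.random.default_rng(0)
y, sigma_noise, sigma_prior = 1.3, 0.5, 1.0      # observation and assumed model

def log_post(x):
    # unnormalized log posterior: Gaussian likelihood times Gaussian prior
    return -0.5 * ((y - x) / sigma_noise) ** 2 - 0.5 * (x / sigma_prior) ** 2

x, samples = 0.0, []
for _ in range(20_000):
    prop = x + rng.normal(0.0, 0.5)                          # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(x):  # MH accept/reject
        x = prop
    samples.append(x)

mmse = np.mean(samples[2_000:])     # posterior mean after discarding burn-in
print(mmse)   # analytic value: y * sigma_prior**2 / (sigma_prior**2 + sigma_noise**2) = 1.04
```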

    Repetitive process control of additive manufacturing with application to laser metal deposition

    Get PDF
    Additive Manufacturing (AM) is a set of manufacturing processes that holds promise for the production of complex, functional structures that cannot be fabricated with conventional manufacturing, and for the repair of high-value parts. However, a significant challenge to the adoption of additive manufacturing processes for these applications is proper process control. In order to enable closed-loop process control, compact models that are suitable for control design and that describe the layer-by-layer material addition process are needed. This dissertation proposes a two-dimensional modeling and control framework, with an application to a specific metal-based AM process, whereby the deposition of the current layer is affected by both in-layer and layer-to-layer dynamics, both of which are driven by the state of the previous layer. The proposed modeling framework can be used to create two-dimensional dynamic models for the analysis of layer-to-layer stability and as a foundation for the design of layer-to-layer controllers for AM processes. In order to analyze the stability of this class of systems, linear repetitive process results are extended, enabling the treatment of the process model as a two-dimensional analog of a discrete-time system. For process control, the closed-loop repetitive process is again treated as a two-dimensional analog of a discrete-time system, for which controllers are designed. The proposed methodologies are applied to a metal-based AM process, Laser Metal Deposition (LMD), which is known to exhibit layer-to-layer unstable behavior and is also of significant interest to high-value manufacturing industries.
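    A minimal sketch of the layer-to-layer viewpoint (not the dissertation's LMD model): the height increment deposited on layer k depends on the previous layer's increment and the commanded input, and a simple layer-to-layer proportional update of the command drives the increment toward a reference. The coefficients and gain below are made up.

```python
import numpy as np

# layer-to-layer "plant": d[k] = a * d[k-1] + b * u[k], where d is the height
# increment deposited on layer k and u is the commanded laser/powder input
a, b = 0.3, 1.0
d_ref, gamma = 0.25, 0.6        # target increment (e.g., mm) and correction gain

d, u, history = 0.0, 0.1, []
for k in range(15):
    d = a * d + b * u                 # deposit layer k
    u = u + gamma * (d_ref - d)       # layer-to-layer update of the command
    history.append(d)

print(np.round(history, 3))           # increments settle at d_ref after a few layers
```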