132 research outputs found

    DeepAdjoint: An All-in-One Photonic Inverse Design Framework Integrating Data-Driven Machine Learning with Optimization Algorithms

    In recent years, hybrid design strategies combining machine learning (ML) with electromagnetic optimization algorithms have emerged as a new paradigm for the inverse design of photonic structures and devices. While a trained, data-driven neural network can rapidly identify solutions near the global optimum within a given dataset's design space, an iterative optimization algorithm can further refine the solution and overcome dataset limitations. Furthermore, such hybrid ML-optimization methodologies can reduce computational costs and expedite the discovery of novel electromagnetic components. However, existing hybrid ML-optimization methods have yet to optimize across both materials and geometries in a single integrated and user-friendly environment. In addition, due to the challenge of acquiring large datasets for ML, as well as the rapid growth in the number of isolated models trained for photonics design, there is a need to standardize the ML-optimization workflow while making pre-trained models easily accessible. Motivated by these challenges, here we introduce DeepAdjoint, a general-purpose, open-source, and multi-objective "all-in-one" global photonics inverse design framework that integrates pre-trained deep generative networks with state-of-the-art electromagnetic optimization algorithms such as the adjoint variables method. DeepAdjoint allows a designer to specify an arbitrary optical design target and then obtain a photonic structure that is robust to fabrication tolerances and possesses the desired optical properties - all within a single user-guided application interface. Our framework thus paves a path towards the systematic unification of ML and optimization algorithms for photonic inverse design.
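    A minimal sketch of the hybrid workflow the abstract describes: a pre-trained generative network proposes a design seed, which a gradient-based loop then refines. The JAX example below is illustrative only; generator, figure_of_merit, and the toy objective are stand-ins rather than DeepAdjoint's actual API, and jax.grad plays the role that an adjoint solve plays in a real electromagnetic pipeline.

        import jax
        import jax.numpy as jnp

        # Stand-in for a pre-trained deep generative network: latent -> design.
        def generator(z):
            return jax.nn.sigmoid(jnp.outer(z, z))  # 16x16 permittivity map in [0, 1]

        # Stand-in figure of merit; a real adjoint method obtains d(FOM)/d(design)
        # from one forward and one adjoint electromagnetic simulation.
        def figure_of_merit(design, target):
            return -jnp.mean((design - target) ** 2)

        def refine(z, target, steps=200, lr=0.1):
            loss = lambda z: -figure_of_merit(generator(z), target)
            grad = jax.grad(loss)
            for _ in range(steps):
                z = z - lr * grad(z)  # gradient refinement of the ML-proposed seed
            return generator(z)

        z0 = jax.random.normal(jax.random.PRNGKey(0), (16,))  # network-proposed seed
        optimized = refine(z0, target=jnp.full((16, 16), 0.5))

    The division of labor mirrors the abstract: the network supplies a near-optimal starting point, and the optimizer escapes the dataset's limits by following gradients of the physical objective.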

    Back-Propagation Optimization and Multi-Valued Artificial Neural Networks for Highly Vivid Structural Color Filter Metasurfaces

    We introduce a novel technique for designing color filter metasurfaces using a data-driven approach based on deep learning. Our approach employs inverse design principles to identify highly efficient designs that outperform every configuration in the dataset, which consists of only 585 distinct geometries. By combining Multi-Valued Artificial Neural Networks with back-propagation optimization, we overcome the limitations of previous approaches, such as poor performance due to extrapolation and convergence to undesired local minima. Consequently, we create reliable and highly efficient configurations for metasurface color filters capable of producing exceptionally vivid colors that extend beyond the sRGB gamut. Furthermore, our deep-learning technique can be extended to the design of various pixelated metasurface configurations with different functionalities.
    Comment: To be published. 25 pages, 17 figures.
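    The core trick here, back-propagating through a frozen forward model to optimize the design inputs rather than the network weights, can be sketched briefly. Everything below is a hypothetical stand-in: random fixed weights replace the paper's trained multi-valued ANN, and a plain squared-error color loss replaces its objective.

        import jax
        import jax.numpy as jnp

        # Frozen stand-in for a trained forward surrogate: geometry -> predicted RGB.
        W1 = jax.random.normal(jax.random.PRNGKey(1), (4, 32)) * 0.3
        W2 = jax.random.normal(jax.random.PRNGKey(2), (32, 3)) * 0.3

        def forward(geometry):
            h = jnp.tanh(geometry @ W1)      # weights stay fixed throughout
            return jax.nn.sigmoid(h @ W2)    # predicted color in [0, 1]^3

        def design_loss(geometry, target_rgb):
            return jnp.sum((forward(geometry) - target_rgb) ** 2)

        target = jnp.array([0.9, 0.1, 0.1])  # a saturated target color
        geometry = jnp.zeros(4)              # e.g., widths, heights, pitch
        grad = jax.grad(design_loss)         # gradients flow to the *inputs*
        for _ in range(500):
            geometry -= 0.1 * grad(geometry, target)

    Keeping the optimized geometry inside the surrogate's training domain is what guards against the extrapolation failures and spurious minima the abstract mentions; a practical implementation would add bound constraints or a projection step.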

    Deep learning in light-matter interactions

    The deep-learning revolution is providing enticing new opportunities to manipulate and harness light at all scales. By building models of light-matter interactions from large experimental or simulated datasets, deep learning has already improved the design of nanophotonic devices and the acquisition and analysis of experimental data, even in situations where the underlying theory is not sufficiently established or is too complex to be of practical use. Beyond these early success stories, deep learning poses several challenges. Most importantly, deep learning works as a black box, making it difficult to understand and interpret its results and reliability, especially when training on incomplete datasets or dealing with data generated by adversarial approaches. Here, after an overview of how deep learning is currently employed in photonics, we discuss the emerging opportunities and challenges, shining light on how deep learning advances photonics.

    Thin On-Sensor Nanophotonic Array Cameras

    Today's commodity camera systems rely on compound optics to map light originating from the scene to positions on the sensor, where it is recorded as an image. To record images without optical aberrations, i.e., deviations from Gauss' linear model of optics, typical lens systems introduce increasingly complex stacks of optical elements, which are responsible for the height of existing commodity cameras. In this work, we investigate flat nanophotonic computational cameras as an alternative that employs an array of skewed lenslets and a learned reconstruction approach. The optical array is embedded on a metasurface that, at 700 nm height, is flat and sits on the sensor cover glass at a 2.5 mm focal distance from the sensor. To tackle the highly chromatic response of a metasurface and design the array over the entire sensor, we propose a differentiable optimization method that continuously samples over the visible spectrum and factorizes the optical modulation for different incident fields into individual lenses. We reconstruct a megapixel image from our flat imager with a learned probabilistic reconstruction method that employs a generative diffusion model to sample an implicit prior. To tackle scene-dependent aberrations in broadband, we propose a method for acquiring paired captured training data under varying illumination conditions. We assess the proposed flat camera design in simulation and with an experimental prototype, validating that the method is capable of recovering images from diverse scenes in broadband with a single nanophotonic layer.
    Comment: 18 pages, 12 figures, to be published in ACM Transactions on Graphics.
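    The spectrum-sampling idea, drawing a random wavelength at each optimization step so the design is refined in expectation over broadband light, is easy to sketch. The propagation model below is a toy placeholder, not the paper's solver, and the 1-D phase profile and loss are illustrative assumptions.

        import jax
        import jax.numpy as jnp

        # Toy placeholder for a differentiable propagation model scoring focal
        # quality at one wavelength; a real model would propagate the field
        # from the metasurface to the sensor plane.
        def focusing_loss(phase, wavelength_um):
            chromatic = jnp.sin(2 * jnp.pi * phase / wavelength_um)
            return jnp.mean(chromatic ** 2)

        def step(phase, key, lr=0.05):
            # Continuously sample the visible spectrum (0.4-0.7 um) each step.
            wl = jax.random.uniform(key, (), minval=0.4, maxval=0.7)
            return phase - lr * jax.grad(focusing_loss)(phase, wl)

        phase = jnp.linspace(0.0, 1.0, 256)  # toy 1-D metasurface phase profile
        key = jax.random.PRNGKey(0)
        for _ in range(1000):
            key, sub = jax.random.split(key)
            phase = step(phase, sub)

    Averaging gradients over randomly sampled wavelengths is a standard stochastic-optimization treatment of broadband objectives; the paper's factorization of the modulation into individual lenses would sit inside the loss.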