379 research outputs found

    Quantum Dot Devices for Optical Signal Processing

    Quantum dot waveguides: ultrafast dynamics and applications [invited]

    Simulation of Nonlinear Gain Saturation in Active Photonic Crystal Waveguides

    Impact of slow-light enhancement on optical propagation in active semiconductor photonic crystal waveguides

    We derive and validate a set of coupled Bloch wave equations for analyzing the reflection and transmission properties of active semiconductor photonic crystal waveguides. In such devices, slow-light propagation can be used to enhance the material gain per unit length, enabling, for example, the realization of short optical amplifiers compatible with photonic integration. The coupled wave analysis is compared to numerical approaches based on the Fourier modal method and a frequency domain finite element technique. The presence of material gain leads to the build-up of a backscattered field, which is interpreted as distributed feedback effects or reflection at passive-active interfaces, depending on the approach taken. For very large material gain values, the band structure of the waveguide is perturbed, and deviations from the simple coupled Bloch wave model are found.
    Comment: 8 pages, 5 figures
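    A minimal numerical sketch of the slow-light gain enhancement described in the abstract above. The scaling law `g_mod = (n_g / n_ref) * Gamma * g_mat` and every parameter value below are illustrative assumptions, not quantities taken from the paper:

    ```python
    import math

    def modal_gain(g_mat_per_cm, gamma, n_g, n_ref=3.5):
        """Effective gain per unit length: slow light (large group index n_g)
        enhances the material gain g_mat by the slowdown factor n_g / n_ref.
        Scaling law and parameters are illustrative assumptions."""
        return (n_g / n_ref) * gamma * g_mat_per_cm

    def amplifier_length_um(target_gain_db, g_mod_per_cm):
        """Waveguide length (micrometers) needed to reach a target gain in dB."""
        g_db_per_cm = 10 * math.log10(math.e) * g_mod_per_cm  # 1/cm -> dB/cm
        return 1e4 * target_gain_db / g_db_per_cm

    g_fast = modal_gain(1000, 0.1, n_g=3.5)   # conventional waveguide
    g_slow = modal_gain(1000, 0.1, n_g=35.0)  # slow-light regime
    # A 10x larger group index shortens the amplifier needed for the
    # same total gain by the same factor.
    ```

    Under these toy assumptions, a 20 dB amplifier shrinks from roughly 460 µm to about 46 µm when the group index rises tenfold, which is the sense in which slow light enables short amplifiers compatible with photonic integration.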

    Sparse Spatial Transformers for Few-Shot Learning

    Learning from limited data is a challenging task, since the scarcity of data leads to poor generalization of the trained model. The classical global pooled representation is likely to lose useful local information. Recently, many few-shot learning methods have addressed this challenge by using deep descriptors and learning a pixel-level metric. However, using deep descriptors as feature representations may lose the contextual information of the image, and most of these methods deal with each class in the support set independently, which cannot sufficiently utilize discriminative information and task-specific embeddings. In this paper, we propose a novel Transformer-based neural network architecture called Sparse Spatial Transformers (SSFormers), which can find task-relevant features and suppress task-irrelevant features. Specifically, we first divide each input image into several image patches of different sizes to obtain dense local features. These features retain contextual information while expressing local information. Then, a sparse spatial transformer layer is proposed to find spatial correspondence between the query image and the entire support set, selecting task-relevant image patches and suppressing task-irrelevant ones. Finally, we propose an image patch matching module for calculating the distance between dense local representations, thereby determining which category in the support set the query image belongs to. Extensive experiments on popular few-shot learning benchmarks show that our method achieves state-of-the-art performance.
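    The patch-level matching idea in the abstract above can be sketched in miniature. This toy example is not the SSFormers implementation: the function names, the hand-made patch descriptors, and the top-k "task-relevant patch" selection rule are all assumptions for illustration. It matches each query patch against a class's support patches and scores the class by its best-matching patches:

    ```python
    import math

    def cosine(u, v):
        """Cosine similarity between two patch descriptors (plain lists)."""
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    def patch_match_score(query_patches, support_patches, top_k=2):
        """For each query patch, take its best match in the support patches;
        keep only the top_k highest-scoring (most task-relevant) query patches."""
        best = sorted(
            (max(cosine(q, s) for s in support_patches) for q in query_patches),
            reverse=True,
        )
        sel = best[:top_k]
        return sum(sel) / len(sel)

    def classify(query_patches, support_sets):
        """Assign the query to the support class whose patches match it best."""
        scores = {c: patch_match_score(query_patches, ps)
                  for c, ps in support_sets.items()}
        return max(scores, key=scores.get)
    ```

    The top-k selection is the toy analogue of sparsity here: patches with no good correspondence anywhere in the support set (background clutter, say) simply drop out of the score instead of diluting it.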