
    High-speed integrated lithium niobate low-index rib loaded waveguide modulator without direct lithium niobate etching

    Integrated thin-film lithium niobate (TFLN) modulators are emerging as an appealing choice for fiber-optic communications, data centers, and microwave photonics due to their high modulation speed and low driving voltage. The key step in fabricating integrated TFLN modulators is the high-quality etching of TFLN, which typically requires lengthy iteration of the fabrication process and specialized equipment. Here we present an integrated TFLN modulator that incorporates low-index rib-loaded waveguides onto TFLN without direct etching of the TFLN. Based on a systematic investigation into the theory and design methodology of this approach, we experimentally demonstrated a 1.3 cm-long Mach-Zehnder modulator featuring a 3-dB bandwidth of 59 GHz and a half-wave voltage of 1.96 V. Our design significantly simplifies the fabrication process of integrated TFLN modulators and in turn opens up new avenues for the mass production of high-performance TFLN modulators at low cost.
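
    As a quick sanity check on the reported figures: the voltage-length product Vπ·L is a standard figure of merit for traveling-wave modulators and follows directly from the numbers in the abstract. A minimal sketch in Python (values taken from the abstract; the metric itself is standard, not specific to this paper):

        # Voltage-length product, a standard modulator figure of merit,
        # computed from the half-wave voltage and device length above.
        v_pi_volts = 1.96  # half-wave voltage V_pi reported in the abstract
        length_cm = 1.3    # Mach-Zehnder modulator length reported in the abstract

        v_pi_l = v_pi_volts * length_cm
        print(f"V_pi*L = {v_pi_l:.2f} V*cm")  # ~2.55 V*cm

    A lower Vπ·L means less drive voltage for a given device length, which is why the pairing of 1.96 V with a 1.3 cm device is notable for an unetched-TFLN design.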

    What is the best spatial distribution to model base station density? A deep dive into two European mobile networks

    This paper studies base station (BS) spatial distributions across different scenarios in urban, rural, and coastal zones, based on real BS deployment data sets obtained from two European countries (Italy and Croatia). The paper considers several representative statistical distributions to characterize the probability density function of the BS spatial density: Poisson, generalized Pareto, Weibull, lognormal, and α-Stable. Based on a thorough comparison with the real data sets, our results show that the α-Stable distribution is the most accurate of these candidates in urban scenarios. This finding holds across different sample area sizes, operators, and cellular technologies (GSM/UMTS/LTE). In rural and coastal scenarios, on the other hand, the lognormal and Weibull distributions tend to fit the real data better. We believe these results can be turned into practical guidelines for BS deployment in cellular network design, informing network performance metrics such as coverage probability, transmission success probability, throughput, and delay.
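
    The comparison methodology the abstract describes (fit each candidate distribution, then rank the fits against the empirical data) can be illustrated with SciPy. A minimal sketch, assuming the measurements are a 1-D array of BS densities per unit area; the array here is synthetic stand-in data, not the paper's data set, and the Kolmogorov-Smirnov statistic is one reasonable goodness-of-fit score among several:

        # Rank candidate distributions for BS spatial density by KS distance.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # Stand-in data: BS counts per km^2 over many sample areas (synthetic).
        density = stats.lognorm(s=0.9, scale=5.0).rvs(size=2000, random_state=rng)

        candidates = {
            "generalized Pareto": stats.genpareto,
            "Weibull": stats.weibull_min,
            "lognormal": stats.lognorm,
            # stats.levy_stable (alpha-Stable) could be added too,
            # but its fit() is very slow on large samples.
        }

        for name, dist in candidates.items():
            params = dist.fit(density)  # maximum-likelihood fit
            ks = stats.kstest(density, dist.cdf, args=params).statistic
            print(f"{name:>18}: KS = {ks:.4f}")  # smaller = better fit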

    Self-guided Few-shot Semantic Segmentation for Remote Sensing Imagery Based on Large Vision Models

    The Segment Anything Model (SAM) exhibits remarkable versatility and zero-shot learning abilities, owing largely to its extensive training data (SA-1B). Recognizing that SAM's category-agnostic nature makes it dependent on manual guidance, we identified unexplored potential in few-shot semantic segmentation tasks for remote sensing imagery. This research introduces a structured framework that automates few-shot semantic segmentation. It builds on the SAM model and enables more efficient generation of semantically discernible segmentation outcomes. Central to our methodology is a novel automatic prompt learning approach that leverages prior guided masks to produce coarse pixel-wise prompts for SAM. Extensive experiments on the DLRSD dataset underline the superiority of our approach, which outperforms other available few-shot methodologies.
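
    To make the prompt-generation step concrete, here is a minimal sketch of the generic idea: converting a coarse prior mask into point prompts for the segment-anything API. This is an illustration under our own assumptions, not the paper's prompt-learning scheme; the checkpoint path, the input image, and the prior map are all placeholders, and a real prior would come from few-shot feature matching:

        # Turn a coarse [H, W] prior map into positive point prompts for SAM.
        import numpy as np
        from segment_anything import sam_model_registry, SamPredictor

        def prompts_from_prior(prior: np.ndarray, k: int = 5):
            """Pick the k highest-confidence pixels of a [H, W] prior map
            as positive point prompts for SAM."""
            top = np.argsort(prior.ravel())[-k:]
            ys, xs = np.unravel_index(top, prior.shape)
            coords = np.stack([xs, ys], axis=1)  # SAM expects (x, y) order
            labels = np.ones(k, dtype=int)       # 1 = foreground point
            return coords, labels

        sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # placeholder path
        predictor = SamPredictor(sam)

        image = np.zeros((512, 512, 3), dtype=np.uint8)  # placeholder RGB image
        prior_mask = np.random.rand(512, 512)            # placeholder prior map

        predictor.set_image(image)
        coords, labels = prompts_from_prior(prior_mask)
        masks, scores, _ = predictor.predict(
            point_coords=coords, point_labels=labels, multimask_output=False)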

    Cure the headache of Transformers via Collinear Constrained Attention

    As practical applications based on Large Language Models continue their rapid progression, the importance of extrapolation performance has grown exponentially in the research domain. In our study, we identified a previously overlooked anomalous behavior in Transformer models that leads to chaotic attention around the closest tokens, which carry the most important information. We have coined this discovery the "headache of Transformers". To address it at its core, we introduce a novel self-attention structure named Collinear Constrained Attention (CoCA). This structure can be seamlessly integrated with existing extrapolation and interpolation methods, and with other optimization strategies designed for traditional Transformer models. We achieve excellent extrapolation performance at inference, even at 16 to 24 times the training sequence length, without any fine-tuning of our model. We have also enhanced CoCA's computational and spatial efficiency to ensure its practicality. We plan to open-source CoCA shortly; in the meantime, our code is available in the appendix for reproducing the experiments.
    Comment: 16 pages, 6 figures
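
    The abstract does not spell out the construction, so the following is only a toy reading of the collinearity idea, not the authors' method: generate each key as a nonnegative scaling of the query within each 2-D rotary (RoPE) pair, so query and key start at angle zero in every rotary plane. All names and shapes here are our assumptions, and the paper's actual definition and efficiency optimizations will differ:

        # Toy sketch: keys constrained to be collinear with queries per rotary pair.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class CollinearKV(nn.Module):
            def __init__(self, dim: int):
                super().__init__()
                self.q_proj = nn.Linear(dim, dim)
                self.scale_proj = nn.Linear(dim, dim // 2)  # one scale per 2-D pair
                self.v_proj = nn.Linear(dim, dim)

            def forward(self, x: torch.Tensor):
                q = self.q_proj(x)                  # [B, T, D]
                s = F.softplus(self.scale_proj(x))  # nonnegative scales, [B, T, D/2]
                s = s.repeat_interleave(2, dim=-1)  # same scale for both coords of a pair
                k = s * q                           # k collinear with q in each pair
                return q, k, self.v_proj(x)

        # Plain (non-rotary) attention just to show the tensors compose.
        x = torch.randn(2, 8, 64)
        q, k, v = CollinearKV(64)(x)
        out = torch.softmax(q @ k.transpose(-2, -1) / 64 ** 0.5, dim=-1) @ v

    Under this constraint the pre-rotation angle between q and k vanishes, so after RoPE the angle in each pair depends only on the relative position, which is one plausible way to read the claimed fix for the near-token anomaly.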