2,377 research outputs found

    On methods to determine bounds on the Q-factor for a given directivity

    Full text link
    This paper revisits and extends the interesting case of bounds on the Q-factor for a given directivity for a small antenna of arbitrary shape. A higher directivity in a small antenna is closely connected with a narrow impedance bandwidth. The relation between bandwidth and a desired directivity is still not fully understood, not even for small antennas. Initial investigations in this direction have related the radius of a circumscribing sphere to the directivity, and bounds on the Q-factor have also been derived for a partial directivity in a given direction. In this paper we derive lower bounds on the Q-factor for a total desired directivity in a given direction for an arbitrarily shaped antenna, formulated as a convex problem using semi-definite relaxation (SDR) techniques. We also show that the relaxed solution is a solution of the original problem of determining the lower Q-factor bound for a total desired directivity. SDR can likewise be used to relax a class of other interesting non-convex constraints in antenna optimization, such as tuning, losses, and front-to-back ratio. We compare two new methods for determining the lowest Q-factor of arbitrarily shaped antennas for a given total directivity, and we also compare our results with full EM simulations of a parasitic element antenna with high directivity.
    Comment: Corrected some minor typos in the previous version
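
    A minimal Python/CVXPY sketch of the semidefinite relaxation idea described in this abstract is given below. The operators W (stored energy), R (radiated power), and D (directivity) are random placeholders standing in for the matrices of an actual discretized antenna model, and the relaxation simply lifts the current to a matrix variable and drops the rank-1 constraint; this is an illustration of the SDR step, not the paper's implementation.

        # Hedged illustration only: W, R, D are random stand-ins, not real antenna operators.
        import numpy as np
        import cvxpy as cp

        n = 20                                   # number of current basis functions (assumed)
        rng = np.random.default_rng(0)

        def rand_psd(m):                         # random symmetric positive semidefinite stand-in
            A = rng.standard_normal((m, m))
            return A @ A.T

        W = rand_psd(n)                          # stored-energy operator (placeholder)
        R = rand_psd(n)                          # radiated-power operator (placeholder)
        D = rand_psd(n)                          # directivity operator (placeholder)
        d0 = 1.5                                 # desired total directivity level (assumed)

        # Lift the current vector I to X = I I^T and drop the rank-1 constraint (the SDR step).
        X = cp.Variable((n, n), symmetric=True)
        constraints = [X >> 0,                   # X positive semidefinite
                       cp.trace(R @ X) == 1,     # normalize radiated power
                       cp.trace(D @ X) >= d0]    # enforce the desired directivity
        problem = cp.Problem(cp.Minimize(cp.trace(W @ X)), constraints)
        problem.solve()
        print("Relaxed lower bound on the Q-like objective:", problem.value)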

    Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in Dynamic Scenes

    Full text link
    Merging multi-exposure images is a common approach for obtaining high dynamic range (HDR) images, with the primary challenge being the avoidance of ghosting artifacts in dynamic scenes. Recent methods have proposed using deep neural networks for deghosting. However, these methods typically rely on sufficient data with HDR ground truths, which are difficult and costly to collect. In this work, to eliminate the need for labeled data, we propose SelfHDR, a self-supervised HDR reconstruction method that only requires dynamic multi-exposure images during training. Specifically, SelfHDR learns a reconstruction network under the supervision of two complementary components, which can be constructed from multi-exposure images and focus on HDR color and structure, respectively. The color component is estimated from aligned multi-exposure images, while the structure one is generated through a structure-focused network that is supervised by the color component and an input reference (e.g., medium-exposure) image. During testing, the learned reconstruction network is directly deployed to predict an HDR image. Experiments on real-world images demonstrate that SelfHDR achieves superior results compared with state-of-the-art self-supervised methods, and comparable performance to supervised ones. Code is available at https://github.com/cszhilu1998/SelfHDR
    Comment: ICLR 2024
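
    The two-component supervision can be pictured with a short PyTorch sketch. The tiny networks, the crude color-component estimate, and the equal loss weights below are placeholders invented for illustration; the actual SelfHDR models and losses live in the linked repository.

        # Hedged illustration: architectures, targets, and losses are simplified stand-ins.
        import torch
        import torch.nn as nn

        class TinyNet(nn.Module):                # stand-in for the reconstruction/structure networks
            def __init__(self, in_ch):
                super().__init__()
                self.body = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                                          nn.Conv2d(32, 3, 3, padding=1))
            def forward(self, x):
                return self.body(x)

        recon_net = TinyNet(in_ch=9)             # consumes three aligned exposures (3 x 3 channels)
        struct_net = TinyNet(in_ch=3)            # fed the medium-exposure reference image
        opt = torch.optim.Adam(list(recon_net.parameters()) + list(struct_net.parameters()), lr=1e-4)
        l1 = nn.L1Loss()

        exposures = torch.rand(1, 9, 64, 64)     # aligned multi-exposure stack (dummy data)
        reference = exposures[:, 3:6]            # medium-exposure reference (assumed channel layout)
        color_target = exposures.view(1, 3, 3, 64, 64).mean(1)   # crude stand-in for the color component

        structure_target = struct_net(reference)             # structure component from the reference
        prediction = recon_net(exposures)                     # HDR prediction
        loss = (l1(prediction, color_target)                  # color supervision
                + l1(prediction, structure_target.detach())   # structure supervision
                + l1(structure_target, color_target))         # structure net supervised by the color component
        opt.zero_grad(); loss.backward(); opt.step()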

    A Three-Dimensional Cooperative Guidance Law of Multimissile System

    Get PDF
    In order to conduct saturation attacks on a static target, the cooperative guidance problem of a multimissile system is studied. A three-dimensional guidance model is built using vector calculation, and the classic proportional navigation guidance (PNG) law is extended to three dimensions. Based on this guidance law, a distributed cooperative guidance strategy is proposed, and a consensus protocol is designed to coordinate the time-to-go commands of all missiles. Then an expert system, which contains two extreme learning machines (ELMs), is developed to regulate the local proportional coefficient of each missile according to the command. All missiles can arrive at the target simultaneously under the assumption that the multimissile network is connected. A simulation scenario is given to demonstrate the validity of the proposed method.
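
    A small NumPy sketch of three-dimensional proportional navigation against a static target is given below. The consensus protocol that coordinates time-to-go and the ELM-based tuning of the navigation gain are omitted, and the gain, speeds, and intercept threshold are assumed values for illustration, not the paper's.

        # Hedged illustration: single missile, fixed gain, no cooperative/ELM layer.
        import numpy as np

        def png_accel(pos_m, vel_m, pos_t, vel_t, N=3.0):
            """Vector PNG command a = N * (Omega x v_m), with Omega the line-of-sight rate."""
            r_rel = pos_t - pos_m
            v_rel = vel_t - vel_m
            omega = np.cross(r_rel, v_rel) / np.dot(r_rel, r_rel)
            return N * np.cross(omega, vel_m)

        dt = 0.01
        target = np.array([5000.0, 3000.0, 1000.0])   # static target position, metres (assumed)
        pos = np.zeros(3)                             # missile position
        vel = np.array([250.0, 50.0, 20.0])           # missile velocity, m/s (assumed)

        for step in range(200000):
            if np.linalg.norm(target - pos) < 10.0:   # within 10 m: count as intercept
                print(f"intercept at t = {step * dt:.2f} s")
                break
            acc = png_accel(pos, vel, target, np.zeros(3))
            pos = pos + vel * dt                      # simple Euler integration
            vel = vel + acc * dt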

    An Abstract Description Method of Map-Reduce-Merge Using Haskell

    Get PDF
    Map-Reduce-Merge is an improved parallel programming model based on Map-Reduce for cloud computing environments. Through the new Merge module, Map-Reduce-Merge can process multiple related heterogeneous datasets more efficiently. In order to demonstrate the validity and effectiveness of this new model, we present a rigorous description of the Map-Reduce-Merge model using Haskell. Firstly, we describe the basic program skeleton of the Map-Reduce-Merge programming model. Secondly, an abstract description of the Merge module is presented by analyzing the structure and function of the Merge module, with Haskell as the description tool. Thirdly, we evaluate the Map-Reduce-Merge model on the basis of our description. Our abstract description captures the functional characteristics of the Map-Reduce-Merge model, which can provide a theoretical basis for designing more efficient parallel programming models to process join operations.
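
    The program skeleton can be illustrated with a compact Python sketch (the paper's formal description uses Haskell). The relational join performed by the merger below is only one possible merge strategy, and the function names and toy datasets are invented for the example.

        # Hedged illustration of the Map-Reduce-Merge skeleton with a toy join.
        from collections import defaultdict

        def map_reduce(data, mapper, reducer):
            groups = defaultdict(list)
            for k1, v1 in data:
                for k2, v2 in mapper(k1, v1):        # map:    (k1, v1) -> [(k2, v2)]
                    groups[k2].append(v2)
            return {k: reducer(k, vs) for k, vs in groups.items()}   # reduce: (k2, [v2]) -> v3

        def merge(left, right, merger):
            # merge: combine two reduced, heterogeneous datasets on their related keys
            return [out for k in left if k in right for out in merger(k, left[k], right[k])]

        # Toy datasets: per-order amounts and a customer table.
        orders = [(1, ("alice_id", 10)), (2, ("bob_id", 5)), (3, ("alice_id", 7))]
        customers = [("alice_id", "Alice"), ("bob_id", "Bob")]

        totals = map_reduce(orders, lambda k, v: [(v[0], v[1])], lambda k, vs: sum(vs))
        names = map_reduce(customers, lambda k, v: [(k, v)], lambda k, vs: vs[0])
        print(merge(totals, names, lambda k, total, name: [(name, total)]))
        # -> [('Alice', 17), ('Bob', 5)]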

    How Multilingual is Multilingual LLM?

    Full text link
    Large Language Models (LLMs), trained predominantly on extensive English data, often exhibit limitations when applied to other languages. Current research is primarily focused on enhancing the multilingual capabilities of these models through various tuning strategies. Despite their effectiveness in certain languages, the understanding of the multilingual abilities of LLMs remains incomplete. This study evaluates the multilingual capacity of LLMs by conducting an exhaustive analysis across 101 languages and classifies languages with similar characteristics into four distinct quadrants. By delving into each quadrant, we shed light on the rationale behind their categorization and offer actionable guidelines for tuning these languages. Extensive experiments reveal that existing LLMs possess multilingual capabilities that surpass our expectations, and that we can significantly improve the multilingual performance of LLMs by focusing on the distinct attributes present in each quadrant.
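
    The quadrant view can be mimicked with a few lines of Python. The two axes used here (pre-training data share and benchmark accuracy), the thresholds, and the toy scores are assumptions made for illustration; the paper defines its own criteria for the four quadrants.

        # Hedged illustration: axes, thresholds, and the toy scores are invented.
        def quadrant(data_share, accuracy, data_cut=0.01, acc_cut=0.5):
            """Place a language in one of four quadrants by resource level and performance."""
            high_data = data_share >= data_cut
            high_acc = accuracy >= acc_cut
            return {(True, True): "high-resource / strong",
                    (True, False): "high-resource / weak",
                    (False, True): "low-resource / strong",
                    (False, False): "low-resource / weak"}[(high_data, high_acc)]

        toy_scores = {"en": (0.45, 0.85), "de": (0.04, 0.72), "sw": (0.002, 0.55), "yo": (0.001, 0.21)}
        for lang, (share, acc) in toy_scores.items():
            print(lang, "->", quadrant(share, acc))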