
    Non-Archimedean meromorphic solutions of functional equations

    In this paper, we discuss meromorphic solutions of functional equations over non-Archimedean fields, and prove analogues of the Clunie lemma, the Malmquist-type theorem, and the Mokhon'ko theorem.

    Dynamic dissipative cooling of a mechanical oscillator in strong-coupling optomechanics

    Cooling of mesoscopic mechanical resonators is a central goal in cavity optomechanics. Here, in the strong optomechanical coupling regime, we propose to dynamically control the cavity dissipation, which significantly accelerates the cooling process while strongly suppressing heating noise. Furthermore, the dynamic control is capable of overcoming quantum backaction and reducing the cooling limit by several orders of magnitude. Dynamic dissipation control provides new insights for tailoring the optomechanical interaction and offers the prospect of exploring macroscopic quantum physics. (Comment: accepted in Physical Review Letters)

    1-{(1Z)-1-[3-(2,4-Dichlorophenoxy)propoxy]-1-(2,4-difluorophenyl)prop-1-en-2-yl}-1H-1,2,4-triazole

    In the title compound, C20H17Cl2F2N3O2, the triazole ring makes dihedral angles of 28.0 (3) and 72.5 (2)° with the 2,4-dichlorophenyl and 2,4-difluorophenyl rings, respectively, and the molecule adopts a Z-conformation about the C=C double bond. In the crystal, C—H⋯O and C—H⋯N hydrogen bonds link the molecules.

    An Efficient and Reliable Asynchronous Federated Learning Scheme for Smart Public Transportation

    Since traffic conditions change over time, machine learning models that predict traffic flows must be updated continuously and efficiently in smart public transportation. Federated learning (FL) is a distributed machine learning scheme that allows buses to receive model updates without waiting for model training on the cloud. However, FL is vulnerable to poisoning or DDoS attacks, since buses travel in public. Some work introduces blockchain to improve reliability, but the additional latency from the consensus process reduces the efficiency of FL. Asynchronous Federated Learning (AFL) is a scheme that reduces aggregation latency to improve efficiency, but its learning performance is unstable due to unreasonably weighted local models. To address the above challenges, this paper offers a blockchain-based asynchronous federated learning scheme with a dynamic scaling factor (DBAFL). Specifically, a novel committee-based consensus algorithm for the blockchain improves reliability at the lowest possible cost in time. Meanwhile, the devised dynamic scaling factor allows AFL to assign reasonable weights to stale local models. Extensive experiments conducted on heterogeneous devices validate the superior learning performance, efficiency, and reliability of DBAFL.
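The abstract does not give the exact form of DBAFL's dynamic scaling factor, but the general idea of staleness-weighted asynchronous aggregation can be sketched as follows. The polynomial-decay weight and the function names here are illustrative assumptions, not the authors' definitions:

```python
import numpy as np

def staleness_weight(staleness, alpha=0.5, beta=0.5):
    """Hypothetical dynamic scaling factor: the influence of a local
    model decays polynomially with its staleness (the number of global
    rounds that elapsed while it was being trained)."""
    return alpha * (1.0 + staleness) ** (-beta)

def async_aggregate(global_model, local_model, staleness):
    """Blend a (possibly stale) local update into the global model,
    down-weighting updates computed against an old global state."""
    w = staleness_weight(staleness)
    return (1.0 - w) * global_model + w * local_model

global_model = np.zeros(4)
fresh = async_aggregate(global_model, np.ones(4), staleness=0)   # strong influence
stale = async_aggregate(global_model, np.ones(4), staleness=10)  # damped influence
```

A fresh update (staleness 0) moves the global model much further than a stale one, which is the instability the dynamic scaling factor is meant to control.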

    From Wide to Deep: Dimension Lifting Network for Parameter-efficient Knowledge Graph Embedding

    Knowledge graph embedding (KGE), which maps entities and relations into vector representations, is essential for downstream applications. Conventional KGE methods require high-dimensional representations to learn the complex structure of a knowledge graph, but this leads to oversized model parameters. Recent advances reduce parameters via low-dimensional entity representations, while developing techniques (e.g., knowledge distillation or reinvented representation forms) to compensate for the reduced dimension. However, such operations introduce complicated computations and model designs that may not benefit large knowledge graphs. To seek a simple strategy for improving the parameter efficiency of conventional KGE models, we take inspiration from the observation that deeper neural networks require exponentially fewer parameters than wider networks to achieve comparable expressiveness for compositional structures. We view all entity representations as a single-layer embedding network; conventional KGE methods that adopt high-dimensional entity representations are equivalent to widening this embedding network to gain expressiveness. To achieve parameter efficiency, we instead propose a deeper embedding network for entity representations, i.e., a narrow entity embedding layer plus a multi-layer dimension lifting network (LiftNet). Experiments on three public datasets show that, by integrating LiftNet, four conventional KGE methods with 16-dimensional representations achieve link prediction accuracy comparable to the original models with 512-dimensional representations, saving 68.4% to 96.9% of parameters.
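The wide-to-deep trade-off described above can be sketched as a narrow embedding table followed by a small lifting MLP. The layer sizes and the two-layer structure below are illustrative assumptions (the abstract specifies only the 16- and 512-dimensional endpoints), shown with NumPy for self-containment:

```python
import numpy as np

rng = np.random.default_rng(0)

class LiftNet:
    """Sketch of a dimension-lifting network: a narrow entity embedding
    is lifted to the working dimension consumed by the KGE scoring
    function, replacing a wide per-entity embedding table."""
    def __init__(self, n_entities, d_low=16, d_hidden=64, d_high=512):
        self.emb = rng.normal(0.0, 0.1, (n_entities, d_low))  # narrow embedding layer
        self.W1 = rng.normal(0.0, 0.1, (d_low, d_hidden))     # lifting layer 1
        self.W2 = rng.normal(0.0, 0.1, (d_hidden, d_high))    # lifting layer 2

    def __call__(self, idx):
        h = np.tanh(self.emb[idx] @ self.W1)
        return h @ self.W2  # lifted 512-dimensional representation

    def n_params(self):
        return self.emb.size + self.W1.size + self.W2.size

lift = LiftNet(n_entities=1000)
e = lift(42)  # 512-dimensional entity representation from a 16-dimensional embedding
```

With 1000 entities, the narrow table plus lifting weights hold about 50k parameters, versus 512k for a wide 512-dimensional table, a saving within the 68.4%–96.9% range the abstract reports.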