13,341 research outputs found

    Large Margin Neural Language Model

    Full text link
    We propose a large margin criterion for training neural language models. Conventionally, neural language models are trained by minimizing perplexity (PPL) on grammatical sentences. However, we demonstrate that PPL may not be the best metric to optimize in some tasks, and further propose a large margin formulation. The proposed method aims to enlarge the margin between the "good" and "bad" sentences in a task-specific sense. It is trained end-to-end and can be widely applied to tasks that involve re-scoring of generated text. Compared with minimum-PPL training, our method achieves up to a 1.1 WER reduction for speech recognition and a 1.0 BLEU increase for machine translation. Comment: 9 pages. Accepted as a long paper in EMNLP201
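    A minimal PyTorch sketch of the general idea described above: instead of minimizing perplexity alone, train with a pairwise hinge loss that pushes the model's score for a task-preferred ("good") hypothesis above that of a competing ("bad") one by a margin. The `lm` interface, the sentence-scoring helper, and the margin value are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def lm_log_score(lm, token_ids, lengths):
    """Sentence log-score: sum of next-token log-probabilities under the LM.

    Assumes `lm(inputs)` returns logits of shape (batch, seq_len, vocab);
    the interface and shapes are illustrative, not the paper's code.
    """
    logits = lm(token_ids[:, :-1])                        # predict tokens 1..T-1
    log_probs = F.log_softmax(logits, dim=-1)
    targets = token_ids[:, 1:].unsqueeze(-1)              # shifted targets
    tok_lp = log_probs.gather(-1, targets).squeeze(-1)    # (batch, T-1)
    positions = torch.arange(tok_lp.size(1), device=token_ids.device)
    mask = positions[None, :] < (lengths - 1)[:, None]    # ignore padding
    return (tok_lp * mask).sum(dim=1)

def large_margin_loss(lm, good, good_len, bad, bad_len, margin=1.0):
    """Hinge loss requiring score(good) >= score(bad) + margin (margin value hypothetical)."""
    s_good = lm_log_score(lm, good, good_len)
    s_bad = lm_log_score(lm, bad, bad_len)
    return F.relu(margin - (s_good - s_bad)).mean()
```

    In a re-scoring setting, the good/bad pairs would typically come from n-best lists produced by the speech-recognition or translation system being re-scored.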

    The endomorphism of Grassmann graphs

    Full text link
    A graph is called a pseudo-core if every endomorphism is either an automorphism or a colouring. In this paper, we show that every Grassmann graph $J_q(n,m)$ is a pseudo-core. Moreover, the Grassmann graph $J_q(n,m)$ is a core whenever $m$ and $n-m+1$ are not relatively prime, and $J_q(2pk-2, pk-1)$ is a core whenever $p,k\geq 2$. Comment: 8 pages
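    For context, a short restatement in standard notation of the objects named above (textbook definitions, not taken from the paper itself):

```latex
% Grassmann graph J_q(n,m): vertices are the m-dimensional subspaces of an
% n-dimensional vector space over the finite field F_q; two subspaces are
% adjacent exactly when their intersection has dimension m-1.
\[
  V\bigl(J_q(n,m)\bigr) = \{\, A \le \mathbb{F}_q^{\,n} : \dim A = m \,\},
  \qquad
  A \sim B \iff \dim(A \cap B) = m-1 .
\]
% A graph is a core if its only endomorphisms are automorphisms; the
% "pseudo-core" property above relaxes this by also allowing colouring
% endomorphisms.
```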

    Design and Implementation of a LAN Network for the Company Palinda

    Get PDF
    Managing LAN and WAN networks has allowed companies and institutions to optimize resource use through a centralized network, making information available securely and quickly. This project seeks to integrate communication services, enabling data transmission from a central point to the different departments of PALINDA. Analyzing the infrastructure requirements makes it possible to determine a solution that uses the available technical resources at low financial cost. PALINDA currently has no communication technology infrastructure, so managing the network as a single system will streamline procedures and processes so that users obtain up-to-date, systematized information in real time.

    Stochastic Controlled Averaging for Federated Learning with Communication Compression

    Full text link
    Communication compression, a technique aiming to reduce the volume of information transmitted over the air, has gained great interest in Federated Learning (FL) for its potential to alleviate communication overhead. However, communication compression brings forth new challenges in FL due to the interplay of compression-incurred information distortion and inherent characteristics of FL such as partial participation and data heterogeneity. Despite recent developments, the potential of compressed FL approaches has not been fully exploited: the existing approaches either cannot accommodate arbitrary data heterogeneity or partial participation, or require stringent conditions on compression. In this paper, we revisit the seminal stochastic controlled averaging method by proposing an equivalent but more efficient and simplified formulation with halved uplink communication costs. Building upon this implementation, we propose two compressed FL algorithms, SCALLION and SCAFCOM, to support unbiased and biased compression, respectively. Both proposed methods outperform the existing compressed FL methods in terms of communication and computation complexities. Moreover, SCALLION and SCAFCOM accommodate arbitrary data heterogeneity and do not make any additional assumptions on compression errors. Experiments show that SCALLION and SCAFCOM can match the performance of corresponding full-precision FL approaches with substantially reduced uplink communication, and outperform recent compressed FL methods under the same communication budget. Comment: 45 pages, 4 figures
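    A minimal NumPy sketch of the two ingredients the abstract combines: stochastic controlled averaging in the SCAFFOLD style (control variates that correct for client drift under data heterogeneity) and a compression operator applied to the uplink messages. The rand-k compressor, full client participation, step sizes, and the choice to compress model deltas are illustrative assumptions; this is not the paper's SCALLION/SCAFCOM update rule.

```python
import numpy as np

def rand_k(v, k, rng):
    """Unbiased rand-k sparsification: keep k random coordinates, rescale by d/k
    so that E[rand_k(v)] = v (an example of an unbiased compressor)."""
    d = v.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(v)
    out[idx] = v[idx] * (d / k)
    return out

def local_update(x, c_global, c_i, grad_fn, lr=0.05, steps=10):
    """SCAFFOLD-style local steps: gradients are corrected by (c_global - c_i)
    to counter client drift; returns the local model and updated control variate."""
    y = x.copy()
    for _ in range(steps):
        y -= lr * (grad_fn(y) - c_i + c_global)
    c_i_new = c_i - c_global + (x - y) / (lr * steps)
    return y, c_i_new

def fl_round(x, c_global, c_locals, client_grads, k, rng, server_lr=1.0):
    """One round with full participation: each client uploads a *compressed*
    model delta; the server averages them (illustrative sketch only)."""
    deltas, c_deltas = [], []
    for i, grad_fn in enumerate(client_grads):
        y_i, c_i_new = local_update(x, c_global, c_locals[i], grad_fn)
        deltas.append(rand_k(y_i - x, k, rng))        # compressed uplink message
        c_deltas.append(c_i_new - c_locals[i])
        c_locals[i] = c_i_new
    x_new = x + server_lr * np.mean(deltas, axis=0)
    c_global_new = c_global + np.mean(c_deltas, axis=0)
    return x_new, c_global_new

# Toy usage: two clients with heterogeneous quadratic objectives (optima at +1 and -1).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    targets = [np.full(20, 1.0), np.full(20, -1.0)]
    client_grads = [lambda w, t=t: w - t for t in targets]   # grad of 0.5*||w - t||^2
    x, c_global = np.zeros(20), np.zeros(20)
    c_locals = [np.zeros(20) for _ in targets]
    for _ in range(200):
        x, c_global = fl_round(x, c_global, c_locals, client_grads, k=5, rng=rng)
    print("distance to the average optimum (zero vector):", np.linalg.norm(x))
```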