
    How hybrid working from home works out

    Hybrid working from home (WFH), whereby employees work a mix of days at home and at work each week, has become dominant for graduate employees in the US. This paper evaluates a randomized control trial on 1,612 engineering, marketing, and finance employees of a large technology firm, in which employees with odd-numbered birthdays could WFH on Wednesdays and Fridays while employees with even-numbered birthdays remained full time in the office. There are four key results. First, WFH reduced attrition rates by 35% and improved self-reported work satisfaction scores, highlighting the considerable value employees place on this amenity. Second, WFH reduced hours worked on home days but increased them on other work days and the weekend, highlighting how home working alters the structure of the working week. Third, WFH employees increased individual messaging and group video call communication, even when in the office, reflecting the impact of remote work on working patterns. Finally, while there was no significant impact of WFH on performance ratings or promotions, lines of code written increased by 8% and employees' self-assessed productivity was up 1.8%, suggesting a small positive impact. Given these benefits for retention, job satisfaction, and productivity, the firm extended hybrid WFH to the entire company after the experiment ended.

    Neural Snowball for Few-Shot Relation Learning

    Knowledge graphs typically undergo open-ended growth, with new relations added over time. This cannot be well handled by relation extraction methods that focus on pre-defined relations with sufficient training data. To address new relations with only few-shot instances, we propose a novel bootstrapping approach, Neural Snowball, which learns new relations by transferring semantic knowledge about existing relations. More specifically, we use Relational Siamese Networks (RSN) to learn a metric of relational similarity between instances, based on existing relations and their labeled data. Then, given a new relation and its few-shot instances, we use RSN to accumulate reliable instances from unlabeled corpora; these instances are used to train a relation classifier, which can further identify new facts of the new relation. The process is conducted iteratively, like a snowball. Experiments show that our model can gather high-quality instances for better few-shot relation learning and achieves significant improvement over baselines. Code and datasets are released at https://github.com/thunlp/Neural-Snowball. Comment: Accepted by AAAI 2020.
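
    The snowball loop described above can be summarized in a short sketch. This is a hedged illustration, not the released implementation: `rsn`, `train_classifier`, and the threshold values are hypothetical stand-ins for the paper's components.

```python
# Hypothetical sketch of the iterative snowball process; `rsn` and
# `train_classifier` stand in for the paper's RSN and relation classifier.

def neural_snowball(seed_instances, unlabeled_corpus, rsn, train_classifier,
                    rounds=3, sim_threshold=0.8, conf_threshold=0.9):
    """Iteratively grow a reliable training set for a new relation."""
    reliable = list(seed_instances)
    classifier = None
    for _ in range(rounds):
        # Phase 1: use the learned relational similarity metric to pull
        # unlabeled instances that look like the ones we already trust.
        candidates = [x for x in unlabeled_corpus
                      if max(rsn.similarity(x, r) for r in reliable) > sim_threshold]
        # Phase 2: (re)train the relation classifier on the reliable set,
        # then keep only candidates it labels with high confidence.
        classifier = train_classifier(reliable)
        reliable += [x for x in candidates
                     if classifier.confidence(x) > conf_threshold]
    return classifier, reliable
```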

    Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models

    Parameter-shared pre-trained language models (PLMs) have emerged as a successful approach in resource-constrained environments, enabling substantial reductions in model storage and memory costs without significant performance compromise. However, parameter sharing does not alleviate the computational burden of inference, which impedes its practicality in situations with stringent latency requirements or limited computational resources. Building upon neural ordinary differential equations (ODEs), we introduce a straightforward technique to enhance the inference efficiency of parameter-shared PLMs. Additionally, we propose a simple pre-training technique that leads to fully or partially shared models capable of achieving even greater inference acceleration. The experimental results demonstrate the effectiveness of our methods on both autoregressive and autoencoding PLMs, providing novel insights into more efficient utilization of parameter-shared models in resource-constrained settings. Comment: Findings of EMNLP 2023.
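
    To make the ODE connection concrete, here is one hedged reading of the abstract (not necessarily the paper's actual method): treat the tied Transformer block as the derivative function of an ODE, so the usual N repeated applications correspond to N explicit-Euler steps, and taking fewer, larger steps trades a little fidelity for fewer layer evaluations at inference. `shared_layer` below is a hypothetical stand-in for the tied block.

```python
# Hedged sketch: run a weight-shared block as an explicit-Euler ODE solver.
# `shared_layer` is a hypothetical callable for the tied Transformer block;
# h is the hidden-state tensor entering the shared stack.

def euler_shared_stack(h, shared_layer, num_steps, total_time=1.0):
    # num_steps equal to the original depth N mimics the usual N-fold
    # reuse of the block; a smaller num_steps means fewer layer calls
    # (faster inference) at some cost in accuracy.
    dt = total_time / num_steps
    for _ in range(num_steps):
        h = h + dt * shared_layer(h)  # one residual update == one Euler step
    return h
```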

    Denoising Relation Extraction from Document-level Distant Supervision

    Distant supervision (DS) has been widely used to generate auto-labeled data for sentence-level relation extraction (RE), improving RE performance. However, the existing success of DS cannot be directly transferred to the more challenging task of document-level relation extraction (DocRE), since the inherent noise in DS may be multiplied at the document level and significantly harm RE performance. To address this challenge, we propose a novel pre-trained model for DocRE that denoises document-level DS data via multiple pre-training tasks. Experimental results on the large-scale DocRE benchmark show that our model can capture useful information from noisy DS data and achieve promising results. Comment: EMNLP 2020 short paper.
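
    For readers unfamiliar with DS, the sketch below (illustrative only, not the paper's pipeline) shows why noise multiplies at the document level: every co-occurring entity pair that matches a KB fact inherits that label, whether or not the document actually expresses the relation.

```python
# Illustrative document-level distant supervision; not the paper's code.

def distant_label(doc_entities, kb_triples):
    """doc_entities: set of entity ids mentioned in one document.
    kb_triples: iterable of (head, relation, tail) facts from a KB.
    Returns noisy labels {(head, tail): [relations]} for the document."""
    labels = {}
    for head, rel, tail in kb_triples:
        if head in doc_entities and tail in doc_entities:
            # The document may never actually state this relation; this
            # mismatch is the noise the pre-training tasks must denoise.
            labels.setdefault((head, tail), []).append(rel)
    return labels
```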

    Comparative Study on the Effects and Mechanisms of Salt and Alkali on the Quality Formation of Noodles

    In order to systematically investigate the quality differences between salted and alkaline noodles and the underlying mechanisms, the effects of NaCl and K2CO3 on the quality formation of noodles were studied by evaluating the farinograph, extensograph, and viscosity properties of wheat flour, as well as the cooking properties, texture properties, and storage stability of noodles. Additionally, the microstructure, protein aggregation, and volatile components were investigated to explore the underlying mechanisms of the quality differences. The results showed that salt improved the extensibility of dough and noodles, endowed noodles with higher smoothness and elasticity, and had little effect on starch viscosity, while alkali enhanced the tensile resistance of dough and the hardness and breaking force of cooked noodles, and increased the gelatinization viscosity of starch. The cooking loss of noodles increased with the addition of either salt or alkali. Both salt and alkali significantly inhibited the increase of total plate count (TPC) in fresh noodles and consequently enhanced storage stability, but 0.5% K2CO3 accelerated the browning rate. Scanning electron microscopy (SEM) showed that NaCl induced a smooth surface microstructure, while K2CO3 resulted in a rough noodle surface; K2CO3 also significantly increased the rate and extent of thermal polymerization of proteins in noodles. NaCl increased the variety and concentration of volatile components in noodles, making their aroma more intense, though the flavor remained similar to that of the control. K2CO3 significantly changed the flavor of noodles and resulted in the generation of unique aldehyde compounds.

    Emergent Modularity in Pre-trained Transformers

    This work examines the presence of modularity in pre-trained Transformers, a feature commonly found in human brains and thought to be vital for general intelligence. In analogy to human brains, we consider two main characteristics of modularity: (1) functional specialization of neurons: we evaluate whether each neuron is mainly specialized in a certain function, and find that the answer is yes; (2) function-based neuron grouping: we explore finding a structure that groups neurons into modules by function, with each module working for its corresponding function. Given the enormous number of possible structures, we focus on Mixture-of-Experts as a promising candidate, which partitions neurons into experts and usually activates different experts for different inputs. Experimental results show that there are functional experts, in which neurons specialized in a certain function are clustered. Moreover, perturbing the activations of functional experts significantly affects the corresponding function. Finally, we study how modularity emerges during pre-training and find that the modular structure stabilizes at an early stage, faster than neuron stabilization. This suggests that Transformers first construct the modular structure and then learn fine-grained neuron functions. Our code and data are available at https://github.com/THUNLP/modularity-analysis. Comment: Findings of ACL 2023.
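
    A hedged sketch of the perturbation probe mentioned above, in PyTorch: silence one candidate functional expert (a group of feed-forward neurons) and compare a function-specific score before and after. The module path `model.layers[layer].ffn` and the `eval_fn` callback are hypothetical and would need adapting to the actual model.

```python
def perturb_expert(model, layer, neuron_ids, eval_fn):
    """Zero one neuron group's activations via a forward hook and return
    the drop in eval_fn (e.g., accuracy on inputs needing that function)."""
    def ablate(module, inputs, output):
        output[..., neuron_ids] = 0.0  # silence the expert's neurons
        return output

    baseline = eval_fn(model)
    # Hypothetical module path; adapt to the model's actual layout.
    handle = model.layers[layer].ffn.register_forward_hook(ablate)
    ablated = eval_fn(model)
    handle.remove()
    return baseline - ablated  # a large drop => neurons carry that function
```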

    Multi-task Neural Network for Non-discrete Attribute Prediction in Knowledge Graphs

    Many popular knowledge graphs such as Freebase, YAGO or DBpedia maintain a list of non-discrete attributes for each entity. Intuitively, attributes such as height, price or population count can richly characterize entities in knowledge graphs. This additional source of information may help to alleviate the inherent sparsity and incompleteness problems that are prevalent in knowledge graphs. Unfortunately, many state-of-the-art relational learning models ignore this information due to the challenge of dealing with non-discrete data types in inherently binary-natured knowledge graphs. In this paper, we propose a novel multi-task neural network approach for both encoding and predicting non-discrete attribute information in a relational setting. Specifically, we train a neural network for triplet prediction along with a separate network for attribute value regression. Via multi-task learning, we are able to learn representations of entities, relations and attributes that encode information about both tasks. Moreover, such attributes are central to many predictive tasks not only as an information source but also as a prediction target. Therefore, models that can encode, incorporate and predict such information in a relational learning context are highly attractive. We show that our approach outperforms many state-of-the-art methods on the tasks of relational triplet classification and attribute value prediction. Comment: Accepted at CIKM 2017.
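
    A minimal PyTorch sketch of the shared-representation idea follows. It is illustrative only: the TransE-style triplet score is a stand-in, not necessarily the paper's scoring function, and all module names are hypothetical. The key point is that both task heads read the same entity embedding table, so triplet prediction and attribute regression shape one embedding space.

```python
import torch
import torch.nn as nn

class MultiTaskKG(nn.Module):
    """Two task heads over one shared entity embedding table."""
    def __init__(self, n_entities, n_relations, n_attributes, dim=100):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)   # shared by both tasks
        self.rel = nn.Embedding(n_relations, dim)
        self.attr = nn.Embedding(n_attributes, dim)
        self.reg_head = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def score_triplet(self, h, r, t):
        # TransE-style plausibility of (head, relation, tail); a stand-in
        # for whatever scoring function the paper actually uses.
        return -torch.norm(self.ent(h) + self.rel(r) - self.ent(t), dim=-1)

    def predict_attribute(self, e, a):
        # Regress a non-discrete value such as height or population count.
        x = torch.cat([self.ent(e), self.attr(a)], dim=-1)
        return self.reg_head(x).squeeze(-1)
```

    Training would alternate (or sum) losses from both heads, so gradients from attribute regression also refine the entity embeddings used for triplet classification.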