
    Cost-saving or Cost-enhancing Mergers: the Impact of the Distribution of Roles in Oligopoly

    Keywords: horizontal merger, efficiency gains, efficiency losses, Stackelberg oligopoly, market power

    NPRF: A Neural Pseudo Relevance Feedback Framework for Ad-hoc Information Retrieval

    Pseudo-relevance feedback (PRF) is commonly used to boost the performance of traditional information retrieval (IR) models by using top-ranked documents to identify and weight new query terms, thereby reducing the effect of query-document vocabulary mismatches. While neural retrieval models have recently demonstrated strong results for ad-hoc retrieval, combining them with PRF is not straightforward due to incompatibilities between existing PRF approaches and neural architectures. To bridge this gap, we propose an end-to-end neural PRF framework that can be used with existing neural IR models by embedding different neural models as building blocks. Extensive experiments on two standard test collections confirm the effectiveness of the proposed NPRF framework in improving the performance of two state-of-the-art neural IR models. Comment: Full paper in EMNLP 201
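    The first step this abstract describes, classic pseudo-relevance feedback, can be sketched as follows. This is a minimal Rocchio-style term-counting illustration, not the NPRF neural model itself; the function and variable names are hypothetical:

    ```python
    from collections import Counter

    def prf_expand(query_terms, top_docs, k_terms=5):
        """Classic pseudo-relevance feedback: treat the top-ranked documents
        as if they were relevant, and expand the query with their most
        frequent terms (ignoring terms already in the query)."""
        counts = Counter()
        for doc in top_docs:
            counts.update(t for t in doc.split() if t not in query_terms)
        expansion = [t for t, _ in counts.most_common(k_terms)]
        return list(query_terms) + expansion

    docs = ["neural ranking model for retrieval",
            "neural retrieval with relevance feedback"]
    print(prf_expand(["retrieval"], docs, k_terms=2))
    # → ['retrieval', 'neural', 'ranking']
    ```

    NPRF replaces this hand-crafted counting with learned neural interactions between the query, the feedback documents, and the candidate documents, which is exactly where the incompatibility with classic PRF pipelines arises.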

    Barriers to Financial Compensation for Artists in the Recording Industry in the Digital Age

    For decades, frequent technological advances have led consumers through a succession of music-listening methods, each becoming obsolete as more easily accessible technologies emerged. This change in music consumption methods is often detrimental to parties in the recording industry. The digitalization of the recording industry has allowed consumers to obtain music through means other than physical purchase, leading to well-documented financial insecurity for artists (Eiriz & Leite, 2017). In 2018, the Music Industry Research Association (MIRA) conducted a survey of 1,227 musicians and found that 61% of the group agreed that their music-related income is not enough to cover their living expenses (MIRA, 2018). For this reason, frequent attempts to deter widespread copyright infringement have been made. However, the aggressive litigation strategy of the recording industry and the development of streaming services as a viable music consumption method have instead decreased sales and negatively impacted artists’ revenue from the recording industry (Fedock, 2005; Marshall, 2015).

    Building high-level features using large scale unsupervised learning

    We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a 9-layered locally connected sparse autoencoder with pooling and local contrast normalization on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200x200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting with these learned features, we trained our network to obtain 15.8% accuracy in recognizing 20,000 object categories from ImageNet, a leap of 70% relative improvement over the previous state-of-the-art.
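    The core idea in this abstract, learning features from unlabeled data by reconstruction, can be sketched at toy scale. This is a single-layer, tied-weight sparse autoencoder in NumPy on random data, purely illustrative: the paper's model is 9-layered, locally connected, with pooling and local contrast normalization, and all names here are hypothetical:

    ```python
    import numpy as np

    def train_autoencoder(X, n_hidden=16, lr=0.01, l1=1e-3, steps=300):
        """Single-layer sparse autoencoder with tied weights: encode with
        ReLU(X @ W), decode with H @ W.T, and penalize hidden activations
        with an L1 term to encourage sparsity. No labels are used."""
        rng = np.random.default_rng(0)
        W = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))
        mse = []
        for _ in range(steps):
            H = np.maximum(0.0, X @ W)          # encoder activations
            err = H @ W.T - X                   # reconstruction error
            mse.append(float(np.mean(err ** 2)))
            dH = (err @ W + l1 * np.sign(H)) * (H > 0)   # backprop + L1
            W -= lr * (X.T @ dH + err.T @ H) / len(X)    # tied-weight grad
        return W, mse

    X = np.random.default_rng(1).normal(size=(256, 64))  # unlabeled "images"
    W, mse = train_autoencoder(X)
    ```

    The reconstruction error decreases over training even though no image is ever labeled; the paper scales this principle up by many orders of magnitude, at which point individual hidden units start responding to concepts such as faces.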

    Inference skipping for more efficient real-time speech enhancement with parallel RNNs

    Deep neural network (DNN) based speech enhancement models have attracted extensive attention due to their promising performance. However, it is difficult to deploy a powerful DNN in real-time applications because of its high computational cost. Typical compression methods such as pruning and quantization do not make good use of the data characteristics. In this paper, we introduce the Skip-RNN strategy into speech enhancement models with parallel RNNs. The states of the RNNs update intermittently without interrupting the update of the output mask, which leads to significant reduction of computational load without evident audio artifacts. To better leverage the difference between the voice and the noise, we further regularize the skipping strategy with voice activity detection (VAD) guidance, saving more computational load. Experiments on a high-performance speech enhancement model, dual-path convolutional recurrent network (DPCRN), show the superiority of our strategy over strategies like network pruning or directly training a smaller model. We also validate the generalization of the proposed strategy on two other competitive speech enhancement models. Comment: 11 pages, 8 figures, accepted by IEEE/ACM TASL
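    The skipping mechanism this abstract describes can be sketched with a toy vanilla RNN (not the paper's DPCRN or its learned Skip-RNN gate; here a precomputed VAD flag decides the skips, and all names are hypothetical): the hidden state is updated only on voiced frames, while an output mask is still emitted on every frame.

    ```python
    import numpy as np

    def skip_rnn_run(x, vad, W, U, V):
        """Run a vanilla RNN over frames x, updating the hidden state only
        where vad[t] is truthy; on skipped frames the state is carried over
        unchanged, but a sigmoid mask is still produced every frame."""
        h = np.zeros(U.shape[0])
        outputs, n_updates = [], 0
        for t in range(len(x)):
            if vad[t]:                          # update state: voiced frame
                h = np.tanh(W @ x[t] + U @ h)
                n_updates += 1                  # count the costly updates
            outputs.append(1 / (1 + np.exp(-(V @ h))))  # mask every frame
        return np.array(outputs), n_updates

    rng = np.random.default_rng(0)
    x = rng.normal(size=(6, 4))                 # 6 frames, 4 features each
    vad = [1, 0, 0, 1, 1, 0]                    # half the frames are skipped
    W = rng.normal(size=(3, 4))
    U = rng.normal(size=(3, 3))
    V = rng.normal(size=(2, 3))
    masks, n_updates = skip_rnn_run(x, vad, W, U, V)
    ```

    Here only 3 of 6 recurrent updates are computed, yet a mask is produced for all 6 frames; on the skipped frames the mask simply repeats the previous one, which is why the abstract reports large computational savings without evident artifacts.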