    Principles of ‘Newspeak’ in Polish Translations of British and American Press Articles under Communist Rule

    The paper analyses selected Polish translations of British and American press articles published in the magazine Forum between 1965 and 1989. In communist Poland, all such texts were censored before publication, which forced translators to avoid content and language likely to be banned by the censors and to adopt a specific style of expression known as Newspeak. Drawing on a corpus of 25 English texts and their Polish translations, the paper lists the linguistic phenomena in the target language that represent features typical of Newspeak and identifies the manipulative procedures that led to their occurrence.

    SoDeep: a Sorting Deep net to learn ranking loss surrogates

    Several tasks in machine learning are evaluated using non-differentiable metrics such as mean average precision or Spearman correlation. However, their non-differentiability prevents them from being used as objective functions in a learning framework. Surrogate and relaxation methods exist, but they tend to be specific to a given metric. In the present work, we introduce a new method to learn approximations of such non-differentiable objective functions. Our approach is based on a deep architecture that approximates the sorting of arbitrary sets of scores and is trained virtually for free using synthetic data. This sorting deep (SoDeep) net can then be combined in a plug-and-play manner with existing deep architectures. We demonstrate the usefulness of our approach on three different tasks that require ranking: cross-modal text-image retrieval, multi-label image classification, and visual memorability ranking. Our approach yields very competitive results on all three tasks, validating the merit and flexibility of SoDeep as a proxy for the sorting operation in ranking-based losses.
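
    Since the abstract describes an algorithmic recipe, a minimal PyTorch sketch of the idea follows; the architecture, dimensions, and hyperparameters here (a plain MLP, score vectors of length 16) are illustrative assumptions rather than the paper's actual models. A small network is trained on synthetic score vectors to predict their normalised rank vectors, then frozen and reused as a differentiable stand-in for sorting inside a ranking loss.

```python
import torch
import torch.nn as nn

N = 16  # length of the score vectors to be sorted

# Hypothetical sorter network: a plain MLP standing in for the paper's
# learned sorters; it maps a score vector to a predicted rank vector.
sorter = nn.Sequential(nn.Linear(N, 128), nn.ReLU(), nn.Linear(128, N))
opt = torch.optim.Adam(sorter.parameters(), lr=1e-3)

def true_ranks(scores):
    # Rank of each element (0 = smallest), normalised to [0, 1].
    return scores.argsort(dim=-1).argsort(dim=-1).float() / (N - 1)

# Step 1: train the sorter "virtually for free" on synthetic scores.
for _ in range(2000):
    scores = torch.randn(64, N)  # synthetic training data costs nothing
    loss = nn.functional.l1_loss(sorter(scores), true_ranks(scores))
    opt.zero_grad()
    loss.backward()
    opt.step()

for p in sorter.parameters():  # freeze: the sorter is now a fixed proxy
    p.requires_grad_(False)

# Step 2: plug the frozen sorter into a ranking loss; gradients flow
# through it back to the model that produced `pred_scores`.
def rank_loss(pred_scores, target_scores):
    return nn.functional.mse_loss(sorter(pred_scores),
                                  true_ranks(target_scores))
```

    Because the frozen sorter is differentiable, gradients from the rank-space loss flow back to whatever upstream model produced pred_scores, which is what makes the surrogate plug-and-play.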

    Generic predictions of output probability based on complexities of inputs and outputs

    For a broad class of input-output maps, arguments based on the coding theorem from algorithmic information theory (AIT) predict that simple (low Kolmogorov complexity) outputs are exponentially more likely to occur upon uniform random sampling of inputs than complex outputs are. Here, we derive probability bounds based on the complexities of the inputs as well as the outputs, rather than on the complexities of the outputs alone. The more an output deviates from the coding theorem bound, the lower the complexity of its inputs. Our new bounds are tested for an RNA sequence-to-structure map, a finite-state transducer, and a perceptron. These results open avenues for AIT to be more widely used in physics.
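
    As a rough illustration of the coding-theorem prediction that the new bounds refine, the sketch below samples inputs uniformly and compares each output's empirical probability with an approximate complexity; the toy majority-smoothing map and the use of zlib compression as a stand-in for (uncomputable) Kolmogorov complexity are invented for this sketch and are not taken from the paper.

```python
import random
import zlib
from collections import Counter

N_BITS, N_SAMPLES = 25, 100_000

def complexity(bits):
    # Crude complexity proxy: length of the zlib-compressed bit string.
    return len(zlib.compress(bytes(bits)))

def toy_map(bits):
    # Invented simplicity-biased map (not the paper's RNA map, finite-state
    # transducer, or perceptron): a few rounds of 3-cell majority smoothing
    # collapse many inputs onto blocky, low-complexity outputs.
    for _ in range(4):
        bits = [int(bits[i - 1] + bits[i] + bits[(i + 1) % N_BITS] >= 2)
                for i in range(N_BITS)]
    return tuple(bits)

counts = Counter(
    toy_map([random.getrandbits(1) for _ in range(N_BITS)])
    for _ in range(N_SAMPLES)
)

# Coding-theorem-style check: the most frequent outputs should also be
# the most compressible ones.
for out, c in counts.most_common(5):
    print(f"P = {c / N_SAMPLES:.4f}   K_approx = {complexity(list(out))}")
```

    If the map is simplicity-biased, the printed table should show the highest-probability outputs also having the smallest compressed lengths, in line with a bound of the form P(x) ≲ 2^(−aK(x)+b) for map-dependent constants a and b.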

    From the Visions of Saint Teresa of Jesus to the Voices of Schizophrenia

    The life of Saint Teresa of Jesus, the most famous mystic of sixteenth-century Spain, was characterized by recurrent visions and states of ecstasy. In this paper, we examine the social components of Teresa’s personal crises and the historical conditions of her times, factors that must be taken into account to understand these unusual forms of experience and behavior. Many of these factors (e.g., increasing individualism and reflexivity) are precursors of the condition of modern times. Indeed, certain parallels can be observed between Saint Teresa’s experiences and certain present-day psychopathological disorders. The analogy should not, however, be carried too far: religion played a particularly crucial role in Teresa’s cultural context, and it would therefore be misleading to view her mystical experiences as resulting from a mental disorder.

    Do deep neural networks have an inbuilt Occam's razor?

    The remarkable performance of overparameterized deep neural networks (DNNs) must arise from an interplay between network architecture, training algorithms, and structure in the data. To disentangle these three components, we apply a Bayesian picture, based on the functions expressed by a DNN, to supervised learning. The prior over functions is determined by the network and is varied by exploiting a transition between ordered and chaotic regimes. For Boolean function classification, we approximate the likelihood using the error spectrum of functions on the data. Combined with the prior, this accurately predicts the posterior measured for DNNs trained with stochastic gradient descent. This analysis reveals a key to the success of DNNs: structured data, combined with an intrinsic Occam's-razor-like inductive bias towards (Kolmogorov) simple functions that is strong enough to counteract the exponential growth of the number of functions with complexity.
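
    A toy version of the Bayesian picture described above can be sketched as follows; every concrete choice here (the compression-based simplicity prior, the error-count likelihood, the inverse temperature) is an assumption made for illustration, not the paper's construction. All 256 Boolean functions on three inputs are enumerated and weighted by a simplicity-biased prior and a data likelihood, and the resulting posterior concentrates on simple functions consistent with the training labels.

```python
import itertools
import zlib

n_inputs = 3
inputs = list(itertools.product([0, 1], repeat=n_inputs))  # 8 input points
train = inputs[:5]                       # the observed training inputs
target = lambda x: int(x[0] and x[1])    # hypothetical ground-truth labels
labels = [target(x) for x in train]

def complexity(truth_table):
    # Complexity proxy for a function: compressed length of its truth table.
    return len(zlib.compress(bytes(truth_table)))

beta = 2.0  # assumed likelihood sharpness (an illustrative choice)
posterior = {}
for tt in itertools.product([0, 1], repeat=len(inputs)):  # all 256 functions
    f = dict(zip(inputs, tt))
    errors = sum(f[x] != y for x, y in zip(train, labels))
    prior = 2.0 ** (-complexity(list(tt)))   # simplicity-biased prior
    likelihood = 2.0 ** (-beta * errors)     # stand-in for the error spectrum
    posterior[tt] = prior * likelihood

# Normalise and show the functions the posterior favours.
Z = sum(posterior.values())
for tt in sorted(posterior, key=posterior.get, reverse=True)[:3]:
    print(tt, round(posterior[tt] / Z, 4))
```

    Replacing the exponentially decaying prior with a flat one would spread the posterior over the many complex functions that also fit the data; the simplicity bias of the prior is what plays the role of the razor.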