    Changes in Korean Wage Inequality, 1980−2005

    Korea is known not only for rapid economic growth but also for relatively low wage inequality. It is one of the few countries in which wage inequality decreased during the 1980s, though in recent years wage inequality has increased. This paper studies which factors contributed to the changes in wage inequality over the last two decades. It implements a recently developed Oaxaca-type inequality decomposition method to decompose the "U"-shaped changes in inequality into characteristics (quantity), coefficients (price), and residuals effects at both overall and detailed levels. The results of the decomposition analysis show that changes in the wage structure contribute significantly to the changes in wage inequality in Korea. The coefficients effect of human capital factors has played a major role not only in increasing wage inequality from the mid-1990s, but also in decreasing wage inequality in the 1980s and early 1990s.
    Keywords: decomposition analysis of inequality, earnings equation, coefficients (price) effect, characteristics (quantity) effect, residuals effect
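    As a rough illustration of the Oaxaca-type decomposition the abstract refers to, the sketch below splits a mean log-wage gap between two periods into a characteristics (quantity) effect and a coefficients (price) effect. This is a minimal mean-level version only, not the paper's full inequality decomposition (which also separates a residuals effect); the function names and data layout are hypothetical.

```python
import numpy as np

def ols(X, y):
    # OLS coefficients: beta = argmin ||X @ beta - y||^2
    return np.linalg.lstsq(X, y, rcond=None)[0]

def oaxaca_mean_decomposition(X_a, y_a, X_b, y_b):
    """Split the mean log-wage gap between periods A and B into a
    characteristics (quantity) effect and a coefficients (price)
    effect, using period B's coefficients as the reference.
    Each X_* must include an intercept column so that the two
    effects sum exactly to the total gap."""
    beta_a, beta_b = ols(X_a, y_a), ols(X_b, y_b)
    xbar_a, xbar_b = X_a.mean(axis=0), X_b.mean(axis=0)
    quantity_effect = (xbar_a - xbar_b) @ beta_b  # changing characteristics
    price_effect = xbar_a @ (beta_a - beta_b)     # changing wage structure
    total_gap = y_a.mean() - y_b.mean()
    return total_gap, quantity_effect, price_effect
```

    With an intercept included, `total_gap == quantity_effect + price_effect` up to floating-point error; the paper's method extends this kind of split from mean gaps to inequality measures and residual dispersion.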

    A Novel Arm-Wrestling Robot Using Motion Dependant Force Control

    Seeing voices and hearing voices: learning discriminative embeddings using cross-modal self-supervision

    The goal of this work is to train discriminative cross-modal embeddings without access to manually annotated data. Recent advances in self-supervised learning have shown that effective representations can be learnt from natural cross-modal synchrony. We build on earlier work to train embeddings that are more discriminative for uni-modal downstream tasks. To this end, we propose a novel training strategy that not only optimises metrics across modalities, but also enforces intra-class feature separation within each modality. The effectiveness of the method is demonstrated on two downstream tasks: lip reading using features trained on audio-visual synchronisation, and speaker recognition using features trained for cross-modal biometric matching. The proposed method outperforms state-of-the-art self-supervised baselines by a significant margin.
    Comment: Under submission as a conference paper
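    A minimal sketch of the kind of objective the abstract describes, assuming paired audio/video embeddings with class labels: an InfoNCE-style cross-modal matching term plus a margin penalty that pushes apart same-modality embeddings from different classes. The names, temperature, and margin are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def cross_modal_separation_loss(audio_emb, video_emb, labels,
                                temperature=0.07, margin=0.4):
    # audio_emb, video_emb: (N, D) paired embeddings; labels: (N,) class ids.
    a = F.normalize(audio_emb, dim=1)
    v = F.normalize(video_emb, dim=1)

    # Cross-modal term: each video embedding should retrieve its own
    # audio embedding among all N candidates (InfoNCE-style).
    logits = v @ a.t() / temperature                     # (N, N) similarities
    targets = torch.arange(a.size(0), device=a.device)
    cross = F.cross_entropy(logits, targets)

    # Intra-modal term: penalise cosine similarity above `margin`
    # between same-modality embeddings of different classes
    # (assumes each batch contains at least two distinct classes).
    diff_class = labels.unsqueeze(0) != labels.unsqueeze(1)  # (N, N) mask
    intra = sum(F.relu((z @ z.t())[diff_class] - margin).mean()
                for z in (a, v))
    return cross + intra
```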

    Perfect match: Improved cross-modal embeddings for audio-visual synchronisation

    This paper proposes a new strategy for learning powerful cross-modal embeddings for audio-to-video synchronisation. Here, we set up the problem as one of cross-modal retrieval, where the objective is to find the most relevant audio segment given a short video clip. The method builds on recent advances in learning representations from cross-modal self-supervision. The main contributions of this paper are as follows: (1) we propose a new learning strategy where the embeddings are learnt via a multi-way matching problem, as opposed to the binary classification (matching or non-matching) problem proposed by recent papers; (2) we demonstrate that the performance of this method far exceeds the existing baselines on the synchronisation task; (3) we use the learnt embeddings for visual speech recognition in self-supervision, and show that the performance matches that of representations learnt end-to-end in a fully supervised manner.
    Comment: Preprint. Work in progress
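    A hedged sketch of contribution (1), the multi-way matching idea: instead of a binary matched/unmatched decision, the model scores N candidate audio segments against one video feature and is trained with cross-entropy over the N-way choice. Tensor shapes and names below are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def multiway_sync_loss(video_feat, audio_candidates):
    """video_feat: (B, D) feature of a short video clip.
    audio_candidates: (B, N, D) features of N audio segments, where
    index 0 is the temporally aligned segment and indices 1..N-1 are
    temporally shifted distractors from the same track."""
    v = F.normalize(video_feat, dim=-1).unsqueeze(1)   # (B, 1, D)
    a = F.normalize(audio_candidates, dim=-1)          # (B, N, D)
    logits = (v * a).sum(-1)                           # (B, N) cosine scores
    target = torch.zeros(logits.size(0), dtype=torch.long,
                         device=logits.device)         # aligned segment = 0
    return F.cross_entropy(logits, target)             # N-way matching loss
```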

    FaceFilter: Audio-visual speech separation using still images

    The objective of this paper is to separate a target speaker's speech from a mixture of two speakers using a deep audio-visual speech separation network. Unlike previous works that use lip movements from video clips or pre-enrolled speaker information as an auxiliary conditional feature, we use a single face image of the target speaker. In this task, the conditional feature is obtained from facial appearance in a cross-modal biometric task, where audio and visual identity representations are shared in a latent space. Identities learnt from face images force the network to isolate the matched speaker and extract that voice from the mixed speech. This resolves the permutation problem caused by swapped channel outputs, which frequently occurs in speech separation tasks. The proposed method is far more practical than video-based speech separation, since user profile images are readily available on many platforms. Also, unlike speaker-aware separation methods, it is applicable to separation with unseen speakers who have never been enrolled before. We show strong qualitative and quantitative results on challenging real-world examples.
    Comment: Under submission as a conference paper. Video examples: https://youtu.be/ku9xoLh62
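    A speculative sketch of the conditioning mechanism the abstract describes: a mask-based separator that concatenates a face-derived identity embedding to every frame of the mixture spectrogram. The architecture, dimensions, and names are illustrative assumptions, not the paper's actual network.

```python
import torch
import torch.nn as nn

class FaceConditionedSeparator(nn.Module):
    """Predict a time-frequency mask for the target speaker from a
    two-speaker mixture spectrogram, conditioned on an identity
    embedding extracted from a single face image (hypothetical)."""
    def __init__(self, n_freq=257, id_dim=128, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(n_freq + id_dim, hidden, batch_first=True)
        self.mask = nn.Linear(hidden, n_freq)

    def forward(self, mix_spec, face_id):
        # mix_spec: (B, T, F) magnitude spectrogram of the mixture
        # face_id:  (B, id_dim) identity embedding from the face image
        cond = face_id.unsqueeze(1).expand(-1, mix_spec.size(1), -1)
        h, _ = self.rnn(torch.cat([mix_spec, cond], dim=-1))
        mask = torch.sigmoid(self.mask(h))   # (B, T, F) mask in [0, 1]
        return mask * mix_spec               # estimated target spectrogram
```

    Conditioning on a fixed identity embedding, rather than on per-frame lip features, is what lets this style of model separate unseen speakers from a single still image.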