19 research outputs found

    Language contact and sound change: Reasons for mutual unintelligibility between formal and colloquial registers of Tamil

    Get PDF
    Tamil has been diglossic since its origin, separating the formal high register from the colloquial low register. These two registers are currently mutually unintelligible (Shanmugam Pillai 1965). This analysis explores the reasons why they became unintelligible, which are proposed to be two-fold: historic language contact between Tamil and Sanskrit, and sound changes demonstrated using the Comparative Method. It has been suggested that the decline in mutual intelligibility is due to the removal of Sanskrit loanwords from the formal high register during the Tamil Purist Movement of the 20th century (Kailasapathy 1979). The earliest evidence of reciprocal borrowing between Tamil and Sanskrit dates to the first Tamil literary works (Krishnamurti 2003). Where and when this language contact occurred is unclear, but it may have taken place during overlapping occupation of the Indus River Valley region by speakers of Sanskrit and Proto-Dravidian (Steever 2009). During the 20th century, the formal register replaced these loanwords with Tamil equivalents wherever possible (Kailasapathy 1979). Currently, low-register Tamil is composed of 50% loanwords, whereas high-register Tamil is composed of only 20% loanwords (Krishnamurti 2003). It is attested, however, that some diglossia was present before contact between Tamil and Sanskrit. Early diglossia can thus instead be explained by sound changes, which also account for current differences between the registers not attributed to loanwords. Sound changes identified in this analysis include syncope, apocope, paragoge, stop-to-fricative lenition, and others. This analysis finds that both language contact and sound changes contributed to the decline in intelligibility between formal and colloquial Tamil; however, the nature of the language contact is still under investigation.

    DeePhy: On Deepfake Phylogeny

    Full text link
    Deepfake refers to tailored, synthetically generated videos that are now prevalent and spreading on a large scale, threatening the trustworthiness of information available online. While existing datasets contain different kinds of deepfakes which vary in their generation technique, they do not consider the progression of deepfakes in a "phylogenetic" manner. It is possible that an existing deepfake face is swapped with another face. This process of face swapping can be performed multiple times, and the resultant deepfake can be evolved to confuse deepfake detection algorithms. Further, many databases do not provide the employed generative model as target labels. Model attribution helps in enhancing the explainability of detection results by providing information on the generative model employed. In order to enable the research community to address these questions, this paper proposes DeePhy, a novel Deepfake Phylogeny dataset which consists of 5040 deepfake videos generated using three different generation techniques. There are 840 videos of one-time swapped deepfakes, 2520 videos of two-times swapped deepfakes, and 1680 videos of three-times swapped deepfakes. At over 30 GB in size, the database was prepared in over 1100 hours using 18 GPUs with 1,352 GB of cumulative memory. We also present a benchmark on the DeePhy dataset using six deepfake detection algorithms. The results highlight the need to advance research on model attribution of deepfakes and to generalize the process over a variety of deepfake generation techniques. The database is available at: http://iab-rubric.org/deephy-database. Comment: Accepted at the 2022 International Joint Conference on Biometrics (IJCB 2022).
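
    The "phylogeny" idea in the abstract, an already-swapped deepfake whose face is swapped again one or more times, can be illustrated with a minimal sketch. The apply_face_swap callable, the DeepfakeRecord layout, and the generic generator names below are hypothetical placeholders for illustration only, not the DeePhy annotation format or its actual generation techniques; the sketch only shows how a multi-generation swap chain and its per-step generator labels could be recorded for model attribution.

    # Minimal sketch (assumptions noted above): chaining face swaps and keeping the lineage.
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class DeepfakeRecord:
        video: object                                          # current video (placeholder type)
        generators: List[str] = field(default_factory=list)    # generator used at each swap step

    def evolve(video, swaps: int, apply_face_swap: Callable, generator_names: List[str]) -> DeepfakeRecord:
        """Apply `swaps` successive face swaps, recording the generator lineage."""
        record = DeepfakeRecord(video=video)
        for i in range(swaps):
            gen = generator_names[i % len(generator_names)]
            record.video = apply_face_swap(record.video, generator=gen)
            record.generators.append(gen)
        return record

    # A three-times swapped deepfake carries a three-entry lineage, which is what a
    # model-attribution classifier would try to recover from the final video alone.
    identity_swap = lambda v, generator: v   # stand-in for a real face-swap generator
    r = evolve("source.mp4", swaps=3, apply_face_swap=identity_swap,
               generator_names=["technique_1", "technique_2", "technique_3"])
    print(r.generators)                      # ['technique_1', 'technique_2', 'technique_3']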

    Handling Location Uncertainty in Event Driven Experimentation

    Get PDF
    Singapore National Research Foundation under International Research Centre @ Singapore Funding Initiative

    myDeal: A Mobile Shopping Assistant Matching User Preferences to Promotions

    Get PDF
    National Research Foundation (NRF) Singapore under International Research Centre @ Singapore Funding Initiative

    myDeal: The Context-Aware Urban Shopping Assistant

    Get PDF
    Asking for full text - PP

    Activation in isolation : exposure of the actin-binding site in the C-terminal half of gelsolin does not require actin

    No full text
    Gelsolin requires activation to carry out its severing and capping activities on F-actin. Here, we present the structure of the isolated C-terminal half of gelsolin (G4-G6) at 2.0 Å resolution in the presence of Ca(2+) ions. This structure completes a triptych of the states of activation of G4-G6 that illuminates its role in the function of gelsolin. Activated G4-G6 displays an open conformation, with the actin-binding site on G4 fully exposed and all three type-2 Ca(2+) sites occupied. Neither actin nor the type-1 Ca(2+), which normally is sandwiched between actin and G4, is required to achieve this conformation.

    FaceXFormer: A Unified Transformer for Facial Analysis

    Full text link
    In this work, we introduce FaceXformer, an end-to-end unified transformer model for a comprehensive range of facial analysis tasks such as face parsing, landmark detection, head pose estimation, attribute recognition, and estimation of age, gender, race, and landmark visibility. Conventional methods in face analysis have often relied on task-specific designs and preprocessing techniques, which limit their approach to a unified architecture. Unlike these conventional methods, our FaceXformer leverages a transformer-based encoder-decoder architecture where each task is treated as a learnable token, enabling the integration of multiple tasks within a single framework. Moreover, we propose a parameter-efficient decoder, FaceX, which jointly processes face and task tokens, thereby learning generalized and robust face representations across different tasks. To the best of our knowledge, this is the first work to propose a single model capable of handling all these facial analysis tasks using transformers. We conduct a comprehensive analysis of effective backbones for unified face task processing and evaluate different task queries and the synergy between them. We conduct experiments against state-of-the-art specialized models and previous multi-task models in both intra-dataset and cross-dataset evaluations across multiple benchmarks. Additionally, our model effectively handles images "in the wild," demonstrating its robustness and generalizability across eight different tasks, all while maintaining real-time performance of 37 FPS. Comment: Project page: https://kartik-3004.github.io/facexformer_web
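
    The core architectural idea in the abstract, treating each facial-analysis task as a learnable token that is decoded jointly with the face representation, can be sketched in a few lines of PyTorch. The module name TaskTokenDecoder, the dimensions, and the toy per-task heads below are illustrative assumptions under that reading of the abstract, not the authors' actual FaceX decoder.

    # Minimal sketch (assumptions noted above) of "task as learnable token" decoding.
    import torch
    import torch.nn as nn

    class TaskTokenDecoder(nn.Module):
        def __init__(self, num_tasks=8, embed_dim=256, num_heads=8, num_layers=2):
            super().__init__()
            # One learnable query token per facial-analysis task (parsing, landmarks, ...).
            self.task_tokens = nn.Parameter(torch.randn(num_tasks, embed_dim))
            layer = nn.TransformerDecoderLayer(d_model=embed_dim, nhead=num_heads, batch_first=True)
            self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
            # Toy heads; real heads would be task specific (e.g. per-landmark coordinates).
            self.heads = nn.ModuleList(nn.Linear(embed_dim, 10) for _ in range(num_tasks))

        def forward(self, face_features):
            # face_features: (batch, num_patches, embed_dim) from any image backbone.
            b = face_features.size(0)
            queries = self.task_tokens.unsqueeze(0).expand(b, -1, -1)
            # Task tokens attend to the shared face representation in one pass.
            decoded = self.decoder(tgt=queries, memory=face_features)
            return [head(decoded[:, i]) for i, head in enumerate(self.heads)]

    features = torch.randn(2, 196, 256)          # e.g. ViT patch features for 2 images
    outputs = TaskTokenDecoder()(features)       # one output tensor per task
    print([o.shape for o in outputs])

    The point of the sketch is the single shared forward pass: all task queries read the same face features, which is what lets one model cover several tasks without task-specific pipelines.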