Language contact and sound change: Reasons for mutual unintelligibility between formal and colloquial registers of Tamil
Tamil has been diglossic since its origination, separating the formal high register from the colloquial low register. These two registers are currently mutually unintelligible (Shanmugam Pillai 1965). This analysis explores the reasons why they became unintelligible, which are proposed to be twofold: historic language contact between Tamil and Sanskrit, and sound changes demonstrated using the Comparative Method. It has been suggested that the decline in mutual intelligibility is due to the removal of Sanskrit loanwords from the formal high register during the Tamil Purist Movement of the 20th century (Kailasapathy 1979). The earliest evidence of reciprocal borrowing between Tamil and Sanskrit dates to the first Tamil literary works (Krishnamurti 2003). Where and when this language contact occurred is unclear, but it may have taken place during overlapping occupation of the Indus River Valley region by speakers of Sanskrit and Proto-Dravidian (Steever 2009). During the 20th century, the formal register replaced these loanwords with Tamil equivalents wherever possible (Kailasapathy 1979). Currently, low-register Tamil is composed of 50% loanwords, whereas high-register Tamil is composed of only 20% loanwords (Krishnamurti 2003). It has been attested, however, that some diglossia was present before contact between Tamil and Sanskrit. Early diglossia can thus instead be explained by sound changes, which also account for current differences between the registers not attributed to loanwords. Sound changes identified in this analysis include syncope, apocope, paragoge, stop-to-fricative lenition, and others. This analysis finds that both language contact and sound changes contributed to the decline in intelligibility between formal and colloquial Tamil; however, the nature of the language contact is still under investigation.
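The sound changes the abstract names can be thought of as ordered string-rewrite rules. The sketch below models them that way; the rules and the input forms are hypothetical illustrations of the mechanism, not attested Tamil reconstructions.

```python
import re

# Hedged sketch: the sound changes named in the abstract, modeled as
# string-rewrite rules. Vowel/consonant classes and example forms are
# hypothetical, for illustration only.

def lenition(word):
    # stop-to-fricative lenition between vowels, e.g. k -> h (hypothetical)
    return re.sub(r"(?<=[aeiou])k(?=[aeiou])", "h", word)

def apocope(word):
    # loss of a word-final vowel
    return re.sub(r"[aeiou]$", "", word)

def paragoge(word):
    # addition of a word-final vowel after a final consonant
    return word + "u" if not re.search(r"[aeiou]$", word) else word

def syncope(word):
    # loss of a medial (second) vowel when the word has three or more
    vowels = [m.start() for m in re.finditer(r"[aeiou]", word)]
    if len(vowels) >= 3:
        i = vowels[1]
        return word[:i] + word[i + 1:]
    return word

# Hypothetical forms, one rule each:
print(lenition("pakam"))   # -> "paham"
print(apocope("vitu"))     # -> "vit"
print(paragoge("pal"))     # -> "palu"
print(syncope("pataku"))   # -> "patku"
```

Chaining such rules in a fixed order is how the Comparative Method's correspondences are typically simulated, with each register applying a different subset of the rules.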
DeePhy: On Deepfake Phylogeny
Deepfakes are tailored, synthetically generated videos that are now
prevalent and spreading on a large scale, threatening the trustworthiness of
the information available online. While existing datasets contain different
kinds of deepfakes which vary in their generation technique, they do not
consider progression of deepfakes in a "phylogenetic" manner. It is possible
that an existing deepfake face is swapped with another face. This process of
face swapping can be performed multiple times and the resultant deepfake can be
evolved to confuse the deepfake detection algorithms. Further, many databases
do not provide the employed generative model as target labels. Model
attribution helps in enhancing the explainability of the detection results by
providing information on the generative model employed. In order to enable the
research community to address these questions, this paper proposes DeePhy, a
novel Deepfake Phylogeny dataset which consists of 5040 deepfake videos
generated using three different generation techniques. There are 840 videos of
one-time swapped deepfakes, 2520 videos of two-times swapped deepfakes and 1680
videos of three-times swapped deepfakes. At over 30 GB in size, the database
was prepared over 1,100 hours using 18 GPUs with 1,352 GB of cumulative memory. We
also present a benchmark on the DeePhy dataset using six deepfake detection
algorithms. The results highlight the need to evolve the research of model
attribution of deepfakes and generalize the process over a variety of deepfake
generation techniques. The database is available at:
http://iab-rubric.org/deephy-database

Comment: Accepted at the 2022 International Joint Conference on Biometrics (IJCB 2022)
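The "phylogenetic" structure the abstract describes (repeated swaps on an already-swapped face) can be modeled as a chain of generative models applied to a video. The sketch below is an illustrative data model, not the dataset's actual schema; the model names and identifiers are assumptions.

```python
from dataclasses import dataclass, field

# Minimal sketch of DeePhy-style phylogeny: each deepfake records the
# ordered chain of generative models applied to it. The class, model
# names, and IDs are illustrative assumptions.

@dataclass
class DeepfakeVideo:
    source_id: str
    swap_chain: list = field(default_factory=list)  # models applied, in order

    def swap(self, model_name: str, target_id: str) -> "DeepfakeVideo":
        # A new swap builds on the existing deepfake, deepening the phylogeny.
        return DeepfakeVideo(target_id, self.swap_chain + [model_name])

    @property
    def depth(self) -> int:
        return len(self.swap_chain)

# One-, two-, and three-times swapped videos (model names hypothetical):
v1 = DeepfakeVideo("real_000").swap("model_A", "id_a")
v2 = v1.swap("model_B", "id_b")
v3 = v2.swap("model_C", "id_c")
print(v1.depth, v2.depth, v3.depth)  # 1 2 3

# The abstract's split sums to the stated total:
print(840 + 2520 + 1680)  # 5040
```

Model attribution, in this framing, amounts to recovering `swap_chain` (or at least its last element) from the video alone.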
Handling Location Uncertainty in Event Driven Experimentation
Singapore National Research Foundation under International Research Centre @ Singapore Funding Initiative
myDeal: A Mobile Shopping Assistant Matching User Preferences to Promotions
National Research Foundation (NRF) Singapore under International Research Centre @ Singapore Funding Initiative
myDeal: The Context-Aware Urban Shopping Assistant
"Any Little Thing of Help": A Qualitative Analysis of the Challenges of Navigating WIC in Chicago and the Roots of Unenrollment
This research project seeks to understand why coverage rates (the percentage of eligible individuals who are enrolled) for the Special Supplemental Nutrition Program for Women, Infants, and Children, known as the food assistance program WIC, are so low in Illinois (41.8 percent in 2017, compared to the national average of 51.1 percent) and what is causing mothers to unenroll or not enroll at all. Using the city of Chicago as a case study for this issue, this paper will describe the barriers to obtaining and maintaining WIC benefits that low-income mothers face under Chicago’s paper colored-coupon system, which provides a fundamentally different benefits experience to North Side versus South and West Side residents of the city. In particular, this study will uncover the disparities between mothers’ access to WIC and the challenges they face in obtaining and maintaining their benefits. I will elaborate on ways that WIC can improve in the future to be more accessible to mothers, and ways in which the coming transition to an electronic benefits system (EBT) across the country might aid in creating a benefits process that is easier to navigate. Extensive semi-structured interview data is used in this study to highlight both the experiences of those who are directly affected by WIC enrollment, i.e. low-income mothers, and the expertise of practitioners and scholars in the field of public benefits. It is concluded that WIC’s time, resource, and safety demands on low-income mothers are a significant barrier to initial and continued enrollment for many, and that the stakes could not be higher for mothers and their children. In order to create a more accessible system, WIC must make efforts to reduce the restrictions around WIC and increase the flexibility of the program to ease the navigation of this complicated public benefits system, already fraught with many bureaucratic barriers.
Activation in isolation: exposure of the actin-binding site in the C-terminal half of gelsolin does not require actin
Gelsolin requires activation to carry out its severing and capping activities on F-actin. Here, we present the structure of the isolated C-terminal half of gelsolin (G4-G6) at 2.0 Å resolution in the presence of Ca(2+) ions. This structure completes a triptych of the states of activation of G4-G6 that illuminates its role in the function of gelsolin. Activated G4-G6 displays an open conformation, with the actin-binding site on G4 fully exposed and all three type-2 Ca(2+) sites occupied. Neither actin nor the type-1 Ca(2+), which normally is sandwiched between actin and G4, is required to achieve this conformation.
FaceXFormer: A Unified Transformer for Facial Analysis
In this work, we introduce FaceXformer, an end-to-end unified transformer
model for a comprehensive range of facial analysis tasks such as face parsing,
landmark detection, head pose estimation, attribute recognition, and
estimation of age, gender, race, and landmark visibility. Conventional methods
in face analysis have often relied on task-specific designs and preprocessing
techniques, which limit their approach to a unified architecture. Unlike these
conventional methods, our FaceXformer leverages a transformer-based
encoder-decoder architecture where each task is treated as a learnable token,
enabling the integration of multiple tasks within a single framework. Moreover,
we propose a parameter-efficient decoder, FaceX, which jointly processes face
and task tokens, thereby learning generalized and robust face representations
across different tasks. To the best of our knowledge, this is the first work to
propose a single model capable of handling all these facial analysis tasks
using transformers. We conducted a comprehensive analysis of effective
backbones for unified face task processing and evaluated different task queries
and the synergy between them. We conduct experiments against state-of-the-art
specialized models and previous multi-task models in both intra-dataset and
cross-dataset evaluations across multiple benchmarks. Additionally, our model
effectively handles images "in-the-wild," demonstrating its robustness and
generalizability across eight different tasks, all while maintaining
real-time performance of 37 FPS.

Comment: Project page: https://kartik-3004.github.io/facexformer_web
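The abstract's central idea, treating each facial-analysis task as a learnable token that attends to shared face features, can be sketched in a few lines of numpy. Dimensions, the task list, and the single-head attention are illustrative assumptions, not FaceXformer's actual configuration.

```python
import numpy as np

# Hedged sketch of "task as learnable token": one query token per task
# cross-attends to a shared set of face features, so all tasks run in a
# single forward pass. Sizes and task names are assumptions.

rng = np.random.default_rng(0)
d = 64                      # embedding dimension (assumed)
tasks = ["parsing", "landmarks", "headpose", "attributes", "age_gender_race"]

face_tokens = rng.normal(size=(196, d))         # e.g. 14x14 patch features
task_tokens = rng.normal(size=(len(tasks), d))  # one learnable query per task

def cross_attention(queries, keys_values):
    # single-head scaled dot-product attention
    scores = queries @ keys_values.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ keys_values

# Each task token pools the face features it needs; tasks share one pass.
task_features = cross_attention(task_tokens, face_tokens)
print(task_features.shape)  # (5, 64)
```

Each row of `task_features` would then feed a lightweight task-specific head, which is what lets a single backbone and decoder serve all tasks at once.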