
    TotalDefMeme: A Multi-Attribute Meme dataset on Total Defence in Singapore

    Total Defence is a defence policy that combines and extends the concepts of military defence and civil defence. While several countries have adopted Total Defence as their defence policy, very few studies have investigated its effectiveness. With the rapid proliferation of social media and digitalisation, many social studies have focused on investigating policy effectiveness through specially curated surveys and questionnaires, administered either through digital media or in traditional forms. However, such instruments may not truly reflect the underlying sentiments about the policies or initiatives of interest. People are more likely to express their sentiment through other communication mediums, such as starting topic threads on forums or sharing memes on social media. Using Singapore as a case reference, this study aims to address this research gap by proposing TotalDefMeme, a large-scale multi-modal and multi-attribute meme dataset that captures public sentiments toward Singapore's Total Defence policy. Besides supporting social informatics and public policy analysis of the Total Defence policy, TotalDefMeme can also support many downstream multi-modal machine learning tasks, such as aspect-based stance classification and multi-modal meme clustering. We perform baseline machine learning experiments on TotalDefMeme to evaluate its technical validity, and present possible future interdisciplinary research directions and application scenarios that use the dataset as a baseline. Comment: 6 pages. Accepted at ACM MMSys 202

    Urban renewal in Hong Kong: a study of governance and policy tools

    published_or_final_version. Politics and Public Administration. Master of Public Administration.

    Landmark-Matching Transformation with Large Deformation Via n-dimensional Quasi-conformal Maps

    We propose a new method to obtain landmark-matching transformations between n-dimensional Euclidean spaces with large deformations. Given a set of feature correspondences, our algorithm searches for an optimal folding-free mapping that satisfies the prescribed landmark constraints. The standard conformality distortion defined for mappings between 2-dimensional spaces is first generalized to the n-dimensional conformality distortion K(f) for a mapping f between n-dimensional Euclidean spaces (n ≥ 3). We then propose a variational model involving K(f) to tackle the landmark-matching problem in higher-dimensional spaces. The generalized conformality term K(f) enforces the bijectivity of the optimized mapping and minimizes its local geometric distortion even under large deformations. Another challenge is the high computational cost of the proposed model. To tackle this, we also propose a numerical method to solve the optimization problem more efficiently: the alternating direction method of multipliers (ADMM) splits the optimization problem into two subproblems; a preconditioned conjugate gradient method with a multigrid preconditioner solves one subproblem, while a fixed-point iteration solves the other. Experiments have been carried out on both synthetic examples and lung CT images to compute diffeomorphic landmark-matching transformations with different landmark constraints. The results show the efficacy of our proposed model in obtaining a folding-free landmark-matching transformation between n-dimensional spaces with large deformations.
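    To make the objective concrete, the following LaTeX sketch writes down one standard generalization of the conformality distortion together with the resulting landmark-constrained minimization. This is an illustrative formulation under common conventions only; the paper's exact functional, regularization terms, and weights may differ.

        % One common n-dimensional conformality distortion (illustrative form):
        % K(f) >= 1, with equality exactly when Df(x) is a conformal (similarity) matrix.
        K(f)(x) \;=\; \frac{\|Df(x)\|_F^{\,n}}{\,n^{n/2}\,\det Df(x)\,}

        % Landmark-constrained variational problem over folding-free maps:
        \min_{f}\ \int_{\Omega} K(f)(x)\,dx
        \quad \text{subject to} \quad f(p_j) = q_j,\ \ j = 1,\dots,m, \qquad \det Df > 0 \ \text{on } \Omega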

    On Explaining Multimodal Hateful Meme Detection Models

    Hateful meme detection is a new multimodal task that has gained significant traction in academic and industry research communities. Recently, researchers have applied pre-trained visual-linguistic models to perform the multimodal classification task, and some of these solutions have yielded promising results. However, what these visual-linguistic models learn for the hateful meme classification task remains unclear. For instance, it is unclear whether these models are able to capture the derogatory or slur references in the two modalities (i.e., image and text) of hateful memes. To fill this research gap, this paper proposes three research questions to improve our understanding of how these visual-linguistic models perform the hateful meme classification task. We found that the image modality contributes more to the hateful meme classification task, and that the visual-linguistic models are able to perform visual-text slur grounding to a certain extent. Our error analysis also shows that the visual-linguistic models have acquired biases, which resulted in false-positive predictions.

    Decoding the Underlying Meaning of Multimodal Hateful Memes

    Recent studies have proposed models that yield promising performance on the hateful meme classification task. Nevertheless, these models do not generate interpretable explanations that uncover the underlying meaning of a meme and support the classification output. A major reason for the lack of explainable hateful meme methods is the absence of a hateful meme dataset that contains ground-truth explanations for benchmarking or training. Intuitively, having such explanations can educate and assist content moderators in interpreting and removing flagged hateful memes. This paper addresses this research gap by introducing the Hateful meme with Reasons Dataset (HatReD), a new multimodal hateful meme dataset annotated with the underlying hateful contextual reasons. We also define a new conditional generation task that aims to automatically generate the underlying reasons that explain hateful memes, and we establish the baseline performance of state-of-the-art pre-trained language models on this task. We further demonstrate the usefulness of HatReD by analyzing the challenges of the new conditional generation task in explaining memes in seen and unseen domains. The dataset and benchmark models are made available at: https://github.com/Social-AI-Studio/HatRed. Comment: 9 pages. Accepted by IJCAI 202
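    As a concrete illustration of what such a conditional generation task looks like, the sketch below conditions a generic sequence-to-sequence model on a meme's extracted text plus an image caption and asks it to produce a free-text reason. The prompt template, field names, and the choice of t5-base are assumptions for illustration, not the HatReD benchmark setup, and in practice the model would first be fine-tuned on the dataset's annotated reasons.

        # Illustrative sketch only: the prompt format and model choice are assumptions,
        # not the benchmark configuration used for HatReD.
        from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

        tokenizer = AutoTokenizer.from_pretrained("t5-base")
        model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

        def explain_meme(meme_text: str, image_caption: str) -> str:
            # Condition on both modalities rendered as text (hypothetical template).
            source = f"explain why this meme may be hateful: text: {meme_text} image: {image_caption}"
            inputs = tokenizer(source, return_tensors="pt", truncation=True)
            output_ids = model.generate(**inputs, max_new_tokens=64)
            return tokenizer.decode(output_ids[0], skip_special_tokens=True)

        print(explain_meme("<ocr text of the meme>", "<caption of the meme image>"))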

    Service-learning model at Lingnan University: development strategies and outcome assessment

    Background: The Service-Learning and Research Scheme (SLRS) is the showcase of Lingnan's Service-Learning model, which is the manifestation of Lingnan University's Liberal Arts education and its mission of “Education for Service”. The scheme ran as a pilot project from 2004 to 2005 and led to the development of a University-wide protocol for Service-Learning at Lingnan University. Aims: This paper highlights the processes and strategies for incorporating Service-Learning into courses, based on the experiences at Lingnan University. Implementation and evaluation models are suggested to provide a framework for other interested parties to apply Service-Learning in their learning and teaching. Results: This is a descriptive analysis associating outcome measurement (three “ABC” outcomes: Adaptability, Brainpower and Creativity) with the process of Service-Learning. Evaluation contents and guidelines for doing Service-Learning were developed based on past experience of doing Service-Learning at Lingnan. The research procedures offer instructors guidance as well as a well-defined protocol and evaluation framework for Service-Learning programmes at Lingnan. Conclusion: Consolidating the above experience and detailing the validity of the Lingnan Model of Service-Learning, a manual has been produced documenting our efforts. This is the first manual that can serve as a protocol for applying Service-Learning in higher education for students' whole-person development.

    Pro-Cap: Leveraging a Frozen Vision-Language Model for Hateful Meme Detection

    Hateful meme detection is a challenging multimodal task that requires comprehension of both vision and language, as well as cross-modal interactions. Recent studies have tried to fine-tune pre-trained vision-language models (PVLMs) for this task. However, with increasing model sizes, it becomes important to leverage powerful PVLMs more efficiently, rather than simply fine-tuning them. Recently, researchers have attempted to convert meme images into textual captions and prompt language models for predictions. This approach has shown good performance but suffers from non-informative image captions. Considering the two factors mentioned above, we propose a probing-based captioning approach to leverage PVLMs in a zero-shot visual question answering (VQA) manner. Specifically, we prompt a frozen PVLM by asking hateful content-related questions and use the answers as image captions (which we call Pro-Cap), so that the captions contain information critical for hateful content detection. The good performance of models with Pro-Cap on three benchmarks validates the effectiveness and generalization of the proposed method. Comment: Camera-ready for 23, ACM M
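    The probing-based captioning idea can be sketched as follows: a frozen vision-language model is queried with a small set of hateful-content-related questions in VQA style, and the concatenated answers serve as an information-rich caption for a downstream text classifier. The question list and the vqa_answer / classify_text helpers below are hypothetical placeholders, not the exact prompts or models used by Pro-Cap.

        # Sketch of probing-based captioning (Pro-Cap-style); vqa_answer and
        # classify_text are hypothetical stand-ins for a frozen PVLM queried in
        # zero-shot VQA mode and for a text-only hateful-content classifier.
        from typing import Callable, List

        PROBING_QUESTIONS: List[str] = [  # illustrative questions, not the paper's exact set
            "What is shown in the image?",
            "Is there a person in the image, and who might they be?",
            "Does the image reference a race, religion, gender, or nationality?",
        ]

        def build_pro_cap(image, vqa_answer: Callable[[object, str], str]) -> str:
            # Ask the frozen vision-language model each probing question and
            # join the answers into a single caption.
            answers = [vqa_answer(image, q) for q in PROBING_QUESTIONS]
            return " ".join(answers)

        def detect_hateful(image, meme_text: str,
                           vqa_answer: Callable[[object, str], str],
                           classify_text: Callable[[str], float]) -> float:
            caption = build_pro_cap(image, vqa_answer)
            # The language-only classifier sees the meme text plus the probed caption.
            return classify_text(f"{meme_text} [SEP] {caption}")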

    SGHateCheck: Functional Tests for Detecting Hate Speech in Low-Resource Languages of Singapore

    To address the limitations of current hate speech detection models, we introduce SGHateCheck, a novel framework designed for the linguistic and cultural context of Singapore and Southeast Asia. It extends the functional testing approach of HateCheck and MHC, employing large language models for translation and paraphrasing into Singapore's main languages and refining the resulting test cases with native annotators. SGHateCheck reveals critical flaws in state-of-the-art models, highlighting their inadequacy for sensitive content moderation. This work aims to foster the development of more effective hate speech detection tools for diverse linguistic environments, particularly in Singaporean and Southeast Asian contexts.
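    To illustrate what a HateCheck-style functional test looks like in code, the sketch below expands a handful of templated test cases with expected labels and reports a classifier's accuracy per functionality. The templates, group placeholders, and the predict callable are invented for illustration and are not SGHateCheck's actual test suite, target groups, or languages.

        # Illustrative HateCheck/SGHateCheck-style functional testing skeleton.
        # Templates, target groups, and predict() are hypothetical placeholders.
        from dataclasses import dataclass
        from typing import Callable, List

        @dataclass
        class FunctionalCase:
            functionality: str   # e.g. "derogation", "counter-speech"
            text: str
            expected: str        # "hateful" or "non-hateful"

        GROUPS = ["group A", "group B"]  # placeholders for protected groups

        def expand_templates() -> List[FunctionalCase]:
            cases = []
            for g in GROUPS:
                cases.append(FunctionalCase("derogation", f"I really despise {g}.", "hateful"))
                cases.append(FunctionalCase("counter-speech", f"Saying you despise {g} is wrong.", "non-hateful"))
            return cases

        def run_suite(predict: Callable[[str], str]) -> None:
            # Report per-functionality accuracy, as HateCheck-style suites do.
            cases = expand_templates()
            for func in {c.functionality for c in cases}:
                subset = [c for c in cases if c.functionality == func]
                correct = sum(predict(c.text) == c.expected for c in subset)
                print(f"{func}: {correct}/{len(subset)} correct")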

    Evaluating GPT-3 Generated Explanations for Hateful Content Moderation

    Recent research has focused on using large language models (LLMs) to generate explanations for hate speech through fine-tuning or prompting. Despite the growing interest in this area, the effectiveness and potential limitations of these generated explanations remain poorly understood. A key concern is that explanations generated by LLMs may lead to erroneous judgments about the nature of flagged content by both users and content moderators. For instance, an LLM-generated explanation might inaccurately convince a content moderator that a benign piece of content is hateful. In light of this, we propose an analytical framework for examining hate speech explanations and conduct an extensive survey to evaluate such explanations. Specifically, we prompted GPT-3 to generate explanations for both hateful and non-hateful content, and a survey was conducted with 2,400 unique respondents to evaluate the generated explanations. Our findings reveal that (1) human evaluators rated the GPT-generated explanations as high quality in terms of linguistic fluency, informativeness, persuasiveness, and logical soundness; (2) the persuasive nature of these explanations, however, varied depending on the prompting strategy employed; and (3) this persuasiveness may result in incorrect judgments about the hatefulness of the content. Our study underscores the need for caution in applying LLM-generated explanations for content moderation. Code and results are available at: https://github.com/Social-AI-Studio/GPT3-HateEval. Comment: 9 pages, 2 figures, Accepted by the International Joint Conference on Artificial Intelligence (IJCAI