
    Dual resonant amplitudes from Drinfel'd twists

    We postulate the existence of a family of dual resonant, four-point tachyon amplitudes derived using invertible coproduct maps called Drinfel'd twists. A sub-family of these amplitudes exhibits well-defined ultraviolet behaviour, namely in the fixed-angle high-energy and Regge scattering regimes. This discovery emerges from a systematic study of the set of observables that can be constructed out of q-deformed worldsheet CFTs with the underlying conformal group being the quantum group SU(1,1)_q. We conclude our analysis by discussing the possibility (or the lack thereof) of realizing known q-deformations of the Veneziano amplitude as observables in such theories, in particular the Coon amplitude. Comment: 27+7 pages, 1 figure
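    For reference, the Veneziano four-tachyon amplitude whose q-deformations the abstract discusses is the Euler Beta function on linear Regge trajectories; in the textbook bosonic-string convention (tachyon intercept 1), it reads

        \[
          A(s,t) = B\bigl(-\alpha(s), -\alpha(t)\bigr)
                 = \frac{\Gamma(-\alpha(s))\,\Gamma(-\alpha(t))}{\Gamma(-\alpha(s) - \alpha(t))},
          \qquad \alpha(x) = 1 + \alpha' x.
        \]

    The Coon amplitude is a q-deformation of this expression (loosely speaking, the Gamma functions are replaced by q-analogues), recovering the Veneziano amplitude in the q -> 1 limit.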

    Language Model Unalignment: Parametric Red-Teaming to Expose Hidden Harms and Biases

    Red-teaming has been a widely adopted way to evaluate the harmfulness of Large Language Models (LLMs). It aims to jailbreak a model's safety behavior to make it act as a helpful agent disregarding the harmfulness of the query. Existing methods are primarily based on input text-based red-teaming, such as adversarial prompts, low-resource prompts, or contextualized prompts, which condition the model to bypass its safe behavior. Bypassing the guardrails uncovers hidden harmful information and biases in the model that are left untreated or newly introduced by its safety training. However, prompt-based attacks fail to provide such a diagnosis owing to their low attack success rate and their applicability only to specific models. In this paper, we present a new perspective on LLM safety research, i.e., parametric red-teaming through Unalignment. It simply (instruction-)tunes the model parameters to break the model guardrails that are not deeply rooted in the model's behavior. Unalignment using as few as 100 examples can significantly bypass the guardrails of the model commonly referred to as CHATGPT, to the point where it responds with an 88% success rate to harmful queries on two safety benchmark datasets. On open-source models such as VICUNA-7B and LLAMA-2-CHAT 7B and 13B, it shows an attack success rate of more than 91%. On bias evaluations, Unalignment exposes inherent biases in safety-aligned models such as CHATGPT and LLAMA-2-CHAT, where the model's responses are strongly biased and opinionated 64% of the time. Comment: Under Review

    GRACEFULLY RECOVER WIFI

    During transportation, personal area network (PAN) host devices may often interface with PAN client devices via potentially complex PAN protocols. In the instance of a disconnection, the system may attempt to determine whether the user intended to disable the connection, or whether the PAN host device experienced a non-intentional disruption of service (sometimes referred to as an “interference drop”) due to, for example, interference jammers or other devices that produce signals that interfere with the PAN session. In these instances, the PAN host device should try to recover the connection only if it is an interference drop, so as to respect user intention. In both cases, the PAN host device may detect a ping timeout and attempt to recover the projection state by sending a start request over the PAN. On an intentional disconnect, the PAN client device may respond with a phone-network-disabled message status, indicating to the PAN host device that the user intended to disable the connection. On a non-intentional disconnect (e.g., an interference drop), the PAN host device may attempt to reconnect to the PAN client device. Due to interference, this attempt would fail. The PAN host device may then determine that it is likely in a network interference zone, because the PAN client device does not respond that it is able to connect to the PAN host device. The PAN host device can then retry multiple times to recover the connection. In this way, the PAN host device adheres to the user’s request to stay disconnected from the PAN client device if the disconnection was intentional, and recovers the connection when the disconnection was non-intentional (e.g., due to interference signals).
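    A minimal Python sketch of the decision flow described above. All names here (client.send_start_request, the PHONE_NETWORK_DISABLED status string, the retry budget) are hypothetical illustrations rather than any disclosed API; the sketch only encodes the intentional-versus-interference branching from the text.

        import time
        from enum import Enum, auto

        class DisconnectKind(Enum):
            INTENTIONAL = auto()    # user turned the connection off
            INTERFERENCE = auto()   # non-intentional "interference drop"

        MAX_RETRIES = 5             # hypothetical retry budget
        RETRY_DELAY_S = 2.0

        def classify_disconnect(client) -> DisconnectKind:
            """After a ping timeout, send a start request and inspect the reply."""
            reply = client.send_start_request()  # hypothetical PAN call
            if reply == "PHONE_NETWORK_DISABLED":
                # The client explicitly reports that the user disabled the link.
                return DisconnectKind.INTENTIONAL
            # No usable reply: most likely an interference drop.
            return DisconnectKind.INTERFERENCE

        def recover(client) -> bool:
            """Respect user intent; retry only on interference drops."""
            if classify_disconnect(client) is DisconnectKind.INTENTIONAL:
                return False        # stay disconnected, as the user requested
            for _ in range(MAX_RETRIES):
                if client.reconnect():           # hypothetical PAN call
                    return True
                time.sleep(RETRY_DELAY_S)        # likely still in an interference zone
            return False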

    Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment

    Large language models (LLMs) have taken the world by storm with their massive multi-tasking capabilities, achieved simply by optimizing over a next-word prediction objective. With the emergence of their properties and encoded knowledge, the risk of LLMs producing harmful outputs increases, making them unfit for scalable deployment to the public. In this work, we propose a new safety evaluation benchmark, RED-EVAL, that carries out red-teaming. We show that even widely deployed models are susceptible to Chain of Utterances (CoU) prompting, jailbreaking closed-source LLM-based systems such as GPT-4 and ChatGPT into unethically responding to more than 65% and 73% of harmful queries, respectively. We also demonstrate the consistency of RED-EVAL across 8 open-source LLMs, which generate harmful responses in more than 86% of the red-teaming attempts. Next, we propose RED-INSTRUCT, an approach for the safety alignment of LLMs. It consists of two phases: 1) HARMFULQA data collection: leveraging CoU prompting, we collect a dataset of 1.9K harmful questions covering a wide range of topics, with 9.5K safe and 7.3K harmful conversations from ChatGPT; 2) SAFE-ALIGN: we demonstrate how the conversational dataset can be used for the safety alignment of LLMs by minimizing the negative log-likelihood over helpful responses and penalizing harmful responses by gradient ascent over the sample loss. Our model STARLING, a fine-tuned Vicuna-7B, is observed to be more safely aligned when evaluated on RED-EVAL and HHH benchmarks, while preserving the utility of the baseline models (TruthfulQA, MMLU, and BBH).
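    A minimal sketch of the SAFE-ALIGN objective as stated in the abstract: descend on the negative log-likelihood of helpful responses while ascending on the loss of harmful ones. The weighting lam and the clamp are illustrative assumptions, not details from the paper; the model is assumed to be a Hugging Face-style causal LM whose forward pass returns the mean token NLL as .loss.

        import torch

        def safe_align_loss(model, helpful_batch, harmful_batch, lam=0.1):
            """Minimize NLL on helpful responses; gradient ascent on harmful
            responses is realized by subtracting their NLL from the loss."""
            nll_helpful = model(**helpful_batch).loss
            nll_harmful = model(**harmful_batch).loss
            # Clamp keeps the unbounded ascent term from dominating (assumption).
            return nll_helpful - lam * torch.clamp(nll_harmful, max=10.0)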

    Adapter Pruning using Tropical Characterization

    Adapters are a widely popular parameter-efficient transfer learning approach in natural language processing that inserts trainable modules between the layers of a pre-trained language model. Apart from several heuristics, however, there has been a lack of studies analyzing the optimal number of adapter parameters needed for downstream applications. In this paper, we propose an adapter pruning approach that studies the tropical characteristics of trainable modules. We cast it as an optimization problem that aims to prune parameters from the adapter layers without changing the orientation of the underlying tropical hypersurfaces. Our experiments on five NLP datasets show that tropical geometry tends to identify more relevant parameters to prune than the magnitude-based baseline, while a combined approach works best across the tasks. Comment: Accepted at EMNLP 2023, Findings
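    For reference, the trainable modules being pruned are bottleneck adapters of roughly the following shape (a generic sketch, not the paper's exact architecture). The ReLU makes the module piecewise linear, which is what admits a tropical-geometric description.

        import torch.nn as nn

        class Adapter(nn.Module):
            """Bottleneck adapter: down-project, nonlinearity, up-project,
            plus a residual connection. Pruning removes rows/columns of the
            two projection matrices."""
            def __init__(self, hidden_size=768, bottleneck=64):
                super().__init__()
                self.down = nn.Linear(hidden_size, bottleneck)
                self.up = nn.Linear(bottleneck, hidden_size)
                self.act = nn.ReLU()

            def forward(self, x):
                return x + self.up(self.act(self.down(x)))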

    KNOT: Knowledge Distillation using Optimal Transport for Solving NLP Tasks

    We propose a new approach, Knowledge Distillation using Optimal Transport (KNOT), to distill natural language semantic knowledge from multiple teacher networks to a student network. KNOT aims to train a (global) student model by learning to minimize the optimal transport cost from its assigned probability distribution over the labels to the weighted sum of probabilities predicted by the (local) teacher models, under the constraint that the student model does not have access to the teacher models' parameters or training data. To evaluate the quality of knowledge transfer, we introduce a new metric, Semantic Distance (SD), that measures semantic closeness between the predicted and ground-truth label distributions. The proposed method shows improvements in the global model's SD performance over the baseline across three NLP tasks, while performing on par with Entropy-based distillation on standard accuracy and F1 metrics. The implementation pertaining to this work is publicly available at: https://github.com/declare-lab/KNOT. Comment: COLING 2022
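    A minimal sketch of the objective as described in the abstract: the student's label distribution is pulled toward a weighted mixture of teacher distributions under an optimal transport cost. The entropic-regularized Sinkhorn solver and the label-to-label cost matrix C are generic assumptions, not necessarily KNOT's exact formulation.

        import torch

        def sinkhorn_cost(p, q, C, eps=0.1, iters=50):
            """Entropic-regularized OT cost between label distributions p and q
            (length-n vectors) with ground cost matrix C (n x n)."""
            K = torch.exp(-C / eps)
            u = torch.ones_like(p)
            for _ in range(iters):
                v = q / (K.t() @ u)
                u = p / (K @ v)
            plan = u.unsqueeze(1) * K * v.unsqueeze(0)  # transport plan
            return (plan * C).sum()

        def knot_style_loss(student_probs, teacher_probs_list, weights, C):
            """OT cost from the student distribution to the weighted teacher mixture."""
            mixture = sum(w * t for w, t in zip(weights, teacher_probs_list))
            return sinkhorn_cost(student_probs, mixture, C)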

    A Review of Deep Learning Techniques for Speech Processing

    The field of speech processing has undergone a transformative shift with the advent of deep learning. The use of multiple processing layers has enabled the creation of models capable of extracting intricate features from speech data. This development has paved the way for unparalleled advancements in automatic speech recognition, text-to-speech synthesis, and emotion recognition, propelling the performance of these tasks to unprecedented heights. The power of deep learning techniques has opened up new avenues for research and innovation in the field of speech processing, with far-reaching implications for a range of industries and applications. This review paper provides a comprehensive overview of the key deep learning models and their applications in speech-processing tasks. We begin by tracing the evolution of speech processing research, from early approaches, such as MFCC features and HMMs, to more recent advances in deep learning architectures, such as CNNs, RNNs, transformers, conformers, and diffusion models. We categorize the approaches and compare their strengths and weaknesses for solving speech-processing tasks. Furthermore, we extensively cover the speech-processing tasks, datasets, and benchmarks used in the literature and describe how different deep-learning networks have been utilized to tackle these tasks. Additionally, we discuss the challenges and future directions of deep learning in speech processing, including the need for more parameter-efficient, interpretable models and the potential of deep learning for multimodal speech processing. By examining the field's evolution, comparing and contrasting different approaches, and highlighting future directions and challenges, we hope to inspire further research in this exciting and rapidly advancing field.
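    Since the review contrasts classical MFCC/HMM pipelines with learned deep representations, a minimal sketch of the classical front end may help fix ideas (using librosa; the file path and parameter values are illustrative placeholders):

        import librosa

        # Load 16 kHz audio and extract 13 MFCCs per frame, the classical
        # hand-crafted features that learned deep representations have
        # largely replaced.
        y, sr = librosa.load("speech.wav", sr=16000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        print(mfcc.shape)  # (13, n_frames)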

    A Review Paper on Emotion Recognition Using Facial Expression

    Facial expressions are among the quickest means of communication when conveying any kind of information. They not only expose a person's sensitivity or feelings, but can also be used to judge his or her mental state. This paper introduces face recognition and facial expression recognition, and surveys recent research to identify effective and efficient techniques for facial expression recognition.