The REconsolidaTion Using RewiNd Study (RETURN): trial protocol
Background: An increasing body of research highlights reconsolidation-based therapies as emerging treatments for post-traumatic stress disorder (PTSD). The Rewind Technique is a non-pharmacological reconsolidation-based therapy with promising early results, which now requires evaluation through an RCT.
Objectives: This is a preliminary efficacy RCT to determine if the Rewind Technique is likely to be a good candidate to test against usual care in a future pragmatic efficacy RCT.
Methods: Forty participants will be randomised to receive the Rewind Technique either immediately or after an 8-week wait. The primary outcome will be PTSD symptom severity as measured by the Clinician-Administered PTSD Scale for DSM-5 (CAPS-5) at 8 and 16 weeks post-randomisation. Secondary outcome measures include the PTSD Checklist (PCL-5), the International Trauma Questionnaire (ITQ), the Patient Health Questionnaire (PHQ-9), the Generalised Anxiety Disorder-7 (GAD-7), the Insomnia Severity Index, the EuroQol-5D (EQ-5D-5L), the prominence of re-experiencing-specific symptoms (CAPS-5), and an intervention acceptability questionnaire to measure the tolerability of the intervention.
Conclusions: This study will be the first RCT to assess the Rewind Technique. Using a cross-over methodology, we hope to rigorously assess the efficacy and tolerability of Rewind under pragmatic inclusion criteria. Potential challenges include participant recruitment and retention.
NLP meets psychotherapy: Using predicted client emotions and self-reported client emotions to measure emotional coherence
Emotions are experienced and expressed through various response systems.
Coherence between emotional experience and emotional expression is considered
important to clients' well-being. To date, emotional coherence (EC) has been
studied at a single time point using lab-based tasks with relatively small
datasets. No study has examined EC between the subjective experience of
emotions and emotion expression in therapy or whether this coherence is
associated with clients' well-being. Natural Language Processing (NLP)
approaches have been applied to identify emotions from psychotherapy dialogue,
which can be implemented to study emotional processes on a larger scale.
However, these methods have yet to be used to study coherence between emotional
experience and emotional expression over the course of therapy and whether it
relates to clients' well-being. This work presents an end-to-end approach where
we use emotion predictions from our transformer-based emotion recognition model
to study emotional coherence and its diagnostic potential in psychotherapy
research. We first employ our transformer-based approach on a Hebrew
psychotherapy dataset to automatically label clients' emotions at the utterance
level in psychotherapy dialogues. We subsequently investigate the emotional
coherence between clients' self-reported emotional states and our model-based
emotion predictions. We also examine the association between emotional
coherence and clients' well-being. Our findings indicate a significant
correlation between clients' self-reported emotions and positive and negative
emotions expressed verbally during psychotherapy sessions. Coherence in
positive emotions was also highly correlated with clients' well-being. These
results illustrate how NLP can be applied to identify important emotional
processes in psychotherapy to improve diagnosis and treatment for clients
suffering from mental-health problems.
Comment: Accepted at Empowering Communities: A Participatory Approach to AI for Mental Health, NeurIPS 2022 Virtual Workshop.
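To make the coherence computation concrete, here is a minimal sketch, assuming a hypothetical Hebrew emotion-classification checkpoint and placeholder emotion labels (neither is specified above): session-level positive-emotion predictions are averaged over client utterances and correlated with matched self-reports.

```python
# A minimal sketch, not the authors' pipeline: estimate emotional coherence as the
# correlation between clients' self-reported positive-affect ratings and model-predicted
# positive-emotion scores aggregated over each session's utterances.
# "some-org/hebrew-emotion-model" and the label names are placeholder assumptions.
from transformers import pipeline
from scipy.stats import pearsonr

classifier = pipeline("text-classification",
                      model="some-org/hebrew-emotion-model",  # hypothetical checkpoint
                      top_k=None)                             # return scores for all labels

def session_positive_score(utterances):
    """Mean predicted positive-emotion score across one session's client utterances."""
    scores = []
    for utt in utterances:
        preds = classifier([utt])[0]  # list of {"label": ..., "score": ...} per utterance
        scores.append(sum(p["score"] for p in preds if p["label"] in {"joy", "contentment"}))
    return sum(scores) / len(scores)

def emotional_coherence(sessions, self_reports):
    """sessions: list of utterance lists; self_reports: matching self-reported ratings."""
    model_scores = [session_positive_score(s) for s in sessions]
    return pearsonr(self_reports, model_scores)  # (correlation, p-value)
```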
Learning Robust Representations for Continual Relation Extraction via Adversarial Class Augmentation
Continual relation extraction (CRE) aims to continually learn new relations
from a class-incremental data stream. A CRE model usually suffers from the
catastrophic forgetting problem, i.e., the performance on old relations
seriously degrades when the model learns new relations. Most previous work
attributes catastrophic forgetting to the corruption of the learned
representations as new relations come, with an implicit assumption that the CRE
models have adequately learned the old relations. In this paper, through
empirical studies we argue that this assumption may not hold, and an important
reason for catastrophic forgetting is that the learned representations do not
have good robustness against the appearance of analogous relations in the
subsequent learning process. To address this issue, we encourage the model to
learn more precise and robust representations through a simple yet effective
adversarial class augmentation mechanism (ACA), which is easy to implement and
model-agnostic. Experimental results show that ACA can consistently improve the
performance of state-of-the-art CRE models on two popular benchmarks.
Comment: Accepted by EMNLP 2022.
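The abstract does not spell out the augmentation itself, so the following is only a hedged illustration of one plausible reading: synthesising confusable "analogous" classes from existing relation instances (here by reversing head and tail entities) and training on them alongside the real classes. The data format and the synthetic-label scheme are assumptions.

```python
# A hedged illustration, not the paper's released code: manufacture "analogous" classes
# by reversing the head and tail entities of existing relation instances and treating
# the result as new synthetic classes, so the encoder must learn features that separate
# near-duplicate relations.
import copy

def reversed_class_augment(examples, num_relations):
    """examples: list of dicts like {"tokens": [...], "head": span, "tail": span, "label": int}.
    Returns copies with head/tail swapped and labels mapped to new synthetic class ids."""
    augmented = []
    for ex in examples:
        aug = copy.deepcopy(ex)
        aug["head"], aug["tail"] = ex["tail"], ex["head"]
        aug["label"] = ex["label"] + num_relations  # a brand-new, deliberately confusable class
        augmented.append(aug)
    return augmented

# During training on a new task the model would be fit on examples + augmented copies;
# the synthetic classes serve only as a training signal and are dropped at evaluation time.
```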
Post Traumatic Stress Disorder and Substance Use Disorder as Two Pathologies Affecting Memory Reactivation: Implications for New Therapeutic Approaches
In the present review, we provide evidence indicating that although post-traumatic stress disorder (PTSD) and substance use disorder (SUD) are two distinct pathologies with very different impacts on the people affected by these chronic illnesses, they share numerous common characteristics, present high rates of co-morbidity, and may result from common physiological dysfunctions. We propose that these pathologies result from hyper-reactivity to reminders and should therefore be considered, and treated, as two disorders of memory. We review the different possibilities for intervening on pathological memories, such as extinction therapy and reconsolidation blockade. We also introduce new therapeutic avenues directly indicated by our recent proposal to replace the consolidation/reconsolidation hypothesis with the integration concept. State dependency and emotional remodeling are two innovative treatments that have already provided encouraging results. In summary, this review shows that the discovery of reactivation-dependent memory malleability has opened new therapeutic avenues based on the reprocessing of pathological memories, which constitute promising approaches to treating PTSD and SUD.
Incremental Prompting: Episodic Memory Prompt for Lifelong Event Detection
Lifelong event detection aims to incrementally update a model with new event
types and data while retaining the capability on previously learned old types.
One critical challenge is that the model would catastrophically forget old
types when continually trained on new data. In this paper, we introduce
Episodic Memory Prompts (EMP) to explicitly preserve the learned task-specific
knowledge. Our method adopts continuous prompt for each task and they are
optimized to instruct the model prediction and learn event-specific
representation. The EMPs learned in previous tasks are carried along with the
model in subsequent tasks, and can serve as a memory module that keeps the old
knowledge and transferring to new tasks. Experiment results demonstrate the
effectiveness of our method. Furthermore, we also conduct a comprehensive
analysis of the new and old event types in lifelong learning.
Comment: Accepted to COLING'22 Main Conference (Short paper). 9 pages, 2 figures, 3 tables.
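As a rough interpretation of the mechanism described above (not the authors' code), the sketch below keeps one trainable prompt per task and prepends all accumulated prompts to the encoder's input embeddings.

```python
# A minimal PyTorch sketch of episodic memory prompts under that reading: each task
# gets a small set of trainable prompt vectors; prompts from earlier tasks are kept
# and carried into later tasks as a memory of previously learned event types.
import torch
import torch.nn as nn

class EpisodicMemoryPrompts(nn.Module):
    def __init__(self, hidden_size, prompt_len=4):
        super().__init__()
        self.hidden_size = hidden_size
        self.prompt_len = prompt_len
        self.task_prompts = nn.ParameterList()  # one prompt tensor per task seen so far

    def add_task(self):
        """Allocate a fresh prompt when a new set of event types arrives."""
        self.task_prompts.append(
            nn.Parameter(torch.randn(self.prompt_len, self.hidden_size) * 0.02))

    def forward(self, token_embeddings):
        """Prepend every accumulated task prompt to a batch of token embeddings."""
        batch_size = token_embeddings.size(0)
        prompts = torch.cat(list(self.task_prompts), dim=0)       # (num_tasks * len, hidden)
        prompts = prompts.unsqueeze(0).expand(batch_size, -1, -1)
        return torch.cat([prompts, token_embeddings], dim=1)

# Usage sketch: call add_task() at the start of each new task, then feed the output of
# forward() into the encoder in place of the raw input embeddings.
```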
Data Mining to find out the patterns in the data of circulation section log of July, 2022 of Jamia Millia Islamia University library using python
Data mining is a process used to find important and meaningful patterns in large data sets; it converts raw data into meaningful information and knowledge. Previously unknown patterns are extracted from unstructured data, stored in a proper format, and then utilized for developing future strategies. Several questions arise: what is data mining, what is its process, which tools or software are used, and what are its applications in libraries? This article describes the basics of data mining and the process, tools and techniques it involves. Data from the circulation section log of Jamia Millia Islamia University library is analysed to find meaningful patterns and to understand the borrowing habits of the library's patrons. Python is used for the coding, identifying helpful sequences that provide a meaningful interpretation of the circulation section log; such patterns can support decision-making in the library.
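As an illustration of the kind of Python analysis the abstract describes, here is a minimal pandas sketch; the file name and column names are assumptions about how such a circulation log might be laid out.

```python
# A minimal sketch with pandas; "circulation_log_july_2022.csv" and the columns
# ("issue_date", "patron_type", "call_number") are assumed, not taken from the article.
import pandas as pd

log = pd.read_csv("circulation_log_july_2022.csv", parse_dates=["issue_date"])

# Busiest days of the week for checkouts.
checkouts_by_weekday = log["issue_date"].dt.day_name().value_counts()

# Which patron groups borrow from which broad subject areas (call-number prefixes).
log["subject"] = log["call_number"].astype(str).str[:2]
borrowing_patterns = (log.groupby(["patron_type", "subject"])
                         .size()
                         .sort_values(ascending=False)
                         .head(10))

print(checkouts_by_weekday)
print(borrowing_patterns)
```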
Class-Incremental Learning based on Label Generation
Despite the great success of pre-trained language models, it is still a
challenge to use these models for continual learning, especially for the
class-incremental learning (CIL) setting due to catastrophic forgetting (CF).
This paper reports our finding that if we formulate CIL as a continual label
generation problem, CF is drastically reduced and the generalizable
representations of pre-trained models can be better retained. We thus propose a
new CIL method (VAG) that also leverages the sparsity of vocabulary to focus
the generation and creates pseudo-replay samples by using label semantics.
Experimental results show that VAG outperforms baselines by a large margin.
Comment: 12 pages, ACL 2023 Main Conference.
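A hedged sketch of the label-generation framing with vocabulary-restricted decoding is shown below; the model checkpoint and label strings are placeholders, and the pseudo-replay component is not illustrated.

```python
# A hedged sketch of the "generate the class label" framing (not the released VAG code):
# class names are produced as text, and decoding is restricted to the small vocabulary
# that actually occurs in the label set.
from transformers import T5TokenizerFast, T5ForConditionalGeneration

labels = ["sports news", "business news", "science news"]   # placeholder label set
tok = T5TokenizerFast.from_pretrained("t5-small")            # placeholder checkpoint
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Vocabulary sparsity: only tokens appearing in some label (plus special tokens) are allowed.
allowed = {tok.pad_token_id, tok.eos_token_id}
for label in labels:
    allowed.update(tok(label, add_special_tokens=False).input_ids)
allowed = sorted(allowed)

def restrict_to_label_vocab(batch_id, input_ids):
    return allowed  # same constraint at every decoding step

inputs = tok("classify: stocks rallied after the quarterly earnings report",
             return_tensors="pt")
out = model.generate(**inputs,
                     prefix_allowed_tokens_fn=restrict_to_label_vocab,
                     max_new_tokens=8)
print(tok.decode(out[0], skip_special_tokens=True))
```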
Orthogonal Subspace Learning for Language Model Continual Learning
Benefiting from massive corpora and advanced hardware, large language models
(LLMs) exhibit remarkable capabilities in language understanding and
generation. However, their performance degrades when multiple tasks are
encountered sequentially, a phenomenon known as catastrophic forgetting. In
this paper, we propose orthogonal low-rank adaptation (O-LoRA), a simple and
efficient approach for continual learning in language models, effectively
mitigating catastrophic forgetting while learning new tasks. Specifically,
O-LoRA learns tasks in different (low-rank) vector subspaces that are kept
orthogonal to each other in order to minimize interference. Our method induces
only marginal additional parameter costs and requires no user data storage for
replay. Experimental results on continual learning benchmarks show that our
method outperforms state-of-the-art methods. Furthermore, compared to previous
approaches, our method excels in preserving the generalization ability of LLMs
on unseen tasks.
Comment: EMNLP 2023 Findings.
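One way to read the orthogonality constraint is as a penalty on the overlap between the new task's low-rank adapter and the frozen adapters of earlier tasks; the sketch below is an interpretation under that assumption, not the released implementation.

```python
# A minimal sketch of the orthogonality idea: the new task's trainable LoRA matrix is
# penalised for overlapping with the frozen low-rank matrices learned for earlier tasks,
# pushing each task's update into its own subspace.
import torch

def orthogonality_loss(current_A, past_As, weight=0.5):
    """current_A: (r, d) trainable LoRA matrix for the new task.
    past_As: list of frozen (r_i, d) LoRA matrices from earlier tasks."""
    loss = current_A.new_zeros(())
    for past_A in past_As:
        # Row-space overlap between the two adapters; zero when the subspaces are orthogonal.
        loss = loss + (current_A @ past_A.T).pow(2).sum()
    return weight * loss

# Training sketch: total_loss = task_loss + orthogonality_loss(A_new, frozen_past_As)
```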