Does imaging genetics reveal shared mechanisms behind psychotic symptom profile?
Current diagnoses of schizophrenia (SZ) and bipolar disorder (BD) are based on phenomenological principles and clinical descriptions. The boundaries between the disorders are blurring as shared genetic and brain mechanisms continue to be uncovered. Imaging genetics is a useful tool for understanding the impact of genetic variation on the brain; it also captures the behavioral implications of those genes and the associated brain alterations.
This study aimed to reveal the associations among sets of genetic variations, structural brain abnormalities, and clinical symptom profiles shared between schizophrenia and bipolar disorder using imaging genetics and multivariate approaches. First, we mapped symptom profiles onto brain patterns. Using parallel independent component analysis (pICA), we extracted distinct structural brain patterns guided by symptom profiles measured with the Positive and Negative Syndrome Scale (PANSS). Brain patterns related to positive symptoms, mood, and apathy were discovered in SZ and BD. Second, we investigated the relationships between symptoms and brain patterns regardless of diagnostic category by projecting each disorder's structural brain and PANSS patterns into the other disorder group (e.g., projecting patterns from schizophrenia to bipolar disorder and vice versa) and reassessing the associations. The projected brain patterns showed associations with broader symptoms than the original PANSS patterns. Finally, we explored the potential shared genetic mechanisms behind the symptom-brain patterns by investigating the effect of polygenic risk scores (PRS) from the Psychiatric Genomics Consortium (PGC). Both SZ and BD PRS were significantly associated with the positive-symptom-related brain patterns in SZ: higher genetic risk contributed to more severe gray matter concentration (GMC) reductions in the temporal regions of SZ patients, which may lead to worse positive symptoms. Correspondingly, in BD, both SZ and BD PRS were significantly associated with the mood-symptom-related brain patterns: higher risk contributed to more severe GMC reductions in frontal-temporal-parietal circuits together with worse mood symptoms. The polygenic effects behind the apathy component may be subtle. These results help refine the understanding of categories of psychotic disorders, starting from schizophrenia and bipolar disorder.
This understanding may ultimately contribute to more precise diagnosis and treatment for heterogeneous populations with psychosis.
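The cross-diagnosis projection step (mapping one group's brain pattern into the other group and re-testing its symptom association) can be sketched in a few lines of numpy. Everything below is hypothetical toy data: `w_sz` stands in for a pICA spatial loading learned in the SZ group, `X_bd` for gray-matter features of BD subjects, and `panss` for their symptom scores; none of it comes from the study itself.

```python
import numpy as np

def project_pattern(X_other, w):
    """Project a spatial loading vector w (voxels,) learned in one
    diagnostic group onto brain data from the other group
    (subjects x voxels), yielding one expression score per subject."""
    return X_other @ w

def pearson_r(a, b):
    """Pearson correlation between two 1-D score vectors."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# Hypothetical toy data: 50 BD subjects, 200 gray-matter features.
rng = np.random.default_rng(0)
X_bd = rng.normal(size=(50, 200))
w_sz = rng.normal(size=200)                  # pattern "learned" in SZ
panss = X_bd @ w_sz + rng.normal(scale=5.0, size=50)  # noisy symptom scores

scores = project_pattern(X_bd, w_sz)         # project SZ pattern into BD
r = pearson_r(scores, panss)                 # reassess the association
print(scores.shape)
```

In the study's terms, a significant correlation between the projected scores and symptom measures would indicate that a pattern generalizes across the diagnostic boundary.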
Fast Incremental SVDD Learning Algorithm with the Gaussian Kernel
Support vector data description (SVDD) is a machine learning technique that
is used for single-class classification and outlier detection. The idea of SVDD
is to find a set of support vectors that defines a boundary around data. When
dealing with online or large data, existing batch SVDD methods have to be rerun
in each iteration. We propose an incremental learning algorithm for SVDD that
uses the Gaussian kernel. This algorithm builds on the observation that all
support vectors on the boundary have the same distance to the center of the sphere
in a higher-dimensional feature space as mapped by the Gaussian kernel
function. Each iteration involves only the existing support vectors and the new
data point. Moreover, the algorithm is based solely on matrix manipulations;
the support vectors and their corresponding Lagrange multipliers
are automatically selected and determined in each iteration. It can be seen
that the complexity of our algorithm in each iteration depends only on
the number of support vectors. Experimental results on some real data
sets indicate that FISVDD demonstrates significant gains in efficiency with
almost no loss in either outlier detection accuracy or objective function
value.

Comment: 18 pages, 1 table, 4 figures
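The central observation can be illustrated with a small sketch: with a Gaussian kernel, k(x, x) = 1 for every point, so all mapped points lie on the unit sphere in feature space and all boundary support vectors share one common distance to the center. The toy support vectors, equal multipliers, and kernel width below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel; note k(x, x) == 1 for every x, so all
    mapped points lie on the unit sphere in feature space."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def dist2_to_center(z, svs, alphas, sigma=1.0):
    """Squared feature-space distance from phi(z) to the SVDD center
    a = sum_i alpha_i * phi(sv_i):
      ||phi(z) - a||^2 = k(z,z) - 2 * sum_i alpha_i k(sv_i, z)
                         + sum_ij alpha_i alpha_j k(sv_i, sv_j)
    """
    k_zz = gaussian_kernel(z, z, sigma)
    cross = sum(a * gaussian_kernel(s, z, sigma) for s, a in zip(svs, alphas))
    K = np.array([[gaussian_kernel(si, sj, sigma) for sj in svs] for si in svs])
    return k_zz - 2 * cross + alphas @ K @ alphas

# Toy example: two symmetric support vectors with equal multipliers.
svs = np.array([[-1.0, 0.0], [1.0, 0.0]])
alphas = np.array([0.5, 0.5])

d_left = dist2_to_center(svs[0], svs, alphas)
d_right = dist2_to_center(svs[1], svs, alphas)
# Boundary support vectors share a single radius.
assert abs(d_left - d_right) < 1e-12

# A far-away point falls outside that radius -> flagged as an outlier.
d_new = dist2_to_center(np.array([5.0, 5.0]), svs, alphas)
print(d_new > d_left)  # True
```

An incremental step then only needs the distances between the new point and the existing support vectors, which is what keeps each iteration cheap.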
The Path and Enlightenment of Data-Driven Digital Transformation of Organizational Learning: A Case Study of the Practice of China Telecom
Taking China Telecom as a case, this paper analyzes data-driven digital transformation in organizational learning and summarizes the methods and lessons of digital transformation.
Prefix-Tuning Based Unsupervised Text Style Transfer
Unsupervised text style transfer aims at training a generative model that can
alter the style of the input sentence while preserving its content without
using any parallel data. In this paper, we employ powerful pre-trained large
language models and present a new prefix-tuning-based method for unsupervised
text style transfer. We construct three different kinds of prefixes, i.e.,
\textit{shared prefix, style prefix}, and \textit{content prefix}, to encode
task-specific information, target style, and the content information of the
input sentence, respectively. Compared to embeddings used by previous works,
the proposed prefixes can provide richer information for the model.
Furthermore, we adopt a recursive way of using language models in the process
of style transfer. This strategy enables more effective interaction
between the input sentence and GPT-2, helps the model construct
more informative prefixes, and thus, helps improve the performance. Evaluations
on well-known datasets show that our method outperforms the
state-of-the-art baselines. Results, analysis of ablation studies, and
subjective evaluations from humans are also provided for a deeper understanding
of the proposed method.
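The mechanics of prefix-tuning can be shown in a minimal single-head attention sketch: trainable prefix key/value vectors are prepended to the frozen model's keys and values, and only the prefixes would be updated during training. This is a generic numpy illustration under assumed dimensions, not the authors' GPT-2 implementation; the split into shared, style, and content prefixes is represented here only as one concatenated block of prefix rows.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_prefix(q, k, v, prefix_k, prefix_v):
    """Single-head attention in which trainable prefix key/value
    vectors are prepended to the frozen keys/values; the base model
    weights stay fixed, only the prefixes are learned."""
    k_full = np.concatenate([prefix_k, k], axis=0)
    v_full = np.concatenate([prefix_v, v], axis=0)
    scores = q @ k_full.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v_full

rng = np.random.default_rng(0)
d, seq, plen = 8, 5, 3          # assumed toy dimensions
q = rng.normal(size=(seq, d))
k = rng.normal(size=(seq, d))
v = rng.normal(size=(seq, d))
# One block of trainable prefix rows (standing in for the shared,
# style, and content prefixes of the paper).
prefix_k = rng.normal(size=(plen, d))
prefix_v = rng.normal(size=(plen, d))

out = attention_with_prefix(q, k, v, prefix_k, prefix_v)
print(out.shape)  # (5, 8)
```

Because the output length matches the input, the prefixes steer attention without changing the interface of the frozen model.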
Theoretic Analysis and Extremely Easy Algorithms for Domain Adaptive Feature Learning
Domain adaptation problems arise in a variety of applications, where a
training dataset from the \textit{source} domain and a test dataset from the
\textit{target} domain typically follow different distributions. The primary
difficulty in designing effective learning models to solve such problems lies
in how to bridge the gap between the source and target distributions. In this
paper, we provide a comprehensive analysis of feature learning algorithms used in
conjunction with linear classifiers for domain adaptation. Our analysis shows
that in order to achieve good adaptation performance, the second moments of the
source domain distribution and target domain distribution should be similar.
Based on our new analysis, a novel extremely easy feature learning algorithm
for domain adaptation is proposed. Furthermore, our algorithm is extended by
leveraging multiple layers, leading to a deep linear model. We evaluate the
effectiveness of the proposed algorithms in terms of domain adaptation tasks on
the Amazon review dataset and the spam dataset from the ECML/PKDD 2006
discovery challenge.
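One classical way to make the second moments of two domains similar is a whiten-then-recolor transform of the source features, in the style of correlation alignment. The sketch below is a generic illustration of that idea under assumed toy data, not the algorithm proposed in the paper.

```python
import numpy as np

def align_second_moments(Xs, Xt, eps=1e-5):
    """Transform source features so their covariance matches the
    target covariance: whiten with the source covariance, then
    re-color with the target covariance."""
    Xs_c = Xs - Xs.mean(axis=0)
    Xt_c = Xt - Xt.mean(axis=0)
    Cs = np.cov(Xs_c, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt_c, rowvar=False) + eps * np.eye(Xt.shape[1])

    def mat_pow(C, p):
        # Matrix power via eigendecomposition (C symmetric PSD).
        w, V = np.linalg.eigh(C)
        return (V * np.clip(w, 0, None) ** p) @ V.T

    return Xs_c @ mat_pow(Cs, -0.5) @ mat_pow(Ct, 0.5)

rng = np.random.default_rng(0)
Xs = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))   # source domain
Xt = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))   # target domain

Xs_aligned = align_second_moments(Xs, Xt)
gap = np.linalg.norm(np.cov(Xs_aligned, rowvar=False)
                     - np.cov(Xt - Xt.mean(axis=0), rowvar=False))
print(gap < 0.01)  # covariances now (nearly) match
```

A linear classifier trained on the aligned source features then faces a target distribution with matching second moments, which is the condition the analysis identifies.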
A Unified Encoder-Decoder Framework with Entity Memory
Entities, as important carriers of real-world knowledge, play a key role in
many NLP tasks. We focus on incorporating entity knowledge into an
encoder-decoder framework for informative text generation. Existing approaches
tried to index, retrieve, and read external documents as evidence, but they
suffered from a large computational overhead. In this work, we propose an
encoder-decoder framework with an entity memory, namely EDMem. The entity
knowledge is stored in the memory as latent representations, and the memory is
pre-trained on Wikipedia along with encoder-decoder parameters. To precisely
generate entity names, we design three decoding methods to constrain entity
generation by linking entities in the memory. EDMem is a unified framework that
can be used on various entity-intensive question answering and generation
tasks. Extensive experimental results show that EDMem outperforms both
memory-based auto-encoder models and non-memory encoder-decoder models.

Comment: Accepted by the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP 2022)
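The core idea of a latent entity memory can be illustrated with a toy sketch: each entity is stored as one embedding, a decoder hidden state queries the memory by dot-product attention, and at generation time the argmax entry links the output span to a concrete entity name. The class, entity names, and dimensions below are hypothetical, and this is a simplification of EDMem rather than its actual architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class EntityMemory:
    """Latent entity memory: one embedding per entity. A decoder
    hidden state queries the memory by dot product and reads back a
    mixture of entity embeddings; the argmax entry gives the linked
    entity name for constrained generation."""
    def __init__(self, names, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.names = names
        self.emb = rng.normal(size=(len(names), dim))

    def read(self, hidden):
        attn = softmax(self.emb @ hidden)    # attention over entities
        return attn @ self.emb, self.names[int(attn.argmax())]

mem = EntityMemory(["Paris", "Berlin", "Tokyo"], dim=16)
# A hypothetical decoder state that aligns with one memory entry.
hidden = mem.emb[2]
mixed, linked = mem.read(hidden)
print(linked)
```

In the full model the memory would be pre-trained jointly with the encoder-decoder, and the linked entity constrains which name tokens the decoder may emit.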