134 research outputs found
Structure-based substrate screening for an enzyme
Background: With the rapid development of genetic engineering, more and more novel enzymes can easily be found in the overall enzyme pool. However, experimental substrate screening for a new enzyme is laborious, time-consuming, and costly. On the other hand, many computational methods are widely used for lead screening in drug design. Since the ligand-target-protein system in drug design and the substrate-enzyme system in enzyme applications share a similar molecular-recognition mechanism, the present study aims to accomplish substrate screening by in silico means.

Results: A computer-aided substrate screening (CASS) system based on the enzyme structure was designed and employed successfully to help screen substrates of Candida antarctica lipase B (CALB). In this system, restricted molecular docking, derived from the catalytic mechanism of the enzyme, was applied to predict the energetically favorable poses of substrate-enzyme complexes. Thereafter, three criteria were applied sequentially to screen the binding poses: the substrate conformation; the distance between the oxygen atom of the alcohol part of the ester (in some compounds this oxygen is replaced by the nitrogen atom of the amine part of an acid amide, or by the sulfur atom of a thioester) and the hydrogen atom of the His224 imidazole; and the distance between the carbon atom of the compound's carbonyl group and the oxygen atom of the Ser105 hydroxyl group. 223 out of 233 compounds were identified correctly for the enzyme by this screening system. Such high accuracy supports the feasibility and reliability of the CASS system.

Conclusion: The idea of computer-aided substrate screening is a creative combination of computational techniques and enzymology. Although the case studied in this paper is tentative, the high accuracy of the CASS system sheds light on the field of computer-aided substrate screening.
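The sequential distance criteria described in the abstract can be sketched as a simple pose filter. The following is a minimal illustration only: the atom coordinates are hypothetical inputs (e.g., taken from a docked pose), and the 3.5 Å cutoffs are placeholder values, not the thresholds actually used by the CASS system.

```python
import math

def distance(a, b):
    """Euclidean distance between two 3-D coordinates (in angstroms)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def accept_pose(o_ester, h_his224, c_carbonyl, o_ser105,
                cutoff_his=3.5, cutoff_ser=3.5):
    """Apply the two distance criteria sequentially to one docked pose.

    o_ester:    O of the alcohol part of the ester (or its N/S analog)
    h_his224:   H of the His224 imidazole
    c_carbonyl: carbonyl C of the compound
    o_ser105:   hydroxyl O of Ser105
    The 3.5 A cutoffs are illustrative placeholders, not the paper's values.
    """
    if distance(o_ester, h_his224) > cutoff_his:
        return False  # first criterion failed; reject the pose
    return distance(c_carbonyl, o_ser105) <= cutoff_ser
```

Poses surviving both checks would then count as predicted substrates for the enzyme.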
Ti-MAE: Self-Supervised Masked Time Series Autoencoders
Multivariate Time Series forecasting has been an increasingly popular topic
in various applications and scenarios. Recently, contrastive learning and
Transformer-based models have achieved good performance in many long-term
series forecasting tasks. However, several issues remain in existing methods.
First, the training paradigm of contrastive learning is inconsistent with
downstream prediction tasks, leading to inaccurate prediction results.
Second, existing Transformer-based models which resort to similar patterns in
historical time series data for predicting future values generally induce
severe distribution shift problems, and do not fully leverage the sequence
information compared to self-supervised methods. To address these issues, we
propose a novel framework named Ti-MAE, in which the input time series are
assumed to follow an integrated distribution. In detail, Ti-MAE randomly masks
out embedded time series data and learns an autoencoder to reconstruct them at
the point-level. Ti-MAE adopts mask modeling (rather than contrastive learning)
as the auxiliary task and bridges the connection between existing
representation learning and generative Transformer-based methods, reducing the
difference between upstream and downstream forecasting tasks while maintaining
the utilization of original time series data. Experiments on several public
real-world datasets demonstrate that our framework of masked autoencoding could
learn strong representations directly from the raw data, yielding better
performance in time series forecasting and classification tasks.

Comment: 20 pages, 7 figures
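The point-level mask-and-reconstruct objective described above can be illustrated with a short sketch. This is not the authors' implementation; it only shows the masking step and a loss computed on the masked points, using NumPy, an assumed mask ratio, and zero as the placeholder for masked values.

```python
import numpy as np

def random_point_mask(series, mask_ratio=0.75, rng=None):
    """Randomly mask a fraction of time points (MAE-style corruption)."""
    rng = rng or np.random.default_rng(0)
    n = series.shape[0]
    n_mask = int(round(mask_ratio * n))
    idx = rng.choice(n, size=n_mask, replace=False)
    mask = np.zeros(n, dtype=bool)
    mask[idx] = True
    corrupted = series.copy()
    corrupted[mask] = 0.0  # placeholder value for masked points
    return corrupted, mask

def masked_mse(reconstruction, target, mask):
    """Reconstruction loss computed only on the masked points."""
    diff = reconstruction[mask] - target[mask]
    return float(np.mean(diff ** 2))
```

An autoencoder would be trained to map `corrupted` back to the original series, with `masked_mse` as the objective so that only the hidden points drive learning.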
User-Centered Software Design: User Interface Redesign for Blockly–Electron, Artificial Intelligence Educational Software for Primary and Secondary Schools
According to the 2021 and 2022 Horizon Reports, AI is emerging in all areas of education, in various forms of educational aids with diverse applications, and is carving out a similarly ubiquitous presence across campuses and classrooms. This study explores a user-centered approach to the design of AI educational software, taking the redesign of the user interface of the AI educational software Blockly–Electron as an example. Moreover, by analyzing the relationships among four software-usability variables, the abstract notion of usability is further substantiated so as to provide ideas for future improvements to the usability of AI educational software. User-centered design methods and attribution analysis are the main research methods used in this study. The user-centered approach was structured around four phases. Overall, seventy-three middle school students and five teachers participated in the study. The USE scale was used to measure the usability of Blockly–Electron. Five design deliverables were created, and an attribution model was derived that captures the linear relationships among Ease of Learning, Ease of Use, Usefulness, and Satisfaction, with Ease of Use as a mediating variable; this differs significantly from the results of previous regression analyses of the USE scale. This study provides a structured, user-centered design methodology with quantitative research. The deliverables and the attribution model can be used in AI educational software design. Furthermore, this study found that usefulness and ease of learning significantly affect ease of use, and ease of use significantly affects satisfaction. On this basis, usability can be further concretized to facilitate the production of software with greater usability.
Improving the Robustness of Transformer-based Large Language Models with Dynamic Attention
Transformer-based models, such as BERT and GPT, have been widely adopted in
natural language processing (NLP) due to their exceptional performance.
However, recent studies show their vulnerability to textual adversarial attacks
where the model's output can be misled by intentionally manipulating the text
inputs. Although various methods have been proposed to enhance the model's
robustness and mitigate this vulnerability, many require heavy resource
consumption (e.g., adversarial training) or provide only limited protection
(e.g., defensive dropout). In this paper, we propose a novel method called
dynamic attention, tailored for the transformer architecture, to enhance the
inherent robustness of the model itself against various adversarial attacks.
Our method requires no downstream task knowledge and does not incur additional
costs. The proposed dynamic attention consists of two modules: (i) attention
rectification, which masks or weakens the attention value of the chosen tokens,
and (ii) dynamic modeling, which dynamically builds the set of candidate
tokens. Extensive experiments demonstrate that dynamic attention significantly
mitigates the impact of adversarial attacks, achieving up to 33% better
performance than previous methods against widely used adversarial attacks. The
model-level design of dynamic attention enables it to be easily combined with
other defense methods (e.g., adversarial training) to further enhance the
model's robustness. Furthermore, we demonstrate that dynamic attention
preserves the state-of-the-art robustness space of the original model compared
to other dynamic modeling methods.
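The attention-rectification module described above, which masks or weakens the attention values of chosen tokens, can be sketched as follows. This is an illustrative NumPy version, not the paper's implementation; the candidate-token set (produced by the dynamic-modeling module) is taken as a given input, and the renormalization step is an assumption.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def rectified_attention(scores, candidate_tokens, weaken=0.0):
    """Mask or weaken attention toward selected tokens, then renormalize.

    scores:           (n, n) raw attention scores for one head
    candidate_tokens: indices of tokens whose attention values are scaled
                      by `weaken` (0.0 = fully masked)
    """
    attn = softmax(scores, axis=-1)
    attn[:, candidate_tokens] *= weaken          # rectify chosen columns
    attn = attn / attn.sum(axis=-1, keepdims=True)  # rows sum to 1 again
    return attn
```

Because the change is confined to the attention weights, such a module could in principle be stacked with input-level defenses like adversarial training, as the abstract notes.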
CA1-projecting subiculum neurons facilitate object-place learning.
Recent anatomical evidence suggests a functionally significant back-projection pathway from the subiculum to the CA1. Here we show that the afferent circuitry of CA1-projecting subicular neurons is biased by inputs from CA1 inhibitory neurons and the visual cortex, but lacks input from the entorhinal cortex. Efferents of the CA1-projecting subiculum neurons also target the perirhinal cortex, an area strongly implicated in object-place learning. We identify a critical role for CA1-projecting subicular neurons in object-location learning and memory, and show that this projection modulates place-specific activity of CA1 neurons and their responses to displaced objects. Together, these experiments reveal a novel pathway by which cortical inputs, particularly those from the visual cortex, reach the hippocampal output region CA1. Our findings also implicate this circuitry in the formation of complex spatial representations and learning of object-place associations.
Label-Based Multiple Kernel Learning for Classification
This paper presents a novel technique for multiple kernel learning within the Support Vector Machine framework. The problem of combining different sources of information arises in several situations, for instance, the classification of data with asymmetric similarity matrices or the construction of an optimal classifier from a collection of kernels. Often, each source of information can be expressed as a similarity matrix. In this paper we propose a new method for producing a single optimal kernel matrix from a collection of kernel (similarity) matrices, using the label information, for classification purposes. The constructed kernel matrix is then used to train a Support Vector Machine. The key ideas behind the kernel construction are twofold: the quantification, relative to the classification labels, of the difference in information among the similarities; and the extension of the linear combination of similarity matrices to the concept of a functional combination of similarity matrices. The proposed method has been successfully evaluated and compared with other powerful classifiers on a variety of real classification problems.
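One standard way to realize a label-informed combination of kernels is kernel-target alignment: weight each kernel matrix by its similarity to the ideal label kernel yy^T. The sketch below shows that common heuristic, not necessarily the exact construction proposed in the paper; the nonnegativity clamp and the normalization of the weights are assumptions.

```python
import numpy as np

def combine_kernels(kernels, labels):
    """Combine kernel matrices, weighted by alignment with the label kernel.

    kernels: list of (n, n) symmetric kernel/similarity matrices
    labels:  array of +1/-1 class labels, length n
    Returns the combined kernel matrix and the normalized weights.
    """
    y = np.asarray(labels, dtype=float)
    K_y = np.outer(y, y)  # ideal kernel induced by the labels

    def alignment(K):
        # cosine similarity between K and K_y viewed as flat vectors
        return np.sum(K * K_y) / (np.linalg.norm(K) * np.linalg.norm(K_y))

    # clamp negative alignments to zero; assumes at least one is positive
    w = np.array([max(alignment(K), 0.0) for K in kernels])
    w = w / w.sum()
    K_combined = sum(wi * Ki for wi, Ki in zip(w, kernels))
    return K_combined, w
```

The combined matrix can then be handed to any SVM that accepts precomputed kernels, e.g. scikit-learn's `SVC(kernel='precomputed')`.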
Ginsenoside Rh1 Improves the Effect of Dexamethasone on Autoantibodies Production and Lymphoproliferation in MRL/lpr Mice
Ginsenoside Rh1 is able to upregulate glucocorticoid receptor (GR) level, suggesting Rh1 may improve glucocorticoid efficacy in hormone-dependent diseases. Therefore, we investigated whether Rh1 could enhance the effect of dexamethasone (Dex) in the treatment of MRL/lpr mice. MRL/lpr mice were treated with vehicle, Dex, Rh1, or Dex + Rh1 for 4 weeks. Dex significantly reduced the proteinuria and anti-dsDNA and anti-ANA autoantibodies. The levels of proteinuria and anti-dsDNA and anti-ANA autoantibodies were further decreased in the Dex + Rh1 group. Dex, Rh1, or Dex + Rh1 did not alter the proportion of CD4+ splenic lymphocytes, whereas the proportion of CD8+ splenic lymphocytes was significantly increased in the Dex and Dex + Rh1 groups. Dex + Rh1 significantly decreased the ratio of CD4+/CD8+ splenic lymphocytes compared with control. Con A-induced CD4+ splenic lymphocyte proliferation was increased in Dex-treated mice and was inhibited in Dex + Rh1-treated mice. Th1 cytokine IFN-γ mRNA was suppressed and Th2 cytokine IL-4 mRNA was increased by Dex. The effect of Dex on IFN-γ and IL-4 mRNA was enhanced by Rh1. In conclusion, our data suggest that Rh1 may enhance the effect of Dex in the treatment of MRL/lpr mice through regulating CD4+ T cell activation and the Th1/Th2 balance.