208 research outputs found
Robust Absolute Stability Criteria for a Class of Uncertain Lur'e Systems of Neutral Type
This paper is concerned with the problem of robust absolute stability for a class of uncertain Lur'e systems of neutral type. Some delay-dependent stability criteria are obtained and formulated in the form of linear matrix inequalities (LMIs). Neither model transformation nor a bounding technique for cross terms is involved in the derivation of the stability criteria. A numerical example shows the effectiveness of the criteria.
USAGE: A Unified Seed Area Generation Paradigm for Weakly Supervised Semantic Segmentation
Seed area generation is usually the starting point of weakly supervised
semantic segmentation (WSSS). Computing the Class Activation Map (CAM) from a
multi-label classification network is the de facto paradigm for seed area
generation, but CAMs generated from Convolutional Neural Networks (CNNs) and
Transformers are prone to under- and over-activation, respectively, which
makes strategies for refining CAMs from CNNs usually inappropriate for
Transformers, and vice versa. In this paper, we propose a Unified optimization
paradigm for Seed Area GEneration (USAGE) for both types of networks, in which
the objective function to be optimized consists of two terms: one is a
generation loss, which controls the shape of seed areas via a temperature
parameter following a deterministic principle for each type of network;
the other is a regularization loss, which enforces consistency between the
seed areas generated from different views by self-adaptive network
adjustment, in order to suppress false activations in seed areas. Experimental
results show that USAGE consistently improves seed area generation for both
CNNs and Transformers by large margins, e.g., outperforming state-of-the-art
methods by 4.1% mIoU on PASCAL VOC. Moreover, based on the USAGE-generated
seed areas on Transformers, we achieve state-of-the-art WSSS results on both
PASCAL VOC and MS COCO.
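The two-term objective described above can be illustrated with a minimal numerical sketch. This is not the paper's implementation: the function name `seed_loss`, the sigmoid normalization, and the mean-squared terms are illustrative assumptions standing in for USAGE's actual generation and regularization losses.

```python
import numpy as np

def seed_loss(cam_a, cam_b, target, temperature=0.5, reg_weight=1.0):
    """Toy two-term seed-area objective (hypothetical stand-in for USAGE).

    cam_a, cam_b : raw class-activation maps of one image from two views, (H, W).
    target       : binary reference seed area, (H, W).
    """
    # Generation term: a temperature-scaled sigmoid sharpens (small T) or
    # softens (large T) the activation map, controlling the seed-area shape.
    prob_a = 1.0 / (1.0 + np.exp(-cam_a / temperature))
    generation = np.mean((prob_a - target) ** 2)

    # Regularization term: penalize disagreement between the two views,
    # discouraging activations that appear in only one of them.
    prob_b = 1.0 / (1.0 + np.exp(-cam_b / temperature))
    regularization = np.mean((prob_a - prob_b) ** 2)

    return generation + reg_weight * regularization
```

In this toy form, lowering `temperature` pushes the normalized map toward a hard 0/1 seed mask, which is the kind of shape control the temperature parameter provides.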
Expression of fatty acid and lipid biosynthetic genes in developing endosperm of Jatropha curcas
BACKGROUND: Temporal and spatial expression of fatty acid and lipid biosynthetic genes is associated with the accumulation of storage lipids in the seeds of oil plants. In jatropha (Jatropha curcas L.), a potential biofuel plant, the storage lipids are mainly synthesized and accumulated in the endosperm of seeds. Although the fatty acid and lipid biosynthetic genes in jatropha have been identified, the expression of these genes at different developmental stages of the endosperm has not been systematically investigated. RESULTS: Transmission electron microscopy revealed that oil body formation in the developing endosperm of jatropha seeds first appeared at 28 days after fertilization (DAF), proceeded actively at 42 DAF, and reached its maximum number and size at 56 DAF. Sixty-eight genes that encode enzymes, proteins or their subunits involved in fatty acid and lipid biosynthesis were identified from a normalized cDNA library of jatropha developing endosperm. Gene expression analysis by quantitative reverse-transcription polymerase chain reaction demonstrated that the 68 genes could be grouped into five categories based on their relative expression patterns during endosperm development. Category I has 47 genes, which displayed a bell-shaped expression pattern with peak expression at 28 or 42 DAF but low expression at 14 and 56 DAF. Category II contains 8 genes, whose expression increased steadily from 14 to 56 DAF. Category III comprises 2 genes, both constitutively expressed throughout endosperm development. Category IV has 9 genes, which showed high expression at 14 and 28 DAF but decreased expression from 42 to 56 DAF. Category V consists of 2 genes, both showing medium expression at 14 DAF, the lowest expression at 28 or 42 DAF, and the highest expression at 56 DAF.
In addition, genes encoding enzymes or proteins with similar functions were differentially expressed during endosperm development. CONCLUSION: The formation of oil bodies in jatropha endosperm is developmentally regulated. The expression of the majority of fatty acid and lipid biosynthetic genes is highly consistent with the development of oil bodies and endosperm in jatropha seeds, while the genes encoding enzymes with similar functions may be differentially expressed during endosperm development. These results not only provide the initial information on spatial and temporal expression of fatty acid and lipid biosynthetic genes in jatropha developing endosperm, but are also valuable for identifying the rate-limiting genes for storage lipid biosynthesis and accumulation during seed development.
Screening for CCNF Mutations in a Chinese Amyotrophic Lateral Sclerosis Cohort
Previous research has identified CCNF mutations in familial (FALS) and sporadic amyotrophic lateral sclerosis (SALS), as well as in frontotemporal dementia (FTD). The aim of our study was to measure the frequency of CCNF mutations in a Chinese population. In total, 78 FALS patients, 581 SALS patients and 584 controls were included. We found 19 missense mutations, nine synonymous mutations and two intronic variants. According to the American College of Medical Genetics and Genomics (ACMG) standards and guidelines for the interpretation of sequence variants, eight variants were judged to be pathogenic or likely pathogenic. The frequency of such variants was 2.56% in FALS and 1.03% in SALS. In conclusion, CCNF mutations are common in FALS and SALS patients of Chinese origin, and further study is still needed.
GROVE: A Retrieval-augmented Complex Story Generation Framework with A Forest of Evidence
Conditional story generation is significant in human-machine interaction,
particularly in producing stories with complex plots. While large language
models (LLMs) perform well on multiple NLP tasks, including story generation,
it is challenging to generate stories with both complex and creative plots.
Existing methods often rely on detailed prompts to guide LLMs to meet target
conditions, which inadvertently restrict the creative potential of the
generated stories. We argue that leveraging information from exemplary
human-written stories facilitates generating more diverse plotlines. Delving
deeper into story details helps build complex and credible plots. In this
paper, we propose a retrieval-auGmented stoRy generation framework with a
fOrest of eVidEnce (GROVE) to enhance stories' complexity. We build a
retrieval repository for target conditions to produce few-shot examples to
prompt LLMs. Additionally, we design an "asking-why" prompting scheme that
extracts a forest of evidence, compensating for the ambiguities that may
occur in the generated
story. This iterative process uncovers underlying story backgrounds. Finally,
we select the most fitting chains of evidence from the evidence forest and
integrate them into the generated story, thereby enhancing the narrative's
complexity and credibility. Experimental results and numerous examples verify
the effectiveness of our method. Comment: Findings of EMNLP 2023.
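The "asking-why" loop above can be sketched as a recursive prompting procedure. The `ask_llm` callable is a hypothetical interface (any function mapping a prompt to a list of strings), not GROVE's actual API:

```python
def build_evidence_forest(story, ask_llm, depth=2, branches=2):
    """Recursively ask "why" about ambiguities in a draft story.

    ask_llm : callable(prompt) -> list[str]; a stand-in for an LLM call.
    Returns a nested dict: ambiguity -> {explanation -> deeper forest},
    i.e. a forest of evidence with chains up to `depth` levels long.
    """
    if depth == 0:
        return {}
    forest = {}
    # Ask the model which points in the story are ambiguous.
    for question in ask_llm(f"List ambiguities in: {story}")[:branches]:
        # Ask "why" about each ambiguity; answers become background evidence.
        answers = ask_llm(f"Why? Explain: {question}")[:branches]
        forest[question] = {
            answer: build_evidence_forest(answer, ask_llm, depth - 1, branches)
            for answer in answers
        }
    return forest
```

Selecting the most fitting root-to-leaf chains from this structure and weaving them back into the story corresponds to the final integration step described in the abstract.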
Hierarchical Vector Quantized Transformer for Multi-class Unsupervised Anomaly Detection
Unsupervised image Anomaly Detection (UAD) aims to learn robust and
discriminative representations of normal samples. While separate solutions per
class incur expensive computation and limited generalizability, this paper
focuses on building a unified framework for multiple classes. Under such a
challenging setting, popular reconstruction-based networks with continuous
latent representation assumption always suffer from the "identical shortcut"
issue, where both normal and abnormal samples can be well recovered and thus
become difficult to distinguish. To address this pivotal issue, we propose a
hierarchical vector quantized prototype-oriented Transformer under a
probabilistic framework. First, instead of learning the continuous
representations, we preserve the typical normal patterns as discrete iconic
prototypes, and confirm the importance of Vector Quantization in preventing the
model from falling into the shortcut. The vector quantized iconic prototype is
integrated into the Transformer for reconstruction, such that the abnormal data
point is flipped to a normal data point. Second, we investigate a
hierarchical framework to relieve the codebook collapse issue and replenish
frail normal patterns. Third, a prototype-oriented optimal transport method is
proposed to better regulate the prototypes and hierarchically evaluate the
abnormal score. By evaluating on MVTec-AD and VisA datasets, our model
surpasses the state-of-the-art alternatives and possesses good
interpretability. The code is available at
https://github.com/RuiyingLu/HVQ-Trans
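The central quantization step, snapping continuous latents to discrete prototypes, can be sketched in a few lines. This is a generic vector-quantization lookup rather than the paper's hierarchical model; names and shapes are illustrative:

```python
import numpy as np

def vector_quantize(z, codebook):
    """Replace each latent vector with its nearest codebook prototype.

    z        : (N, D) continuous latents from an encoder.
    codebook : (K, D) discrete prototypes of typical normal patterns.
    Returns the quantized latents and the chosen prototype indices.
    """
    # Pairwise squared distances between latents and prototypes.
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)
    # Snapping to prototypes is what blocks the "identical shortcut":
    # an anomalous latent cannot pass through unchanged; it is mapped to
    # the closest stored normal pattern before reconstruction.
    return codebook[idx], idx
```

Reconstructing from `codebook[idx]` instead of `z` means abnormal inputs are rebuilt from normal patterns, so their reconstruction error stays large and detectable.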
Direct imaging of a zero-field target skyrmion and its polarity switch in a chiral magnetic nanodisk
A target skyrmion is a flux-closed spin texture that has two-fold degeneracy
and is promising as a binary state in next-generation universal memories.
Although its formation in nanopatterned chiral magnets has been predicted, its
observation has remained challenging. Here, we use off-axis electron holography
to record images of target skyrmions in a 160-nm-diameter nanodisk of the
chiral magnet FeGe. We compare experimental measurements with numerical
simulations, demonstrate switching between two stable degenerate target
skyrmion ground states that have opposite polarities and rotation senses, and
discuss the observed switching mechanism. Comment: 18 pages, 4 figures.
Self-Evolution Learning for Mixup: Enhance Data Augmentation on Few-Shot Text Classification Tasks
Text classification tasks often encounter few-shot scenarios with limited
labeled data, and addressing data scarcity is crucial. Data augmentation with
mixup has been shown to be effective on various text classification tasks.
However, most mixup methods do not consider the varying degree of learning
difficulty at different stages of training and generate new samples with
one-hot labels, resulting in model overconfidence. In this paper, we propose a
self-evolution learning (SE) based mixup approach for data augmentation in
text classification, which can generate more adaptive and model-friendly
pseudo samples for model training. SE focuses on the variation of the model's
learning ability. To alleviate model overconfidence, we introduce a novel
instance-specific label smoothing approach, which linearly interpolates the
model's output and the one-hot labels of the original samples to generate new
soft labels for mixing up. Through experimental analysis, in addition to
improving classification accuracy, we demonstrate that SE also enhances the
model's generalization ability.
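The instance-specific label smoothing described above can be sketched as a simple interpolation. This is an illustrative reading of the idea, not the paper's exact rule; the `alpha` parameter and the renormalization are assumptions:

```python
import numpy as np

def instance_label_smoothing(one_hot, model_prob, alpha=0.2):
    """Soften a hard label with the model's own prediction for that instance.

    one_hot    : (C,) hard label of the original sample.
    model_prob : (C,) model's predicted class distribution for the sample.
    alpha      : smoothing strength; larger alpha trusts the model more.
    """
    soft = (1.0 - alpha) * one_hot + alpha * model_prob
    return soft / soft.sum()  # keep a valid probability distribution

def mixup(x1, x2, y1, y2, lam=0.5):
    """Standard mixup: convex combination of two inputs and their labels."""
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2
```

Mixing up the softened labels instead of raw one-hot labels is what counteracts the overconfidence the abstract describes.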
Direct reprogramming of induced neural progenitors: a new promising strategy for AD treatment.
Alzheimer's disease (AD) is a prominent form of dementia, characterized by aggregation of amyloid β-peptide (Aβ) plaques and neurofibrillary tangles, loss of synapses and neurons, and degeneration of cognitive functions. Currently, although a variety of medications can relieve some of the symptoms, there is no cure for AD. Recent breakthroughs in the stem cell field provide promising strategies for AD treatment. Stem cells including embryonic stem cells (ESCs), neural stem cells (NSCs), mesenchymal stem cells (MSCs), and induced pluripotent stem cells (iPSCs) are potential candidates for AD treatment. However, limitations of cell sources, safety issues, and ethical issues restrict their applications in AD. Recently, the direct reprogramming of induced neural progenitor cells (iNPCs) has shed light on the treatment of AD. In this review, we discuss the latest progress, challenges, and potential applications of direct reprogramming in AD treatment.
A Survey on Label-efficient Deep Image Segmentation: Bridging the Gap between Weak Supervision and Dense Prediction
The rapid development of deep learning has brought great progress to image
segmentation, one of the fundamental tasks of computer vision. However, the
current segmentation algorithms mostly rely on the availability of pixel-level
annotations, which are often expensive, tedious, and laborious. To alleviate
this burden, the past years have witnessed increasing attention to building
label-efficient, deep-learning-based image segmentation algorithms. This paper
offers a comprehensive review on label-efficient image segmentation methods. To
this end, we first develop a taxonomy to organize these methods according to
the supervision provided by different types of weak labels (including no
supervision, inexact supervision, incomplete supervision and inaccurate
supervision) and supplemented by the types of segmentation problems (including
semantic segmentation, instance segmentation and panoptic segmentation). Next,
we summarize the existing label-efficient image segmentation methods from a
unified perspective that discusses an important question: how to bridge the gap
between weak supervision and dense prediction -- the current methods are mostly
based on heuristic priors, such as cross-pixel similarity, cross-label
constraint, cross-view consistency, and cross-image relation. Finally, we share
our opinions about the future research directions for label-efficient deep
image segmentation. Comment: Accepted to IEEE TPAMI.
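One of the heuristic priors named above, cross-pixel similarity, can be illustrated with a small random-walk style refinement of a coarse seed map. This is an illustrative sketch of the prior rather than a specific published method; the Gaussian affinity and iteration count are assumptions:

```python
import numpy as np

def refine_seeds(seed, features, iters=10, sigma=1.0):
    """Propagate coarse per-pixel scores along feature similarity.

    seed     : (N, C) initial per-pixel class scores (the weak labels).
    features : (N, D) per-pixel feature vectors.
    Pixels with similar features end up with similar scores.
    """
    # Affinity from feature similarity (Gaussian kernel), row-normalized
    # so each row is a transition distribution over similar pixels.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(axis=-1)
    affinity = np.exp(-d2 / (2.0 * sigma ** 2))
    affinity /= affinity.sum(axis=1, keepdims=True)

    scores = seed.astype(float).copy()
    for _ in range(iters):
        scores = affinity @ scores  # diffuse scores between similar pixels
    return scores
```

The same bridging idea underlies the other priors in the taxonomy: each supplies a dense, label-free signal that extends sparse weak supervision toward a full dense prediction.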