Observation and management of shallow anterior chamber after glaucoma surgery
AIM: To analyze the causes and management of shallow anterior chamber after glaucoma surgery.

METHODS: The causes and management of shallow anterior chamber after glaucoma surgery were analyzed retrospectively in 298 cases (462 eyes).

RESULTS: Of the 462 eyes, 99 (21.4%) developed shallow anterior chamber: 77 of 358 eyes (21.5%) treated with trabeculectomy, 20 of 85 eyes (23.5%) treated with trabeculectomy plus mitomycin C (MMC), and 2 of 19 eyes (10.5%) treated with trabeculectomy combined with cataract phacoemulsification and intraocular lens implantation. Shallow anterior chamber appeared 1 to 5 days postoperatively. Of the 99 affected eyes, 42 (42.4%) had excessive filtration, 6 (6.1%) had malignant glaucoma, 29 (29.3%) had choroidal detachment, and 2 (2.0%) had malignant glaucoma complicated by choroidal detachment. Of these 99 eyes, 79 recovered with nonsurgical treatment and 20 required surgery.

CONCLUSION: The common causes of shallow anterior chamber after glaucoma surgery were high preoperative intraocular pressure, inflammation, excessive filtration, conjunctival flap leakage, and choroidal detachment. Most cases can be managed nonsurgically; surgical intervention should be undertaken when necessary.
Understanding Hidden Memories of Recurrent Neural Networks
Recurrent neural networks (RNNs) have been successfully applied to various
natural language processing (NLP) tasks and achieved better results than
conventional methods. However, the lack of understanding of the mechanisms
behind their effectiveness limits further improvement of their architectures.
In this paper, we present a visual analytics method for understanding and
comparing RNN models for NLP tasks. We propose a technique to explain the
function of individual hidden state units based on their expected response to
input texts. We then co-cluster hidden state units and words based on the
expected response and visualize co-clustering results as memory chips and word
clouds to provide more structured knowledge on RNNs' hidden states. We also
propose a glyph-based sequence visualization based on aggregate information to
analyze the behavior of an RNN's hidden state at the sentence level. The
usability and effectiveness of our method are demonstrated through case studies
and reviews from domain experts.

Comment: Published at IEEE Conference on Visual Analytics Science and Technology (IEEE VAST 2017).
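The unit-explanation idea lends itself to a small sketch: run an RNN over a corpus, average each hidden unit's activation per word to obtain an expected-response matrix, and co-cluster its rows (words) and columns (units). This is a toy illustration under my own assumptions, not the paper's implementation; the tiny RNN, the vocabulary, and the choice of scikit-learn's SpectralCoclustering are all stand-ins.

```python
# Toy sketch: expected response of hidden units per word, then co-clustering.
# Not the paper's code; model, data, and clustering choice are assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import SpectralCoclustering

vocab = ["good", "bad", "movie", "plot", "great", "boring"]
vocab_ix = {w: i for i, w in enumerate(vocab)}
texts = [["good", "movie"], ["boring", "plot"], ["great", "plot"], ["bad", "movie"]]

emb = nn.Embedding(len(vocab), 8)
rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)

# Accumulate the hidden state observed whenever each word is read.
sums = np.zeros((len(vocab), 16))
counts = np.zeros(len(vocab))
with torch.no_grad():
    for sent in texts:
        ids = torch.tensor([[vocab_ix[w] for w in sent]])
        out, _ = rnn(emb(ids))                  # (1, seq_len, hidden_size)
        for t, w in enumerate(sent):
            sums[vocab_ix[w]] += out[0, t].numpy()
            counts[vocab_ix[w]] += 1

expected = sums / counts[:, None]               # word x unit "expected response"

# Co-cluster words and hidden units into joint blocks ("memory chips").
model = SpectralCoclustering(n_clusters=2, random_state=0)
model.fit(np.abs(expected) + 1e-9)              # co-clustering expects non-negative data
for c in range(2):
    words = [vocab[i] for i in np.where(model.row_labels_ == c)[0]]
    units = np.where(model.column_labels_ == c)[0].tolist()
    print(f"cluster {c}: words={words}, units={units}")
```

In the paper, such co-clusters drive the memory-chip and word-cloud views; here they are simply printed.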
Noise-Robust Fine-Tuning of Pretrained Language Models via External Guidance
Adopting a two-stage paradigm of pretraining followed by fine-tuning,
Pretrained Language Models (PLMs) have achieved substantial advancements in the
field of natural language processing. However, in real-world scenarios, data
labels are often noisy due to the complex annotation process, making it
essential to develop strategies for fine-tuning PLMs with such noisy labels. To
this end, we introduce an innovative approach for fine-tuning PLMs using noisy
labels, which incorporates the guidance of Large Language Models (LLMs) like
ChatGPT. This guidance assists in accurately distinguishing between clean and
noisy samples and provides supplementary information beyond the noisy labels,
thereby boosting the learning process during PLM fine-tuning. Extensive
experiments on synthetic and real-world noisy datasets further demonstrate the
advantages of our framework over state-of-the-art baselines.

Comment: EMNLP Findings 2023.
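One plausible rendering of this recipe, sketched below under my own assumptions: obtain an independent label for each training example from an LLM, treat examples where the LLM agrees with the dataset label as likely clean, and down-weight the rest during fine-tuning. The `llm_label` helper is a hypothetical stand-in for a ChatGPT-style API call, and the fixed down-weighting is illustrative rather than the paper's exact scheme.

```python
# Hedged sketch of LLM-guided noisy-label fine-tuning; not the paper's code.
import torch
import torch.nn.functional as F

def llm_label(text: str) -> int:
    """Hypothetical: ask an LLM (e.g. ChatGPT) to classify `text`."""
    raise NotImplementedError  # would call an API and parse the answer

def finetune_step(model, optimizer, batch_inputs, noisy_labels, llm_labels):
    """One update that trusts samples where the LLM agrees with the dataset label."""
    logits = model(batch_inputs)                 # (batch, num_classes)
    noisy = torch.as_tensor(noisy_labels)
    llm = torch.as_tensor(llm_labels)
    per_sample = F.cross_entropy(logits, noisy, reduction="none")
    # Agreement => likely clean => full weight; disagreement => down-weight.
    weights = (llm == noisy).float() * 0.9 + 0.1
    loss = (weights * per_sample).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A fuller version might also use the LLM's own label as a soft second target for the disagreeing samples, in line with the abstract's note that the guidance supplies information beyond the noisy labels.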
Contrastive Meta-Learning for Few-shot Node Classification
Few-shot node classification, which aims to predict labels for nodes on
graphs with only limited labeled nodes as references, is of great significance
in real-world graph mining tasks. In particular, we refer to the task of
classifying nodes in classes with only a few labeled nodes as the few-shot
node classification problem. To tackle this label-shortage issue, existing
works generally leverage the meta-learning framework, which utilizes a number
of episodes to extract transferable knowledge from classes with abundant
labeled nodes and generalizes the knowledge to other classes with limited
labeled nodes. In essence, the primary aim of few-shot node classification is
to learn node embeddings that are generalizable across different classes. To
accomplish this, the GNN encoder must be able to separate the node embeddings
of different classes while aligning the embeddings of nodes within the same
class. Thus, in this work, we propose to consider both the intra-class and
inter-class generalizability of the model. We create a novel contrastive
meta-learning framework on graphs, named COSMIC, with two key designs. First,
we propose to enhance the intra-class generalizability by involving a
contrastive two-step optimization in each episode to explicitly align node
embeddings in the same classes. Second, we strengthen the inter-class
generalizability by generating hard node classes via a novel
similarity-sensitive mix-up strategy. Extensive experiments on few-shot node
classification datasets verify the superiority of our framework over
state-of-the-art baselines. Our code is provided at
https://github.com/SongW-SW/COSMIC.

Comment: SIGKDD 2023.
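The two designs can be caricatured in a few lines, with heavy simplification on my part: a supervised contrastive loss that aligns same-class node embeddings within an episode, and a mix-up that blends the two most similar class prototypes into one harder synthetic class. This is not COSMIC's implementation (see the repository above); using prototypes as class representatives and a fixed mixing coefficient are my assumptions.

```python
# Illustrative sketch of the two ideas; see the linked repo for the real code.
import torch
import torch.nn.functional as F

def supcon_loss(z, labels, tau=0.5):
    """Supervised contrastive loss: pull same-class embeddings together."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau                            # pairwise similarities
    pos = (labels[:, None] == labels[None, :]).float()
    pos.fill_diagonal_(0)                            # self-pairs are not positives
    logits = sim - torch.eye(len(z)) * 1e9           # drop self from the softmax
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    return -(pos * log_prob).sum(1).div(pos.sum(1).clamp(min=1)).mean()

def mixup_hard_class(protos, lam=0.5):
    """Similarity-sensitive mix-up (simplified): interpolate the two most
    similar class prototypes to synthesize one hard class prototype."""
    p = F.normalize(protos, dim=1)
    sim = p @ p.t()
    sim.fill_diagonal_(float("-inf"))                # ignore self-similarity
    i, j = divmod(int(sim.argmax()), sim.size(1))    # closest pair of classes
    return lam * protos[i] + (1 - lam) * protos[j]
```

Within an episode, `supcon_loss` would play the role of one of the two optimization steps the abstract mentions, and the mixed prototype would define an extra, harder class for the classification step.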