Distill the Image to Nowhere: Inversion Knowledge Distillation for Multimodal Machine Translation
Past work on multimodal machine translation (MMT) improves on the bilingual setup by
incorporating additional aligned visual information. However, the image-must
requirement of multimodal datasets largely hinders MMT's development --
namely, it demands an aligned triple of [image, source text, target text].
This limitation is especially troublesome during the inference phase,
when no aligned image is provided, as in the standard NMT setup. Thus, in
this work, we introduce IKD-MMT, a novel MMT framework that supports an
image-free inference phase via an inversion knowledge distillation scheme. In
particular, a multimodal feature generator is trained with a knowledge
distillation module, which generates the multimodal feature directly from
(only) source text as the input. While a few prior works have
entertained the possibility of supporting image-free inference for machine
translation, their performance has yet to rival image-must translation.
In our experiments, we identify our method as the first image-free approach to
comprehensively rival or even surpass (almost) all image-must frameworks,
achieving state-of-the-art results on the widely used Multi30k benchmark. Our
code and data are available at: https://github.com/pengr/IKD-mmt/tree/master
Comment: Long paper accepted by the EMNLP 2022 main conference
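The core idea -- training a text-only feature generator to mimic the teacher's image-derived features -- can be sketched as a simple distillation loop. This is a minimal illustration, not the paper's implementation: the linear generator, the MSE distillation loss, and all array shapes are assumptions made for the sake of a runnable toy example.

```python
import numpy as np

def distillation_step(student_W, text_feats, teacher_img_feats, lr=0.1):
    """One gradient step pushing the text-only generator's output
    toward the teacher's image-derived features (MSE distillation loss)."""
    pred = text_feats @ student_W             # "visual" features generated from text alone
    err = pred - teacher_img_feats            # distillation residual
    loss = float(np.mean(err ** 2))
    grad = 2 * text_feats.T @ err / err.size  # dL/dW for the mean-squared error
    return student_W - lr * grad, loss

rng = np.random.default_rng(0)
W_teacher = rng.normal(size=(4, 3))   # stands in for the frozen image-must teacher
X = rng.normal(size=(8, 4))           # toy source-text features (aligned training data)
Y = X @ W_teacher                     # teacher's image features for those samples

W = np.zeros((4, 3))                  # student generator, trained without images at test time
losses = []
for _ in range(200):
    W, loss = distillation_step(W, X, Y)
    losses.append(loss)
```

At inference, only `X` (source text) is needed: the student produces a surrogate multimodal feature, which is what makes the image-free setup possible.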
Better Sign Language Translation with Monolingual Data
Sign language translation (SLT) systems, which are often decomposed into
video-to-gloss (V2G) recognition and gloss-to-text (G2T) translation through
the pivot gloss, rely heavily on the availability of large-scale parallel G2T
pairs. However, the manual annotation of the pivot gloss, which is a sequence of
transcribed written-language words in the order in which they are signed,
further exacerbates the scarcity of data for SLT. To address this issue, this
paper proposes a simple and efficient rule-transformation method that automatically
transcribes large-scale target-side monolingual data into pseudo glosses
to enhance SLT. Empirical results show that the proposed
approach significantly improves the performance of SLT, achieving
state-of-the-art results on two SLT benchmark datasets, PHOENIX-WEATHER 2014T
and ASLG-PC12. Our code has been released at:
https://github.com/pengr/Mono_SLT
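A rule transformation from written text to pseudo glosses can be sketched as below. The specific rules here (dropping function words, stripping punctuation, uppercasing in gloss convention) are illustrative assumptions, not the paper's actual rule set.

```python
import re

# Illustrative function-word list; the paper's rules would use their own inventory.
STOPWORDS = {"a", "an", "the", "is", "are", "am", "to", "of"}

def text_to_pseudo_gloss(sentence: str) -> str:
    """Toy rule transform: strip punctuation, drop function words,
    and uppercase the remaining tokens, mimicking gloss conventions."""
    tokens = re.findall(r"[a-zA-Z']+", sentence.lower())
    kept = [t for t in tokens if t not in STOPWORDS]
    return " ".join(t.upper() for t in kept)

print(text_to_pseudo_gloss("The weather is cold in the north."))
# WEATHER COLD IN NORTH
```

Applied to large target-side monolingual corpora, such a transform yields pseudo (gloss, text) pairs that can augment the scarce parallel G2T data.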
Revisiting the Knowledge Injection Frameworks
In recent years, large language models (LLMs), such as the GPT series, have attained
great impact worldwide. However, how to adapt these LLMs to better suit
vertical, domain-specific tasks by utilizing external knowledge remains not
completely solved. Indeed, a few works have emerged along this line, and
most of them rely on an alignment heuristic built to inject the
corresponding knowledge tuple into the associated text sample.
However, despite the promise, we identify a pivotal problem that pervades this
line of work. Simply put, we find that injecting unaligned (i.e., random)
knowledge tuples into the LLMs achieves comparable (and sometimes better)
results than injecting the aligned knowledge. We therefore conduct a thorough
investigation of this frustrating finding across a variety of related prior work
and further provide a chain of potential interpretations for the phenomenon.
Based on all that, we offer a simple remediation technique. Briefly, the core of
this technique is an emphasis on the pruning and
purification of the external knowledge base to be injected into LLMs. Finally,
we show that by integrating this technique into most (if not all) knowledge
injection frameworks and recent LLMs, it manages to overcome the aforementioned
sanity problem and further pushes the boundary of the performance of
domain-adaptive LLMs.
Comment: 9 pages, 6 figures, accepted by EMNLP 2023 Main Conference
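The aligned-versus-random comparison at the heart of the finding can be sketched as follows. The `[KNOWLEDGE]` marker, the verbalization format, and the toy knowledge base are all hypothetical choices for illustration; injection frameworks differ in how they serialize tuples into the sample.

```python
import random

def inject_tuple(text: str, triple: tuple) -> str:
    """Append a verbalized (head, relation, tail) tuple to a text sample --
    the kind of alignment heuristic most injection frameworks rely on."""
    head, rel, tail = triple
    return f"{text} [KNOWLEDGE] {head} {rel} {tail}."

# Toy external knowledge base (hypothetical entries).
kb = [("aspirin", "treats", "headache"),
      ("paris", "capital_of", "france")]

sample = "The patient was given aspirin."

# Aligned injection: the tuple actually mentions an entity in the sample.
aligned = inject_tuple(sample, kb[0])

# Unaligned control: a randomly drawn tuple, the paper's sanity check.
unaligned = inject_tuple(sample, random.choice(kb))
```

The paper's troubling observation is that fine-tuning on `unaligned`-style samples performs comparably to `aligned`-style ones, which motivates pruning and purifying the knowledge base before injection.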
Parathyroid Hormone as a Predictive Factor for Hypocalcemia After Thyroidectomy: A Prospective Study of 100 Patients
INTRODUCTION:
Hypocalcemia is a frequent complication after total thyroidectomy and the main reason for prolonged hospitalization of these patients.
MATERIAL AND METHODS:
We prospectively studied 112 patients who underwent total or completion thyroidectomy between June 2012 and November 2013. Twelve patients with preoperative changes in parathyroid function were excluded. Parathyroid hormone and calcium levels were determined pre-operatively, immediately after surgery, and on the 1st and 14th days after surgery.
RESULTS:
Of the 100 patients enrolled, 60 developed hypocalcaemia (60%), but only 14 had symptomatic hypocalcaemia. It occurred mostly within 24 hours after surgery (76.7%) and was permanent in 3 patients and temporary in the others. Among the 60 patients with hypocalcaemia, hypoparathyroidism was found in 19 patients immediately after surgery and in 14 patients on the 1st day, but only 3 still had hypoparathyroidism on the 14th day (the patients with permanent hypocalcaemia). Comparing the groups with and without hypocalcaemia, we found a decrease of parathyroid hormone in both (immediately after surgery and on the 1st day), but it was more pronounced in the hypocalcaemia group (p = 0.004 immediately after surgery). A decrease of parathyroid hormone > 19.4% determined on the 1st day predicted hypocalcaemia (sensitivity = 82%; specificity = 63%).
DISCUSSION:
In our study there was a high incidence of hypocalcemia (60%), expressed predominantly 24 hours after surgery and associated, in these patients, with a longer hospital stay. However, only 3 patients (3%) had permanent hypocalcemia. We also found that serum calcium and parathyroid hormone levels oscillated in parallel, which identifies the decrease in parathyroid hormone on the first day after surgery as a reliable predictor of hypocalcemia.
CONCLUSION:
A decrease of parathyroid hormone levels > 19.4% determined on the 1st day is a good predictor of hypocalcemia after total or completion thyroidectomy, allowing clinicians to identify patients at higher risk of hypocalcemia, medicate them prophylactically, and discharge them early and safely.
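The study's cut-off rule is simple arithmetic on the relative PTH drop, which can be expressed directly. The 19.4% threshold and the sensitivity/specificity figures come from the abstract; the function names and the example values (in pg/mL) are illustrative assumptions.

```python
def pth_decline_pct(pre_op: float, day1: float) -> float:
    """Relative parathyroid hormone drop from the pre-operative baseline, in percent."""
    return 100.0 * (pre_op - day1) / pre_op

def at_risk_of_hypocalcemia(pre_op: float, day1: float, cutoff: float = 19.4) -> bool:
    """Flag patients whose day-1 PTH fell more than the study's 19.4% cut-off
    (sensitivity 82%, specificity 63% per the abstract)."""
    return pth_decline_pct(pre_op, day1) > cutoff

# e.g. a baseline of 50 pg/mL falling to 35 pg/mL is a 30% drop -> flagged as at risk
```

Such a rule lets the surgical team start prophylactic calcium/vitamin D in flagged patients and discharge the rest early, as the conclusion suggests.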
- …