The role of glucocorticoid action in the pathophysiology of the Metabolic Syndrome
Glucocorticoids are stress hormones that modulate a large number of physiological actions involved in metabolic, inflammatory, cardiovascular and behavioral processes. The molecular mechanisms and the physiological effects of glucocorticoids have been extensively studied. However, the involvement of glucocorticoid action in the etiology of the Metabolic Syndrome has not been well appreciated. Recently, accumulating clinical evidence and animal genetics studies have attracted growing interest in the role of glucocorticoid action in obesity and insulin resistance. This review discusses the metabolic effects of glucocorticoids in the context of glucocorticoid metabolism and establishes the association of glucocorticoid action with the features of the Metabolic Syndrome, especially obesity and insulin resistance. Special attention is given to corticosteroid-binding globulin and 11β-hydroxysteroid dehydrogenase type 1, two proteins that mediate glucocorticoid action and have been implicated in the Metabolic Syndrome. Given the complexity of glucocorticoid biology and the Metabolic Syndrome, and the limited space available, this review is intended only to provide a general link between the two areas, with broad rather than in-depth discussions of clinical, pharmacological and genetic findings.
Animatable 3D Gaussian: Fast and High-Quality Reconstruction of Multiple Human Avatars
Neural radiance fields are capable of reconstructing high-quality drivable
human avatars but are expensive to train and render. To reduce this cost, we
propose Animatable 3D Gaussian, which learns human avatars from input images
and poses. We extend 3D Gaussians to dynamic human scenes by modeling a set of
skinned 3D Gaussians and a corresponding skeleton in canonical space and
deforming 3D Gaussians to posed space according to the input poses. We
introduce hash-encoded shape and appearance to speed up training and propose
time-dependent ambient occlusion to achieve high-quality reconstructions in
scenes containing complex motions and dynamic shadows. On both novel view
synthesis and novel pose synthesis tasks, our method outperforms existing
methods in terms of training time, rendering speed, and reconstruction quality.
Our method extends easily to multi-human scenes, achieving comparable
novel view synthesis results on a scene with ten people in only 25 seconds of
training.
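The core deformation step described above, blending per-bone rigid transforms with skinning weights to carry canonical 3D Gaussians into posed space, can be sketched with standard linear blend skinning. This is a minimal illustration of the idea, not the paper's implementation; the function name and array layout are assumptions.

```python
import numpy as np

def deform_gaussian_means(canonical_means, skin_weights, bone_transforms):
    """Linear blend skinning of 3D Gaussian centers.

    canonical_means: (N, 3) Gaussian centers in canonical space
    skin_weights:    (N, B) per-Gaussian bone weights, rows summing to 1
    bone_transforms: (B, 4, 4) rigid transform of each bone for this pose
    """
    n = canonical_means.shape[0]
    homog = np.concatenate([canonical_means, np.ones((n, 1))], axis=1)
    # Blend the bone transforms per Gaussian, then apply the blended matrix.
    blended = np.einsum("nb,bij->nij", skin_weights, bone_transforms)
    posed = np.einsum("nij,nj->ni", blended, homog)
    return posed[:, :3]
```

With identity bone transforms the Gaussians stay put; translating one bone moves each Gaussian in proportion to its weight on that bone.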
High-Fidelity 3D Head Avatars Reconstruction through Spatially-Varying Expression Conditioned Neural Radiance Field
One crucial aspect of 3D head avatar reconstruction lies in the details of
facial expressions. Although recent NeRF-based photo-realistic 3D head avatar
methods achieve high-quality avatar rendering, they still encounter challenges
retaining intricate facial expression details because they overlook the
potential of specific expression variations at different spatial positions when
conditioning the radiance field. Motivated by this observation, we introduce a
novel Spatially-Varying Expression (SVE) conditioning. The SVE can be obtained
by a simple MLP-based generation network, encompassing both spatial positional
features and global expression information. Benefiting from rich and diverse
information of the SVE at different positions, the proposed SVE-conditioned
neural radiance field can deal with intricate facial expressions and achieve
realistic rendering and geometry details of high-fidelity 3D head avatars.
Additionally, to further elevate the geometric and rendering quality, we
introduce a new coarse-to-fine training strategy, including a geometry
initialization strategy at the coarse stage and an adaptive importance sampling
strategy at the fine stage. Extensive experiments indicate that our method
outperforms other state-of-the-art (SOTA) methods in rendering and geometry
quality on mobile phone-collected and public datasets.
Comment: 9 pages, 5 figures
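The key idea, an MLP that mixes per-position features with a global expression code so the conditioning signal varies across space, can be sketched as a toy numpy model. The shapes, layer sizes, and names below are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

def positional_encoding(xyz, n_freqs=4):
    """NeRF-style sin/cos encoding: (N, 3) -> (N, 3 * 2 * n_freqs)."""
    freqs = 2.0 ** np.arange(n_freqs)
    scaled = xyz[:, :, None] * freqs                      # (N, 3, F)
    enc = np.concatenate([np.sin(scaled), np.cos(scaled)], axis=-1)
    return enc.reshape(xyz.shape[0], -1)

def spatially_varying_expression(xyz, expr_code, w1, w2):
    """Tiny 2-layer MLP turning (position, global expression code) into a
    per-position expression feature, so conditioning differs by location."""
    pos = positional_encoding(xyz)
    expr = np.broadcast_to(expr_code, (xyz.shape[0], expr_code.shape[0]))
    h = np.concatenate([pos, expr], axis=1)
    h = np.maximum(h @ w1, 0.0)                           # ReLU hidden layer
    return h @ w2                                         # (N, out_dim) SVE feature
```

Because the positional encoding enters the MLP alongside the expression code, two different sample points receive different expression features even for the same global code.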
Simultaneous Machine Translation with Large Language Models
Large language models (LLM) have demonstrated their abilities to solve
various natural language processing tasks through dialogue-based interactions.
For instance, research indicates that LLMs can achieve competitive performance
in offline machine translation tasks for high-resource languages. However,
applying LLMs to simultaneous machine translation (SimulMT) poses many
challenges, including issues related to the training-inference mismatch arising
from different decoding patterns. In this paper, we explore the feasibility of
utilizing LLMs for SimulMT. Building upon conventional approaches, we introduce
a simple yet effective mixture policy that enables LLMs to engage in SimulMT
without requiring additional training. Furthermore, after Supervised
Fine-Tuning (SFT) on a mixture of full and prefix sentences, the model exhibits
significant performance improvements. Our experiments, conducted with
Llama2-7B-chat on nine language pairs from the MuST-C dataset, demonstrate that
LLMs can achieve translation quality and latency comparable to dedicated SimulMT
models.
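The paper's mixture policy is its own contribution and is not specified in the abstract; as a stand-in, the incremental read/write loop that any SimulMT policy must drive can be illustrated with the classic wait-k policy. Everything here is a generic sketch, not the paper's method.

```python
def wait_k_action(n_read, n_written, k, src_done):
    """Classic wait-k policy: read k source tokens before the first write,
    then alternate read/write; once the source ends, only write."""
    if src_done:
        return "WRITE"
    return "READ" if n_read < n_written + k else "WRITE"

def simulate(src_tokens, k, translate_prefix):
    """Drive a prefix-to-prefix translator through incremental decoding.
    `translate_prefix(src_prefix, tgt_prefix)` returns the next target
    token, or None when the translation is complete."""
    n_read, output = 0, []
    while True:
        if wait_k_action(n_read, len(output), k, n_read == len(src_tokens)) == "READ":
            n_read += 1
        else:
            token = translate_prefix(src_tokens[:n_read], output)
            if token is None:
                return output
            output.append(token)
```

The training-inference mismatch the abstract mentions arises exactly here: an offline model is trained on full sentences but `translate_prefix` only ever sees a source prefix.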
Generate, Filter, and Fuse: Query Expansion via Multi-Step Keyword Generation for Zero-Shot Neural Rankers
Query expansion has proved effective in improving the recall and precision of
first-stage retrievers, yet its influence on state-of-the-art cross-encoder
rankers remains under-explored. We first show
that directly applying the expansion techniques in the current literature to
state-of-the-art neural rankers can result in deteriorated zero-shot
performance. To this end, we propose GFF, a pipeline that includes a large
language model and a neural ranker, to Generate, Filter, and Fuse query
expansions more effectively in order to improve the zero-shot ranking metrics
such as nDCG@10. Specifically, GFF first calls an instruction-following
language model to generate query-related keywords through a reasoning chain.
Leveraging self-consistency and reciprocal rank weighting, GFF further filters
and combines the ranking results of each expanded query dynamically. By
utilizing this pipeline, we show that GFF can improve the zero-shot nDCG@10 on
BEIR and TREC DL 2019/2020. We also analyze different modelling choices in the
GFF pipeline and shed light on the future directions in query expansion for
zero-shot neural rankers.
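The fusion step combines the ranking produced for each expanded query by weighting documents with reciprocal ranks. The standard reciprocal rank fusion formula, shown below with the usual k = 60 smoothing constant, is one plausible reading of that step; the paper's exact weighting may differ.

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document ids: each document's score is
    the sum of 1 / (k + rank) over every list in which it appears."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

A document ranked highly by several expanded queries accumulates a larger score than one that tops only a single list, which is what makes the fusion robust to a bad individual expansion.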
Post-processing CHARIS integral field spectrograph data with PyKLIP
We present the pyKLIP-CHARIS post-processing pipeline, a Python library that
reduces high contrast imaging data for the CHARIS integral field spectrograph
used with the SCExAO project on the Subaru Telescope. The pipeline is a part of
the pyKLIP package, a Python library dedicated to the reduction of direct
imaging data of exoplanets, brown dwarfs, and discs. For PSF subtraction, the
pyKLIP-CHARIS post-processing pipeline relies on the core algorithms
implemented in pyKLIP but uses image registration and calibrations that are
unique to CHARIS. We describe the pipeline procedures, calibration results, and
capabilities in processing imaging data acquired via the angular differential
imaging and spectral differential imaging observing techniques. We showcase its
performance on extracting spectra of injected synthetic point sources as well
as compare the extracted spectra from real data sets on HD 33632 and HR 8799 to
results in the literature. The pipeline is a Python-based complement to the
SCExAO-project-supported, widely used (and currently IDL-based) CHARIS data
post-processing pipeline (CHARIS DPP) and provides an additional approach to
reducing CHARIS data and extracting calibrated planet spectra.
Comment: 17 pages, 13 figures
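The KLIP algorithm at the heart of the pipeline models the stellar PSF as a projection onto a Karhunen-Loeve basis built from a reference library. The sketch below shows that core math on flattened images; it is a simplified illustration, not pyKLIP's API or the CHARIS-specific calibration steps.

```python
import numpy as np

def klip_subtract(target, references, n_modes):
    """Simplified KLIP PSF subtraction on flattened images.

    Build a Karhunen-Loeve basis from the mean-subtracted reference
    library, project the target onto the first `n_modes` modes to model
    the stellar PSF, and subtract that model to reveal faint companions.
    """
    refs = references - references.mean(axis=1, keepdims=True)
    tgt = target - target.mean()
    cov = refs @ refs.T                       # small (n_ref, n_ref) covariance
    evals, evecs = np.linalg.eigh(cov)
    top = np.argsort(evals)[::-1][:n_modes]   # eigh returns ascending order
    basis = evecs[:, top].T @ refs            # (n_modes, n_pix) KL modes
    basis /= np.linalg.norm(basis, axis=1, keepdims=True)
    psf_model = (basis @ tgt) @ basis         # projection onto the KL modes
    return tgt - psf_model
```

If the target PSF lies in the span of the reference library, enough modes remove it almost entirely; truncating to few modes trades PSF removal against self-subtraction of a real companion.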
INarIG: Iterative Non-autoregressive Instruct Generation Model For Word-Level Auto Completion
Computer-aided translation (CAT) aims to enhance human translation efficiency
and is still important in scenarios where machine translation cannot meet
quality requirements. One fundamental task within this field is Word-Level Auto
Completion (WLAC). WLAC predicts a target word given a source sentence,
translation context, and a human typed character sequence. Previous works
either employ word classification models to exploit contextual information from
both sides of the target word or directly disregard the dependencies from the
right-side context. Furthermore, the key information, i.e. human typed
sequences, is only used as prefix constraints in the decoding module. In this
paper, we propose the INarIG (Iterative Non-autoregressive Instruct Generation)
model, which constructs the human typed sequence into an Instruction Unit and
employs iterative decoding with subwords to fully utilize input information
given in the task. Our model is more competent in dealing with low-frequency
words (the core scenario of this task), and achieves state-of-the-art results on
the WMT22 benchmark datasets, with a maximum increase of over 10% in
prediction accuracy.
Comment: EMNLP202
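The Instruction Unit construction is specific to INarIG, but the iterative non-autoregressive decoding it employs follows a well-known mask-predict pattern: predict every position in parallel, then re-mask the least confident positions and refine. The loop below is a generic sketch of that pattern under a toy model, not the paper's decoder.

```python
MASK = "<mask>"

def iterative_nar_decode(predict, length, n_iters):
    """Mask-predict style decoding: start fully masked, predict all
    positions in parallel, then re-mask the least confident fraction and
    refine over `n_iters` iterations."""
    tokens = [MASK] * length
    for it in range(n_iters):
        preds, confs = predict(tokens)        # parallel prediction + confidences
        tokens = list(preds)
        n_mask = int(length * (1 - (it + 1) / n_iters))
        if n_mask == 0:
            break
        # Re-mask the lowest-confidence positions for the next pass.
        for i in sorted(range(length), key=lambda i: confs[i])[:n_mask]:
            tokens[i] = MASK
    return tokens
```

Because every position is predicted in parallel, each pass costs one forward call regardless of sequence length, which is what makes the approach attractive for latency-sensitive settings like CAT.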
Text Style Transfer Back-Translation
Back Translation (BT) is widely used in the field of machine translation, as
it has proved effective for enhancing translation quality. However, BT
mainly improves the translation of inputs that share a similar style (to be
more specific, translation-like inputs), since the source side of BT data is
machine-translated. For natural inputs, BT brings only slight improvements and
sometimes even adverse effects. To address this issue, we propose Text Style
Transfer Back Translation (TST BT), which uses a style transfer model to modify
the source side of BT data. By making the style of source-side text more
natural, we aim to improve the translation of natural inputs. Our experiments
on various language pairs, including both high-resource and low-resource ones,
demonstrate that TST BT significantly improves translation performance against
popular BT benchmarks. In addition, TST BT proves effective in domain
adaptation, so the strategy can be regarded as a general data augmentation
method. Our training code and text style transfer model are open-sourced.
Comment: ACL 2023, 14 pages, 4 figures, 19 tables
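The data-construction idea reads as a three-stage pipeline: back-translate target-language monolingual text, restyle the synthetic source side to sound natural, and pair it with the original target. A minimal sketch of that pipeline follows; the function names and the stand-in models in the usage note are hypothetical.

```python
def build_tst_bt_corpus(mono_target, back_translate, style_transfer):
    """Assemble TST BT training pairs:
      1. back-translate target-language monolingual text (standard BT),
      2. rewrite the machine-translated source side into a natural style,
      3. pair the restyled source with the original target sentence.
    """
    pairs = []
    for tgt in mono_target:
        synthetic_src = back_translate(tgt)       # translation-like style
        natural_src = style_transfer(synthetic_src)
        pairs.append((natural_src, tgt))
    return pairs
```

With stand-in callables, e.g. a backward model that prefixes `mt:` and a style-transfer model that rewrites it to `natural:`, the pipeline yields (natural-style source, original target) pairs ready for forward-model training.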