
    Astragalus Injection for Hypertensive Renal Damage: A Systematic Review

    Objective. To evaluate the effectiveness of astragalus injection (a traditional Chinese patent medicine) for patients with renal damage induced by hypertension, according to the available evidence. Methods. We searched MEDLINE, China National Knowledge Infrastructure (CNKI), Chinese VIP Information, China Biology Medicine (CBM), and Chinese Medical Citation Index (CMCI) from the inception of each database to August 2011. No language restriction was applied. We included randomized controlled trials testing astragalus injection against placebo, or astragalus injection plus antihypertensive drugs against antihypertensive drugs alone. Study selection, data extraction, quality assessment, and data analyses were conducted according to the Cochrane review standards. Results. Five randomized trials (involving 429 patients) were included, and their methodological quality was evaluated as generally low. The pooled results showed that astragalus injection was more effective than placebo in lowering β2-microglobulin (β2-MG) and microalbuminuria (mAlb), and it was also superior to prostaglandin in lowering blood urea nitrogen (BUN) and creatinine clearance rate (Ccr). No adverse effects of astragalus injection were reported in the trials. Conclusions. Astragalus injection showed protective effects in patients with hypertensive renal damage, although the available studies are not adequate to draw a definite conclusion owing to the low quality of the included trials. More rigorous, high-quality clinical trials are warranted to provide a higher level of evidence.
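
    As an illustration of the pooling step behind such results, the following is a minimal fixed-effect inverse-variance meta-analysis sketch in Python; the per-trial mean differences and standard errors are hypothetical placeholders, not data from the review.

        # Fixed-effect inverse-variance pooling of per-trial mean differences.
        # The trial values are hypothetical placeholders, not data from the review.
        import math

        trials = [(-0.42, 0.15), (-0.31, 0.20), (-0.55, 0.18)]  # (mean diff, SE), illustrative

        weights = [1.0 / se ** 2 for _, se in trials]           # w_i = 1 / SE_i^2
        pooled = sum(w * md for (md, _), w in zip(trials, weights)) / sum(weights)
        pooled_se = math.sqrt(1.0 / sum(weights))

        # 95% confidence interval for the pooled mean difference
        lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
        print(f"pooled MD = {pooled:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")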

    Thermoelectric Transport in Holographic Quantum Matter under Shear Strain

    We study thermoelectric transport under shear strain in quantum matter in two spatial dimensions using the holographic duality. General analytic formulae for the DC thermoelectric conductivities subject to finite shear strain are obtained in terms of the black hole horizon data. Off-diagonal terms in the conductivity matrix appear even at zero magnetic field, resembling an emergent electronic nematicity, which nevertheless cannot be identified with the presence of an anomalous Hall effect. For an explicit model study, we numerically construct a family of strained black holes and obtain the corresponding nonlinear stress-strain curves. We then compute all electric, thermoelectric, and thermal conductivities and discuss the effects of strain. While the shear elastic deformation does not affect the temperature dependence of the thermoelectric and thermal conductivities quantitatively, it can strongly change the behavior of the electric conductivity. For both shear-hardening and shear-softening cases, we find a clear metal-insulator transition driven by the shear deformation. Moreover, the violation of the previously conjectured thermal conductivity bound is observed for large shear deformation. Comment: 35 pages, 13 figures
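
    For context, the DC conductivities referred to above are conventionally defined through the linear-response relation below (a standard convention in holographic transport, not the paper's specific horizon formulae); under shear strain these matrices acquire off-diagonal components even at zero magnetic field.

        % Standard linear-response definition of the electric (\sigma),
        % thermoelectric (\alpha, \bar\alpha), and thermal (\bar\kappa) conductivities
        \begin{pmatrix} \vec{J} \\ \vec{Q} \end{pmatrix}
        =
        \begin{pmatrix} \sigma & \alpha T \\ \bar{\alpha} T & \bar{\kappa} T \end{pmatrix}
        \begin{pmatrix} \vec{E} \\ -\vec{\nabla} T / T \end{pmatrix}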

    A benchmark study on error-correction by read-pairing and tag-clustering in amplicon-based deep sequencing

    Figure S1. Sequence properties of protein G. (a) The sequence of the 88 bp template is shown in DRuMS color schemes, with the overlapping regions of the target sequence and the forward and reverse primers indicated. (b) The A-T/C-G density plot along the target sequence, generated with the MATLAB nucleotide sequence analysis toolbox. (EPS 498 kb)

    Breaking rotations without violating the KSS viscosity bound

    We revisit the computation of the shear viscosity to entropy ratio in a holographic p-wave superfluid model, focusing on the role of rotational symmetry breaking. We study the interplay between explicit and spontaneous symmetry breaking and derive a simple horizon formula for η/s, which is valid also in the presence of explicit breaking of rotations and is in perfect agreement with the numerical data. We observe that a source which explicitly breaks rotational invariance suppresses the value of η/s in the broken phase, competing against the effects of spontaneous symmetry breaking. However, η/s always reaches a constant value in the limit of zero temperature, which is never smaller than the Kovtun-Son-Starinets (KSS) bound, 1/4π. This behavior appears to be in contrast with previous holographic anisotropic models, which found a power-law vanishing of η/s at small temperature. This difference is shown to arise from the properties of the near-horizon geometry in the extremal limit. Thus, our construction shows that the breaking of rotations itself does not necessarily imply a violation of the KSS bound. Comment: 20 pages, 7 figures
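
    For reference, the Kovtun-Son-Starinets bound mentioned above takes the following standard form; in the units ħ = k_B = 1 used in the abstract it reads 1/4π.

        % KSS bound on the ratio of shear viscosity \eta to entropy density s
        \frac{\eta}{s} \;\geq\; \frac{\hbar}{4\pi k_B}
        \qquad \text{i.e.} \qquad
        \frac{\eta}{s} \;\geq\; \frac{1}{4\pi} \quad (\hbar = k_B = 1)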

    ERNIE-ViL 2.0: Multi-view Contrastive Learning for Image-Text Pre-training

    Recent Vision-Language Pre-trained (VLP) models based on dual encoders have attracted extensive attention from academia and industry due to their superior performance on various cross-modal tasks and their high computational efficiency. They attempt to learn cross-modal representations using contrastive learning on image-text pairs; however, the inter-modal correlations they build rely on only a single view of each modality. In practice, an image or a text contains various potential views, just as humans can capture a real-world scene via diverse descriptions or photos. In this paper, we propose ERNIE-ViL 2.0, a multi-view contrastive learning framework that builds intra-modal and inter-modal correlations between diverse views simultaneously, aiming to learn a more robust cross-modal representation. Specifically, we construct multiple views within each modality to learn the intra-modal correlations that enhance the single-modal representations. Besides the inherent visual/textual views, we construct sequences of object tags as a special textual view to narrow the cross-modal semantic gap on noisy image-text pairs. Pre-trained on 29M publicly available image-text pairs, ERNIE-ViL 2.0 achieves competitive results on English cross-modal retrieval. Additionally, to generalize our method to Chinese cross-modal tasks, we train ERNIE-ViL 2.0 by scaling up the pre-training data to 1.5B Chinese image-text pairs, resulting in significant improvements over previous SOTA results on Chinese cross-modal retrieval. We release our pre-trained models at https://github.com/PaddlePaddle/ERNIE. Comment: 14 pages, 6 figures
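
    A minimal sketch of the kind of multi-view contrastive objective described above, written in PyTorch; the encoder outputs, the uniform pair weighting, and the temperature value are illustrative assumptions, not ERNIE-ViL 2.0's actual implementation.

        # Sketch: symmetric InfoNCE averaged over inter-modal and intra-modal view pairs.
        # `image_views` / `text_views` are lists of [batch, dim] embeddings produced
        # by hypothetical encoders; the uniform weighting is an assumption.
        import torch
        import torch.nn.functional as F

        def info_nce(a, b, temperature=0.07):
            """Symmetric contrastive loss: matched rows of a and b are positives."""
            a = F.normalize(a, dim=-1)
            b = F.normalize(b, dim=-1)
            logits = a @ b.t() / temperature                   # [batch, batch] similarities
            targets = torch.arange(a.size(0), device=a.device)
            return 0.5 * (F.cross_entropy(logits, targets)
                          + F.cross_entropy(logits.t(), targets))

        def multi_view_loss(image_views, text_views):
            # Inter-modal pairs: every image view against every text view.
            losses = [info_nce(iv, tv) for iv in image_views for tv in text_views]
            # Intra-modal pairs: distinct views of the same modality.
            for views in (image_views, text_views):
                losses += [info_nce(views[i], views[j])
                           for i in range(len(views)) for j in range(i + 1, len(views))]
            return torch.stack(losses).mean()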

    ERNIE-ViL: Knowledge Enhanced Vision-Language Representations Through Scene Graph

    We propose a knowledge-enhanced approach, ERNIE-ViL, which incorporates structured knowledge obtained from scene graphs to learn joint vision-language representations. ERNIE-ViL tries to build the detailed semantic connections (objects, attributes of objects, and relationships between objects) across vision and language, which are essential to vision-language cross-modal tasks. Utilizing scene graphs of visual scenes, ERNIE-ViL constructs Scene Graph Prediction tasks, i.e., Object Prediction, Attribute Prediction, and Relationship Prediction, in the pre-training phase. Specifically, these prediction tasks are implemented by predicting nodes of different types in the scene graph parsed from the sentence. Thus, ERNIE-ViL can learn joint representations characterizing the alignment of detailed semantics across vision and language. After pre-training on large-scale image-text aligned datasets, we validate the effectiveness of ERNIE-ViL on 5 cross-modal downstream tasks. ERNIE-ViL achieves state-of-the-art performance on all these tasks and ranks first on the VCR leaderboard with an absolute improvement of 3.7%. Comment: Paper has been published at the AAAI 2021 conference
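
    A minimal sketch of how scene-graph nodes parsed from a caption can be turned into masked-prediction targets, in the spirit of the Object/Attribute/Relationship Prediction tasks; the triple format, masking rate, and helper names are illustrative assumptions, not ERNIE-ViL's exact pre-processing.

        # Sketch: masking caption tokens that appear as scene-graph nodes so the
        # model must recover them. Format and masking rate are assumptions.
        import random

        scene_graph = {                          # parsed from "a black cat on the sofa"
            "objects": ["cat", "sofa"],
            "attributes": [("cat", "black")],
            "relations": [("cat", "on", "sofa")],
        }

        def mask_scene_graph_tokens(tokens, graph, mask_prob=0.3, mask_token="[MASK]"):
            node_words = set(graph["objects"])
            node_words |= {attr for _, attr in graph["attributes"]}
            node_words |= {rel for _, rel, _ in graph["relations"]}
            masked, labels = [], []
            for tok in tokens:
                if tok in node_words and random.random() < mask_prob:
                    masked.append(mask_token)    # hide the scene-graph node token
                    labels.append(tok)           # prediction target
                else:
                    masked.append(tok)
                    labels.append(None)          # not predicted
            return masked, labels

        print(mask_scene_graph_tokens("a black cat on the sofa".split(), scene_graph))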

    Robust Pose Transfer with Dynamic Details using Neural Video Rendering

    Pose transfer for human videos aims to generate a high-fidelity video of a target person imitating the actions of a source person. A few studies have made great progress, either through image translation with deep latent features or through neural rendering with explicit 3D features. However, both rely on large amounts of training data to generate realistic results, and their performance degrades on more accessible internet videos due to insufficient training frames. In this paper, we demonstrate that dynamic details can be preserved even when training from short monocular videos. Overall, we propose a neural video rendering framework coupled with an image-translation-based dynamic details generation network (D2G-Net), which fully utilizes both the stability of explicit 3D features and the capacity of learned components. To be specific, a novel texture representation is presented to encode both the static and the pose-varying appearance characteristics, which is then mapped to the image space and rendered as a detail-rich frame in the neural rendering stage. Moreover, we introduce a concise temporal loss in the training stage to suppress the detail flickering that is made more visible by the high-quality dynamic details our method generates. Through extensive comparisons, we demonstrate that our neural human video renderer is capable of achieving both clearer dynamic details and more robust performance, even on accessible short videos with only 2k - 4k frames. Comment: Video link: https://www.bilibili.com/video/BV1y64y1C7ge
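
    One common shape for such a temporal loss, sketched below in PyTorch, penalizes frame-to-frame changes in the generated video that deviate from those in the ground truth; this is a generic formulation under that assumption, not the paper's exact loss.

        # Sketch: temporal-consistency loss that discourages flicker by matching
        # frame-to-frame differences of the prediction to those of the ground truth.
        # A generic formulation, not the paper's exact loss.
        import torch

        def temporal_loss(generated, target):
            """generated, target: [time, channels, height, width] video tensors."""
            gen_diff = generated[1:] - generated[:-1]   # consecutive-frame changes
            tgt_diff = target[1:] - target[:-1]
            return torch.mean(torch.abs(gen_diff - tgt_diff))

        # Usage: combined with the per-frame reconstruction objective, e.g.
        # loss = recon_loss + temporal_weight * temporal_loss(pred_video, gt_video)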