LncRNA LINC01857 drives pancreatic adenocarcinoma progression via modulating miR-19a-3p/SMOC2
Objectives: Emerging evidence has demonstrated that LINC01857 plays a pivotal role in many cancers. However, its function in Pancreatic Ductal Adenocarcinoma (PDAC) remains unclear. This study was designed to investigate the regulatory role of LINC01857 in PDAC.
Methods: Bioinformatic tools and databases were used to seek potential miRNAs and mRNAs. Gene expression was evaluated by Reverse Transcription quantitative real-time Polymerase Chain Reaction (RT-qPCR), and western blotting was used to detect protein levels. A subcellular fractionation assay was performed to ascertain the location of LINC01857 in PANC-1 and BxPC-3 human pancreatic cancer cells. CCK-8, EdU, wound healing and Transwell assays were performed to examine the influence of LINC01857 and SPARC-related Modular Calcium-binding protein 2 (SMOC2) on cell viability, proliferation, migration, and invasion, respectively. The interaction between LINC01857 and its downstream genes was explored by RNA immunoprecipitation and luciferase reporter assays.
Results: LINC01857 levels were significantly elevated in PDAC. Knockdown of LINC01857 significantly restrained the proliferation, migration, invasion, and Epithelial-Mesenchymal Transition (EMT) process of PDAC cells. MiR-19a-3p was a downstream target of LINC01857, and miR-19a-3p levels were significantly decreased in PDAC cells. In addition, SMOC2 expression was negatively correlated with that of miR-19a-3p, and SMOC2 was a downstream target of miR-19a-3p. Furthermore, SMOC2 upregulation partially abolished the inhibitory effect of LINC01857 downregulation on cell proliferation, migration, invasion, and the EMT process.
Conclusion: LINC01857 promotes malignant phenotypes of PDAC cells via upregulation of SMOC2 by interacting with miR-19a-3p.
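Gene expression in studies of this kind is typically reported as a fold change derived from RT-qPCR. As a minimal, hypothetical illustration (the quantification method and all Ct values below are assumptions, not reported by the authors), the widely used 2^-ΔΔCt calculation could be sketched in Python as:

# Hypothetical sketch of the 2^-ΔΔCt relative-expression calculation often
# applied to RT-qPCR data; the Ct values below are invented, not from this study.
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Fold change of the target gene in a sample versus a control."""
    delta_ct_sample = ct_target_sample - ct_ref_sample      # normalise to reference gene
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_sample - delta_ct_control     # compare sample to control
    return 2 ** (-delta_delta_ct)

# Example with made-up Ct values for LINC01857 in tumour vs. normal tissue,
# normalised to a reference gene such as GAPDH:
print(relative_expression(22.1, 18.0, 24.6, 18.2))  # > 1 suggests upregulation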
3D GAN Inversion with Facial Symmetry Prior
Recently, a surge of high-quality 3D-aware GANs has been proposed, which
leverage the generative power of neural rendering. It is natural to combine
3D GANs with GAN inversion methods to project a real image into the generator's
latent space, allowing free-view consistent synthesis and editing, referred to as
3D GAN inversion. Even with the facial prior preserved in pre-trained 3D
GANs, reconstructing a 3D portrait from only one monocular image is still an
ill-posed problem. A straightforward application of 2D GAN inversion methods
focuses on texture similarity only while ignoring the correctness of the 3D
geometry, which may cause geometry collapse, especially when
reconstructing a side face under an extreme pose. Besides, the synthesized
results in novel views are prone to be blurry. In this work, we propose a novel
method to improve 3D GAN inversion by introducing a facial symmetry prior. We
design a pipeline and constraints to make full use of the pseudo auxiliary view
obtained via image flipping, which helps obtain a robust and reasonable
geometry shape during the inversion process. To enhance texture fidelity in
unobserved viewpoints, pseudo labels from depth-guided 3D warping can provide
extra supervision. We design constraints aimed at filtering out conflict areas
for optimization in asymmetric situations. Comprehensive quantitative and
qualitative evaluations on image reconstruction and editing demonstrate the
superiority of our method.
Comment: Project Page is at https://feiiyin.github.io/SPI
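To make the symmetry prior more concrete, here is a minimal, hypothetical sketch of how a flipped pseudo view and a conflict mask could enter the inversion objective (PyTorch-style; the generator interface, the mask, and the loss weight are assumptions for illustration, not the authors' implementation):

import torch

def inversion_losses(generator, latent, cam, cam_mirror, image, conflict_mask):
    """Toy objective combining the observed view with a flipped pseudo view.

    generator(latent, cam) is assumed to render an image for a camera pose;
    conflict_mask (1 = symmetric region, 0 = asymmetric/occluded region)
    downweights areas where the symmetry assumption breaks. Illustrative only.
    """
    recon = generator(latent, cam)                    # rendered observed view
    loss_recon = torch.mean((recon - image) ** 2)

    pseudo_view = torch.flip(image, dims=[-1])        # mirrored pseudo auxiliary view
    recon_mirror = generator(latent, cam_mirror)      # render from the mirrored pose
    loss_sym = torch.mean(conflict_mask * (recon_mirror - pseudo_view) ** 2)

    return loss_recon + 0.5 * loss_sym                # weight is an arbitrary choice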
TaleCrafter: Interactive Story Visualization with Multiple Characters
Accurate story visualization requires several essential elements, such as
identity consistency across frames, the alignment between plain text and visual
content, and a reasonable layout of objects in images. Most previous works
endeavor to meet these requirements by fitting a text-to-image (T2I) model on a
set of videos in the same style and with the same characters, e.g., the
FlintstonesSV dataset. However, the learned T2I models typically struggle to
adapt to new characters, scenes, and styles, and often lack the flexibility to
revise the layout of the synthesized images. This paper proposes a system for
generic interactive story visualization, capable of handling multiple novel
characters and supporting the editing of layout and local structure. It is
developed by leveraging the prior knowledge of large language and T2I models,
trained on massive corpora. The system comprises four interconnected
components: story-to-prompt generation (S2P), text-to-layout generation (T2L),
controllable text-to-image generation (C-T2I), and image-to-video animation
(I2V). First, the S2P module converts concise story information into detailed
prompts required for subsequent stages. Next, T2L generates diverse and
reasonable layouts based on the prompts, offering users the ability to adjust
and refine the layout to their preference. The core component, C-T2I, enables
the creation of images guided by layouts, sketches, and actor-specific
identifiers to maintain consistency and detail across visualizations. Finally,
I2V enriches the visualization process by animating the generated images.
Extensive experiments and a user study are conducted to validate the
effectiveness and flexibility of interactive editing of the proposed system.
Comment: Github repository: https://github.com/VideoCrafter/TaleCrafte
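As an illustration of how the four modules could chain together, a short Python sketch follows; the callables and their signatures (s2p, t2l, c_t2i, i2v, user_edit) are hypothetical stand-ins for the modules described above, not the project's actual API:

def visualize_story(story_text, s2p, t2l, c_t2i, i2v, user_edit=None):
    """Illustrative pipeline: story -> prompts -> layouts -> images -> clips.

    All module interfaces are assumptions made for this sketch.
    """
    prompts = s2p(story_text)                 # S2P: concise story -> detailed prompts
    shots = []
    for prompt in prompts:
        layout = t2l(prompt)                  # T2L: prompt -> object layout
        if user_edit is not None:
            layout = user_edit(layout)        # optional interactive layout revision
        image = c_t2i(prompt, layout)         # C-T2I: layout/sketch/identity-guided image
        shots.append(i2v(image))              # I2V: animate the generated image
    return shots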
Animate-A-Story: Storytelling with Retrieval-Augmented Video Generation
Generating videos for visual storytelling can be a tedious and complex
process that typically requires either live-action filming or graphics
animation rendering. To bypass these challenges, our key idea is to utilize the
abundance of existing video clips and synthesize a coherent storytelling video
by customizing their appearances. We achieve this by developing a framework
comprising two functional modules: (i) Motion Structure Retrieval, which
provides video candidates with desired scene or motion context described by
query texts, and (ii) Structure-Guided Text-to-Video Synthesis, which generates
plot-aligned videos under the guidance of motion structure and text prompts.
For the first module, we leverage an off-the-shelf video retrieval system and
extract video depths as motion structure. For the second module, we propose a
controllable video generation model that offers flexible controls over
structure and characters. The videos are synthesized by following the
structural guidance and appearance instructions. To ensure visual consistency
across clips, we propose an effective concept personalization approach, which
allows the specification of the desired character identities through text
prompts. Extensive experiments demonstrate that our approach exhibits
significant advantages over various existing baselines.
Comment: Github: https://github.com/VideoCrafter/Animate-A-Story Project page: https://videocrafter.github.io/Animate-A-Stor
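A similarly hedged sketch of the two-module flow described above; retrieve_clips, estimate_depth, and generate_video are hypothetical interfaces invented for illustration, not the released code's API:

def storytelling_video(plot_prompts, retrieve_clips, estimate_depth,
                       generate_video, character_prompt):
    """Illustrative flow: retrieve clips, use their depth as motion structure,
    then synthesise appearance-customised, plot-aligned clips.
    All callables are placeholders."""
    clips = []
    for prompt in plot_prompts:
        candidates = retrieve_clips(prompt)        # Motion Structure Retrieval
        structure = estimate_depth(candidates[0])  # per-frame depth as motion structure
        clip = generate_video(prompt, structure,   # Structure-Guided Text-to-Video Synthesis
                              identity=character_prompt)
        clips.append(clip)
    return clips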
Demonstration of Adiabatic Variational Quantum Computing with a Superconducting Quantum Coprocessor
Adiabatic quantum computing enables the preparation of many-body ground
states. This is key for applications in chemistry, materials science, and
beyond. Realisation poses major experimental challenges: Direct analog
implementation requires complex Hamiltonian engineering, while the digitised
version needs deep quantum gate circuits. To bypass these obstacles, we suggest
an adiabatic variational hybrid algorithm, which employs short quantum circuits
and provides a systematic quantum adiabatic optimisation of the circuit
parameters. The quantum adiabatic theorem promises that not only the ground state
but also the excited eigenstates can be found. We report the first
experimental demonstration that many-body eigenstates can be efficiently
prepared by an adiabatic variational algorithm assisted by a multi-qubit
superconducting coprocessor. We track the real-time evolution of the ground and
excited states of transverse-field Ising spins with a fidelity that can reach
about 99%.
Comment: 12 pages, 4 figures
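To make the adiabatic interpolation concrete, a small numerical sketch follows: it tracks the instantaneous ground-state energy of a transverse-field Ising chain under the interpolated Hamiltonian H(s) = (1 - s) H_driver + s H_Ising by exact diagonalisation of three spins. The Hamiltonian form and the linear schedule are textbook assumptions chosen for illustration; this does not reproduce the paper's hybrid variational circuits or hardware experiment.

import numpy as np

# Pauli matrices and helpers for a tiny exact-diagonalisation illustration.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def op_on(n, site_ops):
    """Place single-site operators on given sites of an n-qubit chain."""
    return kron_all([site_ops.get(i, I2) for i in range(n)])

n = 3
H_driver = -sum(op_on(n, {i: X}) for i in range(n))                 # -sum_i X_i
H_ising = -sum(op_on(n, {i: Z, i + 1: Z}) for i in range(n - 1))    # -sum_i Z_i Z_{i+1}

# Linear adiabatic schedule H(s) = (1 - s) H_driver + s H_ising,
# tracking the instantaneous ground-state energy (illustration only).
for s in np.linspace(0.0, 1.0, 5):
    H = (1 - s) * H_driver + s * H_ising
    energies = np.linalg.eigvalsh(H)
    print(f"s = {s:.2f}, ground-state energy = {energies[0]:.3f}")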
Consistency of P53 immunohistochemical expression between preoperative biopsy and final surgical specimens of endometrial cancer
Objective: The aim of this study was to explore the consistency of P53 immunohistochemical expression between preoperative biopsy and final pathology in endometrial cancer (EC), and to predict patient prognosis based on 4-tier P53 expression and classic clinicopathological parameters.
Methods: The medical data of patients with stage I-III EC who underwent preoperative biopsy and initial surgical treatment at two medical centers were retrospectively collected. The consistency of P53 immunohistochemical expression between preoperative biopsy and final pathology was assessed using Cohen's kappa coefficient and a Sankey diagram, and a 4-tier P53 expression classification was defined (P53wt/P53wt, P53abn/P53wt, P53wt/P53abn, and P53abn/P53abn). Univariate and multivariate Cox regression analyses were used to determine the correlation between 4-tier P53 expression and patient prognosis. On this basis, nomogram models combining 4-tier P53 expression and classic clinicopathological parameters were established to predict prognosis, and patients were stratified by risk.
Results: A total of 1186 patients were ultimately included in this study according to the inclusion and exclusion criteria. Overall, the consistency of P53 expression between preoperative biopsy and final pathology was 83.8%, with a kappa coefficient of 0.624. ROC curves suggested that the AUC of 4-tier P53 expression for predicting prognosis was better than the AUC of P53 expression in preoperative biopsy or final pathology alone. Univariate and multivariate Cox regression analyses indicated that 4-tier P53 expression was an independent factor for recurrence and death. On this basis, nomogram models based on 4-tier P53 expression and classic clinicopathological factors were successfully established. ROC curves suggested that the AUC of the models (0.856 for recurrence and 0.838 for death) was superior to that of 4-tier P53 expression alone or the classic clinicopathological parameters alone, providing better risk stratification for patients.
Conclusion: P53 immunohistochemical expression showed relatively good consistency between preoperative biopsy and final pathology of EC. Given the discrepancies in P53 immunohistochemistry between preoperative biopsy and final pathology, patient prognosis can be better evaluated based on 4-tier P53 expression combined with classic clinicopathological parameters.
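For readers unfamiliar with the agreement statistic and the 4-tier grouping used above, a minimal Python sketch follows; the paired P53 calls are invented for illustration and are not the study's data:

from sklearn.metrics import cohen_kappa_score

# Hypothetical paired P53 calls (wt = wild-type pattern, abn = aberrant pattern);
# these labels are invented, not the cohort's actual results.
biopsy = ["wt", "wt", "abn", "wt", "abn", "wt", "abn", "wt"]
final  = ["wt", "abn", "abn", "wt", "wt", "wt", "abn", "wt"]

kappa = cohen_kappa_score(biopsy, final)
print(f"Cohen's kappa = {kappa:.3f}")

# 4-tier grouping as in the abstract: biopsy result / final-pathology result.
tiers = [f"P53{b}/P53{f}" for b, f in zip(biopsy, final)]
print(tiers)  # e.g. 'P53wt/P53wt', 'P53wt/P53abn', ...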
The NTSC VLBI System and its application in UT1 measurement
In order to measure Universal Time (UT1) in real time, the National Time
Service Center (NTSC) has built a VGOS-like (VLBI Global Observing System)
broadband VLBI network, which includes three 13-m radio telescopes located in
Jilin, Sanya and Kashi, and a data analysis center in Xi'an. Each station is
equipped with a highly stable hydrogen atomic clock and a self-developed VLBI
backend, and is co-located with two GPS receivers. This VGOS-like VLBI network
may play an important role in improving Chinese broadband VLBI technology
and making valuable contributions to domestic VLBI measurements of UT1. In this
paper, we introduce the specifications of this VLBI network, and present the
UT1 measurements at C-band conducted in 2018 using the Jilin-Kashi baseline of
this network. The comparisons between our UT1 estimates and those provided by
IERS suggest that the NTSC VLBI network is capable of determining UT1 with an
accuracy at the level of 58.8 microseconds.
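As a small illustration of how a microsecond-level agreement with IERS could be characterised, one common choice is the RMS of the UT1 differences; the sketch below uses placeholder numbers, not the 2018 Jilin-Kashi session results:

import numpy as np

# Hypothetical UT1 differences (our estimate minus IERS, in microseconds);
# placeholder values only, not the paper's measurements.
ut1_diff_us = np.array([45.0, -62.0, 58.0, -30.0, 71.0, -55.0])

rms_us = np.sqrt(np.mean(ut1_diff_us ** 2))
print(f"RMS difference with respect to IERS: {rms_us:.1f} microseconds")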