Proteomics in Pancreatic Cancer Research
Pancreatic cancer is a highly aggressive malignancy with a poor prognosis, and it profoundly affects patients' lives. Earlier diagnosis and better treatments are therefore urgently needed. In recent years, proteomic technologies have become well established, are advancing rapidly, and have been widely applied in clinical settings, including pancreatic cancer research. In this paper, we discuss the development of current proteomic technologies and their application to pancreatic cancer research, exploring their potential for revealing pathogenesis, enabling earlier diagnosis, and improving treatment.
Neural Wavelet-domain Diffusion for 3D Shape Generation
This paper presents a new approach for 3D shape generation, enabling direct
generative modeling on a continuous implicit representation in wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse
and detail coefficient volumes to implicitly represent 3D shapes via truncated
signed distance functions and multi-scale biorthogonal wavelets, and formulate
a pair of neural networks: a generator based on the diffusion model to produce
diverse shapes in the form of coarse coefficient volumes; and a detail
predictor to further produce compatible detail coefficient volumes for
enriching the generated shapes with fine structures and details. Both
quantitative and qualitative experimental results demonstrate the superiority of
our approach in generating diverse and high-quality shapes with complex
topology and structures, clean surfaces, and fine details, exceeding the 3D
generation capabilities of the state-of-the-art models.
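The abstract above represents a shape as a pair of coarse and detail wavelet coefficient volumes computed from a truncated signed distance function. As a rough, self-contained illustration of that split (using a single-level orthonormal Haar filter in NumPy as a stand-in for the paper's multi-scale biorthogonal wavelets — the function names and grid sizes here are invented for illustration), one might write:

```python
import numpy as np

def truncated_sdf(grid_size=16, radius=0.5, trunc=0.1):
    """Truncated signed distance function of a sphere on a regular grid."""
    c = np.linspace(-1.0, 1.0, grid_size)
    x, y, z = np.meshgrid(c, c, c, indexing="ij")
    return np.clip(np.sqrt(x**2 + y**2 + z**2) - radius, -trunc, trunc)

def haar_1d(a, axis):
    """One level of an orthonormal Haar transform along one axis."""
    a = np.moveaxis(a, axis, 0)
    lo = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # local averages (low-pass)
    hi = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # local differences (high-pass)
    return np.moveaxis(lo, 0, axis), np.moveaxis(hi, 0, axis)

def haar_split_3d(vol):
    """Split a volume into one coarse coefficient volume ('LLL') and
    seven detail coefficient volumes, by filtering each axis in turn."""
    bands = {"": vol}
    for axis in range(3):
        bands = {
            key + tag: band
            for key, b in bands.items()
            for tag, band in zip("LH", haar_1d(b, axis))
        }
    coarse = bands.pop("LLL")
    return coarse, bands
```

A 16³ TSDF yields an 8³ coarse volume plus seven 8³ detail volumes; in the paper's pipeline the diffusion model would generate the coarse part and a separate predictor would fill in the details.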
ISS++: Image as Stepping Stone for Text-Guided 3D Shape Generation
In this paper, we present a new text-guided 3D shape generation approach
(ISS++) that uses images as a stepping stone to bridge the gap between text and
shape modalities for generating 3D shapes without requiring paired text and 3D
data. The core of our approach is a two-stage feature-space alignment strategy
that leverages a pre-trained single-view reconstruction (SVR) model to map CLIP
features to shapes: first, map the CLIP image feature to the
detail-rich 3D shape space of the SVR model; then, map the CLIP text feature to
the 3D shape space by encouraging CLIP consistency between rendered
images and the input text. In addition, to extend beyond the generative capability
of the SVR model, we design a text-guided 3D shape stylization module that can
enhance the output shapes with novel structures and textures. Further, we
exploit pre-trained text-to-image diffusion models to enhance the generative
diversity, fidelity, and stylization capability. Our approach is generic,
flexible, and scalable, and it can be easily integrated with various SVR models
to expand the generative space and improve the generative fidelity. Extensive
experimental results demonstrate that our approach outperforms the
state-of-the-art methods in terms of generative quality and consistency with
the input text. Codes and models are released at
https://github.com/liuzhengzhe/ISS-Image-as-Stepping-Stone-for-Text-Guided-3D-Shape-Generation.Comment: Under review of TPAM
ISS: Image as Stepping Stone for Text-Guided 3D Shape Generation
Text-guided 3D shape generation remains challenging due to the absence of
large paired text-shape data, the substantial semantic gap between these two
modalities, and the structural complexity of 3D shapes. This paper presents a
new framework called Image as Stepping Stone (ISS) for the task by introducing
2D image as a stepping stone to connect the two modalities and to eliminate the
need for paired text-shape data. Our key contribution is a two-stage
feature-space-alignment approach that maps CLIP features to shapes by
harnessing a pre-trained single-view reconstruction (SVR) model with multi-view
supervisions: first map the CLIP image feature to the detail-rich shape space
in the SVR model, then map the CLIP text feature to the shape space and
optimize the mapping by encouraging CLIP consistency between the input text and
the rendered images. Further, we formulate a text-guided shape stylization
module to dress up the output shapes with novel textures. Unlike existing works
on 3D shape generation from text, our approach generalizes to creating
shapes in a broad range of categories without requiring paired text-shape
data. Experimental results demonstrate that our approach outperforms the
state-of-the-art methods and our baselines in terms of fidelity and consistency with
the input text. Further, our approach can stylize the generated shapes with both
realistic and fantasy structures and textures.
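The two-stage feature-space alignment in the ISS abstracts can be caricatured with random linear "encoders" standing in for CLIP and the SVR model. Everything below — dimensions, variable names, the linear render-and-encode operator — is an invented stand-in, not the paper's code; it only illustrates the shape of the two stages:

```python
import numpy as np

rng = np.random.default_rng(0)
D_CLIP, D_SHAPE, N = 8, 6, 100

# Frozen stand-ins: a ground-truth image->shape mapping used to synthesize
# training pairs, and a linear "render + CLIP-encode" operator for stage 2.
W_true = rng.normal(size=(D_CLIP, D_SHAPE))
render_encode = rng.normal(size=(D_CLIP, D_SHAPE))

# Stage 1: map CLIP *image* features into the SVR shape space (least squares).
img_feats = rng.normal(size=(N, D_CLIP))
shape_codes = img_feats @ W_true + 0.01 * rng.normal(size=(N, D_SHAPE))
M, *_ = np.linalg.lstsq(img_feats, shape_codes, rcond=None)  # (D_CLIP, D_SHAPE)

# Stage 2: map a CLIP *text* feature through M, then refine the shape code so
# the feature of the "rendered" shape matches the text feature (a squared-error
# proxy for the paper's CLIP-consistency objective).
text_feat = rng.normal(size=D_CLIP)
code = text_feat @ M
loss = lambda c: np.sum((render_encode @ c - text_feat) ** 2)
loss_init = loss(code)
for _ in range(200):
    grad = 2.0 * render_encode.T @ (render_encode @ code - text_feat)
    code -= 0.01 * grad
```

Stage 1 needs only image-shape pairs (which the SVR model supplies), and stage 2 needs only the text feature and a differentiable renderer — which is why no paired text-shape data is required.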
Improving Multi-turn Emotional Support Dialogue Generation with Lookahead Strategy Planning
Providing Emotional Support (ES) to soothe people in emotional distress is an
essential capability in social interactions. Most existing research on
building ES conversation systems considers only single-turn interactions with
users, which is over-simplified. In comparison, multi-turn ES conversation
systems can provide ES more effectively, but face several new technical
challenges, including: (1) how to adopt appropriate support strategies to
achieve the long-term dialogue goal of comforting the user; (2) how to
dynamically model the user's state. In this paper, we propose a novel system
MultiESC to address these issues. For strategy planning, drawing inspiration
from the A* search algorithm, we propose lookahead heuristics to estimate the
future user feedback after using particular strategies, which helps to select
strategies that can lead to the best long-term effects. For user state
modeling, MultiESC focuses on capturing users' subtle emotional expressions and
understanding their emotion causes. Extensive experiments show that MultiESC
significantly outperforms competitive baselines in both dialogue generation and
strategy planning. Our codes are available at
https://github.com/lwgkzl/MultiESC.
Comment: Accepted by the main conference of EMNLP 2022
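The A*-style lookahead idea — score the next strategy by the best user feedback reachable a few turns ahead, rather than by its immediate effect — can be sketched with a toy feedback table. The strategy names and scores below are invented for illustration; MultiESC learns this feedback predictor from data:

```python
from itertools import product

STRATEGIES = ("question", "reflection", "suggestion")  # illustrative names

def predicted_feedback(seq):
    """Hypothetical estimate of user feedback after a strategy sequence;
    this hand-made table stands in for MultiESC's learned predictor."""
    immediate = {"question": 0.2, "reflection": 0.5, "suggestion": 0.1}
    base = sum(immediate[s] for s in seq) / len(seq)
    # exploring the user's problem before reflecting pays off later on
    bonus = 0.4 if seq[:2] == ("question", "reflection") else 0.0
    return base + bonus

def greedy_choice():
    """Pick the strategy with the best immediate predicted feedback."""
    return max(STRATEGIES, key=lambda s: predicted_feedback((s,)))

def lookahead_choice(depth=2):
    """Pick the first strategy of the length-`depth` continuation with the
    highest estimated future feedback (the A*-style heuristic)."""
    best = max(product(STRATEGIES, repeat=depth), key=predicted_feedback)
    return best[0]
```

Greedy selection picks "reflection" (highest immediate score), while the depth-2 lookahead picks "question", because the continuation ("question", "reflection") has the highest estimated long-term feedback — the long-term dialogue goal overriding the per-turn optimum.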