Combined administration of nicorandil and atorvastatin in patients with acute myocardial infarction after coronary intervention, and its effect on postoperative cardiac systolic function
Purpose: To study the effect of combined nicorandil and atorvastatin calcium in patients with acute myocardial infarction after coronary intervention, and its effect on patients' postoperative cardiac systolic function.
Methods: A retrospective analysis was performed on 100 patients with acute myocardial infarction treated with coronary interventional therapy in The Third Affiliated Hospital of Qiqihaer Medical University from April 2019 to August 2020. The patients were randomised into control and study groups, with 50 patients in each group. The control group was treated with nicorandil, while the study group was treated with a combination of nicorandil and atorvastatin. Treatment response, cardiac structural indices, cardiac systolic function, blood lipid profiles, quality of life index (QLI) score, Barthel Index (BI), Fugl-Meyer assessment (FMA) motor function score, incidence of adverse reactions, and blood pressure changes on days 1, 2, 3 and 4 after surgery were compared between the two groups.
Results: Treatment effectiveness, cardiac systolic function, QLI score, BI and FMA motor function score in the study group were higher than the corresponding control values (p < 0.05), while cardiac structure indices, blood lipid levels and the incidence of adverse reactions were lower in the study group than in the control group (p < 0.05). No significant difference in blood pressure was found between the two groups on post-surgery days 1, 2, 3 and 4.
Conclusion: The combination of nicorandil and atorvastatin calcium tablets produced better outcomes in patients with acute myocardial infarction after coronary intervention therapy; furthermore, the combination therapy significantly improved patients' cardiac systolic function.
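A minimal sketch of the kind of between-group comparison reported above, using simulated data and an independent-samples t-test (one common choice for such endpoints; the study's actual statistical procedure is not stated in the abstract, and all values below are illustrative only):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated QLI scores for 50 control and 50 study patients (hypothetical data)
control = rng.normal(loc=62.0, scale=8.0, size=50)
study = rng.normal(loc=68.0, scale=8.0, size=50)

# Two-sided independent-samples t-test; p < 0.05 is the threshold used above
t_stat, p_value = stats.ttest_ind(study, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")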
BEV-DG: Cross-Modal Learning under Bird's-Eye View for Domain Generalization of 3D Semantic Segmentation
Cross-modal Unsupervised Domain Adaptation (UDA) aims to exploit the
complementarity of 2D-3D data to overcome the lack of annotation in a new
domain. However, UDA methods rely on access to the target domain during
training, meaning the trained model only works in a specific target domain. In
light of this, we propose cross-modal learning under bird's-eye view for Domain
Generalization (DG) of 3D semantic segmentation, called BEV-DG. DG is more
challenging because the model cannot access the target domain during training,
meaning it needs to rely on cross-modal learning to alleviate the domain gap.
Since 3D semantic segmentation requires the classification of each point,
existing cross-modal learning is directly conducted point-to-point, which is
sensitive to the misalignment in projections between pixels and points. To this
end, our approach aims to optimize domain-irrelevant representation modeling
with the aid of cross-modal learning under bird's-eye view. We propose
BEV-based Area-to-area Fusion (BAF) to conduct cross-modal learning under
bird's-eye view, which has a higher fault tolerance for point-level
misalignment. Furthermore, to model domain-irrelevant representations, we
propose BEV-driven Domain Contrastive Learning (BDCL) with the help of
cross-modal learning under bird's-eye view. We design three domain
generalization settings based on three 3D datasets, and BEV-DG significantly
outperforms state-of-the-art competitors by large margins in all settings.
Comment: Accepted by ICCV 2023
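The area-to-area idea can be illustrated with a short sketch: pooling per-point 3D features and their projected 2D features into shared bird's-eye-view cells before fusing makes the fusion tolerant to point/pixel misalignment, since individual mismatches average out within a cell. The function names, grid size, and scene extent below are hypothetical, not the paper's actual implementation:

import torch

def bev_area_fusion(points_xyz, feats_3d, feats_2d, grid=32, extent=50.0):
    # Map x,y coordinates in [-extent, extent] to flat BEV cell indices
    ij = ((points_xyz[:, :2] + extent) / (2 * extent) * grid).long().clamp(0, grid - 1)
    cell = ij[:, 0] * grid + ij[:, 1]

    def scatter_mean(feats):
        num_cells, dim = grid * grid, feats.shape[1]
        sums = torch.zeros(num_cells, dim).index_add_(0, cell, feats)
        cnts = torch.zeros(num_cells).index_add_(0, cell, torch.ones(len(feats)))
        return sums / cnts.clamp(min=1).unsqueeze(1)

    bev_3d = scatter_mean(feats_3d)             # area-pooled 3D features
    bev_2d = scatter_mean(feats_2d)             # area-pooled projected 2D features
    fused = torch.cat([bev_3d, bev_2d], dim=1)  # per-cell fused descriptor
    # Broadcast the fused cell descriptor back to every point in that cell
    return fused[cell]

pts = torch.randn(1024, 3) * 20
f3d, f2d = torch.randn(1024, 64), torch.randn(1024, 64)
print(bev_area_fusion(pts, f3d, f2d).shape)     # torch.Size([1024, 128])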
Consistent123: One Image to Highly Consistent 3D Asset Using Case-Aware Diffusion Priors
Reconstructing 3D objects from a single image guided by pretrained diffusion
models has demonstrated promising outcomes. However, because existing methods
use a case-agnostic rigid strategy, their generalization to arbitrary cases and
the 3D consistency of their reconstructions remain poor. In this work, we
propose Consistent123, a case-aware two-stage method for highly consistent 3D
asset reconstruction from one image with both 2D and 3D diffusion priors. In
the first stage, Consistent123 utilizes only 3D structural priors for
sufficient geometry exploitation, with a CLIP-based case-aware adaptive
detection mechanism embedded within this process. In the second stage, 2D
texture priors are introduced and progressively take on a dominant guiding
role, delicately sculpting the details of the 3D model. Consistent123 aligns
more closely with the evolving trends in guidance requirements, adaptively
providing adequate 3D geometric initialization and suitable 2D texture
refinement for different objects. Consistent123 obtains highly 3D-consistent
reconstructions and exhibits strong generalization across various objects.
Qualitative and quantitative experiments show that our method significantly
outperforms state-of-the-art image-to-3D methods. See
https://Consistent123.github.io for a more comprehensive exploration of our
generated 3D assets.
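A toy sketch of the two-stage guidance trade-off the abstract describes: stage one uses only the 3D structural prior, after which the 2D texture prior is ramped in and gradually dominates. The fixed stage boundary and weight values here are illustrative assumptions; in the paper the switch is governed by the CLIP-based case-aware adaptive detection mechanism rather than a preset fraction:

import math

def guidance_weights(step, total_steps, stage1_frac=0.3):
    boundary = int(total_steps * stage1_frac)
    if step < boundary:
        return 1.0, 0.0                      # stage 1: 3D structural prior only
    # Stage 2: cosine ramp of the 2D texture prior over the remaining steps
    t = (step - boundary) / max(1, total_steps - boundary)
    w2d = 0.5 * (1 - math.cos(math.pi * t))  # rises smoothly from 0 to 1
    return 1.0 - 0.7 * w2d, w2d              # keep some 3D guidance throughout

for s in (0, 300, 600, 999):
    w3, w2 = guidance_weights(s, 1000)
    print(f"step {s:4d}: w3d={w3:.2f}, w2d={w2:.2f}")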
Weakly Supervised Semantic Segmentation for Large-Scale Point Cloud
Existing methods for large-scale point cloud semantic segmentation require
expensive, tedious and error-prone manual point-wise annotations. Intuitively,
weakly supervised training is a direct solution to reduce the cost of labeling.
However, for weakly supervised large-scale point cloud semantic segmentation,
too few annotations inevitably lead to ineffective learning of the network. We
propose an effective weakly supervised method containing two components to
solve this problem. First, we construct a pretext task, i.e., point cloud
colorization, and use self-supervised learning to transfer the prior knowledge
learned from a large amount of unlabeled point clouds to the weakly supervised
network. In this way, the representation capability of the weakly supervised
network is improved by guidance from a heterogeneous task. Second, to generate
pseudo labels for unlabeled data, we propose a sparse label propagation
mechanism based on generated class prototypes, which are used to measure the
classification confidence of unlabeled points. Our method is evaluated on
large-scale point cloud datasets covering both indoor and outdoor scenarios.
The experimental results show a large gain over existing weakly supervised
methods and results comparable to fully supervised methods. Code (MindSpore):
https://github.com/dmcv-ecnu/MindSpore_ModelZoo/tree/main/WS3_MindSpore
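The label propagation component might look roughly like the following sketch: class prototypes are the mean features of the few labeled points, and an unlabeled point receives a pseudo label only when its similarity to the nearest prototype is high enough. The function name, the cosine-similarity choice, and the threshold tau are assumptions for illustration, not the paper's exact formulation:

import torch
import torch.nn.functional as F

def propagate_labels(feats, labels, mask, num_classes, tau=0.8):
    # feats: (N, D) point features; labels: (N,), valid where mask is True
    feats = F.normalize(feats, dim=1)
    protos = []
    for c in range(num_classes):
        sel = mask & (labels == c)
        # Prototype = mean feature of the labeled points of class c
        protos.append(feats[sel].mean(0) if sel.any() else torch.zeros(feats.shape[1]))
    protos = F.normalize(torch.stack(protos), dim=1)
    sim = feats @ protos.T                 # (N, num_classes) cosine similarities
    conf, pseudo = sim.max(dim=1)
    pseudo[mask] = labels[mask]            # keep the ground-truth sparse labels
    keep = mask | (conf > tau)             # accept only confident pseudo labels
    return pseudo, keep

feats = torch.randn(500, 32)
labels = torch.randint(0, 4, (500,))
mask = torch.rand(500) < 0.05              # ~5% of points are annotated
pseudo, keep = propagate_labels(feats, labels, mask, num_classes=4)
print(int(keep.sum()), "points would receive training supervision")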
Strategic Preys Make Acute Predators: Enhancing Camouflaged Object Detectors by Generating Camouflaged Objects
Camouflaged object detection (COD) is the challenging task of identifying
camouflaged objects visually blended into surroundings. Albeit achieving
remarkable success, existing COD detectors still struggle to obtain precise
results in some challenging cases. To handle this problem, we draw inspiration
from the prey-vs-predator game, in which prey evolve better camouflage and
predators acquire more acute vision, and we develop algorithms from both the
prey side and the predator side. On the prey side, we propose an
adversarial training framework, Camouflageator, which introduces an auxiliary
generator to generate more camouflaged objects that are harder for a COD method
to detect. Camouflageator trains the generator and detector in an adversarial
way such that the enhanced auxiliary generator helps produce a stronger
detector. On the predator side, we introduce a novel COD method, called
Internal Coherence and Edge Guidance (ICEG), which introduces a camouflaged
feature coherence module to excavate the internal coherence of camouflaged
objects, striving to obtain more complete segmentation results. Additionally,
ICEG proposes a novel edge-guided separated calibration module to remove false
predictions and avoid ambiguous boundaries. Extensive experiments show that
ICEG outperforms existing COD detectors and that Camouflageator can be flexibly
applied to improve various COD detectors, including ICEG, yielding
state-of-the-art COD performance.
Comment: Accepted at ICLR 2024
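The prey-side training scheme can be sketched as a standard adversarial loop: the generator perturbs images to maximize the detector's segmentation loss, while the detector trains on the harder generated samples. The toy networks, random data, and the absence of any camouflage-fidelity constraint are all simplifications of the actual Camouflageator framework:

import torch
import torch.nn as nn

# Minimal stand-ins for the auxiliary generator and the COD detector
# (hypothetical architectures; the real networks are far richer)
gen = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())
det = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1))
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(det.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(10):
    img = torch.rand(4, 3, 64, 64)               # toy image batch
    gt = (torch.rand(4, 1, 64, 64) > 0.5).float()  # toy segmentation masks

    # Prey side: re-camouflage the input so the detector fails,
    # i.e. maximize the detection loss via its negation
    camo = (img + 0.1 * gen(img)).clamp(0, 1)
    loss_g = -bce(det(camo), gt)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # Predator side: train the detector on the harder generated samples
    loss_d = bce(det(camo.detach()), gt)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()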