
    Systematic review and network meta-analysis of pre-emptive embolization of the aneurysm sac side branches and aneurysm sac coil embolization to improve the outcomes of endovascular aneurysm repair

    Objective: Previous reports have revealed a high incidence of type II endoleak (T2EL) after endovascular aneurysm repair (EVAR). The incidence of T2EL after EVAR is reduced by pre-emptive embolization of aneurysm sac side branches (ASSB) and aneurysm sac coil embolization (ASCE). This study aimed to investigate whether different preventive interventions for T2EL were correlated with suppression of aneurysm sac expansion and reduction of the re-intervention rate.

    Methods: The PubMed, Web of Science, MEDLINE, and Embase databases, as well as conference proceedings, were searched to identify articles on EVAR with or without embolization. The study was developed in line with the Participants, Interventions, Comparisons, Outcomes, and Study design principles and was conducted and reported in accordance with the Preferred Reporting Items for Systematic reviews and Meta-Analyses guidelines. We used network meta-analysis based on multivariate random-effects meta-analysis to indirectly compare outcomes of the different strategies for embolization during EVAR.

    Results: A total of 31 studies met all inclusion criteria and were included in the qualitative and quantitative syntheses. The included studies were published between 2001 and 2022 and analyzed a total of 18,542 patients, including 1,882 patients who received prophylactic embolization during EVAR (experimental group) and 16,660 who did not (control group). The effect of pre-emptive embolization of the inferior mesenteric artery (IMA) (IMA-ASSB) in preventing T2EL was similar to the effects of non-selective embolization of ASSB (NS-ASSB) (relative risk [RR] 1.01, 95% confidence interval [CI] 0.38–2.63) and of ASCE (RR 0.88, 95% CI 0.40–1.96). IMA-ASSB showed a better clinical effect in suppressing aneurysm sac expansion (RR 0.27, 95% CI 0.09–2.25 versus NS-ASSB; RR 0.93, 95% CI 0.16–5.56 versus ASCE) and in reducing the re-intervention rate (RR 0.34, 95% CI 0.08–1.53 versus NS-ASSB; RR 0.66, 95% CI 0.19–2.22 versus ASCE). All prophylactic embolization strategies improved the clinical outcomes of EVAR.

    Conclusion: Prophylactic embolization during EVAR effectively prevents T2EL, suppresses aneurysm sac expansion, and reduces the re-intervention rate. IMA embolization demonstrated benefits in achieving long-term aneurysm sac stability and lowering the risk of secondary surgery. NS-ASSB more effectively reduces the incidence of T2EL, while IMA embolization alone or in combination with ASCE enhances the clinical benefits of EVAR. In addition, as network meta-analysis is an indirect method based on a refinement of existing data, more studies and evidence are needed to establish more credible conclusions.
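    The effect sizes above are relative risks with 95% confidence intervals. As a minimal sketch of the per-study statistic underlying such pooled estimates (using made-up counts, not data from the review), a relative risk and its log-scale 95% CI can be computed as:

```python
import math

def relative_risk(events_exp, n_exp, events_ctrl, n_ctrl):
    """Relative risk of an event (e.g. T2EL) in the experimental arm
    vs. the control arm, with a 95% CI built on the log scale."""
    rr = (events_exp / n_exp) / (events_ctrl / n_ctrl)
    # standard error of log(RR) for a 2x2 table
    se_log = math.sqrt(1/events_exp - 1/n_exp + 1/events_ctrl - 1/n_ctrl)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

# hypothetical counts: 20/100 events with embolization, 40/200 without
rr, lo, hi = relative_risk(20, 100, 40, 200)
```

    A network meta-analysis then combines many such study-level estimates, including indirect comparisons, under a random-effects model.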

    Pose Guided Human Image Synthesis with Partially Decoupled GAN

    Pose Guided Human Image Synthesis (PGHIS) is a challenging task of transforming a human image from a reference pose to a target pose while preserving its style. Most existing methods encode the texture of the whole reference human image into a latent space and then utilize a decoder to synthesize the image texture of the target pose. However, it is difficult to recover the detailed texture of the whole human image. To alleviate this problem, we propose a method that decouples the human body into several parts (e.g., hair, face, hands, feet, etc.) and then uses each of these parts to guide the synthesis of a realistic image of the person, which preserves the detailed information of the generated images. In addition, we design a multi-head attention-based module for PGHIS. Because most convolutional neural network-based methods have difficulty modeling long-range dependency due to the convolutional operation, the long-range modeling capability of the attention mechanism is better suited than convolutional neural networks to the pose transfer task, especially for sharp pose deformation. Extensive experiments on the Market-1501 and DeepFashion datasets reveal that our method outperforms existing state-of-the-art methods on nearly all qualitative and quantitative metrics.
    Comment: 16 pages, 14th Asian Conference on Machine Learning
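    The multi-head attention module mentioned above can be sketched as scaled dot-product self-attention over a sequence of feature vectors. This is a minimal numpy illustration, not the paper's actual module: the random projection matrices stand in for learned Wq/Wk/Wv weights, and the sequence could be, say, flattened body-part features.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, heads, rng):
    """Scaled dot-product multi-head self-attention over n feature
    vectors of dimension d; every position attends to every other,
    giving the long-range dependency modeling CNNs lack."""
    n, d = x.shape
    dh = d // heads
    out = np.empty_like(x)
    for h in range(heads):
        # random projections stand in for learned weight matrices
        Wq, Wk, Wv = (rng.standard_normal((d, dh)) / np.sqrt(d) for _ in range(3))
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        attn = softmax(q @ k.T / np.sqrt(dh))   # (n, n) attention weights
        out[:, h*dh:(h+1)*dh] = attn @ v
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 64))   # 16 positions, 64-dim features
y = multi_head_attention(x, heads=8, rng=rng)
```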

    Tailoring Personality Traits in Large Language Models via Unsupervisedly-Built Personalized Lexicons

    Personality plays a pivotal role in shaping human expression patterns, so regulating the personality of large language models (LLMs) holds significant potential for enhancing the user experience of LLMs. Previous methods either relied on fine-tuning LLMs on specific corpora or necessitated manually crafted prompts to elicit specific personalities from LLMs. However, the former approach is inefficient and costly, while the latter cannot precisely manipulate personality traits at a fine-grained level. To address these challenges, we employ novel Unsupervisedly-Built Personalized Lexicons (UBPL) in a pluggable manner during the decoding phase of LLMs to manipulate their personality traits. UBPL is a lexicon built through an unsupervised approach from a situational judgment test dataset (SJTs4LLM). Users can utilize UBPL to adjust the probability vectors of predicted words in the decoding phase of LLMs, thus influencing the personality expression of LLMs. Extensive experimentation demonstrates the remarkable effectiveness and pluggability of our method for fine-grained manipulation of an LLM's personality.
    Comment: Work in progress
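    The core idea of adjusting next-token probability vectors with a lexicon can be sketched in a few lines. This is a hypothetical illustration of decoding-time logit biasing, not the paper's implementation: the toy vocabulary and the trait weights in the lexicon are made up for the example.

```python
import numpy as np

def apply_personality_lexicon(logits, vocab, lexicon, alpha=1.0):
    """Pluggable decoding-time control: add a trait weight from the
    lexicon to each word's logit, then renormalize with softmax, so
    trait-consistent words become more likely without any fine-tuning."""
    adjusted = logits.copy()
    for i, word in enumerate(vocab):
        adjusted[i] += alpha * lexicon.get(word, 0.0)
    e = np.exp(adjusted - adjusted.max())
    return e / e.sum()   # adjusted next-token probabilities

# hypothetical 4-word vocabulary and trait lexicon (e.g. "agreeable")
vocab = ["calm", "angry", "walk", "shout"]
logits = np.array([0.0, 0.0, 0.0, 0.0])
probs = apply_personality_lexicon(logits, vocab, {"calm": 2.0, "angry": -2.0})
```

    Because the adjustment happens purely at decoding time, it can be switched on or off per request, which is what makes the approach pluggable.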

    Improving Human Image Synthesis with Residual Fast Fourier Transformation and Wasserstein Distance

    With the rapid development of the Metaverse, virtual humans have emerged, and human image synthesis and editing techniques, such as pose transfer, have recently become popular. Most existing techniques rely on GANs, which can generate good human images even with large variations and occlusions. But to the best of our knowledge, the existing state-of-the-art methods still have the following problems: first, the rendering effect of the synthetic image is not realistic, with poor rendering of some regions; second, the training of GANs is unstable and slow to converge, with issues such as mode collapse. We propose several methods to address these two problems. To improve the rendering effect, we use the Residual Fast Fourier Transform Block to replace the traditional Residual Block. Then, spectral normalization and Wasserstein distance are used to improve the speed and stability of GAN training. Experiments demonstrate that the proposed methods are effective at solving the problems listed above, and we achieve state-of-the-art scores in LPIPS and PSNR.
    Comment: This paper is accepted by IJCNN202
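    The Residual Fast Fourier Transform Block can be sketched as: transform the feature map to the frequency domain (where every output coefficient depends on the whole spatial input, giving a global receptive field), filter it, transform back, and add the result to the input as a residual. This is a minimal numpy sketch under assumptions: the fixed 0.5 scaling stands in for learned frequency-domain weights, and a real block would also include spatial convolutions and normalization.

```python
import numpy as np

def residual_fft_block(x):
    """Sketch of a residual FFT block on a single-channel feature map:
    frequency-domain processing followed by a residual connection."""
    freq = np.fft.rfft2(x)                   # spatial -> frequency domain
    freq = 0.5 * freq                        # stand-in for learned filtering
    spatial = np.fft.irfft2(freq, s=x.shape) # back to the spatial domain
    return x + np.maximum(spatial, 0.0)      # ReLU, then residual add

x = np.random.default_rng(0).standard_normal((32, 32))
y = residual_fft_block(x)
```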

    High-Fidelity and High-Efficiency Digital Class-D Audio Power Amplifier

    This study presents a high-fidelity, high-efficiency digital class-D audio power amplifier (CDA) consisting of digital and analog modules. To realize a compatible digital input, a fully digital audio digital-to-analog converter (DAC) is implemented in MATLAB and Xilinx System Generator; it consists of a 16x interpolation filter, a fourth-order four-bit quantized delta-sigma (ΔΣ) modulator, and a uniform-sampling pulse width modulator. The CDA utilizes closed-loop negative feedback and loop-filtering technologies to minimize distortion. The audio DAC, implemented on a field-programmable gate array, consumes 0.128 W and uses 7100 LUTs, a resource utilization rate of 11.2%. The analog module is fabricated in a 0.18 µm BCD technology. The post-layout simulation results show that the CDA delivers an output power of 1 W with 93.3% efficiency to a 4 Ω speaker and achieves a total harmonic distortion (THD) of 0.0138% with transient noise for a 1 kHz input sinusoidal test tone and a 3.6 V supply. The output power reaches up to 2.73 W at 1% THD (with transient noise). The proposed amplifier occupies an active area of 1 mm².
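    The principle behind the ΔΣ modulator stage can be illustrated with a drastically simplified model. This is a first-order, one-bit modulator in Python, not the paper's fourth-order four-bit hardware design: an integrator accumulates the error between the input and the fed-back quantizer output, so the coarse bitstream tracks the audio signal on average while quantization noise is pushed to high frequencies.

```python
import numpy as np

def delta_sigma_first_order(signal):
    """First-order one-bit delta-sigma modulator (toy model): the
    integrator accumulates input minus the previous quantized output,
    and a 1-bit quantizer emits +1 or -1 each sample."""
    integrator = 0.0
    out = np.empty_like(signal)
    for i, x in enumerate(signal):
        integrator += x - (out[i - 1] if i else 0.0)  # error feedback
        out[i] = 1.0 if integrator >= 0.0 else -1.0   # 1-bit quantizer
    return out

t = np.arange(4096)
tone = 0.5 * np.sin(2 * np.pi * t / 256)  # slow sinusoidal test tone
bits = delta_sigma_first_order(tone)      # two-level output bitstream
```

    A low-pass filter (the speaker itself, in a class-D output stage) then recovers the audio from the bitstream.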

    Augmentation-induced Consistency Regularization for Classification

    Deep neural networks have become popular in many supervised learning tasks, but they may suffer from overfitting when the training dataset is limited. To mitigate this, many researchers use data augmentation, a widely used and effective method for increasing the variety of datasets. However, the randomness introduced by data augmentation causes inevitable inconsistency between training and inference, which limits the improvement. In this paper, we propose a consistency regularization framework based on data augmentation, called CR-Aug, which forces the output distributions of different sub-models generated by data augmentation to be consistent with each other. Specifically, CR-Aug evaluates the discrepancy between the output distributions of two augmented versions of each sample, and it utilizes a stop-gradient operation to minimize the consistency loss. We apply CR-Aug to image and audio classification tasks and conduct extensive experiments to verify its effectiveness in improving the generalization ability of classifiers. Our CR-Aug framework is ready to use and can be easily adapted to many state-of-the-art network architectures. Our empirical results show that CR-Aug outperforms baseline methods by a significant margin.
    Comment: This paper is accepted by IJCNN202
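    The consistency term can be sketched as a symmetric divergence between the output distributions of two augmented views of the same sample. This is a minimal numpy illustration under assumptions: a symmetric KL divergence stands in for whatever discrepancy the paper uses, and the stop-gradient is only mimicked by treating one side as a fixed target in each direction (numpy computes no gradients).

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cr_aug_loss(logits_a, logits_b):
    """Consistency loss between the class distributions produced for
    two augmented views of one sample: zero when they agree,
    positive when they diverge."""
    p, q = softmax(logits_a), softmax(logits_b)
    kl_pq = np.sum(p * np.log(p / q))  # view A against fixed target B
    kl_qp = np.sum(q * np.log(q / p))  # view B against fixed target A
    return 0.5 * (kl_pq + kl_qp)

loss_same = cr_aug_loss(np.array([1.0, 2.0]), np.array([1.0, 2.0]))
loss_diff = cr_aug_loss(np.array([1.0, 2.0]), np.array([2.0, 1.0]))
```

    During training this term would be added to the usual classification loss, pushing the network toward predictions that are stable under augmentation.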

    DeepDeblur: text image recovery from blur to sharp

    Digital images can be degraded by a variety of blur during image acquisition (e.g., relative motion of cameras, electronic noise, capturing defocus, and so on). A blurred image can be computationally modeled as the result of convolving a sharp image with the corresponding blur kernel, and thus image deblurring can be regarded as a deconvolution operation. In this paper, we explore deblurring images by approximating blind deconvolution using a deep neural network. Different deep neural network structures are investigated to evaluate their deblurring capabilities, which contributes to the optimal design of a network architecture. It is found that shallow and narrow networks are not capable of handling complex motion blur. We thus present a deep network with 20 layers to cope with text image blur. In addition, a novel network structure with Sequential Highway Connections (SHC) is leveraged to attain superior convergence. The experimental results demonstrate the state-of-the-art performance of the proposed framework, with higher visual quality of the deblurred images.
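    The blur model described above (blurred = sharp ∗ kernel) can be made concrete with a short direct-convolution sketch. This is a generic illustration of the forward blur process, not the paper's network: the 3x3 box kernel approximates a defocus blur, and edge padding is one arbitrary boundary choice.

```python
import numpy as np

def blur(image, kernel):
    """Model blurring as 2-D convolution of a sharp image with a blur
    kernel; deblurring is then the inverse (deconvolution) problem
    that the network learns to approximate."""
    kh, kw = kernel.shape
    padded = np.pad(image, ((kh // 2,) * 2, (kw // 2,) * 2), mode="edge")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            # flip the kernel for true convolution (vs. correlation)
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel[::-1, ::-1])
    return out

kernel = np.full((3, 3), 1 / 9)            # box kernel: defocus-like blur
img = np.zeros((8, 8))
img[4, 4] = 9.0                            # a single bright pixel
blurred = blur(img, kernel)                # the impulse spreads into a 3x3 patch
```

    Because the kernel is unknown in practice (blind deconvolution), the deep network learns the sharp-from-blurred mapping directly rather than inverting an explicit kernel.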