66 research outputs found

    Extremal trees, unicyclic and bicyclic graphs with respect to p-Sombor spectral radii

    Full text link
    For a graph G = (V, E) and v_i ∈ V, denote by d_{v_i} (or d_i for short) the degree of vertex v_i. The p-Sombor matrix S_p(G) (p ≠ 0) of a graph G is a square matrix whose (i, j)-entry equals (d_i^p + d_j^p)^{1/p} if the vertices v_i and v_j are adjacent, and 0 otherwise. The p-Sombor spectral radius of G, denoted by ρ(S_p(G)), is the largest eigenvalue of the p-Sombor matrix S_p(G). In this paper, we consider the extremal trees, unicyclic and bicyclic graphs with respect to the p-Sombor spectral radii. We completely characterize the extremal graphs with the first three maximum Sombor spectral radii, which partially answers a problem posed by Liu et al. in [MATCH Commun. Math. Comput. Chem. 87 (2022) 59-87].
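
    The following sketch (not from the paper) is a minimal NumPy illustration of the definition above: it assembles the p-Sombor matrix of a small path tree, chosen arbitrarily as an example, and reads off the spectral radius as the largest eigenvalue of the resulting symmetric matrix.

        import numpy as np

        # Example: path tree P4 with vertices 0-1-2-3 (chosen only for illustration).
        edges = [(0, 1), (1, 2), (2, 3)]
        n = 4

        # Vertex degrees d_i.
        deg = np.zeros(n)
        for i, j in edges:
            deg[i] += 1
            deg[j] += 1

        def p_sombor_matrix(edges, deg, p):
            """(i,j)-entry is (d_i^p + d_j^p)^(1/p) for adjacent i, j, else 0 (p != 0)."""
            n = len(deg)
            S = np.zeros((n, n))
            for i, j in edges:
                w = (deg[i]**p + deg[j]**p)**(1.0 / p)
                S[i, j] = S[j, i] = w
            return S

        p = 2  # p = 2 recovers the usual Sombor matrix
        S = p_sombor_matrix(edges, deg, p)
        # S is real symmetric, so eigvalsh applies; the spectral radius is its largest eigenvalue.
        rho = max(np.linalg.eigvalsh(S))
        print(rho)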

    Progressive Scene Text Erasing with Self-Supervision

    Full text link
    Scene text erasing seeks to remove text content from scene images, and current state-of-the-art text erasing models are trained on large-scale synthetic data. Although data synthesis engines can provide vast amounts of annotated training samples, there are differences between synthetic and real-world data. In this paper, we employ self-supervision to learn feature representations on unlabeled real-world scene text images. A novel pretext task is designed to keep text stroke masks consistent across image variants. We design a Progressive Erasing Network to remove residual text: the scene text is erased progressively by leveraging intermediate generated results, which provide the foundation for subsequent higher-quality results. Experiments show that our method significantly improves the generalization of the text erasing task and achieves state-of-the-art performance on public benchmarks.
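
    A minimal sketch of the kind of consistency objective described above, not the paper's actual network or loss: two photometric variants of the same unlabeled image are passed through a shared stroke-mask predictor (a hypothetical stand-in model here) and their predicted masks are pushed toward agreement.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class StrokeMaskHead(nn.Module):
            """Hypothetical stroke-mask predictor: any encoder-decoder emitting a 1-channel mask."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1),
                )

            def forward(self, x):
                return torch.sigmoid(self.net(x))

        def consistency_loss(model, view_a, view_b):
            """Encourage agreement between stroke masks predicted for two variants of one image.
            Assumes photometric (pixel-aligned) augmentations, so the masks compare directly."""
            mask_a = model(view_a)
            mask_b = model(view_b)
            return F.l1_loss(mask_a, mask_b)

        # Toy usage: random tensors stand in for two augmented views of one unlabeled image.
        model = StrokeMaskHead()
        view_a = torch.rand(2, 3, 64, 64)
        view_b = torch.rand(2, 3, 64, 64)
        loss = consistency_loss(model, view_a, view_b)
        loss.backward()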

    SpikeBERT: A Language Spikformer Trained with Two-Stage Knowledge Distillation from BERT

    Full text link
    Spiking neural networks (SNNs) offer a promising avenue to implement deep neural networks in a more energy-efficient way. However, the network architectures of existing SNNs for language tasks are too simplistic, and deep architectures have not been fully explored, resulting in a significant performance gap compared to mainstream transformer-based networks such as BERT. To this end, we improve a recently proposed spiking transformer (i.e., Spikformer) so that it can process language tasks, and we propose a two-stage knowledge distillation method for training it: first, pre-training by distilling knowledge from BERT on a large collection of unlabelled text; then, fine-tuning on task-specific instances by distilling again from a BERT model fine-tuned on the same training examples. Through extensive experimentation, we show that models trained with our method, named SpikeBERT, outperform state-of-the-art SNNs and even achieve results comparable to BERT on text classification tasks in both English and Chinese, with much lower energy consumption.
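
    A hedged sketch of what a two-stage distillation loss of this kind can look like; the temperature, mixing weight, and toy logits below are illustrative assumptions, not the paper's actual settings. The spiking student is trained to match the softened output distribution of a BERT teacher, with ground-truth cross-entropy mixed in only during the fine-tuning stage.

        import torch
        import torch.nn.functional as F

        def distillation_loss(student_logits, teacher_logits, labels=None, T=2.0, alpha=0.5):
            """KL divergence between temperature-softened teacher and student distributions.
            Supplying labels mixes in cross-entropy, as in a task-specific fine-tuning stage."""
            kd = F.kl_div(
                F.log_softmax(student_logits / T, dim=-1),
                F.softmax(teacher_logits / T, dim=-1),
                reduction="batchmean",
            ) * (T * T)
            if labels is None:                      # stage 1: distillation on unlabelled text
                return kd
            ce = F.cross_entropy(student_logits, labels)
            return alpha * kd + (1 - alpha) * ce    # stage 2: fine-tuning distillation

        # Toy usage: 4 examples, 3 classes; logits stand in for the student (SpikeBERT-like)
        # and the teacher (BERT-like) outputs.
        student_logits = torch.randn(4, 3, requires_grad=True)
        teacher_logits = torch.randn(4, 3)
        labels = torch.tensor([0, 2, 1, 0])
        loss = distillation_loss(student_logits, teacher_logits, labels)
        loss.backward()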

    Comparison of the feasibility and validity of a one-level and a two-level erector spinae plane block combined with general anesthesia for patients undergoing lumbar surgery

    Get PDF
    Background: Spinal surgery causes severe postoperative pain. An erector spinae plane (ESP) block can relieve postoperative pain, but the optimal blocking method has not been defined. The aim of this study is to compare the feasibility of a one-level and a two-level lumbar ESP block and their effect on intraoperative and postoperative analgesia in lumbar spinal surgery. Methods: A total of 83 adult patients who were scheduled for posterior lumbar interbody fusion were randomly divided into two groups. Patients in Group I (n = 42) received an ultrasound-guided bilateral one-level ESP block with 0.3% ropivacaine, while patients in Group II (n = 41) received a bilateral two-level ESP block. Blocking effectiveness was evaluated, including whether the sensory block covered the surgical incision, sensory decrease in the anterior thigh, and quadriceps strength decrease. Intraoperative anesthetic dosage, postoperative visual analogue scale (VAS) pain scores, opioid consumption, rescue analgesia, and opioid-related side effects were analyzed. Results: In total, 80 patients completed the clinical trial and were included in the analysis, with 40 in each group. The time to complete the ESP block was significantly longer in Group II than in Group I (16.0 [14.3, 17.0] min vs. 9.0 [8.3, 9.0] min, P = 0.000). The rate of the sensory block covering the surgical incision at 30 min was significantly higher in Group II than in Group I (100% [40/40] vs. 85.0% [34/40], P = 0.026). The rate of sensory block in the anterior thigh was higher in Group II (43.8% [35/80] vs. 27.5% [22/80], P = 0.032), but the rate of quadriceps strength decrease did not differ significantly between the groups. The mean effect-site remifentanil concentration during intervertebral decompression was lower in Group II than in Group I (2.9 ± 0.3 ng/ml vs. 3.3 ± 0.5 ng/ml, P = 0.007). There were no significant differences between the groups in intraoperative analgesic consumption, postoperative analgesic consumption, or postoperative VAS pain scores at rest and with movement within 24 h. There were no block failures, block-related complications, or postoperative infections. Conclusions: Among patients undergoing posterior lumbar interbody fusion, the two-level ESP block provided a higher rate of coverage of the surgical incision by the sensory block than the one-level method, without increasing the incidence of procedure-related complications. Clinical Trial Registration: www.chictr.org.cn, identifier ChiCTR210004359
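
    As a rough plausibility check of one reported comparison (the abstract does not state which test was used, so the two-sided Fisher's exact test below is an assumption), the incision-coverage counts at 30 min can be compared as follows.

        from scipy.stats import fisher_exact

        # Incision covered by the sensory block at 30 min: Group II 40/40, Group I 34/40 (from the abstract).
        table = [[40, 0],    # Group II: covered, not covered
                 [34, 6]]    # Group I:  covered, not covered
        odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
        print(p_value)  # ~0.026 under this (assumed) test, in line with the reported P = 0.026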

    Symbolic Learning to Optimize: Towards Interpretability and Scalability

    Full text link
    Recent studies on Learning to Optimize (L2O) suggest a promising path to automating and accelerating the optimization procedure for complicated tasks. Existing L2O models parameterize optimization rules by neural networks and learn those numerical rules via meta-training. However, they face two common pitfalls: (1) scalability: the numerical rules represented by neural networks create extra memory overhead for applying L2O models and limit their applicability to optimizing larger tasks; (2) interpretability: it is unclear what an L2O model has learned in its black-box optimization rule, nor is it straightforward to compare different L2O models in an explainable way. To avoid both pitfalls, this paper provides a proof of concept that we can "kill two birds with one stone" by introducing the powerful tool of symbolic regression to L2O. We establish a holistic symbolic representation and analysis framework for L2O, which yields a series of insights for learnable optimizers. Leveraging these findings, we further propose a lightweight L2O model that can be meta-trained on large-scale problems and outperforms human-designed and tuned optimizers. Our work supplies a brand-new perspective to L2O research. Code is available at https://github.com/VITA-Group/Symbolic-Learning-To-Optimize. (Comment: Published as a conference paper at ICLR 2022.)
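
    To make the notion of a symbolic optimization rule concrete, the sketch below applies a hand-written closed-form update; this is a generic illustration in the spirit of symbolic L2O, not the rule learned or distilled in the paper.

        import numpy as np

        def symbolic_update(w, g, m, lr=0.1, beta=0.9, eps=1e-8):
            """One step of a hypothetical symbolic rule: w <- w - lr * m / (|m| + eps),
            where m is an exponential moving average of the gradient. Unlike a neural-network
            optimizer, this closed-form expression can be read and analyzed directly."""
            m = beta * m + (1 - beta) * g
            w = w - lr * m / (np.abs(m) + eps)
            return w, m

        # Toy usage: minimize f(w) = ||w||^2 / 2, whose gradient is simply w.
        w = np.array([3.0, -2.0])
        m = np.zeros_like(w)
        for _ in range(50):
            g = w                      # gradient of the quadratic objective
            w, m = symbolic_update(w, g, m)
        print(w)                       # close to the minimizer [0, 0], within ~lr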

    Heat Capacities and Thermodynamic Properties of Pinnoite and Inderite

    No full text
    In this paper, in order to understand the thermodynamic properties of the natural minerals pinnoite (MgB2O4·3H2O, Pin) and inderite (Mg2B6O11·15H2O, Ind) deposited in salt lakes, the heat capacities of the two minerals were measured with a precision calorimeter at temperatures from 306.15 to 355.15 K after high-purity samples were synthesized. No phase transitions or thermal anomalies were found for either mineral, and the molar heat capacities as functions of temperature were fitted as C_{p,m,Pin} = -2029.47058 + 16.94666T - 0.04396T^2 + 3.89409×10^-5 T^3 and C_{p,m,Ind} = -30814.43795 + 282.68108T - 0.85605T^2 + 8.70708×10^-4 T^3, respectively. On the basis of the molar heat capacities (C_{p,m}) of Pin and Ind, the thermodynamic functions of entropy, enthalpy, and Gibbs free energy were obtained at 1 K intervals for the two minerals for the first time.
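
    As a small worked example of how such fitted polynomials are used (illustrative only; the units of the fit are assumed to be J K^-1 mol^-1, which the abstract does not state), the sketch below integrates the pinnoite C_p expression to obtain the enthalpy and entropy increments between 306.15 K and 355.15 K.

        from scipy.integrate import quad

        def cp_pinnoite(T):
            """Fitted molar heat capacity of pinnoite from the abstract (valid 306.15-355.15 K)."""
            return -2029.47058 + 16.94666*T - 0.04396*T**2 + 3.89409e-5*T**3

        T1, T2 = 306.15, 355.15
        # Increments over the measured range: dH = integral of Cp dT, dS = integral of (Cp / T) dT.
        dH, _ = quad(cp_pinnoite, T1, T2)
        dS, _ = quad(lambda T: cp_pinnoite(T) / T, T1, T2)
        print(dH, dS)  # units follow those of the fitted Cp (assumed J mol^-1 and J K^-1 mol^-1)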