
    TorchCP: A Library for Conformal Prediction based on PyTorch

    TorchCP is a Python toolbox for conformal prediction research on deep learning models. It contains various implementations of post-hoc and training methods for classification and regression tasks (including multi-dimensional output). TorchCP is built on PyTorch (Paszke et al., 2019) and leverages the advantages of matrix computation to provide concise and efficient inference implementations. The code is licensed under the LGPL license and is open-sourced at https://github.com/ml-stat-Sustech/TorchCP.
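
    The abstract does not spell out TorchCP's API, and it may change across versions, so the snippet below is only a minimal split-conformal sketch in plain PyTorch of the kind of post-hoc procedure such a library implements; the score choice (1 − softmax probability of the true label) and all tensor names are assumptions made for illustration.

```python
import math
import torch

def calibrate_threshold(cal_logits, cal_labels, alpha=0.1):
    """Split-conformal calibration with the simple 1 - softmax(true class) score.
    Returns the finite-sample-corrected (1 - alpha) quantile of the scores."""
    probs = torch.softmax(cal_logits, dim=1)
    scores = 1.0 - probs[torch.arange(len(cal_labels)), cal_labels]
    n = len(scores)
    q_level = min(1.0, math.ceil((n + 1) * (1 - alpha)) / n)
    return torch.quantile(scores, q_level)

def predict_sets(test_logits, threshold):
    """Include every label whose score 1 - p(y|x) does not exceed the threshold."""
    probs = torch.softmax(test_logits, dim=1)
    return (1.0 - probs) <= threshold   # boolean mask: one prediction set per row

# Illustrative usage with random, hypothetical tensors
cal_logits, cal_labels = torch.randn(500, 10), torch.randint(0, 10, (500,))
tau = calibrate_threshold(cal_logits, cal_labels, alpha=0.1)
sets = predict_sets(torch.randn(5, 10), tau)
```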

    Building3D: An Urban-Scale Dataset and Benchmarks for Learning Roof Structures from Point Clouds

    Urban modeling from LiDAR point clouds is an important topic in computer vision, computer graphics, photogrammetry and remote sensing. 3D city models have found a wide range of applications in smart cities, autonomous navigation, urban planning, mapping, etc. However, existing datasets for 3D modeling mainly focus on common objects such as furniture or cars. The lack of building datasets has become a major obstacle to applying deep learning technology to specific domains such as urban modeling. In this paper, we present an urban-scale dataset consisting of more than 160 thousand buildings along with corresponding point clouds, mesh and wire-frame models, covering 16 cities in Estonia over about 998 km². We extensively evaluate the performance of state-of-the-art algorithms, including handcrafted and deep-feature-based methods. Experimental results indicate that Building3D poses challenges of high intra-class variance, data imbalance and large-scale noise. Building3D is the first and largest urban-scale building modeling benchmark, allowing a comparison of supervised and self-supervised learning methods. We believe that Building3D will facilitate future research on urban modeling, aerial path planning, mesh simplification, and semantic/part segmentation.

    Does Confidence Calibration Help Conformal Prediction?

    Conformal prediction, as an emerging uncertainty quantification technique, constructs prediction sets that are guaranteed to contain the true label with high probability. Previous works usually employ temperature scaling to calibrate the classifier, assuming that confidence calibration can benefit conformal prediction. In this work, we first show that post-hoc calibration methods surprisingly lead to larger prediction sets despite improved calibration, while over-confidence with small temperatures instead benefits conformal prediction performance. Theoretically, we prove that high confidence reduces the probability of appending a new class to the prediction set. Inspired by this analysis, we propose a novel method, Conformal Temperature Scaling (ConfTS), which rectifies the objective through the gap between the threshold and the non-conformity score of the ground-truth label. In this way, the new objective of ConfTS optimizes the temperature value toward an optimal set that satisfies marginal coverage. Experiments demonstrate that our method can effectively improve widely used conformal prediction methods.
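
    The abstract only names the ConfTS objective (the gap between the conformal threshold and the ground-truth non-conformity score), so the sketch below is a loose paraphrase of that idea rather than the paper's method: it uses a squared gap, a simple 1 − softmax score, and hypothetical hyperparameters.

```python
import math
import torch

def confts_style_temperature(val_logits, val_labels, alpha=0.1, steps=200, lr=0.05):
    """Hypothetical sketch: tune the temperature T so that the conformal threshold
    sits close to the true-label non-conformity scores on a held-out split."""
    log_T = torch.zeros(1, requires_grad=True)     # optimize log T to keep T positive
    opt = torch.optim.SGD([log_T], lr=lr)
    n = len(val_labels)
    q_level = min(1.0, math.ceil((n + 1) * (1 - alpha)) / n)
    for _ in range(steps):
        probs = torch.softmax(val_logits / log_T.exp(), dim=1)
        scores = 1.0 - probs[torch.arange(n), val_labels]   # non-conformity of true labels
        tau = torch.quantile(scores, q_level)               # conformal threshold at level alpha
        loss = ((tau - scores) ** 2).mean()                 # squared threshold-score gap
        opt.zero_grad()
        loss.backward()
        opt.step()
    return log_T.exp().item()
```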

    Error reduction method for singularity point detection using Shack–Hartmann wavefront sensor

    A new framework is proposed for realizing high-spatial-resolution detection of singularity points in optical vortex (OV) beams using a Shack–Hartmann wavefront sensor (SHWS). The method uses the SHWS to record a Hartmanngram, from which a map of evaluation values related to the phase slope is calculated. We first determine the singularity's position precisely by calculating the centroid of the circulation over 3×3 crosspoints. We then analyze the error distribution of this estimate and propose a hybrid centroiding framework to reduce the error. Optical experiments were carried out to verify the method. Good linearity was shown in detecting the positions of the singularity points, and the accuracy of detecting the position of the OV was improved. The average root-mean-square (RMS) error over various measurements was better than that of the correlation matching method we proposed previously. The new method not only achieves higher accuracy but also consumes much less time than our former work.
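
    The abstract only outlines the detection step, so the following NumPy fragment is an assumed illustration of a circulation-based search of this general kind (slope maps sx, sy derived from the SHWS spot displacements, with a 3×3 centroid refinement); it is not the authors' implementation and omits the hybrid error-reduction stage.

```python
import numpy as np

def circulation_map(sx, sy):
    """Discrete loop integral of the measured phase slopes (sx, sy) around each
    grid cell of the lenslet array. Multiplied by the lenslet pitch, the loop
    integral is about +/- 2*pi near an optical-vortex singularity and ~0 elsewhere."""
    return sx[:-1, :-1] + sy[:-1, 1:] - sx[1:, :-1] - sy[:-1, :-1]

def locate_singularity(circ):
    """Coarse |circulation| peak, refined by the centroid of its 3x3 neighborhood."""
    i, j = np.unravel_index(np.abs(circ).argmax(), circ.shape)
    i0, i1 = max(i - 1, 0), min(i + 2, circ.shape[0])
    j0, j1 = max(j - 1, 0), min(j + 2, circ.shape[1])
    w = np.abs(circ[i0:i1, j0:j1])
    ys, xs = np.mgrid[i0:i1, j0:j1]
    return (ys * w).sum() / w.sum(), (xs * w).sum() / w.sum()
```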

    Elixir: Train a Large Language Model on a Small GPU Cluster

    In recent years, the number of parameters in a single deep learning (DL) model has been growing much faster than GPU memory capacity. People who do not have access to a large number of GPUs resort to heterogeneous training systems that store model parameters in CPU memory. Existing heterogeneous systems are based on parallelization plans scoped to the whole model: they apply a single, consistent parallel training method to all operators in the computation. Therefore, engineers need to expend a huge effort to incorporate a new type of model parallelism and patch its compatibility with other parallelisms. For example, Mixture-of-Experts (MoE) is still incompatible with ZeRO-3 in DeepSpeed. Moreover, current systems face efficiency problems at small scale, since they are designed and tuned for large-scale training. In this paper, we propose Elixir, a new parallel heterogeneous training system designed for efficiency and flexibility. Elixir utilizes the memory and computing resources of both the GPU and the CPU. For flexibility, Elixir generates parallelization plans at the granularity of operators; any new type of model parallelism can be incorporated by assigning a parallel pattern to an operator. For efficiency, Elixir implements a hierarchical distributed memory management scheme to accelerate inter-GPU communication and CPU-GPU data transfers. As a result, Elixir can train a 30B OPT model on an A100 with 40GB of CUDA memory while reaching 84% of the efficiency of PyTorch GPU training. With its super-linear scalability, training efficiency becomes the same as PyTorch GPU training on multiple GPUs. In addition, large MoE models can be trained 5.3x faster than dense models of the same size. Elixir is now integrated into ColossalAI and is available on its main branch.
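
    Elixir's own interfaces live in ColossalAI and are not described in the abstract, so the snippet below is only a toy, inference-time sketch of the underlying heterogeneous-memory idea (CPU-resident weights streamed per layer to the GPU); it is not Elixir's API and skips the gradient, optimizer-state and communication machinery that makes the real system efficient.

```python
import torch
from torch import nn

@torch.no_grad()
def streamed_forward(layers: nn.Sequential, x: torch.Tensor, device=None):
    """Hypothetical sketch of heterogeneous memory use: weights live in large CPU
    memory and each layer is streamed onto the GPU only while it runs."""
    device = device or ("cuda" if torch.cuda.is_available() else "cpu")
    x = x.to(device)
    for layer in layers:          # layers are assumed to start on the CPU
        layer.to(device)          # stream this layer's weights onto the accelerator
        x = layer(x)
        layer.to("cpu")           # release GPU memory before the next layer runs
    return x

# Illustrative usage: a stack of layers larger than what we want resident on the GPU
model = nn.Sequential(*[nn.Linear(4096, 4096) for _ in range(8)])
out = streamed_forward(model, torch.randn(2, 4096))
```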

    Knee anterior cruciate ligament bio stiffness measuring instrument

    Aiming at the lack of timely and effective evaluation of knee anterior cruciate ligament (ACL) reconstruction, a knee ACL force and displacement measuring instrument was developed, and test experiments were carried out using a laboratory-made test platform and a robotic arm. Firstly, the importance of ACL reconstruction surgery is introduced and the necessity of this kind of measuring instrument is established. The reliability of the stiffness measuring instrument under different spatial measurement conditions is verified against the mechanical model from our previous ACL in-situ measurements. The design structure and measurement system of the instrument are then introduced in detail. Finally, displacement and force accuracy experiments using the laboratory-made test platform, the UR5 robotic arm and the stiffness measuring instrument, together with porcine-bone ACL tests and postoperative evaluation experiments, demonstrate that the instrument can be used for the assessment of ACL reconstruction surgery.

    Effectiveness and safety of pelareorep plus chemotherapy versus chemotherapy alone for advanced solid tumors: a meta-analysis

    Background: Pelareorep is an oncolytic virus that induces oncolytic effects in many solid tumors and has shown therapeutic benefits. However, few studies have compared pelareorep combined with chemotherapy to traditional chemotherapy alone in advanced solid tumors. Consequently, we intended to evaluate the effectiveness and safety of pelareorep plus chemotherapy in this paper. Methods: We comprehensively searched four databases, including PubMed, Embase, Cochrane Library and Web of Science, for studies comparing pelareorep combined with chemotherapy to chemotherapy alone in the treatment of advanced solid tumors. The outcome measures were 1-year overall survival (OS), 2-year OS, 4-month progression-free survival (PFS), 1-year PFS, objective response rate (ORR), any-grade adverse events (any-grade AEs), and severe AEs (grade ≥ 3). Results: Five studies involving 492 patients were included. Combination therapy did not significantly improve clinical outcomes in terms of 1-year OS [RR = 1.02, 95% CI = (0.82–1.25)], 2-year OS [RR = 1.00, 95% CI = (0.67–1.49)], 4-month PFS [RR = 1.00, 95% CI = (0.67–1.49)], 1-year PFS [RR = 0.79, 95% CI = (0.44–1.42)], or ORR [OR = 0.79, 95% CI = (0.49–1.27)] compared to chemotherapy alone, and subgroup analyses of 2-year OS, 1-year PFS, and ORR by country and tumor site showed similar results. Across all grades, the incidence of AEs was greater with combination therapy, including fever [RR = 3.10, 95% CI = (1.48–6.52)], nausea [RR = 1.19, 95% CI = (1.02–1.38)], diarrhea [RR = 1.87, 95% CI = (1.39–2.52)], chills [RR = 4.14, 95% CI = (2.30–7.43)], headache [RR = 1.46, 95% CI = (1.02–2.09)], vomiting [RR = 1.38, 95% CI = (1.06–1.80)] and flu-like symptoms [RR = 4.18, 95% CI = (2.19–7.98)]. However, severe adverse events did not differ significantly between the two arms. Conclusion: The addition of pelareorep to traditional chemotherapy did not lead to significant improvements in OS, PFS, or ORR in patients with advanced solid tumors, but it did partially increase any-grade AEs, with no discernible differences in serious AEs. Therefore, the combination treatment is not recommended in patients with advanced solid tumors. Systematic Review Registration: https://www.crd.york.ac.uk/PROSPERO/display_record.php?RecordID=400841, identifier CRD4202340084
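
    For readers unfamiliar with the effect measures quoted above, here is a hedged sketch of how a single study's risk ratio and its 95% confidence interval are typically computed from a 2×2 table; the counts in the example are invented for illustration only and have no relation to the pooled results.

```python
import math

def risk_ratio_ci(events_tx, n_tx, events_ctrl, n_ctrl, z=1.96):
    """Risk ratio with a log-normal 95% confidence interval."""
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    se_log_rr = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctrl - 1 / n_ctrl)
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lo, hi

# Made-up counts, purely to show the arithmetic
print(risk_ratio_ci(events_tx=30, n_tx=100, events_ctrl=20, n_ctrl=100))
```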

    Conformal Prediction for Deep Classifier via Label Ranking

    Conformal prediction is a statistical framework that generates prediction sets containing ground-truth labels with a desired coverage guarantee. The predicted probabilities produced by machine learning models are generally miscalibrated, leading to large prediction sets in conformal prediction. In this paper, we empirically and theoretically show that disregarding the probabilities' values mitigates the undesirable effect of miscalibrated probabilities. We then propose a novel algorithm named Sorted Adaptive Prediction Sets (SAPS), which discards all probability values except for the maximum softmax probability. The key idea behind SAPS is to minimize the dependence of the non-conformity score on the probability values while retaining the uncertainty information. In this manner, SAPS can produce sets of small size and communicate instance-wise uncertainty. Theoretically, we provide a finite-sample coverage guarantee for SAPS and show that the expected set size from SAPS is always smaller than that of APS. Extensive experiments validate that SAPS not only reduces the size of prediction sets but also broadly enhances the conditional coverage rate and the adaptation of prediction sets.
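
    The abstract states the key idea (keep only the maximum softmax probability and otherwise rely on label rank), so the fragment below is a hedged PyTorch sketch of a SAPS-style non-conformity score; the ranking weight `lam` and the exact form are assumptions paraphrasing that description, not the reference implementation.

```python
import torch

def saps_style_score(logits, labels, lam=0.2):
    """Sketch of a SAPS-style non-conformity score: discard every softmax value
    except the maximum and otherwise fall back on the label's rank. `lam` is a
    hypothetical ranking weight; the paper's exact formulation may differ."""
    probs = torch.softmax(logits, dim=1)
    max_prob, _ = probs.max(dim=1)                           # the only probability kept
    label_prob = probs.gather(1, labels[:, None]).squeeze(1)
    ranks = (probs > label_prob[:, None]).sum(dim=1) + 1     # 1-based rank of the true label
    u = torch.rand(len(labels))                              # randomization for exact coverage
    return torch.where(ranks == 1,
                       u * max_prob,
                       max_prob + lam * (ranks.float() - 2 + u))
```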