191 research outputs found

    Studies on a Double Poisson-Geometric Insurance Risk Model with Interference

    This paper studies a generalized double Poisson-Geometric insurance risk model. Using a martingale and stopping-time approach, we obtain the adjustment coefficient equation, the Lundberg inequality, and a formula for the ruin probability. We also discuss the Laplace transform of the first time the surplus reaches a given level, and obtain its expectation and variance. Finally, we give numerical examples.
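    For context, the quantities named above can be sketched in the classical Cramér-Lundberg setting; the paper's double Poisson-Geometric model with interference generalizes these forms, and the exact equations are derived there:

```latex
% Illustrative classical forms (the paper derives the analogues for the
% double Poisson-Geometric model).  The adjustment coefficient R > 0 is
% the positive root of
\lambda \left( M_X(R) - 1 \right) = cR ,
% where \lambda is the claim arrival rate, M_X the claim-size moment
% generating function, and c the premium rate.  The Lundberg inequality
% then bounds the ruin probability for initial surplus u:
\psi(u) \le e^{-Ru} .
```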

    The Gerber-Shiu Discounted Penalty Function of Sparre Andersen Risk Model with a Constant Dividend Barrier

    This paper constructs a Sparre Andersen risk model with a constant dividend barrier in which the claim interarrival distribution is a mixture of an exponential distribution and an Erlang(n) distribution. We derive the integro-differential equation satisfied by the Gerber-Shiu discounted penalty function of this risk model. Finally, we provide a numerical example.
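    For readers unfamiliar with the central object, the Gerber-Shiu discounted penalty function has the standard general form below; the paper works with this function under a constant dividend barrier:

```latex
% Standard Gerber-Shiu discounted penalty function:
m(u) = \mathbb{E}\!\left[ e^{-\delta T}\, w\bigl(U(T^-), |U(T)|\bigr)
       \mathbf{1}_{\{T < \infty\}} \,\middle|\, U(0) = u \right] ,
% where T is the ruin time, \delta \ge 0 the discount rate, U(T^-) the
% surplus immediately before ruin, |U(T)| the deficit at ruin, and w a
% nonnegative penalty function of these two quantities.
```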

    Psychometric evaluation of the Chinese version of the burnout syndrome assessment scale in nurses

    Objective: This study aimed to translate the Burnout Syndrome Assessment Scale (BOSAS) into Chinese and to validate its reliability and validity among Chinese emergency department and ICU nurses. Methods: The scale was translated into Chinese following Brislin's translation principle. A total of 626 nurses from Jiangxi, Zhejiang, and Fujian provinces in China completed an online questionnaire survey, which included a general information questionnaire developed by the research team and the Chinese version of the BOSAS. Reliability and validity of the Chinese version were analyzed using SPSS 25 and AMOS 24. Results: The Chinese version of the BOSAS consists of 20 items across two dimensions, personal burnout and job burnout, consistent with the original English version. It demonstrated high internal consistency (Cronbach's α = 0.941), good split-half reliability (0.765), and good test-retest reliability (0.871). The content validity index (S-CVI) was 0.971, indicating strong content validity. Exploratory factor analysis confirmed the same two-factor structure as the original scale, and confirmatory factor analysis further validated this structure, with all fit indices acceptable. Conclusion: The BOSAS has been successfully introduced, and its reliability and validity have been verified in Chinese emergency department and ICU nurses.
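    The internal-consistency figure reported above (Cronbach's α = 0.941) comes from the standard formula α = k/(k-1) · (1 − Σ item variances / total-score variance). A minimal sketch of that computation, on made-up toy data rather than the study's dataset:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy example: 4 respondents answering 3 Likert-style items.
data = np.array([
    [3, 4, 3],
    [5, 5, 4],
    [1, 2, 2],
    [4, 4, 5],
])
alpha = cronbach_alpha(data)
```

On this toy matrix α comes out high because respondents who score high on one item tend to score high on the others.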

    PUMA: Secure Inference of LLaMA-7B in Five Minutes

    With ChatGPT as a representative example, many companies have begun to provide services based on large Transformer models. However, using such a service inevitably leaks users' prompts to the model provider. Prior works have studied secure inference for Transformer models using secure multiparty computation (MPC), where model parameters and clients' prompts are kept secret. Despite this, these frameworks are still limited in terms of model performance, efficiency, and deployment. To address these limitations, we propose the PUMA framework to enable fast and secure Transformer model inference. Our framework designs high-quality approximations for expensive functions, such as GeLU and Softmax, which significantly reduce the cost of secure inference while preserving model performance. Additionally, we design secure Embedding and LayerNorm procedures that faithfully implement the desired functionality without undermining the Transformer architecture. PUMA is about 2x faster than the state-of-the-art MPC framework MPCFORMER (ICLR 2023) and has accuracy similar to plaintext models without fine-tuning (which previous works failed to achieve). Moreover, PUMA can evaluate LLaMA-7B in around 5 minutes to generate one token. To the best of our knowledge, this is the first time a model of this size has been evaluated under MPC. PUMA has been open-sourced in the GitHub repository of SecretFlow-SPU.
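    The abstract's key idea is replacing expensive functions like GeLU with cheaper approximations. As a plaintext illustration of why that works, here is the well-known tanh-based GeLU approximation compared against the exact Gaussian-CDF form — this is a standard approximation used for illustration only, not PUMA's actual piecewise-polynomial protocol:

```python
import math

def gelu_exact(x: float) -> float:
    # Exact GeLU via the Gaussian CDF: x * Phi(x).
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh_approx(x: float) -> float:
    # Common tanh-based approximation (illustrative; PUMA uses its own
    # MPC-friendly piecewise polynomials instead).
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi)
                                      * (x + 0.044715 * x ** 3)))
```

The two agree to within roughly 1e-3 over typical activation ranges, which is why model accuracy survives such substitutions.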

    GraphTheta: A Distributed Graph Neural Network Learning System With Flexible Training Strategy

    Graph neural networks (GNNs) have been demonstrated to be a powerful tool for analysing non-Euclidean graph data. However, the lack of efficient distributed graph learning (GL) systems severely hinders applications of GNNs, especially when graphs are big and GNNs are relatively deep. Herein, we present GraphTheta, a novel distributed and scalable GL system implemented in the vertex-centric graph programming model. GraphTheta is the first GL system built upon distributed graph processing with neural network operators implemented as user-defined functions. The system supports multiple training strategies and enables efficient, scalable learning on big graphs across distributed (virtual) machines, each with low memory. To facilitate graph convolution implementations, GraphTheta puts forward a new GL abstraction named NN-TGAR to bridge the gap between graph processing and graph deep learning. A distributed graph engine is proposed to conduct stochastic gradient descent optimization with hybrid-parallel execution. Moreover, we add support for a new cluster-batched training strategy in addition to global-batch and mini-batch. We evaluate GraphTheta on a number of datasets ranging from small to modest to large scale. Experimental results show that GraphTheta scales well to 1,024 workers when training an in-house GNN on an industry-scale Alipay dataset of 1.4 billion nodes and 4.1 billion attributed edges, using a cluster of CPU virtual machines (Docker containers) with small memory each (5~12 GB). Moreover, GraphTheta obtains comparable or better prediction results than state-of-the-art GNN implementations, demonstrating that it learns GNNs as well as existing frameworks, and it can outperform DistDGL by up to 2.02x with better scalability. To the best of our knowledge, this work presents the largest edge-attributed GNN learning task reported in the literature. Comment: 18 pages, 14 figures, 5 tables
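    To make the vertex-centric framing concrete, a single GNN-style layer can be expressed as gather-reduce-apply over each vertex. The sketch below is a generic illustration with hypothetical names; it is not GraphTheta's actual NN-TGAR API:

```python
def gather_apply(adj, feats, weight):
    """One vertex-centric GNN-style layer: each vertex gathers its
    neighbors' features, reduces them by averaging, and applies a
    linear transform.  Illustrative only; the real NN-TGAR abstraction
    and its user-defined functions differ.

    adj:    {vertex: [neighbor, ...]}
    feats:  {vertex: [f_in floats]}
    weight: f_in x f_out matrix as a list of lists
    """
    f_in, f_out = len(weight), len(weight[0])
    out = {}
    for v, nbrs in adj.items():
        # Gather + reduce: mean of neighbor features (self if isolated).
        src = nbrs if nbrs else [v]
        agg = [sum(feats[u][i] for u in src) / len(src) for i in range(f_in)]
        # Apply: linear transform agg @ weight.
        out[v] = [sum(agg[i] * weight[i][j] for i in range(f_in))
                  for j in range(f_out)]
    return out

# Tiny 2-node graph: each vertex averages the other's features.
adj = {0: [1], 1: [0]}
feats = {0: [1.0, 0.0], 1: [0.0, 1.0]}
weight = [[1.0], [2.0]]          # maps 2 input features to 1 output
layer_out = gather_apply(adj, feats, weight)
```

Expressing the layer this way is what lets a distributed graph-processing engine partition vertices across workers and run the neural operators as per-vertex user-defined functions.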

    Embryonic Porcine Skin Precursors Can Successfully Develop into Integrated Skin without Teratoma Formation Posttransplantation in Nude Mouse Model

    How to improve the wound-healing quality of severe burn patients remains a challenge due to the lack of skin appendages and rete ridges, regardless of the progress made in stem cell research and tissue engineering. We therefore systematically studied the growth potential and differentiation capacity of porcine embryonic skin precursors (PESPs). Implantation of PESPs of different gestational ages in nude mice can generate integrated skin, including epidermis, dermis, and skin appendages such as sweat glands, hair follicles, and sebaceous glands. PESPs of embryonic day 42 possess the maximal growth potential, while the safe window for PESP transplantation, with respect to teratoma risk, is E56 or later. In conclusion, PESPs can form the three-dimensional structure of skin with all necessary skin appendages. Our data strongly indicate that porcine embryonic skin precursors harvested at E56 from minipigs may provide new hope for high-quality healing of extensive burns and traumas.

    BumbleBee: Secure Two-party Inference Framework for Large Transformers

    Large transformer-based models have achieved state-of-the-art performance on many real-world tasks such as natural language processing and computer vision. However, with the increasing sensitivity of the data and tasks they handle, privacy has become a major concern during model deployment. In this work, we focus on private inference in the two-party setting, where one party holds private inputs and the other holds the model. We introduce BumbleBee, a fast and communication-friendly two-party private transformer inference system. Our contributions are three-fold. First, we present optimized homomorphic encryption-based protocols that enable the multiplication of large matrices with 80-90% less communication cost than existing methods. Second, we offer a general method for designing efficient and accurate protocols for non-linear activation functions in transformers; our activation protocols demonstrate speedups and reduce communication overhead by 80-95% over two existing methods. Finally, we conducted extensive benchmarks on several large transformer models. Results show that BumbleBee is more than one order of magnitude faster than Iron (NeurIPS 2022).
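    BumbleBee's own protocols are based on homomorphic encryption, but the two-party setting it targets is easiest to see with the simplest MPC primitive: additive secret sharing, where each share is uniformly random on its own and linear operations commute with sharing. The toy below illustrates only that primitive, not BumbleBee's actual protocols:

```python
import random

P = 2**61 - 1  # a large prime modulus for the toy field

def share(x: int):
    """Split x into two additive shares mod P; each share alone is
    uniformly random and reveals nothing about x."""
    r = random.randrange(P)
    return r, (x - r) % P

def reconstruct(s0: int, s1: int) -> int:
    return (s0 + s1) % P

# Linearity: each party scales its own share by a public constant c,
# and the shares still reconstruct to c*x -- the basic trick that makes
# linear layers cheap in secret-shared inference (multiplications of
# two secrets are the expensive part, which is where HE-based matrix
# protocols like BumbleBee's come in).
x = 12345
s0, s1 = share(x)
c = 7
y = reconstruct((c * s0) % P, (c * s1) % P)
```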