91 research outputs found

    Clinical application of a fully automated blood collection robot and its assessment of blood collection quality of anticoagulant specimens

    Background and objectives: To investigate the application of intelligent puncture blood collection robots for anticoagulated blood specimens, assess subjects' satisfaction with the two blood collection methods, and evaluate the feasibility of intelligent blood collection devices replacing manual blood collection in clinical work. Materials and methods: A total of 154 volunteers from Zhongshan Hospital Fudan University were recruited to compare the test results of anticoagulant blood samples collected by the blood collection robot and by manual blood collection; a questionnaire was used to ask the volunteers about their experience with the two methods. Blood collection data from 6,255 patients willing to use the robot were collected to analyze the success rate of blood collection. Results: The blood collection robot is superior to manual collection in terms of specimen volume and collection pain, and the puncture success rate is 94.3%. Eleven indexes of the anticoagulated blood specimens collected by the robot differed statistically from the results of manual blood collection, but the differences did not affect clinical diagnosis or prognosis. Conclusion: Intelligent robotic blood collection is less painful and better accepted by patients, and can be used for clinical anticoagulated blood specimen collection.

    Microstructure-Empowered Stock Factor Extraction and Utilization

    High-frequency quantitative investment is a crucial aspect of stock investment. Notably, order flow data plays a critical role as it provides the most detailed level of information among high-frequency trading data, including comprehensive data from the order book and transaction records at the tick level. Order flow data is extremely valuable for market analysis, as it equips traders with essential insights for making informed decisions. However, extracting and effectively utilizing order flow data present challenges due to the large volume of data involved and the limitations of traditional factor mining techniques, which are primarily designed for coarser-level stock data. To address these challenges, we propose a novel framework that aims to effectively extract essential factors from order flow data for diverse downstream tasks across different granularities and scenarios. Our method consists of a Context Encoder and a Factor Extractor. The Context Encoder learns an embedding for the current order flow data segment's context by considering both the expected and actual market state. The Factor Extractor then uses unsupervised learning methods to select the signals that are most distinct from the majority within the given context. The extracted factors are then utilized for downstream tasks. In empirical studies, our proposed framework efficiently handles an entire year of stock order flow data across diverse scenarios, offering a broader range of applications than existing tick-level approaches, which are limited to only a few days of stock data. We demonstrate that our method extracts superior factors from order flow data, enabling significant improvements in stock trend prediction and order execution tasks at the second and minute levels.
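
    To make the two-stage design above concrete, the following is a heavily simplified, illustrative Python/NumPy sketch: a summary-statistic context model stands in for the learned Context Encoder, and a deviation-from-majority score stands in for the unsupervised Factor Extractor. All names, array shapes, and the scoring rule are assumptions for illustration, not the paper's implementation.

        import numpy as np

        def extract_factors(segment, context_window, top_k=5):
            """segment: (T, F) tick-level features; context_window: (C, T, F) recent segments."""
            # Stand-in for the Context Encoder: expected market state from recent history vs. actual state.
            expected = context_window.mean(axis=(0, 1))        # (F,) expected feature levels
            spread = context_window.std(axis=(0, 1)) + 1e-8    # (F,) typical variation in the context
            actual = segment.mean(axis=0)                      # (F,) observed feature levels
            # Stand-in for the Factor Extractor: keep signals most distinct from the contextual majority.
            distinctness = np.abs(actual - expected) / spread
            top = np.argsort(distinctness)[::-1][:top_k]
            return top, distinctness[top]

        rng = np.random.default_rng(0)
        context = rng.normal(size=(30, 100, 20))               # 30 past segments, 100 ticks, 20 features
        segment = rng.normal(size=(100, 20))
        segment[:, 3] += 2.0                                   # make one feature clearly anomalous
        indices, scores = extract_factors(segment, context)
        print(indices, scores)                                 # feature 3 should rank first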

    SRN-SZ: Deep Learning-Based Scientific Error-bounded Lossy Compression with Super-resolution Neural Networks

    The fast growth in the computational power and scale of modern supercomputing systems has raised great challenges for the management of exascale scientific data. To maintain the usability of scientific data, error-bounded lossy compression has been proposed and developed as an essential technique for reducing the size of scientific data while constraining the data distortion. Among the diverse datasets generated by various scientific simulations, certain datasets cannot be effectively compressed by existing error-bounded lossy compressors with traditional techniques. The recent success of artificial intelligence has inspired several researchers to integrate neural networks into error-bounded lossy compressors. However, those works still suffer from limited compression ratios and/or extremely low efficiencies. To address those issues and improve compression on hard-to-compress datasets, in this paper we propose SRN-SZ, a deep learning-based scientific error-bounded lossy compressor leveraging a hierarchical data grid expansion paradigm implemented by super-resolution neural networks. SRN-SZ applies the most advanced super-resolution network, HAT, for its compression, which is free of costly per-dataset training. In experiments compared with various state-of-the-art compressors, SRN-SZ achieves up to 75% compression ratio improvement under the same error bound and up to 80% compression ratio improvement under the same PSNR compared with the second-best compressor.
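
    The core loop described above — predict the fine grid from a coarser one, then correct every point to within the error bound — can be sketched as follows. This is only an illustration: a bilinear upsampler (scipy.ndimage.zoom) stands in for the HAT super-resolution network, and the single-level layout and function names are assumptions rather than SRN-SZ's actual multi-level pipeline.

        import numpy as np
        from scipy.ndimage import zoom

        def sr_compress(field, error_bound, scale=2):
            coarse = field[::scale, ::scale]                              # downsampled grid to store
            pred = zoom(coarse, scale, order=1)[:field.shape[0], :field.shape[1]]
            q = np.round((field - pred) / (2 * error_bound)).astype(np.int64)
            return coarse, q                                              # q would be entropy-coded

        def sr_decompress(coarse, q, shape, error_bound, scale=2):
            pred = zoom(coarse, scale, order=1)[:shape[0], :shape[1]]
            return pred + q * 2 * error_bound

        field = np.fromfunction(lambda i, j: np.sin(i / 7) * np.cos(j / 5), (64, 64))
        coarse, q = sr_compress(field, 1e-3)
        recon = sr_decompress(coarse, q, field.shape, 1e-3)
        print("max error:", np.abs(recon - field).max())                  # within 1e-3, up to rounding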

    Effects of 60 days of 6° head-down bed rest on the composition and function of the human gut microbiota

    Summary: Spaceflight is a rigorous and dangerous environment that can negatively affect astronauts' health and the entire mission. The 60-day 6° head-down bed rest (HDBR) experiment provided us with an opportunity to trace changes in the gut microbiota under simulated microgravity. The gut microbiota of the volunteers was analyzed and characterized by 16S rRNA gene sequencing and metagenomic sequencing. Our results showed that the composition and function of the volunteers' gut microbiota were markedly affected by 60 days of 6° HDBR. We further confirmed the species and diversity fluctuations. Resistance and virulence genes in the gut microbiota were also affected by 60 days of 6° HDBR, but their species attributions remained stable. The effect of 60 days of 6° HDBR on the human gut microbiota was partially consistent with that of spaceflight, implying that HDBR can simulate how spaceflight affects the human gut microbiota.

    C-Coll: Introducing Error-bounded Lossy Compression into MPI Collectives

    With the ever-increasing computing power of supercomputers and the growing scale of scientific applications, the efficiency of MPI collective communications turns out to be a critical bottleneck in large-scale distributed and parallel processing. Large message sizes in MPI collectives are a particularly big concern because they may significantly delay overall parallel performance. To address this issue, prior research simply applies off-the-shelf fixed-rate lossy compressors in the MPI collectives, leading to suboptimal performance, limited generalizability, and unbounded errors. In this paper, we propose a novel solution, called C-Coll, which leverages error-bounded lossy compression to significantly reduce the message size, resulting in a substantial reduction in communication cost. The key contributions are three-fold. (1) We develop two general, optimized lossy-compression-based frameworks for both types of MPI collectives (collective data movement as well as collective computation), based on their particular characteristics. Our framework not only reduces communication cost but also preserves data accuracy. (2) We customize an optimized version based on SZx, an ultra-fast error-bounded lossy compressor, which can meet the specific needs of collective communication. (3) We integrate C-Coll into multiple collectives, such as MPI_Allreduce, MPI_Scatter, and MPI_Bcast, and perform a comprehensive evaluation based on real-world scientific datasets. Experiments show that our solution outperforms the original MPI collectives as well as multiple baselines and related efforts by 3.5-9.7X.
    Comment: 12 pages, 15 figures, 5 tables, submitted to SC '2
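
    The pattern C-Coll builds on — compress the message body before communication and decompress on arrival — can be illustrated with a minimal mpi4py sketch. Here zlib is used purely as a placeholder for an error-bounded lossy compressor such as SZx, and compressed_bcast is an illustrative wrapper, not C-Coll's in-MPI integration.

        from mpi4py import MPI
        import numpy as np
        import zlib

        def compressed_bcast(data, comm, root=0):
            """Broadcast a NumPy array by sending compressed bytes instead of the raw buffer."""
            rank = comm.Get_rank()
            if rank == root:
                payload = zlib.compress(data.tobytes())        # placeholder for an SZx-style compressor
                meta = (data.dtype.str, data.shape)
            else:
                payload, meta = None, None
            meta = comm.bcast(meta, root=root)                 # small metadata: dtype and shape
            payload = comm.bcast(payload, root=root)           # compressed message body
            return np.frombuffer(zlib.decompress(payload), dtype=meta[0]).reshape(meta[1])

        if __name__ == "__main__":
            comm = MPI.COMM_WORLD
            arr = np.random.rand(1_000_000) if comm.Get_rank() == 0 else None
            arr = compressed_bcast(arr, comm)

    For collective computation such as MPI_Allreduce, the data must additionally be decompressed, reduced, and recompressed along the communication path, which this data-movement sketch does not cover.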

    Photocatalytic Overall Water Splitting Reaction Feature on Photodeposited NixP/γ-Ga2O3 Nanosheets

    Solar light-driven overall water splitting for hydrogen production is an ideal solution to climate warming and energy shortage issues. Obtaining a highly efficient and stable photocatalyst remains a major challenge at present. Herein, NixP/γ-Ga2O3 nanosheets, synthesized from NiCl2, NaH2PO2, and home-made γ-Ga2O3 nanosheets by the photodeposition method under 254 nm UV irradiation for 30 min, are found to be a highly active and durable photocatalyst for pure water splitting into H2 and O2 without a sacrificial reagent. The H2 production rate is as high as 5.5 mmol·g⁻¹·h⁻¹ under 125 W high-pressure mercury lamp irradiation, which is 3.4 and 2.5 times higher than that on the pristine γ-Ga2O3 nanosheets and Pt/γ-Ga2O3 nanosheets, respectively, and 2.0 times higher than that on the 0.5 wt % Ni2P/γ-Ga2O3 reported previously. However, the O2 evolution rate is much lower than the H2 evolution rate in the initial reaction stage. On prolonging the irradiation time, H2 evolution declines, while O2 evolution increases until it reaches its stoichiometric value corresponding to H2. The reason for this photocatalytic behavior of NixP/γ-Ga2O3 is studied, and a corresponding mechanism is proposed. Oxygen evolution is absent or low in the initial reaction stage because the dioxygen generated from water oxidation by the photogenerated holes is wholly or partially captured by surface oxygen vacancies to form surface peroxide bonds (–O–O–). Once the oxygen vacancies are eliminated by the photogenerated O2, the overall water splitting reaction reaches a steady state. Thereafter, H2 production decreases from 5.5 to 2.0 mmol·g⁻¹·h⁻¹, but the O2 evolution gradually approaches the corresponding stoichiometric value, especially for the photocatalyst treated with H2O2 for 24 h.
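
    For reference, the "stoichiometric value corresponding to H2" follows from the overall water splitting reaction, 2 H2O → 2 H2 + O2; at steady state the O2 evolution rate should therefore be half the H2 evolution rate.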

    Anatomy of High-Performance GEMM with Online Fault Tolerance on GPUs

    General Matrix Multiplication (GEMM) is a crucial algorithm for various applications such as machine learning and scientific computing, and an efficient GEMM implementation is essential for the performance of these systems. While researchers often strive for faster performance by using large compute platforms, the increased scale of these systems can raise concerns about hardware and software reliability. In this paper, we present a design for a high-performance GEMM with algorithm-based fault tolerance for use on GPUs. We describe fault-tolerant designs for GEMM at the thread, warp, and threadblock levels, and also provide a baseline GEMM implementation that is competitive with or faster than the state-of-the-art, proprietary cuBLAS GEMM. We present a kernel fusion strategy that overlaps the memory accesses required for fault tolerance with the original GEMM computation, mitigating their latency. To support a wide range of input matrix shapes and reduce development costs, we present a template-based approach for automatic code generation for both fault-tolerant and non-fault-tolerant GEMM implementations. We evaluate our work on NVIDIA Tesla T4 and A100 server GPUs. Experimental results demonstrate that our baseline GEMM delivers comparable or superior performance to the closed-source cuBLAS. The fault-tolerant GEMM incurs only a minimal overhead (8.89% on average) compared to cuBLAS, even with hundreds of errors injected per minute. For irregularly shaped inputs, the automatically generated kernels show remarkable speedups of 160% ~ 183.5% and 148.55% ~ 165.12% for fault-tolerant and non-fault-tolerant GEMMs, respectively, outperforming cuBLAS by up to 41.40%.
    Comment: 11 pages, 2023 International Conference on Supercomputing
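
    The checksum idea behind algorithm-based fault tolerance for GEMM can be sketched in a few lines of NumPy. This illustrates only the encoding and correction step; the paper's contribution is performing this inside thread-, warp-, and threadblock-level GPU kernels with fused memory accesses, which the sketch below does not attempt. Function name, tolerance, and the fault-injection hook are assumptions for the demo.

        import numpy as np

        def abft_gemm(A, B, tol=1e-6, inject=None):
            """Compute C = A @ B with row/column checksums to detect and fix a single corrupted entry."""
            m, k = A.shape
            k2, n = B.shape
            assert k == k2
            A_c = np.vstack([A, A.sum(axis=0)])                     # (m+1) x k, column-checksum row appended
            B_r = np.hstack([B, B.sum(axis=1, keepdims=True)])      # k x (n+1), row-checksum column appended
            C_full = A_c @ B_r                                      # the product carries both checksums
            C = C_full[:m, :n].copy()
            if inject is not None:                                  # simulate a transient fault (demo only)
                fi, fj, delta = inject
                C[fi, fj] += delta
            col_err = np.abs(C.sum(axis=0) - C_full[m, :n])         # compare against column checksums
            row_err = np.abs(C.sum(axis=1) - C_full[:m, n])         # compare against row checksums
            if col_err.max() > tol and row_err.max() > tol:
                # A single corrupted entry sits at the intersection of the bad row and bad column.
                i, j = int(row_err.argmax()), int(col_err.argmax())
                C[i, j] -= C.sum(axis=0)[j] - C_full[m, j]          # restore it from the column checksum
            return C

        rng = np.random.default_rng(1)
        A, B = rng.standard_normal((128, 64)), rng.standard_normal((64, 96))
        C = abft_gemm(A, B, inject=(5, 7, 3.0))                     # inject one error and expect correction
        print(np.allclose(C, A @ B))                                # True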

    HuR-mediated nucleocytoplasmic translocation of HOTAIR relieves its inhibition of osteogenic differentiation and promotes bone formation

    Abstract: Bone marrow mesenchymal stem cell (BMSC) osteogenic differentiation and osteoblast function play critical roles in bone formation, which is a highly regulated process. Long noncoding RNAs (lncRNAs) perform diverse functions in a variety of biological processes, including BMSC osteogenic differentiation. Although several studies have reported that HOX transcript antisense RNA (HOTAIR) is involved in BMSC osteogenic differentiation, its effect on bone formation in vivo remains unclear. Here, by constructing transgenic mice with BMSC-specific (Prx1-HOTAIR) and osteoblast-specific (Bglap-HOTAIR) overexpression of HOTAIR, we found that Prx1-HOTAIR and Bglap-HOTAIR transgenic mice show different bone phenotypes in vivo. Specifically, Prx1-HOTAIR mice showed delayed bone formation, while Bglap-HOTAIR mice showed increased bone formation. HOTAIR inhibits BMSC osteogenic differentiation but promotes osteoblast function in vitro. Furthermore, we identified that HOTAIR is mainly located in the nucleus of BMSCs and in the cytoplasm of osteoblasts, and that it displays a nucleocytoplasmic translocation pattern during BMSC osteogenic differentiation. We identified for the first time that the RNA-binding protein human antigen R (HuR) is responsible for HOTAIR nucleocytoplasmic translocation. HOTAIR is essential for osteoblast function: cytoplasmic HOTAIR binds miR-214 and acts as a ceRNA to increase Atf4 protein levels and osteoblast function. Bglap-HOTAIR mice, but not Prx1-HOTAIR mice, showed alleviation of bone loss induced by unloading. This study reveals the importance of the temporal and spatial regulation of HOTAIR in BMSC osteogenic differentiation and bone formation, providing new insights into the precise regulation of HOTAIR as a potential target for bone loss.

    Dynamic Quality Metric Oriented Error-bounded Lossy Compression for Scientific Datasets

    With the ever-increasing execution scale of high-performance computing (HPC) applications, vast amounts of data are produced by scientific research every day. Error-bounded lossy compression has been considered a very promising solution to the big-data issue for scientific applications because it can significantly reduce the data volume at low time cost while allowing users to control the compression errors with a specified error bound. The existing error-bounded lossy compressors, however, are all developed based on inflexible designs or compression pipelines, which cannot adapt to the diverse compression quality requirements/metrics favored by different application users. In this paper, we propose a novel dynamic quality-metric-oriented error-bounded lossy compression framework, namely QoZ. The detailed contribution is three-fold. (1) We design a novel, highly parameterized, multi-level interpolation-based data predictor, which can significantly improve the overall compression quality at the same compressed size. (2) We design the error-bounded lossy compression framework QoZ based on the adaptive predictor, which can auto-tune the critical parameters and optimize the compression result according to user-specified quality metrics during online compression. (3) We evaluate QoZ carefully by comparing its compression quality with multiple state-of-the-art compressors on various real-world scientific application datasets. Experiments show that, compared with the second-best lossy compressor, QoZ can achieve up to 70% compression ratio improvement under the same error bound, up to 150% compression ratio improvement under the same PSNR, or up to 270% compression ratio improvement under the same SSIM.
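
    As an illustration of the interpolation-based prediction plus error-bounded quantization that QoZ builds on, here is a minimal one-dimensional, single-level Python sketch. QoZ itself uses multi-level, multi-dimensional interpolation with auto-tuned parameters; the function below is an assumption-laden toy version of the idea, not the framework's implementation.

        import numpy as np

        def interp_predict_quantize(data, error_bound):
            """Predict each odd-indexed point from its even-indexed neighbours and store
            the quantized residual so that reconstruction stays within error_bound."""
            recon = data.copy()
            quant = np.zeros(len(data), dtype=np.int64)
            for i in range(1, len(data) - 1, 2):
                pred = 0.5 * (recon[i - 1] + recon[i + 1])     # linear interpolation predictor
                q = int(np.round((data[i] - pred) / (2 * error_bound)))
                quant[i] = q                                   # entropy-coded in a real compressor
                recon[i] = pred + q * 2 * error_bound          # the value the decompressor will see
            return quant, recon

        data = np.sin(np.linspace(0.0, 10.0, 101))
        quant, recon = interp_predict_quantize(data, error_bound=1e-3)
        print("max error:", np.abs(recon - data).max())        # within the 1e-3 error bound

    The compression gain comes from the quantized residuals being small integers concentrated around zero, which entropy coding exploits; QoZ's contribution is tuning the interpolation levels and parameters online against the user-specified quality metric.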

    Exploring Autoencoder-based Error-bounded Compression for Scientific Data

    Error-bounded lossy compression is becoming an indispensable technique for the success of today's scientific projects, which produce vast volumes of data during simulations or instrument data acquisition. Not only can it significantly reduce data size, but it can also control the compression errors based on user-specified error bounds. Autoencoder (AE) models have been widely used in image compression, but few AE-based compression approaches support error-bounding features, which scientific applications strongly require. To address this issue, we explore using convolutional autoencoders to improve error-bounded lossy compression for scientific data, with the following three key contributions. (1) We provide an in-depth investigation of the characteristics of various autoencoder models and develop an error-bounded autoencoder-based framework in terms of the SZ model. (2) We optimize the compression quality for the main stages in our designed AE-based error-bounded compression framework, fine-tuning the block sizes and latent sizes and also optimizing the compression efficiency of latent vectors. (3) We evaluate our proposed solution using five real-world scientific datasets and compare it with six other related works. Experiments show that our solution exhibits very competitive compression quality among all the compressors in our tests. In absolute terms, it can obtain much better compression quality (100% ~ 800% improvement in compression ratio with the same data distortion) than SZ2.1 and ZFP in cases with a high compression ratio.
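
    A minimal PyTorch sketch of how an autoencoder can be wrapped with an error-bounding step is shown below. The tiny network, the block size, and the "store out-of-bound residuals separately" scheme are illustrative assumptions; the paper's SZ-integrated framework, latent-vector coding, and tuning of block and latent sizes are not reproduced here.

        import torch
        import torch.nn as nn

        class TinyAE(nn.Module):
            def __init__(self):
                super().__init__()
                self.enc = nn.Sequential(
                    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 4, 3, stride=2, padding=1),          # 4-channel latent at 1/4 resolution
                )
                self.dec = nn.Sequential(
                    nn.ConvTranspose2d(4, 16, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
                )

            def forward(self, x):
                return self.dec(self.enc(x))

        def error_bounded_encode(block, model, error_bound):
            """Latent representation plus corrections for points where the AE misses the bound."""
            with torch.no_grad():
                latent = model.enc(block)
                recon = model.dec(latent)
            residual = block - recon
            mask = residual.abs() > error_bound                        # points needing correction
            return latent, mask, residual[mask]                        # corrections stored alongside the latent

        model = TinyAE()                                               # untrained: most points need correction;
        block = torch.randn(1, 1, 64, 64)                              # after training, corrections become sparse
        latent, mask, corrections = error_bounded_encode(block, model, error_bound=0.05)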