
    Illegal Intrusion Detection of Internet of Things Based on Deep Mining Algorithm

    In this study, an illegal intrusion detection method for the Internet of Things (IoT) based on a deep mining algorithm was designed to accurately detect illegal intrusions, reduce their influence on transmission performance, and ensure safe IoT operation. Data were collected from the IoT through data packets, and attribute mapping was applied to convert character information into numerical information. The numerical information was then standardized and normalized, and the processed data were optimized with a regional adaptive oversampling algorithm to obtain an IoT training set. This training set served as the input to an improved sparse auto-encoder neural network, which was trained with a layer-wise greedy strategy to extract feature vectors of the sparse illegal-intrusion data. These feature vectors were then fed into an extreme learning machine classifier to classify and detect the IoT illegal-intrusion features. The experimental results indicate that this feature extraction reduces the feature dimension of the illegal-intrusion data to fewer than 30, well below the dimension of the original data. The recall, precision, and F1 score of the intrusion detection are 98.3%, 98.7%, and 98.6%, respectively, so intrusion attacks can be detected accurately. The conclusion is that IoT intrusion detection based on a deep mining algorithm can accurately detect illegal intrusions and reduce their influence on transmission performance.
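
    A minimal sketch of the described pipeline follows (Python). It is not the paper's implementation: the improved sparse auto-encoder trained with the layer-wise greedy strategy is replaced by a plain SVD projection for brevity, the regional adaptive oversampling step is omitted, and all names, shapes, and hyperparameters are illustrative assumptions. Only the standardization/normalization step and the extreme learning machine (ELM) classifier follow their standard textbook forms.

    import numpy as np

    def standardize_normalize(X):
        """Zero-mean/unit-variance standardization followed by min-max scaling to [0, 1]."""
        X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)
        X_min, X_max = X.min(axis=0), X.max(axis=0)
        return (X - X_min) / (X_max - X_min + 1e-8)

    def extract_features(X, n_components=30):
        """Stand-in for the sparse auto-encoder: project onto the leading SVD directions."""
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ Vt[:n_components].T

    class ELMClassifier:
        """Single-hidden-layer ELM: random hidden weights, closed-form output weights."""
        def __init__(self, n_hidden=200, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def _hidden(self, X):
            return np.tanh(X @ self.W + self.b)

        def fit(self, X, y):
            n_classes = int(y.max()) + 1
            T = np.eye(n_classes)[y]                                # one-hot targets
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            self.beta = np.linalg.pinv(self._hidden(X)) @ T         # least-squares output weights
            return self

        def predict(self, X):
            return np.argmax(self._hidden(X) @ self.beta, axis=1)

    # Hypothetical usage with synthetic traffic features and binary intrusion labels.
    X = standardize_normalize(np.random.rand(500, 120))
    y = np.random.randint(0, 2, size=500)
    Z = extract_features(X, n_components=30)                        # compressed to 30 dimensions
    predictions = ELMClassifier().fit(Z, y).predict(Z)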

    Cobalt sulfide/N,S codoped porous carbon core-shell nanocomposites as superior bifunctional electrocatalysts for oxygen reduction and evolution reactions.

    Author's postprint version. The final published version is available via doi: 10.1039/C5NR07429K. Accepted for publication 11 November 2015. © Royal Society of Chemistry 2015. Exploring highly efficient and low-cost bifunctional electrocatalysts for both the oxygen reduction reaction (ORR) and the oxygen evolution reaction (OER) in the renewable energy area has gained momentum but still remains a significant challenge. Here we present a simple but efficient method that utilizes ZIF-67 as the precursor and template for the one-step generation of homogeneously dispersed cobalt sulfide/N,S-codoped porous carbon nanocomposites as high-performance electrocatalysts. Owing to the favourable molecular-like structural features and uniformly dispersed active sites in the precursor, the resulting nanocomposites, which possess a unique core-shell structure, high porosity, homogeneous dispersion of active components, and N,S-doping effects, not only show excellent electrocatalytic activity towards the ORR, with a high onset potential (around -0.04 V vs. -0.02 V for the benchmark Pt/C catalyst) and a four-electron pathway, and towards the OER, with a small overpotential of 0.47 V at a current density of 10 mA cm(-2), but also exhibit ORR stability (92%) superior to that of the commercial Pt/C catalyst (74%), promising OER stability (80%), and good methanol tolerance. Our findings suggest that transition metal sulfide/porous carbon nanocomposites derived from the one-step simultaneous sulfurization and carbonization of zeolitic imidazolate frameworks are excellent alternative bifunctional electrocatalysts for the ORR and OER in the next generation of energy storage and conversion technologies. Funded by the Royal Society and the Royal Academy of Engineering.

    MIS-FM: 3D Medical Image Segmentation using Foundation Models Pretrained on a Large-Scale Unannotated Dataset

    Pretraining with large-scale 3D volumes has the potential to improve segmentation performance on a target medical image dataset where training images and annotations are limited. Because acquiring pixel-level segmentation annotations for a large-scale pretraining dataset is costly, pretraining with unannotated images is highly desirable. In this work, we propose a novel self-supervised learning strategy named Volume Fusion (VF) for pretraining 3D segmentation models. It fuses several random patches from a foreground sub-volume into a background sub-volume based on a predefined set of discrete fusion coefficients, and forces the model to predict the fusion coefficient of each voxel, which is formulated as a self-supervised segmentation task without manual annotations. Additionally, we propose a novel network architecture based on parallel convolution and transformer blocks that is well suited to transfer to different downstream segmentation tasks with organs and lesions at various scales. The proposed model was pretrained with 110k unannotated 3D CT volumes, and experiments on different downstream segmentation targets, including head and neck organs and thoracic/abdominal organs, showed that our pretrained model largely outperformed training from scratch as well as several state-of-the-art self-supervised training methods and segmentation models. The code and pretrained model are available at https://github.com/openmedlab/MIS-FM. Comment: 13 pages, 8 figures.
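
    The Volume Fusion idea can be sketched compactly. Below is a minimal illustration (Python/NumPy), not the authors' code: the patch size, number of patches, coefficient set, and volume shapes are arbitrary assumptions; the point is that blending foreground patches into a background volume with discrete coefficients yields a free per-voxel label (the coefficient index) for self-supervised segmentation.

    import numpy as np

    def volume_fusion(background, foreground, coefficients=(0.0, 0.25, 0.5, 0.75, 1.0),
                      n_patches=8, patch_size=(16, 16, 16), seed=0):
        """Return (fused_volume, label_volume), where label_volume holds the coefficient index per voxel."""
        rng = np.random.default_rng(seed)
        fused = background.copy()
        labels = np.zeros(background.shape, dtype=np.int64)      # index 0 == coefficient 0 (pure background)
        for _ in range(n_patches):
            # Random patch location and a random discrete fusion coefficient (excluding 0).
            k = rng.integers(1, len(coefficients))
            alpha = coefficients[k]
            z, y, x = (rng.integers(0, background.shape[i] - patch_size[i]) for i in range(3))
            sl = (slice(z, z + patch_size[0]), slice(y, y + patch_size[1]), slice(x, x + patch_size[2]))
            fused[sl] = (1 - alpha) * background[sl] + alpha * foreground[sl]
            labels[sl] = k                                       # the model is trained to predict this index
        return fused, labels

    # Hypothetical usage with two random CT-like sub-volumes.
    bg = np.random.rand(64, 64, 64).astype(np.float32)
    fg = np.random.rand(64, 64, 64).astype(np.float32)
    fused, labels = volume_fusion(bg, fg)                        # (network input, self-supervised target)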

    Learning A Multi-Task Transformer Via Unified And Customized Instruction Tuning For Chest Radiograph Interpretation

    The emergence of multi-modal deep learning models has had a significant impact on clinical applications in the last decade. However, the majority of models are limited to a single task, without considering that disease diagnosis is in fact a multi-task procedure. Here, we demonstrate a unified transformer model specifically designed for multi-modal clinical tasks by incorporating customized instruction tuning. We first compose a multi-task training dataset comprising 13.4 million instruction and ground-truth pairs (with approximately one million radiographs) for the customized tuning, involving both image- and pixel-level tasks. This allows us to unify the various vision-intensive tasks in a single training framework with homogeneous model inputs and outputs, increasing clinical interpretability in one reading. Finally, we demonstrate the overall superior performance of our model compared to prior art on various chest X-ray benchmarks across multiple tasks in both direct inference and finetuning settings. Three radiologists further evaluated the generated reports against the recorded ones, which also demonstrates the enhanced explainability of our multi-task model.
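
    As a rough illustration of how image- and pixel-level tasks can share one instruction-tuning interface, the sketch below (Python) shows hypothetical instruction/ground-truth pairs. The paper's actual schema, field names, and file formats are not given in the abstract, so everything here is assumed for illustration only.

    from dataclasses import dataclass
    from typing import Any

    @dataclass
    class InstructionSample:
        image_id: str        # reference to a radiograph
        instruction: str     # natural-language task description
        target: Any          # ground truth: text for image-level tasks, a mask for pixel-level tasks

    samples = [
        # Image-level task: report generation phrased as an instruction.
        InstructionSample("cxr_000001", "Write a findings report for this chest radiograph.",
                          "Bilateral lower-lobe opacities ..."),
        # Pixel-level task: segmentation phrased as an instruction, with a mask file as the target.
        InstructionSample("cxr_000001", "Segment the left lung.", "left_lung_mask_000001.png"),
    ]

    # A single multi-task model consumes (image, instruction) pairs and is trained to produce
    # the corresponding target, so both task types share one input/output interface.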