
    Transient Thermal and Structural Analysis of Cylinder and Bolted Joints on BOG Compressor During Starting Process

    Get PDF
    A BOG (boil-off gas) compressor handles the gas that evaporates from the LNG storage tank, and its working temperature is below -120℃. There are two types of starting process for a BOG compressor: direct starting and no-load starting. During the starting process, the temperature of the cylinder changes rapidly and introduces additional thermal stress in some parts of the cylinder, especially the bolted joints on the cylinder head. In this paper, a transient finite element thermal analysis of the cylinder is presented with improved boundary-condition settings, such as accounting for ice build-up on the cylinder wall. Theoretical and transient FE analyses are then carried out for the bolted joints of the cylinder head under the two starting processes. The results show that the maximum temperature difference on the cylinder is 81.5℃ during direct starting, while it decreases to 60.9℃ in no-load starting. Along the bolted joints of the cylinder head, the maximum temperature difference reaches 52℃ in direct starting and 45℃ in no-load starting. The preload increases rapidly by more than 60%, and the resulting mean tensile stress in the bolt approaches the yield strength. In addition, the preload is distributed unevenly during the starting process, with a maximum unevenness of 20% in direct starting and 15% in no-load starting, indicating that no-load starting offers a more stable starting process. Finally, based on the theoretical analysis, a preload adjustment method is proposed to ensure the safety and effectiveness of the bolted joints on the BOG compressor.
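    The abstract does not give the underlying joint model, but the preload change it reports can be illustrated with the standard series-spring model of a bolted joint, in which a mismatch between the free thermal contraction of the bolt and of the clamped head is shared according to their stiffnesses. The sketch below is a generic illustration with assumed material properties, temperatures, and stiffnesses; it is not the paper's model or data.

```python
# Minimal sketch (illustrative only, not the paper's model): change in bolt
# tension caused by differential thermal contraction in a bolted joint,
# using the series-spring model with bolt stiffness k_bolt and clamped-member
# stiffness k_member. All numerical values below are assumptions.

def thermal_preload_change(alpha_bolt, alpha_member, dT_bolt, dT_member,
                           grip_length, k_bolt, k_member):
    """Extra axial bolt force (N); positive means the joint tightens."""
    # Free (unconstrained) length changes of bolt and clamped stack over the grip
    d_bolt = alpha_bolt * dT_bolt * grip_length
    d_member = alpha_member * dT_member * grip_length
    # Springs in series share the mismatch; if the bolt shrinks more than the
    # member stack, the bolt is stretched further and its tension rises.
    k_eff = k_bolt * k_member / (k_bolt + k_member)
    return (d_member - d_bolt) * k_eff

if __name__ == "__main__":
    dF = thermal_preload_change(
        alpha_bolt=12e-6,    # 1/K, steel bolt (assumed)
        alpha_member=12e-6,  # 1/K, cylinder-head material (assumed)
        dT_bolt=-80.0,       # K, bolt cools quickly during cold start (assumed)
        dT_member=-30.0,     # K, head lags behind during the transient (assumed)
        grip_length=0.05,    # m
        k_bolt=4.0e8,        # N/m
        k_member=1.2e9,      # N/m
    )
    print(f"Additional bolt tension: {dF / 1e3:.1f} kN")
```

    The sign of the change depends on which side contracts more during the transient; the point of the sketch is only that a temperature lag of a few tens of kelvin across the joint is enough to shift the preload by a significant fraction of its nominal value.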

    MuraNet: Multi-task Floor Plan Recognition with Relation Attention

    Full text link
    The recognition of information in floor plan data requires both detection and segmentation models. However, relying on several single-task models can lead to ineffective use of relevant information when multiple tasks are present simultaneously. To address this challenge, we introduce MuraNet, an attention-based multi-task model for segmentation and detection in floor plan data. MuraNet adopts a unified encoder, called MURA, as the backbone, with two separate branches: an enhanced segmentation decoder branch and a decoupled detection head branch based on YOLOX, for the segmentation and detection tasks respectively. The architecture of MuraNet is designed to exploit the fact that walls, doors, and windows usually constitute the primary structure of a floor plan. By jointly training the model on both detection and segmentation tasks, we believe MuraNet can effectively extract and utilize relevant features for both tasks. Our experiments on the public CubiCasa5k dataset show that MuraNet converges faster during training than single-task models such as U-Net and YOLOv3, and that it improves the average AP and IoU in the detection and segmentation tasks, respectively. Our ablation experiments demonstrate that the attention-based unified backbone of MuraNet achieves better feature extraction in floor plan recognition tasks, and that the decoupled multi-head branches for the different tasks further improve model performance. We believe the proposed MuraNet model can address the disadvantages of single-task models and improve the accuracy and efficiency of floor plan recognition. Comment: Document Analysis and Recognition - ICDAR 2023 Workshops. ICDAR 2023. Lecture Notes in Computer Science, vol 14193. Springer, Cham.
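    The shared-backbone/two-branch layout described above can be sketched schematically as follows. This is an assumed, toy PyTorch structure (not the released MuraNet/MURA code or the YOLOX head): one encoder feeds a segmentation decoder and a detection head, and the two task losses are summed so both gradients flow into the shared features.

```python
# Schematic multi-task layout (assumed, not the MuraNet implementation):
# shared encoder + segmentation decoder branch + detection head branch.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, in_ch=3, width=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.features(x)  # downsampled feature map shared by both branches

class MultiTaskFloorPlanNet(nn.Module):
    def __init__(self, n_seg_classes=3, n_det_outputs=5):
        super().__init__()
        self.encoder = SharedEncoder()
        # Segmentation branch: upsample back to the input resolution
        self.seg_head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, n_seg_classes, 4, stride=2, padding=1),
        )
        # Detection branch: per-location predictions (stand-in for a YOLO-style head)
        self.det_head = nn.Conv2d(64, n_det_outputs, 1)

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.det_head(feats)

# Joint training step: both task losses update the shared encoder.
model = MultiTaskFloorPlanNet()
img = torch.randn(2, 3, 128, 128)
seg_target = torch.randint(0, 3, (2, 128, 128))
seg_logits, det_out = model(img)
seg_loss = nn.CrossEntropyLoss()(seg_logits, seg_target)
det_loss = det_out.pow(2).mean()  # placeholder for a real detection loss
(seg_loss + det_loss).backward()
```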

    SNN2ANN: A Fast and Memory-Efficient Training Framework for Spiking Neural Networks

    Full text link
    Spiking neural networks (SNNs) are efficient computation models for low-power environments. Spike-based backpropagation (BP) algorithms and ANN-to-SNN (ANN2SNN) conversion are successful techniques for SNN training. Nevertheless, spike-based BP training is slow and requires a large memory budget. Although ANN2SNN provides a low-cost way to train SNNs, it requires many inference time steps to mimic the well-trained ANN and achieve good performance. In this paper, we propose an SNN-to-ANN (SNN2ANN) framework to train SNNs in a fast and memory-efficient way. SNN2ANN consists of two components: (a) a weight-sharing architecture between the ANN and the SNN, and (b) spiking mapping units. First, the architecture trains the shared weights on the ANN branch, resulting in fast training and low memory cost for the SNN. Second, the spiking mapping units ensure that the activation values of the ANN correspond to the spiking features. As a result, the classification error of the SNN can be optimized by training the ANN branch. In addition, we design an adaptive threshold adjustment (ATA) algorithm to address the noisy-spike problem. Experimental results show that SNN2ANN-based models perform well on the benchmark datasets (CIFAR-10, CIFAR-100, and Tiny-ImageNet). Moreover, SNN2ANN achieves comparable accuracy with 0.625x the time steps, 0.377x the training time, 0.27x the GPU memory cost, and 0.33x the spike activity of the spike-based BP model.
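    One plausible reading of a "spiking mapping unit" (my interpretation, not the authors' formulation) is an activation that clips and quantizes ANN activations to the discrete firing rates an SNN can express over T time steps, so the ANN branch only ever produces values the spiking branch can reproduce. A minimal sketch of that idea, with a straight-through estimator to keep the ANN branch trainable:

```python
# Rough sketch (assumed interpretation, not the SNN2ANN code): constrain ANN
# activations to the T+1 firing rates (0/T, 1/T, ..., T/T) an SNN can emit
# in T steps, using a straight-through estimator for the backward pass.
import torch
import torch.nn as nn

class SpikeRateActivation(nn.Module):
    def __init__(self, time_steps=4, threshold=1.0):
        super().__init__()
        self.T = time_steps
        self.threshold = threshold  # stands in for an adjustable firing threshold

    def forward(self, x):
        # Normalize by the threshold and clip to the representable range [0, 1]
        r = torch.clamp(x / self.threshold, 0.0, 1.0)
        # Quantize to the discrete rates reachable in T time steps
        q = torch.round(r * self.T) / self.T
        # Straight-through estimator: forward uses q, backward uses r's gradient
        return r + (q - r).detach()

act = SpikeRateActivation(time_steps=4)
x = torch.randn(8, requires_grad=True)
y = act(x)
y.sum().backward()
print(y, x.grad)
```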

    Graph Condensation for Graph Neural Networks

    Full text link
    Given the prevalence of large-scale graphs in real-world applications, the storage and time required to train neural models on them have raised increasing concerns. To alleviate these concerns, we propose and study the problem of graph condensation for graph neural networks (GNNs). Specifically, we aim to condense the large original graph into a small, synthetic, and highly informative graph, such that GNNs trained on the small graph and on the large graph have comparable performance. We approach the condensation problem by imitating the GNN training trajectory on the original graph through the optimization of a gradient matching loss, and we design a strategy to condense node features and structural information simultaneously. Extensive experiments demonstrate the effectiveness of the proposed framework in condensing different graph datasets into informative smaller graphs. In particular, we are able to approximate the original test accuracy by 95.3% on Reddit, 99.8% on Flickr, and 99.0% on Citeseer, while reducing the graph size by more than 99.9%, and the condensed graphs can be used to train various GNN architectures. Comment: 16 pages, 4 figures.
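    The core gradient-matching idea can be sketched in a few lines: compute the gradient of the GNN loss on the original graph and on the synthetic graph at the same parameters, and update the synthetic node features so the two gradients agree. The sketch below is schematic and assumed (dense tensors, an identity adjacency, a one-layer linear "GCN"), not the paper's implementation, which also condenses structure and follows the full training trajectory.

```python
# Minimal schematic of gradient matching for graph condensation (assumed toy
# setup, not the authors' code): learn synthetic node features x_syn so that
# the GNN gradient on the synthetic graph matches the gradient on the real one.
import torch
import torch.nn.functional as F

def gcn_logits(adj, x, w):
    return adj @ x @ w  # one-layer GCN stand-in: \hat{A} X W

def grads_wrt(w, adj, x, y):
    loss = F.cross_entropy(gcn_logits(adj, x, w), y)
    return torch.autograd.grad(loss, w, create_graph=True)[0]

torch.manual_seed(0)
n_real, n_syn, d, c = 100, 10, 16, 3
adj_real = torch.eye(n_real)                 # placeholder normalized adjacency
x_real = torch.randn(n_real, d)
y_real = torch.randint(0, c, (n_real,))

x_syn = torch.randn(n_syn, d, requires_grad=True)  # learnable synthetic features
adj_syn = torch.eye(n_syn)                   # identity graph for simplicity
y_syn = torch.arange(n_syn) % c
opt = torch.optim.Adam([x_syn], lr=0.01)

w = torch.randn(d, c, requires_grad=True)    # GNN weights, held fixed in this sketch
for step in range(100):
    g_real = grads_wrt(w, adj_real, x_real, y_real).detach()
    g_syn = grads_wrt(w, adj_syn, x_syn, y_syn)
    match_loss = (g_real - g_syn).pow(2).sum()  # gradient matching objective
    opt.zero_grad()
    match_loss.backward()
    opt.step()
```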

    Virtual Reality Based Robot Teleoperation via Human-Scene Interaction

    Full text link
    Robot teleoperation has achieved great success in various situations, including chemical-pollution rescue, disaster relief, and long-distance manipulation. In this article, we propose a virtual reality (VR) based robot teleoperation system that achieves more efficient and natural interaction with humans in different scenes. A user-friendly VR interface is designed to help users interact with a desktop scene using their hands efficiently and intuitively. To improve the user experience and reduce workload, we simulate the manipulation in a physics engine to build a preview of the post-manipulation scene in the virtual environment before execution. We conduct experiments with different users and compare our system with a direct control method across several teleoperation tasks. The user study demonstrates that the proposed system enables users to perform operations more instinctively with a lighter mental workload. Users can complete pick-and-place and object-stacking tasks in a remarkably short time, even as beginners. Our code is available at https://github.com/lingxiaomeng/VR_Teleoperation_Gen3.

    Lymphocytes From Patients With Type 1 Diabetes Display a Distinct Profile of Chromatin Histone H3 Lysine 9 Dimethylation: An Epigenetic Study in Diabetes

    Get PDF
    OBJECTIVE—The complexity of interactions between genes and the environment is a major challenge for type 1 diabetes studies. Nuclear chromatin is the interface between genetics and the environment and the principal carrier of epigenetic information. Because histone tail modifications in chromatin are linked to gene transcription, we hypothesized that histone methylation patterns in cells from type 1 diabetic patients can provide novel epigenetic insights into type 1 diabetes and its complications.