465 research outputs found

    Evaluation of Modulus of Rigidity By Dynamic Plate Shear Testing

    The modulus of rigidity of wood-based panels was measured by the dynamic plate shear method, which involves measuring the torsional vibration of free, square plates. The proposed equation for calculating the modulus of rigidity agreed to within ±5% with the experimentally determined values, which in turn were very close to moduli determined by other techniques.

    Technical Note: Responses of Vertical Sections of Wood Samples to Cyclical Relative Humidity Changes

    This study investigated the moisture responses of the surface, middle, and central portions, in the thickness direction, of wood samples subjected to cyclic RH changes. The phase lag and amplitude for these sections were determined quantitatively by Fourier analysis. These data were used to suggest a mechanism for the unexpected phenomenon, found in previous work, that moisture changes are slower than dimensional changes.
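
    As a rough illustration of how a phase lag and amplitude can be extracted from a cyclic response by Fourier analysis, the sketch below projects a sampled signal onto the fundamental harmonic of the known forcing period. The function name, the synthetic moisture signal, and the 24 h period are illustrative assumptions, not the study's actual data or procedure.

```python
import numpy as np

def phase_and_amplitude(signal, times, period):
    """Project a sampled response onto the fundamental harmonic of a known
    forcing period, returning (amplitude, phase lag). Hypothetical helper
    illustrating first-harmonic Fourier analysis; assumes `times` covers an
    integer number of periods with uniform sampling (so the projections are
    exact and any DC offset drops out)."""
    omega = 2.0 * np.pi / period
    a = 2.0 * np.mean(signal * np.cos(omega * times))  # cosine component
    b = 2.0 * np.mean(signal * np.sin(omega * times))  # sine component
    amplitude = np.hypot(a, b)
    phase_lag = np.arctan2(-a, b)  # lag relative to a sin(omega * t) forcing
    return amplitude, phase_lag

# Synthetic example: a 24 h RH cycle; the "moisture" response (offset 10,
# amplitude 1.5) lags the forcing by 0.5 rad.
t = np.linspace(0.0, 48.0, 2000, endpoint=False)  # hours, two full periods
response = 10.0 + 1.5 * np.sin(2.0 * np.pi / 24.0 * t - 0.5)
amp, lag = phase_and_amplitude(response, t, 24.0)
```

    Comparing the lag and amplitude of each thickness section against the same forcing cycle is what makes the sections quantitatively comparable.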

    Technical Note: Analysis of Mechanical Relaxation Intensity of Wood at Various Moisture Contents

    This study analyzed mechanical relaxation data with the well-known Gaussian function, from which the relaxation intensity was determined for various moisture contents over a range of temperatures (−81 to 0 °C). These data were used to suggest a range of bonding mechanisms for sorbed water.
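
    A minimal sketch of determining relaxation intensity by fitting a Gaussian: for noise-free illustrative data, the Gaussian parameters (peak intensity, peak temperature, width) can be recovered exactly by a quadratic fit in log space. The function names, synthetic loss values, and parameter choices below are assumptions for illustration, not the paper's data or fitting procedure.

```python
import numpy as np

def gaussian(T, intensity, T0, width):
    """Gaussian loss peak: intensity * exp(-(T - T0)^2 / (2 * width^2))."""
    return intensity * np.exp(-((T - T0) ** 2) / (2.0 * width ** 2))

def fit_gaussian_log(T, y):
    """Recover (intensity, T0, width) by fitting a parabola to log(y):
    log y = c2*T^2 + c1*T + c0, with c2 = -1/(2*width^2), etc.
    Exact for noise-free Gaussian data; illustrative only."""
    c2, c1, c0 = np.polyfit(T, np.log(y), 2)
    width = np.sqrt(-1.0 / (2.0 * c2))
    T0 = -c1 / (2.0 * c2)
    intensity = np.exp(c0 - c1 ** 2 / (4.0 * c2))
    return intensity, T0, width

# Synthetic relaxation peak over the abstract's temperature range (-81 to 0 C);
# peak intensity 0.08 at -40 C with width 15 C -- made-up values.
T = np.linspace(-81.0, 0.0, 40)
loss = gaussian(T, 0.08, -40.0, 15.0)
intensity, T_peak, width = fit_gaussian_log(T, loss)
```

    Real loss data are noisy and may sit on a background, so a nonlinear least-squares fit would normally replace the log-space shortcut; the recovered peak height is the relaxation intensity.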

    Image-to-Graph Convolutional Network for 2D/3D Deformable Model Registration of Low-Contrast Organs

    Organ shape reconstruction based on a single-projection image during treatment has wide clinical scope, e.g., in image-guided radiotherapy and surgical guidance. We propose an image-to-graph convolutional network that achieves deformable registration of a three-dimensional (3D) organ mesh for a low-contrast two-dimensional (2D) projection image. This framework enables simultaneous training of two types of transformation: from the 2D projection image to a displacement map, and from the sampled per-vertex feature to a 3D displacement that satisfies the geometrical constraint of the mesh structure. Assuming application to radiation therapy, the 2D/3D deformable registration performance is verified for multiple abdominal organs that have not been targeted to date, i.e., the liver, stomach, duodenum, and kidney, and for pancreatic cancer. The experimental results show that shape prediction considering relationships among multiple organs can be used to predict respiratory motion and deformation from digitally reconstructed radiographs with clinically acceptable accuracy.
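
    One core step the abstract describes is sampling a per-vertex feature from the 2D map at each projected mesh vertex. A minimal sketch of that gather step, assuming bilinear interpolation on a single-channel map (the shapes, names, and interpolation choice are assumptions, not the paper's implementation):

```python
import numpy as np

def bilinear_sample(feature_map, uv):
    """Sample a (H, W) map at continuous (u, v) pixel coordinates, one row of
    `uv` per mesh vertex. Illustrates gathering a per-vertex feature at the
    vertex's 2D projection; coordinates are clipped to the valid range."""
    H, W = feature_map.shape
    u = np.clip(uv[:, 0], 0.0, W - 1 - 1e-9)
    v = np.clip(uv[:, 1], 0.0, H - 1 - 1e-9)
    u0 = np.floor(u).astype(int)
    v0 = np.floor(v).astype(int)
    fu, fv = u - u0, v - v0  # fractional offsets within the pixel cell
    top = feature_map[v0, u0] * (1 - fu) + feature_map[v0, u0 + 1] * fu
    bot = feature_map[v0 + 1, u0] * (1 - fu) + feature_map[v0 + 1, u0 + 1] * fu
    return top * (1 - fv) + bot * fv

# Toy 4x4 "displacement map" and two projected vertex positions.
fmap = np.arange(16, dtype=float).reshape(4, 4)
vals = bilinear_sample(fmap, np.array([[1.5, 2.0], [3.0, 3.0]]))
```

    In the network described by the abstract this sampling would feed a graph convolution over the mesh, which regularizes the per-vertex 3D displacements with the mesh's geometric structure.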

    2D/3D Deep Image Registration by Learning 3D Displacement Fields for Abdominal Organs

    Deformable registration of two-dimensional/three-dimensional (2D/3D) images of abdominal organs is a complicated task because the abdominal organs deform significantly and their contours are not detected in two-dimensional X-ray images. We propose a supervised deep learning framework that achieves 2D/3D deformable image registration between 3D volumes and single-viewpoint 2D projected images. The proposed method learns the translation from the target 2D projection images and the initial 3D volume to 3D displacement fields. In experiments, we registered 3D computed tomography (CT) volumes to digitally reconstructed radiographs generated from abdominal 4D-CT volumes. For validation, we used 4D-CT volumes of 35 cases and confirmed that the 3D-CT volumes reflecting the nonlinear and local respiratory organ displacement were reconstructed. The proposed method demonstrates performance comparable to that of conventional methods, with a Dice similarity coefficient of 91.6% for the liver region and 85.9% for the stomach region, while estimating significantly more accurate CT values.
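
    The reported accuracies use the Dice similarity coefficient, DSC = 2|A∩B| / (|A| + |B|), computed between predicted and reference binary masks. A minimal sketch of that metric (variable names are illustrative):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|). Returns 1.0 for two empty masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total else 1.0

# Toy 2D masks; in the abstract's setting these would be 3D organ labels.
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
dsc = dice_coefficient(pred, truth)
```

    A DSC of 91.6% for the liver thus means the predicted and reference liver voxel sets overlap almost completely relative to their combined size.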

    Deep Learning Based Lung Region Segmentation with Data Preprocessing by Generative Adversarial Nets

    [2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), 20-24 July 2020, Montreal, QC, Canada]
    In endoscopic surgery, it is necessary to understand the three-dimensional structure of the target region to improve safety. For organs that do not deform much during surgery, preoperative computed tomography (CT) images can be used to understand their three-dimensional structure; however, deformation estimation is necessary for organs that deform substantially. Even though the intraoperative deformation estimation of organs has been widely studied, two-dimensional organ region segmentations from camera images are necessary to perform this estimation. In this paper, we propose a region segmentation method using U-net for the lung, which is an organ that deforms substantially during surgery. Because the accuracy of the results for smoker lungs is lower than that for non-smoker lungs, we improved the accuracy by translating the texture of the lung surface using a CycleGAN.