6 research outputs found

    Reconstructing 3D lung shape from a single 2D image during the deaeration deformation process using model-based data augmentation

    Three-dimensional (3D) shape reconstruction is particularly important for computer-assisted medical systems, especially in lung surgeries, where large deaeration deformation occurs. Recently, 3D reconstruction methods based on machine learning have achieved considerable success in computer vision. However, it is difficult to apply these approaches to the medical field because collecting a massive amount of clinical data for training is impractical. To solve this problem, this paper proposes a novel 3D shape reconstruction method that adopts both data augmentation techniques and convolutional neural networks. In the proposed method, a deformable statistical model of the 3D lungs is designed to augment varied training data. As the experimental results demonstrate, even with a small database, the proposed method can reconstruct the 3D shape of the lungs during a deaeration deformation process from only one captured 2D image. Moreover, the proposed data augmentation technique can also be used in other fields where training data are insufficient.
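    The abstract does not detail the deformable statistical model, but a common realization of model-based shape augmentation is a PCA statistical shape model whose principal modes are randomly perturbed to synthesize new training shapes. Below is a minimal sketch under that assumption; all function names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np

def build_ssm(shapes):
    """Build a PCA statistical shape model from aligned training shapes.

    shapes: (n_samples, n_points * 3) array of flattened, corresponded
    lung surface vertices. Returns the mean shape, the principal
    deformation modes, and the per-mode standard deviations.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data yields the principal deformation modes.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    stddev = s / np.sqrt(len(shapes) - 1)
    return mean, vt, stddev

def sample_augmented_shape(mean, modes, stddev, n_modes=5, rng=None):
    """Draw a plausible synthetic lung shape by perturbing the first
    n_modes principal modes with Gaussian coefficients."""
    rng = rng or np.random.default_rng()
    coeffs = rng.normal(0.0, stddev[:n_modes])
    return mean + coeffs @ modes[:n_modes]

# Toy example: 20 training shapes of 500 vertices each (random stand-ins
# for real corresponded lung meshes), augmented to one new sample.
rng = np.random.default_rng(0)
training_shapes = rng.normal(size=(20, 500 * 3))
mean, modes, stddev = build_ssm(training_shapes)
synthetic = sample_augmented_shape(mean, modes, stddev, rng=rng)
```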

    Deformation analysis of surface and bronchial structures in intraoperative pneumothorax using deformable mesh registration

    The positions of nodules can change because of intraoperative lung deflation, and modeling pneumothorax-associated deformation remains a challenging issue for intraoperative tumor localization. In this study, we introduce spatial and geometric analysis methods for inflated/deflated lungs and discuss heterogeneity in pneumothorax-associated lung deformation. Contrast-enhanced CT images simulating intraoperative conditions were acquired from live Beagle dogs. The images contain the overall shape of the lungs, including all lobes and internal bronchial structures, and were analyzed to provide a statistical deformation model that could be used as prior knowledge to predict pneumothorax-associated deformation. To address the difficulties of mapping pneumothorax CT images with topological changes and CT intensity shifts, we designed deformable mesh registration techniques for mixed data structures comprising the lobe surfaces and the bronchial centerlines. Three global-to-local registration steps were performed under the constraint that the deformation be spatially continuous and smooth while matching visible bronchial tree structures as closely as possible. The developed framework achieved stable registration with a Hausdorff distance of less than 1 mm and a target registration error of less than 5 mm, and visualized deformation fields that demonstrate per-lobe contractions and rotations with high inter-subject variability. The deformation analysis results show that the strain of the lung parenchyma was 35% higher than that of the bronchi, and that deformation in the deflated lung is heterogeneous.
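    The registration pipeline itself is not reproduced here, but the two reported accuracy metrics, Hausdorff distance and target registration error, are standard and easy to sketch. A minimal illustration assuming point-sampled lobe surfaces and paired anatomical landmarks (the synthetic data below is purely illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two point clouds (n, 3)."""
    d_ab = cKDTree(b).query(a)[0].max()  # farthest a-point from b
    d_ba = cKDTree(a).query(b)[0].max()  # farthest b-point from a
    return max(d_ab, d_ba)

def target_registration_error(warped_landmarks, true_landmarks):
    """Mean Euclidean error over corresponding anatomical landmarks,
    e.g. bronchial branch points, after applying the registration."""
    return np.linalg.norm(warped_landmarks - true_landmarks, axis=1).mean()

# Example with a synthetic surface and its slightly perturbed copy (mm).
rng = np.random.default_rng(0)
fixed = rng.uniform(0, 100, size=(2000, 3))
moving = fixed + rng.normal(0, 0.3, size=fixed.shape)
print(hausdorff_distance(fixed, moving))
print(target_registration_error(moving[:10], fixed[:10]))
```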

    X-ray2Shape: Reconstruction of Organ Shape from a Single X-ray Image using Graph Convolutional Network

    High-resolution 3D images of the body can be acquired with CT and MRI, but during surgery or radiotherapy, often only low-dimensional, local, single-viewpoint images such as endoscopic or X-ray images are available. Moreover, because organs move while deforming due to breathing, reconstructing organ shape at treatment time is a challenging problem. In this study, we propose X-ray2Shape, a framework that reconstructs the 3D organ shape from a single X-ray image using a graph convolutional network (GCN). We introduce a new loss function effective for reconstructing organ shape meshes and, using the organ shape at maximal inspiration generated from the patient's own 3D-CT data as the initial template, learn the relationship between X-ray image features at expiration phases and the organ shape. We report experiments reconstructing the 3D shape of the liver from digitally reconstructed radiographs of the abdominal region, using 4D-CT data of 35 cases with 10 phases each, which confirm the estimation performance of the proposed method.
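    As a rough illustration of the X-ray2Shape idea, the sketch below deforms a template mesh by passing per-vertex coordinates, concatenated with a global image feature vector, through graph convolution layers that aggregate over a normalized adjacency matrix. This is an assumption-laden PyTorch sketch, not the published architecture or loss function:

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One graph convolution: aggregate neighbor features through a
    row-normalized adjacency matrix, then apply a linear map."""
    def __init__(self, in_dim, out_dim, act=True):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.act = act

    def forward(self, x, adj):
        y = self.linear(adj @ x)  # x: (n, in_dim), adj: (n, n)
        return torch.relu(y) if self.act else y

class MeshDeformer(nn.Module):
    """Predict per-vertex displacements of a template organ mesh from a
    global image feature vector (e.g. a CNN encoding of the X-ray)."""
    def __init__(self, img_dim=128, hidden=64):
        super().__init__()
        self.gc1 = GraphConv(3 + img_dim, hidden)
        self.gc2 = GraphConv(hidden, 3, act=False)  # raw displacements

    def forward(self, verts, adj, img_feat):
        # Attach the same image feature to every vertex coordinate.
        feat = img_feat.expand(verts.shape[0], -1)
        x = torch.cat([verts, feat], dim=1)
        return verts + self.gc2(self.gc1(x, adj), adj)

# Toy example: a 4-vertex template mesh with a uniform adjacency matrix.
verts = torch.rand(4, 3)
adj = torch.ones(4, 4) / 4.0
deformed = MeshDeformer()(verts, adj, torch.rand(1, 128))
```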

    Deformable model registration for a single projection image by learning displacement fields

    This article is a technical report without peer review, and its polished and/or extended version may be published elsewhere. Reconstruction of organ shape from a single projection image acquired at treatment time is a research problem with broad clinical applications, including radiotherapy and image-guided surgery support. In this study, we constructed an image-to-graph convolutional neural network framework that achieves deformable registration of a 3D organ model to a single-viewpoint 2D projection image. In this framework, the 2D projection image is translated into a displacement field, and a graph convolutional network learns the relationship between the displacement field and the vertex displacements of the 3D mesh. We applied digitally reconstructed radiographs generated from 4D-CT data to the trained network and confirmed that the 3D shape and location of abdominal organs, most of whose contours are invisible in the image, can be reconstructed with clinically acceptable accuracy.
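    The abstract does not specify how the predicted displacement field drives the mesh, but one plausible reading is that the field is sampled at the projected positions of the mesh vertices. A minimal PyTorch sketch under that assumption (the field layout and coordinate convention are illustrative):

```python
import torch
import torch.nn.functional as F

def sample_field_at_vertices(field, verts_2d, img_size):
    """Bilinearly sample a dense displacement field at projected
    mesh-vertex positions.

    field: (1, C, H, W) displacement field predicted from the
           projection image (C displacement components per pixel).
    verts_2d: (n, 2) vertex positions in (x, y) pixel coordinates.
    Returns: (n, C) displacement vectors, one per vertex.
    """
    # grid_sample expects coordinates normalized to [-1, 1].
    grid = 2.0 * verts_2d / (img_size - 1) - 1.0
    grid = grid.view(1, -1, 1, 2)
    sampled = F.grid_sample(field, grid, align_corners=True)
    return sampled[0, :, :, 0].t()  # (n, C)

# Example: a 3-component field (e.g. x/y/depth displacement) on a
# 256x256 projection image, sampled at 100 projected vertices.
field = torch.rand(1, 3, 256, 256)
verts_2d = torch.rand(100, 2) * 255
disp = sample_field_at_vertices(field, verts_2d, img_size=256.0)
```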

    Shape reconstruction from occluded camera images using generative virtual learning

    In recent years, various machine learning approaches have been considered for the uncertainty inherent in reconstructing 3D shapes from single-viewpoint images. In fields such as medical imaging, where a sufficient amount of data cannot be prepared, virtual learning that uses simulation images during training has been attempted; however, small differences between simulated and real images can strongly degrade estimation performance, even when humans regard the images as equivalent. In this study, we propose a generative virtual learning framework based on image translation that assumes latent variables shared between simulation and real images. For endoscopic camera images of the lung in thoracoscopic lung cancer resection, we confirmed that the proposed method enables training with improved similarity between real images and simulation images generated from the 3D CT volume of the individual patient.
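    A shared-latent-variable image translation can be sketched as two domain-specific autoencoders whose encoders map simulated and real images into one common latent space, so a simulation image can be decoded by the real-domain decoder. The PyTorch sketch below is a toy illustration of that structure only, without the adversarial and reconstruction losses such a model would need for training; all layer sizes are assumptions:

```python
import torch
import torch.nn as nn

def conv_encoder():
    # Maps a 3x64x64 image to a shared 256-dim latent code.
    return nn.Sequential(
        nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        nn.Flatten(), nn.Linear(64 * 16 * 16, 256),
    )

def conv_decoder():
    # Maps a shared latent code back to a 3x64x64 image.
    return nn.Sequential(
        nn.Linear(256, 64 * 16 * 16), nn.Unflatten(1, (64, 16, 16)),
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
    )

class SharedLatentTranslator(nn.Module):
    """Two domain-specific encoder/decoder pairs sharing one latent
    space: encode a simulation image with enc_sim, decode it with
    dec_real to obtain a realistic counterpart (and vice versa)."""
    def __init__(self):
        super().__init__()
        self.enc_sim, self.enc_real = conv_encoder(), conv_encoder()
        self.dec_sim, self.dec_real = conv_decoder(), conv_decoder()

    def sim_to_real(self, x):
        return self.dec_real(self.enc_sim(x))

model = SharedLatentTranslator()
fake_real = model.sim_to_real(torch.rand(2, 3, 64, 64))  # (2, 3, 64, 64)
```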

    Proceedings of the 10th International Chemical and Biological Engineering Conference - CHEMPOR 2008

    This volume contains full papers presented at the 10th International Chemical and Biological Engineering Conference - CHEMPOR 2008, held in Braga, Portugal, between September 4th and 6th, 2008.