
    HOLOTumor: 6DoF phantom head pose estimation based on deep learning and brain tumour segmentation for AR visualisation and interaction

    To understand a scene and interact with objects correctly in an augmented reality (AR) application, 6 degrees of freedom (DoF) object pose estimation is a crucial task. Incorporating depth images, which supply exact 3-D coordinates for each pixel, has led to substantial gains; however, depth images are not always available. Mobile phones and tablets, the most common devices for AR apps, often provide no depth information, so much research focuses on estimating the poses of known objects from RGB images alone. Detecting poorly textured 3-D objects is challenging because of their lack of distinctive features, limited appearance variation, ambiguity with the background, occlusion, and lighting and reflection effects. In this study, we built a framework that automates the workflow for brain tumour segmentation and 6 DoF phantom head localisation. Our framework, Hologram-Tumor ("HOLOTumour"), provides an augmented-reality rendering with 3-D visualisation and interaction, generates a 3-D model of the brain with the segmented tumour using a Geodesic-Aided Chan-Vese model on a local magnetic resonance imaging (MRI) dataset, and estimates the 6 DoF pose of the texture-less phantom head with the generalisable object pose estimator Gen6D using only RGB images. In a comparison study covering accuracy and inference runtime, both the 6 DoF pose estimation of the printed phantom head and the brain tumour segmentation achieve promising performance. The HOLOTumour platform can help medical professionals identify brain tumours and can also be used for professional practice and training in medicine.
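As a rough illustration of the segmentation step, the sketch below runs a plain morphological Chan-Vese level set on a single MRI slice using scikit-image. This is a stand-in for the paper's Geodesic-Aided Chan-Vese model, not its actual implementation; the file name, iteration count and smoothing factor are illustrative assumptions.

```python
# Minimal level-set segmentation of one MRI slice (stand-in for the paper's
# Geodesic-Aided Chan-Vese model; parameters and file name are assumptions).
from skimage import io, img_as_float
from skimage.segmentation import morphological_chan_vese

slice_img = img_as_float(io.imread("mri_slice.png", as_gray=True))  # assumed input slice

# Evolve the level set for a fixed number of iterations; the checkerboard
# initialisation needs no manual seed and the smoothing step suppresses noise.
tumour_mask = morphological_chan_vese(
    slice_img,
    200,                          # number of iterations (illustrative)
    init_level_set="checkerboard",
    smoothing=3,
)

print("segmented pixels in slice:", int(tumour_mask.sum()))
```

For the AR rendering step, the following sketch shows how a 6 DoF pose (rotation R, translation t), such as the one an estimator like Gen6D returns for the phantom head, can be used to place 3-D model points in the camera frame and project them onto the RGB image. The intrinsics K, the pose values and the mesh vertices are illustrative assumptions, not outputs of the paper's pipeline.

```python
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # assumed pinhole camera intrinsics
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

R = np.eye(3)                          # assumed rotation from the pose estimator
t = np.array([0.0, 0.0, 0.5])          # assumed translation (metres)

model_points = np.array([[ 0.01, 0.02, 0.00],   # assumed brain-mesh vertices (object frame)
                         [-0.02, 0.01, 0.01]])

# Transform the vertices into the camera frame, then apply the pinhole projection.
cam_points = model_points @ R.T + t
pixels = cam_points @ K.T
pixels = pixels[:, :2] / pixels[:, 2:3]

print(pixels)   # 2-D pixel coordinates where the vertices render in the AR view
```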