4 research outputs found

    Ultrasonically assisted cutting of bio-tissues in microtomy

    Modern-day histology of bio-tissues for supporting stratified-medicine diagnoses requires high-precision cutting to ensure the high-quality, extremely thin specimens used in analysis. Additionally, the cutting quality is significantly affected by the wide variety of soft and hard tissues in the samples. This paper deals with the development of a next-generation microtome that introduces controlled ultrasonic vibration to realise a hybrid cutting process for bio-tissues. The study is based on a combination of advanced experimental and numerical (finite-element) studies of the multi-body dynamics of the cutting system. The quality of samples cut with the prototype is compared with the state of the art.

    Hybrid cutting of bio-tissues

    Modern-day histology of bio-tissues requires high-precision cutting to ensure the high-quality thin specimens used in analysis. The cutting quality is significantly affected by the variety of soft and hard tissues in the samples. This paper deals with the next step of microtome development, employing controlled ultrasonic vibration to realise a hybrid cutting process for bio-tissues. The study is based on a numerical (finite-element) analysis of the multi-body dynamics of a cutting system. Conventional and ultrasonically assisted cutting of bio-tissues were simulated using material models representing cancellous bone and incorporating an estimation of the friction conditions between the cutting blade and the material to be cut. The models allow adjustment of section thickness, cutting speed and amplitude of ultrasonic vibration. The efficiency and quality of cutting depended on the cutting forces, which were compared for the conventional and ultrasonically assisted cutting processes.
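
    A minimal numerical sketch of the blade kinematics behind this hybrid process (not taken from the paper; all parameter values below are hypothetical): in ultrasonically assisted cutting, a harmonic oscillation is superimposed on the nominal blade feed, so when the peak vibratory speed exceeds the feed speed the blade velocity periodically reverses, giving intermittent blade-tissue contact and lower mean cutting forces.

        import numpy as np

        # Hypothetical parameters, chosen only for illustration.
        v_c = 1.0e-3   # nominal cutting (feed) speed, m/s
        f = 20e3       # ultrasonic frequency, Hz
        A = 5e-6       # vibration amplitude, m

        t = np.linspace(0.0, 5.0 / f, 1000)  # five vibration periods

        # Blade position: uniform feed, without and with a superimposed
        # harmonic oscillation along the cutting direction.
        x_cc = v_c * t
        x_uac = v_c * t + A * np.sin(2.0 * np.pi * f * t)

        # Instantaneous blade speed in ultrasonically assisted cutting.
        v_uac = v_c + 2.0 * np.pi * f * A * np.cos(2.0 * np.pi * f * t)

        # A ratio > 1 means the blade speed periodically reverses sign,
        # i.e. the cutting becomes intermittent.
        print("peak vibratory speed / feed speed:", 2.0 * np.pi * f * A / v_c)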

    Production of high-quality extremely-thin histological sections by ultrasonically assisted cutting

    Modern-day histology of biological tissues requires precision cutting of a wide variety of tissue samples for histological analyses. Many common problems arise in conventional microtome sectioning, including curling sections and sections sticking to the blade, which make high-quality sections hard to obtain. This paper deals with the development of a next generation of microtomes that introduce controlled ultrasonic vibration to process biological tissues. Based on a combination of advanced experimental and numerical studies of the multi-body dynamics of a novel cutting system, this study investigated the effects of cutting parameters and the characteristics of ultrasonic excitation, with the aim of designing and manufacturing an ultrasonically assisted cutting device (UACD) for microtomy. The cutting mechanism was detailed to show the advantages of ultrasonically assisted cutting in the creation of high-quality thin sections. A novel prototype was designed and developed to conduct conventional cutting (CC) and ultrasonically assisted cutting (UAC) of biological tissues embedded in wax. Cutting forces, blade wear, blade damage and section quality were assessed for both cutting processes. It was found that the efficiency and quality of cutting depended on the level of cutting forces, which were lower in UAC than in CC. The quality of cut samples with a thickness of 4 μm was better in UAC than in CC. The developed device also enables successful sectioning of thin biological samples with high precision, reduced blade wear and less blade damage. This will increase blade life, with both environmental and economic benefits.

    X4D-SceneFormer: Enhanced scene understanding on 4D point cloud videos through cross-modal knowledge transfer

    The field of 4D point cloud understanding is rapidly developing, with the goal of analyzing dynamic 3D point cloud sequences. However, it remains a challenging task due to the sparsity and lack of texture in point clouds. Moreover, the irregularity of point clouds makes it difficult to align temporal information within video sequences. To address these issues, we propose a novel cross-modal knowledge transfer framework called X4D-SceneFormer. This framework enhances 4D scene understanding by transferring texture priors from RGB sequences using a Transformer architecture with temporal relationship mining. Specifically, the framework is designed with a dual-branch architecture consisting of a 4D point cloud transformer and a Gradient-aware Image Transformer (GIT). The GIT combines visual texture and temporal correlation features to offer rich semantics and dynamics for better point cloud representation. During training, we employ multiple knowledge transfer techniques, including temporal consistency losses and masked self-attention, to strengthen the knowledge transfer between modalities. This leads to enhanced performance during inference using single-modal 4D point cloud inputs. Extensive experiments demonstrate the superior performance of our framework on various 4D point cloud video understanding tasks, including action recognition, action segmentation and semantic segmentation. The results achieve 1st places, i.e., 85.3% (+7.9%) accuracy and 47.3% (+5.0%) mIoU for 4D action segmentation and semantic segmentation, on the HOI4D challenge, outperforming the previous state of the art by a large margin. We release the code at https://github.com/jinglinglingling/X4D
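
    As a hedged sketch of one of the transfer techniques mentioned above (the paper's exact formulation is not reproduced here; all names and tensor shapes below are assumptions), a temporal consistency loss can align per-frame point cloud (student) features with RGB (teacher) features while also matching their frame-to-frame dynamics:

        import torch
        import torch.nn.functional as F

        def temporal_consistency_loss(student_feats, teacher_feats):
            # Hypothetical sketch. Both tensors: (batch, time, dim).
            # The image (teacher) branch is detached so that gradients
            # only update the point cloud (student) branch.
            teacher_feats = teacher_feats.detach()

            # Per-frame cross-modal feature alignment.
            align = F.mse_loss(student_feats, teacher_feats)

            # Temporal consistency: the student's frame-to-frame feature
            # change should mirror the teacher's.
            d_student = student_feats[:, 1:] - student_feats[:, :-1]
            d_teacher = teacher_feats[:, 1:] - teacher_feats[:, :-1]
            temporal = F.mse_loss(d_student, d_teacher)
            return align + temporal

        # Example: batch of 2 sequences, 8 frames, 256-d features.
        s = torch.randn(2, 8, 256, requires_grad=True)
        r = torch.randn(2, 8, 256)
        temporal_consistency_loss(s, r).backward()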
