24 research outputs found

    In situ 4D tomography image analysis framework to follow sintering within 3D-printed glass scaffolds

    We propose a novel image analysis framework to automate the analysis of X-ray microtomography images of sintering ceramics and glasses, using open-source toolkits and machine learning. Additive manufacturing (AM) of glasses and ceramics usually requires sintering of green bodies. Sintering causes shrinkage, which presents a challenge for controlling the metrology of the final architecture. Therefore, being able to monitor sintering in 3D over time (termed 4D) is important when developing new porous ceramics or glasses. Synchrotron X-ray tomographic imaging allows in situ, real-time capture of the sintering process at both micro and macro scales using a furnace rig, facilitating 4D quantitative analysis of the process. The proposed image analysis framework is capable of tracking and quantifying the densification of glass or ceramic particles within multiple volumes of interest (VOIs), along with structural changes over time, using 4D image data. The framework is demonstrated by 4D quantitative analysis of bioactive glass ICIE16 within a 3D-printed scaffold. Here, the densification of glass particles within three VOIs was tracked and quantified, along with the change in strut diameter and inter-strut pore size over the 3D image series, delivering new insights into the sintering mechanism of ICIE16 bioactive glass particles at both micro and macro scales.
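The densification tracking described above can be sketched as a per-VOI solid-fraction computation over the time series. This is a minimal illustration under assumed details (a simple global threshold, a cubic VOI, synthetic data), not the paper's actual pipeline:

```python
import numpy as np

def voi_density(volume, voi, threshold=0.5):
    """Solid volume fraction inside a box-shaped VOI of a grayscale tomogram."""
    z0, z1, y0, y1, x0, x1 = voi
    sub = volume[z0:z1, y0:y1, x0:x1]
    return float((sub > threshold).mean())

def track_densification(series, voi, threshold=0.5):
    """Solid fraction in the same VOI at each timestep of a 4D image series."""
    return [voi_density(v, voi, threshold) for v in series]

# Synthetic example: a particle bed that densifies over three timesteps
# (lowering the binarization cutoff mimics growing solid phase).
rng = np.random.default_rng(0)
base = rng.random((32, 32, 32))
series = [(base > c).astype(float) for c in (0.7, 0.5, 0.3)]
fractions = track_densification(series, (8, 24, 8, 24, 8, 24))
assert fractions == sorted(fractions)  # monotone densification
```

Plotting `fractions` against the furnace temperature log would give the kind of per-VOI densification curve the abstract describes.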

    Detection and tracking volumes of interest in 3D printed tissue engineering scaffolds using 4D imaging modalities.

    Additive manufacturing (AM) platforms allow the production of patient-specific tissue engineering scaffolds with desirable architectures. Although AM platforms offer exceptional control over architecture, post-processing methods such as sintering and freeze-drying often deform the printed scaffold structure. In situ 4D imaging can be used to analyze the changes that occur during post-processing. Visualization and analysis of changes in selected volumes of interest (VOIs) over time are essential to understand the underlying mechanisms of scaffold deformation. Yet automated detection and tracking of VOIs in a 3D-printed scaffold over time using 4D image data is currently an unsolved image processing task. This paper proposes a new image processing technique to segment, detect and track volumes of interest in 3D-printed tissue engineering scaffolds. The method is validated using 4D synchrotron-sourced microCT image data captured during the in situ sintering of bioactive glass scaffolds. The proposed method will contribute to the development of patient-specific scaffolds with controllable designs and optimum properties.
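The detection step can be illustrated with a threshold-plus-connected-components sketch; the paper's actual segmentation method is not given here, so the approach, thresholds, and names below are assumptions:

```python
import numpy as np
from scipy import ndimage

def detect_vois(binary, min_voxels=8):
    """Label connected components in a binarized 3D volume and return the
    bounding box of each component large enough to count as a VOI."""
    labels, _ = ndimage.label(binary)          # 6-connected labeling
    counts = np.bincount(labels.ravel())       # voxel count per label
    boxes = {}
    for lab, slc in enumerate(ndimage.find_objects(labels), start=1):
        if slc is not None and counts[lab] >= min_voxels:
            boxes[lab] = tuple((s.start, s.stop) for s in slc)
    return boxes

# Two synthetic "struts" in an otherwise empty volume.
vol = np.zeros((16, 16, 16), dtype=bool)
vol[2:6, 2:6, 2:6] = True        # 64-voxel blob
vol[10:13, 10:13, 10:13] = True  # 27-voxel blob
boxes = detect_vois(vol, min_voxels=8)
assert len(boxes) == 2
assert boxes[1] == ((2, 6), (2, 6), (2, 6))
```

Tracking across timesteps could then match components between consecutive frames, for example by nearest centroid, turning per-frame detections into trajectories.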

    International Consensus Statement on Rhinology and Allergy: Rhinosinusitis

    Background: The 5 years since the publication of the first International Consensus Statement on Allergy and Rhinology: Rhinosinusitis (ICAR‐RS) have witnessed foundational progress in our understanding and treatment of rhinologic disease. These advances are reflected within the more than 40 new topics covered within the ICAR‐RS‐2021 as well as updates to the original 140 topics. This executive summary consolidates the evidence‐based findings of the document. Methods: ICAR‐RS presents over 180 topics in the forms of evidence‐based reviews with recommendations (EBRRs), evidence‐based reviews, and literature reviews. The highest grade structured recommendations of the EBRR sections are summarized in this executive summary. Results: ICAR‐RS‐2021 covers 22 topics regarding the medical management of RS, which are grade A/B and are presented in the executive summary. Additionally, 4 topics regarding the surgical management of RS are grade A/B and are presented in the executive summary. Finally, a comprehensive evidence‐based management algorithm is provided. Conclusion: This ICAR‐RS‐2021 executive summary provides a compilation of the evidence‐based recommendations for medical and surgical treatment of the most common forms of RS.

    Dynamics of four legged multipurpose rope climbing robot

    Legged vehicles can walk on rough and uneven surfaces with a high degree of compliance. This is one of the key reasons why legged machines have received increasing attention from the scientific community. A second consideration is minimizing the overall cost of a practical robot so as to increase its applicability in various scenarios. This paper focuses on the construction of a four-legged multipurpose rope-climbing robot and the methods used to minimize the cost of its implementation.

    Toolkit for extracting electrocardiogram signals from scanned trace reports

    Cardiovascular disease (CVD) is the leading cause of death throughout the world. Since electrocardiogram (ECG) reports have great potential for predicting CVD, the demand for their real-time analysis is high. Although algorithms exist to perform this analysis, most countries still use analogue acquisition systems that can only output a printed trace, so the signal must be extracted from these printouts before analysis. Over time, as the reports pile up and the trace fades from the printout, the task becomes increasingly difficult. The method presented here focuses specifically on extracting signals from faded traces. Due to the large variability of scans, it is difficult to automate this task completely. In this paper, we propose several tools for ECG extraction that require minimal user involvement. The proposed method was tested on a dataset of 550 trace snippets, and comparative analysis shows an average accuracy of 96%.
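One way to recover a faded trace, sketched below, is a column-wise intensity-weighted centroid: weighting by darkness tolerates grey rather than black ink. This is a minimal illustration under strong assumptions (a single trace on a cleaned, deskewed scan with gridlines already removed), not the toolkit itself:

```python
import numpy as np

def extract_trace(scan):
    """Recover a 1-D signal from a scanned trace image.

    scan: 2-D grayscale array, 0 = dark ink, 1 = white paper, row 0 at top.
    Each column's sample is the darkness-weighted centroid of its pixels.
    """
    darkness = 1.0 - scan                        # faded ink still gets weight
    rows = np.arange(scan.shape[0])[:, None]
    weight = darkness.sum(axis=0)
    centroid = (darkness * rows).sum(axis=0) / np.where(weight == 0, 1, weight)
    # Convert top-down row index to signal amplitude (up = positive).
    return scan.shape[0] - 1 - centroid

# Synthetic faded sine trace on white paper.
h, w = 64, 200
img = np.ones((h, w))
x = np.arange(w)
y = (h / 2 + 20 * np.sin(2 * np.pi * x / 50)).astype(int)
img[y, x] = 0.6                                  # faded grey ink
sig = extract_trace(img)
assert abs(sig[0] - 31.0) < 1e-9                 # baseline at mid-height
```

A real pipeline would still need the manual steps the paper keeps for the user, such as cropping a lead and masking the background grid.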

    Mobile based wound measurement

    This paper proposes a portable wound area measurement method based on the segmentation of digital images. Its objective is to provide a practical, fast and non-invasive technique for medical staff to monitor the healing process of chronic wounds. Segmentation is based on active contour models, which identify the wound border irrespective of coloration and shape. The initial segmentation can also be modified by the user, providing greater control and accuracy. Area measurements are further normalized to remove the effects of camera distance and angle. The application has been implemented for the Android platform version 2.2, with a prototype model running on a Samsung Galaxy Tab. The results to evaluate the efficacy of the application have been encouraging, with an accuracy level of 90%.
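The normalization step can be sketched with a reference marker of known physical size in the frame; dividing by the marker's pixel count cancels camera distance (and, to first order, viewing angle) because both regions scale by the same factor. The marker-based calibration is an assumption for illustration, not necessarily the paper's method:

```python
import numpy as np

def wound_area_cm2(wound_mask, marker_mask, marker_area_cm2):
    """Physical wound area from two binary masks in the same photograph."""
    px_per_cm2 = marker_mask.sum() / marker_area_cm2
    return wound_mask.sum() / px_per_cm2

# Segmented wound (1200 px) next to a 1 cm^2 calibration sticker (400 px).
wound = np.zeros((100, 100), dtype=bool)
wound[10:40, 10:50] = True
marker = np.zeros((100, 100), dtype=bool)
marker[60:80, 60:80] = True
area = wound_area_cm2(wound, marker, marker_area_cm2=1.0)
assert abs(area - 3.0) < 1e-9  # 1200 / 400 = 3 cm^2
```

The wound mask itself would come from the active contour segmentation, after any manual adjustment by the user.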

    Hardware interface for haptic feedback in laparoscopic surgery simulators

    Minimally Invasive Surgeries (MIS) such as laparoscopic procedures are increasingly preferred over conventional surgeries due to their many advantages. Laparoscopic surgical procedures are very complex compared to open surgeries and require a high level of experience and expertise. Hybrid surgery simulators that use physical phantoms for training are expensive and not readily available in the majority of health care facilities around the world. Therefore, computer simulation, or Virtual Reality (VR), is a better way to acquire skills for MIS. A VR simulator incorporating haptic feedback provides comprehensive training closer to the real-world experience. In this paper, we present a novel approach to incorporating force feedback into VR laparoscopic surgery training. The proposed interface provides force feedback in all three axes, at three levels of intensity. Computational models of abdominal organs were generated using the cryosection data of the Visible Human Project of the National Library of Medicine, USA. The organ models were developed with three basic force categories: soft, mild and hard. A hardware interface was developed to provide force feedback for the interaction of virtual tools with these organ models while generating tool navigation information for the VR simulator.
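The three force categories could be realized with something as simple as a clamped linear spring per tissue class; the stiffness values, names, and force limit below are illustrative assumptions, not the paper's calibration:

```python
# Hypothetical stiffness per tissue category, in newtons per millimetre.
STIFFNESS_N_PER_MM = {"soft": 0.2, "mild": 0.6, "hard": 1.5}

def feedback_force(tissue, penetration_mm, max_force_n=5.0):
    """Reaction force for a virtual tool pressing into tissue.

    Hooke's law clamped at an assumed actuator limit; zero when the tool
    is not in contact (non-positive penetration).
    """
    if penetration_mm <= 0:
        return 0.0
    return min(STIFFNESS_N_PER_MM[tissue] * penetration_mm, max_force_n)

assert feedback_force("soft", 2.0) == 0.4
assert feedback_force("hard", 2.0) == 3.0
assert feedback_force("hard", 10.0) == 5.0   # clamped at actuator limit
assert feedback_force("mild", -1.0) == 0.0   # tool not in contact
```

In a full simulator this force would be computed per axis from the tool-organ collision depth and sent to the hardware interface each haptic frame.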

    Online status monitoring system for patients in intensive care unit

    In modern ICUs, patients' vital signs are continuously monitored by sophisticated equipment. The various health parameters these devices report must be observed continuously so that the necessary actions can be taken when a patient becomes unstable, so it is critical to be aware of the patient's real-time health parameters at all times. We introduce a remotely accessible centralized monitoring system that provides a single observation point for health professionals, both remotely and locally. The proposed system consolidates the most critical information displayed on ICU monitoring equipment into one observation point, so doctors can observe patient information in real time from anywhere. The system also addresses mistakes that occur through unawareness of time-critical situations when information must be read separately from each device around the patient's bed. The proposed system interconnects stand-alone ICU equipment and sends the data to a database. A web application running on top of the database provides real-time information about the patient to the doctor.
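The aggregation idea can be sketched as a store of the latest value per (patient, parameter), which is what a single observation point serves to viewers. This is a minimal in-memory sketch with illustrative names; the actual system persists to a database behind a web application:

```python
import time

class CentralMonitor:
    """Latest-value store fed by stand-alone ICU devices and read by viewers."""

    def __init__(self):
        self._latest = {}                      # (patient, parameter) -> (value, ts)

    def push(self, patient, parameter, value, ts=None):
        """Called by a device interface whenever it reads a new value."""
        self._latest[(patient, parameter)] = (value, ts if ts is not None else time.time())

    def snapshot(self, patient):
        """All current parameters for one patient, as a viewer would see them."""
        return {param: val for (pid, param), (val, _) in self._latest.items()
                if pid == patient}

m = CentralMonitor()
m.push("bed-3", "heart_rate", 82)
m.push("bed-3", "spo2", 97)
m.push("bed-3", "heart_rate", 88)              # newer reading overwrites
assert m.snapshot("bed-3") == {"heart_rate": 88, "spo2": 97}
```

The stored timestamps would let the web application flag stale devices, one of the time-critical situations the abstract mentions.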