31 research outputs found

    Multifunctional microbubbles and nanobubbles for photoacoustic and ultrasound imaging

    Get PDF
    We develop a novel dual-modal contrast agent, encapsulated-ink poly(lactic-co-glycolic acid) (PLGA) microbubbles and nanobubbles, for photoacoustic and ultrasound imaging. Soft gelatin phantoms with embedded tumor simulators of encapsulated-ink PLGA microbubbles and nanobubbles at various concentrations are clearly shown in both photoacoustic and ultrasound images. In addition, using photoacoustic imaging, we successfully image samples positioned below 1.8-cm-thick chicken breast tissue. Potentially, simultaneous photoacoustic and ultrasound imaging enhanced by encapsulated-dye PLGA microbubbles or nanobubbles can be a valuable tool for intraoperative assessment of tumor boundaries and therapeutic margins.

    Fabricating multifunctional microbubbles and nanobubbles for concurrent ultrasound and photoacoustic imaging

    Get PDF
    Background: Clinical ultrasound (US) uses ultrasonic scattering contrast to characterize subcutaneous anatomic structures. Photoacoustic (PA) imaging detects the functional properties of thick biological tissue with high optical contrast. In the case of image-guided cancer ablation therapy, simultaneous US and PA imaging can be useful for intraoperative assessment of tumor boundaries and ablation margins. In this regard, accurate co-registration between imaging modalities and high sensitivity to cancer cells are important. Methods: We synthesized poly(lactic-co-glycolic acid) (PLGA) microbubbles (MBs) and nanobubbles (NBs) encapsulating India ink or indocyanine green (ICG). Multiple tumor simulators were fabricated by entrapping ink MBs or NBs at various concentrations in gelatin phantoms for simultaneous US and PA imaging. MBs and NBs were also conjugated with the CC49 antibody to target TAG-72, a human glycoprotein complex expressed in many epithelial-derived cancers. Results: Accurate co-registration and intensity correlation were observed in US and PA images of MB and NB tumor simulators. MBs and NBs conjugated with CC49 effectively bound to over-expressed TAG-72 in LS174T colon cancer cell cultures. ICG was also encapsulated in MBs and NBs for the potential to integrate US, PA, and fluorescence imaging. Conclusions: Multifunctional MBs and NBs can potentially be used as a general contrast agent for multimodal intraoperative imaging of tumor boundaries and therapeutic margins.

    REFUGE Challenge: A unified framework for evaluating automated methods for glaucoma assessment from fundus photographs

    Full text link
    Glaucoma is one of the leading causes of irreversible but preventable blindness in working-age populations. Color fundus photography (CFP) is the most cost-effective imaging modality to screen for retinal disorders. However, its application to glaucoma has been limited to the computation of a few related biomarkers, such as the vertical cup-to-disc ratio. Deep learning approaches, although widely applied for medical image analysis, have not been extensively used for glaucoma assessment due to the limited size of the available data sets. Furthermore, the lack of a standardized benchmarking strategy makes it difficult to compare existing methods in a uniform way. To overcome these issues, we set up the Retinal Fundus Glaucoma Challenge, REFUGE (https://refuge.grand-challenge.org), held in conjunction with MICCAI 2018. The challenge consisted of two primary tasks, namely optic disc/cup segmentation and glaucoma classification. As part of REFUGE, we have publicly released a data set of 1200 fundus images with ground-truth segmentations and clinical glaucoma labels, currently the largest existing one. We have also built an evaluation framework to ease and ensure fairness in the comparison of different models, encouraging the development of novel techniques in the field. Twelve teams qualified and participated in the online challenge. This paper summarizes their methods and analyzes their corresponding results. In particular, we observed that two of the top-ranked teams outperformed two human experts in the glaucoma classification task.
Furthermore, the segmentation results were in general consistent with the ground truth annotations, with complementary outcomes that can be further exploited by ensembling the results.
Published as: Orlando, J. I., Fu, H., Breda, J. B., Van Keer, K., Bathula, D. R., Diaz-Pinto, A., Fang, R., et al. (2020). REFUGE Challenge: A unified framework for evaluating automated methods for glaucoma assessment from fundus photographs. Medical Image Analysis, 59, 1-21. https://doi.org/10.1016/j.media.2019.101570
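The vertical cup-to-disc ratio mentioned in the REFUGE abstract can be illustrated with a minimal sketch: given binary segmentations of the optic cup and optic disc, it is the ratio of their vertical extents. The toy pixel lists and function names below are illustrative assumptions, not part of the REFUGE evaluation code.

```python
# Minimal sketch of the vertical cup-to-disc ratio (CDR) biomarker.
# Masks are toy lists of (row, col) pixels, not real REFUGE annotations.

def vertical_extent(mask_pixels):
    """Vertical span of a binary region, in pixels."""
    rows = [r for r, _ in mask_pixels]
    return max(rows) - min(rows) + 1

def vertical_cdr(cup_pixels, disc_pixels):
    """Vertical cup-to-disc ratio; larger values suggest glaucomatous cupping."""
    return vertical_extent(cup_pixels) / vertical_extent(disc_pixels)

# Toy example: disc spans rows 10..29 (20 px), cup spans rows 15..24 (10 px).
disc = [(r, 5) for r in range(10, 30)]
cup = [(r, 5) for r in range(15, 25)]
print(vertical_cdr(cup, disc))  # 0.5
```

In practice the masks would come from the segmentation models the challenge evaluates, and the extents would be measured along the anatomical vertical axis of the disc.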

    Informing the Design of Privacy-Empowering Tools for the Connected Home

    Full text link
    Connected devices in the home represent a potentially grave new privacy threat due to their unfettered access to the most personal spaces in people's lives. Prior work has shown that despite concerns about such devices, people often lack sufficient awareness, understanding, or means of taking effective action. To explore the potential for new tools that support such needs directly, we developed Aretha, a privacy assistant technology probe that combines a network disaggregator, personal tutor, and firewall to empower end-users with both the knowledge and the mechanisms to control disclosures from their homes. We deployed Aretha in three households over six weeks, with the aim of understanding how this combination of capabilities might enable users to gain awareness of data disclosures by their devices, form educated privacy preferences, and block unwanted data flows. The probe, with its novel affordances as well as its limitations, prompted users to co-adapt, finding new control mechanisms and suggesting new approaches to address the challenge of regaining privacy in the connected home. Comment: 10 pages, 2 figures. To appear in the Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20).

    Limb development genes underlie variation in human fingerprint patterns

    Get PDF
    Fingerprints are of long-standing practical and cultural interest, but little is known about the mechanisms that underlie their variation. Using genome-wide scans in Han Chinese cohorts, we identified 18 loci associated with fingerprint type across the digits, including a genetic basis for the long-recognized “pattern-block” correlations among the middle three digits. In particular, we identified a variant near EVI1 that alters regulatory activity and established a role for EVI1 in dermatoglyph patterning in mice. Dynamic EVI1 expression during human development supports its role in shaping the limbs and digits, rather than influencing skin patterning directly. Trans-ethnic meta-analysis identified 43 fingerprint-associated loci, with nearby genes being strongly enriched for general limb development pathways. We also found that fingerprint patterns were genetically correlated with hand proportions. Taken together, these findings support the key role of limb development genes in influencing the outcome of fingerprint patterning.

    WizNet: A ZigBee-based sensor system for distributed wireless LAN performance monitoring

    No full text
    802.11-based wireless LANs (WLANs) have become an important communication infrastructure for today's pervasive computing applications. Nevertheless, WLAN users often experience various performance issues such as highly variable signal quality. To diagnose such transient service degradations and plan for future network upgrades, it is essential to closely monitor the performance of a WLAN and collect user statistics. This paper proposes a new WLAN performance monitoring approach motivated by the fact that many low-power wireless technologies such as ZigBee and Bluetooth co-exist with WLAN in the same open radio spectrum and are capable of sensing the Received Signal Strength (RSS) of 802.11 transmissions. We have developed a ZigBee-based WLAN monitoring system called WizNet. Powered by batteries, WizNet's ZigBee sensors can be deployed in large quantities to monitor the spatial performance of a WLAN over long periods of time. By adopting digital signal processing techniques, WizNet automatically identifies 802.11 signals from ZigBee RSS measurements and associates them with wireless access points. To ensure monitoring fidelity, WizNet accounts for the significant differences between ZigBee and WLAN radios, such as bandwidth and susceptibility to multipath and frequency-selective fading. A simple yet accurate linear estimator derived from a signal propagation model is used to infer the access points' signal-to-noise ratio (SNR). Moreover, WizNet can measure the congestion level of the channel and detect rogue APs. WizNet can also collect WLAN client statistics and classify device models based on RSS signatures of 802.11 access point scans. We have implemented WizNet in TinyOS 2.x and extensively evaluated its performance on a wireless testbed. Our results over a period of 140 hours show that WizNet can accurately capture the spatial and temporal performance variability of a large-scale production WLAN. © 2013 IEEE
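The "simple yet accurate linear estimator" described in the abstract can be sketched as a one-dimensional least-squares fit mapping ZigBee-measured RSS to the access point's SNR. This is a rough illustration under assumed calibration data; the function names and the numbers below are hypothetical and not from the WizNet paper.

```python
# Hypothetical sketch: inferring a WLAN access point's SNR (dB) from ZigBee
# RSS measurements (dBm) via a linear model y = a*x + b fitted by ordinary
# least squares. Calibration pairs below are invented for illustration.

def fit_linear(xs, ys):
    """Ordinary least-squares fit y ~ a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

# Assumed calibration pairs: (ZigBee-measured RSS in dBm, true WLAN SNR in dB).
calib = [(-80, 10), (-70, 18), (-60, 27), (-50, 35)]
a, b = fit_linear([r for r, _ in calib], [s for _, s in calib])

def estimate_snr(zigbee_rss_dbm):
    """Predict the AP's SNR from a new ZigBee RSS reading."""
    return a * zigbee_rss_dbm + b

print(round(estimate_snr(-65), 2))  # 22.5
```

In the real system the calibration would also have to compensate for the bandwidth and fading differences between the two radios that the abstract mentions; this sketch shows only the final linear-estimation step.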

    Finite Element Analysis of Stress on Cross-Wavy Primary Surface Recuperator Based on Thermal-Structural Coupling Model

    No full text
    To study the stress, strain, and deformation of the recuperator, a thermal-structural coupling finite element analysis model of the cross-wavy primary surface recuperator of a gas microturbine was established. The stress of the cross-wavy primary surface recuperator after operation under design conditions was analyzed by the finite element method. The reliability of the material selected for the recuperator was verified, and the effects of pressure ratio and gas inlet temperature on the stress and displacement of the recuperator were analyzed. The results show that the maximum stress and strain on the gas outlet side of the recuperator are higher than those on the gas inlet side when only pressure is considered, and the opposite holds when both pressure and thermal stress are considered. The air passage of the recuperator deforms toward the gas passage side: the air passage becomes larger and the gas passage shrinks. As the pressure ratio between the air side and the gas side increases, the maximum stress in the recuperator passage also increases; when the pressure ratio reaches 8.4, the strength limit of the heat exchange fin material is reached. When the gas and air outlet temperatures remain unchanged and the thermal ratio decreases, the maximum stress increases with the gas inlet temperature: for every 50 K increase in the gas inlet temperature, the maximum stress of the recuperator increases by about 2.3 MPa. The research results can be used to guide the design and optimization of the recuperator.
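The reported trend of about 2.3 MPa of additional maximum stress per 50 K of gas inlet temperature implies a simple linear extrapolation, sketched below. The baseline temperature and stress values are illustrative assumptions, not figures from the paper.

```python
# Rough sketch of the reported linear stress-temperature trend:
# maximum stress rises ~2.3 MPa per 50 K of gas inlet temperature.
STRESS_PER_KELVIN = 2.3 / 50.0  # MPa per K, from the reported trend

def estimated_max_stress(inlet_temp_k, base_temp_k, base_stress_mpa):
    """Linearly extrapolate max stress from an assumed baseline operating point."""
    return base_stress_mpa + STRESS_PER_KELVIN * (inlet_temp_k - base_temp_k)

# Hypothetical baseline: 100 MPa at a 1000 K inlet; +100 K adds ~4.6 MPa.
print(round(estimated_max_stress(1100, 1000, 100.0), 2))  # ~104.6
```

Such a linear rule of thumb is only valid near the operating conditions studied; outside that range the coupled thermal-structural response need not stay linear.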

    Micro-/nano-topography of selective laser melting titanium enhances adhesion and proliferation and regulates adhesion-related gene expressions of human gingival fibroblasts and human gingival epithelial cells

    No full text
    Background: Selective laser melting (SLM) titanium is an ideal option for manufacturing customized implants, with suitable surface modification to improve its bioactivity. The peri-implant soft tissues form a protective tissue barrier for the underlying osseointegration. Therefore, originally microrough SLM surfaces should be treated for favorable attachment of the surrounding soft tissues. Material and methods: In this study, anodic oxidation (AO) was applied to the microrough SLM titanium substrate to form TiO2 nanotube arrays. After that, calcium phosphate (CaP) nanoparticles were embedded into the nanotubes or the intervals between nanotubes by electrochemical deposition (AOC). These two samples were compared with untreated (SLM) samples and with the widely accepted mechanically polished (MP) SLM titanium samples. Scanning electron microscopy, energy dispersive spectrometry, X-ray diffraction, surface roughness, and water contact angle measurements were used for surface characterization. Primary human gingival epithelial cells (HGECs) and human gingival fibroblasts (HGFs) were cultured for cell assays to determine adhesion, proliferation, and adhesion-related gene expressions. Results: For HGECs, AOC samples showed significantly higher adhesion, proliferation, and adhesion-related gene expressions than AO and SLM samples (P<0.05), and ability in the above aspects similar to that of MP samples. At the same time, AOC samples showed the highest adhesion, proliferation, and adhesion-related gene expressions for HGFs (P<0.05). Conclusion: Comparing the samples confirmed that both anodic oxidation and CaP nanoparticles improved bioactivity, and their combined use is likely superior to mechanical polishing, which is most commonly used and widely accepted. Our results indicate that creating appropriate micro-/nano-topographies can be an effective method to affect cell behavior and increase the stability of the peri-implant mucosal barrier on SLM titanium surfaces, which contributes to its application in dental and other biomedical implants.