5 research outputs found

    Klasifikasi Nematoda Parasit Tanaman pada Gambar Mikroskopik Menggunakan Deep Learning

    Nematodes form an animal phylum with high species diversity; some species are plant parasites that can harm the agriculture and plantation industries. The presence of parasitic nematodes has reduced production yields and caused Indonesian agricultural exports to be rejected. An accurate nematode identification system for the species commonly found in Indonesia is therefore needed, so that parasite prevention and control can be carried out effectively without a taxonomist on site. Morphology-based classification using deep learning can speed up the identification process because implementations are publicly available, require no special equipment, and are easy to use. This study evaluates three deep learning models for nematode genus classification, namely ResNet101v2, CoAtNet-0, and EfficientNetV2M, together with data augmentation in the form of image synthesis with RumiGAN and image transformations (flipping, brightness and contrast changes, blurring, and noise addition). The same optimizer was used for all models to keep the results consistent. EfficientNetV2M achieved the highest accuracy, 97%, on the dataset used. Adding data variation through transformation-based augmentation, or applying two transformations simultaneously, did not always improve performance across all models; the augmentation used must fit the dataset contextually. Image synthesis with RumiGAN could not yet be applied, because the generator failed to converge and the discriminative features of the nematodes were lost at the resolution used. Synthesizing data with a GAN for datasets with small inter-class differences is judged infeasible without sacrificing the accuracy obtained from the dataset.
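The transformation-based augmentations named in this abstract (flipping, brightness/contrast changes, blurring, noise) can be sketched with plain NumPy. This is an illustrative sketch only, not the authors' pipeline; all parameter ranges are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray) -> np.ndarray:
    """Apply the transformations listed in the abstract to a grayscale
    image with values in [0, 1]. Parameter ranges are illustrative."""
    out = image.copy()
    if rng.random() < 0.5:                       # random horizontal flip
        out = out[:, ::-1]
    out = out * rng.uniform(0.8, 1.2)            # contrast jitter
    out = out + rng.uniform(-0.1, 0.1)           # brightness jitter
    # 3x3 box blur via edge padding and neighbourhood averaging
    padded = np.pad(out, 1, mode="edge")
    h, w = out.shape
    out = sum(padded[i:i + h, j:j + w]
              for i in range(3) for j in range(3)) / 9.0
    out = out + rng.normal(0.0, 0.02, out.shape)  # additive Gaussian noise
    return np.clip(out, 0.0, 1.0)

sample = rng.random((64, 64))
print(augment(sample).shape)  # (64, 64)
```

In practice such transforms are usually applied on the fly during training, so each epoch sees a different random variant of every image; as the abstract notes, whether a given transform helps depends on the dataset.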

    Improving skeleton algorithm for helping Caenorhabditis elegans trackers

    [EN] One of the main problems when monitoring Caenorhabditis elegans nematodes (C. elegans) is tracking their poses with automatic computer vision systems. This is a challenge given the marked flexibility of their bodies and the different poses they can adopt during their individual behaviour, which becomes even more complicated when worms aggregate with others while moving. This work proposes a simple solution that combines several computer vision techniques to help determine certain worm poses and to identify each worm during aggregation or in coiled shapes. The new method is based on the distance transformation function to obtain better worm skeletons. Experiments were performed with 205 plates, each with 10, 15, 30, 60 or 100 worms, totalling approximately 100,000 worm poses. Compared with a classic skeletonisation method, the proposed method improved 2196 problematic poses by between 1% and 22% on average in the pose predictions of each worm.

    This study was supported by the Plan Nacional de I+D with Project RTI2018-094312-B-I00 and by European FEDER funds. ADM Nutrition, Biopolis S.L. and Archer Daniels Midland supplied the C. elegans plates. Some strains were provided by the CGC, which is funded by NIH Office of Research Infrastructure Programs (P40 OD010440). Mrs. Maria-Gabriela Salazar-Secada developed the skeleton annotation application. Mr. Jordi Tortosa-Grau annotated worm skeletons.

    Layana-Castro, P. E.; Puchalt-Rodríguez, J. C.; Sánchez-Salmerón, A. J. (2020). Improving skeleton algorithm for helping Caenorhabditis elegans trackers. Scientific Reports, 10(1), 1-12. https://doi.org/10.1038/s41598-020-79430-8
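The core idea of a distance-transform-based skeleton can be sketched as follows: pixels whose distance to the background is a local maximum trace the medial ridge of the worm body. This is a minimal sketch of the idea using SciPy, not the published algorithm, which adds further steps for coiled and aggregated worms.

```python
import numpy as np
from scipy import ndimage

def skeleton_ridge(mask: np.ndarray) -> np.ndarray:
    """Approximate a worm skeleton as the ridge of the Euclidean
    distance transform: foreground pixels whose distance value is a
    local maximum within a 3x3 neighbourhood."""
    dist = ndimage.distance_transform_edt(mask)
    local_max = ndimage.maximum_filter(dist, size=3)
    return (dist > 0) & (dist == local_max)

# Synthetic "worm": a thick horizontal bar; the ridge runs along its centre.
mask = np.zeros((20, 40), dtype=bool)
mask[8:13, 5:35] = True
ridge = skeleton_ridge(mask)
print(ridge.sum() > 0, ridge[10, 20])  # True True
```

The ridge is generally a few pixels thick and disconnected in places, which is why practical trackers post-process it (pruning spurs, linking segments) before fitting a worm pose.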

    Skeletonizing Caenorhabditis elegans Based on U-Net Architectures Trained with a Multi-worm Low-Resolution Synthetic Dataset

    [EN] Skeletonization algorithms are used as basic methods to solve tracking problems, pose estimation, or prediction of animal group behavior. Traditional skeletonization techniques, based on image processing algorithms, are very sensitive to the shapes of the connected components in the initial segmented image, especially when these are low-resolution images. Currently, neural networks are an alternative that provides more robust results in the presence of image-based noise. However, training a deep neural network requires a very large and balanced dataset, which is sometimes too expensive or impossible to obtain. This work proposes a new training method based on a custom-generated dataset produced with a synthetic image simulator. The method was applied to different U-Net architectures to skeletonize low-resolution images of multiple Caenorhabditis elegans contained in Petri dishes measuring 55 mm in diameter. Although these U-Net models were trained and validated only with synthetic images, they were successfully tested on a dataset of real images. All the U-Net models generalized well to the real dataset, endorsing the proposed learning method, and also gave good skeletonization results in the presence of image-based noise. The best U-Net model achieved a significant improvement of 3.32% with respect to previous work using traditional image processing techniques.

    ADM Nutrition, Biopolis S.L. and Archer Daniels Midland supplied the C. elegans plates. Some strains were provided by the CGC, which is funded by NIH Office of Research Infrastructure Programs (P40 OD010440). Mrs. Maria-Gabriela Salazar-Secada developed the skeleton annotation application. Mr. Jordi Tortosa-Grau and Mr. Ernesto-Jesus Rico-Guardioa annotated worm skeletons. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This study was supported by the Plan Nacional de I+D with Project RTI2018-094312-B-I00, FPI Predoctoral contract PRE2019-088214 and by European FEDER funds.

    Layana-Castro, P. E.; García-Garví, A.; Navarro-Moya, F.; Sánchez-Salmerón, A. J. (2023). Skeletonizing Caenorhabditis elegans Based on U-Net Architectures Trained with a Multi-worm Low-Resolution Synthetic Dataset. International Journal of Computer Vision, 131(9), 2408-2424. https://doi.org/10.1007/s11263-023-01818-6
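The simulator-based training idea in this abstract (generate synthetic multi-worm images with exact skeleton ground truth, then train a U-Net on them) can be illustrated by rendering random worm-like curves. This is an assumed toy generator in the spirit of the paper, not the published simulator; all shapes and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def synth_worm_image(size: int = 96, n_worms: int = 3):
    """Render a low-resolution image of several worm-like curves plus a
    pixel-exact skeleton mask, suitable as one (input, target) training
    pair. Shapes and noise parameters are illustrative only."""
    image = np.zeros((size, size))
    skeleton = np.zeros((size, size), dtype=bool)
    for _ in range(n_worms):
        # Random smooth curve: a start point plus a slowly turning heading.
        y, x = rng.uniform(10, size - 10, 2)
        angle = rng.uniform(0, 2 * np.pi)
        for _ in range(40):
            angle += rng.normal(0, 0.2)          # slowly varying heading
            y += np.sin(angle)
            x += np.cos(angle)
            iy, ix = int(round(y)), int(round(x))
            if not (1 <= iy < size - 1 and 1 <= ix < size - 1):
                break
            skeleton[iy, ix] = True              # ground-truth skeleton
            image[iy - 1:iy + 2, ix - 1:ix + 2] = 1.0  # dilate into a body
    image += rng.normal(0, 0.05, image.shape)    # camera-like noise
    return np.clip(image, 0, 1), skeleton

img, skel = synth_worm_image()
print(img.shape, skel.any())  # (96, 96) True
```

Because the simulator produces the skeleton target for free, arbitrarily large and balanced training sets can be generated, which is the point the abstract makes about avoiding expensive manual annotation.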