121 research outputs found

    Towards Lifespan Automation for Caenorhabditis elegans Based on Deep Learning: Analysing Convolutional and Recurrent Neural Networks for Dead or Live Classification

    Full text link
    [EN] The automation of lifespan assays with C. elegans in standard Petri dishes is challenging because several problems hinder detection, such as occlusions at the plate edges, dirt accumulation, and worm aggregation. Moreover, determining whether a worm is alive or dead can be difficult, as worms barely move during the last few days of their lives. This paper proposes a method that combines traditional computer vision techniques with a live/dead C. elegans classifier, based on convolutional and recurrent neural networks, operating on low-resolution image sequences. In addition to a new method to automate lifespan assays, data augmentation techniques are proposed to train the network in the absence of large numbers of samples. The proposed method achieved small error rates (3.54% +/- 1.30% per plate) with respect to the manual survival curve, demonstrating its feasibility.

    This study was supported by the Plan Nacional de I+D under project RTI2018-094312-B-I00 and by European FEDER funds.

    García-Garví, A.; Puchalt-Rodríguez, JC.; Layana-Castro, PE.; Navarro Moya, F.; Sánchez Salmerón, AJ. (2021). Towards Lifespan Automation for Caenorhabditis elegans Based on Deep Learning: Analysing Convolutional and Recurrent Neural Networks for Dead or Live Classification. Sensors. 21(14):1-17. https://doi.org/10.3390/s21144943
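The data augmentation idea above can be sketched as follows: when a classifier consumes image sequences, the same random geometric transform must be applied to every frame of a clip, so the recurrent part of the network still sees a consistent motion pattern. This is a minimal NumPy sketch with assumed parameters (frame count, resolution); it is not the authors' implementation.

```python
import numpy as np

def augment_sequence(frames, rng):
    """Apply one randomly chosen flip/rotation to every frame of a
    sequence, preserving the temporal structure a recurrent network
    relies on across the augmented clip."""
    k = rng.integers(0, 4)      # number of 90-degree rotations
    flip = rng.integers(0, 2)   # whether to mirror horizontally
    out = []
    for f in frames:
        g = np.rot90(f, k)
        if flip:
            g = np.fliplr(g)
        out.append(g)
    return np.stack(out)

rng = np.random.default_rng(0)
clip = rng.random((8, 32, 32))   # 8 low-resolution frames
aug = augment_sequence(clip, rng)
```

Because every frame receives the identical transform, the augmented clip depicts the same motion viewed under a different plate orientation, which is exactly the kind of variation a lifespan camera would produce.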

    Desarrollo de técnicas avanzadas de seguimiento de posturas para reconocimiento de comportamientos de C. elegans

    Full text link
    Thesis by compendium. [EN] The main objective of this thesis is the development of advanced posture-tracking techniques for behaviour recognition in Caenorhabditis elegans (C. elegans). C. elegans is a nematode used as a model organism for the study and treatment of various pathological and neurodegenerative diseases. Its behaviour provides valuable information for research into new drugs (and healthy food and cosmetic products) in lifespan and healthspan studies. Today, many C. elegans assays are performed manually, i.e. using microscopes to follow the worms and observe their behaviour, or, in more modern laboratories, using specific software. These programs are not fully automatic and require parameter tuning; in other cases they are image-visualisation programs in which the operator must label the behaviour of each C. elegans manually. All this translates into many hours of work, which can be automated using computer vision techniques, which can also estimate mobility indicators more accurately than a human operator. The main problem in tracking C. elegans postures in Petri dishes is aggregation, between nematodes or with environmental noise. Losses or changes of identity are very common, whether tracking manually or with automatic/semi-automatic programs, and the problem becomes even harder in low-resolution images. Programs that automate these posture-tracking tasks rely on computer vision, using either traditional image processing techniques or deep learning techniques; both have shown excellent results in detecting and tracking C. elegans postures. Traditional techniques use algorithms/optimisers to obtain the best solution, whereas deep learning techniques automatically learn features from the training dataset; their drawback is that they need a large, dedicated dataset to train the models. The methodology used in this thesis (advanced posture-tracking techniques) falls within the research area of computer vision and explores both branches to solve the posture-tracking problems of C. elegans in low-resolution images. The first part (sections 1 and 2 of chapter 2) used traditional image processing techniques to detect and track C. elegans postures; for this purpose, a new skeletonisation technique and two new evaluation criteria were proposed to obtain better tracking, detection, and segmentation results. The following sections of chapter 2 use deep learning techniques and synthetic image simulation to train models and improve posture detection and prediction results. The results proved faster and more accurate than traditional techniques, and the deep learning methods were also shown to be more robust in the presence of plate noise.

    This research was supported by Ministerio de Ciencia, Innovación y Universidades [RTI2018-094312-B-I00 (European FEDER funds); FPI PRE2019-088214] and by Universitat Politècnica de València ("Funding for open access charge: Universitat Politècnica de València"). The author received a scholarship from the grant Ayudas para contratos predoctorales para la formación de doctores 2019.

    Layana Castro, PE. (2023). Desarrollo de técnicas avanzadas de seguimiento de posturas para reconocimiento de comportamientos de C. elegans [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/198879
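As an illustration of the identity-loss problem the thesis addresses, a naive tracker can propagate identities between consecutive frames by greedy nearest-neighbour matching of worm centroids, declaring an identity lost when the closest candidate is farther than a threshold (as happens during aggregation). This is a simplified sketch, not the thesis's actual tracking method; the distance threshold is an assumption.

```python
import numpy as np

def match_identities(prev, curr, max_dist=10.0):
    """Greedily match worm centroids in frame t (prev) to frame t+1
    (curr), closest pairs first. Identities whose best remaining match
    exceeds max_dist are reported as lost (-1)."""
    d = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=2)
    assign = np.full(len(prev), -1)
    used = set()
    # Process candidate pairs from closest to farthest.
    for i, j in sorted(np.ndindex(d.shape), key=lambda ij: d[ij]):
        if assign[i] == -1 and j not in used and d[i, j] <= max_dist:
            assign[i] = j
            used.add(j)
    return assign

prev = np.array([[10.0, 10.0], [50.0, 50.0]])
curr = np.array([[51.0, 49.0], [11.0, 12.0]])
print(match_identities(prev, curr))   # [1 0]
```

The failure modes of this greedy scheme (aggregated worms collapsing onto one detection, jumps beyond the threshold) are precisely where the thesis's skeleton-based and deep learning approaches are needed.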

    Gait optimality for undulatory locomotion with applications to C. elegans phenotyping

    Get PDF
    This thesis focuses on the optimality and efficiency of organism locomotion strategies, specifically those of microscopic undulators, in two distinct parts. Undulators locomote by propagating waves of bending deformation along their bodies, and at the microscale (i.e. low Reynolds number) interactions between undulators and their surroundings are well described by biomechanical models, owing to high viscosity and negligible inertia. Frameworks such as resistive force theory enable the determination of optimal gaits for micro-undulators, often defined as the waveform maximising the ratio of swimming speed to energetic cost. Part I explores this avenue of research in a theoretical setting. The primary mathematical focus has been on finding optimal waveforms for straight-path forwards locomotion, but organisms do not move exclusively this way: turning and manoeuvring are key to survival. Here we establish a mathematical model, extending previous approaches to modelling swimming micro-undulators by introducing path curvature, to obtain optimal turning gaits. We obtain an analytical result demonstrating that high-curvature shapes minimise energetic cost when the penalty for bending is reduced. Imposing limitations on the curvature, and investigating multiple high-dimensional shape spaces, we show that optimal turning gaits can be closely approximated as constant-curvature travelling waves. Part II adopts an experimental approach. Quantitative phenotyping tools can be used in behavioural screens of the model organism C. elegans to detect differences between wild-type and mutant strains. Expanding the current set of tools to include more orthogonal features could enable increased detection of deficiencies. Here we develop efficiency as a phenotyping lens for C. elegans, quantifying the gait optimality of strains modelling rare human genetic diseases. Genetic diseases in humans are modelled in C. elegans with disease-associated orthologs. We find that worm gait efficiency correlates strongly with percentage of time paused. High efficiencies are exhibited during reversals and backing motions, due to suppressed head-swinging and increased speed.
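The resistive force theory setting described above can be illustrated numerically: with anisotropic drag (normal coefficient c_n greater than tangential c_t), a sinusoidal bending wave travelling along the body in +x generates net thrust in -x, which is why the swimmer advances opposite to its wave. The coefficients and waveform parameters below are illustrative, not taken from the thesis.

```python
import numpy as np

# Resistive force theory sketch for a planar undulator held at zero
# swimming speed. For small-amplitude y(x, t) = A sin(kx - wt), the
# x-component of the RFT drag force density reduces to
#   f_x = (c_n - c_t) * v_y * y' / (1 + y'^2),
# where v_y is the transverse material velocity and y' the local slope.
c_t, c_n = 1.0, 2.0                    # tangential / normal drag coefficients
A, k, w = 0.1, 2 * np.pi, 2 * np.pi    # amplitude, wavenumber, frequency

x = np.linspace(0.0, 1.0, 2001)        # body coordinate (unit length)
phase = k * x - w * 0.0                # evaluate at t = 0
dy_dx = A * k * np.cos(phase)          # local slope y'
v_y = -A * w * np.cos(phase)           # transverse material velocity

f_x = (c_n - c_t) * v_y * dy_dx / (1.0 + dy_dx ** 2)
thrust = np.sum(f_x) * (x[1] - x[0])   # integrate force density along body
print(thrust)   # negative: propulsion opposite the wave direction
```

Note that the thrust vanishes when c_n = c_t: drag anisotropy is the essential ingredient that lets undulation at low Reynolds number produce net propulsion at all.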

    Tools for Behavioral Phenotyping of C. elegans

    Get PDF
    Animal behavior is critical to survival and provides a window into how the brain makes decisions and integrates sensory information. A simple model organism that allows researchers to more precisely interrogate the relationships between behavior and the brain is the nematode C. elegans. However, current phenotyping tools have technical limitations that make observing, intervening in, and quantifying behavior in diverse settings difficult. In this thesis, I develop enabling technological systems to resolve these challenges. To address scaling issues in observation and intervention in long-term behavior, I develop a platform for long-term continuous imaging, online behavior quantification, and online behavior-conditional intervention. I show that this tool is easy to build and use and can operate in an automated fashion for days at a time. I then use this platform to understand the consequences of quiescence deprivation to C. elegans health. To quantify complex animal postures, and plant and stem cell aggregate morphology, I develop an app to enable fast, versatile and quantitative annotation and demonstrate that it is both ~ 130-fold faster and in some cases less error-prone than state-of-the-art computational methods. This app is agnostic to image content and allows freehand annotation of curves and other complex and non-uniform shapes while also providing an automated way to distribute annotation tasks. This tool may be used to generate ground truth sets for testing or creating automated algorithms. Finally, I quantify C. elegans behavior using quantitative machine-learning analysis and map the worm’s behavioral repertoire across multiple physical environments that more closely mimic C. elegans’ natural environment. From this analysis, I identified subtle behaviors that are not easily distinguishable by eye and built a tool that allows others to explore our video dataset and behaviors in a facile way. I also use this analysis to examine the richness of C. 
elegans behavior across selected environments and find that behavioral diversity is not uniform across environments. This has important implications for the choice of media for behavioral phenotyping, as it suggests that an appropriate media choice may increase our ability to distinguish behavioral phenotypes in C. elegans. Together, these tools enable novel behavior experiments at larger scale and with more nuanced phenotyping than currently available tools.

    Skeletonizing Caenorhabditis elegans Based on U-Net Architectures Trained with a Multi-worm Low-Resolution Synthetic Dataset

    Full text link
    [EN] Skeletonization algorithms are used as basic methods to solve tracking problems, pose estimation, or prediction of animal group behavior. Traditional skeletonization techniques, based on image processing algorithms, are very sensitive to the shapes of the connected components in the initial segmented image, especially in low-resolution images. Currently, neural networks are an alternative that provides more robust results in the presence of image-based noise. However, training a deep neural network requires a very large and balanced dataset, which is sometimes too expensive or impossible to obtain. This work proposes a new training method based on a custom-generated dataset produced with a synthetic image simulator. The method was applied to different U-Net architectures to solve the problem of skeletonization from low-resolution images of multiple Caenorhabditis elegans in Petri dishes measuring 55 mm in diameter. These U-Net models were trained and validated only with synthetic images, yet they were successfully tested on a dataset of real images. All the U-Net models generalised well to the real dataset, endorsing the proposed learning method, and also gave good skeletonization results in the presence of image-based noise. The best U-Net model achieved a significant improvement of 3.32% over previous work using traditional image processing techniques.

    ADM Nutrition, Biopolis S.L. and Archer Daniels Midland supplied the C. elegans plates. Some strains were provided by the CGC, which is funded by the NIH Office of Research Infrastructure Programs (P40 OD010440). Mrs. Maria-Gabriela Salazar-Secada developed the skeleton annotation application. Mr. Jordi Tortosa-Grau and Mr. Ernesto-Jesus Rico-Guardioa annotated worm skeletons. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
    This study was supported by the Plan Nacional de I+D with Project RTI2018-094312-B-I00, FPI Predoctoral contract PRE2019-088214 and by European FEDER funds.

    Layana-Castro, PE.; García-Garví, A.; Navarro Moya, F.; Sánchez Salmerón, AJ. (2023). Skeletonizing Caenorhabditis elegans Based on U-Net Architectures Trained with a Multi-worm Low-Resolution Synthetic Dataset. International Journal of Computer Vision. 131(9):2408-2424. https://doi.org/10.1007/s11263-023-01818-6

    Diseño, desarrollo y evaluación de algoritmos basados en aprendizaje profundo para automatización de experimentos Lifespan con C. elegans

    Full text link
    [EN] In recent years, C. elegans nematodes grown in Petri dishes have been used in many studies related to aging. The development of new tools to automate lifespan experiments allows more assays to be carried out in less time and avoids human error, yielding more accurate results. The objective of this TFM is to design and develop methods that address this problem using deep learning techniques. The results will then be evaluated by comparing them with those obtained using traditional computer vision techniques. Initially, the work will focus on the supervised creation and curation of a set of well-labelled images. Different neural network architectures will then be designed, and each will be optimized over the hyperparameter space using Python and PyTorch. Finally, the proposed architectures will be evaluated, using both accuracy and computational time cost as optimization criteria.García Garví, A. (2020). Diseño, desarrollo y evaluación de algoritmos basados en aprendizaje profundo para automatización de experimentos Lifespan con C. elegans. http://hdl.handle.net/10251/151938 (TFG)
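    The workflow described in the abstract (designing architectures, optimizing each over a hyperparameter space, and scoring by both accuracy and computation time) can be sketched as a plain grid search. This is a minimal illustration, not code from the thesis: the search space, the dual-criterion score, and the stand-in `evaluate` function are all hypothetical placeholders for a real PyTorch training run.

    ```python
    import itertools
    import time

    # Hypothetical search space; the concrete values are illustrative
    # assumptions, not the settings used in the thesis.
    SEARCH_SPACE = {
        "learning_rate": [1e-2, 1e-3],
        "hidden_units": [64, 128],
        "batch_size": [16, 32],
    }

    def evaluate(config):
        """Stand-in for training and validating one architecture.

        Returns (accuracy, seconds). A deterministic toy score is used so the
        sketch is self-contained; a real run would train a PyTorch network.
        """
        t0 = time.perf_counter()
        acc = (0.80
               + 0.01 * (config["hidden_units"] / 128)
               - 0.5 * config["learning_rate"])
        elapsed = time.perf_counter() - t0
        return acc, elapsed

    def grid_search(space, time_weight=0.0):
        """Exhaustive search over the hyperparameter grid.

        score = accuracy - time_weight * seconds, reflecting the dual
        accuracy / computation-time criterion mentioned in the abstract.
        """
        best_cfg, best_score = None, float("-inf")
        for values in itertools.product(*space.values()):
            cfg = dict(zip(space.keys(), values))
            acc, secs = evaluate(cfg)
            score = acc - time_weight * secs
            if score > best_score:
                best_cfg, best_score = cfg, score
        return best_cfg, best_score

    best, score = grid_search(SEARCH_SPACE)
    ```

    With `time_weight > 0`, slower configurations are penalized, so the search trades accuracy against computational cost rather than maximizing accuracy alone.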

    Scalable image analysis for quantitative microscopy

    Get PDF
    Since the invention of the microscope, microscopy images have generated new insights in biomedical research. While in the past these images were used for illustrative purposes, state-of-the-art microscopy images provide quantitative measurements. Moreover, modern microscopes are capable of autonomously producing large image datasets of increasing complexity, rendering manual analysis inefficient if not infeasible. Thus, extracting biologically relevant information from these datasets requires computational analysis using appropriate algorithms and software. While some analysis methods generalize to different microscope set-ups and types of images, others need to be tailored to a particular problem. In this work, I present two new methods for automated image analysis of microscopy data. First, Fourier ring correlation-based quality estimation (FRC-QE) is a new metric for automated image quality estimation of 3D fluorescence microscopy acquisitions. I benchmarked the method in the context of evaluating clearing efficiency in human brain organoids. FRC-QE automates image quality control, a task that is often performed manually and thereby represents a bottleneck when scaling image-based experiments to a thousand or more images. The method can estimate clearing efficiency across experimental replicates and clearing protocols, generalizes to different microscopy modalities, and scales efficiently to thousands of images. Second, I have developed a new method for behavioral imaging of C. elegans larvae. The "WormObserver" enables long-term imaging (>12 h, >80k images/experiment), automatically processes the acquired videos, and facilitates data integration across thousands of individuals to decipher behavioral patterns. Building on this, I focused on an example of nervous-system plasticity: the behavioral trajectory of the C. elegans dauer exit. To characterize the decision-making mechanism during exit from the dauer larval stage, I acquired and analyzed time-lapse data of larval populations in different environments and identified key decision points. By contextualizing the behavioral adaptation with gene expression, I gained new insights into how a developing nervous system can robustly integrate external stimuli and adapt the organism's behavior to new environments.
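    FRC-QE builds its quality estimate on top of Fourier ring correlation (FRC) curves. As a rough illustration of the underlying FRC computation only (not the published FRC-QE procedure; the ring count and binning here are illustrative assumptions), the correlation of two images per spatial-frequency ring can be computed as:

    ```python
    import numpy as np

    def fourier_ring_correlation(img1, img2, n_rings=16):
        """Correlation of two same-sized 2-D images per spatial-frequency ring.

        Returns an array of n_rings values in [-1, 1]; identical images give 1
        in every ring, noise-dominated pairs drop toward 0 at high frequencies.
        """
        f1 = np.fft.fftshift(np.fft.fft2(img1))
        f2 = np.fft.fftshift(np.fft.fft2(img2))
        h, w = img1.shape
        # Radial distance of each Fourier coefficient from the DC component.
        yy, xx = np.indices((h, w))
        r = np.hypot(yy - h // 2, xx - w // 2)
        r_max = r.max()
        frc = np.zeros(n_rings)
        for i in range(n_rings):
            ring = (r >= i * r_max / n_rings) & (r < (i + 1) * r_max / n_rings)
            num = np.abs(np.sum(f1[ring] * np.conj(f2[ring])))
            den = np.sqrt(np.sum(np.abs(f1[ring]) ** 2)
                          * np.sum(np.abs(f2[ring]) ** 2))
            frc[i] = num / den if den > 0 else 0.0
        return frc
    ```

    A quality estimate can exploit the shape of this curve: the spatial frequency at which the correlation decays indicates how much fine detail an acquisition actually resolves.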