
    Technologies bringing young Zebrafish from a niche field to the limelight

    Fundamental life science and pharmaceutical research are continually striving to provide physiologically relevant context for biological studies. Zebrafish give high-content screening (HCS) the opportunity to bring a true in vivo model system to screening studies. Zebrafish embryos and young larvae are an economical, human-relevant model organism amenable both to genetic engineering and modification and to direct inspection via microscopy. The use of these organisms entails unique challenges that new technologies, including artificial intelligence (AI), are overcoming. In this perspective article, we describe the state of the art in automated sample handling, imaging, and data analysis with zebrafish during early developmental stages. We highlight advances in orienting the embryos, including the use of robots, microfluidics, and creative multi-well plate solutions. Analyzing the micrographs in a fast, reliable fashion that maintains the anatomical context of the fluorescently labeled cells is a crucial step. Existing software solutions range from AI-driven commercial packages to bespoke analysis algorithms. Deep learning appears to be a critical tool that researchers are only beginning to apply, but it already facilitates many automated steps in the experimental workflow. Such work has so far permitted the in vivo quantification of multiple cell types, including stem cell responses to stress and drugs, neuronal myelination, and macrophage behavior during inflammation and infection. We evaluate the pros and cons of proprietary versus open-source methodologies for combining technologies into fully automated workflows for zebrafish studies. Zebrafish are poised to charge into HCS with ever-greater presence, bringing a new level of physiological context.

    Automatic detection and classification of honey bee comb cells using deep learning

    In a scenario of worldwide honey bee decline, assessing colony strength is becoming increasingly important for sustainable beekeeping. Temporal counts of the number of comb cells with brood and food reserves offer researchers data for multiple applications, such as modelling colony dynamics, and give beekeepers information on colony strength, an indicator of colony health and honey yield. Counting cells manually in comb images is labour-intensive, tedious, and prone to error. Herein, we developed a free software tool, named DeepBee©, capable of automatically detecting cells in comb images and classifying their contents into seven classes. By distinguishing cells occupied by eggs, larvae, capped brood, pollen, nectar, honey, and other contents, DeepBee© allows an unprecedented level of accuracy in cell classification. Using the Circle Hough Transform and a semantic segmentation technique, we obtained a cell detection rate of 98.7%, which is 16.2% higher than the best result found in the literature. For classification of comb cells, we trained and evaluated thirteen different convolutional neural network (CNN) architectures: DenseNet (121, 169, and 201); InceptionResNetV2; InceptionV3; MobileNet; MobileNetV2; NasNet; NasNetMobile; ResNet50; VGG (16 and 19); and Xception. MobileNet proved to be the best compromise between training cost, with ~9 s to process all cells in a comb image, and accuracy, with an F1-score of 94.3%. We show the technical details of building a complete pipeline for classifying and counting comb cells, and we have made the CNN models, source code, and datasets publicly available.
With this effort, we hope to have expanded the frontier of apicultural precision analysis by providing a tool with high performance and source code to foster improvement by third parties (https://github.com/AvsThiago/DeepBeesource). This research was developed in the framework of the project “BeeHope - Honeybee conservation centers in Western Europe: an innovative strategy using sustainable beekeeping to reduce honeybee decline”, funded through the 2013-2014 BiodivERsA/FACCE-JPI Joint call for research proposals, with the national funders FCT (Portugal), CNRS (France), and MEC (Spain).
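The detection stage described above relies on the Circle Hough Transform, in which every edge pixel votes for the centres of all candidate circles that could pass through it. A minimal NumPy sketch of this voting scheme, run on a synthetic edge image rather than the DeepBee© pipeline itself, illustrates the idea:

```python
import numpy as np

def hough_circles(edges, radii):
    """Brute-force circular Hough transform: each edge pixel votes for the
    centres of every candidate circle (one accumulator slice per radius)
    that could pass through it."""
    h, w = edges.shape
    acc = np.zeros((len(radii), h, w), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    for i, r in enumerate(radii):
        a = np.round(ys[:, None] - r * np.sin(thetas)).astype(int)
        b = np.round(xs[:, None] - r * np.cos(thetas)).astype(int)
        ok = (a >= 0) & (a < h) & (b >= 0) & (b < w)
        np.add.at(acc[i], (a[ok], b[ok]), 1)  # accumulate the votes
    i, cy, cx = np.unravel_index(acc.argmax(), acc.shape)
    return radii[i], cy, cx

# Synthetic edge image: one circle outline of radius 10 centred at (32, 32).
edges = np.zeros((64, 64), dtype=bool)
t = np.linspace(0, 2 * np.pi, 200)
edges[np.round(32 + 10 * np.sin(t)).astype(int),
      np.round(32 + 10 * np.cos(t)).astype(int)] = True

r, cy, cx = hough_circles(edges, radii=[8, 9, 10, 11, 12])
```

In the paper the transform is combined with semantic segmentation to suppress false detections; for real comb images an optimised implementation such as OpenCV's `cv2.HoughCircles` would replace this brute-force loop.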

    Design and Development of Imaging Platforms for Phenotypic Characterization of Early Zebrafish

    Over the past decades, the zebrafish has emerged as a popular and promising model organism. It is increasingly used to investigate the fundamental biology of vertebrates, and the resulting insights inform the development of new therapies and drugs. In addition, behavioural research has proven to be a field with high potential for new discoveries, since it resolves far finer distinctions and effects than sharply delimited endpoints such as malformations or toxicity. In the early stage up to five days post fertilisation, zebrafish embryos and larvae display several characteristic behaviours that can be evoked by artificial stimulation. While still in the chorion, at an age of only 30 to 42 hours post fertilisation, the embryos respond to a flash of light with increased movement, the so-called photomotor response (PMR). Upon repeated illumination this reaction is absent, which can be interpreted as a typical behavioural pattern. If the embryos are exposed to chemicals or mutations, however, this pattern can change, allowing conclusions about the mode of action of the causative agents. The two startle responses to vibration and touch can serve as additional behavioural read-outs. Even inside the chorion, the embryos can be induced to move by touch. Once the larvae have hatched, at about three days post fertilisation, the reaction is termed a C-bend, as the larva adopts a characteristic curvature along its body axis before swimming away. The same applies to the vibration response from an age of about five days post fertilisation.
To make meaningful use of these behaviours, automated solutions are needed that simplify preparation, experimental procedures, and analysis to the point where hardly any human intervention is required. Only then can the throughput and reproducibility necessary to demonstrate statistically significant effects be guaranteed. For this reason, three independent mechatronic systems were developed, each of which can automatically trigger, record, and analyse one of the three behavioural patterns described. Besides hardware and software development, this also required biological protocols to validate the systems and to deploy them in first biological studies. For the PMR system, a highly automated experimental workflow was developed in which a robot sorts the embryos during preparation and an automated microscope with fully in-house control software then records the response. The raw data, in the form of videos, can subsequently be analysed automatically to extract numerical data from the image series. The vibration system comprises a newly developed vibration exciter, a modified loudspeaker, that allows several samples to be examined in parallel. The exciter was characterised extensively to ensure that the achieved acceleration, pulse duration, and frequency match the target values of 14 g, 1 ms, and 500 Hz. Using acceleration sensors, the exciters were calibrated and the control software was adapted accordingly, so that a uniform effect across exciters is guaranteed.
The implementation of a high-speed camera allows the response to be recorded at up to 1000 frames per second, which is necessary to capture the full extent of the reaction, given the larvae's extremely fast response times in the millisecond range. To enable high-throughput touch experiments on larvae, the first automated system of its kind was developed, using a motorised, positionable needle to deliver a computer-controlled touch stimulus. A touch-sensitive multi-axis system was designed so that the user can operate it remotely via a graphical interface, avoiding the subjective and unnecessarily protracted aspects of manual experimental setups. The system was further extended with digital object recognition, making fully autonomous experiments possible. The systems were tested extensively at the ITG in several biological studies. Using the PMR system, a collection of several hundred cannabinoid-like substances was screened for neuroactive effects, and characteristic response patterns were identified that can now help to deepen the understanding of structure-activity relationships. Using the two startle responses, the differing effects of anaesthetics on phenocopies of genetically modified zebrafish were demonstrated, substantiating the systems' suitability for chemical as well as genetic experiments.
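The vibration stimulus characterised above (14 g peak acceleration, 1 ms pulse duration, 500 Hz) can be pictured as a single half-cycle of a 500 Hz sine, since a full period at 500 Hz lasts 2 ms. The sketch below synthesises such a pulse; the half-sine shape and the 100 kHz synthesis rate are assumptions for illustration, not taken from the thesis:

```python
import numpy as np

g = 9.81         # standard gravity, m/s^2
f = 500.0        # stimulus frequency in Hz -> full period 2 ms
fs = 100_000.0   # synthesis sample rate in Hz (assumed)
n = 100          # 100 samples at 100 kHz span the 1 ms pulse

t = np.arange(n) / fs
pulse = 14 * g * np.sin(2 * np.pi * f * t)  # acceleration in m/s^2

duration_ms = n / fs * 1000   # pulse duration: 1 ms
peak_g = pulse.max() / g      # peak acceleration: 14 g
```

Calibrating against an accelerometer, as done for the exciters, would then amount to scaling this target waveform until the measured peak matches 14 g.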

    Light-sheet microscopy for everyone? Experience of building an OpenSPIM to study flatworm development.

    Background: Selective plane illumination microscopy (SPIM, a type of light-sheet microscopy) involves focusing a thin sheet of laser light through a specimen at right angles to the objective lens. As only the thin section of the specimen at the focal plane of the lens is illuminated, out-of-focus light is naturally absent and toxicity due to light (phototoxicity) is greatly reduced, enabling longer-term live imaging. OpenSPIM is an open-access platform (Pitrone et al. 2013 and OpenSPIM.org) created to give new users step-by-step instructions for building a basic configuration of a SPIM microscope, which can in principle be adapted and upgraded to each laboratory's own requirements and budget. Here we describe our own experience with the process of designing, building, configuring, and using an OpenSPIM for our research into the early development of the polyclad flatworm Maritigrella crozieri, a non-model animal. Results: Our OpenSPIM builds on the standard design with the addition of two-colour laser illumination, for simultaneous detection of two probes/molecules, and dual-sided illumination, which provides more even signal intensity across a specimen. Our OpenSPIM provides high-resolution 3D images and time-lapse recordings, and we demonstrate the use of two colour lasers and the benefits of two-colour dual-sided imaging. We used our microscope to study the development of the embryo of the polyclad flatworm M. crozieri. The capabilities of our microscope are demonstrated by our ability to record the stereotypical spiral cleavage pattern of M. crozieri with high-speed multi-view time-lapse imaging. 3D and 4D (3D + time) reconstruction of early development from these data is possible using image registration and deconvolution tools provided as part of the open-source Fiji platform. We discuss our findings on the pros and cons of a self-built microscope.
    Conclusions: We conclude that home-built microscopes such as an OpenSPIM, together with available open-source software such as MicroManager and Fiji, make SPIM accessible to anyone interested in having continuous access to their own light-sheet microscope. However, building an OpenSPIM is not without challenges, and an open-access microscope is a worthwhile, if significant, investment of time and money. Multi-view 4D microscopy, in particular, proved more challenging than we had expected. We hope that the experience gained during this project will help future OpenSPIM users with similar ambitions.
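The benefit of dual-sided illumination mentioned above can be illustrated with a toy model: each light sheet is attenuated as it crosses the specimen, so it illuminates its own side well and the far side poorly, and blending the two single-sided images evens out the signal. Both the exponential attenuation and the pixel-wise weighted blend below are assumptions for illustration, not the fusion method used on the authors' microscope:

```python
import numpy as np

rng = np.random.default_rng(2)
specimen = rng.random((8, 64))   # ground-truth fluorescence, 64 px across

# Each sheet decays exponentially with depth into the specimen.
x = np.linspace(0.0, 1.0, 64)
left = specimen * np.exp(-3 * x)          # sheet entering from the left
right = specimen * np.exp(-3 * x[::-1])   # sheet entering from the right

# Pixel-wise blend weighted towards whichever side was better illuminated.
wl, wr = np.exp(-3 * x), np.exp(-3 * x[::-1])
fused = (wl * left + wr * right) / (wl + wr)

# The fused image tracks the specimen better than either single-sided view.
err = lambda img: np.abs(img - specimen).mean()
```

Real multi-view fusion, as in the Fiji registration and deconvolution tools mentioned above, additionally aligns the views and deconvolves the point spread function rather than simply averaging.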

    Automated processing of zebrafish imaging data: a survey

    Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate terabytes of image data, the handling and analysis of which become a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent applications of such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines.
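Of the analysis tasks surveyed, heartbeat detection is a compact example: the mean pixel intensity over the heart region oscillates with each beat, so the dominant frequency of that time series estimates the heart rate. The sketch below applies this frequency-domain approach to a synthetic intensity trace; it is one common strategy, not the only method used in the literature:

```python
import numpy as np

# Synthetic "mean heart-region intensity" trace: a 2.5 Hz beat sampled at
# 30 frames per second for 10 s, with added noise.
fps, n = 30.0, 300
t = np.arange(n) / fps
rng = np.random.default_rng(1)
trace = np.sin(2 * np.pi * 2.5 * t) + 0.3 * rng.standard_normal(n)

# The dominant frequency of the detrended trace estimates the heart rate.
spectrum = np.abs(np.fft.rfft(trace - trace.mean()))
freqs = np.fft.rfftfreq(n, d=1 / fps)
bpm = freqs[spectrum.argmax()] * 60  # beats per minute
```

With 300 frames at 30 fps the frequency resolution is 0.1 Hz, i.e. 6 beats per minute; longer recordings sharpen the estimate accordingly.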

    Automatic assessment of honey bee cells using deep learning

    Temporal assessment of honey bee colony strength is required for different applications in many research projects, which often involves counting the number of comb cells with brood and food reserves multiple times a year. There are thousands of cells in each comb, which makes manual counting a time-consuming, tedious, and thereby error-prone task. Automating this task with modern image processing techniques therefore represents a major advance. Herein, we developed a software tool capable of (i) detecting each cell in comb images, (ii) classifying its content, and (iii) displaying the results to the researcher in a simple way. The cells' contents typically display a high variation of patterns, which makes their classification by software a challenging endeavour. To address this challenge, we used Deep Neural Networks (DNNs). DNNs are known for achieving the state of the art in many fields of study, including image classification, because they learn by themselves the features that best describe the content being classified. Our DNN model was trained with over 70,000 manually labelled cell images separated into seven classes. Our contribution is end-to-end software that performs automatic background removal, cell detection, and classification of cell content from an input comb image. With this software, colony assessment achieves an average accuracy of 94% across the seven classes in our dataset, representing substantial progress over the approximation methods (e.g. Lieberfeld) currently used by honey bee researchers and over previous machine-learning techniques that used handmade features such as colour and texture.
This research was conducted in the framework of the project BEEHOPE, funded through the 2013-2014 BiodivERsA/FACCE-JPI Joint call for research proposals, with the national funders FCT (Portugal), CNRS (France), and MEC (Spain).
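The classification stage maps each detected cell crop to one of the seven classes. Its mechanics can be illustrated with a toy forward pass in NumPy, with random weights standing in for a trained network such as MobileNet; none of the names or sizes below come from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy classifier: conv filters -> ReLU -> global average pooling -> softmax
# over the seven cell classes (egg, larva, capped brood, pollen, nectar,
# honey, other).
cell_patch = rng.random((32, 32))          # stand-in for a cropped cell image
kernels = rng.standard_normal((4, 3, 3))   # four random 3x3 filters
features = np.array([np.maximum(conv2d(cell_patch, k), 0).mean()
                     for k in kernels])
W = rng.standard_normal((7, 4))            # dense head: 4 features -> 7 classes
probs = softmax(W @ features)              # class probabilities, sum to 1
```

A real pipeline would train many such filter banks end-to-end on the 70,000 labelled cell crops; the structure of the forward pass, however, is exactly this.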