
    Functional anatomy of a visuomotor transformation in the optic tectum of zebrafish

    Animals detect sensory cues in their environment and process this information in order to carry out adaptive behavioral responses. To generate target-directed movements, the brain transforms structured sensory inputs into coordinated motor commands. Some of these behaviors, such as escaping from a predator or approaching prey, need to be fast and reproducible. The optic tectum of vertebrates (named "superior colliculus" in mammals) is the main target of visual information and is known to play a pivotal role in these kinds of visuomotor transformations. In my dissertation, I investigated the neuronal circuits that map visual cues to motor commands, with a focus on the axonal projections that connect the tectum to premotor areas of the tegmentum and hindbrain. To address these questions, I developed and combined several techniques to link functional information and anatomy to behavior. The animal I chose for my studies is the zebrafish larva, which is amenable to transgenesis, optical imaging approaches, optogenetics, and behavioral recordings in virtual reality arenas. In a first study, I designed, generated, and characterized BAC transgenic lines, which allow gene-specific labelling of neurons and intersectional genetics using Cre-mediated recombination. Importantly, I generated a pan-neuronal line that facilitates brain registrations in order to compare different expression patterns (Förster et al., 2017b). In a second project, I contributed to the development of an approach that combines two-photon holographic optogenetic stimulation with whole-brain calcium imaging, behavior tracking, and morphological reconstruction. In this study, I designed the protocol to reveal the anatomical identity of optogenetically targeted individual neurons (dal Maschio et al., 2017).
In a third project, I took advantage of some of these methods, including whole-brain calcium imaging, optogenetics, and brain registrations, to elucidate how the tectum is wired to make behavioral decisions and to steer behavioral directionality. The results culminated in a third manuscript (Helmbrecht et al., submitted), which reported four main findings. First, I optogenetically demonstrated a retinotopic organization of the tectal motor map in zebrafish larvae. Second, I generated a tectal "projectome" with cellular resolution by reconstructing and registering stochastically labeled tectal projection neurons. Third, by employing this anatomical atlas to interpret functional imaging data, I asked whether visual information leaves the tectum via distinct projection neurons. This revealed that two distinct uncrossed tectobulbar pathways (ipsilateral tectobulbar tract, iTB) are involved in either avoidance (medial iTB, iTB-M) or approach (lateral iTB, iTB-L) behavior. Finally, I showed that the location of a prey-like object, and therefore the direction of orientation swims towards prey, is functionally encoded in iTB-L projection neurons. In summary, I demonstrated in this thesis how refined genetic and optical methods can be used to study neuronal circuits with cellular and subcellular resolution. Importantly, apart from the biological findings on the visuomotor transformation, these newly developed tools can be broadly employed to link brain anatomy to circuit activity and behavior.

    Imaging White Blood Cells using a Snapshot Hyper-Spectral Imaging System

    Automated white blood cell (WBC) counting systems process an extracted whole blood sample and provide a cell count, a step that is not ideal for onsite screening of individuals in triage or at a security gate. Snapshot Hyper-Spectral imaging systems are capable of capturing several spectral bands simultaneously, offering co-registered images of a target. With appropriate optics, these systems are potentially able to image blood cells in vivo as they flow through a vessel, eliminating the need for a blood draw and sample staining. Our group has evaluated the capability of a commercial Snapshot Hyper-Spectral imaging system, specifically the Arrow system from Rebellion Photonics, in differentiating between white and red blood cells on unstained and sealed blood smear slides. We evaluated the imaging capabilities of this hyperspectral camera as a platform on which to build an automated blood cell counting system. Hyperspectral data consisting of 25 bands of 443x313 pixels with ~3 nm spacing were captured over the range of 419 to 494 nm. Open-source hyperspectral datacube analysis tools, used primarily in Geographic Information Systems (GIS) applications, indicate that white blood cells' features are most prominent in the 428-442 nm band for blood samples viewed under 20x and 50x magnification over a varying range of illumination intensities. The system has been shown to successfully segment blood cells based on their spectral-spatial information. These images could potentially be used in subsequent automated white blood cell segmentation and counting algorithms for performing in vivo white blood cell counting.
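The band-selection step described above can be sketched as follows: average the spectral bands where WBC features are most prominent (428-442 nm per the abstract) and threshold the result to obtain a cell mask. This is a minimal illustration in NumPy, not the actual pipeline or the Arrow system's API; the band range, threshold, and toy datacube are assumptions.

```python
import numpy as np

def segment_wbc(datacube, wavelengths, band_lo=428.0, band_hi=442.0, thresh=0.5):
    """datacube: (rows, cols, bands) array; wavelengths: (bands,) array in nm."""
    sel = (wavelengths >= band_lo) & (wavelengths <= band_hi)
    band_mean = datacube[:, :, sel].mean(axis=2)   # average the selected bands
    rng = band_mean.max() - band_mean.min()
    norm = (band_mean - band_mean.min()) / (rng + 1e-12)  # scale to [0, 1]
    return norm > thresh                            # boolean cell mask

# Toy datacube: 25 bands spanning 419-494 nm, matching the capture range above.
wavelengths = np.linspace(419.0, 494.0, 25)
cube = np.random.default_rng(0).random((64, 64, 25))
mask = segment_wbc(cube, wavelengths)
print(mask.shape, mask.dtype)
```

A real pipeline would replace the fixed threshold with the spectral-spatial classification the abstract mentions, but the band-averaging step is the same.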

    Advanced methods in reproductive medicine: Application of optical nanoscopy, artificial intelligence-assisted quantitative phase microscopy and mitochondrial DNA copy numbers to assess human sperm cells

    Declining fertility rates and populations are a matter of serious concern, especially in developed nations. Assisted Reproductive Technologies (ART), including in vitro fertilization (IVF), have provided great hope for infertility treatment and for maintaining population growth and social structure. With the help of ART, more than 8 million babies have already been born. Despite the worldwide expansion of ART, there are a number of open questions on IVF success rates. Male factors contribute to infertility as much as female factors; however, male infertility assessment is primarily focused on "semen quality". Therefore, the search for new semen parameters for male fertility evaluation and the exploration of the optimal method of sperm selection in IVF have been included among the top 10 research priorities for male infertility and medically assisted reproduction. The development of imaging systems coupled with image processing by Artificial Intelligence (AI) could be a revolutionary step for semen quality analysis and sperm cell selection in IVF procedures. For this work, we applied optical nanoscopy technologies to the analysis of human spermatozoa, i.e., label-based Structured Illumination Microscopy (SIM) and non-invasive Quantitative Phase Microscopy (QPM). The SIM results demonstrated a prominent contrast and resolution enhancement for subcellular structures of living sperm cells, especially for the mitochondria-containing midpiece, where features around the 100 nm length-scale were resolved. Further, label-free QPM combined with machine learning revealed the association between gradual loss of progressive motility and morphology changes of the sperm head after external exposure to various concentrations of hydrogen peroxide. Moreover, to recognize healthy and stress-affected sperm cells, we applied Deep Neural Networks (DNNs) to QPM images, achieving an accuracy of 85.6% on a dataset of 10,163 interferometric images of sperm cells.
Additionally, we summarized the evidence from the published literature regarding the association between mitochondrial DNA copy number (mtDNAcn) and semen quality. To conclude, we set up high-resolution imaging of living human sperm cells with a remarkable level of subcellular structural detail provided by SIM. Next, the morphological changes of sperm heads resulting from peroxidation were revealed by QPM; these may not be observable with the microscopy currently used in IVF settings. Furthermore, the implementation of DNNs for QPM image processing appears to be a promising tool for the automated classification and selection of sperm cells during IVF procedures. Finally, the results of our meta-analysis showed an association between mtDNAcn in human sperm cells and semen quality, which seems to be a relevant sperm parameter for routine clinical practice in male fertility assessment.
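The two-class healthy/stressed setup above boils down to training a classifier on images and reporting held-out accuracy. As a stand-in for the DNN (which the abstract does not detail), a minimal logistic-regression classifier on synthetic flattened "images" illustrates that train/evaluate loop; all data, dimensions, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n, d = 400, 64                                   # 400 tiny 8x8 "images", flattened
healthy = rng.normal(0.0, 1.0, (n // 2, d))      # class 0
stressed = rng.normal(0.8, 1.0, (n // 2, d))     # class 1, shifted mean
X = np.vstack([healthy, stressed])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

idx = rng.permutation(n)                         # shuffled train/test split
train, test = idx[:300], idx[300:]

w, b = np.zeros(d), 0.0
for _ in range(200):                             # plain gradient descent on log-loss
    p = 1.0 / (1.0 + np.exp(-(X[train] @ w + b)))
    w -= 0.1 * (X[train].T @ (p - y[train]) / len(train))
    b -= 0.1 * (p - y[train]).mean()

pred = (1.0 / (1.0 + np.exp(-(X[test] @ w + b)))) > 0.5
accuracy = (pred == y[test]).mean()
print(f"test accuracy: {accuracy:.3f}")
```

A DNN replaces the linear model with stacked nonlinear layers, but the accuracy figure is computed the same way: fraction of correct predictions on images held out of training.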

    Microscopy and Analysis

    Microscopes are tools of the utmost importance for a wide range of disciplines. Without them, it would have been impossible to stand where we stand today in terms of understanding the structure and functions of organelles and cells, tissue composition and metabolism, or the causes behind various pathologies and their progression. Our knowledge of basic and advanced materials is also intimately intertwined with the realm of microscopy, and progress in key fields of micro- and nanotechnology critically depends on high-resolution imaging systems. This volume includes a series of chapters that address highly significant scientific subjects from diverse areas of microscopy and analysis. Authoritative voices in their fields present their work or review recent trends, concepts, and applications, in a manner that is accessible to a broad readership from both within and outside their specialist areas.

    Roadmap for Optical Tweezers 2023

    Optical tweezers are tools made of light that enable contactless pushing, trapping, and manipulation of objects ranging from atoms to space light sails. Since the pioneering work by Arthur Ashkin in the 1970s, optical tweezers have evolved into sophisticated instruments and have been employed in a broad range of applications in life sciences, physics, and engineering. These include accurate force and torque measurement at the femtonewton level, microrheology of complex fluids, single micro- and nanoparticle spectroscopy, single-cell analysis, and statistical-physics experiments. This roadmap provides insights into current investigations involving optical forces and optical tweezers from their theoretical foundations to designs and setups. It also offers perspectives for applications to a wide range of research fields, from biophysics to space exploration.
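The femtonewton-level forces mentioned above follow from the fact that, for small displacements from the trap centre, an optical trap behaves as a Hookean spring, F = -kx. A quick back-of-the-envelope calculation shows the scale; the stiffness and displacement values are illustrative assumptions, not numbers from the roadmap.

```python
# Hookean approximation for a trapped bead near the trap centre: F = -k * x.
# Values below are illustrative, chosen to show the femtonewton scale.
kappa = 1e-6        # trap stiffness in N/m (equivalently 1 pN/um)
x = 5e-9            # bead displacement in m (5 nm)
force = kappa * x   # restoring-force magnitude in N
print(f"{force * 1e15:.1f} fN")  # → 5.0 fN
```

Nanometre-scale displacements in a piconewton-per-micrometre trap thus land squarely in the femtonewton regime, which is why calibrated position detection gives such sensitive force measurement.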

    Multimodal machine learning for intelligent mobility

    Scientific problems are solved by finding the optimal solution for a specific task. Some problems can be solved analytically, while others are solved using data-driven methods. The use of digital technologies to improve the transportation of people and goods, referred to as intelligent mobility, is one of the principal beneficiaries of data-driven solutions. Autonomous vehicles are at the heart of the developments that propel intelligent mobility. Due to the high dimensionality and complexity of real-world environments, it needs to become commonplace for intelligent mobility to use data-driven solutions, as it is near impossible to manually program decision-making logic for every eventuality. While recent developments in data-driven solutions such as deep learning allow machines to learn effectively from large datasets, applications of these techniques within safety-critical systems such as driverless cars remain scarce. Autonomous vehicles need to be able to make context-driven decisions autonomously in the different environments in which they operate. The recent literature on driverless vehicle research is heavily focused on road or highway environments and has discounted pedestrianized areas and indoor environments. These unstructured environments tend to have more clutter and change rapidly over time. Therefore, for intelligent mobility to make a significant impact on human life, it is vital to extend its application beyond structured environments. To further advance intelligent mobility, researchers need to take cues from multiple sensor streams and multiple machine learning algorithms so that decisions can be robust and reliable. Only then will machines be able to operate safely in unstructured and dynamic environments. Towards addressing these limitations, this thesis investigates data-driven solutions for crucial building blocks of intelligent mobility.
Specifically, the thesis investigates multimodal sensor data fusion, machine learning, multimodal deep representation learning, and their application to intelligent mobility. This work demonstrates that mobile robots can use multimodal machine learning to derive a driving policy and therefore make autonomous decisions. To facilitate the autonomous decisions necessary for safe driving algorithms, we present algorithms for free-space detection and human activity recognition. Driving these decision-making algorithms are specific datasets collected throughout this study: the Loughborough London Autonomous Vehicle dataset and the Loughborough London Human Activity Recognition dataset. The datasets were collected using an autonomous platform designed and developed in-house as part of this research activity. The proposed framework for free-space detection is based on an active learning paradigm that leverages the relative uncertainty of multimodal sensor data streams (ultrasound and camera). It utilizes an online learning methodology to continuously update the learnt model whenever the vehicle experiences new environments. The proposed free-space detection algorithm enables an autonomous vehicle to self-learn, evolve, and adapt to environments never encountered before. The results illustrate that the online learning mechanism is superior to one-off training of deep neural networks, which require large datasets to generalize to unfamiliar surroundings. The thesis takes the view that humans should be at the centre of any technological development related to artificial intelligence. This is imperative within the spectrum of intelligent mobility, where an autonomous vehicle should be aware of what humans are doing in its vicinity. Towards improving the robustness of human activity recognition, this thesis proposes a novel algorithm that classifies point-cloud data originating from Light Detection and Ranging (LiDAR) sensors.
The proposed algorithm leverages multimodality by using the camera data to identify humans and segment the region of interest in the point-cloud data. The corresponding 3-dimensional data were converted to a Fisher Vector representation before being classified by a deep Convolutional Neural Network. The proposed algorithm classifies the indoor activities performed by a human subject with an average precision of 90.3%. When compared to an alternative point-cloud classifier, PointNet [1], [2], the proposed framework outperformed it on all classes. The developed autonomous testbed for data collection and algorithm validation, together with the multimodal data-driven solutions for driverless cars, constitutes the major contribution of this thesis. It is anticipated that these results and the testbed will have significant implications for the future of intelligent mobility by accelerating the development of intelligent driverless vehicles.
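The camera-guided region-of-interest step described above can be sketched with a pinhole projection: LiDAR points are projected into the image plane, and only those falling inside a human bounding box detected in the camera frame are kept. The intrinsics, coordinates, and box below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def roi_points(points, K, box):
    """points: (N, 3) in camera coordinates (z forward); K: 3x3 intrinsics;
    box: (u_min, v_min, u_max, v_max) in pixels. Returns points inside the box."""
    pts = points[points[:, 2] > 0]       # keep only points in front of the camera
    uvw = (K @ pts.T).T                  # homogeneous pixel coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]        # perspective divide
    u_min, v_min, u_max, v_max = box
    inside = ((uv[:, 0] >= u_min) & (uv[:, 0] <= u_max) &
              (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max))
    return pts[inside]

# Toy intrinsics (focal length 500 px, principal point at 320, 240) and two points.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
pts = np.array([[0.0, 0.0, 5.0],    # projects to the image centre (320, 240)
                [3.0, 0.0, 5.0]])   # projects to (620, 240), outside the box
kept = roi_points(pts, K, (300, 200, 340, 280))
print(kept.shape)                   # → (1, 3)
```

The retained points would then be encoded (e.g., as a Fisher Vector, as in the thesis) and passed to the classifier; the extrinsic LiDAR-to-camera calibration, omitted here, is applied before projection in a real system.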

    Roadmap for optical tweezers

    Article written by a large number of authors; only the first-listed author, the name of the collaboration group (if any), and the authors affiliated with the UAM are referenced. Optical tweezers are tools made of light that enable contactless pushing, trapping, and manipulation of objects, ranging from atoms to space light sails. Since the pioneering work by Arthur Ashkin in the 1970s, optical tweezers have evolved into sophisticated instruments and have been employed in a broad range of applications in the life sciences, physics, and engineering. These include accurate force and torque measurement at the femtonewton level, microrheology of complex fluids, single micro- and nano-particle spectroscopy, single-cell analysis, and statistical-physics experiments. This roadmap provides insights into current investigations involving optical forces and optical tweezers from their theoretical foundations to designs and setups. It also offers perspectives for applications to a wide range of research fields, from biophysics to space exploration. European Commission (Horizon 2020, Project No. 812780).
