
    Light-microscopy methods in C. elegans research

    Ever since Caenorhabditis elegans was introduced as a model system, it has been tightly linked to microscopy, which has driven significant advances in our understanding of its biology over the last decades. Developing new technologies is therefore an essential part of the endeavor to gain further mechanistic insight into developmental biology. This review discusses state-of-the-art developments in quantitative light microscopy in the context of C. elegans research, as well as the impact these technologies have had on the field. We highlight future developments that currently promise to revolutionize biological research by combining sequencing-based single-cell technologies with high-resolution quantitative imaging.

    Two-step facial verification for mobile devices

    Advisors: Jacques Wainer, Fernanda Alcântara Andaló. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação.
    Abstract: Mobile devices, such as smartphones and tablets, have become far more popular and affordable in recent years. As a consequence of their ubiquity, these devices now carry all sorts of personal data (e.g. photos, text conversations, GPS coordinates, banking information) that should be accessed only by the device's owner. Even though knowledge-based procedures, such as entering a PIN or drawing a pattern, are still the main methods to secure the owner's identity, biometric traits have recently been employed for more secure and effortless authentication. Among them, face recognition has gained attention due to improvements in image-capturing devices and the availability of face images in social networks. In addition, the increase in computational resources, with multiple CPUs and GPUs, has enabled the design of more complex and robust models, such as deep neural networks. Although the capabilities of mobile devices keep growing, most recent face recognition techniques are still not designed with the characteristics of the mobile environment in mind, such as limited processing power, unstable connectivity and battery consumption. In this work, we propose a facial verification method optimized for the mobile environment.
    It consists of a two-tiered procedure that combines hand-crafted features (histogram of oriented gradients and local region principal component analysis) with a convolutional neural network to verify whether the person depicted in a picture corresponds to the device owner. We also propose the Hybrid-Fire Convolutional Neural Network, an architecture tweaked for mobile devices that processes encoded information from a pair of face images. Finally, we present a technique to adapt the method's acceptance threshold to images with characteristics different from those seen during training, using the device owner's enrolled gallery. The proposed solution performs on par with state-of-the-art face recognition methods, while having a model 16 times smaller and 4 times faster when processing an image on recent smartphones. We also collected a new dataset of 2873 selfie pictures from 56 identities under varied capture conditions, which we hope will support future research in this scenario.
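The gallery-based threshold adaptation described above can be sketched as follows. This is an illustrative reconstruction, not the dissertation's actual algorithm: the embedding vectors, the cosine-similarity metric and the `margin` blending parameter are all assumptions chosen to show the idea of pulling the acceptance threshold toward the owner's intra-gallery similarity.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two face-embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def adapted_threshold(gallery, base_threshold=0.5, margin=0.5):
    # Blend a fixed base threshold with the typical similarity among
    # the owner's enrolled images, so the decision boundary reflects
    # the capture conditions of the gallery (hypothetical scheme).
    sims = [cosine_similarity(gallery[i], gallery[j])
            for i in range(len(gallery)) for j in range(i + 1, len(gallery))]
    return (1 - margin) * base_threshold + margin * float(np.mean(sims))

def verify(probe, gallery, threshold):
    # Accept the probe if its best match against any enrolled
    # gallery image clears the adapted threshold.
    best = max(cosine_similarity(probe, g) for g in gallery)
    return best >= threshold
```

In this sketch a probe close to the enrolled embeddings is accepted, while an unrelated one falls below the adapted threshold.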

    3D CNN methods in biomedical image segmentation

    A clear trend in biomedical imaging is the integration of increasingly complex interpretative layers into the pure data-acquisition process. One of the most interesting and anticipated goals in the field is the automatic segmentation of objects of interest in extensive acquisition data, a target that would allow biomedical imaging to move beyond its use as a purely assistive tool and become a cornerstone of ambitious large-scale challenges such as the extensive quantitative study of the human brain. In 2019, convolutional neural networks represent the state of the art in biomedical image segmentation, and scientific interest from a variety of fields, spanning from automotive to natural resource exploration, converges on their development. While most applications of CNNs focus on single-image segmentation, biomedical image data (be it MRI, CT scans, microscopy, etc.) often benefit from a three-dimensional volumetric representation. This work explores a reformulation of the CNN segmentation problem that is native to the 3D nature of the data, with particular interest in applications to fluorescence microscopy volumetric data produced at the European Laboratories for Nonlinear Spectroscopy in the context of two large international human brain study projects: the Human Brain Project and the White House BRAIN Initiative.
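The key difference between slicewise 2D and natively 3D segmentation networks is that a 3D kernel aggregates context across adjacent slices. A minimal sketch of that building block, written as a naive valid-mode 3D convolution in plain NumPy (real networks would use an optimized library layer; the loop structure here is purely illustrative):

```python
import numpy as np

def conv3d(volume, kernel):
    # Naive valid-mode 3D convolution: the kernel slides along z, y and x,
    # so each output voxel sees context from neighboring slices, which a
    # slicewise 2D CNN cannot capture.
    kz, ky, kx = kernel.shape
    z, y, x = volume.shape
    out = np.zeros((z - kz + 1, y - ky + 1, x - kx + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i+kz, j:j+ky, k:k+kx] * kernel)
    return out
```

For a 4×4×4 volume of ones and a 3×3×3 kernel of ones, every output voxel sums 27 contributions across three slices.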

    Ant genera identification using an ensemble of convolutional neural networks

    Works requiring taxonomic knowledge face several challenges, such as the arduous identification of many taxa and an insufficient number of taxonomists to identify the great number of collected organisms. Machine learning tools, particularly convolutional neural networks (CNNs), are therefore welcome to automatically generate high-performance classifiers from available data. Supported by the image datasets available at AntWeb (www.antweb.org), the largest online database on ant biology, we propose an ensemble of CNNs to identify ant genera directly from the head, profile and dorsal perspectives of ant images. Transfer learning is also considered to improve the individual performance of the CNN classifiers. The performance achieved by the classifiers is diverse enough to promote a reduction in the overall classification error when they are combined in an ensemble, achieving an accuracy of over 80% on top-1 classification and over 90% on top-3 classification.
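The ensemble combination and the top-1/top-3 decision rules can be sketched as follows. This assumes (as one common choice, not necessarily the paper's exact fusion rule) that the per-view classifiers are combined by averaging their class-probability outputs:

```python
import numpy as np

def ensemble_predict(view_probs):
    # Average the class-probability vectors produced by the per-view
    # classifiers (e.g. head, profile and dorsal networks).
    return np.mean(view_probs, axis=0)

def top_k(probs, k):
    # Indices of the k highest-probability genera, best first.
    return [int(i) for i in np.argsort(probs)[::-1][:k]]
```

A top-3 prediction counts as correct when the true genus appears anywhere in `top_k(probs, 3)`, which is why top-3 accuracy is necessarily at least as high as top-1.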

    Development of optical methods for real-time whole-brain functional imaging of zebrafish neuronal activity

    Each one of us has, at least once in his life, smelled the scent of roses, read one canto of Dante’s Commedia or listened to the sound of the sea from a shell. All of this is possible thanks to the astonishing capabilities of an organ, the brain, that allows us to collect and organize perceptions coming from sensory organs and to produce behavioural responses accordingly. Studying an operating brain in a non-invasive way is extremely difficult in mammals, and particularly in humans. In the last decade, a small teleost fish, the zebrafish (Danio rerio), has been making its way into the field of neuroscience. The brain of a larval zebrafish is made up of 'only' 100,000 neurons and is completely transparent, making it optically accessible. Here, taking advantage of the best currently available technology, we devised optical solutions to investigate the dynamics of neuronal activity throughout the entire brain of zebrafish larvae.

    Neuron-level dynamics of oscillatory network structure and markerless tracking of kinematics during grasping

    Oscillatory synchrony is proposed to play an important role in flexible sensory-motor transformations. It is thereby assumed that changes in the oscillatory network structure at the level of single neurons lead to flexible information processing. Yet how the oscillatory network structure at the neuron level changes with different behavior remains elusive. To address this gap, we examined changes in the fronto-parietal oscillatory network structure at the neuron level while monkeys performed a flexible sensory-motor grasping task. We found that neurons formed separate subnetworks in the low-frequency and beta bands. The beta subnetwork was active during steady states and the low-frequency network during active states of the task, suggesting that the two frequencies are mutually exclusive at the neuron level. Furthermore, both frequency subnetworks reconfigured at the neuron level for different grip and context conditions, an effect that was mostly lost at any scale larger than single neurons in the network. Our results therefore suggest that the oscillatory network structure at the neuron level meets the necessary requirements for the coordination of flexible sensory-motor transformations. In addition, tracking hand kinematics is a crucial experimental requirement for analyzing the neuronal control of grasp movements. To this end, a 3D markerless, gloveless hand tracking system was developed using computer vision and deep learning techniques.
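A markerless 3D tracking pipeline of this kind typically detects 2D keypoints per camera with a deep network and then triangulates them across calibrated views. A minimal sketch of the geometric step, linear (DLT) triangulation from two cameras, is given below; the projection-matrix conventions are standard computer-vision assumptions, not details taken from this work:

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    # Linear (DLT) triangulation: recover a 3D point from its 2D
    # projections pt1, pt2 in two calibrated cameras with 3x4
    # projection matrices P1, P2. Each observation contributes two
    # homogeneous linear constraints; the 3D point is the null
    # vector of the stacked system, found via SVD.
    A = np.array([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

With one camera at the origin and a second translated along x, a point projected into both views is recovered exactly.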

    Identifying and ranking potential driver genes of Alzheimer's disease using multiview evidence aggregation

    MOTIVATION: Late-onset Alzheimer's disease currently has no known effective treatment options. To better understand the disease, new multi-omic datasets have recently been generated with the goal of identifying its molecular causes. However, most analytic studies using these datasets focus on unimodal analysis of the data. Here, we propose a data-driven approach to integrate multiple data types and analytic outcomes, aggregating evidence to support the hypothesis that a gene is a genetic driver of the disease. The main algorithmic contributions of our article are: (i) a general machine learning framework to learn the key characteristics of a few known driver genes from multiple feature sets and to identify other potential driver genes with similar feature representations, and (ii) a flexible ranking scheme with the ability to integrate external validation in the form of Genome-Wide Association Study summary statistics. While we currently focus on demonstrating the effectiveness of the approach using different analytic outcomes from RNA-Seq studies, the method is easily generalizable to other data modalities and analysis types. RESULTS: We demonstrate the utility of our machine learning algorithm on two benchmark multiview datasets by significantly outperforming the baseline approaches in predicting missing labels. We then use the algorithm to predict and rank potential drivers of Alzheimer's disease. We show that our ranked genes exhibit significant enrichment for single-nucleotide polymorphisms associated with Alzheimer's disease and are enriched in pathways previously associated with the disease. AVAILABILITY AND IMPLEMENTATION: Source code and links to all feature sets are available at https://github.com/Sage-Bionetworks/EvidenceAggregatedDriverRanking.
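The core idea of scoring genes by similarity to known drivers within each feature view, then aggregating ranks across views, can be sketched as follows. The cosine-similarity scoring and average-rank aggregation here are illustrative stand-ins, not the paper's actual model:

```python
import numpy as np

def view_scores(features, driver_idx):
    # Score each gene by its mean cosine similarity to the known
    # driver genes within one feature view (rows = genes).
    unit = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = unit @ unit[driver_idx].T  # similarity to each known driver
    return sims.mean(axis=1)

def aggregate_ranks(score_lists):
    # Combine per-view scores by average rank across views
    # (rank 0 = strongest candidate in a view).
    ranks = [np.argsort(np.argsort(-s)) for s in score_lists]
    return np.mean(ranks, axis=0)
```

An external validation signal, such as GWAS summary statistics, could enter the same scheme as one more ranked score list passed to `aggregate_ranks`.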