
    Automating image analysis by annotating landmarks with deep neural networks

    Image and video analysis is often a crucial step in the study of animal behavior and kinematics. Often these analyses require that the positions of one or more animal landmarks be annotated (marked) in numerous images. The process of annotating landmarks can require a significant amount of time and tedious labor, which motivates the need for algorithms that can automatically annotate landmarks. In the community of scientists that use image and video analysis to study the 3D flight of animals, there has been a trend of developing more automated approaches for annotating landmarks, yet they fall short of being generally applicable. Inspired by the success of Deep Neural Networks (DNNs) on many problems in the field of computer vision, we investigate how suitable DNNs are for accurate and automatic annotation of landmarks in video datasets representative of those collected by scientists studying animals. Our work shows, through extensive experimentation on videos of hawkmoths, that DNNs are suitable for automatic and accurate landmark localization. In particular, we show that one of our proposed DNNs is more accurate than the current best algorithm for automatic localization of landmarks on hawkmoth videos. Moreover, we demonstrate how these annotations can be used to quantitatively analyze the 3D flight of a hawkmoth. To facilitate the use of DNNs by scientists from many different fields, we provide a self-contained explanation of what DNNs are, how they work, and how to apply them to other datasets using the freely available library Caffe and supplemental code that we provide. https://arxiv.org/abs/1702.00583 (published version)
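    As a rough illustration of the kind of model the abstract discusses, the sketch below regresses the normalized (x, y) coordinates of a fixed number of landmarks from a grayscale video frame with a small convolutional network. It is a minimal PyTorch sketch rather than the authors' Caffe models; the architecture, layer sizes, and the NUM_LANDMARKS constant are illustrative assumptions.

```python
# Hedged sketch: a CNN that regresses landmark coordinates from a frame.
# Architecture and sizes are illustrative, not the paper's Caffe models.
import torch
import torch.nn as nn

NUM_LANDMARKS = 5  # assumption: number of landmarks tracked per frame

class LandmarkRegressor(nn.Module):
    def __init__(self, num_landmarks=NUM_LANDMARKS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        # Two (x, y) values per landmark, normalized to [0, 1].
        self.head = nn.Linear(64 * 4 * 4, num_landmarks * 2)

    def forward(self, frames):
        # frames: (batch, 1, H, W) grayscale video frames
        x = self.features(frames).flatten(1)
        return torch.sigmoid(self.head(x)).view(-1, NUM_LANDMARKS, 2)

# Training would minimize squared error against hand-annotated landmarks.
model = LandmarkRegressor()
frames = torch.rand(8, 1, 128, 128)          # dummy batch of frames
targets = torch.rand(8, NUM_LANDMARKS, 2)    # dummy normalized annotations
loss = nn.MSELoss()(model(frames), targets)
loss.backward()
```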

    Hierarchical ResNeXt Models for Breast Cancer Histology Image Classification

    Microscopic histology image analysis is a cornerstone in early detection of breast cancer. However, these images are very large, and manual analysis is error prone and very time consuming. Thus, automating this process is in high demand. We propose a hierarchical system of convolutional neural networks (CNNs) that automatically classifies patches of these images into four pathologies: normal, benign, in situ carcinoma, and invasive carcinoma. We evaluated our system on the BACH challenge dataset for image-wise classification and on a small dataset that we used to extend it. Using a train/test split of 75%/25%, we achieved an accuracy of 0.99 on the test split for the BACH dataset and 0.96 on that of the extension. On the test set of the BACH challenge, we reached an accuracy of 0.81, which ranks us 8th out of 51 teams.
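    A minimal sketch of how a hierarchical patch classifier of this kind could be wired up, using the ResNeXt backbone available in torchvision: a first network separates carcinoma from non-carcinoma patches, and a second-stage network refines each branch into the final four labels. The particular hierarchy, patch size, and backbones are assumptions, not the authors' exact configuration.

```python
# Hedged sketch of a two-stage hierarchical patch classifier; the exact
# hierarchy and backbones used in the paper may differ.
import torch
import torch.nn as nn
from torchvision.models import resnext50_32x4d

def make_binary_head():
    # ResNeXt backbone with a two-way output layer.
    model = resnext50_32x4d(weights=None)
    model.fc = nn.Linear(model.fc.in_features, 2)
    return model

stage1 = make_binary_head()           # carcinoma vs. non-carcinoma (assumed split)
stage2_carc = make_binary_head()      # in situ vs. invasive
stage2_noncarc = make_binary_head()   # normal vs. benign
for m in (stage1, stage2_carc, stage2_noncarc):
    m.eval()

LABELS = ["normal", "benign", "in situ carcinoma", "invasive carcinoma"]

@torch.no_grad()
def classify_patch(patch):
    # patch: (1, 3, 224, 224) RGB histology patch
    if stage1(patch).argmax(1).item() == 1:          # carcinoma branch
        sub = stage2_carc(patch).argmax(1).item()
        return LABELS[2 + sub]
    sub = stage2_noncarc(patch).argmax(1).item()     # non-carcinoma branch
    return LABELS[sub]

print(classify_patch(torch.rand(1, 3, 224, 224)))
```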

    Understanding Neural Pathways in Zebrafish through Deep Learning and High Resolution Electron Microscope Data

    The tracing of neural pathways through large volumes of image data is an incredibly tedious and time-consuming process that significantly encumbers progress in neuroscience. We are exploring deep learning's potential to automate segmentation of high-resolution scanning electron microscope (SEM) image data to remove that barrier. We have started with neural pathway tracing through 5.1GB of whole-brain serial-section slices from larval zebrafish collected by the Center for Brain Science at Harvard University. This kind of manual image segmentation requires years of careful work to properly trace the neural pathways in an organism as small as a zebrafish larva (approximately 5mm in total body length). By automating this process, we would vastly improve productivity, leading to faster data analysis and breakthroughs in understanding the complexity of the brain. We will build upon prior attempts to employ deep learning for automatic image segmentation, extending methods for unconventional deep learning data. Comment: 8 pages, 5 figures (1a to 5c), PEARC '18: Practice and Experience in Advanced Research Computing, July 22-26, 2018, Pittsburgh, PA, USA
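    A minimal sketch of the general approach, assuming a small encoder-decoder network that produces a per-pixel membrane/background decision for each SEM tile; the actual architecture used in the project is not specified in the abstract.

```python
# Hedged sketch of an encoder-decoder segmentation network for EM slices;
# layer sizes and the binary membrane/background target are assumptions.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # per-pixel logit: membrane vs. background
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Per-pixel binary cross-entropy against manually traced masks.
net = TinySegNet()
slices = torch.rand(4, 1, 256, 256)                     # dummy SEM image tiles
masks = torch.randint(0, 2, (4, 1, 256, 256)).float()   # dummy traced masks
loss = nn.BCEWithLogitsLoss()(net(slices), masks)
```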

    Automation of pollen analysis using a computer microscope : a thesis presented in partial fulfilment of the requirements for the degree of Master of Engineering in Computer Systems Engineering at Massey University

    The classification and counting of pollen is an important tool in the understanding of processes in agriculture, forestry, medicine and ecology. Current pollen analysis methods are manual, require expert operators, and are time consuming. Significant research has been carried out into the automation of pollen analysis; however, that work has mostly been limited to the classification of pollen. This thesis considers the problem of automating the classification and counting of pollen from the image capture stage. Current pollen analysis methods use expensive and bulky conventional optical microscopes. Using a solid-state image sensor instead of the human eye removes many of the constraints on the design of an optical microscope. Initially the goal was to develop a single-lens microscope for imaging pollen. In-depth investigation and experimentation have shown that this is not possible. Instead, a computer microscope has been developed which uses only a standard microscope objective and an image sensor to image pollen. The prototype computer microscope produces images of comparable quality to an expensive compound microscope at a tenth of the cost. A segmentation system has been developed for transforming images of a pollen slide, which contain both pollen and detritus, into images of individual pollen suitable for classification. The segmentation system uses adaptive thresholds and edge detection to isolate the pollen in the images. The automated pollen analysis system illustrated in this thesis has been used to capture and analyse four pollen taxa with a 96% success rate in identification. Since the image capture and segmentation stages described here do not affect the classification stage, it is anticipated that the system is capable of classifying 16 pollen taxa, as demonstrated in earlier research.
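    A minimal sketch of the segmentation step described above (adaptive thresholding plus edge detection to isolate pollen from detritus), written with OpenCV; the threshold block size, Canny limits, and minimum-area filter are assumed values, not those used in the thesis.

```python
# Hedged sketch of an adaptive-threshold + edge-detection segmentation step,
# in the spirit of the pipeline described; parameter values are assumptions.
import cv2
import numpy as np

def extract_pollen_candidates(slide_gray, min_area=200):
    # Adaptive threshold copes with uneven illumination across the slide.
    mask = cv2.adaptiveThreshold(slide_gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY_INV, 51, 5)
    # An edge map helps separate grains from low-contrast background detritus.
    edges = cv2.Canny(slide_gray, 50, 150)
    mask = cv2.bitwise_or(mask, edges)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))

    crops = []
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h >= min_area:                  # discard tiny detritus regions
            crops.append(slide_gray[y:y + h, x:x + w])
    return crops

# Usage (hypothetical file name):
# crops = extract_pollen_candidates(cv2.imread("slide.png", cv2.IMREAD_GRAYSCALE))
```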

    Computational Contributions to the Automation of Agriculture

    The purpose of this paper is to explore ways that computational advancements have enabled the complete automation of agriculture from start to finish. With a major need for agricultural advancements because of food and water shortages, some farmers have begun creating their own solutions to these problems. Primarily explored in this paper, however, are current research topics in the automation of agriculture. Digital agriculture is surveyed, focusing on ways that data collection can be beneficial. Additionally, self-driving technology is explored with emphasis on farming applications. Machine vision technology is also detailed, with specific application to weed management and harvesting of crops. Finally, the effects of automating agriculture are briefly considered, including labor, the environment, and direct effects on farmers.

    TasselNet: Counting maize tassels in the wild via local counts regression network

    Accurately counting maize tassels is important for monitoring the growth status of maize plants. This tedious task, however, is still mainly done by manual effort. In the context of modern plant phenotyping, automating this task is required to meet the need for large-scale analysis of genotype and phenotype. In recent years, computer vision technologies have experienced a significant breakthrough due to the emergence of large-scale datasets and increased computational resources. Naturally, image-based approaches have also received much attention in plant-related studies. Yet most image-based systems for plant phenotyping are deployed under controlled laboratory environments. When transferring the application scenario to unconstrained in-field conditions, intrinsic and extrinsic variations in the wild pose great challenges for accurate counting of maize tassels, which goes beyond the ability of conventional image processing techniques. This calls for more robust computer vision approaches to address in-field variations. This paper studies the in-field counting problem of maize tassels. To our knowledge, this is the first time that a plant-related counting problem is considered using computer vision technologies under an unconstrained field-based environment. Comment: 14 pages
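    Although the abstract does not spell out the architecture, the title points to local counts regression: a fully convolutional network predicts a count for each local region, and the region counts are summed into an image-level estimate. The sketch below illustrates that idea with assumed layer sizes and receptive field, not the paper's exact design.

```python
# Hedged sketch of local counts regression: a fully convolutional network
# predicts a non-negative count per local region; region counts are summed.
# Layer sizes and receptive field are illustrative, not TasselNet's design.
import torch
import torch.nn as nn

class LocalCountNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 1, 1), nn.ReLU(),   # non-negative local count per cell
        )

    def forward(self, images):
        count_map = self.net(images)           # (batch, 1, H/4, W/4)
        return count_map, count_map.sum(dim=(1, 2, 3))

model = LocalCountNet()
images = torch.rand(2, 3, 256, 256)            # dummy in-field images
count_map, totals = model(images)              # totals: estimated tassels per image
```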

    Robot Autonomy for Surgery

    Autonomous surgery involves having surgical tasks performed by a robot operating under its own will, with partial or no human involvement. There are several important advantages of automation in surgery, including increased precision of care due to sub-millimeter robot control, real-time use of biosignals for interventional care, improvements to surgical efficiency and execution, and computer-aided guidance under various medical imaging and sensing modalities. While these methods may displace some tasks of surgical teams and individual surgeons, they also present new capabilities in interventions that are too difficult for humans or go beyond their skills. In this chapter, we provide an overview of robot autonomy in commercial use and in research, and present some of the challenges faced in developing autonomous surgical robots.