
    DeepSynth: Three-dimensional nuclear segmentation of biological images using neural networks trained with synthetic data

    The scale of biological microscopy has increased dramatically over the past ten years, with the development of new modalities supporting collection of high-resolution fluorescence image volumes spanning hundreds of microns, if not millimeters. The size and complexity of these volumes are such that quantitative analysis requires automated image-processing methods to identify and characterize individual cells. For many workflows, this process starts with segmentation of nuclei, which, due to their ubiquity, ease of labeling, and relatively simple structure, are appealing targets for automated detection of individual cells. However, in the context of large, three-dimensional image volumes, nuclei present many challenges to automated segmentation, such that conventional approaches are seldom effective and/or robust. Techniques based upon deep learning have shown great promise, but enthusiasm for applying them is tempered by the need to generate training data, an arduous task, particularly in three dimensions. Here we present results of a new technique for nuclear segmentation using neural networks trained on synthetic data. Comparisons with results obtained using commonly used image-processing packages demonstrate that DeepSynth provides the superior results associated with deep-learning techniques without the need for manual annotation.
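    The synthetic-training-data idea described above can be sketched roughly as follows. This is only an illustration of the general approach, not the authors' pipeline; the function name, volume size, and nucleus parameters are all invented for the example. Random ellipsoidal "nuclei" are rendered into a 3-D volume, so matched image/label pairs are produced without any manual annotation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def synthetic_volume(shape=(32, 32, 32), n_nuclei=5):
        # Instance-label volume: 0 is background, k marks the k-th nucleus.
        labels = np.zeros(shape, dtype=np.int32)
        zz, yy, xx = np.indices(shape)
        for k in range(1, n_nuclei + 1):
            c = rng.uniform(6, np.array(shape) - 6)   # nucleus centre
            r = rng.uniform(3, 5, size=3)             # per-axis radii
            mask = (((zz - c[0]) / r[0]) ** 2 +
                    ((yy - c[1]) / r[1]) ** 2 +
                    ((xx - c[2]) / r[2]) ** 2) <= 1.0
            labels[mask] = k
        # Simulated fluorescence image: bright nuclei plus sensor noise.
        image = 0.8 * (labels > 0) + rng.normal(0, 0.05, shape)
        return image, labels

    image, labels = synthetic_volume()
    print(image.shape, labels.max())
    ```

    Each call yields a training pair for free, which is the point of the approach: the network never needs a hand-annotated volume.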

    Drone and sensor technology for sustainable weed management: a review

    Weeds are amongst the most impactful biotic factors in agriculture, causing substantial yield losses worldwide. Integrated Weed Management coupled with the use of Unmanned Aerial Vehicles (drones) allows for Site-Specific Weed Management, a highly efficient methodology that is also beneficial to the environment. The identification of weed patches in a cultivated field can be achieved by combining image acquisition by drones with further processing by machine-learning techniques. Specific algorithms can be trained to manage weed removal by Autonomous Weeding Robot systems via herbicide spraying or mechanical procedures. However, scientific and technical understanding of the specific goals and available technology is necessary to advance rapidly in this field. In this review, we provide an overview of precision weed control with a focus on the potential and practical use of the most advanced sensors available on the market. Much effort is still needed to fully understand weed population dynamics and their competition with crops so as to implement this approach in real agricultural contexts.

    Applications of Emerging Smart Technologies in Farming Systems: A Review

    The future of farming systems depends mainly on adopting innovative intelligent and smart technologies. The agricultural sector's growth and progress are more critical to human survival than those of any other industry. Extensive multidisciplinary research is happening worldwide on adopting intelligent technologies in farming systems. Nevertheless, when it comes to handling realistic challenges in making autonomous decisions and predictive solutions in farming, applications of Information and Communications Technologies (ICT) need to be utilized more. Information derived from data works best on year-to-year outcomes, disease risk, market patterns, prices, or customer needs, and ultimately facilitates farmers' decision-making to increase crop and livestock production. Innovative technologies allow the analysis and correlation of information on seed quality, soil types, infestation agents, weather conditions, etc. This review highlights the concepts, methods, and applications of various futuristic cognitive innovative technologies, along with the critical roles they play in different aspects of farming systems, such as Artificial Intelligence (AI), IoT, Neural Networks, the utilization of unmanned aerial vehicles (UAVs), Big Data analytics, Blockchain technology, etc.

    Multifaceted Analysis of Fine-Tuning in Deep Model for Visual Recognition

    In recent years, convolutional neural networks (CNNs) have achieved impressive performance in various visual recognition scenarios. CNNs trained on large labeled datasets not only obtain significant performance on most challenging benchmarks but also provide powerful representations, which can be used for a wide range of other tasks. However, the requirement of massive amounts of data to train deep neural networks is a major drawback of these models, as the available data is usually limited or imbalanced. Fine-tuning (FT) is an effective way to transfer knowledge learned on a source dataset to a target task. In this paper, we introduce and systematically investigate several factors that influence the performance of fine-tuning for visual recognition. These factors include parameters of the retraining procedure (e.g., the initial learning rate of fine-tuning), the distribution of the source and target data (e.g., the number of categories in the source dataset, the distance between the source and target datasets), and so on. We quantitatively and qualitatively analyze these factors, evaluate their influence, and present many empirical observations. The results reveal insights into how fine-tuning changes CNN parameters and provide useful, evidence-backed intuitions about how to implement fine-tuning for computer vision tasks. (Accepted by ACM Transactions on Data Science.)
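    The basic transfer-learning setup studied in this kind of work can be sketched in miniature. The example below is illustrative only, not the paper's method: a frozen "backbone" (here a fixed random tanh projection standing in for pretrained CNN layers) produces features, and only a new task head is retrained on the target data, with the learning rate playing the role of the "initial learning rate of fine-tuning" mentioned in the abstract. All names and the toy data are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def extract_features(x):
        # Stand-in for a frozen pretrained backbone: a fixed projection
        # followed by a tanh nonlinearity. Its weights are never updated.
        rng_backbone = np.random.default_rng(42)
        W_frozen = 0.5 * rng_backbone.normal(size=(x.shape[1], 8))
        return np.tanh(x @ W_frozen)

    # Toy target task: label depends on the first two input dimensions.
    x = rng.normal(size=(200, 4))
    y = (x[:, 0] + x[:, 1] > 0).astype(float)

    feats = extract_features(x)      # frozen representation of the target data
    w = np.zeros(feats.shape[1])     # new task head, trained from scratch
    b = 0.0
    lr = 0.1                         # the "initial learning rate" knob

    for _ in range(500):             # gradient descent on the logistic loss
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
        grad = p - y
        w -= lr * feats.T @ grad / len(y)
        b -= lr * grad.mean()

    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    acc = ((p > 0.5) == y).mean()
    print(f"target-task accuracy with frozen features: {acc:.2f}")
    ```

    Full fine-tuning, as analyzed in the paper, would additionally update the backbone weights with a small learning rate rather than keeping them frozen; this sketch shows only the head-retraining end of that spectrum.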

    Explainable artificial intelligence through graph theory by generalized social network analysis-based classifier

    We propose a new type of supervised visual machine learning classifier, GSNAc, based on graph theory and social network analysis techniques. In a previous study, we employed social network analysis techniques and introduced a novel classification model (called Social Network Analysis-based Classifier-SNAc) which efficiently works with time-series numerical datasets. In this study, we have extended SNAc to work with any type of tabular data by showing its classification efficiency on a broader collection of datasets that may contain numerical and categorical features. This version of GSNAc simply works by transforming traditional tabular data into a network where samples of the tabular dataset are represented as nodes and similarities between the samples are reflected as edges connecting the corresponding nodes. The raw network graph is further simplified and enriched by its edge space to extract a visualizable 'graph classifier model-GCM'. The concept of the GSNAc classification model relies on the study of node similarities over network graphs. In the prediction step, the GSNAc model maps test nodes into GCM, and evaluates their average similarity to classes by employing vectorial and topological metrics. The novel side of this research lies in transforming multidimensional data into a 2D visualizable domain. This is realized by converting a conventional dataset into a network of 'samples' and predicting classes after a careful and detailed network analysis. We exhibit the classification performance of GSNAc as an effective classifier by comparing it with several well-established machine learning classifiers using some popular benchmark datasets. GSNAc has demonstrated superior or comparable performance compared to other classifiers. Additionally, it introduces a visually comprehensible process for the benefit of end-users. 
As a result, the spin-off contribution of GSNAc lies in the interpretability of the prediction task, since the process is human-comprehensible and highly visual.
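The core transformation described above can be sketched in a few lines. This is a hedged illustration of the idea, not the authors' implementation: tabular samples become graph nodes, pairwise similarities become weighted edges, and a test node is assigned the class whose nodes it resembles most on average. The similarity measure, data, and function names are all invented for the example.

```python
import numpy as np

def cosine_sim(a, b):
    # One possible node-similarity measure for numerical features.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy tabular dataset: rows are samples (graph nodes) with class labels.
train_x = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]])
train_y = np.array([0, 0, 1, 1])

# Edge space of the graph: similarity between every pair of training
# nodes. A real pipeline would prune weak edges to simplify the graph.
n = len(train_x)
adj = np.array([[cosine_sim(train_x[i], train_x[j]) for j in range(n)]
                for i in range(n)])

def predict(test_node):
    # Map the test node into the graph and compare its average
    # similarity to the nodes of each class.
    sims = np.array([cosine_sim(test_node, tx) for tx in train_x])
    scores = {c: sims[train_y == c].mean() for c in np.unique(train_y)}
    return max(scores, key=scores.get)

print(predict(np.array([0.95, 0.15])))
```

Because the classifier is literally a small weighted graph, the adjacency structure itself can be drawn, which is the source of the visual interpretability the abstract emphasizes.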

    Doctor of Philosophy

    Scene labeling is the problem of assigning an object label to each pixel of a given image. It is the primary step towards image understanding and unifies object recognition and image segmentation in a single framework. A perfect scene labeling framework detects and densely labels every region and every object that exists in an image. This task is of substantial importance in a wide range of applications in computer vision. Contextual information plays an important role in scene labeling frameworks. A contextual model utilizes the relationships among the objects in a scene to facilitate object detection and image segmentation. Using contextual information in an effective way is one of the main questions that should be answered in any scene labeling framework. In this dissertation, we develop two scene labeling frameworks that rely heavily on contextual information to improve the performance over state-of-the-art methods. The first model, called the multiclass multiscale contextual model (MCMS), uses contextual information from multiple objects and at different scales for learning discriminative models in a supervised setting. The MCMS model incorporates cross-object and inter-object information into one probabilistic framework, and thus is able to capture geometrical relationships and dependencies among multiple objects in addition to local information from each single object present in an image. The second model, called the contextual hierarchical model (CHM), learns contextual information in a hierarchy for scene labeling. At each level of the hierarchy, a classifier is trained based on downsampled input images and outputs of previous levels. The CHM then incorporates the resulting multiresolution contextual information into a classifier to segment the input image at original resolution. This training strategy allows for optimization of a joint posterior probability at multiple resolutions through the hierarchy.
We demonstrate the performance of CHM on different challenging tasks such as outdoor scene labeling and edge detection in natural images and membrane detection in electron microscopy images. We also introduce two novel classification methods. WNS-AdaBoost speeds up the training of AdaBoost by providing a compact representation of a training set. Disjunctive normal random forest (DNRF) is an ensemble method that is able to learn complex decision boundaries and achieves low generalization error by optimizing a single objective function for each weak classifier in the ensemble. Finally, a segmentation framework is introduced that exploits both shape information and regional statistics to segment irregularly shaped intracellular structures such as mitochondria in electron microscopy images.
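The contextual-hierarchy idea behind CHM can be illustrated with a toy two-level sketch. This is not the dissertation's implementation: the "classifier" here is a trivial intensity threshold standing in for a trained model, and all names and sizes are invented. The structure it shows is the one the abstract describes: classify a downsampled copy of the image, upsample that coarse output back to the original resolution, and feed it alongside the original image into the final per-pixel decision.

```python
import numpy as np

def downsample(img):
    # 2x2 average pooling to form the next (coarser) level.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    # Nearest-neighbour upsampling back to the finer resolution.
    return np.kron(img, np.ones((2, 2)))

def level_scorer(img, thresh=0.5):
    # Stand-in for a trained per-pixel classifier at one level.
    return (img > thresh).astype(float)

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0                       # a bright square to "segment"

coarse = level_scorer(downsample(img))    # coarse-level labeling
context = upsample(coarse)                # contextual map at full resolution
features = np.stack([img, context])       # multiresolution feature stack

# The final labeling combines local evidence with the coarse context.
final = level_scorer(features.mean(axis=0))
print(final.sum())
```

A real CHM stacks several such levels and trains a genuine classifier at each one, so that each level's decisions are conditioned on progressively wider spatial context.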