
    Machine Learning for Instance Segmentation

    Volumetric electron microscopy images can be used for connectomics, the study of brain connectivity at the cellular level. A prerequisite for this inquiry is the automatic identification of neural cells, which requires machine learning algorithms and, in particular, efficient image segmentation algorithms. In this thesis, we develop new algorithms for this task. In the first part we provide, for the first time in this field, a method for training a neural network to predict optimal input data for a watershed algorithm. We demonstrate its superior performance compared to other segmentation methods in its category. In the second part, we develop an efficient watershed-based algorithm for weighted graph partitioning, the Mutex Watershed, which is the first to use negative edge weights. We show that it is intimately related to the multicut and achieves cutting-edge performance on a connectomics challenge. Our algorithm is currently used by the leaders of two connectomics challenges. Finally, motivated by inpainting neural networks, we create a method to learn the graph weights without any supervision.
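    The core procedure of the Mutex Watershed can be summarised as a Kruskal-style pass over the graph edges. The Python sketch below is a minimal illustration written from that description; the class name, the edge format (u, v, weight, is_repulsive) and the mutex bookkeeping are illustrative assumptions, not the authors' implementation.

        class MutexWatershed:
            """Kruskal-style clustering with attractive and repulsive (mutex) edges."""

            def __init__(self, n_nodes):
                self.parent = list(range(n_nodes))            # union-find forest
                self.mutex = [set() for _ in range(n_nodes)]  # forbidden merges, keyed by cluster root

            def find(self, x):
                # root lookup with path halving
                while self.parent[x] != x:
                    self.parent[x] = self.parent[self.parent[x]]
                    x = self.parent[x]
                return x

            def cluster(self, edges):
                # edges: iterable of (u, v, weight, is_repulsive), processed by decreasing absolute weight
                for u, v, w, repulsive in sorted(edges, key=lambda e: -abs(e[2])):
                    ru, rv = self.find(u), self.find(v)
                    if ru == rv:
                        continue
                    if repulsive:
                        # record a mutual-exclusion constraint between the two clusters
                        self.mutex[ru].add(rv)
                        self.mutex[rv].add(ru)
                    elif rv not in self.mutex[ru]:
                        # merge rv into ru and transfer rv's mutex constraints to ru
                        self.parent[rv] = ru
                        for m in self.mutex[rv]:
                            self.mutex[m].discard(rv)
                            self.mutex[m].add(ru)
                            self.mutex[ru].add(m)
                        self.mutex[rv].clear()
                return [self.find(i) for i in range(len(self.parent))]

    In an image setting, attractive edges would typically come from short-range affinities and repulsive edges from long-range ones; the returned root labels then define the segmentation.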

    Automatic Segmentation of Cells of Different Types in Fluorescence Microscopy Images

    Recognition of different cell compartments, types of cells, and their interactions is a critical aspect of quantitative cell biology. It provides valuable insight into cellular and subcellular interactions and into the mechanisms of biological processes such as cancer cell dissemination, organ development and wound healing. Quantitative analysis of cell images is also the mainstay of numerous clinical diagnostic and grading procedures, for example in cancer, immunological, infectious, heart and lung disease. Automating the quantification of cellular biological samples requires segmenting different cellular and sub-cellular structures in microscopy images. However, automating this problem has proven to be non-trivial, as it requires solving multi-class image segmentation tasks that are challenging owing to the high similarity of objects from different classes and to irregularly shaped structures. This thesis focuses on the development and application of probabilistic graphical models to multi-class cell segmentation. Graphical models can improve segmentation accuracy through their ability to exploit prior knowledge and model inter-class dependencies. Directed acyclic graphs, such as trees, have been widely used to model top-down statistical dependencies as a prior for improved image segmentation. However, trees can capture only a few inter-class constraints. To overcome this limitation, this thesis proposes polytree graphical models, which capture label proximity relations more naturally than tree-based approaches. Polytrees can effectively impose prior knowledge on the inclusion of different classes by capturing both same-level and across-level dependencies. A novel recursive mechanism based on two-pass message passing is developed to efficiently calculate closed-form posteriors of graph nodes on polytrees. Furthermore, since an accurate and sufficiently large ground truth is not always available for training segmentation algorithms, a weakly supervised framework is developed that employs polytrees for multi-class segmentation, reducing the need for training by modeling prior knowledge during segmentation. A hierarchical graph is generated over the superpixels in the image, node labels are inferred through a novel efficient message-passing algorithm, and the model parameters are optimized with Expectation Maximization (EM). Evaluation on the segmentation of simulated data and multiple publicly available fluorescence microscopy datasets indicates that the proposed method outperforms the state of the art. The proposed method has also been assessed on predicting possible segmentation errors and shown to outperform trees. This can pave the way to calculating uncertainty measures on the resulting segmentation and guiding subsequent segmentation refinement, which can be useful in the development of an interactive segmentation framework.
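    For context, exact two-pass inference on a polytree follows Pearl's belief-propagation recursions; the standard form is reproduced below as a reference point, not as a reproduction of the thesis's exact closed-form equations. For a node X with parents U_1, ..., U_n and children Y_1, ..., Y_m, the posterior belief combines messages arriving from both directions:

        \[
        \mathrm{BEL}(x) = \alpha\,\lambda(x)\,\pi(x), \qquad
        \lambda(x) = \prod_{j=1}^{m} \lambda_{Y_j}(x), \qquad
        \pi(x) = \sum_{u_1,\dots,u_n} P(x \mid u_1,\dots,u_n) \prod_{i=1}^{n} \pi_X(u_i),
        \]

    with the messages sent to parents and children given by

        \[
        \lambda_X(u_i) = \beta \sum_{x} \lambda(x) \sum_{u_k,\; k \neq i} P(x \mid u_1,\dots,u_n) \prod_{k \neq i} \pi_X(u_k), \qquad
        \pi_{Y_j}(x) = \alpha\,\pi(x) \prod_{k \neq j} \lambda_{Y_k}(x),
        \]

    where α and β are normalization constants. Two passes over the polytree (leaves to a root and back) suffice to make all messages, and hence all posteriors, exact.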

    High performance computing for 3D image segmentation

    Digital image processing is a very popular and still very promising field of science, which has been successfully applied to numerous areas and problems, reaching fields like forensic analysis, security systems, multimedia processing, aerospace, automotive, and many more. A very important part of the image processing area is image segmentation. This refers to the task of partitioning a given image into multiple regions and is typically used to locate and mark objects and boundaries in input scenes. After segmentation the image represents a set of data far more suitable for further algorithmic processing and decision making. Image segmentation algorithms are a very broad field and they have received a significant amount of research interest. A good example of an area in which image processing plays a constantly growing role is the field of medical solutions. The expectations and demands that are presented in this branch of science are very high and difficult for the applied technology to meet. The problems are challenging and the potential benefits are significant and clearly visible. For over thirty years image processing has been applied to different problems and questions in medicine, and practitioners have exploited the rich possibilities that it offers. As a result, the field of medicine has seen significant improvements in the interpretation of examined medical data. Clearly, medical knowledge has also evolved significantly over these years, as has the medical equipment that serves doctors and researchers. The common computer hardware present in homes, offices and laboratories is also constantly evolving and changing. All of these factors have sculpted the shape of modern image processing techniques and established the ways in which they are currently used and developed. Modern medical image processing is centered around 3D images with high spatial and temporal resolution, which can bring a tremendous amount of data to medical practitioners. Processing such large sets of data is not an easy task and requires high computational power. Furthermore, at present computational power is not as easily available as in recent years, as the growth in the capability of a single processing unit is very limited; a trend towards multi-unit processing and parallelization of the workload is clearly visible. Therefore, in order to continue the development of more complex and more advanced image processing techniques, a new direction is necessary. A very interesting family of image segmentation algorithms, which has been gaining a lot of focus in the last three decades, is called Deformable Models. They are based on the concept of placing a geometrical object in the scene of interest and deforming it until it assumes the shape of the objects of interest. This process is usually guided by several forces, which originate in mathematical functions, features of the input images, and other constraints of the deformation process, like object curvature or continuity. The very desirable features of Deformable Models include their high capability for customization and specialization for different tasks, as well as their extensibility with various approaches for incorporating prior knowledge. This set of characteristics makes Deformable Models a very efficient approach, capable of delivering results in competitive times and with very good segmentation quality, robust to noisy and incomplete data.
However, despite the large amount of work carried out in this area, Deformable Models still suffer from a number of drawbacks. Those that have received the most attention include sensitivity to the initial position and shape of the model, sensitivity to noise in the input images and to flawed input data, and the need for user supervision over the process. The work described in this thesis aims at addressing the problems of modern image segmentation that arise from the combination of the above-mentioned factors: the significant growth of image volume sizes and the growing complexity of image processing algorithms, coupled with the change in processor development and the turn towards multiple processing units instead of growing bus speeds and the number of operations per second of a single processing unit. We present our innovative model for 3D image segmentation, called the Whole Mesh Deformation (WMD) model, which offers a set of highly desirable features that successfully address the above-mentioned requirements. Our model has been designed specifically for execution on parallel architectures and with the purpose of working well with the very large 3D images created by modern medical acquisition devices. Our solution is based on Deformable Models and is characterized by a very effective and precise segmentation capability. The proposed WMD model uses a 3D mesh instead of a contour or a surface to represent the segmented shapes of interest, which allows exploiting more information in the image and obtaining results in shorter times. The model handles topology changes well and allows effective parallelization of the workflow, which makes it a very good choice for large data-sets. In this thesis we present a precise model description, followed by experiments on artificial images and real medical data.
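    To make the deformable-model family concrete, the sketch below shows a generic, stripped-down iteration in Python: mesh vertices move under an internal smoothing force plus an external image-derived force. It illustrates only the general mechanism that such models share; it is not the WMD model, and the parameter names and the external_force interface are illustrative assumptions.

        import numpy as np

        def deform(vertices, neighbors, external_force, alpha=0.2, beta=1.0, n_iters=100):
            """vertices: (N, 3) array; neighbors: list of neighbour-index lists per vertex;
            external_force: callable mapping (N, 3) positions to (N, 3) force vectors."""
            v = vertices.copy().astype(float)
            for _ in range(n_iters):
                # internal force: pull each vertex towards the centroid of its neighbours
                centroids = np.array([v[nb].mean(axis=0) for nb in neighbors])
                internal = centroids - v
                # external force: e.g. the gradient of an edge-strength volume sampled at v
                external = external_force(v)
                v += alpha * internal + beta * external
            return v

    The balance between alpha (smoothness) and beta (attraction to image features) is what the various forces described above control in practice.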

    Sustainable Reservoir Management Approaches under Impacts of Climate Change - A Case Study of Mangla Reservoir, Pakistan

    Reservoir sedimentation is a major issue for water resource management around the world. It has serious economic, environmental, and social consequences, such as reduced water storage capacity, increased flooding risk, decreased hydropower generation, and deteriorated water quality. Increased rainfall intensity, higher temperatures, and more extreme weather events due to climate change are expected to exacerbate the problem of reservoir sedimentation. As a result, sedimentation must be managed to ensure the long-term viability of reservoirs and their associated infrastructure. Effective reservoir sedimentation management in the face of climate change necessitates an understanding of the sedimentation process and the factors that influence it, such as land use practices, erosion, and climate. Monitoring and modelling sedimentation rates are also useful tools for forecasting future impacts and making management decisions. The goal of this research is to create long-term reservoir management strategies in the face of climate change by simulating the effects of various reservoir-operating strategies on reservoir sedimentation and sediment delta movement at Mangla Reservoir in Pakistan (the second-largest dam in the country). In order to assess the impact of sedimentation on the Mangla Reservoir and its useful life, a framework was developed. This framework incorporates hydrological and morphodynamic models as well as various soft computing models. In addition to taking climate change uncertainty into consideration, the proposed framework also incorporates sediment source, sediment delivery, and reservoir morphology changes. Furthermore, the purpose of this study is to provide a practical methodology based on the limited data available. The first phase of this study investigated how to accurately quantify missing suspended sediment load (SSL) data in rivers using various techniques, such as sediment rating curves (SRC) and soft computing models (SCMs), including local linear regression (LLR), artificial neural networks (ANN) and wavelet-cum-ANN (WANN). Further, the Gamma test and M-test were performed to select the best input variables and an appropriate data length for SCM development. Based on an evaluation of the outcomes of all leading models for SSL estimation, it can be concluded that SCMs are more effective than SRC approaches. The results also indicated that the WANN model was the most accurate model for reconstructing the SSL time series, because it is capable of identifying the salient characteristics in a data series. The second phase of this study examined the feasibility of using four satellite precipitation datasets (SPDs), namely GPM, PERSIANN_CDR, CHIRPS, and CMORPH, to predict streamflow and sediment loads (SL) within a poorly gauged mountainous catchment, by employing the SWAT hydrological model as well as SWAT coupled with soft computing models (SCMs) such as artificial neural networks (SWAT-ANN), random forests (SWAT-RF), and support vector regression (SWAT-SVR). The SCMs were developed using the outputs of un-calibrated SWAT hydrological models to improve the predictions. The results indicate that over the entire simulation, GPM shows the best performance in both schemes, while PERSIANN_CDR and CHIRPS also perform well, whereas CMORPH predicts streamflow for the Upper Jhelum River Basin (UJRB) with relatively poor performance.
Among the best GPM-based models, SWAT-RF offered the best performance for simulating the entire streamflow, while SWAT-ANN excelled at simulating the SL. Hence, hydrological models coupled with SCMs and driven by SPDs can be an effective technique for simulating streamflow and SL, particularly in complex terrain where the gauge network density is low or uneven. The third and last phase of this study investigated the impact of different reservoir operating strategies on Mangla Reservoir sedimentation using a 1D sediment transport model. To improve the accuracy of the model, more accurate boundary conditions for flow and sediment load (derived from the first and second phases of this study) were incorporated into the numerical model, so that the successive morphodynamic model could precisely predict bed level changes under given climate conditions. Further, in order to assess the long-term effect of a changing climate, a Global Climate Model (GCM) under Representative Concentration Pathway (RCP) scenarios 4.5 and 8.5 for the 21st century is used. The long-term modelling results showed that a gradual increase in the reservoir minimum operating level (MOL) slows down the delta movement rate and the rise in bed level close to the dam. However, it may compromise the downstream irrigation demand during periods of high water demand. The findings may help reservoir managers to improve the reservoir operation rules and ultimately support the objective of sustainable reservoir use for societal benefit. In summary, this study provides comprehensive insights into reservoir sedimentation phenomena and recommends an operational strategy that is both feasible and sustainable over the long term under the impact of climate change, especially in cases where data are scarce. It is very important to improve the accuracy of sediment load estimates, which are essential in the design and operation of reservoir structures and of operating plans responding to incoming sediment loads, ensuring accurate reservoir lifespan predictions. Furthermore, the production of highly accurate streamflow forecasts, particularly when on-site data is limited, is important and can be achieved by using satellite-based precipitation data in conjunction with hydrological and soft computing models. Ultimately, the use of soft computing methods produces significantly improved input data for sediment load and discharge, enabling the application of one-dimensional hydro-morphodynamic numerical models to evaluate sediment dynamics and reservoir useful life under the influence of climate change at various operating conditions.
    Chapter 1: Introduction
    Chapter 2: Reconstruction of Sediment Load Data in Rivers
    Chapter 3: Assessment of the Hydrological and Coupled Soft Computing Models, Based on Different Satellite Precipitation Datasets, to Simulate Streamflow and Sediment Load in a Mountainous Catchment
    Chapter 4: Simulating the Impact of Climate Change with Different Reservoir Operating Strategies on Sedimentation of the Mangla Reservoir, Northern Pakistan
    Chapter 5: Conclusions and Recommendation
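    As a point of reference for the SRC baseline mentioned above, the sketch below fits the classical power-law sediment rating curve Qs = a * Q**b by least squares in log space. It is a generic illustration, not the study's calibrated curve; the example numbers and units are placeholders.

        import numpy as np

        def fit_rating_curve(discharge, sediment_load):
            """Fit Qs = a * Q**b; returns (a, b)."""
            log_q, log_qs = np.log(discharge), np.log(sediment_load)
            b, log_a = np.polyfit(log_q, log_qs, 1)   # slope and intercept of the log-log fit
            return np.exp(log_a), b

        def predict_load(a, b, discharge):
            return a * discharge ** b

        # placeholder values (discharge in m3/s, load in tonnes/day are assumed units)
        Q = np.array([50.0, 120.0, 300.0, 800.0])
        Qs = np.array([15.0, 90.0, 600.0, 4200.0])
        a, b = fit_rating_curve(Q, Qs)
        estimated = predict_load(a, b, Q)

    Soft computing models such as LLR, ANN and WANN replace this fixed power-law form with data-driven mappings, which is why they can capture behaviour the rating curve misses.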

    Surface Modeling and Analysis Using Range Images: Smoothing, Registration, Integration, and Segmentation

    This dissertation presents a framework for 3D reconstruction and scene analysis using a set of range images. The motivation for developing this framework came from the need to reconstruct the surfaces of small mechanical parts in reverse engineering tasks, build virtual environments of indoor and outdoor scenes, and understand 3D images. The input of the framework is a set of range images of an object or a scene captured by range scanners. The output is a triangulated surface that can be segmented into meaningful parts. A textured surface can be reconstructed if color images are provided. The framework consists of surface smoothing, registration, integration, and segmentation. Surface smoothing eliminates the noise present in raw measurements from range scanners. This research proposes an area-decreasing flow that is theoretically identical to mean curvature flow. Using area-decreasing flow, there is no need to estimate the curvature value, and an optimal step size of the flow can be obtained. Crease edges and sharp corners are preserved by an adaptive scheme. Surface registration aligns measurements from different viewpoints in a common coordinate system. This research proposes a new surface representation scheme named the point fingerprint. Surfaces are registered by finding corresponding point pairs in an overlapping region based on fingerprint comparison. Surface integration merges registered surface patches into a whole surface. This research employs an implicit surface-based integration technique. The proposed algorithm can generate watertight models by space carving or by filling the holes based on volumetric interpolation. Textures from different views are integrated inside a volumetric grid. Surface segmentation is useful for decomposing CAD models in reverse engineering tasks and helps object recognition in a 3D scene. This research proposes a watershed-based surface mesh segmentation approach. The new algorithm accurately segments the plateaus by geodesic erosion using the fast marching method. The performance of the framework is presented using both synthetic and real-world data from different range scanners. The dissertation concludes by summarizing the development of the framework and suggesting future research topics.
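    For the registration stage, once corresponding point pairs between two overlapping scans have been found (the point-fingerprint matching itself is not reproduced here), the rigid transform aligning them can be recovered with the standard SVD-based least-squares solution. The Python sketch below shows that generic step; the function name and interface are illustrative assumptions.

        import numpy as np

        def rigid_transform(src, dst):
            """src, dst: (N, 3) arrays of corresponding points; returns R (3x3), t (3,)
            such that R @ src[i] + t approximates dst[i] in the least-squares sense."""
            src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
            H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance matrix
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
            R = Vt.T @ D @ U.T
            t = dst_c - R @ src_c
            return R, t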

    Piecewise smooth reconstruction of normal vector field on digital data

    We propose a novel method to regularize a normal vector field defined on a digital surface (the boundary of a set of voxels). When the digital surface is a digitization of a piecewise smooth manifold, our method localizes sharp features (edges) while regularizing the input normal vector field at the same time. It relies on the optimisation of a variant of the Ambrosio-Tortorelli functional, originally defined for denoising and contour extraction in image processing [AT90]. We adapt this functional to digital surface processing thanks to discrete calculus operators. Experiments show that the output normal field is very robust to digitization artifacts and noise, and is also fairly independent of the sampling resolution. The method allows the user to choose independently the amount of smoothing and the length of the set of discontinuities. Sharp and vanishing features are correctly delineated even on extremely damaged data. Finally, our method can be used to considerably enhance the output of state-of-the-art normal field estimators such as the Voronoi Covariance Measure [MOG11] or the Randomized Hough Transform [BM12].
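    For reference, one common form of the Ambrosio-Tortorelli functional (coefficient conventions vary between papers, and the discrete-calculus variant used on digital surfaces differs in its details) couples a piecewise smooth approximation u of the input data g with a scalar discontinuity indicator v:

        \[
        AT_\varepsilon(u, v) \;=\; \int_\Omega \alpha\,|u - g|^2
          \;+\; v^2\,|\nabla u|^2
          \;+\; \lambda\varepsilon\,|\nabla v|^2
          \;+\; \frac{\lambda}{4\varepsilon}\,(1 - v)^2 \, dx .
        \]

    Here g is the input normal vector field, u its regularized version, and v drops towards 0 along sharp features while staying close to 1 on smooth regions; as \varepsilon \to 0 the functional Gamma-converges to the Mumford-Shah functional, which justifies reading the low values of v as the set of discontinuities.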

    Information-theoretic approaches to atoms-in-molecules : Hirshfeld family of partitioning schemes

    Many population analysis methods are based on the precept that molecules should be built from fragments (typically atoms) that maximally resemble the corresponding isolated fragments. The resulting molecular building blocks are intuitive (because they maximally resemble well-understood systems) and transferable (because if two molecular fragments both resemble an isolated fragment, they necessarily resemble each other). Information theory is one way to measure the deviation between molecular fragments and their isolated counterparts, and it is a way that lends itself to interpretation. For example, one can analyze the relative importance of electron transfer and polarization of the fragments. We present key features, advantages, and disadvantages of the information-theoretic approach. We also codify existing information-theoretic partitioning methods in a way that clarifies the enormous freedom one has within the information-theoretic ansatz.
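    As a concrete anchor for the family of schemes discussed above, the original Hirshfeld method assigns each atom a share of the molecular density in proportion to its isolated-atom (promolecular) density:

        \[
        w_A(\mathbf{r}) \;=\; \frac{\rho_A^0(\mathbf{r})}{\sum_B \rho_B^0(\mathbf{r})},
        \qquad
        \rho_A(\mathbf{r}) \;=\; w_A(\mathbf{r})\,\rho_{\mathrm{mol}}(\mathbf{r}),
        \]

    and its information-theoretic reading is that these weights minimize the summed Kullback-Leibler divergence between the atoms-in-molecules densities and the isolated-atom reference densities,

        \[
        \sum_A \int \rho_A(\mathbf{r})\,
          \ln\!\frac{\rho_A(\mathbf{r})}{\rho_A^0(\mathbf{r})}\, d\mathbf{r},
        \]

    subject to the constraint \(\sum_A \rho_A(\mathbf{r}) = \rho_{\mathrm{mol}}(\mathbf{r})\). The notation follows common usage in the literature rather than this paper's specific conventions.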