30 research outputs found

    Machine Learning for Instance Segmentation

    Volumetric electron microscopy images can be used for connectomics, the study of brain connectivity at the cellular level. A prerequisite for this inquiry is the automatic identification of neural cells, which requires machine learning and, in particular, efficient image segmentation algorithms. In this thesis, we develop new algorithms for this task. In the first part we provide, for the first time in this field, a method for training a neural network to predict optimal input data for a watershed algorithm, and we demonstrate its superior performance compared to other segmentation methods of its category. In the second part, we develop an efficient watershed-based algorithm for weighted graph partitioning, the Mutex Watershed, which is the first such algorithm to use negative edge weights. We show that it is intimately related to the multicut problem and achieves state-of-the-art performance on a connectomics challenge; it is currently used by the leaders of two connectomics challenges. Finally, motivated by inpainting neural networks, we create a method to learn the graph weights without any supervision.
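The greedy rule at the core of the Mutex Watershed can be sketched in a few lines: edges are visited in order of decreasing absolute weight; an attractive (positive) edge merges two clusters unless a mutex constraint forbids it, and a repulsive (negative) edge records such a constraint. A minimal union-find sketch of this rule (illustrative only, not the thesis implementation):

```python
# Minimal sketch of the Mutex Watershed clustering rule: signed edge weights,
# positive = attractive ("merge"), negative = repulsive ("must stay apart"),
# processed greedily by decreasing absolute weight.

def mutex_watershed(n_nodes, edges):
    """edges: list of (u, v, weight); returns a cluster label per node."""
    parent = list(range(n_nodes))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mutex = set()                     # frozensets of root pairs that must not merge
    for u, v, w in sorted(edges, key=lambda e: -abs(e[2])):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        if w > 0:                     # attractive: merge unless a mutex forbids it
            if frozenset((ru, rv)) in mutex:
                continue
            parent[ru] = rv
            # re-root constraints that referenced the absorbed cluster
            mutex = {frozenset((rv if r == ru else r) for r in c) if ru in c else c
                     for c in mutex}
            mutex = {c for c in mutex if len(c) == 2}
        else:                         # repulsive: record a mutex constraint
            mutex.add(frozenset((ru, rv)))
    return [find(i) for i in range(n_nodes)]

labels = mutex_watershed(4, [(0, 1, 3.0), (2, 3, 2.5), (1, 2, -2.8), (0, 3, 1.0)])
print(labels)  # nodes 0,1 join and nodes 2,3 join; the mutex keeps the pairs apart
```

Note how the weak attractive edge (0, 3) is rejected: the stronger repulsive edge (1, 2) was processed first and installed a constraint between the two clusters, which is exactly how negative weights let the algorithm partition the graph without a termination threshold.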

    Visualization Design for Interactive Analysis of Premodern Land Registers and Cadastral Maps

    Doctoral dissertation, Department of Electrical and Computer Engineering, Seoul National University, February 2016 (advisor: Jinwook Seo). We propose an interactive visualization design tool, called JigsawMap, for analyzing and mapping historical textual cadasters. A cadaster is an official register that records land properties (e.g., location, ownership, value, and size) for land valuation and taxation. Mapping old cadasters to new ones can help historians understand the social and economic background of changes in land use or ownership. JigsawMap can effectively connect past land survey results to modern cadastral maps. The connection process consists of three steps: (1) segmentation of the cadastral map, (2) visualization of the textual cadaster, and (3) mapping interaction. We conducted usability studies and long-term case studies to evaluate JigsawMap and received positive responses. We summarize the evaluation results and present design guidelines for participatory design projects with historians. Following our study on JigsawMap, we further investigated each component of the tool to make map connection more scalable. First, we designed a hybrid algorithm to semi-automatically segment land pieces on cadastral maps; the original JigsawMap provides an interface for users to segment land pieces manually, and our experiments show that the segmentation algorithm accurately extracts the regions. Next, we reconsidered the visual encoding and simplified it to make the textual cadaster more scalable; since the former visual encoding relies on traditional map legends, the encoding can be selected based on the user's expertise level. Finally, we redesigned the layout algorithm to generate a better initial layout; we used an evolutionary algorithm to resolve the ambiguity inherent in textual cadasters, and the resulting layouts suffer less from overlapping.
    Overall, our visualization design tool provides an accurate segmentation result, gives the user an option to select a visual encoding suited to their expertise level, and generates a more readable initial layout that gives an overview of the cadaster.
    Chapter 1 Introduction
      1.1 Background & Motivation
      1.2 Main Contribution
      1.3 Organization of the Dissertation
    Chapter 2 Related Work
      2.1 Map Data Visualization
      2.2 Graph Layout Algorithms
      2.3 Collaborative Map Editing Service
      2.4 Map Image Segmentation
      2.5 Premodern Cadastral Maps
      2.6 Assessing Measures for Cartogram
    Chapter 3 Visualizing and Mapping Premodern Textual Cadasters to Cadastral Maps
      3.1 Textual Cadastre
      3.2 Cadastral Maps
      3.3 Paper-based Mapping Process and Obstacles
      3.4 Task Flow in JigsawMap
      3.5 Design Rationale
      3.6 Evaluation
      3.7 Discussion
      3.8 Design Guidelines When Working with Historians
    Chapter 4 Accurate Segmentation of Land Regions in Historical Cadastral Maps
      4.1 Segmentation Pipeline
      4.2 Preprocessing
      4.3 Removal of Grid Lines
      4.4 Removal of Characters
      4.5 Reconstruction of Land Boundaries
      4.6 Generation of Polygons
      4.7 Experimental Result
      4.8 Discussion
    Chapter 5 Approximating Rectangular Cartogram from Premodern Textual Cadastre
      5.1 Challenges of the Textual Cadastre Layout
      5.2 Quality Measures for Assessing Rectangular Cartogram
      5.3 Quality Measures for Assessing Textual Cadastre
      5.4 Graph Layout Algorithm
      5.5 Results
      5.6 Discussion
    Chapter 6 Design of Scalable Node Representation for a Large Textual Cadastre
      6.1 Motivation
      6.2 Visual Encoding in JigsawMap
      6.3 Challenges of Current Visual Encoding
      6.4 Compact Visual Encoding
      6.5 Results
      6.6 Discussion
    Chapter 7 Conclusion
    Bibliography
    Abstract in Korean
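The evolutionary layout step described in the abstract can be illustrated with a toy (1+1) evolution strategy that jitters node positions and keeps a candidate only when the total overlap between node rectangles does not increase. All names, sizes, and parameters here are hypothetical, not JigsawMap's:

```python
# Toy (1+1) evolution strategy for overlap reduction in a node layout.
# Illustrative sketch only; JigsawMap's actual algorithm is not reproduced.
import random

def overlap(a, b, size=1.0):
    """Overlap area of two axis-aligned squares of side `size` centred at a, b."""
    ox = max(0.0, size - abs(a[0] - b[0]))
    oy = max(0.0, size - abs(a[1] - b[1]))
    return ox * oy

def total_overlap(pos):
    return sum(overlap(pos[i], pos[j])
               for i in range(len(pos)) for j in range(i + 1, len(pos)))

def evolve_layout(pos, steps=2000, sigma=0.3, seed=0):
    """Mutate one node at a time; keep the candidate if it is not worse."""
    rng = random.Random(seed)
    best, best_fit = list(pos), total_overlap(pos)
    for _ in range(steps):
        cand = list(best)
        i = rng.randrange(len(cand))
        x, y = cand[i]
        cand[i] = (x + rng.gauss(0, sigma), y + rng.gauss(0, sigma))
        fit = total_overlap(cand)
        if fit <= best_fit:          # elitist acceptance: never gets worse
            best, best_fit = cand, fit
    return best, best_fit

init = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.3)]
layout, fit = evolve_layout(init)
print(total_overlap(init), "->", fit)   # overlap shrinks over the run
```

A real layout fitness would also penalise deviation from the cadaster's recorded neighbourhood order, which is what makes the problem harder than plain overlap removal.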

    Classification of acute lymphoblastic leukemia using deep learning

    Acute leukemia is a life-threatening disease, common in both children and adults, that can lead to death if left untreated. Acute Lymphoblastic Leukemia (ALL) spreads rapidly through a child's body and can be fatal within a few weeks. To diagnose ALL, hematologists perform blood and bone marrow examinations. Manual blood-testing techniques, in use for a long time, are often slow and yield less accurate diagnoses. This work improves the diagnosis of ALL with a computer-aided system that yields accurate results by using image processing and deep learning techniques. This research proposes a method for classifying ALL into its subtypes and reactive (normal) bone marrow in stained bone marrow images. Robust segmentation and deep learning with a convolutional neural network are used to train the model on bone marrow images and achieve accurate classification results. The experimental results were compared with those of other classifiers: Naïve Bayes, k-NN, and SVM. They reveal that the proposed method achieved 97.78% accuracy. These results show that the proposed approach could be used as a tool to diagnose Acute Lymphoblastic Leukemia and its subtypes and would assist pathologists.
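The proposed CNN is not reproduced here, but one of the baseline classifiers the abstract compares against, k-NN, can be sketched with NumPy alone. The feature vectors and labels below are synthetic stand-ins, not the thesis's bone marrow data:

```python
# Minimal k-nearest-neighbours baseline over feature vectors (synthetic data).
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test vector by majority vote among its k nearest
    training vectors under Euclidean distance."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)   # distances to all train points
        nearest = y_train[np.argsort(d)[:k]]      # labels of the k closest
        preds.append(int(np.argmax(np.bincount(nearest))))
    return np.array(preds)

# synthetic two-class "cell feature" data: class 0 near the origin, class 1 shifted
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0.0, 0.5, size=(20, 4)),
                     rng.normal(3.0, 0.5, size=(20, 4))])
y_train = np.array([0] * 20 + [1] * 20)
X_test = np.array([[0.1, 0.0, 0.2, -0.1], [2.9, 3.1, 3.0, 2.8]])
print(knn_predict(X_train, y_train, X_test, k=3))   # expected: [0 1]
```

In the thesis's setting the feature vectors would come from the segmented cell regions rather than from a random generator; the comparison logic is the same.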

    Fine spatial scale modelling of Trentino past forest landscape and future change scenarios to study ecosystem services through the years

    The landscape in Europe has changed dramatically in recent decades. This has been especially true for Alpine regions, where the progressive urbanization of the valleys has been accompanied by the abandonment of smaller villages and areas at higher elevation. This trend has been clearly observable in the Provincia Autonoma di Trento (PAT) region in the Italian Alps. The impact has been substantial for many rural areas, with the progressive shrinking of meadows and pastures due to natural forest recolonization. These modifications of the landscape affect biodiversity and social and cultural dynamics, including landscape perception and some ecosystem services. A literature review showed that this topic has been addressed by several authors across the Alps, but their research is limited in spatial coverage, spatial resolution, and time span. This thesis aims to create a comprehensive dataset of historical maps and multitemporal orthophotos of the PAT area; to analyze these data to identify the changes in forest and open areas and evaluate how these changes affected landscape structure and ecosystems; to create a future change scenario for a test area; and to highlight some major changes in ecosystem services through time. In this study a high-resolution dataset of maps covering the whole PAT area for over a century was developed. The earliest representation of the PAT territory containing reliable data about forest coverage is the Historic Cadastral maps of 1859. These maps systematically and accurately represented the land use of each parcel in the Habsburg Empire, including the PAT. The Italian Kingdom Forest Maps were the next important source of information about forest coverage after World War I, before the most recent datasets: the greyscale images of 1954 and 1994 and the multiband images of 2006 and 2015.
    The purpose of the dataset development is twofold: on one hand, to create a series of maps describing forest and open-area coverage over the last 160 years for the whole PAT; on the other, to set up and test procedures to extract the relevant information from imagery and historical maps. The datasets were archived, processed, and analysed using the Free and Open Source Software (FOSS) GIS packages GRASS and QGIS, and R. The goal set by this work was achieved by remote-sensing analysis of these maps and aerial imagery. A series of procedures was applied to extract a land-use map, with the forest categories reaching a level of detail rarely achieved for a study area of this extension (6,200 km²). The resolution of the original maps is at the metre level, whereas the coarsest resampling adopted is 10 m × 10 m pixels. The great variety and size of the input data required developing, alongside the main part of the research, a series of new tools for automating the analysis of the aerial imagery and reducing user intervention. New tools for historic map classification were developed as well, to eliminate symbols (e.g., signs) from the resulting land-use maps and thus enhance the results. Once the multitemporal forest maps were obtained, the second phase of the work was a qualitative and quantitative assessment of forest coverage and how it changed. This was performed by evaluating a number of landscape metrics, indexes used to quantify the compaction or rarefaction of forest areas. Along this analysis, a recurring issue in the current literature on landscape metrics was identified and extensively studied. This highlighted the importance of specifying some parameters in the most widely used landscape fragmentation analysis software to make the results of different studies properly comparable.
    Within this analysis, data from other maps were used to characterize the afforestation process in PAT: the potential forest maps, used to quantify the area of potential forest actually afforested through the years; the Digital Elevation Model, used to quantify the changes in forest area at different altitude ranges; and the forest class map, used to estimate how afforestation has affected each forest type. The output forest maps were used to analyse and estimate some ecosystem services, in particular protection from soil erosion, changes in biodiversity, and the landscape of the forests. Finally, a procedure for the analysis of future change scenarios was set up to study how afforestation will proceed, in the absence of external factors, in a protected area of PAT. The procedure was developed using Agent Based Models, which treat trees as thinking agents able to choose where to expand the forest area. The first part of the results consists of a temporal series of maps representing the state of the forest in each year of the dataset. The analysis of these maps suggests a trend of afforestation across the PAT territory. The forest maps were then reclassified by altitude range and forest type to show how afforestation proceeded at different altitudes and in different forest types. The results showed that forest expansion acted homogeneously across altitudes and forest types. The analysis of a selected set of landscape metrics showed a progressive compaction of the forests at the expense of the open areas, in each altitude range and for each forest type. On one hand this benefits all those ecosystem services linked to high forest cover; on the other it reduces ecotonal habitats and affects biodiversity distribution and quality.
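Landscape metrics of the kind evaluated above can be illustrated with a toy example: the number of forest patches and the largest patch's share of all forest cells, computed on a binary grid with 4-connected flood fill. This is a pure-Python sketch, not the fragmentation software the thesis discusses:

```python
# Two toy landscape metrics on a binary forest grid (1 = forest, 0 = open):
# patch count and largest-patch share, via BFS over 4-connected components.
from collections import deque

def patch_stats(grid):
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                size, q = 0, deque([(r, c)])
                seen[r][c] = True
                while q:                          # BFS over one connected patch
                    y, x = q.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and grid[ny][nx] == 1 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                sizes.append(size)
    total = sum(sizes)
    return len(sizes), (max(sizes) / total if total else 0.0)

grid = [[1, 1, 0, 0],
        [1, 0, 0, 1],
        [0, 0, 1, 1],
        [0, 0, 0, 1]]
print(patch_stats(grid))   # 2 patches; the largest holds 4 of the 7 forest cells
```

Compaction of the forest, as described above, shows up in such metrics as a falling patch count and a rising largest-patch share over the years.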
    Finally, the ABM procedure produced a set of maps representing a possible evolution of the forest in an area of PAT, showing a situation similar to other simulations developed with different models in the same area. A second part of the results consists of new open-source tools for image analysis, developed to achieve the results shown here but with a potentially wider field of application, along with a new procedure for the evaluation of the image classification. The work fulfilled its aims while providing new tools and enhancements of existing tools for remote sensing, and leaving as a legacy a large dataset that will be used to deepen the knowledge of the PAT territory and, more widely, to study emerging patterns of afforestation in an alpine environment.
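The agent-based idea can be reduced to a toy cellular automaton in which each forest cell may colonise a neighbouring open cell with some probability per step. This is a generic illustration of forest-expansion simulation, not the model developed in the thesis:

```python
# Toy afforestation automaton: forest cells (1) colonise adjacent open cells (0)
# with probability p per step. Illustrative only; not the thesis's ABM.
import random

def afforestation_step(grid, p=0.3, rng=random):
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:                          # a forest "agent"
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols \
                            and grid[nr][nc] == 0 and rng.random() < p:
                        new[nr][nc] = 1                  # colonise the open cell
    return new

rng = random.Random(1)
grid = [[0] * 8 for _ in range(8)]
grid[4][4] = 1                                           # a single seed patch
for _ in range(10):
    grid = afforestation_step(grid, p=0.3, rng=rng)
forest = sum(map(sum, grid))
print(forest, "forest cells after 10 steps")
```

A realistic model would condition the colonisation probability on altitude, potential-forest class, and distance from the existing edge, which is where the input maps described above come in.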

    Image analysis for the study of chromatin distribution in cell nuclei with application to cervical cancer screening


    River flow monitoring: LS-PIV technique, an image-based method to assess discharge

    The measurement of river discharge within a natural or artificial channel is still one of the most challenging tasks for hydrologists and the scientific community. Although discharge is a physical quantity that can theoretically be measured with very high accuracy, since the volume of water flows in a well-defined domain, there are numerous critical issues in obtaining a reliable value. Discharge cannot be measured directly, so its value is obtained by coupling a measurement of a quantity related to the volume of flowing water with the area of a channel cross-section. Direct measurements of current velocity are traditionally made with instruments such as current meters. Although measurements with current meters are sufficiently accurate, and although there are universally recognized standards for the application of such instruments, they are often unusable under specific flow conditions. In flood conditions, for example, because personnel would need to dive into the watercourse, it is impossible to ensure adequate safety for the operators carrying out flow measurements. The critical issues arising from the use of current meters have been partially addressed thanks to technological development and the adoption of acoustic sensors. In particular, with the advent of Acoustic Doppler Current Profilers (ADCPs), flow measurements can take place without personnel having direct contact with the flow, with measurements performed from a bridge or from the banks. This made it possible to extend the available range of discharge measurements. However, the flood conditions of a watercourse also limit ADCP technology: introducing the instrument into a current with high velocities and turbulence would put it at serious risk, leaving it vulnerable and exposed to damage. In the most critical case, the instrument could be torn away by the turbulent current.
    On the other hand, for smaller discharges, both current meters and ADCPs are technologically limited, as water levels are inadequate for the use of the devices. The difficulty in obtaining information on the lowest and highest discharge values has important implications for how the relationships linking flows to water levels are defined. The stage-discharge relationship is one of the tools through which the flow in a specific section of a watercourse can be monitored: through this curve, a discharge value can be obtained from a known water stage. The curves are site-specific and must be continuously updated to account for changes in geometry that the sections for which they are defined may experience over time. They are determined by making simultaneous discharge and stage measurements. Since instruments such as current meters and ADCPs are traditionally used, stage-discharge curves suffer from the same instrumental limitations; rating curves are therefore usually obtained by interpolating field-measured data and extrapolating them for the highest and lowest discharge values, with a consequent reduction in accuracy. This thesis aims to identify a valid alternative to traditional flow measurements and to show the advantages of using new monitoring methods to support traditional techniques, or to replace them. Optical techniques represent the best solution for overcoming the difficulties arising from a traditional approach to flow measurement. Among these, the most widely used are Large-Scale Particle Image Velocimetry (LS-PIV) and Large-Scale Particle Tracking Velocimetry. They estimate surface velocity fields by processing images of a moving tracer suitably dispersed on the liquid surface. By coupling the velocity data obtained from optical techniques with the geometry of a cross-section, a discharge value can easily be calculated.
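A common functional form for the stage-discharge relationship described above (an assumption here; the thesis does not commit to one) is the power law Q = a·(h − h0)^b. With h0 known or fixed, a and b follow from a straight-line fit in log-log space; the stage and discharge values below are synthetic:

```python
# Fitting a power-law rating curve Q = a * (h - h0)^b by least squares
# in log-log space (synthetic data; h0 assumed known).
import numpy as np

def fit_rating_curve(h, Q, h0=0.0):
    """Least-squares fit of log Q = log a + b * log(h - h0)."""
    x = np.log(h - h0)
    y = np.log(Q)
    b, log_a = np.polyfit(x, y, 1)       # slope = b, intercept = log a
    return np.exp(log_a), b

h = np.array([0.5, 1.0, 1.5, 2.0, 3.0])  # stage [m]
Q_obs = 4.0 * h ** 1.6                   # synthetic discharge [m^3/s]
a, b = fit_rating_curve(h, Q_obs)
print(round(a, 3), round(b, 3))          # recovers a ≈ 4, b ≈ 1.6
```

The accuracy loss the abstract mentions comes precisely from using such a fit outside the range of the gauged (h, Q) pairs, where the power law is an extrapolation rather than an interpolation.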
    In this thesis, the study of the LS-PIV technique was deepened by analysing its performance and studying the physical and environmental parameters and factors on which the optical results depend. As the LS-PIV technique is relatively new, no recognized standards are available for its proper application. A preliminary numerical analysis was conducted to identify the factors on which the technique significantly depends. The results of these analyses enabled the development of specific guidelines through which the LS-PIV technique was subsequently applied in the field during flow measurement campaigns in Sicily. In this way it was possible to observe experimentally the critical issues involved in applying the technique to real cases. These measurement campaigns provided the opportunity to analyse field case studies and to structure an automatic procedure for optimising the LS-PIV technique. In all case studies, turbulence was observed to degrade the output of the LS-PIV technique. A final numerical analysis was therefore performed to understand the influence of turbulence on the performance of the technique. The results obtained represent an important step for the future development of the topic.
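At the heart of PIV is a displacement search between two frames. LS-PIV typically uses cross-correlation of interrogation windows; the sketch below uses the closely related block-matching (sum of squared differences) search on synthetic frames, which recovers the same displacement and keeps the code minimal:

```python
# Bare-bones PIV-style displacement search: a small window from frame 1 is
# compared against shifted positions in frame 2, and the shift with the
# smallest matching error is taken as the surface displacement.
# Synthetic frames; real LS-PIV adds orthorectification, tracer seeding, etc.
import numpy as np

def find_shift(frame1, frame2, win=8, search=4):
    """Return (dy, dx) minimising sum-of-squared-differences of a central window."""
    cy = frame1.shape[0] // 2 - win // 2
    cx = frame1.shape[1] // 2 - win // 2
    tpl = frame1[cy:cy + win, cx:cx + win]
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame2[cy + dy:cy + dy + win, cx + dx:cx + dx + win]
            err = np.sum((tpl - cand) ** 2)   # block-matching error
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

rng = np.random.default_rng(0)
frame1 = rng.random((32, 32))
frame2 = np.roll(frame1, shift=(2, 3), axis=(0, 1))   # tracer moved 2 down, 3 right
print(find_shift(frame1, frame2))                     # → (2, 3)
```

Dividing the recovered pixel displacement by the frame interval and multiplying by the ground sampling distance gives the surface velocity that, combined with the cross-section geometry, yields discharge.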

    Using contour information and segmentation for object registration, modeling and retrieval

    This thesis considers different aspects of the utilization of contour information and of syntactic and semantic image segmentation for object registration, modeling, and retrieval in the context of content-based indexing and retrieval in large collections of images. Target applications include retrieval in collections of closed silhouettes, holistic word recognition in handwritten historical manuscripts, and shape registration. The thesis also explores the feasibility of contour-based syntactic features for improving the correspondence between the output of bottom-up segmentation and the semantic objects present in the scene, and discusses the feasibility of different strategies for image analysis utilizing contour information, e.g. segmentation driven by visual features versus segmentation driven by shape models, or semi-automatic segmentation, in selected application scenarios. There are three contributions in this thesis. The first considers structure analysis based on the shape and spatial configuration of image regions (so-called syntactic visual features) and their utilization for automatic image segmentation. The second is the study of novel shape features, matching algorithms, and similarity measures. Various applications of the proposed solutions are presented throughout the thesis, providing the basis for the third contribution: a discussion of the feasibility of different recognition strategies utilizing contour information. In each case, the performance and generality of the proposed approach has been analyzed through extensive, rigorous experimentation using the largest test collections available.
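A classic contour feature of the general kind studied in such work is the centroid distance signature: distances from the shape's centroid to points sampled along the contour, normalised so that similar shapes yield similar vectors. The sketch below is illustrative and does not reproduce the thesis's own descriptors:

```python
# Centroid distance signature: a simple scale-invariant contour descriptor.
import numpy as np

def centroid_signature(points, n=64):
    """Distances from the centroid to n resampled contour points, scaled to max 1."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    idx = np.linspace(0, len(d) - 1, n).astype(int)   # resample to n points
    sig = d[idx]
    return sig / sig.max()                            # scale invariance

def contour(shape_fn, n=256):
    """Sample a parametric closed contour at n angles."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return np.stack(shape_fn(t), axis=1)

circle     = contour(lambda t: (np.cos(t), np.sin(t)))
big_circle = contour(lambda t: (5 * np.cos(t), 5 * np.sin(t)))
ellipse    = contour(lambda t: (2 * np.cos(t), np.sin(t)))

d_same = np.linalg.norm(centroid_signature(circle) - centroid_signature(big_circle))
d_diff = np.linalg.norm(centroid_signature(circle) - centroid_signature(ellipse))
print(d_same, "<", d_diff)   # scaled copies match; a different shape does not
```

Retrieval over a silhouette collection then reduces to ranking stored signatures by their distance to the query's signature; rotation invariance would additionally require aligning or cyclically shifting the signatures.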

    New Methods to Improve Large-Scale Microscopy Image Analysis with Prior Knowledge and Uncertainty

    Multidimensional imaging techniques provide powerful ways to examine various kinds of scientific questions. However, the routinely produced datasets in the terabyte range can hardly be analyzed manually and require extensive use of automated image analysis. The present work introduces a new concept for the estimation and propagation of uncertainty involved in image analysis operators, as well as new segmentation algorithms that are suitable for terabyte-scale analyses of 3D+t microscopy images.
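The core idea of propagating per-pixel uncertainty through a linear image operator fits in one line of algebra: for y = Σᵢ wᵢxᵢ with independent inputs, Var(y) = Σᵢ wᵢ² Var(xᵢ). The 1-D mean-filter sketch below illustrates this rule only; it is not the operator framework developed in the thesis, and the rule holds only for linear operators on independent inputs:

```python
# Propagating per-sample variance through a linear filter:
# y = sum_i w_i * x_i  =>  Var(y) = sum_i w_i^2 * Var(x_i)  (independent inputs).
import numpy as np

def filter_with_uncertainty(x, var, w):
    """Apply weights w to signal x and propagate the per-sample variances."""
    y = np.convolve(x, w, mode="valid")
    var_y = np.convolve(var, np.asarray(w) ** 2, mode="valid")
    return y, var_y

x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
var = np.ones_like(x)                    # unit variance on every input sample
w = np.array([1 / 3, 1 / 3, 1 / 3])     # 3-tap mean filter
y, var_y = filter_with_uncertainty(x, var, w)
print(y, var_y)                          # averaging reduces the variance to 1/3
```

Chaining such operators lets each analysis step carry its uncertainty along, so a downstream segmentation can weight or flag unreliable regions instead of treating all pixels as equally trustworthy.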
