
    NEFI: Network Extraction From Images

    Networks and network-like structures are amongst the central building blocks of many technological and biological systems. Given a mathematical graph representation of a network, methods from graph theory enable a precise investigation of its properties. Software for the analysis of graphs is widely available and has been applied to graphs describing large-scale networks such as social networks, protein-interaction networks, etc. In these applications, graph acquisition, i.e., the extraction of a mathematical graph from a network, is relatively simple. However, for many network-like structures, e.g. leaf venations, slime molds and mud cracks, data collection relies on images, where graph extraction requires domain-specific solutions or even manual processing. Here we introduce Network Extraction From Images, NEFI, a software tool that automatically extracts accurate graphs from images of a wide range of networks originating in various domains. While there is previous work on graph extraction from images, theoretical results are fully accessible only to an expert audience and ready-to-use implementations for non-experts are rarely available or insufficiently documented. NEFI provides a novel platform allowing practitioners from many disciplines to easily extract graph representations from images by supplying flexible tools from image processing, computer vision and graph theory bundled in a convenient package. Thus, NEFI constitutes a scalable alternative to tedious and error-prone manual graph extraction and to special-purpose tools. We anticipate that NEFI will enable the collection of larger datasets by reducing the time spent on graph extraction. The analysis of these new datasets may open up the possibility to gain new insights into the structure and function of various types of networks. NEFI is open source and available at http://nefi.mpi-inf.mpg.de
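    As a rough illustration of the graph-acquisition step described above, the following Python sketch skeletonizes a binary image and links adjacent skeleton pixels into a networkx graph. It is not NEFI's actual code; the input file name, the Otsu thresholding choice and the pixel-level graph construction are assumptions made for the example.

```python
# Minimal sketch of graph extraction from a binary network image,
# in the spirit of NEFI but NOT its actual pipeline.
import numpy as np
import networkx as nx
from skimage import io, filters, morphology

# 1. Read the image and binarize it (Otsu threshold as a simple default).
img = io.imread("network.png", as_gray=True)   # placeholder file name
binary = img > filters.threshold_otsu(img)

# 2. Reduce the foreground to a one-pixel-wide skeleton.
skeleton = morphology.skeletonize(binary)

# 3. Build a pixel-level graph: every skeleton pixel is a node,
#    8-connected skeleton pixels are joined by an edge.
G = nx.Graph()
ys, xs = np.nonzero(skeleton)
pixels = set(zip(ys.tolist(), xs.tolist()))
for (y, x) in pixels:
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if (dy, dx) != (0, 0) and (y + dy, x + dx) in pixels:
                G.add_edge((y, x), (y + dy, x + dx))

# 4. Junctions and endpoints are nodes whose degree differs from 2; a full
#    tool would contract the degree-2 chains between them into weighted edges.
junctions = [n for n, d in G.degree() if d != 2]
print(f"{G.number_of_nodes()} skeleton pixels, {len(junctions)} junction/endpoint nodes")
```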

    A Quantitative Assessment of Forest Cover Change in the Moulouya River Watershed (Morocco) by the Integration of a Subpixel-Based and Object-Based Analysis of Landsat Data

    A quantitative assessment of forest cover change in the Moulouya River watershed (Morocco) was carried out by means of an innovative approach based on atmospherically corrected reflectance Landsat images corresponding to 1984 (Landsat 5 Thematic Mapper) and 2013 (Landsat 8 Operational Land Imager). An object-based image analysis (OBIA) was undertaken to classify segmented objects as forested or non-forested within the 2013 Landsat orthomosaic. A Random Forest classifier was applied to a set of training data based on a feature vector composed of different types of object features (vegetation indices, mean spectral values and pixel-based fractional cover derived from probabilistic spectral mixture analysis). The very high spatial resolution image data of Google Earth 2013 were employed to train/validate the Random Forest classifier, ranking the NDVI vegetation index and the corresponding pixel-based percentages of photosynthetic vegetation and bare soil as the most statistically significant object features for extracting forested and non-forested areas. Regarding classification accuracy, an overall accuracy of 92.34% was achieved. The previously developed classification scheme was applied to the 1984 Landsat data to extract the forest cover change between 1984 and 2013, showing a slight net increase of 5.3% (ca. 8800 ha) in forested areas for the whole region
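    A hedged sketch of the object-classification step follows: a Random Forest trained on per-object features such as NDVI, mean band reflectances and fractional cover. The feature names, input file and train/test split are illustrative placeholders, not the study's actual data or code.

```python
# Random Forest classification of segmented image objects (forest / non-forest).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Each row = one segmented object; "label" is 1 (forest) / 0 (non-forest),
# derived from reference polygons digitized over high-resolution imagery.
objects = pd.read_csv("object_features.csv")          # hypothetical file
feature_cols = ["ndvi_mean", "pv_fraction", "soil_fraction",
                "band4_mean", "band5_mean"]            # hypothetical columns
X = objects[feature_cols].to_numpy()
y = objects["label"].to_numpy()

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)
print("overall accuracy:", accuracy_score(y_test, rf.predict(X_test)))

# Feature importances hint at which object features (e.g. NDVI, the
# photosynthetic-vegetation fraction) drive the forest/non-forest separation.
for name, imp in sorted(zip(feature_cols, rf.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name}: {imp:.3f}")
```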

    Accessible software frameworks for reproducible image analysis of host-pathogen interactions

    To understand the mechanisms behind life-threatening diseases, the underlying interactions between host cells and pathogenic microorganisms must be known. Continuous improvements in imaging and computing technologies enable the application of methods from image-based systems biology, which uses modern computer algorithms to precisely measure the behavior of cells, tissues or whole organs. To meet the standards of digital research data management, algorithms must comply with the FAIR principles (Findability, Accessibility, Interoperability, and Reusability) and contribute to their dissemination within the scientific community. This is particularly important for interdisciplinary teams of experimentalists and computer scientists, in which software can improve communication and accelerate the adoption of new technologies. In this work, software frameworks were therefore developed that help spread the FAIR principles by providing standardized, reproducible, high-performance and easily accessible software packages for quantifying interactions in biological systems. In summary, this work shows how software frameworks can contribute to the characterization of interactions between host cells and pathogens by simplifying the design and application of quantitative, FAIR-compliant image analysis programs. These improvements will facilitate future collaborations with life scientists and clinicians, which, following the principle of image-based systems biology, will lead to the development of new experiments, imaging procedures, algorithms and computer models

    Geospatial openness: from software to standards & data

    This paper is the editorial of the Special Issue "Open Source Geospatial Software", which features 10 published papers. The editorial introduces the concept of openness and, within the geospatial context, breaks it down into the three main components of software, data and standards. According to this classification, the papers published in the Special Issue are briefly summarized, and a future research agenda in the open geospatial domain is finally outlined

    Automated Vigor Estimation on Vineyards

    The balance or vigor of a vine, defined as the ratio of yield to pruning weight, is a useful parameter that growers use to better prepare for the harvest season and to establish precision agriculture management of the vineyard, enabling site-specific planning of operations such as pruning, debriefing or budding. Traditionally, growers obtain this parameter by first manually weighing the pruned canes during the vineyard dormant season (no leaves), then recording at harvest the fruit weight of the vines evaluated in the first step, and finally correlating the two measures. Since this is a very manual and time-consuming task, growers usually take only a couple of samples and extrapolate the value to the entire vineyard, losing the variability present in their fields and, with it, information that could support site-specific management and consequently improve grape quality and quantity. In this paper we develop a computer vision-based algorithm, robust to differences in trellis system, varieties and light conditions, that automatically estimates the pruning weight and consequently the variability of vigor inside the lot. The results will be used to improve the way local growers plan the annual winter pruning, advancing the transformation to precision agriculture. Our proposed solution does not require weighing the shoots (also called canes) and automatically creates prescription maps (detailed instructions for pruning, harvest and other management decisions specific to the location) based on the estimated vigor. Our solution uses Deep Learning (DL) techniques to obtain the segmentation of the vines directly from images captured in the field during the dormant season
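    The following minimal sketch shows how a per-vine vigor figure could be derived once a segmentation mask is available, assuming a linear pixel-area to pruning-weight calibration and per-vine yield records. The file names, calibration constant and zoning thresholds are assumptions for illustration, not the paper's implementation.

```python
# Per-vine vigor (yield-to-pruning-weight ratio) from a cane segmentation mask.
import numpy as np
from skimage import io

mask = io.imread("cane_segmentation_mask.png") > 0   # hypothetical DL output
vine_ids = io.imread("vine_id_map.png")              # hypothetical pixel -> vine id map
yields_kg = {1: 6.2, 2: 4.8, 3: 7.5}                 # made-up harvest weights per vine

KG_PER_PIXEL = 2.0e-5   # assumed calibration from a few hand-weighed vines

for vid, yield_kg in yields_kg.items():
    cane_pixels = np.count_nonzero(mask & (vine_ids == vid))
    pruning_kg = cane_pixels * KG_PER_PIXEL          # pixel area as pruning-weight proxy
    vigor = yield_kg / pruning_kg                    # yield-to-pruning-weight ratio
    # Crude zoning for a prescription map; thresholds are illustrative only
    # (a high ratio indicates low vegetative vigor, a low ratio high vigor).
    zone = "low" if vigor > 10 else "balanced" if vigor > 5 else "high"
    print(f"vine {vid}: vigor ratio {vigor:.1f} -> {zone}-vigor zone")
```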

    Deep Learning Based Classification Techniques for Hyperspectral Images in Real Time

    Remote sensing can be defined as the acquisition of information from a given scene without coming into physical contact with it, through the use of sensors, mainly located on aerial platforms, which capture information in different ranges of the electromagnetic spectrum. The objective of this thesis is the development of efficient schemes, based on the use of deep learning neural networks, for the classification of remotely sensed multi- and hyperspectral land cover images. Efficient schemes are those capable of obtaining good results in terms of classification accuracy while being computable in a reasonable amount of time for the task at hand. Regarding computational platforms, multicore architectures and Graphics Processing Units (GPUs) are considered
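    As an illustration of the kind of per-pixel spectral classifier such schemes build on, the sketch below defines a small 1D convolutional network over the spectral dimension of a hyperspectral pixel and runs one dummy training step, moving the model to a GPU when one is available. The band count, class count, architecture and data are placeholders, not the schemes developed in the thesis.

```python
# Tiny 1D CNN over the spectral signature of a single hyperspectral pixel.
import torch
import torch.nn as nn

N_BANDS, N_CLASSES = 200, 9          # assumed sensor bands / land-cover classes

class SpectralCNN(nn.Module):
    def __init__(self, n_bands: int, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):            # x: (batch, bands)
        x = x.unsqueeze(1)           # -> (batch, 1, bands) for Conv1d
        x = self.features(x).squeeze(-1)
        return self.classifier(x)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = SpectralCNN(N_BANDS, N_CLASSES).to(device)

# One dummy training step on random data, just to show the loop shape.
x = torch.randn(64, N_BANDS, device=device)
y = torch.randint(0, N_CLASSES, (64,), device=device)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.CrossEntropyLoss()(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
print("dummy batch loss:", float(loss))
```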

    An interactive ImageJ plugin for semi-automated image denoising in electron microscopy

    The recent advent of 3D electron microscopy (EM) has allowed for the detection of nanometer-resolution structures. This has caused an explosion in dataset size, necessitating the development of automated workflows. Moreover, large 3D EM datasets typically require hours to days to acquire, and accelerated imaging typically results in noisy data. Advanced denoising techniques can alleviate this, but tend to be less accessible to the community due to low-level programming environments, complex parameter tuning or computational bottlenecks. We present DenoisEM: an interactive and GPU-accelerated denoising plugin for ImageJ that ensures fast parameter tuning and processing through parallel computing. Experimental results show that DenoisEM is one order of magnitude faster than related software and can accelerate data acquisition by a factor of 4 without significantly affecting data quality. Lastly, we show that image denoising benefits visualization and (semi-)automated segmentation and analysis of ultrastructure in various volume EM datasets
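    To illustrate the kind of parameter-tuned filtering such a plugin exposes, the sketch below denoises a single image with scikit-image's non-local means. It is not DenoisEM's implementation or API; the input file and the smoothing strength h (the sort of parameter an interactive tool lets users tune live) are placeholders.

```python
# Non-local means denoising of one EM slice as a stand-in for plugin-style filtering.
import numpy as np
from skimage import io, img_as_float
from skimage.restoration import denoise_nl_means, estimate_sigma

noisy = img_as_float(io.imread("em_slice.tif", as_gray=True))   # placeholder file

# Estimate the noise level, then denoise; h controls the smoothing strength.
sigma = float(np.mean(estimate_sigma(noisy)))
denoised = denoise_nl_means(noisy, h=0.8 * sigma, sigma=sigma,
                            patch_size=5, patch_distance=6, fast_mode=True)

io.imsave("em_slice_denoised.tif", denoised.astype(np.float32))
```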