
    A robust adaptive wavelet-based method for classification of meningioma histology images

    Intra-class variability in the texture of samples is an important problem in histological image classification. The issue is inherent to the field because of the high complexity of histology image data. A technique that performs well in one trial may fail in another once the test and training data change, so the technique must be adapted to intra-class texture variation. In this paper, we present a novel wavelet-based multiresolution analysis approach to meningioma subtype classification that addresses this challenge of data variation. We analyze the stability of the Adaptive Discriminant Wavelet Packet Transform (ADWPT) and present a solution to the variation in the ADWPT decomposition when the texture of the data changes. A feature selection approach is proposed that provides high classification accuracy.
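
    The following is a minimal sketch, not the authors' ADWPT implementation: it decomposes a grayscale texture image into wavelet packet subbands with PyWavelets and uses normalized subband energies as features, with a simple variance-based ranking standing in for the discriminant subband selection. The wavelet choice, decomposition level, and ranking criterion are assumptions for illustration only.

    import numpy as np
    import pywt

    def wavelet_packet_energies(img, wavelet="db4", level=2):
        # Decompose the image into wavelet packet subbands and return the
        # normalized energy of every subband at the chosen level.
        wp = pywt.WaveletPacket2D(data=img.astype(float), wavelet=wavelet, maxlevel=level)
        nodes = wp.get_level(level)
        energies = np.array([np.sum(n.data ** 2) for n in nodes])
        return energies / (energies.sum() or 1.0)

    def rank_subbands(images_by_class):
        # Toy stand-in for discriminant subband selection: rank subbands by the
        # variance of their mean energy across classes (hypothetical criterion).
        class_means = np.stack([
            np.mean([wavelet_packet_energies(im) for im in ims], axis=0)
            for ims in images_by_class.values()
        ])
        return np.argsort(class_means.var(axis=0))[::-1]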

    Introduction of image-based water transparency descriptors to quantify marine snow and turbidity features. A study with data from a stationary observatory

    Möller T, Nilssen I, Nattkemper TW. Introduction of image-based water transparency descriptors to quantify marine snow and turbidity features. A study with data from a stationary observatory. Presented at the MIW 2014 - Marine Imaging Workshop, Southampton.

    TICAL - a web-tool for multivariate image clustering and data topology preserving visualization

    In life science research, bioimaging is often used to study two kinds of features in a sample simultaneously: morphology and co-location of molecular components. While bioimaging technology rapidly introduces and improves new multidimensional imaging platforms, bioimage informatics has to keep pace in order to develop algorithmic approaches that support biology experts in the complex task of data analysis. One particular problem is the availability and applicability of sophisticated image analysis algorithms via the web, so that different users can apply the same algorithms to their data (sometimes even to the same data, to obtain the same results), independently of their location and of the technical features of their computers. In this paper we describe TICAL, a visual data mining approach to multivariate microscopy analysis that can be applied fully through the web. We describe the algorithmic approach and the software concept, and present results obtained for different example images.
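
    As a rough sketch of the kind of pixel-wise multivariate clustering such a tool performs (not TICAL's actual algorithm, which additionally preserves data topology for visualization), the snippet below clusters the per-pixel feature vectors of a co-registered multichannel image with k-means; the channel layout, cluster count, and use of scikit-learn are assumptions.

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_multichannel_image(stack, n_clusters=5):
        # stack: (channels, height, width) array of co-registered channel images.
        # Each pixel becomes one feature vector; the returned label image assigns
        # every pixel to one of n_clusters co-location patterns.
        c, h, w = stack.shape
        pixels = stack.reshape(c, -1).T
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(pixels)
        return labels.reshape(h, w)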

    BIIGLE 2.0 - Browsing and Annotating Large Marine Image Collections

    Combining state-of-the-art digital imaging technology with different kinds of marine exploration techniques, such as modern AUVs (autonomous underwater vehicles), ROVs (remotely operated vehicles) or other monitoring platforms, enables marine imaging on new spatial and/or temporal scales. A comprehensive interpretation of such image collections requires the detection, classification and quantification of objects of interest in the images, usually performed by domain experts. However, the data volume and the rich content of the images make support by software tools indispensable. We define requirements for marine image annotation and present our new online tool Biigle 2.0. It has been developed with a special focus on annotating benthic fauna in marine image collections, with tools customized to increase efficiency and effectiveness in the manual annotation process. The software architecture of the system is described, the special features of Biigle 2.0 are illustrated with different use cases, and future developments are discussed.

    A machine vision system for automated non-invasive assessment of cell viability via dark field microscopy, wavelet feature selection and classification

    Wei N, Flaschel E, Friehs K, Nattkemper TW. A machine vision system for automated non-invasive assessment of cell viability via dark field microscopy, wavelet feature selection and classification. BMC Bioinformatics. 2008;9(1):449. Background: Cell viability is one of the basic properties indicating the physiological state of the cell and has therefore long been one of the major considerations in biotechnological applications. Conventional methods for extracting information about cell viability usually require reagents to be applied to the target cells. These reagent-based techniques are reliable and versatile; however, some of them can be invasive and even toxic to the target cells. In support of automated non-invasive assessment of cell viability, a machine vision system has been developed. Results: The system is based on a supervised learning technique: it learns from images of certain kinds of cell populations and trains classifiers, which are then employed to evaluate images of given cell populations obtained via dark field microscopy. Wavelet decomposition is performed on the cell images, and energy and entropy are computed for each wavelet subimage as features. A feature selection algorithm is implemented to achieve better performance. Correlation between the results from the machine vision system and commonly accepted gold standards becomes stronger if wavelet features are utilized, and the best performance is achieved with a selected subset of wavelet features. Conclusion: The machine vision system based on dark field microscopy in conjunction with supervised machine learning and wavelet feature selection automates the cell viability assessment and yields results comparable to commonly accepted methods. Wavelet features are found to be suitable for describing the discriminative properties of live and dead cells in viability classification. According to the analysis, live cells exhibit more morphological detail and are intracellularly more organized than dead ones, which display more homogeneous and diffuse gray values throughout the cell. Feature selection increases the system's performance because it excludes redundant or misleading information that may be contained in the raw data, and thus leads to better results.
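
    A minimal sketch of the described feature pipeline, under assumptions about the wavelet, decomposition depth and classifier (none of which are fixed in the abstract): compute energy and entropy for every wavelet subimage and feed the resulting vectors to a supervised classifier.

    import numpy as np
    import pywt
    from sklearn.svm import SVC

    def energy_entropy_features(img, wavelet="db2", level=2):
        # Energy and Shannon entropy of every wavelet subimage of a grayscale cell image.
        coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
        subimages = [coeffs[0]] + [band for detail in coeffs[1:] for band in detail]
        feats = []
        for s in subimages:
            sq = s.ravel() ** 2
            energy = sq.sum()
            p = sq / energy if energy > 0 else np.full(sq.size, 1.0 / sq.size)
            feats += [energy, float(-(p * np.log2(p + 1e-12)).sum())]
        return np.array(feats)

    # Hypothetical training call: X stacks per-image feature vectors, y holds
    # live/dead labels obtained from a reagent-based gold standard.
    # clf = SVC(kernel="rbf").fit(X, y)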

    DELPHI - fast and adaptive computational laser point detection and visual footprint quantification for arbitrary underwater image collections

    Marine researchers continue to create large quantities of benthic images, e.g., using AUVs (autonomous underwater vehicles). In order to quantify the size of sessile objects in the images, a pixel-to-centimeter ratio is required for each image, often provided indirectly through a geometric laser point (LP) pattern projected onto the seafloor. Manual annotation of these LPs in all images is too time-consuming and thus infeasible for today's data volumes. Because of the technical evolution of camera rigs, the geometrical layout and color features of the LPs vary between expeditions and projects, so applying a single algorithm tuned to one strictly defined LP pattern is also ineffective. Here we present the web tool DELPHI, which efficiently learns the LP layout of one image transect/collection from just a small number of hand-labeled LPs and applies this layout model to the rest of the data. The efficiency in adapting to new data allows the LPs and the pixel-to-centimeter ratio to be computed fully automatically and with high accuracy. DELPHI is applied to two real-world examples and shows clear improvements, both in reducing the tuning effort for new LP patterns and in increasing detection performance.
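
    The snippet below is a naive illustration of the two quantities involved, not DELPHI's learned layout model: a simple color-distance detector for LP candidate pixels (the mean LP color would be estimated from the hand-labeled examples) and the pixel-to-centimeter ratio derived from two LP centers and the known laser spacing. The color threshold and the laser spacing are assumptions.

    import numpy as np

    def detect_laser_point_pixels(img_rgb, mean_lp_color, max_dist=30.0):
        # Flag pixels whose RGB value lies within max_dist of the mean laser-point
        # color learned from a handful of hand-labeled LPs.
        diff = img_rgb.astype(float) - np.asarray(mean_lp_color, dtype=float)
        mask = np.sqrt((diff ** 2).sum(axis=-1)) < max_dist
        ys, xs = np.nonzero(mask)
        return np.column_stack([xs, ys]), mask

    def pixels_per_cm(lp_a, lp_b, laser_spacing_cm):
        # Pixel-to-centimeter ratio from two detected LP centers (x, y in pixels)
        # and the known physical distance between the projected lasers.
        return np.hypot(lp_a[0] - lp_b[0], lp_a[1] - lp_b[1]) / laser_spacing_cm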

    SOM-based Peptide Prototyping for Mass Spectrometry Peak Intensity Prediction

    In today's bioinformatics, mass spectrometry (MS) is the key technique for the identification of proteins. A prediction of spectrum peak intensities from pre-computed molecular features would pave the way to a better understanding of spectrometry data and improved spectrum evaluation. We propose a neural network architecture of Local Linear Map (LLM) type, based on Self-Organizing Maps (SOMs), for peptide prototyping and for learning locally tuned regression functions for peak intensity prediction in MALDI-TOF mass spectra. We obtain results comparable to those obtained by nu-Support Vector Regression and show how the SOM learning architecture provides a basis for peptide feature profiling and visualisation.
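
    A minimal Local Linear Map sketch under simplifying assumptions (single best-matching unit, no SOM neighborhood cooperation, fixed learning rate, scalar targets): each prototype stores an input vector, an output value and a local linear term, and a prediction uses the winner's locally tuned regression function.

    import numpy as np

    class LocalLinearMap:
        def __init__(self, n_nodes, n_features, lr=0.05, seed=0):
            rng = np.random.default_rng(seed)
            self.w_in = rng.normal(size=(n_nodes, n_features))   # input prototypes
            self.w_out = np.zeros(n_nodes)                       # output prototypes
            self.A = np.zeros((n_nodes, n_features))             # local linear terms
            self.lr = lr

        def _bmu(self, x):
            # Index of the best-matching (closest) input prototype.
            return int(np.argmin(((self.w_in - x) ** 2).sum(axis=1)))

        def fit(self, X, y, epochs=20):
            for _ in range(epochs):
                for x, t in zip(X, y):
                    k = self._bmu(x)
                    d = x - self.w_in[k]
                    err = t - (self.w_out[k] + self.A[k] @ d)
                    self.w_in[k] += self.lr * d
                    self.w_out[k] += self.lr * err
                    self.A[k] += self.lr * err * d
            return self

        def predict(self, X):
            out = []
            for x in X:
                k = self._bmu(x)
                out.append(self.w_out[k] + self.A[k] @ (x - self.w_in[k]))
            return np.array(out)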

    A Web2.0 Strategy for the Collaborative Analysis of Complex Bioimages

    Loyek C, Kölling J, Langenkämper D, Niehaus K, Nattkemper TW. A Web2.0 Strategy for the Collaborative Analysis of Complex Bioimages. In: Gama J, Bradley E, Hollmén J, eds. Advances in Intelligent Data Analysis X: 10th International Symposium, IDA 2011, Porto, Portugal, October 29-31, 2011. Proceedings. Lecture Notes in Computer Science. Vol 7014. Berlin, Heidelberg: Springer; 2011: 258-269.

    Gear-Induced Concept Drift in Marine Images and Its Effect on Deep Learning Classification

    Langenkämper D, van Kevelaer R, Purser A, Nattkemper TW. Gear-Induced Concept Drift in Marine Images and Its Effect on Deep Learning Classification. Frontiers in Marine Science. 2020;7:506. In marine research, image data sets from the same area but collected at different times allow seafloor fauna communities to be monitored over time. However, ongoing technological developments have led to the use of different imaging systems and deployment strategies. Thus, instances of the same class exhibit slightly shifted visual features in images taken at slightly different locations or with different gear. These shifts are referred to as concept drift in the domains of computational image analysis and machine learning, as this phenomenon poses particular challenges for these fields. In this paper, we analyse four different data sets from an area in the Peru Basin and show how changes in imaging parameters affect the classification of 12 megafauna morphotypes with a 34-layer ResNet. Images were captured using the ocean floor observation system, a traditional sled-based system, or an autonomous underwater vehicle, which is used as an imaging platform capable of surveying larger regions. ResNet applied to the separate individual data sets, i.e., without concept drift, showed that changing object distance was less important than the amount of training data. The results for the image data acquired with the ocean floor observation system showed higher performance values than for data collected with the autonomous underwater vehicle. The results from these concept drift studies indicate that collecting image data from many dives with slightly different gear may result in training data well suited for learning taxonomic classification tasks, and that data volume can compensate for light concept drift.
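
    As a rough illustration of the classification setup (not the authors' training code), the sketch below adapts a torchvision ResNet-34 to the 12 morphotype classes by replacing the final fully connected layer; the pretrained weights, optimizer and learning rate are assumptions.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Replace the ImageNet head of a 34-layer ResNet with a 12-class morphotype head.
    model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, 12)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    def train_step(images, labels):
        # One supervised update on a batch of annotated seafloor image patches.
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()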