10 research outputs found
Automated analysis of recycled construction and demolition waste aggregates based on image and spectral information
Construction and demolition waste (CDW) is a heterogeneous mixture of mineral, metallic, and organic components. Such mixtures require correspondingly high-quality processing before they can be reused as recycled aggregates in the production of building materials; aggregates are the granular material suitable for concrete production. In Germany, as worldwide, only a very small share of the CDW produced is reused for concrete production, because varying contents of porous particles and contamination with organic and inorganic material complicate the sorting of the mixtures. The state of the art in CDW analysis is manual inspection by laboratory assistants, since not all CDW classes can currently be classified by automated methods. Only a few targeted studies have so far addressed the recognition and separation of processed construction waste. These studies indicate that only a modern optical system combining two or more spectral and spatially resolving sensors with adapted recognition methods could, in the future, reliably distinguish the large number of materials found in CDW. Automating the recognition of bulk materials, in particular recycled CDW aggregates, would yield enormous savings in time and therefore cost. Solving a complex task such as CDW recognition requires algorithms from machine learning, image processing, and spectroscopy. Consequently, diverse investigations into dataset creation and structuring, feature extraction, selection of suitable features via feature-selection methods, choice of classification algorithm, and adaptation of the classifier must be carried out.
In this work, an automated analysis of recycled CDW aggregates based on image and spectral information was realized, and an analysis method for the quality assurance of recycled aggregates was developed. The investigations are based on samples prepared and pre-sorted by laboratory specialists. The samples were captured with a setup consisting of a high-resolution 3-CCD camera, a combination of incident and transmitted illumination, and an NIR spectrometer. This yields three datasets based on image, spectral, and hybrid information (the combination of both). Various feature-selection and feature-extraction algorithms were investigated and adapted on these datasets. To solve the recognition task, these algorithms were combined with classifiers from several families: statistical classifiers (Naive Bayes), decision trees (Random Forest), instance-based classifiers (k-Nearest Neighbors), support vector machines (SVMs), and artificial neural networks. The recognition problem is a complex optimization task with many influencing factors, which are described and investigated in this work. The fusion of image and spectral information, together with a suitable optimization of both parts, ultimately allows an overall recognition rate of 99.9% using an SVM classifier with a polynomial kernel.
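The fusion step described above can be sketched in a few lines. This is a minimal illustration, not the thesis's exact pipeline: the feature dimensions, the z-score normalization, and the kernel parameters are assumptions chosen for demonstration.

```python
import numpy as np

def fuse_features(image_feats, spectral_feats):
    """Feature-level fusion: z-score each modality separately (so the
    high-dimensional spectrum does not dominate), then concatenate into
    one hybrid vector per particle."""
    def zscore(x):
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-9)
    return np.hstack([zscore(image_feats), zscore(spectral_feats)])

def poly_kernel(a, b, degree=3, coef0=1.0):
    """Polynomial kernel, the kernel family reported to give the best
    overall recognition rate with the SVM."""
    return (a @ b.T + coef0) ** degree

rng = np.random.default_rng(0)
img_feats = rng.normal(size=(8, 12))   # e.g. 12 colour/shape features per particle (illustrative)
nir_feats = rng.normal(size=(8, 50))   # e.g. 50 NIR reflectance bands (illustrative)
hybrid = fuse_features(img_feats, nir_feats)
gram = poly_kernel(hybrid, hybrid)     # Gram matrix an SVM solver would consume
```

In practice the hybrid vectors would be passed to an SVM trainer; the kernel above only shows how the fused representation enters the classifier.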
Using hybrid information of colour image analysis and SWIR-spectrum for high-precision analysis of construction and demolition waste
This paper discusses the accuracy improvement of the automatic analysis of construction and demolition waste (CDW) achieved by combining image analysis and spectral information, i.e., by combining methods of image processing, spectral analysis, and supervised learning. The classification performance on colour images alone and on SWIR spectra alone showed that these two components must be combined in a single feature vector to improve the accuracy of the analysis. Investigations on hybrid information from colour images and SWIR spectra were carried out and compared with the separate use of these information sources.
Significant characteristics in VIS- and IR-spectrum of construction and demolition waste for high-precision supervised classification
This paper discusses the possibility of automatically classifying construction and demolition waste (CDW) using methods of spectral analysis and supervised classifiers. The classification performance on colour images showed that additional spectral information is needed to solve the recognition task satisfactorily. Therefore, investigations in the visible (VIS) and infrared (IR) spectrum were carried out to identify significant spectral characteristics that are useful for the automatic classification of C&D aggregates.
Explaining and Evaluating Deep Tissue Classification by Visualizing Activations of Most Relevant Intermediate Layers
Deep learning-based tissue classification may support pathologists in analyzing digitized whole slide images. However, in such critical tasks, only approaches that can be validated by medical experts in advance of deployment are suitable. We present an approach that contributes to making automated tissue classification more transparent. We step beyond the broadly used visualizations of a convolutional neural network's last layers by identifying the most relevant intermediate layers using Grad-CAM. A visual evaluation by a pathologist shows that, for correct class decisions, these layers assign relevance where important morphological structures are present. We introduce a tool that medical experts can easily use for such validation of any convolutional neural network and any layer. Visual explanations for intermediate layers provide insights into a neural network's decision for histopathological tissue classification. Future research must also consider the context of the input data.
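The core Grad-CAM computation applied per layer can be summarized as follows. This is a generic sketch of the published Grad-CAM formula, not the paper's tool; the activations and gradients here are random stand-ins for what a deep learning framework would provide via hooks.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM for one (possibly intermediate) convolutional layer:
    weight each feature map by the spatial mean of its gradient,
    sum the weighted maps, and keep only positive evidence."""
    weights = gradients.mean(axis=(1, 2))             # one weight per feature map
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over maps
    cam = np.maximum(cam, 0.0)                        # ReLU
    if cam.max() > 0:
        cam /= cam.max()                              # normalise to [0, 1]
    return cam

rng = np.random.default_rng(0)
acts = rng.random((16, 14, 14))        # mock feature maps of a mid-level layer
grads = rng.normal(size=(16, 14, 14))  # mock d(class score)/d(activations)
cam = grad_cam(acts, grads)            # (14, 14) relevance map to overlay on the tile
```

Ranking layers by how well such maps align with morphology is then what the visual evaluation by the pathologist assesses.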
The application of texture features to quality control of metal surfaces
Quality assessment is an important step in the production of metal parts: it checks whether the surface quality meets the requirements. Progress in computing technologies and computer vision makes visual surface-quality control with industrial cameras and image-processing methods possible. Various texture-feature algorithms suitable for different fields of image processing have been proposed in the literature. In this study, 27 texture features were calculated for surface images taken under different lighting conditions, and correlation coefficients between these 2D texture features and 11 3D roughness parameters were computed. A strong correlation between the 2D features and 3D parameters occurred for images captured under ring-light conditions.
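The correlation analysis described above can be illustrated with one representative texture feature. The paper's 27 features are not listed, so GLCM contrast stands in as a classic example; the images and the "Sa" roughness values below are synthetic and purely illustrative.

```python
import numpy as np

def glcm_contrast(img, levels=8):
    """Contrast of a grey-level co-occurrence matrix (horizontal neighbour,
    distance 1) -- one classic Haralick-style 2D texture feature."""
    q = (img.astype(float) / 256 * levels).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)  # count neighbour pairs
    glcm /= glcm.sum()
    i, j = np.indices((levels, levels))
    return float(((i - j) ** 2 * glcm).sum())

# Synthetic stand-in: images with increasing grey-level spread, paired with
# fabricated roughness values that rise with that spread by construction.
rng = np.random.default_rng(1)
spreads = np.linspace(32, 256, 10)
contrast = np.array([glcm_contrast(rng.integers(0, int(s), (64, 64))) for s in spreads])
sa = spreads / 256 + rng.normal(scale=0.05, size=10)  # mock Sa roughness parameter
r = np.corrcoef(contrast, sa)[0, 1]                   # Pearson coefficient, as in the study
```

In the study this coefficient is computed for every (2D feature, 3D parameter) pair and lighting condition; a high |r| flags features usable as roughness proxies.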
Tumor–Stroma Ratio in Colorectal Cancer—Comparison between Human Estimation and Automated Assessment
Simple Summary
A lower tumor–stroma ratio within a tumor correlates with a poorer outcome, i.e., with a higher risk of death. The assessment of this ratio by humans is prone to errors, and when presented with the same case, the ratios reported by multiple pathologists will often deviate significantly. The aim of our work was to predict the tumor–stroma ratio automatically using deep neural segmentation networks. The assessment comprises two steps: recognizing the different tissue types and estimating their ratio. We compared both steps individually to human observers and showed (i) that the outlined automatic method yields good segmentation results and (ii) that human estimations are consistently higher than the automated estimation and deviate significantly from a hand-annotated ground truth. We showed that including an additional evaluation step for our segmentation results and relating the segmentation quality to deviations in tumor–stroma assessment provides helpful insights.
Abstract
The tumor–stroma ratio (TSR) has been repeatedly shown to be a prognostic factor for survival prediction across different cancer types. However, an objective and reliable determination of the TSR remains challenging. We present an easily adaptable deep learning model for accurately segmenting tumor regions in hematoxylin and eosin (H&E)-stained whole slide images (WSIs) of colon cancer patients into five distinct classes (tumor, stroma, necrosis, mucus, and background), so that the tumor–stroma ratio can be determined even in the presence of necrotic or mucinous areas. We employ a few-shot model, aiming for the easy adaptability of our approach to related segmentation tasks or other primaries, and compare the results to a well-established state-of-the-art approach (U-Net). Both models achieve similar results, with overall accuracies of 86.5% and 86.7%, respectively, indicating that the adaptability does not lead to a significant decrease in accuracy. Moreover, we comprehensively compare our results with TSR estimates of human observers and examine discrepancies and inter-rater reliability in detail. By adding a second survey on segmentation quality on top of a first survey for TSR estimation, we found that TSR estimations of human observers are not as reliable a ground truth as previously thought.
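Once the five-class segmentation exists, deriving the TSR is a pixel-counting step. The sketch below is a hypothetical illustration: the label ordering is assumed, and the ratio is expressed as the tumor fraction of the tumor+stroma area, which matches "lower ratio, poorer outcome" above; the paper's exact convention may differ.

```python
import numpy as np

# Assumed labels for the five segmentation classes named in the abstract
TUMOR, STROMA, NECROSIS, MUCUS, BACKGROUND = range(5)

def tumor_stroma_ratio(mask):
    """TSR from a per-pixel class mask. Necrosis, mucus and background
    are excluded from the denominator, so the ratio stays well-defined
    when such regions are present."""
    tumor = np.count_nonzero(mask == TUMOR)
    stroma = np.count_nonzero(mask == STROMA)
    return tumor / (tumor + stroma)

mask = np.array([[TUMOR, TUMOR, STROMA],
                 [TUMOR, STROMA, MUCUS],
                 [NECROSIS, BACKGROUND, MUCUS]])
tsr = tumor_stroma_ratio(mask)  # 3 tumor px, 2 stroma px
```

Comparing this pixel-exact value against human percentage estimates is what exposes the systematic overestimation reported above.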
Domain Transfer in Histopathology using Multi-ProtoNets with Interactive Prototype Adaptation
Few-shot learning addresses the problem of classification when little data or few labels are available. This is especially relevant in histopathology, where labeling must be carried out by highly trained medical experts. Prototypical Networks promise transferability to new domains by using a pre-trained encoder and classifying via a prototypical representation of each class learned from few samples. We examine the applicability of this approach by attempting domain transfer from colon tissue (used for training the encoder) to urothelial tissue. Furthermore, we address the problems arising from representing a class via a small number of representatives (prototypes) by testing two different prototype calculation strategies. We compare the original “Prototype per Class” (PPC) approach to our “Prototype per Annotation” (PPA) method, which calculates one prototype for each example annotation made by the pathologist. We test the domain transfer capability of our approach on a dataset of 55 whole slide images (WSIs) containing six subtypes of urothelial carcinoma at two granularities: “superclasses”, which combines the tumorous subtypes into a single “tumor” class alongside an aggregated “healthy” class and an additional “necrosis” class, and “subtypes”, which considers all eleven classes separately. We evaluate the classic PPC approach as well as our PPA approach on this dataset. Our results show that the adaptation of the Prototypical Network from colon tissue to urothelial tissue was successful, yielding an F1 score of 0.91 for the “superclasses”. Furthermore, the PPA approach performs very comparably to the PPC strategy, making it a viable alternative that places more value on the intent of the pathologist during annotation.
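The difference between the two prototype strategies can be sketched as follows. The 2-D "embeddings" are hypothetical stand-ins for encoder outputs, and the nearest-prototype rule is the standard Prototypical Network decision; details such as the distance metric are assumptions.

```python
import numpy as np

def prototypes_per_class(feats, cls):
    """PPC: one prototype per class -- the mean of all its support embeddings."""
    return {c: feats[cls == c].mean(axis=0) for c in np.unique(cls)}

def prototypes_per_annotation(feats, cls, ann):
    """PPA: one prototype per pathologist annotation, so a class may be
    represented by several prototypes rather than a single mean."""
    return {c: np.array([feats[(cls == c) & (ann == a)].mean(axis=0)
                         for a in np.unique(ann[cls == c])])
            for c in np.unique(cls)}

def classify(x, protos):
    """Nearest-prototype rule; with PPA a class scores by its closest prototype."""
    return min(protos,
               key=lambda c: np.linalg.norm(np.atleast_2d(protos[c]) - x, axis=1).min())

# Toy 2-D embeddings: two annotations for class "a", one for class "b"
feats = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]])
cls = np.array(["a", "a", "b", "b"])
ann = np.array([0, 1, 2, 2])
ppc = prototypes_per_class(feats, cls)
ppa = prototypes_per_annotation(feats, cls, ann)
```

PPA keeps each annotation's character intact instead of averaging it away, which is the "intent of the pathologist" argument made above.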
Lymph node metastases detection in Whole Slide Images using prototypical patterns and transformer-guided multiple instance learning
Background: The examination of lymph nodes (LNs) for metastases is vital for the staging of cancer patients, which is necessary for diagnosis and adequate treatment selection. Advancements in digital pathology, utilizing whole slide images (WSIs) and convolutional neural networks (CNNs), pose new opportunities to automate this procedure, reducing pathologists' workload while simultaneously increasing the accuracy of metastases detection. Objective: To address the task of LN metastases detection, weakly supervised transformers are applied to the analysis of WSIs. Methods & Materials: As WSIs are too large to be processed as a whole, they are divided into non-overlapping patches, which are converted to feature vectors by a CNN pre-trained on H&E-stained colon cancer resections. A subset of these patches serves as input for a transformer that predicts whether a LN contains a metastasis; selecting a representative subset is therefore an important part of the pipeline. To this end, prototype-based clustering is employed and different sampling strategies are tested. Finally, the chosen feature vectors are fed into a transformer-based multiple instance learning (MIL) architecture, classifying the LNs as healthy/negative (containing no metastases) or metastatic/positive (containing metastases). The proposed model is trained only on the Camelyon16 training data (LNs from breast cancer patients) and evaluated on the Camelyon16 test set. Results: The trained model achieves accuracies of up to 92.3% on the test data (breast LNs). While the model struggles with smaller metastases, high specificities of up to 96.9% can be accomplished. Additionally, the model is evaluated on LNs from a different primary tumor (colon), where accuracies between 62.3% and 95.9% were obtained.
Conclusion: The investigated transformer model performs very well on the public breast LN data, but the domain transfer to colon LNs needs more research.
Towards interactive AI-authoring with prototypical few-shot classifiers in histopathology
A vast multitude of tasks in histopathology could potentially benefit from the support of artificial intelligence (AI). Many examples have been shown in the literature, and first commercial products with FDA or CE-IVDR clearance are available. However, two key challenges remain: (1) the scarcity of thoroughly annotated images and the laboriousness of producing them, and (2) the creation of robust models that can cope with the data heterogeneity in the field (domain generalization). In this work, we investigate how the combination of prototypical few-shot classification models and data augmentation can address both of these challenges. Based on annotated datasets that include multiple centers, multiple scanners, and two tumor entities, we examine the robustness and adaptability of few-shot classifiers in multiple scenarios. We demonstrate that data from one scanner and one site are sufficient to train robust few-shot classification models by applying domain-specific data augmentation. The models achieved a classification performance of around 90% on a multiscanner and multicenter database, which is on par with the accuracy achieved on the primary single-center, single-scanner data. Various convolutional neural network (CNN) architectures can be used for feature extraction in the few-shot model; a comparison of nine state-of-the-art architectures showed that EfficientNet B0 provides the best trade-off between accuracy and inference time. The classification of prototypical few-shot models directly relies on class prototypes derived from example images of each class. We therefore investigated the influence of prototypes originating from images from different scanners and evaluated their performance on the multiscanner database. Again, our few-shot model showed stable performance, with an average absolute deviation in accuracy of 1.8 percentage points compared to the primary prototypes.
Finally, we examined the adaptability to a new tumor entity: the classification of tissue sections containing urothelial carcinoma into normal, tumor, and necrotic regions. Only three annotations per subclass (e.g., muscle and adipose tissue are subclasses of normal tissue) were provided to adapt the few-shot model, which obtained an overall accuracy of 93.6%. These results demonstrate that prototypical few-shot classification is an ideal technology for realizing an interactive AI-authoring system, as it requires only a few annotations and can be adapted to new tasks without retraining the underlying feature-extraction CNN, which would in turn require a selection of hyperparameters based on data-science expertise. Similarly, it can be regarded as a guided annotation system. To this end, we realized a workflow and user interface that target non-technical users.
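The interactive adaptation described above hinges on the fact that adding a class only means computing a new prototype over a frozen encoder's embeddings. The sketch below illustrates this mechanism; the class names mirror the abstract, but the 2-D embeddings are hypothetical stand-ins for CNN features, and the nearest-mean rule is the generic prototypical classifier, not the product's exact code.

```python
import numpy as np

class PrototypeClassifier:
    """Interactive few-shot classifier: each class prototype is the mean of
    the embeddings of its example annotations. New classes are added on the
    fly, without retraining the frozen feature-extraction CNN."""

    def __init__(self):
        self.protos = {}

    def add_class(self, name, embeddings):
        """One call per pathologist-defined class; can be repeated at any time."""
        self.protos[name] = np.mean(embeddings, axis=0)

    def predict(self, embedding):
        """Assign the class whose prototype is nearest in embedding space."""
        return min(self.protos,
                   key=lambda c: np.linalg.norm(self.protos[c] - embedding))

clf = PrototypeClassifier()
# Three annotations per class, mirroring the adaptation described above
clf.add_class("normal",   np.array([[0.1, 0.0], [0.0, 0.2], [0.1, 0.1]]))
clf.add_class("tumor",    np.array([[0.9, 1.0], [1.0, 0.8], [0.8, 0.9]]))
clf.add_class("necrosis", np.array([[-1.0, 1.0], [-0.9, 0.8], [-1.1, 0.9]]))
```

Because no gradient-based retraining is involved, a non-technical user can revise a class simply by adding or removing annotations, which is what makes the guided-annotation workflow feasible.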