18 research outputs found

    visClust: A visual clustering algorithm based on orthogonal projections

    We present visClust, a novel clustering algorithm based on low-dimensional data representations and visual interpretation. To this end, we design a transformation that represents the data as a binary integer array, enabling the use of image processing methods to select a partition. Qualitative and quantitative analyses show that the algorithm achieves high accuracy (measured with an adjusted one-sided Rand index) while requiring little runtime and RAM. We compare the results to six state-of-the-art algorithms; visClust outperforms them in most experiments, confirming its quality. Moreover, the algorithm requires only one obligatory input parameter while allowing optimization via optional parameters. The code is made available on GitHub.
    Comment: 23 pages
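
    A minimal sketch of the idea described above (not the authors' visClust implementation): project the data onto two dimensions with an orthogonal projection, rasterize the projection into a binary integer array, and let simple image processing (dilation and connected-component labelling) select the partition. The synthetic data set, grid size and dilation radius are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.metrics import adjusted_rand_score

# Synthetic data with known labels for evaluation.
X, y_true = make_blobs(n_samples=1000, centers=4, cluster_std=0.8, random_state=0)

# Orthogonal projection onto 2 dimensions (here via PCA).
Z = PCA(n_components=2).fit_transform(X)

# Rasterize the projected points into a binary integer array (the "image").
bins = 64
hist, xedges, yedges = np.histogram2d(Z[:, 0], Z[:, 1], bins=bins)
image = hist > 0

# Image processing: dilate to merge nearby occupied cells, then label components.
image = ndimage.binary_dilation(image, iterations=2)
labels_img, n_clusters = ndimage.label(image)

# Each point inherits the label of its grid cell.
ix = np.clip(np.digitize(Z[:, 0], xedges) - 1, 0, bins - 1)
iy = np.clip(np.digitize(Z[:, 1], yedges) - 1, 0, bins - 1)
y_pred = labels_img[ix, iy]

print(f"{n_clusters} clusters found, adjusted Rand index = {adjusted_rand_score(y_true, y_pred):.3f}")
```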

    On orthogonal projections for dimension reduction and applications in augmented target loss functions for learning problems

    The use of orthogonal projections on high-dimensional input and target data in learning frameworks is studied. First, we investigate the relation between two standard objectives in dimension reduction: preservation of variance and preservation of pairwise relative distances. Analyses of their asymptotic correlation, as well as numerical experiments, show that a projection usually does not satisfy both objectives at once. In a standard classification problem we determine projections on the input data that balance the two objectives and compare the subsequent results. Next, we extend the application of orthogonal projections to deep learning tasks and introduce a general framework of augmented target loss functions. These loss functions integrate additional information via transformations and projections of the target data. In two supervised learning problems, clinical image segmentation and music information classification, the proposed augmented target loss functions increase the accuracy.
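
    A hedged sketch of an augmented target loss in the sense described above: the standard loss is complemented by a term computed on a transformation of the targets. In the paper's applications the transformation carries domain knowledge about the targets; here a random orthogonal projection, the weight lam and the tensor shapes merely stand in for it and are illustrative assumptions, not the paper's exact setup.

```python
import torch

def augmented_target_loss(y_pred, y_true, T, lam=0.1):
    """Standard MSE plus an MSE on transformed targets, weighted by lam."""
    base = torch.mean((y_pred - y_true) ** 2)
    aug = torch.mean((y_pred @ T.T - y_true @ T.T) ** 2)
    return base + lam * aug

# Example: targets in R^10, transformation = orthogonal projection onto R^3.
torch.manual_seed(0)
Q, _ = torch.linalg.qr(torch.randn(10, 3))   # 10x3 matrix with orthonormal columns
T = Q.T                                      # rows span a 3-dimensional subspace

y_true = torch.randn(32, 10)
y_pred = torch.randn(32, 10, requires_grad=True)
loss = augmented_target_loss(y_pred, y_true, T, lam=0.5)
loss.backward()                              # gradients flow through both terms
print(float(loss))
```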

    An amplified-target loss approach for photoreceptor layer segmentation in pathological OCT scans

    Segmenting anatomical structures such as the photoreceptor layer in retinal optical coherence tomography (OCT) scans is challenging in pathological scenarios. Supervised deep learning models trained with standard loss functions usually characterize only the most common disease appearance in a training set, resulting in suboptimal performance and poor generalization when dealing with unseen lesions. In this paper we propose to overcome this limitation by means of an augmented target loss function framework. We introduce a novel amplified-target loss that explicitly penalizes errors within the central area of the input images, based on the observation that most of the challenging disease appearance is usually located in this area. We experimentally validate our approach on a data set of OCT scans from patients with macular diseases and observe increased performance compared to models trained with the standard losses alone. The proposed loss function helps the segmentation model to better distinguish photoreceptors in highly pathological scenarios.
    Comment: Accepted for publication at MICCAI-OMIA 201
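
    A hedged sketch of a spatially weighted segmentation loss that amplifies errors in the central image region, in the spirit of the abstract above; the per-pixel cross-entropy, the weight map, the amplification factor and the width of the central band are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def amplified_center_loss(logits, target, center_frac=0.5, amp=4.0):
    """Pixel-wise BCE with larger weights inside a central band of columns."""
    _, _, H, W = logits.shape
    weights = torch.ones(1, 1, H, W, device=logits.device)
    start = int(W * (1 - center_frac) / 2)
    end = W - start
    weights[..., start:end] = amp                      # amplify central columns
    per_pixel = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    return (weights * per_pixel).mean()

# Example with random tensors standing in for OCT B-scans and photoreceptor masks.
logits = torch.randn(2, 1, 64, 128, requires_grad=True)
target = (torch.rand(2, 1, 64, 128) > 0.5).float()
loss = amplified_center_loss(logits, target)
loss.backward()
print(float(loss))
```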

    Dimension reduction with orthogonal projections and applications in medical imaging using learning frameworks

    In the age of digitized data, dimension reduction is a necessary tool to compress and combine high-dimensional data. A lower dimensional representation should make computations feasible while preserving the essential information for subsequent tasks. In this thesis I derive new mathematical results on dimension reduction with orthogonal projections and connect them to problems arising in clinical routine, enabling new analyses and improving the performance of (deep) learning methods. Orthogonal projections are an important linear method in this framework, since the dimension reduction itself requires only a matrix multiplication, allowing fast computation on large and very high-dimensional data sets. We derive new results on point sequences that sample the space of orthogonal projections, the Grassmannian; in particular, we prove that a specific class of sequences covers this space in an asymptotically optimal manner. Numerical experiments illustrate the theoretical results, and the constructed point sequences are used in the subsequent analyses and experiments. To assess the adequacy of a dimension reduction method, the preservation of different aspects of information must be considered. The orthogonal projection arising from the well-known principal component analysis (PCA) maximizes the total variance within the projected data. Tasks with noisy data in particular benefit from this method, since the influence of noise can be reduced by cutting off the eigendirections corresponding to the smaller principal components. Random projections, on the other hand, preserve with high probability all pairwise distances within a data set, a property that is especially important for tasks relying on small changes within the data. Which kind of information matters most depends on the task at hand and the condition of the given data. We analyze how these two objectives relate and prove that they usually cannot be achieved at the same time; numerical experiments illustrate these results. Moreover, we determine specific projections on the input data of a standard classification problem, where the highest classification accuracy is achieved by projections that balance the two objectives, preservation of variance and of pairwise distances.

    Dimension reduction and orthogonal projections are useful in many diverse imaging problems. In the applied part of my PhD project I work with clinical optical coherence tomography (OCT) and magnetic resonance imaging (MRI) data. This research emerged from collaborations with the Vienna Reading Center (VRC), Department of Ophthalmology, Medical University of Vienna, and the Laboratory of Mathematics in Imaging (LMI), Brigham and Women's Hospital, Harvard Medical School, Boston, MA. Automated image segmentation is especially important for disease identification, since daily clinical routine usually does not leave enough time to evaluate all scans of a patient. Based on manually annotated OCT data, we develop image segmentation methods using (deep) learning algorithms that incorporate dimension reduction and orthogonal projections. Our methods improve on the accuracy of baseline methods and could serve as future guidelines for doctors. Moreover, in an analysis of MRI reconstruction accuracy, we use orthogonal projections to linearly compress higher dimensional signal arrays and conclude that the additional application of PCA is beneficial.
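
    A hedged numerical sketch of how the two objectives discussed above can be measured for a k-dimensional orthogonal projection: retained total variance, and preservation of pairwise relative distances (here proxied by the correlation between original and projected distances). The data set, the dimensions and the distance proxy are illustrative assumptions, not the thesis' experiments; which projection scores better on which objective depends on the data.

```python
import numpy as np
from scipy.stats import ortho_group
from scipy.spatial.distance import pdist
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n, d, k = 400, 50, 5
X = rng.normal(size=(n, d)) * np.linspace(3.0, 0.5, d)   # anisotropic, full-rank cloud

# Two orthogonal projections onto k dimensions.
Z_pca = PCA(n_components=k).fit_transform(X)              # maximizes retained variance
Q = ortho_group.rvs(d, random_state=0)[:, :k]             # random orthonormal basis
Z_rnd = (X - X.mean(axis=0)) @ Q                          # random orthogonal projection

d_orig = pdist(X)
for name, Z in [("PCA", Z_pca), ("random", Z_rnd)]:
    var_kept = Z.var(axis=0).sum() / X.var(axis=0).sum()
    dist_corr = np.corrcoef(d_orig, pdist(Z))[0, 1]
    print(f"{name:6s}  variance retained: {var_kept:.2f}  distance correlation: {dist_corr:.2f}")
```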