
    A hybrid approach for stain normalisation in digital histopathological images

    Stain inhomogeneity adversely affects the segmentation and quantification of tissues in histology images. Stain normalisation techniques have been used to standardise the appearance of images. However, most of the available stain normalisation techniques work only on a particular kind of stain image. In addition, some of these techniques fail to utilise both the spatial and textural information in histology images, leading to distortion of the image tissue. In this paper, a hybrid approach has been developed, based on an octree colour quantisation algorithm combined with the Beer-Lambert law, a modified blind source separation algorithm, and a modified colour transfer approach. The hybrid method consists of two stages: a stain separation stage and a colour transfer stage. In the stain separation stage, an octree colour quantisation algorithm combined with the Beer-Lambert law and a modified blind source separation algorithm are used to computationally estimate the amount of stain in a histology image based on its chromatic and luminous response. In the colour transfer stage, a modified colour transfer algorithm is used to minimise the effect of varying staining and illumination. The hybrid method addresses the colour variation problem in both H&DAB (haematoxylin and diaminobenzidine) and H&E (haematoxylin and eosin) stain images. The stain normalisation method is validated against ground truth data. It is widely known that the Beer-Lambert law applies only to stains (such as haematoxylin and eosin) that absorb light. We demonstrate that the Beer-Lambert law is also applicable to images containing a DAB stain. Better stain normalisation results are obtained in both H&E and H&DAB images.
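    As context for the stain separation stage, the sketch below shows how the Beer-Lambert law is commonly used to convert RGB pixel intensities into optical density (stain absorbance) values before stain vectors are estimated. It is a minimal, generic illustration rather than the authors' implementation; the background intensity of 255 and the small epsilon are assumed defaults.

```python
import numpy as np

def rgb_to_optical_density(image, background=255.0, eps=1e-6):
    """Convert an RGB histology image to optical density (OD) using the
    Beer-Lambert law: OD = -log(I / I0), where I0 is the background
    (unstained) intensity. Higher OD means more stain absorption."""
    image = image.astype(np.float64)
    return -np.log((image + eps) / background)

def optical_density_to_rgb(od, background=255.0):
    """Invert the Beer-Lambert transform to recover RGB intensities."""
    return np.clip(background * np.exp(-od), 0, 255).astype(np.uint8)

# Toy example: a 2x2 pixel patch; darker (more heavily stained) pixels
# receive higher optical density values.
patch = np.array([[[30, 20, 120], [200, 180, 210]],
                  [[60, 40, 150], [240, 235, 245]]], dtype=np.uint8)
od_patch = rgb_to_optical_density(patch)
print(od_patch.round(2))
```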

    Representation learning for user-generated data

    In this thesis, we study how representation learning methods can be applied to user-generated data. Our contributions cover three different applications but share a common denominator: the extraction of relevant user representations. Our first application is the item recommendation task, where recommender systems build user and item profiles from past ratings that reflect user preferences and item characteristics. Nowadays, textual information is often available alongside the ratings, and we propose to use it to enrich the profiles extracted from the ratings, in the hope of capturing the opinions and preferences shared in the text. The models we propose offer a further possibility: predicting the text a user would write about an item. Our second application is sentiment analysis and, in particular, polarity classification. Our idea is that recommender systems can be used for such a task. Recommender systems and traditional polarity classifiers operate on different time scales. We propose two hybridisations of these models: the former has better classification performance, while the latter highlights a vocabulary of surprise in the review texts. The third and final application we consider is urban mobility. It takes place beyond the frontiers of the Internet, in the physical world. Using authentication logs of subway users, which record the time and station at which users enter the subway, we show that it is possible to extract robust temporal profiles.
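    To illustrate the kind of user and item profiles a recommender system extracts from past ratings, here is a minimal matrix factorisation sketch. It is a generic illustration under standard assumptions (latent dimension, learning rate, and regularisation chosen arbitrarily), not the thesis's model, and it ignores the textual information discussed above.

```python
import numpy as np

def matrix_factorization(ratings, k=8, lr=0.01, reg=0.05, epochs=50, seed=0):
    """Learn latent user/item profiles from (user, item, rating) triples
    with plain SGD on the regularised squared error."""
    rng = np.random.default_rng(seed)
    n_users = max(u for u, _, _ in ratings) + 1
    n_items = max(i for _, i, _ in ratings) + 1
    U = 0.1 * rng.standard_normal((n_users, k))  # user profiles
    V = 0.1 * rng.standard_normal((n_items, k))  # item profiles
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - U[u] @ V[i]          # prediction error for this rating
            u_row = U[u].copy()            # keep the pre-update user profile
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * u_row - reg * V[i])
    return U, V

# Toy example: 3 users, 3 items, a few observed ratings.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 1, 4.0), (2, 2, 1.0), (1, 2, 2.0)]
U, V = matrix_factorization(ratings)
print("predicted rating of user 0 for item 2:", round(U[0] @ V[2], 2))
```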

    Community detection in graphs

    Thesis (Ph.D.) - Indiana University, Luddy School of Informatics, Computing, and Engineering/University Graduate School, 2020. Community detection has always been one of the fundamental research topics in graph mining. As a type of unsupervised or semi-supervised approach, community detection aims to explore high-order closeness between nodes by leveraging the graph's topological structure. By grouping similar nodes or edges into the same community while separating dissimilar ones into different communities, the graph structure can be revealed at a coarser resolution. This can benefit numerous applications such as shopping recommendation and advertising in e-commerce, protein-protein interaction prediction in bioinformatics, and literature recommendation or scholar collaboration in citation analysis. However, identifying communities is an ill-defined problem. Due to the No Free Lunch theorem [1], there is neither a gold standard representing a perfect community partition nor a universal method able to detect satisfactory communities for all tasks on all types of graphs. To give a global view of this research topic, I summarise state-of-the-art community detection methods by categorising them based on graph types, research tasks, and methodological frameworks. As academic work on community detection has grown rapidly in recent years, I focus in particular on state-of-the-art works published in the latest decade, which may leave out some classic models published decades ago. Meanwhile, three subtle community detection tasks are proposed and assessed in this dissertation. First, unlike general models that consider only the graph structure, personalised community detection uses user needs as auxiliary information to guide community detection; the result is fine-grained communities for the nodes that better match user needs and coarser-resolution communities for the remaining, less relevant nodes. Second, graphs often suffer from sparse connectivity. Applying conventional models directly to such graphs may greatly distort the quality of the generated communities. To tackle this problem, cross-graph techniques are used to propagate information from an external graph to support community detection on the target graph. Third, the graph community structure supports a natural language processing (NLP) task that depicts the intrinsic characteristics of nodes by generating node summaries with a text generative model. The contribution of this dissertation is threefold. First, a substantial body of research is reviewed and summarised under a well-defined taxonomy; existing work on methods, evaluation, and applications is addressed in the literature review. Second, three novel community detection tasks are demonstrated, and the associated models are proposed and evaluated against state-of-the-art baselines on various datasets. Third, the limitations of current work are pointed out and promising future research directions are discussed.
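    As a concrete illustration of grouping similar nodes into communities using only the graph topology, the sketch below runs a standard modularity-based method on a small benchmark graph. It uses the networkx library's greedy modularity maximisation as a stand-in and is not one of the dissertation's proposed models.

```python
import networkx as nx
from networkx.algorithms import community

# Zachary's karate club: a classic small benchmark with a known split.
G = nx.karate_club_graph()

# Greedy modularity maximisation: repeatedly merge communities while
# the modularity score keeps improving.
communities = community.greedy_modularity_communities(G)

for idx, nodes in enumerate(communities):
    print(f"community {idx}: {sorted(nodes)}")

# Modularity quantifies how much denser intra-community edges are than
# expected under random rewiring with the same degree sequence.
print("modularity:", round(community.modularity(G, communities), 3))
```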

    Separation and estimation of the number of audio signal sources with time and frequency overlap

    Everyday audio recordings involve mixture signals: music contains a mixture of instruments; in a meeting or conference there is a mixture of human voices. For these mixtures, automatically separating the sources or estimating their number is a challenging task. A common assumption when processing mixtures in the time-frequency domain is that the sources do not fully overlap. In this work, however, we consider cases where the overlap is severe, for instance when instruments play the same note (unison) or when many people speak concurrently ("cocktail party"), which calls for new representations and more powerful models. To address the problems of source separation and count estimation, we use conventional signal processing techniques as well as deep neural networks (DNNs). We first address the source separation problem for unison instrument mixtures, studying the distinct spectro-temporal modulations caused by vibrato. To exploit these modulations, we developed a method based on time warping, informed by an estimate of the fundamental frequency. For cases where such estimates are not available, we present an unsupervised model inspired by the way humans group time-varying sources (common fate). This contribution comes with a novel representation that improves separation of overlapped and modulated sources in unison mixtures, and also improves vocal and accompaniment separation when used as input to a DNN model. We then focus on estimating the number of sources in a mixture, which is important for real-world scenarios. Our work on count estimation was motivated by a study of how humans address this task, which led us to conduct listening experiments confirming that humans can correctly estimate the number of sources only up to four. To answer the question of whether machines can perform similarly, we present a DNN architecture trained to estimate the number of concurrent speakers. Our results show improvements over other methods, and the model even outperformed humans on the same task. In both the source separation and the source count estimation tasks, the key contribution of this thesis is the concept of "modulation", which is important for computationally mimicking human performance. Our proposed Common Fate Transform is an adequate representation to disentangle overlapping signals for separation, and an inspection of our DNN count estimation model revealed that it, too, arrives at modulation-like intermediate features.
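    To make the idea of a modulation-aware representation more concrete, the sketch below computes a simplified common-fate-style tensor: a 2D Fourier transform over overlapping patches of an STFT magnitude spectrogram, so that sources sharing the same temporal modulation (common fate) become distinguishable along the modulation axes. It is a rough illustration with assumed parameters (FFT size, patch size, hop), not the thesis's exact Common Fate Transform.

```python
import numpy as np
from scipy.signal import stft

def common_fate_tensor(x, fs, n_fft=1024, patch=(16, 16), hop=(8, 8)):
    """Simplified common-fate-style representation:
    STFT magnitude -> 2D FFT magnitude over overlapping (freq, time) patches.
    Returns an array of shape (n_patches_f, n_patches_t, patch_f, patch_t)."""
    _, _, Z = stft(x, fs=fs, nperseg=n_fft)
    S = np.abs(Z)
    pf, pt = patch
    hf, ht = hop
    nf = (S.shape[0] - pf) // hf + 1
    nt = (S.shape[1] - pt) // ht + 1
    out = np.zeros((nf, nt, pf, pt))
    for i in range(nf):
        for j in range(nt):
            block = S[i * hf:i * hf + pf, j * ht:j * ht + pt]
            out[i, j] = np.abs(np.fft.fft2(block))  # modulation spectrum of the patch
    return out

# Toy example: two tones at the same pitch, one with vibrato-like
# frequency modulation; their patches differ along the modulation axes.
fs = 16000
t = np.arange(0, 2.0, 1 / fs)
plain = np.sin(2 * np.pi * 440 * t)
vibrato = np.sin(2 * np.pi * (440 * t + np.sin(2 * np.pi * 6 * t)))
cft = common_fate_tensor(plain + vibrato, fs)
print(cft.shape)
```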