
    Visualization of dynamic multidimensional and hierarchical datasets

    When it comes to tools and techniques designed to help in understanding complex abstract data, visualization methods play a prominent role. They enable human operators to leverage their pattern-finding, outlier-detection, and questioning abilities to visually reason about a given dataset. Many methods exist that create suitable and useful visual representations of static abstract, non-spatial data. However, for temporal abstract, non-spatial datasets, in which the data changes and evolves through time, far fewer visualization techniques exist. This thesis focuses on the particular cases of temporal hierarchical data representation via dynamic treemaps, and temporal high-dimensional data visualization via dynamic projections. We tackle the joint question of how to extend projections and treemaps to stably, accurately, and scalably handle temporal multivariate and hierarchical data. The literature on static visualization techniques is rich, and the state-of-the-art methods have proven to be valuable tools in data analysis. Their temporal/dynamic counterparts, however, are not as well studied, and, until recently, there were few hierarchical and high-dimensional methods that explicitly took the temporal aspect of the data into consideration. In addition, there are few or no metrics to assess the quality of these temporal mappings, and even fewer comprehensive benchmarks to compare these methods. This thesis addresses the abovementioned shortcomings. For both dynamic treemaps and dynamic projections, we propose ways to accurately measure temporal stability; we evaluate existing methods considering the tradeoff between stability and visual quality; and we propose new methods that strike a better balance between stability and visual quality than existing state-of-the-art techniques. We demonstrate our methods with a wide range of real-world data, including an application of our new dynamic projection methods to support the analysis and classification of hyperkinetic movement disorder data.
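
    To make the notion of temporal stability of a dynamic projection concrete, the sketch below computes one simple stability-style measure: the average distance that each projected point moves between consecutive frames. This is an illustrative metric written for this summary, assuming 2-D projections stored as NumPy arrays with a consistent point ordering across frames; it is not the thesis's exact definition.

```python
import numpy as np

def temporal_instability(frames):
    """Mean per-point displacement between consecutive 2-D projection frames.

    frames: sequence of (n_points, 2) arrays, one per time step, with matching
    point order across frames. Lower values indicate a more stable dynamic
    projection. Illustrative only; not the thesis's exact stability measure.
    """
    drift = 0.0
    for prev, curr in zip(frames[:-1], frames[1:]):
        drift += np.linalg.norm(curr - prev, axis=1).mean()
    return drift / (len(frames) - 1)

# Example: three frames of a 100-point projection drifting slightly over time.
rng = np.random.default_rng(0)
frames = [rng.random((100, 2)) + 0.01 * t for t in range(3)]
print(temporal_instability(frames))
```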

    A Survey on Continual Semantic Segmentation: Theory, Challenge, Method and Application

    Continual learning, also known as incremental learning or lifelong learning, stands at the forefront of deep learning and AI systems. It breaks through the obstacle of one-way training on closed sets and enables continuous, adaptive learning under open-set conditions. In the past decade, continual learning has been explored and applied in multiple fields, especially in computer vision, covering classification, detection, and segmentation tasks. Continual semantic segmentation (CSS) is a challenging, intricate, and burgeoning task owing to its dense prediction requirement. In this paper, we present a review of CSS, aiming to build a comprehensive survey of problem formulations, primary challenges, common datasets, recent theories, and diverse applications. Concretely, we begin by elucidating the problem definitions and primary challenges. Based on an in-depth investigation of relevant approaches, we sort current CSS models into two main branches, data-replay and data-free methods. In each branch, the corresponding approaches are clustered by similarity and thoroughly analyzed, followed by qualitative comparison and quantitative reproduction on relevant datasets. We also introduce four CSS specialities with diverse application scenarios and development tendencies. Furthermore, we develop a benchmark for CSS encompassing representative references, evaluation results, and reproductions, which is available at https://github.com/YBIO/SurveyCSS. We hope this survey can serve as a reference-worthy and stimulating contribution to the advancement of the lifelong-learning field, while also providing valuable perspectives for related fields. Comment: 20 pages, 12 figures. Undergoing review.
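
    To make the data-replay versus data-free distinction concrete, the sketch below shows the simplest form of the data-replay idea: a bounded reservoir of past (image, mask) pairs that is mixed back into training when new classes arrive. The class name and capacity are illustrative assumptions; actual CSS methods covered by the survey differ in what they store (raw samples, features, or generated data) and how they rehearse it.

```python
import random

class ReplayBuffer:
    """Reservoir-sampling buffer illustrating the data-replay idea in CSS:
    keep a bounded sample of past (image, mask) pairs and rehearse them
    alongside each new task's training data. Purely illustrative."""

    def __init__(self, capacity=500):
        self.capacity = capacity
        self.samples = []
        self.seen = 0

    def add(self, image, mask):
        # Reservoir sampling: every pair seen so far has equal probability
        # of remaining in the buffer.
        self.seen += 1
        if len(self.samples) < self.capacity:
            self.samples.append((image, mask))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.samples[j] = (image, mask)

    def sample(self, k):
        # Draw a rehearsal mini-batch of old pairs to mix with new-class data.
        return random.sample(self.samples, min(k, len(self.samples)))
```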

    Identifying safe intersection design through unsupervised feature extraction from satellite imagery

    The World Health Organization has listed the design of safer intersections as a key intervention to reduce global road trauma. This article presents the first study to systematically analyze the design of all intersections in a large country, based on aerial imagery and deep learning. Approximately 900,000 satellite images were downloaded for all intersections in Australia, and customized computer vision techniques were used to emphasize the road infrastructure. A deep autoencoder extracted high-level features, including the intersection's type, size, shape, lane markings, and complexity, which were used to cluster similar designs. An Australian telematics data set linked infrastructure design to driving behaviors captured during 66 million kilometers of driving. This showed more frequent hard acceleration events (per vehicle) at four-way than at three-way intersections, relatively low hard deceleration frequencies at T-intersections, and consistently low average speeds on roundabouts. Overall, domain-specific feature extraction enabled the identification of infrastructure improvements that could result in safer driving behaviors, potentially reducing road trauma. Comment: 16 pages, 10 figures. Computer-Aided Civil and Infrastructure Engineering (2020).
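
    A minimal sketch of the pipeline described above (unsupervised feature extraction with an autoencoder, then clustering of the learned embeddings), written in PyTorch and scikit-learn. The architecture, 64x64 single-channel input, and cluster count are illustrative assumptions, not the paper's actual setup, and autoencoder training is omitted.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class TinyAutoencoder(nn.Module):
    """Small convolutional autoencoder; the bottleneck vector serves as an
    intersection's feature embedding. Layer sizes are illustrative only."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# After training the autoencoder on preprocessed road-infrastructure tiles,
# cluster the bottleneck embeddings to group similar intersection designs.
model = TinyAutoencoder()
images = torch.rand(100, 1, 64, 64)  # stand-in for preprocessed satellite tiles
_, embeddings = model(images)
labels = KMeans(n_clusters=8, n_init=10).fit_predict(embeddings.detach().numpy())
```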

    A Survey on ML4VIS: Applying Machine Learning Advances to Data Visualization

    Inspired by the great success of machine learning (ML), researchers have applied ML techniques to visualization to achieve better design, development, and evaluation of visualizations. This branch of studies, known as ML4VIS, has gained increasing research attention in recent years. To successfully adapt ML techniques for visualization, a structured understanding of the integration of ML4VIS is needed. In this paper, we systematically survey 88 ML4VIS studies, aiming to answer two motivating questions: "what visualization processes can be assisted by ML?" and "how can ML techniques be used to solve visualization problems?" This survey reveals seven main processes where the employment of ML techniques can benefit visualization: Data Processing4VIS, Data-VIS Mapping, Insight Communication, Style Imitation, VIS Interaction, VIS Reading, and User Profiling. The seven processes are related to existing visualization theoretical models in an ML4VIS pipeline, aiming to illuminate the role of ML-assisted visualization in visualization more generally. Meanwhile, the seven processes are mapped onto the main learning tasks in ML to align the capabilities of ML with the needs of visualization. Current practices and future opportunities of ML4VIS are discussed in the context of the ML4VIS pipeline and the ML-VIS mapping. While more studies are still needed in the area of ML4VIS, we hope this paper can provide a stepping stone for future exploration. A web-based interactive browser of this survey is available at https://ml4vis.github.io. Comment: 19 pages, 12 figures, 4 tables.

    A Study on Robustness and Semantic Understanding of Visual Models

    Vision models have grown in popularity and performance on many tasks since the emergence of large-scale datasets, improved access to computational resources, and new model architectures such as the transformer. However, it is still not well understood whether these models can be deployed in the real world. Because these models are black-box architectures, we do not fully understand what they are truly learning. An understanding of what models learn under the hood would lead to better improvements for real-world scenarios. Motivated by this, we benchmark these visual models on their robustness and their general understanding using newly proposed datasets and tasks, with semantics serving as both a probe and an area of improvement. We first propose a new task of graphical representation for video, using language as a semantic signal to enable quick and interpretable video understanding through cross-attention between language and video. We then explore the robustness of video action-recognition models: given real-world shifts from the original video distribution that deep learning models are trained on, where do models fail, and how can these failures be addressed? Next, we explore the robustness of video-language models for text-to-video retrieval: given real-world shifts in either the video or the text distribution that models were trained on, how do models fail, and where can improvements be made? Findings in this work indicate that visual-language models may struggle with human-level understanding. We therefore benchmark visual-language models on conceptual understanding of object relations, attribute-object relations, and context-object relations by proposing new datasets. Across all works in this dissertation, we empirically identify both weaknesses and strengths of large vision models and potential areas of improvement. Through this research, we aim to contribute to the advancement of computer vision model understanding, paving the way for more robust and generalizable models that can effectively handle real-world scenarios.
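
    The first contribution above relies on cross-attention between language and video as the semantic signal. The sketch below shows the basic mechanism in PyTorch, with sentence tokens as queries attending over per-frame video features so that the attention weights give a word-to-frame alignment. The dimensions, single-layer design, and random inputs are illustrative assumptions, not the dissertation's actual architecture.

```python
import torch
import torch.nn as nn

class LanguageVideoCrossAttention(nn.Module):
    """Language tokens attend over per-frame video features; the attention
    weights provide an interpretable alignment between words and frames."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text_tokens, frame_features):
        # text_tokens: (batch, n_words, dim); frame_features: (batch, n_frames, dim)
        fused, weights = self.attn(text_tokens, frame_features, frame_features)
        return fused, weights  # weights: (batch, n_words, n_frames)

model = LanguageVideoCrossAttention()
text = torch.rand(2, 12, 256)   # stand-in for sentence token embeddings
video = torch.rand(2, 32, 256)  # stand-in for 32 per-frame embeddings
fused, weights = model(text, video)
```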

    “Rethinking High-Grade Serous Carcinoma: Development of new tools for deep tissue profiling”

    Background: High-grade serous ovarian cancer (HGSOC) is the most frequently occurring and most fatal epithelial ovarian cancer (EOC) subtype. The reciprocal interplay of the different components encompassed within the tumour microenvironment (TME) is fundamental for tumour growth, progression, and therapy response. It is therefore important to be able to deeply characterize the complex and diverse TME with multidimensional approaches. Aims: The main aim of this project was to establish novel multiparametric mass cytometry panels and thoroughly characterise the HGSOC TME. Methods: We first developed a novel 35-marker ovarian TME-based cytometry by time-of-flight (CyTOF) panel (pan-tumour panel) and utilized it to examine the effects of six different tissue dissociation methods on cell surface antigen expression profiles in HGSOC tumour samples (Paper I). We further established a unique immune panel (pan-immune) for the detailed immunophenotyping of chemo-naïve HGSOC patients. The individual tumour immune microenvironments were characterized with tailored computational analysis (Paper II). Using an established merging algorithm, CyTOFmerge, the pan-tumour and pan-immune datasets were merged for a more in-depth immune delineation of the ten ovarian chemo-naïve TME profiles, in addition to tumour and stromal cell phenotyping (Paper III). Results: We have established a novel ovarian TME-based CyTOF panel for HGSOC that is capable of delineating the immune, tumour, and stromal cells of the TME. Utilizing this panel, we demonstrated that, although the six tissue dissociation methods have a certain level of influence on the TME antigen expression profiles, inter-patient differences between the tumour samples remain clear. In addition, we identified a previously undescribed stem-like cell subset (Paper I). We developed a unique 34-marker immune panel and provided a detailed characterization of the ovarian tumour immune microenvironment of chemo-naïve patients. We identified a high degree of interpatient immune cell heterogeneity and discovered an abundance of conventional dendritic cells (DC), natural killer (NK) cells, and unassigned hematopoietic cells. Certain monocyte and DC clusters showed prognostic relevance within the ovarian TME (Paper II). The merged dataset analysis revealed a new level of complexity, with a more in-depth delineation of immune (myeloid) cells in addition to tumour and stromal (fibroblast subset) cell phenotypes. We identified an even higher degree of interpatient TME heterogeneity and a novel tumour cell metacluster, CD45-CD56-(EpCAM-FOLR1-CD24-). As a benefit of integrating the datasets, we identified a larger number of clinical associations (from 12 with the pan-tumour dataset to 20 with the merged dataset). Most of these associations were between PFS, OS, and infiltrating immune cell subsets (Paper III). Conclusions and consequences: (Paper I) The panel represents a promising profiling tool for the in-depth phenotyping of HGSOC TME cell subsets. Although the tissue dissociation methods influence the TME antigen expression profiles, inter-patient differences remain clear. (Paper II) Our findings revealed a high degree of heterogeneity and identified phenotypic profiles that can be explored for use in HGSOC phenotypic profiling. (Paper III) Together, the merged analysis illustrates that comprehensive individual TME mapping of HGSOC patients can contribute to a better understanding of each patient's unique micromilieu, given the need for more personalized treatment approaches. (Doctoral dissertation)
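
    For readers unfamiliar with the panel-merging step used in Paper III, the sketch below illustrates the basic shared-marker idea behind k-nearest-neighbour merging of two CyTOF panels: cells measured with one panel borrow values for the other panel's unique markers from their nearest neighbours in the shared-marker space. This is an illustrative sketch with made-up marker names and random data, not the CyTOFmerge implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def merge_panels(panel_a, panel_b, shared_cols, unique_b_cols, k=5):
    """Impute panel B's unique markers onto panel A's cells by matching cells
    on the markers shared between the two panels (shared-marker merging idea).

    panel_a, panel_b: dicts mapping marker name -> 1-D array of per-cell values.
    """
    a_shared = np.column_stack([panel_a[c] for c in shared_cols])
    b_shared = np.column_stack([panel_b[c] for c in shared_cols])
    nn = NearestNeighbors(n_neighbors=k).fit(b_shared)
    _, idx = nn.kneighbors(a_shared)
    merged = dict(panel_a)
    for c in unique_b_cols:
        merged[c] = panel_b[c][idx].mean(axis=1)  # average over the k matched cells
    return merged

# Toy example with hypothetical markers (random values, for illustration only).
rng = np.random.default_rng(1)
panel_a = {"CD45": rng.random(200), "CD3": rng.random(200)}
panel_b = {"CD45": rng.random(300), "CD3": rng.random(300), "FOLR1": rng.random(300)}
merged = merge_panels(panel_a, panel_b, shared_cols=["CD45", "CD3"], unique_b_cols=["FOLR1"])
```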