
    Business Intelligence for Small and Middle-Sized Enterprises

    Data warehouses are the core of decision support systems (DSSs), which nowadays are used by all kinds of enterprises around the world. Although many studies have been conducted on the need for decision support systems in small businesses, most of them adopt existing solutions and approaches that are appropriate for large-scale enterprises but inadequate for small and middle-sized enterprises. Small enterprises require cheap, lightweight architectures and tools (hardware and software) providing online data analysis. To ensure these features, we review web-based business intelligence approaches. For real-time analysis, the traditional OLAP architecture is cumbersome and storage-costly; therefore, we also review in-memory processing. Consequently, this paper discusses the existing approaches and tools working in main memory and/or with web interfaces (including freeware tools), relevant for small and middle-sized enterprises in decision making.
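
    As an illustration of the kind of lightweight, in-memory analysis such tools target, the following sketch aggregates a small fact table entirely in main memory with pandas; the table and column names are invented for the example.

        import pandas as pd

        # Illustrative in-memory "cube": fact rows kept entirely in RAM.
        sales = pd.DataFrame({
            "region":  ["north", "north", "south", "south"],
            "quarter": ["Q1", "Q2", "Q1", "Q2"],
            "amount":  [1200.0, 1500.0, 900.0, 1100.0],
        })

        # Roll-up by region, then drill down by region x quarter,
        # with no pre-computed OLAP cube or external storage layer.
        by_region = sales.groupby("region")["amount"].sum()
        by_region_quarter = sales.groupby(["region", "quarter"])["amount"].sum()

        print(by_region)
        print(by_region_quarter)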

    A probabilistic multidimensional data model and its applications in business management

    This dissertation develops a conceptual data model that can efficiently handle huge volumes of data that contain uncertainty and are subject to frequent changes. This model can be used to build Decision Support Systems to improve the decision-making process. Business intelligence and decision-making in today's business world require extensive use of huge volumes of data. Real-world data contain uncertainty and change over time. Business leaders should have access to Decision Support Systems that can efficiently handle voluminous data, uncertainty, and modifications to uncertain data. Database product vendors provide several extensions and features to support these requirements; however, these extensions lack support for standard conceptual models. Standardization generally creates more competition and leads to lower prices and improved standards of living. Results from this study could become a data model standard in the area of applied decision sciences. The conceptual data model developed in this dissertation uses a mathematical foundation based on set theory, probability axioms, and the Bayesian framework. The conceptual data model, an algebra to manipulate the data, and a framework and algorithm to modify the data are presented. The data modification algorithm is analyzed for time and space efficiency. Formal mathematical proofs are provided to support identified properties of the model, the algebra, and the modification framework. The decision-making ability of this model was investigated using sample data. Advantages of this model and improvements in inventory management through its application are described. Comparison and contrast between this model and Bayesian belief networks are presented. Finally, scope and topics for further research are described.
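
    A minimal sketch of the Bayesian mechanics such a model rests on (not the dissertation's actual algebra): a cell holds a probability distribution over possible values, and evidence revises it via Bayes' rule. The prior, likelihoods, and value domain here are invented for illustration.

        # A "probabilistic cell": a distribution over possible attribute values.
        prior = {"low": 0.5, "medium": 0.3, "high": 0.2}

        # Likelihood of the observed evidence under each candidate value (assumed).
        likelihood = {"low": 0.1, "medium": 0.4, "high": 0.8}

        # Bayes' rule: posterior is proportional to prior * likelihood, normalized.
        unnorm = {v: prior[v] * likelihood[v] for v in prior}
        z = sum(unnorm.values())
        posterior = {v: p / z for v, p in unnorm.items()}

        print(posterior)  # {'low': 0.1515..., 'medium': 0.3636..., 'high': 0.4848...}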

    Dwarf: A Complete System for Analyzing High-Dimensional Data Sets

    The need for data analysis by different industries, including telecommunications, retail, manufacturing and financial services, has generated a flurry of research, highly sophisticated methods and commercial products. However, all of the current attempts are haunted by the so-called "high-dimensionality curse": the complexity of space and time increases exponentially with the number of analysis "dimensions". This means that all existing approaches are limited only to coarse levels of analysis and/or to approximate answers with reduced precision. As the need for detailed analysis keeps increasing, along with the volume and the detail of the data that is stored, these approaches are very quickly rendered unusable. I have developed a unique method for efficiently performing analysis that is not affected by the high dimensionality of data and scales only polynomially (and almost linearly) with the dimensions, without sacrificing any accuracy in the returned results. I have implemented a complete system (called "Dwarf") and performed an extensive experimental evaluation that demonstrated tremendous improvements over existing methods for all aspects of performing analysis: initial computation, storage, querying, and updating. I have extended my research to the "data-streaming" model, where updates are performed on-line, which complicates any concurrent analysis but has a very high impact on applications like security, network management/monitoring, router traffic control, and sensor networks. I have devised streaming algorithms that provide complex statistics within user-specified relative-error bounds over a data stream. I introduced the class of "distinct implicated statistics", which is much more general than the established class of "distinct count" statistics. The latter has proved invaluable in applications such as analyzing and monitoring the distinct count of species in a population or even in query optimization. The "distinct implicated statistics" class provides invaluable information about the correlations in the stream and is necessary for applications such as security. My algorithms are designed to use bounded amounts of memory and processing (so that they can even be implemented in hardware for resource-limited environments such as network routers or sensors) and also to work in "noisy" environments, where some data may be flawed, either implicitly due to the extraction process or explicitly.
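
    To make the "high-dimensionality curse" concrete, the sketch below naively materializes every group-by of a tiny fact table: with d dimensions there are 2**d aggregation granularities, which is the exponential blow-up Dwarf is designed to avoid. The data and dimension names are invented.

        from itertools import combinations

        import pandas as pd

        facts = pd.DataFrame({
            "store":    ["s1", "s1", "s2", "s2"],
            "product":  ["p1", "p2", "p1", "p2"],
            "customer": ["c1", "c2", "c2", "c1"],
            "sales":    [10, 20, 30, 40],
        })
        dims = ["store", "product", "customer"]

        # The full cube holds one aggregate view per subset of dimensions:
        # 2**len(dims) group-bys, i.e., exponential in the dimension count.
        for r in range(len(dims) + 1):
            for subset in combinations(dims, r):
                if subset:
                    rows = len(facts.groupby(list(subset))["sales"].sum())
                else:
                    rows = 1  # the grand total
                print(subset, "->", rows, "aggregate rows")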

    Business Intelligence on Non-Conventional Data

    The revolution in digital communications witnessed over the last decade has had a significant impact on the world of Business Intelligence (BI). In the big data era, the amount and diversity of data that can be collected and analyzed for the decision-making process transcend the restricted and structured set of internal data that BI systems are conventionally limited to. This thesis investigates the unique challenges imposed by three specific categories of non-conventional data: social data, linked data and schemaless data. Social data comprises the user-generated contents published through websites and social media, which can provide a fresh and timely perception of people's tastes and opinions. In Social BI (SBI), the analysis focuses on topics, meant as specific concepts of interest within the subject area. In this context, this thesis proposes the meta-star, an alternative strategy to the traditional star schema for modeling hierarchies of topics to enable OLAP analyses. The thesis also presents an architectural framework of a real SBI project and a cross-disciplinary benchmark for SBI. Linked data employ the Resource Description Framework (RDF) to provide a public network of interlinked, structured, cross-domain knowledge. In this context, this thesis proposes an interactive and collaborative approach to build aggregation hierarchies from linked data. Schemaless data refers to the storage of data in NoSQL databases that do not force a predefined schema, but let database instances embed their own local schemata. In this context, this thesis proposes an approach to determine the schema profile of a document-based database; the goal is to facilitate users in a schema-on-read analysis process by making explicit the rules that drove the usage of the different schemata. A final and complementary contribution of this thesis is an innovative technique in the field of recommendation systems to overcome user disorientation in the analysis of a large and heterogeneous wealth of data.
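
    As a toy illustration of the schema-profiling idea (not the thesis's actual algorithm), the sketch below summarizes which field paths the documents of a schemaless collection actually use; the example records are invented.

        from collections import Counter

        # A document collection with no enforced schema: each record may
        # embed its own local structure.
        docs = [
            {"user": "a", "text": "hi", "geo": {"lat": 1.0, "lon": 2.0}},
            {"user": "b", "text": "hello"},
            {"user": "c", "text": "hey", "lang": "en"},
        ]

        def paths(doc, prefix=""):
            # Yield every field path in a (possibly nested) document.
            for key, value in doc.items():
                path = f"{prefix}{key}"
                yield path
                if isinstance(value, dict):
                    yield from paths(value, prefix=path + ".")

        profile = Counter(p for d in docs for p in paths(d))
        print(profile)  # user: 3, text: 3, geo: 1, geo.lat: 1, geo.lon: 1, lang: 1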

    Towards Geo Decision Support Systems for Renewable Energy Outreach

    The Earth is affected by numerous phenomena such as natural disasters, over-urbanization, pollution, etc. All these activities place enormous strain on the planet's natural resources, leading to their scarcity. A particularly relevant issue is the intensive use of fossil energy and its negative impact on our environment. The search for new, clean energy resources to satisfy our needs and reduce the dependence on fossil energy resources is therefore fundamental. Transforming a power-generation infrastructure based on fossil resources into one based on renewable energy resources such as wind, solar, and hydroelectric power will lead to a better-preserved environment, since it implies little or no contribution to global warming through emissions, and to a reduced dependence on fossil energy sources. Renewable energy is a natural source of energy with important benefits, as it provides a reliable energy production system with stable energy prices, specialized jobs, and economic and environmental benefits. Solar energy is one of the best renewable energies. The sun is the natural and fundamental source of human existence on Earth and affects all chemical, physical, and biological processes. One hour of the sun's energy on Earth is enough to power the whole planet for a year. The sun's energy, or solar radiation, and its geographic presence determine possible investments in solar energy and the corresponding development strategies. It is thus essential to be able to provide answers related to the "what, who, when, and where". For example: Which job profile best fits a managerial position in renewable energy? Where is the best place to invest in solar farms and/or wind parks? On which date is the highest productivity recorded? Why is this location not suitable for hydropower projects? Why is there a dip in solar radiation in the year 2000 versus 2012? Etc. In general, decision making is the process of selecting the best viable option from a set of possible courses of action. Decision Support Systems (DSSs) constitute a cognitive ecosystem that facilitates the interaction between humans and data so as to enable the deep, meaningful, and useful creation of time- and cost-effective solutions. Data warehousing, Extract-Transform-Load (ETL) processes, and Business Intelligence (BI) are key technological aspects linked to decision making. Moreover, decision making in the context of solar energy depends on Geographic Information Systems. Although the sun's energy is available all over the world, solar energy is clearly more abundant near the tropics. For example, an investment in photovoltaic power plants at locations near the tropics and the equator will require less time for its amortization. Solar intensity varies depending on geographic location and climate conditions.
    For this reason, it is important to select the right location, one that optimizes the investment taking into account factors such as solar radiation intensity, climate, suitable land, and economics. There are models, such as Global atlas and SimuSOLAR, that provide suitability information about solar radiation and locations. However, these models are restricted to experts, cover limited geographic areas, are not suited to use cases other than those initially foreseen, and lack detailed, intuitive reports for the general public. Developing extensive cartography of the relation between sun and shadow zones is a very complex task that involves diverse concepts and engineering challenges, requiring the integration of different data models of heterogeneous quality and quantity, under budget constraints, etc. The goal of the research work carried out has been to establish a software architecture for developing Decision Support Systems in the field of renewable energy in general, and solar energy in particular. The key characteristic of this software architecture approach is its ability to deliver Decision Support Systems that offer low-cost services in this context. Let us draw an analogy. Imagine you are looking to buy or rent a house in Spain. You want to analyze the characteristics of the building (e.g., dimensions, garden, more than one building on the plot) and its surroundings (e.g., connections, services). To carry out this task you can use the free data provided by the Oficina Virtual del Catastro de España together with free imagery from an orthophotography provider (e.g., PNOA, Google, or Bing) and free contextual data from other local, regional, and/or national bodies (e.g., the Ayuntamiento de Zaragoza, the Gobierno de Aragón, the Cartociudad project). If someone integrates all these data sources into one system (e.g., the map-service client of the Spanish Spatial Data Infrastructure, IDEE), they have a low-cost Decision Support System for buying or renting a house. This research work aims at developing a software architecture approach that could provide a low-cost Decision Support System for consumers who need to make decisions related to renewable energy, in particular solar energy systems, such as selecting the best option for installing a solar system or deciding on an investment in a community solar farm. An important part of this research process has consisted in analyzing the suitability of data warehousing and Extract-Transform-Load technologies for storing and processing large amounts of historical energy data, and of Business Intelligence for structuring and reporting. On the other hand, it has been necessary to focus the work on open business models (web service infrastructure, 3D data models, techniques for representing sun and shadow data, and data sources) for the economic development of the product. In addition, this work identifies use cases where Decision Support Systems should be the instrument for solving market and scientific problems.
    This thesis therefore aims to emphasize and adopt the aforementioned technologies to propose a complete Decision Support System for a better potential use of renewable energy, which we call REDSS (Renewable Energy Decision Support System). The research work has been carried out with the goal of answering the following research questions: Questions related to data: - How to choose the most suitable data-creation process to build geographic models at a reasonable economic cost? Questions related to technology: - What are the current technological limitations of computational tools for calculating solar intensity and shadow? - How can concepts such as data warehousing and Business Intelligence be adapted to the field of renewable energy? - How should data related to solar intensity and shadow be structured and organized? - What are the significant differences between the proposed method and other existing global services? Questions related to use cases: - What are the use cases of REDSS? - What are the benefits of REDSS for experts and for the general public? To give concrete shape to the contribution and the proposed approach, a prototype called Energy2People has been developed based on Business Intelligence principles; it not only provides advanced location data but also constitutes a basis on which to develop future commercial products. In its current form, this tool helps discover and represent the key data relationships in the renewable energy sector and lets the general public discover relationships between data in cases where they were not evident. Essentially, the proposed approach leads to an increase in data management and visualization performance. The main contributions of this thesis can be summarized as follows: - First, this thesis reviews several open- and closed-source sun-shadow models to identify the scope of the need for decision models and their effective support. It also provides detailed information on free sources of solar radiation data. - Second, a conceptual framework for the development of low-cost geographic models is proposed. As an example of the application of this approach, a low-cost 3D virtual city model has been developed using cadastral data publicly available via web services. - Third, this work proposes applying REDSS to the decision-making problem in the field of solar energy. This model also features other distinguishing points, such as the co-creation and mix-and-match approaches. - Fourth, this thesis identifies several real application scenarios and several types of actors who should benefit from the application of this strategy. - Finally, this thesis presents the Energy2People prototype, developed to explore solar radiation location data and temporal events, which serves as a practical example of the approach put forward in this thesis. To make the potential of the proposed approach clearer, this prototype is compared with other international renewable energy atlases.

    BlinkDB: queries with bounded errors and bounded response times on very large data

    In this paper, we present BlinkDB, a massively parallel, approximate query engine for running interactive SQL queries on large volumes of data. BlinkDB allows users to trade off query accuracy for response time, enabling interactive queries over massive data by running queries on data samples and presenting results annotated with meaningful error bars. To achieve this, BlinkDB uses two key ideas: (1) an adaptive optimization framework that builds and maintains a set of multi-dimensional stratified samples from original data over time, and (2) a dynamic sample selection strategy that selects an appropriately sized sample based on a query's accuracy or response time requirements. We evaluate BlinkDB against the well-known TPC-H benchmarks and a real-world analytic workload derived from Conviva Inc., a company that manages video distribution over the Internet. Our experiments on a 100-node cluster show that BlinkDB can answer queries on up to 17 TBs of data in less than 2 seconds (over 200x faster than Hive), within an error of 2-10%. (Funding: National Science Foundation (U.S.), CISE Expeditions Award CCF-1139158; United States Defense Advanced Research Projects Agency, XData Award FA8750-12-2-0331.)
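
    A minimal sketch of the sampling-with-error-bars idea (not BlinkDB's actual optimizer or stratified-sampling machinery): estimate an aggregate from a uniform sample and attach a CLT-based 95% confidence interval. The data are synthetic.

        import math
        import random

        random.seed(0)
        population = [random.expovariate(1 / 50.0) for _ in range(1_000_000)]

        # Answer an AVG-style query on a small sample instead of the full data.
        n = 10_000
        sample = random.sample(population, n)

        mean = sum(sample) / n
        var = sum((x - mean) ** 2 for x in sample) / (n - 1)
        stderr = math.sqrt(var / n)

        # 95% confidence interval via the normal approximation (the "error bar").
        lo, hi = mean - 1.96 * stderr, mean + 1.96 * stderr
        print(f"estimated avg = {mean:.2f} +/- {1.96 * stderr:.2f} "
              f"(95% CI [{lo:.2f}, {hi:.2f}])")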

    A Survey on Array Storage, Query Languages, and Systems

    Since scientific investigation is one of the most important providers of massive amounts of ordered data, there is a renewed interest in array data processing in the context of Big Data. To the best of our knowledge, a unified resource that summarizes and analyzes array processing research over its long existence is currently missing. In this survey, we provide a guide for past, present, and future research in array processing. The survey is organized along three main topics. Array storage discusses all the aspects related to array partitioning into chunks. The identification of a reduced set of array operators to form the foundation for an array query language is analyzed across multiple such proposals. Lastly, we survey real systems for array processing. The result is a thorough survey on array data storage and processing that should be consulted by anyone interested in this research topic, independent of experience level. The survey is not complete though. We greatly appreciate pointers towards any work we might have forgotten to mention. (44 pages.)
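
    A toy sketch of the chunking idea at the heart of array storage (the array and chunk shapes are invented for the example): regular chunking splits an array into fixed-size tiles, and a cell's coordinates determine both the chunk that stores it and its offset within that chunk.

        # Regular chunking of a 2-D array into fixed-size tiles ("chunks").
        ARRAY_SHAPE = (1000, 1000)   # logical array: 1000 x 1000 cells
        CHUNK_SHAPE = (100, 200)     # each chunk stores 100 x 200 cells

        def chunk_of(i, j):
            # Chunk coordinates, plus the cell's offset inside that chunk.
            ci, cj = i // CHUNK_SHAPE[0], j // CHUNK_SHAPE[1]
            oi, oj = i % CHUNK_SHAPE[0], j % CHUNK_SHAPE[1]
            return (ci, cj), (oi, oj)

        print(chunk_of(250, 430))  # ((2, 2), (50, 30))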

    Enabling Ubiquitous OLAP Analyses

    An OLAP analysis session is carried out as a sequence of OLAP operations applied to multidimensional cubes. At each step of a session, an operation is applied to the result of the previous step in an incremental fashion. Due to its simplicity and flexibility, OLAP is the most widely adopted paradigm for exploring the data stored in data warehouses. With the goal of broadening the reach of OLAP analyses, in this thesis we touch several critical topics. We first present our contributions to dealing with data extraction from service-oriented sources, which are nowadays used to provide access to many databases and analytic platforms. By addressing data extraction from these sources we make a step towards the integration of external databases into the data warehouse, thus providing richer data that can be analyzed through OLAP sessions. The second topic that we study is the visualization of multidimensional data, which we exploit to enable OLAP on devices with limited screen and bandwidth capabilities (i.e., mobile devices). Finally, we propose solutions to obtain multidimensional schemata from unconventional sources (e.g., sensor networks), which are crucial to perform multidimensional analyses.
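
    A minimal sketch of an OLAP session as described above, where each operation is applied to the result of the previous step; the cube and dimension names are invented.

        import pandas as pd

        cube = pd.DataFrame({
            "year":    [2019, 2019, 2020, 2020],
            "country": ["IT", "FR", "IT", "FR"],
            "sales":   [100, 80, 120, 90],
        })

        # Step 1: slice -- keep only one member of a dimension.
        step1 = cube[cube["country"] == "IT"]
        # Step 2: roll-up -- aggregate the previous result to a coarser level.
        step2 = step1.groupby("year", as_index=False)["sales"].sum()
        print(step2)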

    Leveraging query logs for user-centric OLAP

    OLAP (On-Line Analytical Processing), the process of efficiently enabling common analytical operations on the multidimensional view of data, is a cornerstone of Business Intelligence. While OLAP is now a mature, efficiently implemented technology, very little attention has been paid to the effectiveness of the analysis and the user-friendliness of this technology, which is often considered tedious to use. This dissertation is a contribution to developing user-centric OLAP, focusing on the use of former queries logged by an OLAP server to enhance subsequent analyses. It shows how logs of OLAP queries can be modeled, constructed, manipulated, compared, and finally leveraged for personalization and recommendation. Logs are modeled as sets of analytical sessions, sessions being modeled as sequences of OLAP queries. Three main approaches are presented for modeling queries: as unevaluated collections of fragments (e.g., group-by sets, sets of selection predicates, sets of measures), as sets of references obtained by partially evaluating the query over dimensions, or as query answers. Such logs can be constructed even from sets of SQL query expressions, by translating these expressions into a multidimensional algebra and bridging the translations to detect analytical sessions. Logs can be searched, filtered, compared, combined, modified, and summarized with a language inspired by the relational algebra and parametrized by binary relations over sessions. In particular, these relations can be specialization relations or can be based on similarity measures tailored for OLAP queries and analytical sessions. Logs can be mined for various kinds of hidden knowledge that, depending on the query model used, accurately represent the extracted user behavior. This knowledge includes simple preferences, navigational habits, and discoveries made during former explorations, and it can be used in various query personalization or query recommendation approaches. Such approaches vary in terms of formulation effort, proactiveness, prescriptiveness, and expressive power: query personalization, i.e., coping with a current query returning too few or too many results, can use dedicated operators for expressing preferences or be based on query expansion; query recommendation, i.e., suggesting queries to pursue an analytical session, can be based on information extracted from the current state of the database and the query, or be purely history-based, i.e., leveraging the query log. While they can be immediately integrated into a complete architecture for User-Centric Query Answering in data warehouses, the models and approaches introduced in this dissertation can also be seen as a starting point for assessing the effectiveness of analytical sessions, with the ultimate goal of enhancing the overall decision-making process.
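
    As a toy illustration of comparing analytical sessions (not the dissertation's actual similarity measures), the sketch below scores two sessions, each a sequence of queries described only by their group-by attribute sets, with a position-wise Jaccard similarity; the sessions are invented.

        def query_similarity(q1, q2):
            # Jaccard similarity between two queries' group-by attribute sets.
            union = len(q1 | q2)
            return len(q1 & q2) / union if union else 1.0

        def session_similarity(s1, s2):
            # Toy alignment: compare queries position by position, up to the
            # shorter session's length, and average the similarities.
            n = min(len(s1), len(s2))
            if n == 0:
                return 0.0
            return sum(query_similarity(s1[i], s2[i]) for i in range(n)) / n

        session_a = [{"year"}, {"year", "country"}, {"country"}]
        session_b = [{"year"}, {"year", "product"}, {"country", "product"}]
        print(session_similarity(session_a, session_b))  # 0.611...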

    Design of Progressive Visualization Systems for Large-scale Data Exploration

    Doctoral dissertation, Seoul National University Graduate School, Department of Computer Science and Engineering, College of Engineering, February 2020 (advisor: Jinwook Seo). Understanding data through interactive visualization, also known as visual analytics, is a common and necessary practice in modern data science. However, as data sizes have increased at unprecedented rates, the computation latency of visualization systems becomes a significant hurdle to visual analytics. The goal of this dissertation is to design a series of systems for progressive visual analytics (PVA), a visual analytics paradigm that can provide intermediate results during computation and allow visual exploration of these results, to address the scalability hurdle. To support the interactive exploration of data with billions of records, we first introduce SwiftTuna, an interactive visualization system with scalable visualization and computation components. Our performance benchmark demonstrates that it can handle data with four billion records, giving responsive feedback every few seconds without precomputation. Second, we present PANENE, a progressive algorithm for the Approximate k-Nearest Neighbor (AKNN) problem. PANENE brings useful machine learning methods into visual analytics, which has been challenging due to their long initial latency resulting from AKNN computation. In particular, we accelerate t-Distributed Stochastic Neighbor Embedding (t-SNE), a popular non-linear dimensionality reduction technique, which enables the responsive visualization of data with a few hundred columns. Each of these two contributions aims to address the scalability issues stemming from a large number of rows or columns in data, respectively. Third, from the users' perspective, we focus on improving the trustworthiness of intermediate knowledge gained from uncertain results in PVA. We propose a novel PVA concept, Progressive Visual Analytics with Safeguards, and introduce PVA-Guards, safeguards people can leave on uncertain intermediate knowledge that needs to be verified. We also present a proof-of-concept system, ProReveal, designed and developed to integrate seven safeguards into progressive data exploration. Our user study demonstrates that people not only successfully created PVA-Guards on ProReveal but also voluntarily used PVA-Guards to manage the uncertainty of their knowledge. Finally, summarizing the three studies, we discuss design challenges for progressive systems as well as future research agendas for PVA.
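
    A minimal sketch of the progressive-computation pattern these systems build on (the data are synthetic): process the data in chunks and, after each chunk, emit an intermediate estimate together with an uncertainty bound so the analyst can act before the run finishes.

        import math
        import random

        random.seed(7)
        data = [random.gauss(100, 15) for _ in range(1_000_000)]

        CHUNK = 100_000
        count, total, total_sq = 0, 0.0, 0.0

        # Progressive mean: report the running estimate and a 95% confidence
        # interval after every chunk instead of only at the very end.
        for start in range(0, len(data), CHUNK):
            for x in data[start:start + CHUNK]:
                count += 1
                total += x
                total_sq += x * x
            mean = total / count
            var = (total_sq - count * mean * mean) / (count - 1)
            ci = 1.96 * math.sqrt(var / count)
            print(f"after {count:>9,} rows: mean = {mean:.3f} +/- {ci:.3f}")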