
    ReLoop2: Building Self-Adaptive Recommendation Models via Responsive Error Compensation Loop

    Industrial recommender systems face the challenge of operating in non-stationary environments, where data distribution shifts arise from evolving user behaviors over time. To tackle this challenge, a common approach is to periodically re-train or incrementally update deployed deep models with newly observed data, resulting in a continual training process. However, the conventional learning paradigm of neural networks relies on iterative gradient-based updates with a small learning rate, making it slow for large recommendation models to adapt. In this paper, we introduce ReLoop2, a self-correcting learning loop that facilitates fast model adaptation in online recommender systems through responsive error compensation. Inspired by the slow-fast complementary learning system observed in human brains, we propose an error memory module that directly stores error samples from incoming data streams. These stored samples are subsequently leveraged to compensate for model prediction errors during testing, particularly under distribution shifts. The error memory module is designed with fast access capabilities and undergoes continual refreshing with newly observed data samples during the model serving phase to support fast model adaptation. We evaluate the effectiveness of ReLoop2 on three open benchmark datasets as well as a real-world production dataset. The results demonstrate the potential of ReLoop2 in enhancing the responsiveness and adaptiveness of recommender systems operating in non-stationary environments.

    Comment: Accepted by KDD 2023. See the project page at https://xpai.github.io/ReLoo
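The abstract does not spell out how the error memory retrieves relevant samples at serving time. A minimal sketch of the general idea, assuming a toy setup where residuals are keyed by a coarse feature bucket and retrieved by exact key match (the names `ErrorMemory`, `update`, and `compensate` are illustrative, not the paper's API):

```python
from collections import deque

class ErrorMemory:
    """Toy responsive error memory: stores recent prediction residuals
    keyed by a coarse feature bucket, and compensates new predictions
    with the mean stored residual for that bucket."""

    def __init__(self, capacity=1000):
        # bounded buffer: oldest error samples are evicted first,
        # which keeps the memory "refreshed" with recent data
        self.buffer = deque(maxlen=capacity)

    def update(self, key, y_true, y_pred):
        # store the observed residual for this bucket
        self.buffer.append((key, y_true - y_pred))

    def compensate(self, key, y_pred):
        # shift the raw prediction by the mean residual seen for this bucket
        residuals = [e for k, e in self.buffer if k == key]
        if not residuals:
            return y_pred
        return y_pred + sum(residuals) / len(residuals)

# A model that keeps underestimating for bucket "user_a" gets corrected.
mem = ErrorMemory(capacity=100)
mem.update("user_a", y_true=1.0, y_pred=0.6)
mem.update("user_a", y_true=0.9, y_pred=0.5)
corrected = mem.compensate("user_a", y_pred=0.5)  # 0.5 + mean(0.4, 0.4) = 0.9
```

The bounded deque stands in for the paper's fast-access, continually refreshed memory; the actual system presumably retrieves by similarity over learned representations rather than exact key equality.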

    Analyzing library collections with starfield visualizations

    This paper presents a qualitative and formative study of the uses of a starfield-based visualization interface for analysis of library collections. The evaluation process has produced feedback that suggests ways to significantly improve starfield interfaces and the interaction process to improve their learnability and usability. The study also gave us a clear indication of additional potential uses of starfield visualizations that can be exploited through further functionality and interface development. We report on the resulting implications for the design and use of starfield visualizations, which will impact their graphical interface features, their use for managing data quality, and their potential for various forms of visual data mining. Although the current implementation and analysis focus on the collection of a physical library, the most important contributions of our work will be in digital libraries, in which the volume, complexity, and dynamism of collections are increasing dramatically and tools are needed for visualization and analysis.

    Experiences with starfield visualizations for analysis of library collections

    This paper presents a qualitative and formative study of the uses of a starfield-based visualization interface for analysis of library collections. The evaluation process has produced feedback that suggests ways to significantly improve starfield interfaces and the interaction process to improve their learnability and usability. The study also gave us a clear indication of additional potential uses of starfield visualizations that can be exploited through further functionality and interface development. We report on the resulting implications for the design and use of starfield visualizations, which will impact their graphical interface features, their use for managing data quality, and their potential for various forms of visual data mining. Although the current implementation and analysis focus on the collection of a physical library, the most important contributions of our work will be in digital libraries, in which the volume, complexity, and dynamism of collections are increasing dramatically and tools are needed for visualization and analysis.

    Identifying Correlated Heavy-Hitters in a Two-Dimensional Data Stream

    We consider online mining of correlated heavy-hitters from a data stream. Given a stream of two-dimensional data, a correlated aggregate query first extracts a substream by applying a predicate along a primary dimension, and then computes an aggregate along a secondary dimension. Prior work on identifying heavy-hitters in streams has almost exclusively focused on single-dimensional streams, and such methods yield little insight into the properties of heavy-hitters along other dimensions. In typical applications, however, an analyst is interested not only in identifying heavy-hitters, but also in understanding further properties, such as: which other items appear frequently along with a heavy-hitter, or what is the frequency distribution of items that appear along with the heavy-hitters? We consider queries of the following form: in a stream S of (x, y) tuples, on the substream H of all x values that are heavy-hitters, maintain those y values that occur frequently with the x values in H. We call this problem Correlated Heavy-Hitter (CHH) identification. We give an approximate formulation of CHH identification and present an algorithm for tracking CHHs on a data stream. The algorithm is easy to implement and uses workspace orders of magnitude smaller than the stream itself. We present provable guarantees on the maximum error, as well as detailed experimental results that demonstrate the space-accuracy trade-off.
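The paper's specific algorithm and its error guarantees are not reproduced in this abstract. One common way to approximate this kind of nested query is a Misra-Gries summary over the primary dimension with a nested Misra-Gries summary over the secondary dimension for each tracked x value; a rough sketch under that assumption (function names are illustrative, not the paper's):

```python
def misra_gries_update(counters, item, k):
    """One Misra-Gries update step, keeping at most k - 1 counters."""
    if item in counters:
        counters[item] += 1
    elif len(counters) < k - 1:
        counters[item] = 1
    else:
        # decrement all counters; drop any that reach zero
        for key in list(counters):
            counters[key] -= 1
            if counters[key] == 0:
                del counters[key]

def track_chh(stream, k1=10, k2=10):
    """Track candidate correlated heavy-hitters in a stream of (x, y)
    tuples: heavy x values, and for each, the y values frequent with it."""
    primary = {}    # x -> approximate count (heavy-hitter candidates)
    secondary = {}  # x -> {y -> approximate count} for tracked x values
    for x, y in stream:
        misra_gries_update(primary, x, k1)
        if x in primary:
            misra_gries_update(secondary.setdefault(x, {}), y, k2)
        # discard nested summaries for x values evicted from the primary table
        for x_old in list(secondary):
            if x_old not in primary:
                del secondary[x_old]
    return primary, secondary

# x = 1 dominates the stream, and y = 7 dominates alongside it.
primary, secondary = track_chh([(1, 7)] * 50 + [(2, 3)] * 5)
```

The workspace is bounded by roughly k1 * k2 counters regardless of stream length, which matches the abstract's point that the summary is orders of magnitude smaller than the stream; the paper's own space-accuracy guarantees apply to its algorithm, not to this sketch.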

    ์‹œ๊ฐํ™” ์ดˆ์‹ฌ์ž์—๊ฒŒ ์‹œ๊ฐ์  ๋น„๊ต๋ฅผ ๋•๋Š” ์ •๋ณด ์‹œ๊ฐํ™” ๊ธฐ์ˆ ์˜ ๋””์ž์ธ

    Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Computer Science and Engineering, 2020. 2. Jinwook Seo.

    Visual comparison is one of the fundamental tasks in information visualization (InfoVis) that enables people to organize, evaluate, and combine information fragmented across visualizations. For example, people perform visual comparison tasks to compare data over time, from different sources, or with different analytic models. While the InfoVis community has focused on understanding the effectiveness of different visualization designs for supporting visual comparison tasks, it is still unclear how to design effective comparative visualizations due to several limitations: (1) empirical findings and practical implications from those studies are fragmented, and (2) we lack user studies that directly investigated the effectiveness of different visualization designs for visual comparison. In this dissertation, we present the results of three studies to build our knowledge on how to support effective visual comparison for InfoVis novices, general people who are not familiar with visual representations and the visual data exploration process. Identifying the major stages in the visualization construction process where novices confront challenges with visual comparison tasks, we explored two high-level comparison tasks with actual users: comparing visual mappings (encoding barrier) and comparing information (interpretation barrier) in visualizations. First, we conducted a systematic literature review of research papers (N = 104) that focused on supporting visual comparison tasks, to gather and organize the practical insights that researchers gained in the wild. From this study, we offered implications for designing comparative visualizations, such as actionable guidelines, as well as a lucid categorization of comparative designs that can help researchers explore the design space.

    In the second study, we performed a qualitative user study (N = 24) to investigate how novices compare and understand visual mappings suggested in a visual-encoding recommendation interface. Based on the study, we present novices' main challenges in using visual encoding recommendations and design implications as remedies. In the third study, we conducted a design study in the area of bioinformatics to design and implement a visual analytics tool, XCluSim, that helps users compare multiple clustering results. Case studies with a bioinformatician showed that our system enables analysts to easily evaluate the quality of a large number of clustering results. Based on the results of the three studies in this dissertation, we suggest a future research agenda, such as designing recommendations for visual comparison and distinguishing InfoVis novices from experts.

    CHAPTER 1. Introduction
        1.1 Background and Motivation
        1.2 Research Questions and Approaches
            1.2.1 Revisiting Comparative Layouts: Design Space, Guidelines, and Future Directions
            1.2.2 Understanding How InfoVis Novices Compare Visual Encoding Recommendation
            1.2.3 Designing XCluSim: a Visual Analytics System for Comparing Multiple Clustering Results
        1.3 Dissertation Outline
    CHAPTER 2. Related Work
        2.1 Visual Comparison Tasks
        2.2 Visualization Designs for Comparison
            2.2.1 Gleicher et al.'s Comparative Layout
        2.3 Understanding InfoVis Novices
        2.4 Visualization Recommendation Interfaces
        2.5 Comparative Visualizations for Cluster Analysis
    CHAPTER 3. Comparative Layouts Revisited: Design Space, Guidelines, and Future Directions
        3.1 Introduction
        3.2 Literature Review
            3.2.1 Method
        3.3 Comparative Layouts in The Wild
            3.3.1 Classifying Comparison Tasks in User Studies
            3.3.2 Same Layout Is Called Differently
            3.3.3 Lucid Classification of Comparative Layouts
            3.3.4 Advantages and Concerns of Using Each Layout
            3.3.5 Trade-offs between Comparative Layouts
            3.3.6 Approaches to Overcome the Concerns
            3.3.7 Comparative Layout Explorer
        3.4 Discussion
            3.4.1 Guidelines for Comparative Layouts
            3.4.2 Promising Directions for Future Research
        3.5 Summary
    CHAPTER 4. Understanding How InfoVis Novices Compare Visual Encoding Recommendation
        4.1 Motivation
        4.2 Interface
            4.2.1 Visualization Goals
            4.2.2 Recommendations
            4.2.3 Representation Methods for Recommendations
            4.2.4 Interface
            4.2.5 Pilot Study
        4.3 User Study
            4.3.1 Participants
            4.3.2 Interface
            4.3.3 Tasks and Datasets
            4.3.4 Procedure
        4.4 Findings
            4.4.1 Poor Design Decisions
            4.4.2 Role of Preview, Animated Transition, and Text
            4.4.3 Challenges for Understanding Recommendations
            4.4.4 Learning by Doing
            4.4.5 Effects of Recommendation Order
            4.4.6 Personal Criteria for Selecting Recommendations
        4.5 Discussion
            4.5.1 Design Implications
            4.5.2 Limitations and Future Work
        4.6 Summary
    CHAPTER 5. Designing XCluSim: a Visual Analytics System for Comparing Multiple Clustering Results
        5.1 Motivation
        5.2 Task Analysis and Design Goals
        5.3 XCluSim
            5.3.1 Color Encoding of Clusters Using Tree Colors
            5.3.2 Overview of All Clustering Results
            5.3.3 Visualization for Comparing Selected Clustering Results
            5.3.4 Visualization for Individual Clustering Results
            5.3.5 Implementation
        5.4 Case Study
            5.4.1 Elucidating the Role of Ferroxidase in Cryptococcus neoformans var. grubii H99 (Case Study 1)
            5.4.2 Finding a Clustering Result that Clearly Represents Biological Relations (Case Study 2)
        5.5 Discussion
            5.5.1 Limitations and Future Work
        5.6 Summary
    CHAPTER 6. Future Research Agenda
        6.0.1 Recommendation for Visual Comparison
        6.0.2 Understanding the Perception of Subtle Difference
        6.0.3 Distinguishing InfoVis Novices from Experts
    CHAPTER 7. Conclusion
    Abstract (Korean)
    Acknowledgments (Korean)
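XCluSim compares clustering results visually; a standard quantitative counterpart to such comparison is the Rand index, the fraction of item pairs on which two clusterings agree (both co-clustered or both separated). A small self-contained illustration, not part of the system itself:

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of item pairs on which two clusterings agree:
    the pair is in the same cluster in both, or split in both."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum(
        (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
        for i, j in pairs
    )
    return agree / len(pairs)

# Identical partitions agree on every pair, even with relabeled cluster ids.
score = rand_index([0, 0, 1, 1], [1, 1, 0, 0])  # -> 1.0
```

Pairwise agreement scores like this one can rank many clustering results before inspecting them visually, which complements the side-by-side evaluation the case studies describe.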

    Empathy, connectivity, authenticity, and trust: A rhetorical framework for creating and evaluating interaction design

    Relationships are synergistic. Relational theories describe how we create and sustain relationships, taking into consideration our own experiences and social location and including broad cultural signifiers. Part of our development as people is to learn about power: our own power, and others' power. This thesis offers the combined addition of Relational-Cultural Theory and the Connectivity Model to the spectrum of interaction design. Since interaction design is about designing mediating tools for people and their subsequent behaviors, particular attention must be paid to establishing and maintaining a relationship between designer and audience. Relational-Cultural Theory pushes against typical patriarchal structures and values in the United States. These typical power-over values and structures include men over women, whites over blacks, logic over emotion, provider over nurturer, and so on. Relational-Cultural Theory seeks a flatness of power. It creates a sense of shared power, or power with others. This idea of shared power can lead to collaborative creation in interaction design to produce useful and good designs. Empathy, mutuality, and authenticity are essential in recognizing our own limits and strengths in connection with others. Building trust requires a mix of all three of these tenets, as well as evolution through conflict. Interaction designers can move toward creating an inclusive theory for this discipline by becoming vulnerable and sharing power with the people with whom they design interactions. Therefore, the rhetorical framework of empathy, connectivity, authenticity, and trust (e-CAT) is presented as a means of creating and evaluating interaction design.
    • โ€ฆ
    corecore