    Visual exploration and retrieval of XML document collections with the generic system X2

    This article reports on the XML retrieval system X2, which has been developed at the University of Munich over the last five years. In a typical session with X2, the user first browses a structural summary of the XML database in order to select interesting elements and keywords occurring in documents. Using this intermediate result, queries combining structure and textual references are composed semi-automatically. After query evaluation, the full set of answers is presented in a visual and structured way. X2 largely exploits the structure found in documents, queries and answers to enable new interactive visualization and exploration techniques that support mixed IR and database-oriented querying, thus bridging the gap between these three views on the data to be retrieved. Another salient characteristic of X2 that distinguishes it from other visual query systems for XML is that it supports various levels of detail in the presentation of answers, as well as techniques for dynamically reordering and grouping retrieved elements once the complete answer set has been computed.
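    X2 itself is not shown in code here, but the kind of query that mixes a structural reference with a textual one can be sketched with a generic XPath lookup over an XML collection. The element names, the keyword and the use of lxml below are illustrative assumptions, not part of X2.

```python
# Hypothetical sketch: combine a structural path with a keyword filter,
# roughly in the spirit of the structure + text queries described for X2.
# The element names (article, section, para) and the keyword are invented.
from lxml import etree

xml = b"""
<collection>
  <article><section><para>XML retrieval with structural summaries</para></section></article>
  <article><section><para>Relational query optimisation</para></section></article>
</collection>
"""

root = etree.fromstring(xml)

# Structural reference: articles containing a section/para ...
# Textual reference: ... whose text mentions the keyword "retrieval".
hits = root.xpath('//article[.//para[contains(text(), "retrieval")]]')

for article in hits:
    # Print a compact, structured view of each answer element.
    print(etree.tostring(article, pretty_print=True).decode())
```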

    Object-oriented program animation using JAN

    The concept and design of the program animation system JAN are described. JAN visualizes the execution of a Java program by dynamically unfolding an object diagram and an interaction diagram. Several features distinguish JAN from existing program visualization systems and visual debuggers. Annotations in the program code can be used to control the animation by selecting the relevant events and customizing the visual appearance. In addition, the user can interactively steer the animation in various ways. JAN is an integrated visualization system comprising an elaborate graphical user interface, a preprocessor for annotated Java source code, and a visualization engine that runs in a separate Java virtual machine. The design of the system is described in detail.

    Defining a critical data literacy for recommender systems: A media-grounded approach

    The digital processing of massive data is becoming a central component of our technological infrastructures. While being able to use these tools efficiently is an issue that cannot be ignored, it appears crucial to provide citizens with the means to control their technical environment. Recommender systems and personalization technologies are currently being blamed for destabilizing users’ informational ecosystems and for a growing polarization of opinions. However, a critical review of the current literature on the subject indicates that these recommender systems may also benefit the user in specific circumstances. Building on current critical data literacy approaches, key concepts from the philosophy of technology, and a media literacy perspective, this paper proposes a framework defining the competences needed to help users assess these technologies and critically include them in their digital ecosystem.

    WG1N5315 - Response to Call for AIC evaluation methodologies and compression technologies for medical images: LAR Codec

    This document presents the LAR image codec as the IETR response to the Call for AIC evaluation methodologies and compression technologies for medical images. The philosophy behind our coder is not to outperform JPEG2000 in compression; our goal is to propose an open-source, royalty-free alternative image coder with integrated services. While keeping compression performance in the same range as JPEG2000 but with lower complexity, our coder also provides services such as scalability, cryptography, data hiding, lossy-to-lossless compression, region of interest, and free region representation and coding.

    Automatically Score Tissue Images Like a Pathologist by Transfer Learning

    Cancer is the second leading cause of death in the world. Diagnosing cancer early can save many lives. Pathologists have to inspect tissue microarray (TMA) images manually to identify tumors, which can be time-consuming, inconsistent and subjective. Existing algorithms that automatically detect tumors have either not achieved the accuracy level of a pathologist or require substantial human involvement. A major challenge is that TMA images with different shapes, sizes, and locations can have the same score. Learning staining patterns in TMA images requires a huge number of images, which are severely limited due to privacy concerns and regulations in medical organizations. TMA images from different cancer types may share characteristics that provide valuable information, but using them directly harms accuracy. By selective transfer learning from multiple small auxiliary sets, the proposed algorithm is able to extract knowledge from tissue images showing a "similar" scoring pattern but coming from different cancer types. Remarkably, transfer learning has made it possible for the algorithm to break the critical accuracy barrier: the proposed algorithm reports an accuracy of 75.9% on breast cancer TMA images from the Stanford Tissue Microarray Database, reaching the 75% accuracy level of pathologists. This will allow pathologists to confidently use automatic algorithms to assist them in recognizing tumors consistently, with higher accuracy, in real time.
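    The abstract gives no implementation details, so the following is only a rough sketch of the general idea of selecting auxiliary sets whose data resemble the target set before training on the union. The synthetic data, the distance-based selection rule and the use of scikit-learn are assumptions, not the authors' algorithm.

```python
# Minimal sketch of selective transfer: keep only auxiliary sets whose
# feature distribution is close to the target set, then train on the union.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_set(shift, n=200, d=16):
    # Synthetic stand-in for image features extracted from one cancer type.
    X = rng.normal(loc=shift, size=(n, d))
    y = (X[:, 0] + 0.5 * X[:, 1] > 2 * shift).astype(int)
    return X, y

target_X, target_y = make_set(shift=0.0, n=80)            # small target set
aux_sets = [make_set(shift=s) for s in (0.1, 0.2, 3.0)]   # "other cancer types"

# Keep auxiliary sets whose mean feature vector is close to the target's.
selected = [(X, y) for X, y in aux_sets
            if np.linalg.norm(X.mean(0) - target_X.mean(0)) < 1.0]

train_X = np.vstack([target_X] + [X for X, _ in selected])
train_y = np.concatenate([target_y] + [y for _, y in selected])

clf = LogisticRegression(max_iter=1000).fit(train_X, train_y)
test_X, test_y = make_set(shift=0.0, n=200)
print("accuracy:", accuracy_score(test_y, clf.predict(test_X)))
```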

    Managing Temporal Dynamics of Filter Bubbles

    Filter bubbles have attracted much attention in recent years because of their impact on society. While it is commonly agreed that filter bubbles should be managed, the question is still how. We draw a picture of filter bubbles as dynamic, slowly changing constructs that are subject to temporal dynamics and are constantly influenced by both machine and human. Anchored in a research setting with a major public broadcaster, we follow a design science approach to designing the temporal dynamics of filter bubbles and users' influence over time. We qualitatively evaluate our approach with a smartphone app for personalized radio and find that the adjustability of filter bubbles leads to better co-creation of information flows between information broadcaster and listener.
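    The paper's app is not described in code; one common way to make a filter bubble adjustable is to let the listener blend personalized relevance against exploratory novelty when ranking items. The weighting scheme and field names below are assumptions for illustration only, not the paper's design.

```python
# Illustrative sketch only: rank radio items with a user-adjustable slider
# that trades personal relevance against novelty over time.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    relevance: float   # fit to the listener's profile, in [0, 1]
    novelty: float     # distance from what the listener usually hears, in [0, 1]

def rank(items, openness):
    """openness in [0, 1]: 0 keeps the bubble tight, 1 favours exploration."""
    return sorted(items,
                  key=lambda it: (1 - openness) * it.relevance + openness * it.novelty,
                  reverse=True)

playlist = [Item("familiar pop", 0.9, 0.1),
            Item("local news feature", 0.5, 0.6),
            Item("world music discovery", 0.2, 0.9)]

for it in rank(playlist, openness=0.7):
    print(it.title)
```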

    Algorythmics. Technologically and artistically enhanced computer science education

    A major responsibility of educational systems in the 21st century is to prepare future generations for the challenges involved with the increasing computerization of our everyday lives and to meet the demands of one of the fastest-growing job markets: computing. The goal of our beloved AlgoRythmics project is to promote computing education for all by taking into account the key elements of the most relevant computational thinking definitions. For this purpose, we have created an engaging algorithm visualization environment built around a collection of interactive dynamic visualizations illustrating basic computer algorithms. Making computing education attractive for different categories of learners is a challenging initiative. A possible approach is contextualization, and the AlgoRythmics learning environment has been designed along these lines. Since music and dance are relatively close to most people, this environment visualizes searching and sorting algorithms through professional dance choreographies (folk dance, flamenco, ballet). The “dance floor” we have created is an interactive and intuitive user interface which guides learners from dance to code. From the perspective of the teaching-learning process, the most important features of the environment are its unified, artistically enhanced, human-movement-effect-enriched, multisensory, and interactive character. What is this book about? It is about the AlgoRythmics universe. Of course, we have not dreamt up a complex teaching-learning tool and the attached didactical methods overnight. The AlgoRythmics project has its own particular history. Through this book, we invite the reader to accompany us as we virtually relive the AlgoRythmics adventure.
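    As a plain-code counterpart to one of the choreographed algorithms, the sketch below runs a bubble sort while emitting each comparison and swap, the kind of step stream an algorithm visualization (or a choreography) could act out. The event format is an assumption, not part of AlgoRythmics.

```python
# Bubble sort that reports each comparison and swap so a visualizer could
# animate the steps. The event tuples are an invented, illustrative format.
def bubble_sort(values, on_event=print):
    a = list(values)
    for end in range(len(a) - 1, 0, -1):
        for i in range(end):
            on_event(("compare", i, i + 1, a[i], a[i + 1]))
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                on_event(("swap", i, i + 1))
    return a

print(bubble_sort([5, 1, 4, 2, 3]))
```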

    Asking Clarifying Questions: To benefit or to disturb users in Web search?

    Modern information-seeking systems are becoming more interactive, mainly through asking Clarifying Questions (CQs) to refine users’ information needs. System-generated CQs may be of different qualities, but the impact of asking multiple CQs of different qualities in a search session remains underexplored. Given the multi-turn nature of conversational information-seeking sessions, it is critical to understand and measure the impact of CQs of different qualities when they are posed in various orders. In this paper, we conduct a user study on CQ quality trajectories, i.e., asking CQs of different qualities in chronological order. We aim to investigate to what extent the trajectory of CQs of different qualities affects user search behavior and satisfaction, at both the query level and the session level. Our user study is conducted with 89 participants as search engine users, who are asked to complete a set of Web search tasks. We find that the trajectory of CQs does affect the way users interact with Search Engine Result Pages (SERPs): a preceding high-quality CQ deepens users' interaction with SERPs, while a preceding low-quality CQ prevents such interaction. Our study also demonstrates that asking follow-up high-quality CQs mitigates the drop in search performance and user satisfaction caused by earlier low-quality CQs. In addition, showing only high-quality CQs while hiding the others yields better gains with less effort. That is, always showing all CQs may be risky, and low-quality CQs do disturb users. Based on observations from our user study, we further propose a transformer-based model to predict which CQs to ask, in order to avoid disturbing users. In short, our study provides insights into the effects of the trajectory of asking CQs, and our results will be helpful in designing more effective and enjoyable search clarification systems. This study is supported under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contributions from Singapore Telecommunications Limited (Singtel), through the Singtel Cognitive and Artificial Intelligence Lab for Enterprises (SCALE@NTU). This study is also supported by the NWO Smart Culture - Big Data/Digital Humanities (314-99-301), the NWO Innovational Research Incentives Scheme Vidi (016.Vidi.189.039), and the H2020-EU.3.4. - SOCIETAL CHALLENGES - Smart, Green, and Integrated Transport (814961).
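    The paper's model is not specified here beyond being transformer-based; one plausible framing of "which CQs to ask" is sequence-pair classification over (query, candidate CQ) pairs. The sketch below uses a generic pretrained encoder from Hugging Face transformers; the checkpoint, labels and threshold are assumptions, and a real system would fine-tune the classifier on labelled CQ-quality data.

```python
# Illustrative sketch: score (query, clarifying question) pairs with a
# pretrained transformer and only show CQs that pass a quality threshold.
# The checkpoint, label meaning and threshold are assumptions, not the
# paper's model; without fine-tuning the scores are not meaningful.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased"  # placeholder encoder
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
model.eval()

query = "jaguar speed"
candidate_cqs = ["Do you mean the animal or the car?",
                 "Are you asking about jaguars?"]

with torch.no_grad():
    inputs = tokenizer([query] * len(candidate_cqs), candidate_cqs,
                       padding=True, truncation=True, return_tensors="pt")
    probs = torch.softmax(model(**inputs).logits, dim=-1)[:, 1]

for cq, p in zip(candidate_cqs, probs.tolist()):
    action = "ask" if p > 0.5 else "hide"
    print(f"{action}: {cq} (score {p:.2f})")
```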