
    System for identifying and reporting changes to course websites

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 63-64). CourseDiff is a prototype system that periodically samples course websites and notifies users via email when it identifies changes to those sites. The system was developed after conducting a study of 120 web pages from 50 MIT course websites sampled over two months during the spring semester of 2009. The study found that only 18% of changes to the HTML content of course web pages are actually important to the content of the page. A closer examination of the corpus identified two major sources of trivial changes. The first is automatically generated content that changes on every visit to the page. The second is formatting and whitespace changes that do not affect the page's textual content. Together, these two sources produce over 99% of the trivial changes. CourseDiff implements an algorithm to filter these trivial changes out of the web pages it samples, and a change-reporting format for the changes identified as important. A small user test on part of the CourseDiff interface indicated that the system could feasibly be used by students to track changes to course websites. By Igor Kopylov. M.Eng.
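The two sources of trivial changes identified above suggest a natural filtering strategy: strip volatile fragments (such as auto-generated comments) and collapse whitespace before diffing, so only textual changes register. The sketch below illustrates that idea in Python; it is a minimal illustration of the approach the abstract describes, not CourseDiff's actual implementation, and the function names are hypothetical.

```python
import difflib
import re

def normalize(html):
    """Collapse whitespace and strip volatile fragments so that
    formatting-only and auto-generated changes do not register as diffs."""
    # Drop HTML comments, which often carry timestamps or hit counters.
    text = re.sub(r"<!--.*?-->", "", html, flags=re.DOTALL)
    # Strip tags, keeping only textual content.
    text = re.sub(r"<[^>]+>", " ", text)
    # Collapse all runs of whitespace to a single space.
    return re.sub(r"\s+", " ", text).strip()

def important_changes(old_html, new_html):
    """Return diff lines that survive normalization; empty means trivial."""
    old_lines = normalize(old_html).split(". ")
    new_lines = normalize(new_html).split(". ")
    diff = difflib.unified_diff(old_lines, new_lines, lineterm="")
    return [d for d in diff
            if d.startswith(("+", "-")) and not d.startswith(("+++", "---"))]
```

With this normalization, a whitespace reflow or a regenerated timestamp comment produces no reportable change, while an edit to the page's text does.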

    Interactive Exploration of Temporal Event Sequences

    Life can often be described as a series of events. These events contain rich information that, when put together, can reveal history, expose facts, or lead to discoveries. Therefore, many leading organizations are increasingly collecting databases of event sequences: Electronic Medical Records (EMRs), transportation incident logs, student progress reports, web logs, sports logs, etc. Heavy investments were made in data collection and storage, but difficulties still arise when it comes to making use of the collected data. Analyzing millions of event sequences is a non-trivial task that is gaining more attention and requires better support due to its complex nature. Therefore, I aimed to use information visualization techniques to support exploratory data analysis---an approach to analyzing data to formulate hypotheses worth testing---for event sequences. By working with the domain experts who were analyzing event sequences, I identified two important scenarios that guided my dissertation. First, I explored how to provide an overview of multiple event sequences. Lengthy reports often have an executive summary to provide an overview of the report; unfortunately, there was no equivalent summary for event sequences. Therefore, I designed LifeFlow, a compact overview visualization that summarizes multiple event sequences, and interaction techniques that support users' exploration. Second, I examined how to support users in querying for event sequences when they are uncertain about what they are looking for. To support this task, I developed similarity measures (the M&M measure 1-2) and user interfaces (Similan 1-2) for querying event sequences based on similarity, allowing users to search for event sequences that are similar to the query. After that, I ran a controlled experiment comparing exact-match and similarity search interfaces, and learned the advantages and disadvantages of both.
These lessons learned inspired me to develop Flexible Temporal Search (FTS), which combines the benefits of both interfaces: FTS gives confident and countable results, and also ranks results by similarity. I continued to work with domain experts as partners, getting them involved in the iterative design and constantly using their feedback to guide my research directions. As the research progressed, several short-term user studies were conducted to evaluate particular features of the user interfaces, and both quantitative and qualitative results were reported. To address the limitations of short-term evaluations, I included several multi-dimensional in-depth long-term case studies with domain experts in various fields to evaluate deeper benefits, validate the generalizability of the ideas, and demonstrate the practicability of this research in non-laboratory environments. The experience from these long-term studies was combined into a set of design guidelines for temporal event sequence exploration. My contributions from this research are LifeFlow, a visualization that compactly displays summaries of multiple event sequences, along with interaction techniques for users' exploration; similarity measures (the M&M measure 1-2) and similarity search interfaces (Similan 1-2) for querying event sequences; Flexible Temporal Search (FTS), a hybrid query approach that combines the benefits of exact match and similarity search; and case study evaluations that result in a process model and a set of design guidelines for temporal event sequence exploration. Finally, this research has revealed new directions for exploring event sequences.
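Similarity search over event sequences needs a score that ranks records by closeness to the query. The sketch below uses a plain edit-distance similarity over event symbols; it is a generic stand-in to illustrate the idea, not the dissertation's M&M measure (which also accounts for event timing).

```python
def edit_distance(a, b):
    """Dynamic-programming Levenshtein distance over event symbols."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # delete x
                           cur[j - 1] + 1,            # insert y
                           prev[j - 1] + (x != y)))   # substitute x with y
        prev = cur
    return prev[-1]

def similarity(query, record):
    """Score in [0, 1]; 1.0 means an exact match."""
    longest = max(len(query), len(record)) or 1
    return 1 - edit_distance(query, record) / longest
```

Ranking a database by `similarity(query, record)` in descending order then gives the similarity-search behavior described above, while records with score 1.0 are the exact matches.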

    Interactive Visual Displays for Results Management in Complex Medical Workflows

    Clinicians manage medical orders to ensure that results are returned promptly to the correct physician and followed up on time. Delays in results management occur frequently, physically harm patients, and often cause malpractice litigation. Better tracking of medical orders, showing progress and indicating delays, could result in improved care, better safety, and reduced clinician effort. This dissertation presents novel displays of rich tables with an interaction technique called ARCs (Actions for Rapid Completion). Rich tables are generated by MStart (Multi-Step Task Analyzing, Reporting, and Tracking) from a workflow model that defines order processes. Rich tables help clinicians perceive each order's status, prioritize the critical ones, and act on results in a timely fashion. A second contribution is the design of an interactive visualization called MSProVis (Multi-Step Process Visualization), which is composed of several PCDs (Process Completion Diagrams) that show the number and duration of in-time, late, and not-completed orders. With MSProVis, managers perform retrospective analyses to make decisions by studying an overview of the order process, the durations of order steps, and the performance of individuals. I visited seven hospitals and clinics to define sample results-management workflows. Iterative design reviews with clinicians, designers, and researchers led to refinements of the rich tables, ARCs, and design guidelines. A controlled experiment with 18 participants under time pressure and distraction tested two features of rich tables: showing pending orders and prioritizing by lateness. These features produced a statistically significant reduction, from nine minutes to one, in the time to correctly identify late orders compared with traditional chronologically ordered lists. Another study demonstrated that ARCs speed performance by 25% compared to state-of-the-art systems.
A usability study with two clinicians and five novices showed that participants were able to understand MSProVis and efficiently perform representative tasks. Two subjective preference surveys suggested new design choices for the PCDs. This dissertation provides designers of results management systems with clear guidance about showing pending results and prioritizing by lateness, and tested strategies for performing retrospective analyses. It also offers detailed design guidelines for results management, tables, and integrated actions on tables that speed performance for common tasks.
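The "prioritizing by lateness" feature tested in the controlled experiment can be illustrated with a small sort: pending orders rise to the top, most overdue first, so late orders are visible at a glance rather than buried in a chronological list. The `Order` fields and `prioritize` function below are a hypothetical sketch, not MStart's actual data model.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Order:
    patient: str
    test: str
    due: datetime
    completed: bool

def prioritize(orders, now):
    """Pending orders first, most overdue at the top; completed orders
    follow, also in descending order of how far past due they were."""
    def key(order):
        lateness = (now - order.due).total_seconds()
        # False sorts before True, so pending orders lead the table.
        return (order.completed, -lateness)
    return sorted(orders, key=key)
```

A chronologically ordered list would instead interleave completed and pending orders, which is exactly the presentation the experiment found nine times slower for spotting late orders.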

    A Disease Tracking EHR for Ghana

    The goal of the project was to develop a disease-tracking electronic health record (EHR) system for Ghana in order to improve efficiency within medical facilities and to increase the quality of patient care. In Ghana, only a few medical facilities have implemented EHR systems, and even fewer track immunizations and diseases. Our proposed system, VermaMS, stores a history of patient visits for every patient diagnosed with malaria, tuberculosis, or meningitis. Users of the application are able to view patient data graphically as well as generate a report that contains information on each patient’s visit. VermaMS is intended to be used in conjunction with the current EHR systems, but it could potentially be developed into a full EHR system of its own in the future.

    Navigating the Kaleidoscope of Object(ive)s: A User-Experience Approach to Cultural-Historical Activity Theory

    Activity Theory, specifically third-generation activity theory, also known as Cultural-Historical Activity Theory or CHAT (Engeström, 2001, 2015; Leontiev, 1978, 1981; Vygotsky, 1978), has largely been used as a framework for studying different networks of activity encountered by subjects who utilize tools or mediating artifacts in order to divide their labor within particular communities. This theoretical and empirical project analyzes a transnational user’s experiences performing their identity on Instagram by answering the research question: How does a user with transnational literacy experiences perform their identity and manage communities through the mediation of particular technologies on Instagram? Using mixed methods drawing on four data streams—1) semi-structured interviews, 2) rhetorical analysis of a participant’s personal Instagram data (including images, captions, account biographies, and stories), 3) recordings of a participant using a think-aloud protocol, and 4) analytical memos of the participant’s Instagram activity—in this thesis project I aimed to accomplish three goals: first, to outline and historicize influential generations of Activity Theory; second, to present a new approach to Cultural-Historical Activity Theory called the “User-Experience CHAT Model”; and third, to apply the new model to a case study. The results of the study suggest that users on social media sites may communicate not only with particular communities but also with past, present, and future versions of themselves. As users engage in activities across time, they encounter a field of interpretation informed by contexts, which influence their present experiences as they produce an object. Thus, users’ identities are constantly in a state of transformation and becoming as their object(ive)s in social media activities transform across time.

    A Consumers Guide to Grants Management Systems 2016

    This report has been released by Grants Managers Network (GMN) and Technology Affinity Group (TAG), with research conducted by Idealware. The report compares 29 grants management systems across 174 requirements criteria, looks at what each system does, and weighs the strengths and weaknesses of each system available to grantmakers. It examines how the systems stack up against high-level categories and details the functionality of each system against specific criteria important to the grant-making community.

    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Based on the information provided by European projects and national initiatives related to multimedia search, as well as by domain experts who participated in the CHORUS think-tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective. The technical perspective includes an up-to-date view on content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives that measure the performance of multimedia search engines. From a socio-economic perspective, we inventory the impact and legal consequences of these technical advances and point out future directions of research.

    Implementing GitHub Actions Continuous Integration to Reduce Error Rates in Ecological Data Collection

    Accurate field data are essential to understanding ecological systems and forecasting their responses to global change. Yet, data collection errors are common, and data analysis often lags far enough behind its collection that many errors can no longer be corrected, nor can anomalous observations be revisited. Needed is a system in which data quality assurance and control (QA/QC), along with the production of basic data summaries, can be automated immediately following data collection. Here, we implement and test a system to satisfy these needs. For two annual tree mortality censuses and a dendrometer band survey at two forest research sites, we used GitHub Actions continuous integration (CI) to automate data QA/QC and run routine data wrangling scripts to produce cleaned datasets ready for analysis. This system automation had numerous benefits, including (1) the production of near real-time information on data collection status and errors requiring correction, resulting in final datasets free of detectable errors, (2) an apparent learning effect among field technicians, wherein original error rates in field data collection declined significantly following implementation of the system, and (3) an assurance of computational reproducibility—that is, robustness of the system to changes in code, data, and software. By implementing CI, researchers can ensure that datasets are free of any errors for which a test can be coded. The result is dramatically improved data quality, increased skill among field technicians, and reduced need for expert oversight. Furthermore, we view CI implementation as a first step towards a data collection and analysis pipeline that is also more responsive to rapidly changing ecological dynamics, making it better suited to study ecological systems in the current era of rapid environmental change.
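The key idea above is that any error "for which a test can be coded" can be caught automatically: a CI workflow runs a check script after each data push, and a nonzero exit code fails the run and flags the errors for correction. The sketch below shows what such a check script might look like for a tree mortality census; the column names, controlled vocabulary, and thresholds are illustrative assumptions, not the paper's actual checks.

```python
import sys

# Hypothetical controlled vocabulary from the field form.
VALID_STATUSES = {"alive", "dead", "missing"}

def check_rows(rows):
    """Return human-readable QA/QC errors for a census; empty list means clean."""
    errors = []
    seen = set()
    for i, row in enumerate(rows, 1):
        # A duplicate stem ID usually means a record was double-entered.
        if row["stem_id"] in seen:
            errors.append(f"row {i}: duplicate stem_id {row['stem_id']}")
        seen.add(row["stem_id"])
        # Status must come from the controlled vocabulary.
        if row["status"] not in VALID_STATUSES:
            errors.append(f"row {i}: unknown status {row['status']!r}")
        # A stem diameter of zero or less is physically impossible.
        if row["dbh_cm"] <= 0:
            errors.append(f"row {i}: non-positive dbh {row['dbh_cm']}")
    return errors

if __name__ == "__main__" and len(sys.argv) > 1:
    import csv
    with open(sys.argv[1], newline="") as f:
        rows = [dict(r, dbh_cm=float(r["dbh_cm"])) for r in csv.DictReader(f)]
    problems = check_rows(rows)
    print("\n".join(problems))
    sys.exit(1 if problems else 0)
```

In a GitHub Actions workflow, a step such as `python check_census.py census.csv` would fail the job whenever the script exits nonzero, surfacing the error list to technicians while the trees in question can still be revisited.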