318 research outputs found

    Improving User Involvement Through Live Collaborative Creation

    Creating an artifact - such as writing a book, developing software, or performing a piece of music - is often limited to those with domain-specific experience or training. As a consequence, effectively involving non-expert end users in such creative processes is challenging. This work explores how computational systems can facilitate collaboration, communication, and participation when involving users in the process of creating artifacts, while mitigating the challenges inherent to such processes. In particular, the interactive systems presented in this work support live collaborative creation, in which artifact users collaboratively participate in the artifact creation process with creators in real time. In the systems that I have created, I explored liveness, the extent to which the process of creating artifacts and the state of the artifacts are immediately and continuously perceptible, for applications such as programming, writing, music performance, and UI design. Liveness helps preserve natural expressivity, supports real-time communication, and facilitates participation in the creative process. Live collaboration is beneficial for users and creators alike: making the process of creation visible encourages users to engage in the process and better understand the final artifact. Additionally, creators can receive immediate feedback in a continuous, closed loop with users. Through these interactive systems, non-expert participants help create such artifacts as GUI prototypes, software, and musical performances. This dissertation explores three topics: (1) the challenges inherent to collaborative creation in live settings, and computational tools that address them; (2) methods for reducing the barriers of entry to live collaboration; and (3) approaches to preserving liveness in the creative process, affording creators more expressivity in making artifacts and affording users access to information traditionally only available in real-time processes. In this work, I showed that enabling collaborative, expressive, and live interactions in computational systems allows the broader population to take part in various creative practices.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/145810/1/snaglee_1.pd
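    To make the notion of liveness concrete, below is a minimal sketch (my own illustration, not code from the dissertation) of a shared artifact that pushes every edit to all observers the moment it happens, so the creation process stays continuously perceptible; the LiveDocument class and its method names are hypothetical.

```python
# A minimal sketch of "liveness": every edit to a shared artifact is
# immediately pushed to all observers, so both the creation process and
# the artifact's state stay continuously perceptible. (Hypothetical
# illustration, not code from the dissertation.)

class LiveDocument:
    """Shared artifact whose state changes are broadcast as they happen."""

    def __init__(self):
        self.text = ""
        self._observers = []  # callbacks invoked on every edit

    def subscribe(self, callback):
        self._observers.append(callback)

    def insert(self, position, snippet):
        # Apply the edit, then notify everyone in the same step so the
        # audience never sees a stale artifact.
        self.text = self.text[:position] + snippet + self.text[position:]
        for notify in self._observers:
            notify(self.text)


if __name__ == "__main__":
    doc = LiveDocument()
    doc.subscribe(lambda state: print(f"viewer sees: {state!r}"))
    doc.insert(0, "Hello")
    doc.insert(5, ", world")
```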

    Annual Report 2019-2020

    LETTER FROM THE DEAN As I write this letter wrapping up the 2019-20 academic year, we remain in a global pandemic that has profoundly altered our lives. While many things have changed, some stayed the same: our CDM community worked hard, showed up for one another, and continued to advance their respective fields. A year that began like many others changed swiftly on March 11th when the University announced that spring classes would run remotely. By March 28th, the first day of spring quarter, we had moved 500 CDM courses online thanks to the diligent work of our faculty, staff, and instructional designers. But CDM’s work went beyond the (virtual) classroom. We mobilized our makerspaces to assist in the production of personal protective equipment for Illinois healthcare workers, participated in COVID-19 research initiatives, and were inspired by the innovative ways our student groups learned to network. You can read more about our response to the COVID-19 pandemic on pgs. 17-19. Throughout the year, our students were nationally recognized for their skills and creative work while our faculty were published dozens of times and screened their films at prestigious film festivals. We added a new undergraduate Industrial Design program, opened a second makerspace on the Lincoln Park Campus, and created new opportunities for Chicago youth. I am pleased to share with you the College of Computing and Digital Media’s (CDM) 2019-20 annual report, highlighting our collective accomplishments.
    David Miller, Dean
    https://via.library.depaul.edu/cdmannual/1003/thumbnail.jp

    On-Demand Collaboration in Programming

    In programming, on-demand assistance occurs when developers seek support for their tasks as needed. Traditionally, this collaboration happens within teams and organizations in which people are familiar with the context of requests and tasks. More recently, this type of collaboration has become ubiquitous outside of teams and organizations, due to the success of paid online crowdsourcing marketplaces (e.g., Upwork) and free online question-answering websites (e.g., Stack Overflow). Thousands of requests are posted on these platforms on a daily basis, and many of them are not addressed in a timely manner for a variety of reasons, including requests that often lack sufficient context and access to relevant artifacts. In consequence, on-demand collaboration often results in suboptimal productivity and unsatisfactory user experiences. This dissertation includes three main parts. First, I explored the challenges developers face when requesting help from or providing assistance to others on demand. I found seven common types of requests (e.g., seeking code examples) that developers make in various projects when an on-demand agent is available. Based on these findings and a study of existing support systems, I suggest eight key system features to enable more effective on-demand remote assistance for developers. Second, driven by these findings, I designed and developed two systems: 1) CodeOn, a system that enables more effective task hand-offs (e.g., rich context capturing) between end-user developers and remote helpers than existing synchronous support systems, by allowing asynchronous responses to on-demand requests; and 2) CoCapture, a system that enables interface designers to easily create and then accurately describe UI behavior mockups, including changes they want to propose or questions they want to ask about an aspect of the existing UI. Third, beyond software development assistance, I also studied intelligent assistance for embedded system development (e.g., Arduino) and revealed six challenges (e.g., communication setup remains tedious) that developers face during on-demand collaboration. Based on this study, I propose four design implications to inform future support systems for embedded system development. This thesis envisions a future in which developers in all kinds of domains can effortlessly make context-rich, on-demand requests at any stage of their development processes, and qualified agents (machine or human) can quickly be notified and orchestrate their efforts to promptly respond to the requests.
    PhD, Information, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/166144/1/yanchenm_1.pd
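    As an illustration of the rich context capturing that such hand-offs depend on, here is a hedged sketch in the spirit of CodeOn (the HelpRequest structure and its field names are my own assumptions, not the system's API): the requester's snippet, error output, and question are bundled into one payload that a remote helper can answer asynchronously.

```python
# A hypothetical sketch of a context-rich, on-demand help request: the
# code, the error, and a short question travel together, so a remote
# helper does not need a live screen-share to understand the task.
# (Field names are illustrative, not CodeOn's actual API.)

import json
from dataclasses import dataclass, asdict, field

@dataclass
class HelpRequest:
    question: str
    code_snippet: str
    error_output: str = ""
    tags: list = field(default_factory=list)

    def to_payload(self) -> str:
        # Serialized payload a queue or web service could hand to helpers.
        return json.dumps(asdict(self), indent=2)

if __name__ == "__main__":
    req = HelpRequest(
        question="Why does this raise KeyError?",
        code_snippet="config = {}\nprint(config['port'])",
        error_output="KeyError: 'port'",
        tags=["python", "dict"],
    )
    print(req.to_payload())
```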

    Scalable Real-Time Rendering for Extremely Complex 3D Environments Using Multiple GPUs

    In 3D visualization, real-time rendering of high-quality meshes in complex 3D environments is still one of the major challenges in computer graphics. New data acquisition techniques like 3D modeling and scanning have drastically increased the requirement for more complex models and the demand for higher display resolutions in recent years. Most of the existing acceleration techniques using a single GPU for rendering suffer from a limited GPU memory budget, time-consuming sequential execution, and finite display resolution. Recently, people have started building commodity workstations with multiple GPUs and multiple displays. As a result, more GPU memory is available across a distributed cluster of GPUs, more computational power is provided through the combination of multiple GPUs, and a higher display resolution can be achieved by connecting each GPU to a display monitor (resulting in a tiled large-display configuration). However, using a multi-GPU workstation may not always give the desired rendering performance, due to imbalanced rendering workloads among GPUs and overheads caused by inter-GPU communication. In this dissertation, I contribute a multi-GPU, multi-display parallel rendering approach for complex 3D environments. The approach has the capability to support high-performance, high-quality rendering of static and dynamic 3D environments. A novel parallel load-balancing algorithm is developed based on a screen-partitioning strategy to dynamically balance the number of vertices and triangles rendered by each GPU. The overhead of inter-GPU communication is minimized by transferring only a small amount of image pixels rather than chunks of 3D primitives, using a novel frame-exchanging algorithm. State-of-the-art parallel mesh simplification and GPU out-of-core techniques are integrated into the multi-GPU, multi-display system to accelerate the rendering process.
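    A simplified sketch of the screen-partitioning idea follows (my own illustration, not the dissertation's algorithm): given per-column primitive counts, a prefix sum picks strip boundaries so each GPU renders roughly the same share of the workload.

```python
# Simplified screen-partition load balancing (illustrative only): split
# the screen into vertical strips so each GPU renders roughly the same
# number of primitives, using a prefix sum over per-column counts.

def balance_strips(primitives_per_column, num_gpus):
    """Return the exclusive end column of each GPU's strip."""
    total = sum(primitives_per_column)
    prefix, running = [], 0
    for count in primitives_per_column:
        running += count
        prefix.append(running)

    boundaries, col = [], 0
    for gpu in range(1, num_gpus):
        target = total * gpu / num_gpus
        # Advance until this strip holds its share of the workload.
        while col < len(prefix) - 1 and prefix[col] < target:
            col += 1
        boundaries.append(col + 1)
    boundaries.append(len(primitives_per_column))
    return boundaries

if __name__ == "__main__":
    # Primitive counts for 8 screen columns; the right half is far denser,
    # so the left GPU gets more columns than the right one.
    counts = [10, 10, 20, 20, 100, 120, 150, 70]
    print(balance_strips(counts, num_gpus=2))  # -> [6, 8]
```

    In a real system, the counts would be re-estimated every frame (or every few frames) so that the partition tracks camera motion and scene dynamics; that per-frame feedback loop is what makes the balancing dynamic.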

    The accessibility and scalability of gene family analysis

    Gene family detection allows us to gain a better understanding of how different genomes are related. At UNH, we have a pipeline that computes these families using a variety of methods. However, the pipeline is inefficient and performs poorly on large numbers of genomes. The pipeline comprises many Perl scripts, which are complex to use and require specific organization of the data at each step. This means that all users of the pipeline must undergo training to understand each step of the pipeline and the intricacies of each script. The goal of my thesis is two-fold. First, I optimized the scripts used in determining the gene families. This allows users to run gene family analysis on any number of genomes without using excessive amounts of memory. My second step was to create a web interface for the pipeline. Each user is given an account that they can use to create pipeline projects. Within a project, users can simply upload their data and create the jobs they wish to run, and the web interface takes care of all the details: the server structures their data in the correct form, and the pipeline scripts are run automatically. The results are produced in an easy-to-understand format and can be downloaded by the users. Building on this interface, we created a machine image containing all the tools needed to run the pipeline and made it publicly available on the Amazon Elastic Compute Cloud.
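    The sketch below illustrates the kind of orchestration such a web interface automates (the step names and commands are placeholders, not the actual UNH pipeline scripts): each step writes into its own directory, which the next step reads, so users no longer have to organize the data by hand.

```python
# A hedged sketch of automated pipeline orchestration: run each step in
# order and lay out its output directory for the next step. Step names
# and commands are placeholders, not the actual UNH pipeline scripts.

import subprocess
import sys
from pathlib import Path

PIPELINE_STEPS = [
    ("cluster", [sys.executable, "-c", "print('clustering sequences')"]),
    ("align", [sys.executable, "-c", "print('aligning clusters')"]),
    ("families", [sys.executable, "-c", "print('calling gene families')"]),
]

def run_pipeline(project_dir: str) -> None:
    root = Path(project_dir)
    for name, command in PIPELINE_STEPS:
        step_dir = root / name
        step_dir.mkdir(parents=True, exist_ok=True)
        # Run the step and keep its output where the next step (or the
        # user's download) can find it.
        result = subprocess.run(command, capture_output=True, text=True, check=True)
        (step_dir / "log.txt").write_text(result.stdout)
        print(f"finished step '{name}'")

if __name__ == "__main__":
    run_pipeline("example_project")
```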

    adPerf: Characterizing the Performance of Third-party Ads

    Monetizing websites and web apps through online advertising is widespread in the web ecosystem. The online advertising ecosystem nowadays forces publishers to integrate ads from third-party domains. On the one hand, this raises several privacy and security concerns that have been actively studied in recent years. On the other hand, given the ability of today's browsers to load dynamic web pages with complex animations and JavaScript, online advertising has also transformed and can have a significant impact on webpage performance. The performance cost of online ads is critical, since it eventually impacts user satisfaction as well as users' Internet bills and device energy consumption. In this paper, we present an in-depth, first-of-its-kind performance evaluation of web ads. Unlike prior efforts that rely primarily on adblockers, we perform a fine-grained analysis of the web browser's page loading process to demystify the performance cost of web ads. We aim to break down the cost by each component of an ad, so that the publisher, ad syndicate, and advertiser can improve the ad's performance with detailed guidance. For this purpose, we develop an infrastructure, adPerf, for the Chrome browser that classifies page loading workloads into ad-related and main-content at the granularity of browser activities (such as JavaScript and Layout). Our evaluations show that online advertising entails more than 15% of the browser's page loading workload, and approximately 88% of that is spent on JavaScript. We also track the sources and delivery chains of web ads and analyze performance considering the origin of the ad contents. We observe that two well-known third-party ad domains contribute 35% of the ads' performance cost and, surprisingly, top news websites implicitly include unknown third-party ads which in some cases account for more than 37% of the ads' performance cost.
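    The core classification idea can be sketched as follows (an illustration, not adPerf's implementation): attribute each browser activity to ad or main-content by matching its source URL against a filter list of known ad-serving domains; the two listed domains are merely a stand-in for a real list.

```python
# Illustrative sketch of attributing browser activities to "ad" vs
# "main-content" by the domain of their source URL. The tiny AD_DOMAINS
# set stands in for a real filter list; this is not adPerf's code.

from urllib.parse import urlparse

AD_DOMAINS = {"doubleclick.net", "adnxs.com"}

def classify(activity_url: str) -> str:
    host = urlparse(activity_url).hostname or ""
    # Match the listed domain itself or any of its subdomains.
    if any(host == d or host.endswith("." + d) for d in AD_DOMAINS):
        return "ad"
    return "main-content"

if __name__ == "__main__":
    trace = [
        ("JavaScript", "https://securepubads.doubleclick.net/tag.js"),
        ("Layout", "https://www.example-news.com/article.html"),
    ]
    for activity, url in trace:
        print(activity, url, "->", classify(url))
```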

    WebAL Comes of Age: A review of the first 21 years of Artificial Life on the Web

    We present a survey of the first 21 years of web-based artificial life (WebAL) research and applications, broadly construed to include the many different ways in which artificial life and web technologies might intersect. Our survey covers the period from 1994—when the first WebAL work appeared—up to the present day, together with a brief discussion of relevant precursors. We examine recent projects, from 2010–2015, in greater detail in order to highlight the current state of the art. We follow the survey with a discussion of common themes and methodologies that can be observed in recent work and identify a number of likely directions for future work in this exciting area

    Big Data Analytics and Application Deployment on Cloud Infrastructure

    This dissertation describes a project that began in October 2016. It was born from the collaboration between Mr. Alessandro Bandini and me, and has been developed under the supervision of Professor Gianluigi Zavattaro. The main objective was to study, and in particular to experiment with, cloud computing in general and its potential for data processing. Cloud computing is a utility-oriented and Internet-centric way of delivering IT services on demand. The first chapter is a theoretical introduction to cloud computing, analyzing the main aspects, the keywords, and the technologies behind clouds, as well as the reasons for the success of this technology and its problems. After the introduction, I will briefly describe the three main cloud platforms on the market. During this project we developed a simple social network. Consequently, in the third chapter I will analyze the social network's development, with the initial solution realized through Amazon Web Services and the steps we took to obtain the final version using Google Cloud Platform, with its characteristics. To conclude, the last section is dedicated to data processing and contains an initial theoretical part that describes MapReduce and Hadoop, followed by a description of our analysis. We used Google App Engine to execute these computations on a large dataset. I will explain the basic idea, the code, and the problems encountered.
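    As a reminder of the MapReduce model the last section describes, here is a minimal word-count sketch in pure Python (standing in for an actual Hadoop or App Engine job, which would distribute these phases across machines): the map phase emits (word, 1) pairs and the reduce phase sums them per key.

```python
# Minimal word-count illustration of the MapReduce model: map emits
# (word, 1) pairs, then the shuffle/reduce phase groups by key and sums.
# Pure Python stand-in for a distributed Hadoop or App Engine job.

from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    # Shuffle + reduce: group pairs by key and sum the counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

if __name__ == "__main__":
    docs = ["the cloud stores data", "the cloud processes data"]
    print(reduce_phase(map_phase(docs)))
    # {'the': 2, 'cloud': 2, 'stores': 1, 'data': 2, 'processes': 1}
```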