397 research outputs found
Computational Strategies for Scalable Genomics Analysis.
The revolution in next-generation DNA sequencing technologies is leading to explosive data growth in genomics, posing a significant challenge to the computing infrastructure and software algorithms for genomics analysis. Various big data technologies have been explored to scale up/out current bioinformatics solutions to mine the big genomics data. In this review, we survey some of these exciting developments in the applications of parallel distributed computing and special hardware to genomics. We comment on the pros and cons of each strategy in the context of ease of development, robustness, scalability, and efficiency. Although this review is written for an audience from the genomics and bioinformatics fields, it may also be informative for a computer science audience with interests in genomics applications.
Skyline: Interactive In-Editor Computational Performance Profiling for Deep Neural Network Training
Training a state-of-the-art deep neural network (DNN) is a
computationally-expensive and time-consuming process, which incentivizes deep
learning developers to debug their DNNs for computational performance. However,
effectively performing this debugging requires intimate knowledge about the
underlying software and hardware systems---something that the typical deep
learning developer may not have. To help bridge this gap, we present Skyline: a
new interactive tool for DNN training that supports in-editor computational
performance profiling, visualization, and debugging. Skyline's key contribution
is that it leverages special computational properties of DNN training to
provide (i) interactive performance predictions and visualizations, and (ii)
directly manipulatable visualizations that, when dragged, mutate the batch size
in the code. As an in-editor tool, Skyline allows users to leverage these
diagnostic features to debug the performance of their DNNs during development.
An exploratory qualitative user study of Skyline produced promising results;
all the participants found Skyline to be useful and easy to use.
Comment: 14 pages, 5 figures. Appears in the proceedings of UIST'2
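Skyline's interactive performance predictions can be illustrated with a minimal sketch: profile an iteration at two batch sizes, fit a linear run-time model, and extrapolate to a batch size the user drags to. The linear model and all function names here are illustrative assumptions for this sketch, not Skyline's actual implementation.

```python
# Hypothetical sketch of batch-size-based performance prediction, in the
# spirit of Skyline's interactive predictions. A linear run-time model
# (time = a * batch_size + b) is assumed purely for illustration.

def fit_linear_runtime_model(samples):
    """Fit time = a * batch_size + b from two (batch_size, seconds) samples."""
    (b1, t1), (b2, t2) = samples
    a = (t2 - t1) / (b2 - b1)   # marginal cost per example
    b = t1 - a * b1             # fixed per-iteration overhead
    return a, b

def predict_iteration_time(batch_size, model):
    """Extrapolate iteration time for a batch size the user drags to."""
    a, b = model
    return a * batch_size + b

# Two profiled measurements (illustrative numbers, not real profiles):
model = fit_linear_runtime_model([(32, 0.050), (64, 0.090)])
print(predict_iteration_time(128, model))  # predicted seconds per iteration
```

A real profiler would measure many batch sizes and account for memory limits, but even this two-point fit shows how a prediction can update instantly as a visualization is dragged.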
Communicative Agents for Software Development
Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of agents, such as programmers, code reviewers, and test engineers, fostering
collaborative dialogue and facilitating a seamless workflow. The chat chain
acts as a facilitator, breaking down each stage into atomic subtasks. This
enables dual roles, allowing for proposing and validating solutions through
context-aware communication, leading to efficient resolution of specific
subtasks. The instrumental analysis of ChatDev highlights its remarkable
efficacy in software generation, enabling the completion of the entire software
development process in under seven minutes at a cost of less than one dollar.
It not only identifies and alleviates potential vulnerabilities but also
rectifies potential hallucinations while maintaining commendable efficiency and
cost-effectiveness. The potential of ChatDev unveils fresh possibilities for
integrating LLMs into the realm of software development.
Comment: 25 pages, 9 figures, 2 table
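The chat chain's core loop, in which each atomic subtask is resolved by a proposer and a validator exchanging messages, can be sketched as follows. The `run_subtask` function and the canned `propose`/`validate` stand-ins are hypothetical, written only to illustrate the dual-role pattern; ChatDev's actual agents, prompts, and APIs are not shown.

```python
# Hypothetical sketch of a "chat chain" subtask: two roles alternate,
# one proposing a solution and one validating it, until acceptance or a
# round limit. The propose/validate functions are canned stand-ins for
# LLM calls; they are not ChatDev's implementation.

def run_subtask(task, propose, validate, max_rounds=3):
    """Alternate proposing and validating until the validator accepts."""
    solution = None
    for _ in range(max_rounds):
        solution = propose(task, feedback=solution)  # context-aware proposal
        if validate(task, solution):                 # dual-role check
            return solution
    return solution  # best effort after the round limit

# Illustrative stand-ins: the validator rejects the first draft once.
def propose(task, feedback):
    return f"draft of {task}" if feedback is None else f"revised {feedback}"

def validate(task, solution):
    return solution.startswith("revised")

print(run_subtask("login module", propose, validate))
# revised draft of login module
```

In the paper's framing, one such exchange resolves a single atomic subtask; chaining many of them across the designing, coding, testing, and documenting stages yields the full workflow.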
Annual Report 2017-2018
LETTER FROM THE DEAN
I am pleased to share with you the College of Computing and Digital Media's (CDM) 2017-18 annual report, highlighting the many achievements across our community. It was a big year. We began offering five new programs (two bachelor's, two master's, and one PhD) across our three schools, in addition to several new certificate programs through our Institute for Professional Development. We built new, cutting-edge spaces to support these and other programs, most notably a 4,500-square-foot makerspace, a robotics and medical engineering lab, an augmented and virtual reality lab, and plans for a cyber-physical systems project lab.

Our faculty continued to pursue their research and creative agendas, offering collaborative opportunities with students and partners. CDM students and alumni were celebrated for their many achievements, everything from leading the winning teams at the U.S. Cyber Challenge and Campus 1871 to showcasing their games at juried festivals and winning national screenwriting competitions.

We encouraged greater research and teaching collaboration, both between our own schools and with units outside CDM. Design and Computing faculty are working together on an NSA grant for smart home devices that considers both software and interface/design, as well as a new grant-funded game lab. One Project Bluelight film team collaborated with The Theatre School and the School of Music, while CDM and College of Science and Health faculty joined forces to research the links between traumatic brain injury, domestic violence, and deep games.

It has been exciting and inspiring to witness the accomplishments of our innovative and dedicated community. We are proud to provide the space and resources for them to do their exceptional work.
David Miller
Dean, College of Computing and Digital Media
https://via.library.depaul.edu/cdmannual/1001/thumbnail.jp
Bridging the Gulf of Envisioning: Cognitive Design Challenges in LLM Interfaces
Large language models (LLMs) exhibit dynamic capabilities and appear to
comprehend complex and ambiguous natural language prompts. However, calibrating
LLM interactions is challenging for interface designers and end-users alike. A
central issue is our limited grasp of how human cognitive processes begin with
a goal and form intentions for executing actions, a blindspot even in
established interaction models such as Norman's gulfs of execution and
evaluation. To address this gap, we theorize how end-users 'envision'
translating their goals into clear intentions and craft prompts to obtain the
desired LLM response. We define a process of Envisioning by highlighting three
misalignments: (1) knowing whether LLMs can accomplish the task, (2) how to
instruct the LLM to do the task, and (3) how to evaluate the success of the
LLM's output in meeting the goal. Finally, we make recommendations to narrow
the envisioning gulf in human-LLM interactions.
ACS: Concurrent Kernel Execution on Irregular, Input-Dependent Computational Graphs
GPUs are widely used to accelerate many important classes of workloads today.
However, we observe that several important emerging classes of workloads,
including simulation engines for deep reinforcement learning and dynamic neural
networks, are unable to fully utilize the massive parallelism that GPUs offer.
These applications tend to have kernels that are small in size, i.e., have few
thread blocks that do not saturate compute resources. Executing independent
kernels concurrently is a promising approach to improve parallelism and
utilization. However, this inter-kernel concurrency is difficult to leverage in
such workloads with existing approaches: First, the inter-kernel dependencies
and computational graph are input-dependent and vary each time the application
is executed. Second, the computational graphs tend to be irregular, requiring
fine-grained scheduling and synchronization, and thus incurring significant
synchronization overheads if kernel execution is parallelized. In this work, we
propose ACS, a framework that enables lightweight detection of inter-kernel
dependencies and low overhead kernel scheduling at runtime. The key idea behind
ACS is to perform inter-kernel dependency checks for a small window of kernels
at runtime, similar to out-of-order instruction scheduling. This enables
concurrent execution of kernels in applications whose computational graphs are
input dependent and require fine-grained scheduling. We propose ACS-SW, a
software-only open-source implementation of ACS and ACS-HW, a hardware-software
cooperative implementation. ACS-HW further reduces synchronization overheads by
reducing communication between the CPU and GPU. We evaluate ACS for deep RL
simulation and dynamic DNNs on both real hardware and a GPU simulator. We
demonstrate speedups of up to 2.19x (1.56x on average) by improving GPU
utilization with concurrent kernel execution.
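The windowed dependency check at the heart of ACS can be sketched in a few lines: inspect a small window of pending kernels, mark a kernel ready when it has no read/write conflict with any earlier kernel still in the window, and launch each ready batch concurrently. The dictionary-of-sets representation of kernels and buffers below is an assumption for illustration; the real system tracks GPU memory ranges in hardware or a software runtime, not Python sets.

```python
# Hypothetical sketch of ACS-style windowed scheduling: within a small
# window of kernels, detect inter-kernel dependencies via buffer overlap
# and issue independent kernels as concurrent batches. Kernel and buffer
# names are illustrative, not from the paper's evaluation.

def depends_on(later, earlier):
    """True if `later` must wait for `earlier`: read-after-write,
    write-after-read, or write-after-write on a shared buffer."""
    return bool(later["reads"] & earlier["writes"] or
                later["writes"] & (earlier["reads"] | earlier["writes"]))

def schedule_window(window):
    """Group a window of kernels into batches that can run concurrently,
    preserving issue order among dependent kernels."""
    remaining = list(window)
    batches = []
    while remaining:
        batch = [k for i, k in enumerate(remaining)
                 if not any(depends_on(k, e) for e in remaining[:i])]
        batches.append([k["name"] for k in batch])
        remaining = [k for k in remaining if k not in batch]
    return batches

kernels = [
    {"name": "k0", "reads": {"x"}, "writes": {"a"}},
    {"name": "k1", "reads": {"y"}, "writes": {"b"}},       # independent of k0
    {"name": "k2", "reads": {"a", "b"}, "writes": {"c"}},  # waits for both
]
print(schedule_window(kernels))  # [['k0', 'k1'], ['k2']]
```

Because the check only scans a bounded window, its cost stays small even when the full computational graph is input-dependent and changes every run, which is the property the abstract's out-of-order-scheduling analogy captures.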