Ensuring Average Recovery with Adversarial Scheduler
In this paper, we focus on revising a given program so that its average recovery time in the presence of an adversarial scheduler is bounded by a given threshold lambda. Specifically, we consider the scenario where a fault (or other unexpected action) perturbs the program to a state outside its set of legitimate states. Starting from this state, the program executes its actions/transitions to recover to the legitimate states. However, the adversarial scheduler can force the program into an illegitimate state that requires a longer recovery time.
To ensure that the average recovery time is less than lambda, we need to remove certain transitions/behaviors. We show that achieving this average recovery time while removing the minimum number of transitions is NP-hard. In other words, there is a tradeoff between the time taken to synthesize the program and the transitions preserved to reduce the average convergence time. We present six different heuristics and evaluate this tradeoff with case studies. Finally, we note that the average convergence time considered here requires the formalization of hyperproperties. Hence, this work also demonstrates the feasibility of adding (certain) hyperproperties to an existing program.
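To make the setting concrete, here is a minimal sketch (not the paper's algorithm; the toy program and state names are invented for illustration) of computing per-state recovery times when the scheduler adversarially picks, among the program's enabled transitions, the one that delays recovery the longest:

```python
# Illustrative sketch: worst-case (adversarial-scheduler) recovery time of
# each state, computed by value iteration. Hypothetical toy program.

def adversarial_recovery_times(transitions, legitimate):
    """transitions: dict mapping state -> list of successor states.
    Returns dict state -> steps to reach a legitimate state when the
    scheduler always picks the slowest-recovering enabled transition."""
    times = {s: 0 if s in legitimate else float("inf") for s in transitions}
    changed = True
    while changed:
        changed = False
        for s, succs in transitions.items():
            if s in legitimate:
                continue
            # The adversary maximizes over the enabled transitions.
            t = 1 + max(times[v] for v in succs)
            if t < times[s]:
                times[s] = t
                changed = True
    return times

# Toy program: states 0 and 1 are legitimate; a fault can perturb the
# program to state 2 or 3.
transitions = {0: [0], 1: [1], 2: [1, 3], 3: [1]}
legit = {0, 1}
t = adversarial_recovery_times(transitions, legit)
avg = sum(t[s] for s in transitions if s not in legit) / 2  # 1.5 here
```

In this toy instance, removing the single transition 2 -> 3 drops state 2's worst-case recovery time from 2 to 1 and the average from 1.5 to 1.0, which illustrates the transition-removal tradeoff the paper studies.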
The Parallel Persistent Memory Model
We consider a parallel computational model that consists of P processors,
each with a fast local ephemeral memory of limited size, and sharing a large
persistent memory. The model allows for each processor to fault with bounded
probability, and possibly restart. On faulting all processor state and local
ephemeral memory are lost, but the persistent memory remains. This model is
motivated by upcoming non-volatile memories that are as fast as existing random
access memory, are accessible at the granularity of cache lines, and have the
capability of surviving power outages. It is further motivated by the
observation that in large parallel systems, failure of processors and their
caches is not unusual.
Within the model we develop a framework for developing locality efficient
parallel algorithms that are resilient to failures. There are several
challenges, including the need to recover from failures, the desire to do this
in an asynchronous setting (i.e., not blocking other processors when one
fails), and the need for synchronization primitives that are robust to
failures. We describe approaches to solve these challenges based on breaking
computations into what we call capsules, which have certain properties, and
developing a work-stealing scheduler that functions properly within the context
of failures. The scheduler guarantees a time bound of O(W/P_A + D⌈P/P_A⌉⌈log_{1/f} W⌉) in expectation, where W and D are the work and depth of the computation (in the absence of failures), P_A is the average number of processors available during the computation, and f is the probability that a capsule fails. Within the model and using the proposed methods, we develop efficient algorithms for parallel sorting and other primitives.
Comment: This paper is the full version of a paper at SPAA 2018 with the same name.
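The capsule idea can be illustrated with a small sketch (hypothetical API; persistent memory is modeled as a dict, and this omits the model's restrictions, such as write-after-read constraints, that make re-execution safe in general):

```python
# Minimal sketch of a capsule: a unit of work that reads inputs from
# persistent memory, may lose all ephemeral state on a fault, and commits
# its result so that re-execution after a restart is idempotent.

import random

persistent = {}  # survives faults (stands in for non-volatile memory)

def run_capsule(name, inputs_key, fn, fault_prob=0.0):
    """Execute fn on persistent inputs and commit the result.
    If the capsule already committed, skip re-execution (idempotence)."""
    done_key = (name, "done")
    if persistent.get(done_key):
        return persistent[(name, "out")]
    ephemeral = fn(persistent[inputs_key])   # lost if a fault occurs here
    if random.random() < fault_prob:
        raise RuntimeError("processor fault")  # simulated fault
    persistent[(name, "out")] = ephemeral    # write result, then mark done
    persistent[done_key] = True
    return ephemeral

persistent["x"] = [3, 1, 2]
# The retry loop models restart after a fault; the capsule is safe to re-run.
while True:
    try:
        out = run_capsule("sort", "x", sorted, fault_prob=0.3)
        break
    except RuntimeError:
        continue
```

The key property is that a capsule's effects become visible only at commit, so a fault mid-capsule leaves persistent memory in a state from which the capsule can simply be re-run.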
SAGE: Sequential Attribute Generator for Analyzing Glioblastomas using Limited Dataset
While deep learning approaches have shown remarkable performance in many
imaging tasks, most of these methods rely on availability of large quantities
of data. Medical image data, however, is scarce and fragmented. Generative
Adversarial Networks (GANs) have recently been very effective in handling such
datasets by generating more data. If the datasets are very small, however, GANs
cannot learn the data distribution properly, resulting in less diverse or
low-quality results. One such limited dataset is that for the concurrent gain
of 19 and 20 chromosomes (19/20 co-gain), a mutation with positive prognostic
value in Glioblastomas (GBM). In this paper, we detect imaging biomarkers for
the mutation to streamline the extensive and invasive prognosis pipeline. Since
this mutation is relatively rare, i.e., the dataset is small, we propose a novel generative framework, the Sequential Attribute GEnerator (SAGE), which generates detailed tumor imaging features while learning from a limited dataset. Experiments show that not only does SAGE generate higher-quality tumors than the standard Deep Convolutional GAN (DC-GAN) and Wasserstein GAN with Gradient Penalty (WGAN-GP), it also captures the imaging biomarkers accurately.
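For reference, the WGAN-GP baseline mentioned above trains its critic D with the standard gradient-penalty objective of Gulrajani et al. (this is the baseline's well-known loss, not SAGE's objective):

```latex
L = \mathbb{E}_{\tilde{x}\sim\mathbb{P}_g}\big[D(\tilde{x})\big]
  - \mathbb{E}_{x\sim\mathbb{P}_r}\big[D(x)\big]
  + \lambda\,\mathbb{E}_{\hat{x}\sim\mathbb{P}_{\hat{x}}}
    \Big[\big(\lVert\nabla_{\hat{x}} D(\hat{x})\rVert_2 - 1\big)^2\Big]
```

Here P_r and P_g are the real and generated distributions, x-hat is sampled along lines between real and generated samples, and the penalty term enforces the 1-Lipschitz constraint on the critic.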
The Trickle-down Impact of Reward (In-)consistency on RLHF
Standard practice within Reinforcement Learning from Human Feedback (RLHF)
involves optimizing against a Reward Model (RM), which itself is trained to
reflect human preferences for desirable generations. A notable yet understudied subject is the (in)consistency of RMs -- whether they can recognize semantic changes to different prompts and appropriately adapt their reward assignments -- and the impact of this inconsistency on the downstream RLHF model.
In this paper, we address a series of research questions relevant to RM
inconsistency: (1) How can we measure the consistency of reward models? (2) How
consistent are the existing RMs and how can we improve them? (3) In what ways
does reward inconsistency influence the chatbots resulting from the RLHF model
training?
We propose Contrast Instructions -- a benchmarking strategy for the
consistency of RMs. Each example in Contrast Instructions features a pair of
lexically similar instructions with different ground truth responses. A
consistent RM is expected to rank the corresponding instruction and response
higher than other combinations. We observe that current RMs trained with the
standard ranking objective fail miserably on Contrast Instructions compared to
average humans. To show that RM consistency can be improved efficiently without
using extra training budget, we propose two techniques, ConvexDA and RewardFusion, which enhance reward consistency through extrapolation during the RM training and inference stages, respectively. We show that RLHF models trained with a more consistent RM yield more useful responses, suggesting that reward inconsistency exhibits a trickle-down effect on the downstream RLHF process.
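The evaluation idea behind Contrast Instructions can be sketched as follows (toy reward function and toy data for illustration only; this is not the paper's benchmark or reward model):

```python
# Illustrative consistency check in the spirit of Contrast Instructions:
# a consistent RM should score each instruction's matched response above
# the lexically-similar instruction's response.

def consistency_accuracy(reward, pairs):
    """pairs: list of ((inst_a, resp_a), (inst_b, resp_b)) where the two
    instructions are lexically similar but have different ground-truth
    responses. Counts examples the RM ranks consistently."""
    hits = 0
    for (ia, ra), (ib, rb) in pairs:
        if reward(ia, ra) > reward(ia, rb) and reward(ib, rb) > reward(ib, ra):
            hits += 1
    return hits / len(pairs)

# Toy "RM": reward = word overlap between instruction and response.
def toy_reward(instruction, response):
    return len(set(instruction.split()) & set(response.split()))

pairs = [
    (("list three fruits", "apple banana cherry fruits"),
     ("list three colors", "red green blue colors")),
]
acc = consistency_accuracy(toy_reward, pairs)  # 1.0 on this toy pair
```

A real RM would replace `toy_reward` with the learned scoring function; the accuracy gap against human rankings is what the benchmark measures.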
A physics-informed GAN Framework based on Model-free Data-Driven Computational Mechanics
Model-free data-driven computational mechanics, first proposed by
Kirchdoerfer and Ortiz, replaces phenomenological models with numerical
simulations based on sample data sets in strain-stress space. In this study, we
integrate this paradigm within physics-informed generative adversarial networks
(GANs). We enhance the conventional physics-informed neural network framework
by implementing the principles of data-driven computational mechanics into
GANs. Specifically, the generator is informed by physical constraints, while
the discriminator utilizes the closest strain-stress data to discern the
authenticity of the generator's output. This combined approach presents a new
formalism to harness data-driven mechanics and deep learning to simulate and
predict mechanical behaviors.
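The nearest-data-point step the discriminator relies on can be sketched in a few lines (toy one-dimensional data and plain Euclidean distance; the Kirchdoerfer-Ortiz scheme uses an energy-weighted norm over tensor-valued states):

```python
# Sketch of the model-free data-driven idea: given a trial strain-stress
# state, find the closest measured data point in strain-stress space.

def closest_data_point(trial, dataset):
    """trial: (strain, stress) pair; dataset: list of measured
    (strain, stress) pairs. Returns the nearest dataset point."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(dataset, key=lambda d: dist2(d, trial))

# Toy linear-elastic samples: stress = 200 * strain (arbitrary units).
data = [(e / 100, 200 * e / 100) for e in range(5)]
nearest = closest_data_point((0.021, 4.1), data)
```

In the framework described above, this distance-to-data signal is what the discriminator uses to judge the physical plausibility of the generator's output, instead of a fitted constitutive law.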