Relating L-Resilience and Wait-Freedom via Hitting Sets
The condition of t-resilience stipulates that an n-process program is only
obliged to make progress when at least n-t processes are correct. Put another
way, the live sets, the collection of process sets such that progress is
required if all the processes in one of these sets are correct, are all sets
with at least n-t processes.
We show that the ability of an arbitrary collection of live sets L to solve
distributed tasks is tightly related to the minimum hitting set of L, a minimum
cardinality subset of processes that has a non-empty intersection with every
live set. Thus, finding the computing power of L is NP-complete.
For the special case of colorless tasks that allow participating processes to
adopt input or output values of each other, we use a simple simulation to show
that a task can be solved L-resiliently if and only if it can be solved
(h-1)-resiliently, where h is the size of the minimum hitting set of L.
For general tasks, we characterize L-resilient solvability of tasks with
respect to a limited notion of weak solvability: in every execution where all
processes in some set in L are correct, outputs must be produced for every
process in some (possibly different) participating set in L. Given a task T, we
construct another task T_L such that T is solvable weakly L-resiliently if and
only if T_L is solvable weakly wait-free.
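The hitting-set notion above admits a minimal executable illustration. The Python sketch below (illustrative only; not from the paper) finds a minimum hitting set by brute force, which is fine for tiny instances but exponential in general, consistent with the abstract's NP-completeness claim:

```python
from itertools import combinations

def minimum_hitting_set(live_sets):
    """Smallest set of processes intersecting every live set.
    Brute force over subsets: exponential, for tiny examples only."""
    universe = sorted(set().union(*live_sets))
    for k in range(1, len(universe) + 1):
        for cand in combinations(universe, k):
            if all(set(cand) & s for s in live_sets):
                return set(cand)
    return set(universe)

# Live sets over processes {0,1,2,3}: progress is required whenever
# all processes in one of these sets are correct.
L = [{0, 1}, {1, 2}, {2, 3}]
h = minimum_hitting_set(L)
# |h| == 2, so by the abstract's result a colorless task is solvable
# L-resiliently iff it is solvable (|h|-1) = 1-resiliently.
```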
Strong Equivalence Relations for Iterated Models
The Iterated Immediate Snapshot model (IIS), due to its elegant geometrical
representation, has become standard for applying topological reasoning to
distributed computing. Its modular structure makes it easier to analyze than
the more realistic (non-iterated) read-write Atomic-Snapshot memory model (AS).
It is known that AS and IIS are equivalent with respect to \emph{wait-free
task} computability: a distributed task is solvable in AS if and only if it is
solvable in IIS. We observe, however, that this equivalence is not sufficient
in order to explore solvability of tasks in \emph{sub-models} of AS (i.e.
proper subsets of its runs) or computability of \emph{long-lived} objects, and
a stronger equivalence relation is needed. In this paper, we consider
\emph{adversarial} sub-models of AS and IIS specified by the sets of processes
that can be \emph{correct} in a model run. We show that AS and IIS are
equivalent in a strong way: a (possibly long-lived) object is implementable in
AS under a given adversary if and only if it is implementable in IIS under the
same adversary.
Therefore, the computability of any object in shared memory under an
adversarial AS scheduler can be equivalently investigated in IIS.
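The IIS model discussed above can be pictured with a small executable toy. The Python sketch below (names are illustrative, not from the paper) simulates one immediate-snapshot round: processes are scheduled in concurrency classes ("blocks"); each block writes, then snapshots, so every process sees the writes of all earlier blocks plus its own block.

```python
def iis_round(values, ordering):
    """One round of iterated immediate snapshot, simplified: `ordering`
    is a list of blocks (concurrency classes). A block's processes all
    write first, then all snapshot, so each member's view contains
    every earlier block's writes plus its own block's writes."""
    views, written = {}, {}
    for block in ordering:
        for p in block:            # the whole block writes first
            written[p] = values[p]
        snap = dict(written)       # then the whole block snapshots
        for p in block:
            views[p] = snap
    return views

# Three processes: p0 runs alone, then p1 and p2 run concurrently.
views = iis_round({0: "a", 1: "b", 2: "c"}, [[0], [1, 2]])
# p0 sees only its own write; p1 and p2 each see all three writes.
```

Note how the views are ordered by containment, which is the property underlying the model's geometric (subdivision) representation.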
Predicting the stability of atom-like and molecule-like unit-charge Coulomb three-particle systems
Non-relativistic quantum chemical calculations of the particle mass, m₂^±, corresponding to the dissociation threshold in a range of Coulomb three-particle systems of the form {m₁^± m₂^± m₃^∓}, are performed variationally using a series solution method with a Laguerre-based wavefunction. These masses are used to calculate an accurate stability boundary, i.e., the line that separates the stability domain from the instability domains, in a reciprocal mass fraction ternary diagram. This result is compared to a lower bound to the stability domain derived from symmetric systems and reveals the importance of the asymmetric (mass-symmetry breaking) terms in the Hamiltonian at dissociation. A functional fit to the stability boundary data provides a simple analytical expression for calculating the minimum mass of a third particle required for stable binding to a two-particle system, i.e., for predicting the bound state stability of any unit-charge three-particle system.
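The reciprocal mass fraction coordinates mentioned above can be computed directly. The sketch below assumes the common convention that each ternary coordinate is the normalized inverse mass (an assumption; the abstract does not spell out the normalization):

```python
def reciprocal_mass_fractions(m1, m2, m3):
    """Ternary-diagram coordinates for a three-particle system:
    normalized reciprocal masses, which sum to 1 by construction."""
    inv = [1.0 / m for m in (m1, m2, m3)]
    s = sum(inv)
    return tuple(x / s for x in inv)

# Example: H^- (proton + two electrons), masses in electron-mass units.
# The two identical electrons land at equal coordinates, so the system
# sits on a symmetry axis of the ternary diagram.
a1, a2, a3 = reciprocal_mass_fractions(1836.15, 1.0, 1.0)
```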
Monotonic Prefix Consistency in Distributed Systems
We study the issue of data consistency in distributed systems. Specifically,
we consider a distributed system that replicates its data at multiple sites,
which is prone to partitions, and which is assumed to be available (in the
sense that queries are always eventually answered). In such a setting, strong
consistency, where all replicas of the system synchronously apply every
operation, cannot be implemented. However, many weaker consistency
criteria that allow a greater number of behaviors than strong consistency are
implementable in available distributed systems. We focus on determining the
strongest consistency criterion that can be implemented in a convergent and
available distributed system that tolerates partitions, restricting attention
to objects
where the set of operations can be split into updates and queries. We show that
no criterion stronger than Monotonic Prefix Consistency (MPC) can be
implemented.
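MPC as described can be pictured with a toy model: all replicas share one total order of updates, each replica applies a monotonically growing prefix of it, and queries are answered from that prefix alone. A minimal Python sketch (all names are illustrative, not from the paper):

```python
class Replica:
    """Toy replica under Monotonic Prefix Consistency: every query
    observes a prefix of one global total order of updates, and the
    observed prefix never shrinks at a given replica."""
    def __init__(self, log):
        self.log = log      # shared, append-only total order of updates
        self.applied = 0    # length of the prefix applied locally

    def sync(self, upto):
        # Replicas may lag behind, but only move forward on the prefix.
        self.applied = max(self.applied, min(upto, len(self.log)))

    def query(self):
        # The answer depends only on the applied prefix.
        state = 0
        for op in self.log[:self.applied]:
            state = op(state)
        return state

log = [lambda s: s + 1, lambda s: s * 10, lambda s: s + 3]
r1, r2 = Replica(log), Replica(log)
r1.sync(2); r2.sync(3)
# r1 answers from the prefix [+1, *10] -> 10; r2 from the full order -> 13.
```

Different replicas may thus return different answers, but all answers are consistent with prefixes of the same order, which is what distinguishes MPC from eventual consistency.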
Linking Two Seemingly Unrelated Diseases, Cancer and Acute Respiratory Distress Syndrome, Through a Dictyostelium Secreted Protein
The work in this dissertation links two diseases through a protein secreted by Dictyostelium discoideum cells. The protein, AprA, inhibits cell proliferation and induces chemorepulsion (movement away) of Dictyostelium cells. This has implications in both cancer research and the study of Acute Respiratory Distress Syndrome.
Cancer is a misregulation of cellular proliferation. Often the removal of a primary tumor results in rapid metastatic cell proliferation. The rapid proliferation of metastatic cells indicates the presence of a factor, called a chalone, secreted by the primary tumor cells, that inhibits metastatic cell proliferation. The ability of AprA to inhibit proliferation of the cells that secrete it classifies it as a chalone. Using the model organism Dictyostelium and the protein AprA allows us to study chalone signaling mechanisms.
Acute Respiratory Distress Syndrome (ARDS) is characterized by an excess influx of neutrophils into the lungs. Neutrophils damage the lung tissue and ultimately recruit more neutrophils that repeat the process. A need exists to remove these cells and allow resolution to occur. One way to accomplish this is through chemorepulsion, the directional movement of cells away from an external cue. We can use AprA to study the mechanisms of chemorepulsion.
In this dissertation, I have found that the PTEN-like protein CnrN, which is an inhibitor of proliferation and chemotaxis, is involved in both AprA proliferation inhibition and chemorepulsion of Dictyostelium cells. I have shown that the human protein DPPIV, which is structurally similar to AprA, causes chemorepulsion of human neutrophils. Additionally, aspirated DPPIV reduces the accumulation of neutrophils in the lungs of a mouse model of ARDS. Work shown in the appendices suggests that AprA signals through specific G protein-coupled receptors.
The work in this dissertation studies the role of chalones and chemorepellents. It provides a unique opportunity to study chemorepulsion in both Dictyostelium and human cells. The goal is that this work could lead to novel therapies for diseases such as cancer and ARDS.
EARLY PEANUT INTRODUCTION IN INFANTS TO PREVENT PEANUT ALLERGY: IMPROVING GUIDELINE ADHERENCE THROUGH EMR STANDARDIZATION
Background: Peanut allergy in children is a population health problem affecting individuals, families, and healthcare systems. Strong evidence from the Learning Early About Peanut (LEAP) study suggests that early peanut introduction (EPI) for infants after four months of age but before 12 months can reduce the risk of developing peanut allergy (Du Toit et al., 2015; Fleischer et al., 2021; Obbagy et al., 2019; Togias et al., 2017). The success of peanut allergy prevention in infants is highly dependent on primary care providers (PCPs) incorporating the addendum guidelines into routine well-child check (WCC) encounters (Bilaver et al., 2019; Lai & Sicherer, 2019). Addendum guidelines recommending EPI have not been widely adopted in primary care settings. The Children's Primary and Specialty Clinic at UNC had notably low adoption of the addendum guidelines for EPI.
Methods: Using quality improvement (QI) methodology and the model for improvement, researchers developed and implemented a workflow protocol and clinical decision support (CDS) tools to improve guideline adherence through standardization. These tools, available in the electronic medical record (EMR), included smart lists, visit templates, and patient education handouts for home peanut introduction at 4-, 6-, and 9-month WCC encounters. Through plan-do-study-act (PDSA) cycles, the team executed changes and modifications to improve outcomes.
Results: The team collected data from 292 WCC encounters during the QI project. EMR documentation of clinically appropriate EPI guidance at 4-, 6-, and 9-month WCCs shifted from a mean of 8.8% at baseline to 74.7% after 18 weeks of PDSA cycles (p<0.001). Mean provider adoption of smart lists and templates was 67.3%, and distribution of home peanut introduction handouts was 50.2% after 18 weeks of project implementation. There were no statistically significant changes in patient time-in-room (p=0.795). Rates of DTaP vaccination remained at 100% for 6-month visits during the intervention.
Conclusion: QI methodology, PDSA cycles, and interprofessional collaboration in primary care settings improved documentation of EPI guidance at routine WCC encounters without impacting other measures. Broader PCP use of bundled CDS tools and EMR standardization could further improve guideline adherence to prevent peanut allergy in infants.
Doctor of Nursing Practice
Open Transactions on Shared Memory
Transactional memory has emerged as a promising way to solve many of the issues
of lock-based programming. However, most implementations admit only isolated
transactions, which are not adequate when we have to coordinate
communicating processes. To this end, in this paper we present OCTM, a
Haskell-like language with open transactions over shared transactional memory:
processes can join transactions at runtime just by accessing shared
variables. Thus a transaction can cooperate with the environment through
shared variables, but if it is rolled back, all its effects on the
environment are retracted as well. To demonstrate the expressive power of OCTM,
we give an implementation of TCCS, a CCS-like calculus with open transactions.
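The rollback behavior described above, where aborting a transaction retracts all its effects on shared variables, can be sketched with an undo log. This Python toy is an illustration only (OCTM itself is a Haskell-like calculus, and all names here are invented):

```python
class TVar:
    """A shared transactional variable."""
    def __init__(self, value):
        self.value = value

class OpenTransaction:
    """Toy open transaction: writes to shared TVars are logged so that
    an abort retracts every effect, including effects already visible
    to processes that joined by touching the same variables."""
    def __init__(self):
        self.undo = []  # (tvar, old value) pairs, newest last

    def write(self, tvar, value):
        self.undo.append((tvar, tvar.value))
        tvar.value = value          # visible to the environment at once

    def rollback(self):
        for tvar, old in reversed(self.undo):
            tvar.value = old        # retract effects, newest first
        self.undo.clear()

x = TVar(0)
t = OpenTransaction()
t.write(x, 1)   # another process reading x could now join t
t.write(x, 2)
t.rollback()    # all effects on the environment are retracted
```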
The Parallel Persistent Memory Model
We consider a parallel computational model that consists of P processors,
each with a fast local ephemeral memory of limited size, and sharing a large
persistent memory. The model allows for each processor to fault with bounded
probability, and possibly restart. On faulting all processor state and local
ephemeral memory are lost, but the persistent memory remains. This model is
motivated by upcoming non-volatile memories that are as fast as existing random
access memory, are accessible at the granularity of cache lines, and have the
capability of surviving power outages. It is further motivated by the
observation that in large parallel systems, failure of processors and their
caches is not unusual.
Within the model we develop a framework for developing locality efficient
parallel algorithms that are resilient to failures. There are several
challenges, including the need to recover from failures, the desire to do this
in an asynchronous setting (i.e., not blocking other processors when one
fails), and the need for synchronization primitives that are robust to
failures. We describe approaches to solve these challenges based on breaking
computations into what we call capsules, which have certain properties, and
developing a work-stealing scheduler that functions properly within the context
of failures. The scheduler guarantees a time bound of
O(W/P_A + D(P/P_A)⌈log_{1/f} W⌉) in expectation, where W and D are the work and
depth of the computation (in the absence of failures), P_A is the average
number of processors available during the computation, and f is the
probability that a capsule fails. Within the model and using the proposed
methods, we develop efficient algorithms for parallel sorting and other
primitives.
Comment: This paper is the full version of a paper at SPAA 2018 with the same
name.
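The capsule idea above can be illustrated with a toy fault loop: a capsule's ephemeral work is discarded on a fault, only a single committing write to persistent memory survives, and checking for a committed result first makes re-execution idempotent. A Python sketch (a simplification; names are not from the paper):

```python
import random

def run_capsule(persistent, key, compute, fail_prob, rng):
    """Re-execute a capsule until it commits. Ephemeral state is lost
    on a fault; only writes to `persistent` survive. Idempotence comes
    from checking for an already committed result before recomputing."""
    while True:
        if key in persistent:          # committed before an earlier fault
            return persistent[key]
        result = compute()             # ephemeral work, lost on a fault
        if rng.random() < fail_prob:   # processor faults: discard result
            continue                   # restart the capsule from scratch
        persistent[key] = result       # the single committing write
        return result

rng = random.Random(0)
pmem = {}                              # stands in for persistent memory
out = run_capsule(pmem, "sum", lambda: sum(range(10)), 0.5, rng)
# out == 45 no matter how many simulated faults occurred
```

A real scheduler must also handle faults that strike between the commit and the return, which is one reason the paper needs synchronization primitives that are robust to failures.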
Detection of skewed X-chromosome inactivation in Fragile X syndrome and X chromosome aneuploidy using quantitative melt analysis.
Methylation of the fragile X related epigenetic element 2 (FREE2), positioned at the fragile X mental retardation 1 (FMR1) exon 1/intron 1 boundary, reveals skewed X-chromosome inactivation (XCI) in fragile X syndrome full mutation (FM: CGG > 200) females. XCI skewing has also been linked to abnormal X-linked gene expression, with broader clinical impact for sex chromosome aneuploidies (SCAs). In this study, 10 FREE2 CpG sites were targeted using methylation-specific quantitative melt analysis (MS-QMA), including 3 sites that could not be analysed with the previously used EpiTYPER system. The method was applied for detection of skewed XCI in FM females and in different types of SCA. We tested venous blood and saliva DNA collected from 107 controls (CGG < 40), and 148 FM and 90 SCA individuals. MS-QMA identified: (i) most SCAs, if combined with a Y chromosome test; (ii) locus-specific XCI skewing towards the hypomethylated state in FM females; and (iii) skewed XCI towards the hypermethylated state in SCAs with 3 or more X chromosomes, and in 5% of the 47,XXY individuals. MS-QMA output also showed significant correlation with the EpiTYPER reference method in FM males and females (P < 0.0001) and SCAs (P < 0.05). In conclusion, we demonstrate use of MS-QMA to quantify skewed XCI in two applications with diagnostic utility.