544 research outputs found
Lymphocyte blastogenesis to plaque antigens in human periodontal disease
Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/66210/1/j.1600-0765.1977.tb00135.x.pd
Interrelationship between periodontics and adult orthodontics
The purpose of this review article is to provide the dental practitioner with an understanding of the interrelationship between periodontics and orthodontics in adults. Specific areas reviewed include the reaction of periodontal tissue to orthodontic forces, the influence of tooth movement on the periodontium, the effect of circumferential supracrestal fiberotomy in preventing orthodontic relapse, the effect of orthodontic bands on the periodontium, the specific microbiology associated with orthodontic bands, mucogingival considerations, and the time relationship between orthodontic and periodontal therapy. In addition, the relationship between orthodontics and implant restorations (e.g., using dental implants as orthodontic anchorage) is discussed.
Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/75028/1/j.1600-051X.1998.tb02440.x.pd
Evaluation of a Collagen Membrane With and Without Bone Grafts in Treating Periodontal Intrabony Defects
Peer Reviewed
https://deepblue.lib.umich.edu/bitstream/2027.42/141006/1/jper0838.pd
VarSaw: Application-tailored Measurement Error Mitigation for Variational Quantum Algorithms
For potential quantum advantage, Variational Quantum Algorithms (VQAs) need
high accuracy beyond the capability of today's NISQ devices, and thus will
benefit from error mitigation. In this work we are interested in mitigating
measurement errors which occur during qubit measurements after circuit
execution and tend to be the most error-prone operations, especially
detrimental to VQAs. Prior work, JigSaw, has shown that measuring only small
subsets of circuit qubits at a time and collecting results across all such
subset circuits can reduce measurement errors. The qubit-qubit measurement
correlations extracted from runs of the entire (global) original circuit can
then be used in conjunction with the subsets to construct a high-fidelity
output distribution of the original circuit. Unfortunately, the execution cost
of JigSaw scales polynomially in the number of qubits in the circuit, and when
compounded by the number of circuits and iterations in VQAs, the resulting
execution cost quickly becomes insurmountable.
To combat this, we propose VarSaw, which improves JigSaw in an
application-tailored manner, by identifying considerable redundancy in the
JigSaw approach for VQAs: spatial redundancy across subsets from different VQA
circuits and temporal redundancy across globals from different VQA iterations.
VarSaw then eliminates these forms of redundancy by commuting the subset
circuits and selectively executing the global circuits, reducing computational
cost (in terms of the number of circuits executed) over naive JigSaw for VQA by
25x on average and up to 1000x, for the same VQA accuracy. Further, it can
recover, on average, 45% of the infidelity from measurement errors in the noisy
VQA baseline. Finally, it improves fidelity by 55%, on average, over JigSaw for
a fixed computational budget. VarSaw can be accessed here:
https://github.com/siddharthdangwal/VarSaw.
Comment: Appears at the International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS) 2024. First two authors contributed equally.
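The measurement-error mitigation that JigSaw and VarSaw build on can be illustrated with a minimal sketch. This is not the papers' actual subset-circuit method, but the standard calibration-matrix post-processing it improves upon: a readout confusion matrix is characterized, then inverted to recover the ideal distribution. The per-qubit assignment fidelities and the Bell-like output distribution below are made-up illustrative values, and readout errors are assumed uncorrelated across qubits.

```python
import numpy as np

# Hypothetical single-qubit assignment (confusion) matrix:
# A1[i, j] = P(measure outcome i | qubit prepared in state j).
# These fidelities are illustrative, not from a real device.
A1 = np.array([[0.97, 0.05],
               [0.03, 0.95]])

# Two-qubit calibration matrix, assuming uncorrelated readout errors.
A = np.kron(A1, A1)

# Ideal output distribution of some circuit (Bell-state-like, illustrative).
p_true = np.array([0.5, 0.0, 0.0, 0.5])

# What a noisy readout would report (in the limit of infinite shots).
p_noisy = A @ p_true

# Matrix-inversion mitigation: solve A @ p = p_noisy for p.
p_mitigated = np.linalg.solve(A, p_noisy)
```

With finite shot counts and larger qubit counts this inversion becomes noisy and exponentially expensive, which is exactly the regime where subset-based schemes like JigSaw, and VarSaw's redundancy elimination on top of it, become relevant.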
Quantum Vulnerability Analysis to Guide Robust Quantum Computing System Design
While quantum computers provide exciting opportunities for information processing, they currently suffer from noise during computation that is not fully understood. Incomplete noise models have led to discrepancies between quantum program success rate (SR) estimates and actual machine outcomes. For example, the estimated probability of success (ESP) is the state-of-the-art metric used to gauge quantum program performance, yet it predicts poorly because it fails to account for the unique combination of circuit structure, quantum state, and quantum computer properties specific to each program execution. Thus, an urgent need exists for a systematic approach that can elucidate various noise impacts and accurately and robustly predict quantum computer success rates as applications and devices scale. In this article, we propose quantum vulnerability analysis (QVA) to systematically quantify the error impact on quantum applications and address the gap between current SR estimators and real quantum computer results. QVA determines the cumulative quantum vulnerability (CQV) of the target quantum computation, which quantifies the quantum error impact based on the entire algorithm as applied to the target quantum machine. Evaluated with well-known benchmarks on three 27-qubit quantum computers, CQV-based success estimation outperforms the state-of-the-art ESP prediction technique, achieving on average six times lower relative prediction error (up to 30 times in the best cases) for benchmarks with a real SR above 0.1%. A direct application of QVA is also provided that helps researchers choose a promising compilation strategy at compile time.
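As context for the comparison above, the baseline ESP metric is simply the product of the success rates of every operation in the compiled circuit, taken from device calibration data. A minimal sketch, with made-up fidelities standing in for real calibration values:

```python
# ESP (estimated probability of success): multiply together the success
# rate of each gate and each qubit readout in the circuit. All numbers
# below are illustrative placeholders, not real device calibrations.
gate_fidelities = [0.999, 0.995, 0.999, 0.995, 0.990]  # per gate
readout_fidelities = [0.97, 0.96]                      # per measured qubit

esp = 1.0
for f in gate_fidelities + readout_fidelities:
    esp *= f
```

Because ESP multiplies context-free calibration numbers, two circuits with the same gate counts get the same estimate regardless of structure or state, which is the blind spot that CQV is designed to address.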
Attenuation of RNA polymerase II pausing mitigates BRCA1-associated R-loop accumulation and tumorigenesis.
Most BRCA1-associated breast tumours are basal-like yet originate from luminal progenitors. BRCA1 is best known for its functions in double-strand break repair and resolution of DNA replication stress. However, it is unclear whether loss of these ubiquitously important functions fully explains the cell lineage-specific tumorigenesis. In vitro studies implicate BRCA1 in elimination of R-loops, DNA-RNA hybrid structures involved in transcription and genetic instability. Here we show that R-loops accumulate preferentially in breast luminal epithelial cells, not in basal epithelial or stromal cells, of BRCA1 mutation carriers. Furthermore, R-loops are enriched at the 5' end of those genes with promoter-proximal RNA polymerase II (Pol II) pausing. Genetic ablation of Cobra1, which encodes a Pol II-pausing and BRCA1-binding protein, ameliorates R-loop accumulation and reduces tumorigenesis in Brca1-knockout mouse mammary epithelium. Our studies show that Pol II pausing is an important contributor to BRCA1-associated R-loop accumulation and breast cancer development.
Stellar Evolutionary Effects on the Abundances of PAH and SN-Condensed Dust in Galaxies
Spectral and photometric observations of nearby galaxies show a correlation
between the strength of their mid-IR aromatic features, attributed to PAH
molecules, and their metal abundance, leading to a deficiency of these features
in low-metallicity galaxies. In this paper, we suggest that the observed
correlation represents a trend of PAH abundance with galactic age, reflecting
the delayed injection of carbon dust into the ISM by AGB stars in the final
post-AGB phase of their evolution. AGB stars are the primary sources of PAHs
and carbon dust in galaxies, and recycle their ejecta back to the interstellar
medium only after a few hundred million years of evolution on the main
sequence. In contrast, more massive stars that explode as Type II supernovae
inject their metals and dust almost instantaneously after their formation. We
first determined the PAH abundance in galaxies by constructing detailed models
of UV-to-radio SED of galaxies that estimate the contribution of dust in
PAH-free HII regions, and PAHs and dust from photodissociation regions, to the
IR emission. All model components: the galaxies' stellar content, properties of
their HII regions, and their ionizing and non-ionizing radiation fields and
dust abundances, are constrained by their observed multiwavelength spectrum.
After determining the PAH and dust abundances in 35 nearby galaxies using our
SED model, we use a chemical evolution model to show that the delayed injection
of carbon dust by AGB stars provides a natural explanation for the dependence
of the PAH content of galaxies on metallicity. We also show that larger dust
particles giving rise to the far-IR emission follow a distinct evolutionary
trend closely related to the injection of dust by massive stars into the ISM.
Comment: ApJ, 69 pages, 46 figures. Accepted.
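The delayed-injection argument above can be made concrete with a deliberately crude toy model (not the paper's chemical evolution model): under a constant star-formation rate, supernova-produced dust appears essentially immediately, while AGB-produced PAH carriers only appear after a stellar-evolution delay of a few hundred Myr, so the PAH-to-dust ratio rises with galactic age. The delay time and units below are illustrative assumptions.

```python
# Toy delayed-injection sketch: constant star formation; SN dust is
# injected promptly, AGB-produced PAHs only after a delay TAU.
TAU = 0.3   # assumed AGB injection delay in Gyr (illustrative)
SFR = 1.0   # constant star-formation rate (arbitrary units)

def sn_dust(t):
    """Dust from massive stars: injected promptly, grows linearly with age."""
    return SFR * t

def agb_pah(t):
    """PAHs from AGB stars: zero before the delay, linear afterwards."""
    return SFR * max(t - TAU, 0.0)

for t in (0.1, 0.5, 2.0, 10.0):  # galactic ages in Gyr
    ratio = agb_pah(t) / sn_dust(t)
    print(f"age {t:5.1f} Gyr: PAH-to-dust ratio = {ratio:.2f}")
```

Young (hence low-metallicity) systems have a ratio of zero, and the ratio climbs toward its asymptotic value with age, qualitatively reproducing the PAH deficiency observed in low-metallicity galaxies.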
The Build-up of the Colour-Magnitude Relation as a Function of Environment
We discuss the environmental dependence of galaxy evolution based on deep
panoramic imaging of two distant clusters taken with Suprime-Cam as part of the
PISCES project. By combining with the SDSS data as a local counterpart for
comparison, we construct a large sample of galaxies that spans wide ranges in
environment, time, and stellar mass (or luminosity). We find that colours of
galaxies, especially those of faint galaxies, change from blue
to red at a break density as we go to denser regions. Based on local and global
densities of galaxies, we classify three environments: field, groups, and
clusters. We show that the cluster colour-magnitude relation is already built.
In contrast, the bright end of the field colour-magnitude relation has been
vigorously built all the way down to the present day, while the build-up at
the faint end has not yet started. A possible interpretation of
these results is that galaxies evolve in the 'down-sizing' fashion. That is,
massive galaxies complete their star formation first and the truncation of star
formation is propagated to smaller objects as time progresses. This trend is
likely to depend on environment since the build-up of the colour-magnitude
relation is delayed in lower-density environments. Therefore, we may suggest
that the evolution of galaxies took place earliest in massive galaxies and in
high density regions, and it is delayed in less massive galaxies and in lower
density regions.
Comment: 23 pages, 19 figures, accepted for publication in MNRAS.
Superstaq: Deep Optimization of Quantum Programs
We describe Superstaq, a quantum software platform that optimizes the
execution of quantum programs by tailoring to underlying hardware primitives.
For benchmarks such as the Bernstein-Vazirani algorithm and the Qubit Coupled
Cluster chemistry method, we find that deep optimization can improve program
execution performance by at least 10x compared to prevailing state-of-the-art
compilers. To highlight the versatility of our approach, we present results
from several hardware platforms: superconducting qubits (AQT @ LBNL, IBM
Quantum, Rigetti), trapped ions (QSCOUT), and neutral atoms (Infleqtion).
Across all platforms, we demonstrate new levels of performance and new
capabilities that are enabled by deeper integration between quantum programs
and the device physics of hardware.
Comment: Appearing in the IEEE QCE 2023 (Quantum Week) conference.