AC/3 V1.00. A tool for automatic error correction of combinatorial circuits
AC/3 is a tool for performing automatic error correction
in combinatorial circuits. Two circuits must be provided to the
system: one serves as the specification circuit and the
other as the current implementation. AC/3 tries to prove
equivalence between the two designs and performs automatic error
correction if equivalence does not hold. The tool is based on
the rectification theory developed in [TechReport1999-6].
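The check-then-correct flow described above can be sketched as a miniature miter-style equivalence check: exhaustively compare a specification against an implementation and return the first input assignment on which they differ. The two circuits below are made-up stand-ins, not AC/3's actual interface.

```python
# Toy equivalence check: find a counterexample input, if any.
# spec/impl here are illustrative lambdas, not AC/3's circuit format.
from itertools import product

def find_counterexample(spec, impl, n_inputs):
    for bits in product((0, 1), repeat=n_inputs):
        if spec(*bits) != impl(*bits):
            return bits        # witness that equivalence does not hold
    return None                # circuits are equivalent

spec = lambda a, b, c: (a & b) | c
impl = lambda a, b, c: (a & b) ^ c   # injected bug: OR gate wired as XOR

cex = find_counterexample(spec, impl, 3)   # -> (1, 1, 1)
```

The returned assignment is exactly the kind of discrepancy witness an error-correction pass starts from; a real tool replaces the exhaustive loop with symbolic reasoning.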
Using BDD-based decomposition for automatic error correction of combinatorial circuits
Boolean equivalence checking has turned out to be a powerful
method for verifying combinatorial circuits and has been widely
accepted both in academia and industry.
In this paper, we present a method for localizing and correcting
errors in combinatorial circuits for which equivalence checking
has failed. Our approach is general and does not assume any
error model. Working directly on BDDs, the approach is well
suited for integration into commonly used equivalence checkers.
Since circuits can be corrected fully automatically, our
approach can save considerable debugging time and therefore
will speed up the whole design cycle.
We have implemented a prototype verification tool and evaluated
our method with the Berkeley benchmark circuits. In addition, we
have applied it successfully to a real life example taken from
[DrFe96]
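Since the abstract emphasises working directly on BDDs, here is a minimal sketch of the underlying data structure: a reduced ordered BDD (ROBDD) with hash-consing and an apply operator. Because ROBDDs are canonical for a fixed variable order, equivalence checking reduces to comparing node ids. All names are illustrative, not the paper's prototype.

```python
# Minimal ROBDD: unique-table hash-consing + memoized apply.
AND = lambda a, b: a & b
OR  = lambda a, b: a | b
XOR = lambda a, b: a ^ b

class BDD:
    def __init__(self):
        # ids 0 and 1 are the terminal nodes False / True
        self.nodes = [None, None]          # id -> (var, lo, hi)
        self.unique = {}                   # hash-consing table
        self.memo = {}                     # apply cache

    def mk(self, var, lo, hi):
        if lo == hi:                       # redundant test: skip the node
            return lo
        key = (var, lo, hi)
        if key not in self.unique:
            self.unique[key] = len(self.nodes)
            self.nodes.append(key)
        return self.unique[key]

    def var(self, i):                      # BDD for the variable x_i
        return self.mk(i, 0, 1)

    def apply(self, op, u, v):             # combine two BDDs with op
        key = (op, u, v)
        if key not in self.memo:
            if u < 2 and v < 2:            # both terminal
                r = op(u, v)
            else:
                vu = self.nodes[u][0] if u > 1 else float("inf")
                vv = self.nodes[v][0] if v > 1 else float("inf")
                m = min(vu, vv)
                ul, uh = self.nodes[u][1:] if vu == m else (u, u)
                vl, vh = self.nodes[v][1:] if vv == m else (v, v)
                r = self.mk(m, self.apply(op, ul, vl),
                               self.apply(op, uh, vh))
            self.memo[key] = r
        return self.memo[key]

bdd = BDD()
a, b, c = bdd.var(0), bdd.var(1), bdd.var(2)
spec = bdd.apply(OR, bdd.apply(AND, a, b), c)    # (a & b) | c
impl = bdd.apply(AND, bdd.apply(OR, a, b), c)    # (a | b) & c -- not equivalent
# Canonicity: equal functions share a node id, so this is an O(1) test.
equivalent = (spec == impl)
```

Production equivalence checkers add variable reordering and complement edges, but the canonicity argument for the equivalence test is the same.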
Quark condensate in one-flavor QCD
We compute the condensate in QCD with a single quark flavor using numerical
simulations with the overlap formulation of lattice fermions. The condensate is
extracted by fitting the distribution of low-lying eigenvalues of the Dirac
operator in sectors of fixed topological charge to the predictions of Random
Matrix Theory. Our results are in excellent agreement with estimates from the
orientifold large-N_c expansion.
Comment: 12 pages, 4 figures, REVTeX4; v2: small changes, extended
introduction, published version
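For background on the fitting procedure, the extraction of the condensate from low-lying Dirac eigenvalues rests on the Banks-Casher relation and the microscopic scaling variable of Random Matrix Theory (standard lattice-QCD background, not spelled out in the abstract):

```latex
% Banks-Casher relation: the condensate is fixed by the spectral
% density of the Dirac operator near the origin
\Sigma \;=\; \lim_{\lambda \to 0}\, \lim_{V \to \infty}\,
             \frac{\pi\,\rho(\lambda)}{V},
\qquad
\zeta \;\equiv\; \lambda\,\Sigma\,V
% RMT predicts universal distributions of the rescaled low-lying
% eigenvalues \zeta in each sector of fixed topological charge,
% so fitting those distributions yields \Sigma.
```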
The impact of adjusting for baseline in pharmacogenomic genome-wide association studies of quantitative change.
In pharmacogenomic studies of quantitative change, any association between genetic variants and the pretreatment (baseline) measurement can bias the estimate of effect between those variants and drug response. A putative solution is to adjust for baseline. We conducted a series of genome-wide association studies (GWASs) for low-density lipoprotein cholesterol (LDL-C) response to statin therapy in 34,874 participants of the Genetic Epidemiology Research on Adult Health and Aging (GERA) cohort as a case study to investigate the impact of baseline adjustment on results generated from pharmacogenomic studies of quantitative change. Across phenotypes of statin-induced LDL-C change, baseline adjustment identified variants from six loci meeting genome-wide significance (SORT1/CELSR2/PSRC1, LPA, SLCO1B1, APOE, APOB, and SMARCA4/LDLR). In contrast, baseline-unadjusted analyses yielded variants from three loci meeting the criteria for genome-wide significance (LPA, APOE, and SLCO1B1). A genome-wide heterogeneity test of baseline versus statin on-treatment LDL-C levels was performed as the definitive test for the true effect of genetic variants on statin-induced LDL-C change. These findings were generally consistent with the models not adjusting for baseline, signifying that genome-wide significant hits generated only from baseline-adjusted analyses (SORT1/CELSR2/PSRC1, APOB, SMARCA4/LDLR) were likely biased. We then comprehensively reviewed published GWASs of drug-induced quantitative change and discovered that more than half (59%) inappropriately adjusted for baseline. Altogether, we demonstrate that (1) baseline adjustment introduces bias in pharmacogenomic studies of quantitative change and (2) this erroneous methodology is highly prevalent. We conclude that it is critical to avoid this common statistical approach in future pharmacogenomic studies of quantitative change.
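The mechanism of the bias (regression to the mean when a noisily measured baseline enters the model) can be demonstrated in a few lines of simulation. All numbers below are made-up illustrative values, not GERA estimates: the simulated variant raises baseline LDL-C but has no effect at all on the drug-induced change.

```python
import numpy as np

# Hypothetical simulation of baseline-adjustment bias: a variant that
# shifts baseline only acquires a spurious "response" effect once the
# analysis adjusts for a baseline measured with error.
rng = np.random.default_rng(0)
n = 20_000
g = rng.binomial(2, 0.3, n)                     # genotype dosage 0/1/2
true_base = 130 + 5 * g + rng.normal(0, 10, n)  # variant shifts baseline only
change = -50 + rng.normal(0, 8, n)              # drug response: no genetic effect
base_obs = true_base + rng.normal(0, 10, n)     # baseline + measurement error
post_obs = true_base + change + rng.normal(0, 10, n)
delta = post_obs - base_obs                     # observed quantitative change

def ols(y, covs):
    """Least-squares coefficients for y ~ intercept + covs."""
    X = np.column_stack([np.ones(len(y))] + covs)
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_unadjusted = ols(delta, [g])[1]           # close to 0: the correct answer
b_adjusted = ols(delta, [g, base_obs])[1]   # pushed well away from 0
```

The unadjusted model recovers the true null effect, while conditioning on the error-laden baseline induces a clearly nonzero genotype coefficient, mirroring the paper's argument for avoiding baseline adjustment.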
Verification of a GF(2^m) multiplier circuit for digital signal processing
Hard-wired solutions for Finite Field Arithmetic have become
increasingly important in recent years and are mostly part
of domain specific Digital Signal Processors (DSPs).
We have specified and verified a real-life example of an
array-type multiplier for Finite Field multiplication in
GF(2^m) [1].
The multiplier has been specified in higher-order logic and
correctness has been proven using the HOL98 theorem prover.
Since our model is generic, the correctness results hold for
arbitrarily scaled circuits.
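The operation the verified array multiplier computes is carry-free polynomial-basis multiplication modulo an irreducible polynomial. A minimal software model is sketched below for the concrete case of GF(2^8) with the AES reduction polynomial x^8+x^4+x^3+x+1 (0x11B); this field and polynomial are purely illustrative, since the paper's circuit is generic in m.

```python
# Shift-and-add multiplication in GF(2^m): additions are XORs and
# degree overflow is reduced modulo the field polynomial.
def gf_mul(a, b, m=8, poly=0x11B):
    """Multiply field elements a, b in GF(2^m) modulo an irreducible poly."""
    r = 0
    while b:
        if b & 1:            # add (XOR in) the current shift of a
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):     # degree reached m: reduce by the field poly
            a ^= poly
    return r
```

For example, `gf_mul(0x57, 0x83)` yields 0xC1, the worked multiplication example in the AES specification (FIPS 197); each loop iteration mirrors one row of an array-type multiplier.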
Long-term in vivo imaging of fibrillar tau in the retina of P301S transgenic mice.
Tauopathies are widespread neurodegenerative disorders characterised by the intracellular accumulation of hyperphosphorylated tau. Especially in Alzheimer's disease, pathological alterations in the retina are discussed as potential biomarkers to improve early diagnosis of the disease. Using mice expressing human mutant P301S tau, we demonstrate for the first time a straightforward optical approach for the in vivo detection of fibrillar tau in the retina. Longitudinal examinations of individual animals revealed the fate of single cells containing fibrillar tau and the progression of tau pathology over several months. This technique is most suitable to monitor therapeutic interventions aimed at reducing the accumulation of fibrillar tau. In order to evaluate whether this approach can be translated to human diagnosis, we tried to detect fibrillar protein aggregates in the post-mortem retinas of patients who had suffered from Alzheimer's disease or Progressive Supranuclear Palsy. Even though we could detect hyperphosphorylated tau, we did not observe any fibrillar tau or Aβ aggregates. Contrary to previous studies, our observations do not support the notion that Aβ or tau in the retina is of diagnostic value in Alzheimer's disease.
A genomic approach to inferring kinship reveals limited intergenerational dispersal in the yellow fever mosquito
Understanding past dispersal and breeding events can provide insight into ecology and evolution, and can help inform strategies for conservation and the control of pest species. However, parent-offspring dispersal can be difficult to investigate in rare species and in small pest species such as mosquitoes. Here we develop a methodology for estimating parent-offspring dispersal from the spatial distribution of close kin, using pairwise kinship estimates derived from genome-wide single nucleotide polymorphisms (SNPs). SNPs were scored in 162 Aedes aegypti (yellow fever mosquito) collected from eight closely spaced high-rise apartment buildings in an area of Malaysia with high dengue incidence. We used the SNPs to reconstruct kinship groups across three orders of kinship. We transformed the geographical distances between all kin pairs within each kinship category into axial standard deviations of these distances, then decomposed these into components representing past dispersal events. From these components, we isolated the axial standard deviation of parent-offspring dispersal, and estimated neighbourhood area (129 m), median parent-offspring dispersal distance (75 m), and oviposition dispersal radius within a gonotrophic cycle (36 m). We also analysed genetic structure using distance-based redundancy analysis and linear regression, finding isolation by distance both within and between buildings and estimating neighbourhood size at 268 individuals. These findings indicate the scale required to suppress local outbreaks of arboviral disease and to target releases of modified mosquitoes for mosquito and disease control. Our methodology is readily implementable for studies of other species, including pests and species of conservation significance.
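One step of the pipeline described above can be sketched numerically: the pooled axial standard deviation of kin-pair separations, and Wright's neighbourhood size N = 4πσ²D under isotropic dispersal. The displacements and density below are synthetic stand-ins for illustration, not the study's data, and the full method's decomposition across kinship orders is omitted.

```python
import numpy as np

# Synthetic kin-pair separation vectors with a hypothetical
# true axial standard deviation of 50 m.
rng = np.random.default_rng(1)
true_sigma = 50.0
dx = rng.normal(0.0, true_sigma, 10_000)   # x-components of separations (m)
dy = rng.normal(0.0, true_sigma, 10_000)   # y-components of separations (m)

def axial_sd(dx, dy):
    """Pooled one-dimensional (axial) sd of 2-D kin-pair separations."""
    return float(np.sqrt((np.mean(dx ** 2) + np.mean(dy ** 2)) / 2.0))

sigma = axial_sd(dx, dy)                   # recovers ~50 m
density = 0.01                             # hypothetical adults per m^2
neighbourhood_size = 4 * np.pi * sigma ** 2 * density
```

With these made-up inputs the estimator recovers the 50 m axial sd to within sampling error; in the study, the analogous quantity is isolated per kinship category before decomposition into dispersal components.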
Experimentally verified pulse formation model for high-power femtosecond VECSELs
Optically pumped vertical-external-cavity surface-emitting lasers (OP-VECSELs), passively modelocked with a semiconductor saturable absorber mirror (SESAM), have generated the highest average output power from any sub-picosecond semiconductor laser. Many applications, including frequency comb synthesis and coherent supercontinuum generation, require pulses in the sub-300-fs regime. A quantitative understanding of the pulse formation mechanism is required in order to reach this regime while maintaining stable, high-average-power performance. We present a numerical model with which we have obtained excellent quantitative agreement with two recent experiments in the femtosecond regime, and we have been able to correctly predict both the observed pulse duration and the output power for the first time. Our numerical model not only confirms the soliton-like pulse formation in the femtosecond regime, but also allows us to develop several clear guidelines to scale the performance toward shorter pulses and higher average output power. In particular, we show that a key VECSEL design parameter is a high gain saturation fluence. By optimizing this parameter, 200-fs pulses with an average output power of more than 1 W should be possible
A new liver perfusion and preservation system for transplantation research in large animals
A kidney perfusion machine, model MOX-100 (Waters Instruments, Ltd, Rochester, MN), was modified to allow continuous perfusion of the portal vein and pulsatile perfusion of the hepatic artery of the liver. Additional apparatus consists of a cooling system, a membrane oxygenator, a filter for foreign bodies, and bubble traps. This system not only allows hypothermic perfusion preservation of the liver graft, but also enables investigation of ex vivo simulation of various circulatory circumstances in which physiological perfusion of the liver is studied. We have used this system to evaluate the viability of liver allografts preserved by cold storage. The liver was placed on the perfusion system and perfused with blood with a hematocrit of approximately 20% and maintained at 37°C for 3 h. The flows of the hepatic artery and portal vein were adjusted to 0.33 and 0.67 mL/g of liver tissue, respectively. Parameters of viability consisted of hourly bile output, oxygen consumption, liver enzymes, electrolytes, vascular resistance, and liver histology. This method of liver assessment in large animals will allow the objective evaluation of organ viability for transplantation and thereby improve the outcome of organ transplantation. Furthermore, this pump enables investigation into the pathophysiology of liver ischemia and preservation. © 1990 Informa UK Ltd. All rights reserved; reproduction in whole or in part not permitted.