
    Encoding of low-quality DNA profiles as genotype probability matrices for improved profile comparisons, relatedness evaluation and database searches

    Many DNA profiles recovered from crime scene samples are of a quality that does not allow them to be searched against, or entered into, databases. We propose a method for the comparison of profiles arising from two DNA samples, one or both of which can have multiple donors and be affected by low DNA template or degraded DNA. We compute likelihood ratios to evaluate the hypothesis that the two samples have a common DNA donor, as well as hypotheses specifying the relatedness of the two donors. Our method uses a probability distribution for the genotype of the donor of interest in each sample. This distribution can be obtained from a statistical model, or we can exploit the ability of trained human experts to assess genotype probabilities, thus extracting much information that would be discarded by standard interpretation rules. Our method is compatible with established methods in simple settings, but it is more widely applicable and can make better use of information than many current methods for the analysis of mixed-source, low-template DNA profiles. It can accommodate uncertainty arising from relatedness instead of, or in addition to, uncertainty arising from noisy genotyping. We describe a computer program, GPMDNA, available under an open-source license, to calculate LRs using the method presented in this paper. (28 pages. Accepted for publication 2 Sep 2016, Forensic Science International: Genetics.)
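The core computation above can be illustrated with a minimal sketch. Assuming the genotype probability distributions are given as dictionaries mapping genotype labels to probabilities, one standard textbook form of the common-donor likelihood ratio (not necessarily GPMDNA's exact formula) sums, over genotypes, the product of the two distributions divided by the population genotype frequency, then multiplies across independent loci:

```python
def locus_lr(post1, post2, prior):
    """Per-locus likelihood ratio for H1 'the two samples share a DNA
    donor' versus H2 'the donors are unrelated members of the
    population'. post1/post2: genotype probability distributions for
    the donor of interest in each sample; prior: population genotype
    frequencies. A standard textbook form, not necessarily the exact
    formula implemented in GPMDNA."""
    return sum(p * post2.get(g, 0.0) / prior[g]
               for g, p in post1.items() if p > 0.0)

def profile_lr(loci):
    """Multiply per-locus LRs across independent loci.
    Each element of `loci` is a (post1, post2, prior) triple."""
    lr = 1.0
    for post1, post2, prior in loci:
        lr *= locus_lr(post1, post2, prior)
    return lr
```

With a point-mass distribution on a genotype of population frequency 0.125, the per-locus LR is simply 1/0.125 = 8; softer (more uncertain) genotype distributions shrink the LR toward 1, which is how the method uses partial information a hard interpretation rule would discard.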

    IEEE Compliant Double-Precision FPU and 64-bit ALU with Variable Latency Integer Divider

    Together, the arithmetic logic unit (ALU) and floating-point unit (FPU) perform all of the mathematical and logic operations of computer processors. Because they are used so prominently, they fall in the critical path of the central processing unit, often becoming the bottleneck, or limiting factor, for performance. As such, the design of a high-speed ALU and FPU is vital to creating a processor capable of performing up to the demanding standards of today's computer users. In this paper, both a 64-bit ALU and a 64-bit FPU are designed based on the reduced instruction set computer architecture. The ALU performs the four basic mathematical operations (addition, subtraction, multiplication and division) in both unsigned and two's complement format, as well as basic logic operations and shifting. The division algorithm is a novel approach, using a comparison-multiples-based SRT divider to create a variable-latency integer divider. The floating-point unit performs the double-precision floating-point operations add, subtract, multiply and divide, in accordance with the IEEE 754 standard for number representation and rounding. The ALU and FPU were implemented in VHDL, simulated in ModelSim, and constrained and synthesized using Synopsys Design Compiler (2006.06) with TSMC 0.13 μm CMOS technology. The timing, power and area synthesis results were recorded and, where applicable, compared to those of the corresponding DesignWare components. The ALU synthesis reported an area of 122,215 gates, a power of 384 mW, and a delay of 2.89 ns, corresponding to a frequency of 346 MHz. The FPU synthesis reported an area of 84,440 gates, a delay of 2.82 ns (an operating frequency of 355 MHz), and a maximum dynamic power of 153.9 mW.
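The divider's behavior can be sketched in software. Below is a plain bit-serial restoring divider for 64-bit unsigned operands, a deliberately simplified stand-in for the comparison-multiples SRT scheme in the paper (which handles signed operands and retires more than one quotient bit per cycle):

```python
def restoring_divide_u64(dividend, divisor):
    """Bit-serial restoring division of 64-bit unsigned integers,
    returning (quotient, remainder). One quotient bit is produced per
    iteration, mimicking one hardware cycle of a radix-2 divider."""
    if divisor == 0:
        raise ZeroDivisionError("divide by zero")
    MASK64 = (1 << 64) - 1
    dividend &= MASK64
    divisor &= MASK64
    remainder = 0
    quotient = 0
    for i in range(63, -1, -1):
        # Shift in the next dividend bit, most significant first.
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        quotient <<= 1
        if remainder >= divisor:        # trial subtraction succeeds
            remainder -= divisor
            quotient |= 1
    return quotient, remainder
```

The fixed 64-iteration loop here is what a plain radix-2 restoring design would take; higher-radix SRT selection (choosing quotient digits by comparing against precomputed multiples of the divisor) cuts the iteration count, and skipping over leading zeros of the operands is one common way a hardware divider obtains variable latency.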

    Easiness Amplification and Uniform Circuit Lower Bounds

    We present new consequences of the assumption that time-bounded algorithms can be "compressed" with non-uniform circuits. Our main contribution is an "easiness amplification" lemma for circuits. One instantiation of the lemma says: if n^{1+e}-time, tilde{O}(n)-space computations have n^{1+o(1)}-size (non-uniform) circuits for some e > 0, then every problem solvable in polynomial time and tilde{O}(n) space has n^{1+o(1)}-size (non-uniform) circuits as well. This amplification has several consequences: * An easy problem without small LOGSPACE-uniform circuits. For all e > 0, we give a natural decision problem, General Circuit n^e-Composition, that is solvable in about n^{1+e} time, but we prove that polynomial-time and logarithmic-space preprocessing cannot produce n^{1+o(1)}-size circuits for the problem. This shows that there are problems solvable in n^{1+e} time which do not have LOGSPACE-uniform circuits of n^{1+o(1)} size, the first result of its kind. We show that our lower bound is non-relativizing, by exhibiting an oracle relative to which the result is false. * Problems without low-depth LOGSPACE-uniform circuits. For all e > 0, 1 < d < 2, and e < d, we give another natural circuit composition problem computable in tilde{O}(n^{1+e}) time, or in O((log n)^d) space (though not necessarily simultaneously), that we prove does not have SPACE[(log n)^e]-uniform circuits of tilde{O}(n) size and O((log n)^e) depth. We also show SAT does not have circuits of tilde{O}(n) size and log^{2-o(1)}(n) depth that can be constructed in log^{2-o(1)}(n) space. * A strong circuit complexity amplification. For every e > 0, we give a natural circuit composition problem and show that if it has tilde{O}(n)-size circuits (uniform or not), then every problem solvable in 2^{O(n)} time and 2^{O(sqrt{n log n})} space (simultaneously) has 2^{O(sqrt{n log n})}-size circuits (uniform or not). We also show the same consequence holds assuming SAT has tilde{O}(n)-size circuits.
As a corollary, if n^{1.1}-time computations (or O(n) nondeterministic-time computations) have tilde{O}(n)-size circuits, then all problems in exponential time and subexponential space (such as quantified Boolean formulas) have significantly subexponential-size circuits. This is a new connection between the relative circuit complexities of easy and hard problems.
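To make the flavor of these circuit-composition problems concrete, here is a minimal sketch, purely illustrative (the paper works with succinctly described circuits and precise size bounds, not explicit gate lists), of evaluating the k-fold composition C(C(...C(x)...)) of an n-input, n-output Boolean circuit:

```python
def eval_circuit(gates, inputs):
    """Evaluate a Boolean circuit given as a flat gate list. Each gate
    is (op, i, j), reading previously computed wires i and j; wires
    0..n-1 hold the inputs. The last n wires are taken as the outputs,
    so an n-in/n-out circuit can be composed with itself."""
    wires = list(inputs)
    for op, i, j in gates:
        if op == 'AND':
            wires.append(wires[i] & wires[j])
        elif op == 'OR':
            wires.append(wires[i] | wires[j])
        elif op == 'NOT':
            wires.append(1 - wires[i])
        else:
            raise ValueError(op)
    return wires[-len(inputs):]

def compose(gates, inputs, k):
    """k-fold composition C(C(...C(x)...)) of the circuit on input x."""
    x = list(inputs)
    for _ in range(k):
        x = eval_circuit(gates, x)
    return x
```

Evaluating the composition gate by gate takes time roughly k times the circuit size, which is the easy upper bound behind "solvable in about n^{1+e} time"; the hard direction, proved in the paper, is that uniform preprocessing cannot compress such compositions into near-linear-size circuits.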

    On the (Non) NP-Hardness of Computing Circuit Complexity

    The Minimum Circuit Size Problem (MCSP) is: given the truth table of a Boolean function f and a size parameter k, is the circuit complexity of f at most k? This is the definitive problem of circuit synthesis, and it has been studied since the 1950s. Unlike many problems of its kind, MCSP is not known to be NP-hard, yet an efficient algorithm for this problem also seems very unlikely: for example, MCSP in P would imply there are no pseudorandom functions. Although most NP-complete problems are complete under strong "local" reduction notions such as poly-logarithmic time projections, we show that MCSP is provably not NP-hard under O(n^(1/2-epsilon))-time projections, for every epsilon > 0. We prove that the NP-hardness of MCSP under (logtime-uniform) AC0 reductions would imply extremely strong lower bounds: NP notsubset P/poly and E notsubset i.o.-SIZE(2^(delta * n)) for some delta > 0 (hence P = BPP also follows). We show that even the NP-hardness of MCSP under general polynomial-time reductions would separate complexity classes: EXP != NP cap P/poly, which implies EXP != ZPP. These results help explain why it has been so difficult to prove that MCSP is NP-hard. We also consider the nondeterministic generalization of MCSP: the Nondeterministic Minimum Circuit Size Problem (NMCSP), where one wishes to compute the nondeterministic circuit complexity of a given function. We prove that the Sigma_2 P-hardness of NMCSP, even under arbitrary polynomial-time reductions, would imply EXP notsubset P/poly
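MCSP is easy to state but seemingly hard to solve, and a brute-force search illustrates why. The sketch below is an assumption-laden simplification: it searches formulas over binary AND/OR gates with free negations, so it bounds formula size (an upper bound on circuit size) rather than true circuit size, and it only scales to a handful of inputs:

```python
from itertools import product

def mcsp_bruteforce(tt, n, k):
    """Does the n-variable Boolean function with truth table `tt`
    (an int over 2**n bits; bit r = the value on input row r) have a
    formula with at most k binary AND/OR gates, negation being free?
    Formula size upper-bounds circuit size, so this is a simplified
    stand-in for MCSP, feasible only for tiny n and k."""
    mask = (1 << (1 << n)) - 1
    # Truth tables of the input variables x_0 .. x_{n-1}.
    inputs = []
    for i in range(n):
        t = 0
        for row in range(1 << n):
            if (row >> i) & 1:
                t |= 1 << row
        inputs.append(t)
    base = set()
    for t in inputs:
        base.add(t)
        base.add(~t & mask)             # negation is free
    levels = [base]                     # levels[c] = tables using exactly c gates
    for c in range(1, k + 1):
        new = set()
        for a_cost in range(c):
            b_cost = c - 1 - a_cost
            for a, b in product(levels[a_cost], levels[b_cost]):
                for g in (a & b, a | b):
                    new.add(g)
                    new.add(~g & mask)  # free negation on gate outputs
        levels.append(new)
    return any(tt in level for level in levels)
```

The search space grows exponentially in k, while the input (a full truth table) already has length 2^n, and nothing substantially better than exhaustive search is known; this tension between an easy-looking statement and no efficient algorithm is exactly what makes MCSP's complexity status so delicate.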

    Climatic versus biotic constraints on carbon and water fluxes in seasonally drought-affected ponderosa pine ecosystems

    We investigated the relative importance of climatic versus biotic controls on gross primary production (GPP) and water vapor fluxes in seasonally drought-affected ponderosa pine forests. The study was conducted in young (YS), mature (MS), and old stands (OS) over 4 years at the AmeriFlux Metolius sites. Model simulations showed that interannual variation of GPP did not follow the same trends as precipitation; effects of climatic variation were smallest at the OS (50%) and intermediate at the YS (<20%). In the young, developing stand, interannual variation in leaf area had larger effects on fluxes than climate, although leaf area is itself a function of climate, in that climate can interact with age-related shifts in carbon allocation and affect whole-tree hydraulic conductance. Older forests, with well-established root systems, appear to be better buffered from the effects of seasonal drought and interannual climatic variation. Interannual variation of net ecosystem exchange (NEE) was also lowest at the OS, where NEE is controlled more by interannual variation of ecosystem respiration, 70% of which is from soil, than by variation of GPP; variation in GPP is the primary driver of interannual changes in NEE at the YS and MS. Across spatially heterogeneous landscapes with a high frequency of younger stands resulting from natural and anthropogenic disturbances, interannual climatic variation and changes in leaf area are likely to result in large interannual variation in GPP and NEE.
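The carbon-balance bookkeeping behind these statements can be sketched simply. Writing NEE as ecosystem respiration minus GPP (one common sign convention; the numbers below are hypothetical, not the Metolius data), a stand whose GPP barely varies between years has its interannual NEE variation set almost entirely by respiration:

```python
from statistics import pvariance

def nee_series(gpp, reco):
    """Annual net ecosystem exchange as ecosystem respiration (Reco)
    minus gross primary production (GPP); with this sign convention,
    positive NEE means a net carbon source to the atmosphere."""
    return [r - g for g, r in zip(gpp, reco)]

# Hypothetical old-stand-like values (gC m-2 yr-1), NOT the Metolius
# data: GPP is nearly flat across years, so interannual NEE variation
# tracks respiration (of which roughly 70% is soil-derived at the OS).
gpp_os = [1500.0, 1502.0, 1498.0, 1501.0]
reco_os = [1300.0, 1420.0, 1250.0, 1380.0]
nee_os = nee_series(gpp_os, reco_os)
```

When GPP is exactly constant, the year-to-year variance of NEE equals that of respiration; at the younger stands, where GPP itself swings strongly, the GPP term dominates instead.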

    csSampling: An R Package for Bayesian Models for Complex Survey Data

    We present csSampling, an R package for estimation of Bayesian models for data collected from complex survey samples. csSampling combines functionality from the probabilistic programming language Stan (via the rstan and brms R packages) with the handling of complex survey data from the survey R package. Under this approach, the user creates a survey-weighted model in brms or provides a custom weighted model via rstan. Survey design information is provided via the svydesign function of the survey package. The cs_sampling function of csSampling estimates the weighted Stan model and provides an asymptotic covariance correction for model misspecification due to using survey sampling weights as plug-in values in the likelihood. This is often known as a "design effect": the ratio between the variance from a complex survey sample and that from a simple random sample of the same size. The resulting adjusted posterior draws can then be used for the usual Bayesian inference while also achieving frequentist properties of asymptotic consistency and correct uncertainty quantification (e.g., coverage). (22 pages, 5 figures.)
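The covariance correction can be illustrated in miniature. Below is a scalar, language-agnostic sketch (in Python rather than R, and not csSampling's actual algorithm, which operates on the full multivariate posterior with a sandwich-type asymptotic covariance): rescale pseudo-posterior draws about their mean so that their variance matches a target design-based variance, the ratio of the two variances playing the role of a design effect:

```python
from math import sqrt
from statistics import mean, variance

def rescale_draws(draws, v_target):
    """Rescale posterior draws about their mean so their sample
    variance equals v_target (e.g. a design-based sandwich variance).
    The posterior mean is preserved; only the spread is corrected."""
    m = mean(draws)
    scale = sqrt(v_target / variance(draws))  # sqrt of a design-effect-like ratio
    return [m + scale * (d - m) for d in draws]
```

Because the transformation is affine about the mean, point estimates are unchanged while interval widths are inflated (or deflated) to reflect the survey design, which is how the adjusted draws recover correct frequentist coverage.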

    The Question of School Resources and Student Achievement: A History and Reconsideration

    One question posed continually over the past century of education research is to what extent school resources affect student outcomes. From the turn of the century to the present, a diverse set of actors, including politicians, physicians, and researchers from a number of disciplines, have studied whether and how money that is provided for schools translates into increased student achievement. The authors discuss the historical origins of the question of whether school resources relate to student achievement, and report the results of a meta-analysis of studies examining that relationship. They find that policymakers, researchers, and other stakeholders have addressed this question using diverse strategies. The way the question is asked, and the methods used to answer it, are shaped by history, as well as by the scholarly, social, and political concerns of any given time. The diversity of methods has resulted in a body of literature too diverse and too inconsistent to yield reliable inferences through meta-analysis. The authors suggest that a collaborative approach addressing the question from a variety of disciplinary and practice perspectives may lead to more effective interventions to meet the needs of all students.

    Uncoupling clutch size, prolactin, and luteinizing hormone using experimental egg removal

    Clutch size is a key avian fitness and life-history trait. A physiological model for clutch size determination (CSD), involving an anti-gonadal effect of prolactin (PRL) via suppression of luteinizing hormone (LH), was proposed over 20 years ago, but has received scant experimental attention since. The few studies examining a PRL-based mechanistic hypothesis for CSD have been equivocal, and recent experiments utilizing a pharmacological agent to manipulate PRL in the zebra finch (Taeniopygia guttata) found no support for a role of this hormone in clutch size determination. Here, we take a complementary approach by manipulating clutch size through egg removal, examining co-variation in PRL and LH between two breeding attempts, as well as through experimentally extended laying. Clutch size increased for egg-removal females, but not controls, and this increase was not correlated with changes in PRL or LH. There were also no differences in PRL between egg-removal females and controls, nor did PRL levels during early, mid- or late laying of supra-normal clutches predict clutch size. By uncoupling PRL, LH and clutch size in our study, several key predictions of the PRL-based mechanistic model for CSD were not supported. However, a positive correlation between PRL levels late in laying and days relative to the last egg (clutch completion) provides an alternative explanation for the equivocal results surrounding the conventional PRL-based physiological model for CSD. We suggest that females coordinate PRL-mediated incubation onset with clutch completion to minimize hatching asynchrony and sibling hierarchy, a behavior that is amplified in females laying larger clutches.

    Relations and Equivalences Between Circuit Lower Bounds and Karp-Lipton Theorems

    A frontier open problem in circuit complexity is to prove P^{NP} is not in SIZE[n^k] for all k; this is a necessary intermediate step towards NP is not in P_{/poly}. Previously, for several classes containing P^{NP}, including NP^{NP}, ZPP^{NP}, and S_2 P, such lower bounds have been proved via Karp-Lipton-style Theorems: to prove C is not in SIZE[n^k] for all k, we show that C subset P_{/poly} implies a "collapse" D = C for some larger class D, where we already know D is not in SIZE[n^k] for all k. It seems obvious that one could take a different approach to prove circuit lower bounds for P^{NP} that does not require proving any Karp-Lipton-style theorems along the way. We show this intuition is wrong: (weak) Karp-Lipton-style theorems for P^{NP} are equivalent to fixed-polynomial size circuit lower bounds for P^{NP}. That is, P^{NP} is not in SIZE[n^k] for all k if and only if (NP subset P_{/poly} implies PH subset i.o.- P^{NP}_{/n}). Next, we present new consequences of the assumption NP subset P_{/poly}, towards proving similar results for NP circuit lower bounds. We show that under the assumption, fixed-polynomial circuit lower bounds for NP, nondeterministic polynomial-time derandomizations, and various fixed-polynomial time simulations of NP are all equivalent. Applying this equivalence, we show that circuit lower bounds for NP imply better Karp-Lipton collapses. That is, if NP is not in SIZE[n^k] for all k, then for all C in {Parity-P, PP, PSPACE, EXP}, C subset P_{/poly} implies C subset i.o.-NP_{/n^epsilon} for all epsilon > 0. Note that unconditionally, the collapses are only to MA and not NP. We also explore consequences of circuit lower bounds for a sparse language in NP. Among other results, we show if a polynomially-sparse NP language does not have n^{1+epsilon}-size circuits, then MA subset i.o.-NP_{/O(log n)}, MA subset i.o.-P^{NP[O(log n)]}, and NEXP is not in SIZE[2^{o(m)}]. 
Finally, we observe connections between these results and the "hardness magnification" phenomena described in recent works.