
    Why Civil and Criminal Procedure Are So Different: A Forgotten History

    Much has been written about the origins of civil procedure. Yet little is known about the origins of criminal procedure, even though it governs how millions of cases in federal and state courts are litigated each year. This Article’s examination of criminal procedure’s origin story questions the prevailing notion that civil and criminal procedure require different treatment. The Article’s starting point is the first draft of the Federal Rules of Criminal Procedure—confidential in 1941 and since forgotten. The draft reveals that reformers of criminal procedure turned to the new rules of civil procedure for guidance. The contents of this draft shed light on an extraordinary moment: reformers initially proposed that all litigation in the United States, civil and criminal, be governed by a unified procedural code. The implementation of this original vision of a unified code would have had dramatic implications for how criminal law is practiced and perceived today. The advisory committee’s final product in 1944, however, set criminal litigation on a very different course. Transcripts of the committee’s initial meetings reveal that the final code of criminal procedure emerged from the clash of ideas presented by two committee members, James Robinson and Alexander Holtzoff. Holtzoff’s traditional views would ultimately persuade other members, cleaving criminal procedure from civil procedure. Since then, differences in civil and criminal litigation have become entrenched and normalized. Yet, at the time the Federal Rules of Criminal Procedure were drafted, a unified code was not just a plausible alternative but the only proposal. The draft’s challenge to the prevailing notion that civil and criminal wrongs inherently require different procedural treatment is a critical contribution to the growing debate over whether the absence of discovery in criminal procedure is justified in light of discovery tools afforded by civil procedure. The first draft of criminal procedure, which called for uniform rules to govern proceedings in all civil and criminal courtrooms, suggests the possibility that current resistance to unification is, to a significant degree, historically contingent.

    Large deviation asymptotics and control variates for simulating large functions

    Consider the normalized partial sums of a real-valued function $F$ of a Markov chain, $\phi_n := n^{-1}\sum_{k=0}^{n-1} F(\Phi(k))$, $n \ge 1$. The chain $\{\Phi(k) : k \ge 0\}$ takes values in a general state space $\mathsf{X}$, with transition kernel $P$, and it is assumed that the Lyapunov drift condition holds: $PV \le V - W + b\mathbb{I}_C$, where $V \colon \mathsf{X} \to (0,\infty)$, $W \colon \mathsf{X} \to [1,\infty)$, the set $C$ is small, and $W$ dominates $F$. Under these assumptions, the following conclusions are obtained:
    1. It is known that this drift condition is equivalent to the existence of a unique invariant distribution $\pi$ satisfying $\pi(W) < \infty$, and the law of large numbers holds for any function $F$ dominated by $W$: $\phi_n \to \phi := \pi(F)$ a.s. as $n \to \infty$.
    2. The lower error probability defined by $\mathsf{P}\{\phi_n \le c\}$, for $c < \phi$, $n \ge 1$, satisfies a large deviation limit theorem when the function $F$ satisfies a monotonicity condition. Under additional minor conditions an exact large deviations expansion is obtained.
    3. If $W$ is near-monotone, then control variates are constructed based on the Lyapunov function $V$, providing a pair of estimators that together satisfy nontrivial large deviation asymptotics for the lower and upper error probabilities.
    In an application to simulation of queues it is shown that exact large deviation asymptotics are possible even when the estimator does not satisfy a central limit theorem.
    Comment: Published at http://dx.doi.org/10.1214/105051605000000737 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org)
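The role of the Lyapunov function as a control variate can be illustrated on a simple reflected random walk. This is a minimal sketch, not the paper's construction: the chain, the quadratic choice $V(x)=x^2$, and all parameters below are illustrative assumptions. Here $\Delta V(x) := E[V(\Phi(k+1)) \mid \Phi(k)=x] - V(x)$ has mean zero under the invariant distribution, so subtracting a scaled empirical average of it leaves the estimator's mean unchanged while cutting its variance.

```python
import random

def simulate(n, p=0.3, seed=1):
    """Reflected random walk on {0,1,2,...}: up w.p. p, down w.p. 1-p."""
    rng = random.Random(seed)
    x, path = 0, []
    for _ in range(n):
        path.append(x)
        x = max(x + (1 if rng.random() < p else -1), 0)
    return path

def estimators(path, p=0.3):
    q = 1.0 - p
    n = len(path)
    # Crude estimator of pi(F) for F(x) = x.
    phi = sum(path) / n
    # Control variate from the Lyapunov function V(x) = x^2:
    #   Delta(x) = 2(p-q)x + 1 for x >= 1, and p at x = 0,
    # which has mean zero under the invariant distribution pi.
    delta = [2 * (p - q) * x + 1 if x >= 1 else p for x in path]
    phi_cv = phi - (sum(delta) / n) / (2 * (p - q))
    return phi, phi_cv

path = simulate(200_000)
phi, phi_cv = estimators(path)
# Exact stationary mean: rho/(1-rho) with rho = p/q = 3/7, i.e. 0.75.
```

With $p=0.3$ the invariant distribution is geometric with ratio $p/q$, so both estimators target $0.75$; the control-variate version collapses most of the fluctuation because, away from the boundary, $\Delta V$ is an affine function of the state.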

    Feature Extraction for Universal Hypothesis Testing via Rank-constrained Optimization

    This paper concerns the construction of tests for universal hypothesis testing problems, in which the alternative hypothesis is poorly modeled and the observation space is large. The mismatched universal test is a feature-based technique for this purpose. In prior work it is shown that its finite-observation performance can be much better than the (optimal) Hoeffding test, and good performance depends crucially on the choice of features. The contributions of this paper include: 1) We obtain bounds on the number of $\epsilon$-distinguishable distributions in an exponential family. 2) This motivates a new framework for feature extraction, cast as a rank-constrained optimization problem. 3) We obtain a gradient-based algorithm to solve the rank-constrained optimization problem and prove its local convergence.
    Comment: 5 pages, 4 figures, submitted to ISIT 201
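The contrast between the Hoeffding test and a feature-based mismatched test can be sketched in a few lines. This is an illustrative toy, not the paper's algorithm: the Hoeffding statistic is the KL divergence between the empirical distribution and the null, while the one-feature mismatched statistic optimizes only over the exponential family generated by a chosen feature $\psi$ (so it can never exceed the full KL divergence). The feature $\psi(x)=x$, the alphabet size, and the tilted alternative are all assumptions made up for the example.

```python
import math, random

def hoeffding_stat(samples, null, m):
    """KL divergence D(empirical || null) over the alphabet {0,...,m-1}."""
    n = len(samples)
    counts = [0] * m
    for s in samples:
        counts[s] += 1
    return sum((c / n) * math.log((c / n) / null[i])
               for i, c in enumerate(counts) if c > 0)

def mismatched_stat(samples, null, psi):
    """One-feature mismatched statistic:
       sup_theta [ theta * mean(psi) - log E_null exp(theta * psi) ],
       maximized here by a coarse grid search over theta."""
    n = len(samples)
    mean_psi = sum(psi[s] for s in samples) / n
    best = 0.0
    for i in range(-400, 401):
        theta = i / 100.0
        log_mgf = math.log(sum(p * math.exp(theta * psi[x])
                               for x, p in enumerate(null)))
        best = max(best, theta * mean_psi - log_mgf)
    return best

m = 8
null = [1 / m] * m
psi = list(range(m))                                   # illustrative feature
alt = [2 * (x + 1) / (m * (m + 1)) for x in range(m)]  # tilted alternative

rng = random.Random(0)
obs = rng.choices(range(m), weights=alt, k=500)
```

On samples drawn from the alternative, both statistics are well above their near-zero values under the null, and the mismatched statistic lower-bounds the Hoeffding statistic, as the variational characterization of KL divergence dictates.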

    Passive Dynamics in Mean Field Control

    Mean-field models are a popular tool in a variety of fields. They provide an understanding of the impact of interactions among a large number of particles, people, or other "self-interested agents", and are an increasingly popular tool in distributed control. This paper considers a particular randomized distributed control architecture introduced in our own recent work. In numerical results it was found that the associated mean-field model had attractive properties for purposes of control. In particular, when viewed as an input-output system, its linearization was found to be minimum phase. In this paper we take a closer look at the control model. The results are summarized as follows:
    (i) The Markov Decision Process framework of Todorov is extended to continuous time models, in which the "control cost" is based on relative entropy. This is the basis of the construction of a family of controlled Markovian generators.
    (ii) A decentralized control architecture is proposed in which each agent evolves as a controlled Markov process. A central authority broadcasts a common control signal to each agent. The central authority chooses this signal based on an aggregate scalar output of the Markovian agents.
    (iii) Provided the control-free system is a reversible Markov process, the following identity holds for the linearization: $\mathrm{Real}(G(j\omega)) = \mathrm{PSD}_Y(\omega) \ge 0$, $\omega \in \mathbb{R}$, where the right-hand side denotes the power spectral density for the output of any one of the individual (control-free) Markov processes.
    Comment: To appear IEEE CDC, 201
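The nonnegativity on the right-hand side of (iii) can be checked numerically in a discrete-time analogue: for a reversible chain, the transition matrix is self-adjoint in $L^2(\pi)$, so the output autocovariance has a spectral representation with nonnegative weights and real eigenvalues, forcing the power spectral density to be nonnegative. A minimal sketch under that assumption (the random chain and output function below are arbitrary illustrative choices, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random reversible chain: symmetric weights W, P = D^{-1} W.
d = 5
W = rng.random((d, d))
W = W + W.T
P = W / W.sum(axis=1, keepdims=True)
pi = W.sum(axis=1) / W.sum()       # stationary dist. (detailed balance holds)

c = rng.random(d)                  # scalar output Y(k) = c(Phi(k))
ct = c - pi @ c                    # centered output function

# Autocovariance R(k) = sum_x pi(x) ct(x) (P^k ct)(x).
K = 200
R, v = [], ct.copy()
for k in range(K):
    R.append(float(pi @ (ct * v)))
    v = P @ v

# PSD_Y(omega) = R(0) + 2 * sum_{k>=1} R(k) cos(k * omega), truncated at K.
omegas = np.linspace(0, np.pi, 500)
psd = R[0] + 2 * sum(R[k] * np.cos(k * omegas) for k in range(1, K))
```

The truncation error is geometrically small since the non-unit eigenvalues of $P$ have modulus below one, so the computed spectrum stays nonnegative up to numerical noise.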

    Generalized Error Exponents For Small Sample Universal Hypothesis Testing

    The small sample universal hypothesis testing problem is investigated in this paper, in which the number of samples $n$ is smaller than the number of possible outcomes $m$. The goal of this work is to find an appropriate criterion to analyze statistical tests in this setting. A suitable model for analysis is the high-dimensional model in which both $n$ and $m$ increase to infinity, and $n = o(m)$. A new performance criterion based on large deviations analysis is proposed, and it generalizes the classical error exponent applicable for large sample problems (in which $m = O(n)$). This generalized error exponent criterion provides insights that are not available from asymptotic consistency or central limit theorem analysis. The following results are established for the uniform null distribution:
    (i) The best achievable probability of error $P_e$ decays as $P_e = \exp\{-(n^2/m) J (1+o(1))\}$ for some $J > 0$.
    (ii) A class of tests based on separable statistics, including the coincidence-based test, attains the optimal generalized error exponents.
    (iii) Pearson's chi-square test has a zero generalized error exponent, and thus its probability of error is asymptotically larger than that of the optimal test.
    Comment: 43 pages, 4 figure
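The coincidence-based test in (ii) can be sketched directly: count colliding pairs among the $n$ samples; under the uniform null the expected count is $\binom{n}{2}/m \approx n^2/(2m)$, and an excess signals a non-uniform source. The parameters, threshold, and alternative distribution below are illustrative assumptions, not the paper's.

```python
import random
from collections import Counter

def collisions(samples):
    """Number of colliding (equal-outcome) pairs among the samples."""
    return sum(c * (c - 1) // 2 for c in Counter(samples).values())

def coincidence_test(samples, m, threshold):
    """Reject the uniform null when the collision count exceeds threshold."""
    return collisions(samples) > threshold

m, n = 10_000, 500                      # small-sample regime: n = o(m)
rng = random.Random(0)

uniform = [rng.randrange(m) for _ in range(n)]
# Alternative: half the probability mass concentrated on m/10 outcomes.
alt = [rng.randrange(m // 10) if rng.random() < 0.5 else rng.randrange(m)
       for _ in range(n)]
# Expected collisions under the null: n(n-1)/(2m), about 12.5 here.
```

Even with only 500 samples over 10,000 outcomes, the concentrated alternative roughly triples the collision count, which is what gives the test its power in the $n=o(m)$ regime.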

    Sequences of binary irreducible polynomials

    In this paper we construct an infinite sequence of binary irreducible polynomials starting from any irreducible polynomial $f_0 \in \mathbb{F}_2[x]$. If $f_0$ is of degree $n = 2^l \cdot m$, where $m$ is odd and $l$ is a non-negative integer, then after an initial finite sequence of polynomials $f_0, f_1, \dots, f_s$ with $s \le l+3$, the degree of $f_{i+1}$ is twice the degree of $f_i$ for any $i \ge s$.
    Comment: 7 pages, minor adjustment
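A construction in this spirit can be sketched with polynomials over $\mathbb{F}_2$ encoded as integer bitmasks, iterating the transform $f \mapsto x^{\deg f}\, f(x + x^{-1})$ and falling back to an irreducible factor of the original degree when the transform is reducible. This is a rough illustration of the flavor of such sequences under those assumed rules, not the paper's exact construction; irreducibility is checked by naive trial division, adequate only for small degrees.

```python
def deg(f):
    return f.bit_length() - 1

def pmul(a, b):
    """Multiply polynomials over F_2 (coefficients stored as bits)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pmod(a, b):
    """Remainder of a modulo b over F_2."""
    while a and deg(a) >= deg(b):
        a ^= b << (deg(a) - deg(b))
    return a

def irreducible(f):
    """Trial division by every polynomial of degree 1..deg(f)//2."""
    n = deg(f)
    if n <= 1:
        return n == 1
    return all(pmod(f, g) != 0
               for d in range(1, n // 2 + 1)
               for g in range(1 << d, 1 << (d + 1)))

def q_transform(f):
    """x^n f(x + 1/x) = sum_k c_k (x^2+1)^k x^(n-k) over F_2."""
    n, r = deg(f), 0
    for k in range(n + 1):
        if (f >> k) & 1:
            t = 1
            for _ in range(k):
                t = pmul(t, 0b101)     # multiply by x^2 + 1
            r ^= t << (n - k)
    return r

def next_poly(f):
    g = q_transform(f)
    if irreducible(g):
        return g
    # Otherwise look for an irreducible factor of the original degree.
    n = deg(f)
    for h in range(1 << n, 1 << (n + 1)):
        if pmod(g, h) == 0 and irreducible(h):
            return h
    raise ValueError("no irreducible factor of degree n found")

# Seed with x^2 + x + 1 and iterate.
seq = [0b111]
for _ in range(3):
    seq.append(next_poly(seq[-1]))
```

Starting from $x^2+x+1$, the first transform yields $x^4+x^3+x^2+x+1$ and the degrees double at each of the first steps, matching the eventual-doubling behavior described in the abstract.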

    Computable exponential bounds for screened estimation and simulation

    Suppose the expectation $E(F(X))$ is to be estimated by the empirical averages of the values of $F$ on independent and identically distributed samples $\{X_i\}$. A sampling rule called the "screened" estimator is introduced, and its performance is studied. When the mean $E(U(X))$ of a different function $U$ is known, the estimates are "screened," in that we only consider those which correspond to times when the empirical average of the $\{U(X_i)\}$ is sufficiently close to its known mean. As long as $U$ dominates $F$ appropriately, the screened estimates admit exponential error bounds, even when $F(X)$ is heavy-tailed. The main results are several nonasymptotic, explicit exponential bounds for the screened estimates. A geometric interpretation, in the spirit of Sanov's theorem, is given for the fact that the screened estimates always admit exponential error bounds, even if the standard estimates do not. And when they do, the screened estimates' error probability has a significantly better exponent. This implies that screening can be interpreted as a variance reduction technique. Our main mathematical tools come from large deviations techniques. The results are illustrated by a detailed simulation example.
    Comment: Published at http://dx.doi.org/10.1214/00-AAP492 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org)
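The screening idea can be illustrated with a heavy-tailed toy example. This is a minimal sketch with illustrative choices of $F$, $U$, the batch scheme, and the threshold, none of which come from the paper: estimate $E(X)=2$ for Pareto samples with tail index 2 (infinite variance), but only trust batches whose empirical mean of $U(X)=X^{3/2}$, a function dominating $F(X)=X$ with known mean 4, is close to that known value.

```python
import random

def pareto2(rng):
    """Pareto with tail index 2 on [1, inf): E[X] = 2, infinite variance."""
    return (1.0 - rng.random()) ** -0.5   # inverse-CDF sampling

rng = random.Random(0)
B, n = 400, 1000                  # number of batches, samples per batch
true_mean, u_mean, eps = 2.0, 4.0, 1.0

plain, screened = [], []
for _ in range(B):
    xs = [pareto2(rng) for _ in range(n)]
    est = sum(xs) / n
    u_hat = sum(x ** 1.5 for x in xs) / n   # empirical mean of U(X) = X^1.5
    plain.append(est)
    # Screen: keep the batch only if the U-average is near its known mean,
    # which discards exactly the batches distorted by large outliers.
    if abs(u_hat - u_mean) < eps:
        screened.append(est)
```

Because a single large sample inflates the $U$-average far more than the $F$-average, the screening rule discards the outlier-contaminated batches, and the surviving estimates cluster much more tightly around the true mean.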