Why Civil and Criminal Procedure Are So Different: A Forgotten History
Much has been written about the origins of civil procedure. Yet little is known about the origins of criminal procedure, even though it governs how millions of cases in federal and state courts are litigated each year. This Article’s examination of criminal procedure’s origin story questions the prevailing notion that civil and criminal procedure require different treatment. The Article’s starting point is the first draft of the Federal Rules of Criminal Procedure—confidential in 1941 and since forgotten. The draft reveals that reformers of criminal procedure turned to the new rules of civil procedure for guidance. The contents of this draft shed light on an extraordinary moment: reformers initially proposed that all litigation in the United States, civil and criminal, be governed by a unified procedural code. The implementation of this original vision of a unified code would have had dramatic implications for how criminal law is practiced and perceived today. The advisory committee’s final product in 1944, however, set criminal litigation on a very different course. Transcripts of the committee’s initial meetings reveal that the final code of criminal procedure emerged from the clash of ideas presented by two committee members, James Robinson and Alexander Holtzoff. Holtzoff’s traditional views would ultimately persuade other members, cleaving criminal procedure from civil procedure. Since then, differences in civil and criminal litigation have become entrenched and normalized. Yet, at the time the Federal Rules of Criminal Procedure were drafted, a unified code was not just a plausible alternative but the only proposal. The draft’s challenge to the prevailing notion that civil and criminal wrongs inherently require different procedural treatment is a critical contribution to the growing debate over whether the absence of discovery in criminal procedure is justified in light of discovery tools afforded by civil procedure. The first draft of criminal procedure, which called for uniform rules to govern proceedings in all civil and criminal courtrooms, suggests the possibility that current resistance to unification is, to a significant degree, historically contingent.
Large deviation asymptotics and control variates for simulating large functions
Consider the normalized partial sums of a real-valued function F of a Markov chain,
\phi_n := n^{-1} \sum_{k=0}^{n-1} F(\Phi(k)), n \ge 1.
The chain {\Phi(k)} takes values in a general state space X, with transition kernel P, and it is assumed that the Lyapunov drift condition holds: PV \le V - W + b I_C, where V: X \to (0,\infty), W: X \to [1,\infty), the set C is small and W dominates F. Under these assumptions, the following conclusions are obtained:
1. It is known that this drift condition is equivalent to the existence of a unique invariant distribution \pi satisfying \pi(W) < \infty, and the law of large numbers holds for any function F dominated by W:
\phi_n \to \phi := \pi(F), a.s., as n \to \infty.
2. The lower error probability defined by P{\phi_n \le c}, for c < \phi, n \ge 1, satisfies a large deviation limit theorem when the function F satisfies a monotonicity condition. Under additional minor conditions an exact large deviations expansion is obtained.
3. If W is near-monotone, then control-variates are constructed based on the Lyapunov function V, providing a pair of estimators that together satisfy non-trivial large deviations asymptotics for the lower and upper error probabilities.
In an application to simulation of queues it is shown that exact large deviation asymptotics are possible even when the estimator does not satisfy a central limit theorem.

Comment: Published at http://dx.doi.org/10.1214/105051605000000737 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org)
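A minimal numerical sketch of the control-variate idea in item 3 above, under simplifying assumptions: the chain is a toy reflected random walk (a crude queue model), the Lyapunov function is V(x) = x^2, and the coefficient theta is hand-picked. It illustrates the variance-reduction mechanism only, not the paper's exact estimator pair or its large deviations analysis.

import numpy as np

# Toy reflected random walk Phi(k+1) = max(Phi(k) + D(k+1), 0),
# with D = +1 w.p. p and -1 w.p. 1-p.  We estimate pi(F) for F(x) = x.
# With V(x) = x^2, the function Delta_V := PV - V has zero mean under the
# invariant distribution pi, so F + theta*Delta_V has the same steady-state
# mean but (for a good theta) is bounded, which sharply reduces variance.

rng = np.random.default_rng(1)
p = 0.4                                   # up-step probability; positive recurrent for p < 1/2
n = 200_000

def delta_V(x):
    """Delta_V(x) = PV(x) - V(x) for V(x) = x^2, in closed form."""
    if x == 0:
        return p                          # PV(0) = p*1 + (1-p)*0, V(0) = 0
    return 2.0 * x * (2.0 * p - 1.0) + 1.0

theta = 1.0 / (2.0 * (1.0 - 2.0 * p))     # makes F + theta*Delta_V bounded

x = 0
plain = with_cv = 0.0
for _ in range(n):
    plain += x                            # standard time-average estimator
    with_cv += x + theta * delta_V(x)     # control-variate estimator
    x = max(x + (1 if rng.random() < p else -1), 0)

print("standard estimator       :", plain / n)
print("control-variate estimator:", with_cv / n)

Because the combined function F + theta*Delta_V is bounded here, the second estimator's fluctuations are far smaller, which is the mechanism behind the improved error-probability asymptotics described in the abstract.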
Passive Dynamics in Mean Field Control
Mean-field models are a popular tool in a variety of fields. They provide an
understanding of the impact of interactions among a large number of particles
or people or other "self-interested agents", and are an increasingly popular
tool in distributed control.
This paper considers a particular randomized distributed control architecture
introduced in our own recent work. In numerical results it was found that the
associated mean-field model had attractive properties for purposes of control.
In particular, when viewed as an input-output system, its linearization was
found to be minimum phase.
In this paper we take a closer look at the control model. The results are
summarized as follows:
(i) The Markov Decision Process framework of Todorov is extended to
continuous time models, in which the "control cost" is based on relative
entropy. This is the basis of the construction of a family of controlled
Markovian generators.
(ii) A decentralized control architecture is proposed in which each agent
evolves as a controlled Markov process. A central authority broadcasts a common
control signal to each agent. The central authority chooses this signal based
on an aggregate scalar output of the Markovian agents.
(iii) Provided the control-free system is a reversible Markov process, an identity is obtained for the linearization in which the right-hand side is the power spectral density for the output of any one of the individual (control-free) Markov processes.

Comment: To appear IEEE CDC, 201
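As context for item (i), the sketch below implements the classical discrete-time version of Todorov's relative-entropy ("linearly solvable") MDP framework, which the paper extends to continuous-time Markovian generators. The three-state chain, cost vector, and function names are illustrative assumptions, not taken from the paper.

import numpy as np

def solve_linearly_solvable_mdp(P, state_cost, n_iter=500):
    """Average-cost linearly solvable MDP: the control cost of using
    transition probabilities u(.|x) in place of the passive dynamics P(x,.)
    is the relative entropy D(u(.|x) || P(x,.)).  The optimal policy is
    obtained from the desirability vector z, the principal eigenvector of
    diag(exp(-q)) P (found here by power iteration): the controlled chain
    is the passive dynamics reweighted ("twisted") by z."""
    G = np.diag(np.exp(-state_cost)) @ P
    z = np.ones(P.shape[0])
    for _ in range(n_iter):               # power iteration
        z = G @ z
        z /= np.linalg.norm(z)
    P_ctrl = P * z[None, :]               # P(x, y) * z(y)
    P_ctrl /= P_ctrl.sum(axis=1, keepdims=True)
    return z, P_ctrl

# Toy example: a reversible three-state birth-death chain with a cost
# penalizing state 2; the controlled chain shifts probability away from it.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
q = np.array([0.0, 0.0, 1.0])
z, P_ctrl = solve_linearly_solvable_mdp(P, q)
print(np.round(P_ctrl, 3))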
Feature Extraction for Universal Hypothesis Testing via Rank-constrained Optimization
This paper concerns the construction of tests for universal hypothesis
testing problems, in which the alternate hypothesis is poorly modeled and the
observation space is large. The mismatched universal test is a feature-based
technique for this purpose. In prior work it is shown that its
finite-observation performance can be much better than the (optimal) Hoeffding
test, and good performance depends crucially on the choice of features. The
contributions of this paper include: 1) We obtain bounds on the number of
\epsilon-distinguishable distributions in an exponential family. 2) This
motivates a new framework for feature extraction, cast as a rank-constrained
optimization problem. 3) We obtain a gradient-based algorithm to solve the
rank-constrained optimization problem and prove its local convergence.

Comment: 5 pages, 4 figures, submitted to ISIT 201
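For background on the mismatched universal test mentioned above, the sketch below computes both the Hoeffding statistic D(Gamma_n || pi) and a feature-restricted ("mismatched") statistic by maximizing the concave dual objective over the feature weights. The alphabet size, feature bank, and alternative distribution are arbitrary choices for illustration; this is not the paper's rank-constrained feature-extraction algorithm.

import numpy as np

def kl(gamma, pi):
    """Hoeffding test statistic: KL divergence D(gamma || pi)."""
    mask = gamma > 0
    return float(np.sum(gamma[mask] * np.log(gamma[mask] / pi[mask])))

def mismatched_divergence(gamma, pi, psi, lr=0.5, n_steps=2000):
    """sup over theta of  <gamma, theta^T psi> - log <pi, exp(theta^T psi)>,
    computed by gradient ascent on the concave objective."""
    theta = np.zeros(psi.shape[0])
    for _ in range(n_steps):
        f = theta @ psi                          # f_theta on the alphabet
        tilted = pi * np.exp(f)
        tilted /= tilted.sum()                   # exponentially tilted distribution
        theta += lr * (psi @ (gamma - tilted))   # gradient step
    f = theta @ psi
    return float(gamma @ f - np.log(pi @ np.exp(f)))

# Toy setup: alphabet of size 8, uniform null, two hand-picked features,
# and 50 samples drawn from a tilted alternative.
m = 8
pi = np.full(m, 1.0 / m)
psi = np.vstack([np.arange(m) / m, (np.arange(m) / m) ** 2])
rng = np.random.default_rng(0)
alt = np.linspace(1.0, 2.0, m); alt /= alt.sum()
samples = rng.choice(m, size=50, p=alt)
gamma = np.bincount(samples, minlength=m) / samples.size

print("Hoeffding statistic :", kl(gamma, pi))
print("mismatched statistic:", mismatched_divergence(gamma, pi, psi))

The mismatched statistic never exceeds the Hoeffding statistic; how close it gets, and how well the resulting test performs with few observations, depends on the feature bank psi, which motivates the feature-extraction problem studied in the paper.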
Generalized Error Exponents For Small Sample Universal Hypothesis Testing
The small sample universal hypothesis testing problem is investigated in this paper, in which the number of samples n is smaller than the number of possible outcomes m. The goal of this work is to find an appropriate criterion to analyze statistical tests in this setting. A suitable model for analysis is the high-dimensional model in which both n and m increase to infinity, and n = o(m). A new performance criterion based on large deviations analysis is proposed and it generalizes the classical error exponent applicable for large sample problems (in which n \gg m). This generalized error exponent
criterion provides insights that are not available from asymptotic consistency
or central limit theorem analysis. The following results are established for
the uniform null distribution:
(i) The best achievable probability of error P_e decays as P_e = \exp\{-(n^2/m) J (1+o(1))\} for some J > 0.
(ii) A class of tests based on separable statistics, including the
coincidence-based test, attains the optimal generalized error exponents.
(iii) Pearson's chi-square test has a zero generalized error exponent and
thus its probability of error is asymptotically larger than that of the optimal test.

Comment: 43 pages, 4 figures
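As an illustration of the regime discussed in this abstract, the sketch below compares a simple coincidence (collision) count with Pearson's chi-square statistic when n is much smaller than m. The alphabet size, sample size, and alternative distribution are arbitrary illustrative choices, and these statistics are simplified stand-ins for the separable statistics analyzed in the paper.

import numpy as np

rng = np.random.default_rng(0)
m, n, trials = 10_000, 500, 200            # far more outcomes than samples

def coincidences(counts):
    """Number of pairs of samples that fall in the same bin."""
    return int(np.sum(counts * (counts - 1) // 2))

def chi_square(counts):
    """Pearson's chi-square statistic against the uniform null."""
    expected = n / m
    return float(np.sum((counts - expected) ** 2) / expected)

def sample_counts(p):
    return np.bincount(rng.choice(m, size=n, p=p), minlength=m)

uniform = np.full(m, 1.0 / m)
alt = np.full(m, 0.5 / m)
alt[: m // 10] += 0.5 / (m // 10)          # half the mass piled on 10% of the bins

for name, p in [("uniform null", uniform), ("alternative ", alt)]:
    counts = [sample_counts(p) for _ in range(trials)]
    print(name,
          "| mean coincidences:", np.mean([coincidences(c) for c in counts]),
          "| mean chi-square:", round(np.mean([chi_square(c) for c in counts]), 1))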
Sequences of binary irreducible polynomials
In this paper we construct an infinite sequence of binary irreducible polynomials starting from any irreducible polynomial f_0 \in \F_2[x]. If f_0 is of degree n = 2^l m, where m is odd and l is a non-negative integer, then after an initial finite sequence of polynomials f_0, f_1, ..., f_s, the degree of f_{i+1} is twice the degree of f_i for any i \ge s.

Comment: 7 pages, minor adjustments
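The abstract does not spell out the construction, so the sketch below is an assumption made for illustration: it iterates the Q-transform f(x) -> x^{deg f} f(x + 1/x), a standard device for building degree-doubling sequences over F_2, and checks irreducibility at each step with sympy.

from sympy import GF, Poly, expand, symbols

x = symbols('x')

def q_transform(f):
    """Assumed construction: the Q-transform x^deg(f) * f(x + 1/x) over GF(2),
    which doubles the degree of f."""
    n = f.degree()
    expr = expand(x**n * f.as_expr().subs(x, x + 1/x))
    return Poly(expr, x, domain=GF(2))

# Start from an irreducible seed, e.g. f_0 = x^2 + x + 1, and iterate.
f = Poly(x**2 + x + 1, x, domain=GF(2))
for i in range(5):
    print(f"f_{i}: degree {f.degree()}, irreducible: {f.is_irreducible}")
    f = q_transform(f)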