Prizefighting and the Birth of Movie Censorship
Censorship scholars unanimously, but mistakenly, treat a 1907 ordinance of the City of Chicago as the first act of censorship in the United States. This Article finds, however, that movie censorship was born in March 1897 with prohibitions against a now-extinct genre: prizefight films that depicted real and staged boxing fights. At the time, boxing was generally illegal, yet the sport was enormously popular and boxers enjoyed privileged social status. In fact, shortly after Thomas Edison commercialized moving picture technologies in 1894, he accommodated the production of prizefight films at his studio in New Jersey, where prizefighting was prohibited.
The Article documents the reasons for Edison's decision to veto the use of his equipment for prizefight films, only a few months after prizefight films were produced at his studio. Because of Edison's position in the industry, this decision effectively constituted the first form of content self-regulation in the motion-picture industry, roughly thirteen years before what is presently believed to be the industry's first act of content self-regulation.
This Article, therefore, begins to close a neglected gap in the literature on movie censorship. Its findings require a reexamination of content regulation in the motion picture industry, whose presumed twentieth-century origins hide legislatures and industries already experienced with censorship campaigns and laws. Despite this Article's historical reach, it provides important insights into modern-day social regulation. The failures of nineteenth-century regulators to curtail popular activities like prizefighting can inform and shape current regulatory efforts, such as the design of anti-smoking policies.
The Durapolist Puzzle: Monopoly Power in Durable-Goods Markets
This Article studies the durapolist, the durable-goods monopolist. Durapolists have long argued that, unlike perishable-goods monopolists, they face difficulties in exercising market power despite their monopolistic position. During the past thirty years, economists have extensively studied the individual arguments durapolists deploy regarding their inability to exert market power. While economists have confirmed some of these arguments, a general framework for analyzing durapolists as a distinct group of monopolists has not emerged. This Article offers such a framework. It first presents the problems of durapolists in exercising market power and explains how courts have treated these problems. It then analyzes the strategies durapolists have devised to overcome difficulties in acquiring and maintaining monopoly power and the legal implications of these strategies. This Article's major contributions are (a) expanding the conceptual scope of the durapolist problem, (b) presenting the durapolist problem as an explanation for many common business practices employed by durapolists, and (c) analyzing the legal implications of strategies employed to overcome the durapolist problem.
Piggybackers and freeloaders: platform economics and indirect liability for copyright infringement
Many, if not most, cases of alleged indirect liability for copyright
infringement arise in platform markets: one of the litigating parties is a
market intermediary that connects members of distinct groups. Indirect
liability for copyright infringement is still controversial and frequently
litigated. This paper develops an analytical framework that is applicable to
many of the debated cases. The presented framework offers strong
justifications for the imposition of indirect liability for copyright
infringement in platform markets and offers tools to establish certain
elements of indirect liability for copyright infringement.
Quadratic Scaling Bosonic Path Integral Molecular Dynamics
We present an algorithm for bosonic path integral molecular dynamics
simulations, which reduces the computational complexity with the number of
particles from cubic to quadratic. Path integral molecular dynamics simulations
of large bosonic systems are challenging, since a straightforward
implementation scales exponentially with the number of particles. We recently
developed a recursive algorithm that reduced the computational complexity from
exponential to cubic. It allowed performing the first path integral molecular
dynamics simulations of ~100 bosons, but the cubic scaling hindered
applications to much larger systems. Here, we report an improved algorithm that
scales only quadratically with system size. Simulations with our new method are
orders of magnitude faster, with a speedup that scales as N/P, where P and N
are the number of beads (imaginary time slices) and particles,
respectively. In practice, this eliminates most of the cost of including
bosonic exchange effects in path integral molecular dynamics simulations. We
use the algorithm to simulate thousands of interacting bosons using path
integral molecular dynamics for the first time, spending days of computation on
simulations that would otherwise have taken decades to complete.
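The abstract does not spell out the algorithms, but the earlier cubic-scaling method it mentions is built on a recursive evaluation of the bosonic exchange potential. The following is a minimal numerical sketch of such a recursion, with placeholder spring energies and our own notation (the actual recursion, its inputs, and the new quadratic-scaling improvement are in the paper, not reproduced here):

```python
import numpy as np

def bosonic_potential(E, beta):
    """Evaluate a recursive bosonic ring-polymer potential V_B^(N):
    exp(-beta*V[n]) = (1/n) * sum_{k=1..n} exp(-beta*(E[n,k] + V[n-k])),
    with V[0] = 0.  E[n-1, k-1] is a placeholder for the spring energy
    E_n^(k) of the ring connecting the last k of n particles."""
    N = E.shape[0]
    V = np.zeros(N + 1)
    for n in range(1, N + 1):
        # k = 1..n; V[n-1::-1][:n] lists V[n-1], V[n-2], ..., V[0]
        t = -beta * (E[n - 1, :n] + V[n - 1::-1][:n])
        m = t.max()  # log-sum-exp shift for numerical stability
        V[n] = -(m + np.log(np.exp(t - m).sum() / n)) / beta
    return V[N]

# Limiting cases: zero spring energies give zero potential, and a
# single particle reduces to its own ring energy.
v0 = bosonic_potential(np.zeros((4, 4)), beta=2.0)
v1 = bosonic_potential(np.array([[1.5]]), beta=2.0)
```

Note that the recursion itself is quadratic in N once the ring energies are available; the dominant cost in practice lies in evaluating those energies and the forces, which is where the paper's improvement applies.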
Using Backpropagation with Temporal Windows to Learn the Dynamics of the CMU Direct-Drive Arm II
Computing the inverse dynamics of a robot arm is an active area of research in the control literature. We hope to learn the inverse dynamics by training a neural network on the measured response of a physical arm. The input to the network is a temporal window of measured positions; the output is a vector of torques. We train the network on data measured from the first two joints of the CMU Direct-Drive Arm II as it moves through a randomly generated sample of "pick-and-place" trajectories. We then test generalization with a new trajectory and compare the network's output with the torque measured at the physical arm. The network is shown to generalize with a ratio of root-mean-square error to standard deviation (RMSS) of 0.10. We interpret the weights of the network in terms of the velocity and acceleration filters used in conventional control theory.
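The setup described above can be sketched as follows. Everything here is a hypothetical stand-in (synthetic trajectory, finite-difference "torque" targets, arbitrary window and layer sizes), not the CMU arm data or the paper's network; it only illustrates shaping a temporal window of positions into network inputs and training by plain backpropagation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the arm data: J joint positions over T steps;
# the "torque" target here is just a finite-difference acceleration.
W, J, T = 5, 2, 500          # window length, joints, time steps
pos = np.cumsum(rng.normal(size=(T, J)) * 0.01, axis=0)
acc = np.zeros_like(pos)
acc[1:-1] = pos[2:] - 2 * pos[1:-1] + pos[:-2]

# Each input row is a flattened temporal window of W position vectors;
# the target is the "torque" at the window's last sample.
X = np.stack([pos[t - W:t].ravel() for t in range(W, T)])
Y = acc[W - 1:T - 1]

# One-hidden-layer network trained with plain backpropagation.
H, lr = 20, 0.05
W1 = rng.normal(size=(X.shape[1], H)) * 0.1; b1 = np.zeros(H)
W2 = rng.normal(size=(H, J)) * 0.1;          b2 = np.zeros(J)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, out = forward(X)
loss_before = ((out - Y) ** 2).mean()
for _ in range(200):
    h, out = forward(X)
    g = 2 * (out - Y) / len(X)        # dLoss/dOutput for mean squared error
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ W2.T) * (1 - h ** 2)    # backpropagate through tanh
    gW1, gb1 = X.T @ gh, gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
_, out = forward(X)
loss_after = ((out - Y) ** 2).mean()
```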
Automatic Learning Rate Maximization by On-Line Estimation of the Hessian's Eigenvectors
We propose a very simple and well-principled way of computing the optimal step size in gradient descent algorithms. The on-line version is computationally very efficient and is applicable to large backpropagation networks trained on large data sets. The main ingredient is a technique for estimating the principal eigenvalue(s) and eigenvector(s) of the objective function's second-derivative matrix (Hessian), which does not even require calculating the Hessian. Several other applications of this technique are proposed for speeding up learning or for eliminating useless parameters.
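The core idea, estimating the Hessian's principal eigenpair without ever forming the Hessian, can be sketched with power iteration driven by finite-difference Hessian-vector products. This is a generic reconstruction of the technique on a toy quadratic, not the paper's on-line procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy objective f(w) = 0.5 * w^T A w with positive-definite A, so the
# Hessian is A itself; only the gradient is exposed to the method.
A = rng.normal(size=(5, 5))
A = A @ A.T
grad = lambda w: A @ w

def top_eigenpair(grad, dim, eps=1e-5, iters=200):
    """Power iteration on the Hessian using only gradient calls:
    H v is approximated by (grad(w + eps*v) - grad(w)) / eps."""
    w = np.zeros(dim)                 # point at which to probe the Hessian
    v = rng.normal(size=dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        hv = (grad(w + eps * v) - grad(w)) / eps
        v = hv / np.linalg.norm(hv)
    lam = v @ ((grad(w + eps * v) - grad(w)) / eps)  # Rayleigh quotient
    return lam, v

lam, v = top_eigenpair(grad, 5)
step = 1.0 / lam   # a step size on the order of 1/lambda_max
```

A step size of order one over the largest Hessian eigenvalue is the classical stability limit for gradient descent, which is why estimating that eigenvalue cheaply is useful.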
Modulus Computational Entropy
The so-called {\em leakage-chain rule} is a very important tool used in many
security proofs. It gives an upper bound on the entropy loss of a random
variable $X$ in case the adversary, who has already learned some random
variables $A_{1},\ldots,A_{i-1}$ correlated with $X$, obtains some further
information $A_{i}$ about $X$. Analogously to the information-theoretic
case, one might expect that also for the \emph{computational} variants of
entropy the loss depends only on the actual leakage, i.e. on $A_{i}$.
Surprisingly, Krenn et al.\ have shown recently that for the most commonly used
definitions of computational entropy this holds only if the computational
quality of the entropy deteriorates exponentially in
$|(A_{1},\ldots,A_{i-1})|$. This means that the current standard definitions
of computational entropy do not allow us to fully capture leakage that occurred
"in the past", which severely limits the applicability of this notion.
As a remedy for this problem, we propose a slightly stronger definition of
computational entropy, which we call the \emph{modulus computational entropy},
and use it as a technical tool that allows us to prove a desired chain rule
that depends only on the actual leakage and not on its history. Moreover, we
show that the modulus computational entropy unifies other, sometimes seemingly
unrelated, notions already studied in the literature in the context of
information leakage and chain rules. Our results indicate that the modulus
entropy is, up to now, the weakest restriction that guarantees that the chain
rule for the computational entropy works. As an example of application, we
demonstrate a few interesting cases where our restricted definition is
fulfilled and the chain rule holds.
Comment: Accepted at ICTS 201
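For contrast, the information-theoretic chain rule that the abstract alludes to can be stated in its standard average min-entropy form (the notation here is ours, not the paper's):

```latex
% Leaking a lambda-bit value Z costs at most lambda bits of entropy:
\widetilde{H}_{\infty}\bigl(X \mid (Z, W)\bigr)
  \;\geq\; \widetilde{H}_{\infty}\bigl(X \mid W\bigr) - \lambda,
\qquad Z \in \{0,1\}^{\lambda}.
```

The surprise discussed above is that the natural computational analogue of this bound fails unless the entropy's computational quality is allowed to degrade with the total length of all past leakage, not just $\lambda$.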
Secure self-calibrating quantum random bit generator
Random bit generators (RBGs) are key components of a variety of information
processing applications ranging from simulations to cryptography. In
particular, cryptographic systems require "strong" RBGs that produce
high-entropy bit sequences, but traditional software pseudo-RBGs have very low
entropy content and therefore are relatively weak for cryptography. Hardware
RBGs yield entropy from chaotic or quantum physical systems and therefore are
expected to exhibit high entropy, but in current implementations their exact
entropy content is unknown. Here we report a quantum random bit generator
(QRBG) that harvests entropy by measuring single-photon and entangled
two-photon polarization states. We introduce and implement a quantum
tomographic method to measure a lower bound on the "min-entropy" of the system,
and we employ this value to distill a truly random bit sequence. This approach
is secure: even if an attacker takes control of the source of optical states, a
secure random sequence can be distilled.
Comment: 5 pages, 2 figures
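The distillation step hinges on a measured lower bound on the min-entropy of the source. As a reminder of the quantity involved (a generic sketch of the definition, not the paper's tomographic method), min-entropy can be computed for any known distribution:

```python
import numpy as np

# Min-entropy: H_min(X) = -log2(max_x Pr[X = x]).  It lower-bounds how
# many nearly uniform bits an extractor can distill per sample.
def min_entropy(p):
    p = np.asarray(p, dtype=float)
    assert np.isclose(p.sum(), 1.0), "probabilities must sum to 1"
    return -np.log2(p.max())

h_uniform = min_entropy([0.25] * 4)           # 2.0 bits: ideal 2-bit source
h_biased = min_entropy([0.5, 0.3, 0.1, 0.1])  # 1.0 bit despite four outcomes
```

The biased example shows why min-entropy, rather than Shannon entropy, is the right yardstick for cryptography: a single likely outcome caps the extractable randomness no matter how many other outcomes exist.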