Our Patent System and Health Care Information Technology: Valuable Incentive or Impediment to Innovation?
Patentable inventions have often been transformative, but the pace of such innovation has accelerated dramatically in the last thirty years. Patent law still seeks to reward ingenuity, and nowhere should this maxim be truer than in the area of health information technology. But the pace and scope of change in that arena have made rewarding ingenuity with a patent increasingly difficult. The courts have struggled to apply patent law to technology that is novel to a fault. This Article addresses how it is possible to continue to reward ingenuity in a field where progress will save not just money but lives.
Automatic Order Detection and Restoration Through Systematically Improvable Variational Wave Functions
Variational wave function ansätze are an invaluable tool to study the
properties of strongly correlated systems. We propose such a wave function,
based on the theory of auxiliary fields and combining aspects of
auxiliary-field quantum Monte Carlo and modern variational optimization
techniques including automatic differentiation. The resulting ansatz,
consisting of several slices of optimized projectors, is highly expressive and
systematically improvable. We benchmark this form on the two-dimensional
Hubbard model, using both cylindrical and large, fully periodic supercells. The
computed ground-state energies are competitive with the best variational
results. Moreover, the optimized wave functions predict the correct
ground-state order with near full symmetry restoration (i.e. translation
invariance) despite initial states with incorrect orders. The ansatz can become
a tool for local order prediction, leading to a new paradigm for variational
studies of bulk systems. It can also be viewed as an approach to produce
accurate and systematically improvable wave functions in a convenient form of
non-orthogonal Slater determinants (e.g., for quantum chemistry) at polynomial
computational cost.
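As a toy illustration of the general idea of variational optimization (not the paper's auxiliary-field ansatz), the sketch below minimizes a Rayleigh quotient E(θ) = ⟨ψ(θ)|H|ψ(θ)⟩ over a two-angle parametrization of a normalized state. The 3×3 matrix H and the parametrization are made up for illustration, and a finite-difference gradient stands in for the automatic differentiation used at scale.

```python
import numpy as np

# Hypothetical small symmetric "Hamiltonian" (illustration only).
H = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  1.0, -1.0],
              [ 0.0, -1.0,  3.0]])

def energy(theta):
    # Spherical-angle parametrization keeps psi normalized by construction.
    psi = np.array([np.cos(theta[0]),
                    np.sin(theta[0]) * np.cos(theta[1]),
                    np.sin(theta[0]) * np.sin(theta[1])])
    return psi @ H @ psi

def grad(theta, eps=1e-6):
    # Central finite differences, standing in for automatic differentiation.
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        d = np.zeros_like(theta)
        d[i] = eps
        g[i] = (energy(theta + d) - energy(theta - d)) / (2 * eps)
    return g

theta = np.array([0.3, 0.3])
for _ in range(2000):            # plain gradient descent
    theta -= 0.05 * grad(theta)

e_min = min(np.linalg.eigvalsh(H))   # exact ground-state energy for comparison
print(energy(theta), e_min)
```

The variational energy converges to the exact ground-state eigenvalue of H; richer ansätze (such as the projector slices in the abstract) trade this toy's two parameters for systematically improvable expressiveness.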
Classical Shadows for Quantum Process Tomography on Near-term Quantum Computers
Quantum process tomography is a powerful tool for understanding quantum
channels and characterizing properties of quantum devices. Inspired by recent
advances using classical shadows in quantum state tomography[1], we have
developed ShadowQPT, a classical shadow method for quantum process tomography.
We introduce two related formulations with and without ancilla qubits.
ShadowQPT stochastically reconstructs the Choi matrix of the device allowing
for an a-posteriori classical evaluation of the device on arbitrary inputs with
respect to arbitrary outputs. Using shadows we then show how to compute
overlaps, generate all k-weight reduced processes, and perform reconstruction
via Hamiltonian learning. These latter two tasks are efficient for large
systems as the number of quantum measurements needed scales only
logarithmically with the number of qubits. A number of additional
approximations and improvements are developed including the use of a
pair-factorized Clifford shadow and a series of post-processing techniques
which significantly enhance the accuracy for recovering the quantum channel. We
have implemented ShadowQPT using both Pauli and Clifford measurements on the
IonQ trapped ion quantum computer for multi-qubit quantum processes and
achieved good performance.
Comment: Revised with additional Hamiltonian learning section.
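As background for the shadow formalism this abstract builds on, here is a minimal single-qubit *state*-shadow sketch: random Pauli-basis measurements, each snapshot mapped through the standard inverted measurement channel 3 U†|b⟩⟨b|U − I, then averaged. This illustrates classical shadows generally; it is not the ShadowQPT process-tomography algorithm itself, and the input state is a made-up example.

```python
import numpy as np

rng = np.random.default_rng(0)

I2  = np.eye(2, dtype=complex)
Hd  = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard
Sdg = np.array([[1, 0], [0, -1j]], dtype=complex)              # S^dagger
BASES = [Hd, Hd @ Sdg, I2]                                     # X, Y, Z bases

rho = np.array([[1, 0], [0, 0]], dtype=complex)                # |0><0| (example)

def snapshot(rho):
    # Pick a random Pauli basis, sample a measurement outcome, and apply
    # the inverse of the measurement channel: M^-1(X) = 3X - Tr(X) I.
    U = BASES[rng.integers(3)]
    probs = np.real(np.diag(U @ rho @ U.conj().T))
    b = rng.choice(2, p=probs / probs.sum())
    ket = U.conj().T[:, b].reshape(2, 1)                       # U^dagger |b>
    return 3 * (ket @ ket.conj().T) - I2

# Average many snapshots: an unbiased estimator of rho.
rho_hat = sum(snapshot(rho) for _ in range(20000)) / 20000
print(np.round(np.real(rho_hat), 2))
```

Each snapshot has unit trace by construction, and the average converges to the true density matrix; the process-tomography setting replaces rho with the channel's Choi matrix.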
On the Effect of Anticipation on Reading Times
Over the past two decades, numerous studies have demonstrated that less
predictable (i.e., higher-surprisal) words take more time to read. In general,
these studies have implicitly assumed the reading process is purely responsive:
Readers observe a new word and allocate time to process it as required. We
argue that prior results are also compatible with a reading process that is at
least partially anticipatory: Readers could make predictions about a future
word and allocate time to process it based on their expectation. In this work,
we operationalize this anticipation as a word's contextual entropy. We assess
the effect of anticipation on reading by comparing how well surprisal and
contextual entropy predict reading times on four naturalistic reading datasets:
two self-paced and two eye-tracking. Experimentally, across datasets and
analyses, we find substantial evidence for effects of contextual entropy over
surprisal on a word's reading time (RT): in fact, entropy is sometimes better
than surprisal in predicting a word's RT. Spillover effects, however, are
generally not captured by entropy, but only by surprisal. Further, we
hypothesize four cognitive mechanisms through which contextual entropy could
impact RTs -- three of which we are able to test experimentally.
Overall, our results support a view of reading that is not just responsive, but
also anticipatory.
Comment: This is a pre-MIT Press publication version of the paper. Code is available at https://github.com/rycolab/anticipation-on-reading-time
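The two predictors this abstract compares are simple information-theoretic quantities: surprisal is −log₂ p(w | context) of the word actually read, while contextual entropy is the *expected* surprisal over the next-word distribution. A minimal sketch with a made-up distribution (the words and probabilities are invented, not from any model):

```python
import math

# Hypothetical next-word distribution given some context.
next_word_probs = {"dog": 0.5, "cat": 0.3, "pangolin": 0.2}

def surprisal(word, probs):
    # Surprisal of the observed word: -log2 p(w | context), in bits.
    return -math.log2(probs[word])

def contextual_entropy(probs):
    # Expected surprisal over the distribution: sum_w p(w) * -log2 p(w).
    return sum(p * -math.log2(p) for p in probs.values())

print(surprisal("pangolin", next_word_probs))  # high: an unexpected word
print(contextual_entropy(next_word_probs))     # known before the word appears
```

The key conceptual difference for the responsive-vs-anticipatory question is that entropy is computable *before* the next word is observed, so it can drive anticipatory processing, whereas surprisal depends on the word that actually arrives.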
Testing the Predictions of Surprisal Theory in 11 Languages
A fundamental result in psycholinguistics is that less predictable words take
a longer time to process. One theoretical explanation for this finding is
Surprisal Theory (Hale, 2001; Levy, 2008), which quantifies a word's
predictability as its surprisal, i.e. its negative log-probability given a
context. While findings supporting the predictions of Surprisal Theory have
been replicated widely, most studies have focused on a very narrow slice of data:
native English speakers reading English texts. Indeed, no comprehensive
multilingual analysis exists. We address this gap in the current literature by
investigating the relationship between surprisal and reading times in eleven
different languages, distributed across five language families. Deriving
estimates from language models trained on monolingual and multilingual corpora,
we test three predictions associated with surprisal theory: (i) whether
surprisal is predictive of reading times; (ii) whether expected surprisal, i.e.
contextual entropy, is predictive of reading times; and (iii) whether the
linking function between surprisal and reading times is linear. We find that
all three predictions are borne out crosslinguistically. By focusing on a more
diverse set of languages, we argue that these results offer the most robust
link to date between information theory and incremental language processing
across languages.
Comment: This is a pre-MIT Press publication version of the paper.
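Prediction (iii) posits a linear linking function, RT = a + b · surprisal. A minimal sketch of how such a fit looks in practice, using ordinary least squares on synthetic, exactly linear data (the numbers are invented, not from the paper's corpora, where the fit would of course be noisy):

```python
import numpy as np

# Hypothetical per-word surprisal values (bits) and reading times (ms)
# generated from an exactly linear linking function for illustration.
surprisal = np.array([1.0, 2.5, 4.0, 5.5, 7.0, 8.5])
rt = 180.0 + 12.0 * surprisal

# Least-squares fit of degree 1: polyfit returns [slope, intercept].
b, a = np.polyfit(surprisal, rt, 1)
print(a, b)   # recovers the intercept (~180 ms) and slope (~12 ms/bit)
```

On real reading-time data, the crosslinguistic test compares such a linear fit against nonlinear alternatives while controlling for baseline predictors like word length and frequency.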