A High Power Hydrogen Target for Parity Violation Experiments
Parity-violating electron scattering measurements on hydrogen and deuterium,
such as those underway at the Bates and CEBAF laboratories, require
luminosities exceeding 10^38 cm^-2 s^-1, resulting in large beam
power deposition into cryogenic liquid. Such targets must be able to absorb 500
watts or more with minimal change in target density. A 40 cm long liquid hydrogen target, designed to absorb 500 watts of beam power without boiling, has been developed for the SAMPLE experiment at Bates. In recent tests with 40 μA of incident beam, no evidence was seen for density fluctuations in the target, at a sensitivity level of better than 1%. A summary of the target
design and operational experience will be presented.
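As a rough consistency check on the luminosity scale quoted above, the short sketch below estimates the luminosity of a 40 cm liquid hydrogen target with a 40 μA electron beam. The hydrogen density and physical constants are standard textbook values, not numbers taken from the paper; the calculation is illustrative only.

```python
# Back-of-the-envelope luminosity estimate for a 40 cm liquid hydrogen (LH2)
# target with a 40 microampere electron beam. Illustrative values only; the
# density figure is the standard LH2 value, not a number from the paper.

ELEMENTARY_CHARGE = 1.602e-19   # C
AVOGADRO = 6.022e23             # atoms per mole
LH2_DENSITY = 0.071             # g/cm^3, liquid hydrogen near 20 K
MOLAR_MASS_H = 1.008            # g/mol per hydrogen atom

def luminosity(beam_current_amps, target_length_cm):
    """Return luminosity in cm^-2 s^-1 for an LH2 target."""
    electrons_per_second = beam_current_amps / ELEMENTARY_CHARGE
    atoms_per_cm2 = LH2_DENSITY * target_length_cm * AVOGADRO / MOLAR_MASS_H
    return electrons_per_second * atoms_per_cm2

print(f"L = {luminosity(40e-6, 40):.1e} cm^-2 s^-1")   # roughly 4e38
```

The result, a few times 10^38 cm^-2 s^-1, is consistent with the luminosity scale mentioned in the abstract.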
Parity Violation with Electrons and Hadrons
A key question in understanding the structure of nucleons involves the role
of sea quarks in their ground state electromagnetic properties such as charge
and magnetism. Parity-violating electron scattering, when combined with
determination of nucleon electromagnetic form factors from parity-conserving
e-N scattering, provides another degree of freedom to separately determine the
up, down and strange quark contributions to nucleon electromagnetic structure.
Strange quarks are unique in that they are exclusively in the nucleon's sea. A
program of experiments using parity violating electron scattering has been
underway for approximately a decade, and results are beginning to emerge. This
paper is a brief overview of the various experiments and their results to date
along with a short-term outlook of what can be anticipated from experiments in
the next few years. (Invited talk at the 17th International IUPAP Conference on Few-Body Problems in Physics.)
Bayesian system identification of dynamical systems using highly informative training data
This paper is concerned with the Bayesian system identification of structural dynamical systems using experimentally obtained training data. It is motivated by situations where, from a large quantity of training data, one must select a subset to infer probabilistic models. To that end, using concepts from information theory, expressions are derived which allow one to approximate the effect that a set of training data will have on parameter uncertainty, as well as on the plausibility of candidate model structures. The usefulness of this concept is then demonstrated through the system identification of several dynamical systems using both physics-based and emulator models. The result is a rigorous scientific framework which can be used to select 'highly informative' subsets from large quantities of training data.
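The idea of scoring candidate training subsets by their expected effect on parameter uncertainty can be illustrated in the simplest setting, a linear-in-the-parameters model with Gaussian noise, where the information gain has a closed form. The greedy selection rule, the model, and all numbers below are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Toy illustration: greedily pick the most informative data points for a
# linear-Gaussian model y = X @ theta + noise. The informativeness score is
# the expected reduction in posterior entropy (a log-determinant gain).
# This sketches the general idea only, not the paper's actual procedure.

rng = np.random.default_rng(0)
n_candidates, n_params, noise_var = 200, 3, 0.1 ** 2
X = rng.normal(size=(n_candidates, n_params))   # candidate training inputs
prior_cov = np.eye(n_params)                    # Gaussian prior on theta

def entropy_gain(rows):
    """0.5 * logdet(I + X_S Sigma_prior X_S^T / sigma^2) for the chosen rows."""
    Xs = X[rows]
    gram = np.eye(len(rows)) + Xs @ prior_cov @ Xs.T / noise_var
    return 0.5 * np.linalg.slogdet(gram)[1]

selected = []
for _ in range(10):                             # pick a 10-point subset
    remaining = [i for i in range(n_candidates) if i not in selected]
    best = max(remaining, key=lambda i: entropy_gain(selected + [i]))
    selected.append(best)

print("selected indices:", selected)
print("information gain (nats): %.2f" % entropy_gain(selected))
```

The greedy rule here simply adds whichever candidate most increases the log-determinant gain; any other subset-scoring criterion could be slotted into the same loop.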
An Energy Feedback System for the MIT/Bates Linear Accelerator
We report the development and implementation of an energy feedback system for
the MIT/Bates Linear Accelerator Center. General requirements of the system are
described, as are the specific requirements, features, and components of the
system unique to its implementation at the Bates Laboratory. We demonstrate
that with the system in operation, energy fluctuations correlated with the 60
Hz line voltage and with drifts of thermal origin are reduced by an order of
magnitude.
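As a generic illustration of the kind of correction such a system must apply, the sketch below operates on synthetic energy deviations containing a 60 Hz line-synchronous ripple and a slow drift, estimating and subtracting both from recent measurements. The measurement rate, disturbance model, and correction scheme are assumptions for illustration only and do not describe the Bates hardware.

```python
import numpy as np

# Generic sketch: measured energy deviations contain a 60 Hz line-synchronous
# ripple plus a slow thermal drift. The 60 Hz quadrature amplitudes and the
# running mean are estimated from the last second of data and subtracted.
# All numbers are assumed for illustration, not taken from the Bates system.

fs = 600.0                                     # assumed measurement rate (Hz)
t = np.arange(0.0, 5.0, 1.0 / fs)
deviation = (1.0e-3 * np.sin(2 * np.pi * 60.0 * t + 0.7)   # line ripple
             + 2.0e-4 * t                                   # slow drift
             + 5.0e-5 * np.random.default_rng(1).normal(size=t.size))

window = int(fs)                               # one-second estimation window
corrected = deviation.copy()
for k in range(window, t.size):
    hist = deviation[k - window:k]
    th = t[k - window:k]
    c = np.cos(2 * np.pi * 60.0 * th)
    s = np.sin(2 * np.pi * 60.0 * th)
    a_c = 2.0 * np.mean(hist * c)              # 60 Hz cosine amplitude
    a_s = 2.0 * np.mean(hist * s)              # 60 Hz sine amplitude
    predicted = (hist.mean()
                 + a_c * np.cos(2 * np.pi * 60.0 * t[k])
                 + a_s * np.sin(2 * np.pi * 60.0 * t[k]))
    corrected[k] = deviation[k] - predicted

print("rms deviation before correction: %.2e" % np.std(deviation[window:]))
print("rms deviation after correction:  %.2e" % np.std(corrected[window:]))
```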
Templates for Convex Cone Problems with Applications to Sparse Signal Recovery
This paper develops a general framework for solving a variety of convex cone
problems that frequently arise in signal processing, machine learning,
statistics, and other fields. The approach works as follows: first, determine a
conic formulation of the problem; second, determine its dual; third, apply
smoothing; and fourth, solve using an optimal first-order method. A merit of
this approach is its flexibility: for example, all compressed sensing problems
can be solved via this approach. These include models with objective
functionals such as the total-variation norm, ||Wx||_1 where W is arbitrary, or
a combination thereof. In addition, the paper also introduces a number of
technical contributions such as a novel continuation scheme, a novel approach
for controlling the step size, and some new results showing that the smooth and
unsmoothed problems are sometimes formally equivalent. Combined with our
framework, these lead to novel, stable and computationally efficient
algorithms. For instance, our general implementation is competitive with
state-of-the-art methods for solving intensively studied problems such as the
LASSO. Further, numerical experiments show that one can solve the Dantzig
selector problem, for which no efficient large-scale solvers exist, in a few
hundred iterations. Finally, the paper is accompanied by a software release.
This software is not a single, monolithic solver; rather, it is a suite of
programs and routines designed to serve as building blocks for constructing
complete algorithms. (The TFOCS software is available at http://tfocs.stanford.edu).
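To make the four-step recipe concrete, the sketch below applies it to basis pursuit: the equality-constrained l1 problem is smoothed with a small quadratic term, its dual becomes differentiable, and an accelerated (Nesterov-style) gradient ascent is run on that dual. This is only a schematic illustration in the spirit of the approach, not the released TFOCS code; the smoothing parameter, problem sizes, and iteration count are arbitrary choices.

```python
import numpy as np

# Smoothed basis pursuit: min ||x||_1 + (mu/2)||x||^2 subject to Ax = b.
# The smoothed dual is differentiable with Lipschitz gradient, so it can be
# maximized by an accelerated first-order method. Illustrative sketch only.

rng = np.random.default_rng(0)
m, n, k = 60, 128, 8
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
b = A @ x_true

mu = 1e-2                                      # smoothing parameter
L = np.linalg.norm(A, 2) ** 2 / mu             # Lipschitz constant of dual gradient

def primal_from_dual(lam):
    """x*(lam) = shrink(A^T lam, 1) / mu, the smoothed-primal minimizer."""
    z = A.T @ lam
    return np.sign(z) * np.maximum(np.abs(z) - 1.0, 0.0) / mu

lam = np.zeros(m)
lam_prev = lam.copy()
for it in range(1, 500):
    # Nesterov momentum on the concave smoothed dual.
    y = lam + (it - 1) / (it + 2) * (lam - lam_prev)
    grad = b - A @ primal_from_dual(y)         # gradient of the dual at y
    lam_prev = lam
    lam = y + grad / L                         # ascent step

x_hat = primal_from_dual(lam)
print("relative error: %.3f" % (np.linalg.norm(x_hat - x_true)
                                / np.linalg.norm(x_true)))
```

Swapping in a different objective or constraint only changes the closed-form primal recovery step; the conic-dual-smoothing scaffolding stays the same, which is the flexibility the abstract refers to.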
Repetitive Negative Thinking in Anticipation of a Stressor
Repetitive negative thinking (RNT) has been confirmed as a transdiagnostic phenomenon, but most measures of RNT are contaminated with diagnosis-specific content. The first aim of this study was to examine the structure of an anticipatory version of the Repetitive Thinking Questionnaire (RTQ-Ant) as a trans-emotional measure of anticipatory RNT. The original RTQ was completed with reference to a past stressor, whereas the RTQ-Ant instructs respondents to link their responses to a future stressor. The second aim was to test whether the associations between a range of emotions (anxiety, depression, shame, anger, general distress) and the original post-stressor version of the RTQ would be replicated. Undergraduates (N = 175, 61% women) completed the RTQ-Ant, along with measures of various emotions, with reference to upcoming university exams. Principal axis factor analysis yielded many similarities between the original post-event RTQ and the RTQ-Ant, and some differences. The RTQ-Ant consisted of two subscales: the RNT subscale measured engagement in repetitive thinking, negative thoughts about oneself, and ‘why’ questions; and the Isolated Contemplation (IC) subscale included items referring to isolating oneself and reflecting on negative thoughts, feelings, loneliness, and listening to sad music. RNT was more strongly related to negative emotions than IC was. The RTQ-Ant appears to be a reliable measure of anticipatory RNT that is associated with a broad array of emotions.
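For readers unfamiliar with this kind of analysis, the snippet below shows the general shape of a two-factor extraction from questionnaire items. It uses scikit-learn's maximum-likelihood FactorAnalysis as a stand-in for principal axis factoring and runs on synthetic responses, so the loadings it prints bear no relation to the RTQ-Ant results.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Illustration of extracting a two-factor structure from item responses.
# Maximum-likelihood factor analysis with varimax rotation is used here as a
# stand-in for principal axis factoring, and the responses are synthetic.

rng = np.random.default_rng(0)
n_respondents, n_items = 175, 12

# Synthetic responses driven by two latent factors plus item noise.
loadings = np.zeros((n_items, 2))
loadings[:8, 0] = rng.uniform(0.5, 0.9, 8)     # items loading on factor 1
loadings[8:, 1] = rng.uniform(0.5, 0.9, 4)     # items loading on factor 2
factors = rng.normal(size=(n_respondents, 2))
responses = factors @ loadings.T + 0.5 * rng.normal(size=(n_respondents, n_items))

fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(responses)
print(np.round(fa.components_.T, 2))           # item loadings on the two factors
```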
Search for a T_20 Analyzer for Deuterons
This work was supported by the National Science Foundation Grant NSF PHY 81-14339 and by Indiana University.
Efficient MR Image Reconstruction for Compressed MR Imaging
In this paper, we propose an efficient algorithm for MR image reconstruction. The algorithm minimizes a linear combination of three terms corresponding to least-squares data fitting, total variation (TV) regularization, and L1 norm regularization. This combination has been shown to be very powerful for MR image reconstruction. First, we decompose the original problem into L1 and TV norm regularization subproblems. Then, these two subproblems are efficiently solved by existing techniques. Finally, the reconstructed image is obtained as the weighted average of the solutions of the two subproblems within an iterative framework. We compare the proposed algorithm with previous methods in terms of reconstruction accuracy and computational complexity. Numerous experiments demonstrate the superior performance of the proposed algorithm for compressed MR image reconstruction.
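The decompose-and-average scheme described above can be sketched as follows, with scikit-image's Chambolle TV denoiser standing in for the TV subproblem and plain soft-thresholding (applied in the image domain for brevity, rather than in a wavelet domain) standing in for the L1 subproblem. The phantom, sampling mask, weights, and step size are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

# Sketch of the composite-splitting idea: each iteration takes a gradient
# step on the data-fidelity term, solves an L1 (soft-thresholding) subproblem
# and a TV (denoising) subproblem from the same point, and averages the two
# results. All parameters are illustrative.

rng = np.random.default_rng(0)
n = 64
image = np.zeros((n, n))
image[16:48, 16:48] = 1.0
image[24:40, 24:40] = 0.5                      # simple piecewise-constant phantom

mask = rng.random((n, n)) < 0.35               # random k-space sampling mask
y = mask * np.fft.fft2(image, norm="ortho")    # undersampled measurements

def grad_step(x, step):
    """Gradient step on 0.5 * ||mask * F(x) - y||^2 with an orthonormal FFT."""
    residual = mask * np.fft.fft2(x, norm="ortho") - y
    return x - step * np.real(np.fft.ifft2(mask * residual, norm="ortho"))

x = np.zeros((n, n))
step, lam_l1, lam_tv = 1.0, 0.01, 0.05
for _ in range(50):
    z = grad_step(x, step)
    x_l1 = np.sign(z) * np.maximum(np.abs(z) - step * lam_l1, 0.0)   # L1 prox
    x_tv = denoise_tv_chambolle(z, weight=step * lam_tv)             # TV denoising
    x = 0.5 * (x_l1 + x_tv)                                          # weighted average

print("relative error: %.3f" % (np.linalg.norm(x - image) / np.linalg.norm(image)))
```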
Hypermagnetic Field Effects in the Thermal Bath of Chiral Fermions
The dispersion relations for leptons in the symmetric phase of the
electroweak model in the presence of a constant hypermagnetic field are
investigated. The one-loop fermion self-energies are calculated in the lowest
Landau level approximation and used to show that the hypermagnetic field
forbids the generation of the "effective mass" found as a pole of the fermions' propagators at high temperature and zero fields. In the considered approximation, leptons behave as massless particles propagating only along the
direction of the external field. The reported results can be of interest for
the cosmological implications of primordial hypermagnetic fields.
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. Popular examples of such priors include sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop for understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum, including: (i) recovery guarantees and stability to noise, both in terms of ℓ2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
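As a concrete instance of item (iii), the sketch below runs the forward-backward (proximal gradient) scheme on the l1-regularized least-squares problem, the simplest member of the family of low-complexity regularizers discussed above. The problem sizes and regularization weight are arbitrary illustrative choices.

```python
import numpy as np

# Forward-backward splitting for min_x 0.5*||Ax - y||^2 + lam*||x||_1 (Lasso).
# The forward step is a gradient step on the smooth data-fidelity term; the
# backward step is the proximal operator of the l1 norm (soft-thresholding).

rng = np.random.default_rng(0)
m, n, k = 100, 256, 10
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x_true + 0.01 * rng.normal(size=m)

lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L for the smooth part

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)                            # forward (explicit) step
    x = soft_threshold(x - step * grad, step * lam)     # backward (proximal) step

support = np.flatnonzero(np.abs(x) > 1e-6)
print("recovered support size:", support.size)
print("relative error: %.3f" % (np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))
```

In the language of partial smoothness, the l1 norm is partly smooth relative to the subspace of vectors sharing the solution's support, which is what underlies the finite-time identification of that support by the iterations analysed in the review.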