"Ordinary, the same as anywhere else": notes on the management of spoiled identity in 'marginal' middle class neighbourhoods
Urban sociologists are becoming increasingly interested in the neighbourhood as a source of middle-class identity. Particular emphasis is currently being given to two types of middle-class neighbourhood: gentrified urban neighbourhoods of ‘distinction’ and inconspicuous ‘suburban landscapes of privilege’. However, there has been a dearth of work on ‘marginal’ middle-class neighbourhoods that are similarly ‘inconspicuous’ rather than distinctive, but less exclusive, and thus contain sources of ‘spoiled identity’. This article draws on data gathered from two ‘marginal’ middle-class neighbourhoods that contained a particular source of ‘spoiled identity’: social renters. Urban sociological analyses of neighbour responses to such situations highlight a process of dis-identification with the maligned object, which exacerbates differences between neighbours. Our analysis of data from the ‘marginal’ middle-class neighbourhoods suggests something entirely different and Goffmanesque: the management of spoiled identity, which emphasized similarities rather than differences between neighbours.
Unfolding Rates for the Diffusion-Collision Model
In the diffusion-collision model, the unfolding rates are given by the
likelihood of secondary structural cluster dissociation. In this work, we
introduce an unfolding rate calculation for proteins whose secondary structural
elements are α-helices, modeled as thermal escape over a barrier which
arises from the free energy in buried hydrophobic residues. Our results are in
good agreement with currently accepted values for the attempt rate.
Comment: Shorter version of cond-mat/0011024, accepted for publication in PR
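The thermal-escape picture described above can be sketched with an Arrhenius-type rate over a free-energy barrier. This is a minimal illustration only; the attempt rate, barrier height, and temperature below are placeholder values, not the paper's fitted results.

```python
import math

def unfolding_rate(attempt_rate_hz, barrier_kcal_mol, temperature_k=300.0):
    """Arrhenius-type rate for thermal escape over a free-energy barrier."""
    R = 1.987e-3  # gas constant in kcal/(mol K)
    return attempt_rate_hz * math.exp(-barrier_kcal_mol / (R * temperature_k))

# Placeholder numbers: a 1/ns attempt rate and a 5 kcal/mol barrier.
rate = unfolding_rate(1e9, 5.0)
```

At 300 K, raising the barrier by about 1.4 kcal/mol lowers the rate by roughly a factor of ten, which is why the barrier term dominates the attempt-rate prefactor.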
A "superstorm": When moral panic and new risk discourses converge in the media
This is an Author's Accepted Manuscript of an article published in Health, Risk and Society, 15(6), 681-698, 2013, copyright Taylor & Francis, available online at: http://www.tandfonline.com/10.1080/13698575.2013.851180.
There has been a proliferation of risk discourses in recent decades, but studies of these have been polarised, drawing either on moral panic or new risk frameworks to analyse journalistic discourses. This article opens the theoretical possibility that the two may co-exist and converge in the same scare. I do this by bringing together more recent developments in the moral panic thesis with new risk theory and the concept of media logic. I then apply this theoretical approach to an empirical analysis of how, and with what consequences, moral panic and new risk type discourses converged in the editorials of four newspaper campaigns against GM food policy in Britain in the late 1990s. The article analyses 112 editorials published between January 1998 and December 2000, supplemented with news stories where these were needed for contextual clarity. This analysis shows that not only did this novel food generate intense media and public reactions; these developed in the absence of the type of concrete details journalists usually look for in risk stories. Media logic is important in understanding how journalists were able to engage and hence how a major scare could be constructed around convergent moral panic and new risk type discourses. The result was a media ‘superstorm’ of sustained coverage in which both types of discourse converged in highly emotive, mutually reinforcing ways that resonated in a highly sensitised context. The consequence was acute anxiety, social volatility and the potential for the disruption of policy and social change.
Motivational engagement in first-time hearing aid users: a feasibility study
Objective: To assess (1) the feasibility of incorporating the Ida Institute’s Motivation Tools into a UK audiology service, (2) the potential benefits of motivational engagement in first-time hearing aid users, and (3) predictors of hearing aid and general health outcome measures.
Design: A feasibility study using a single-centre, prospective, quasi-randomized controlled design with two arms. The Ida Institute’s Motivation Tools formed the basis for motivational engagement.
Study sample: First-time hearing aid users were recruited at the initial hearing assessment appointment. The intervention arm underwent motivational engagement (M+, n = 32), and a control arm (M-, n = 36) received standard care only.
Results: The M+ group showed greater self-efficacy, reduced anxiety, and greater engagement with the audiologist at assessment and fitting appointments. However, there were no significant between-group differences 10 weeks post-fitting. Hearing-related communication scores predicted anxiety, and social isolation scores predicted depression for the M+ group. Readiness to address hearing difficulties predicted hearing aid outcomes for the M- group. Hearing sensitivity was not a predictor of outcomes.
Conclusions: There were some positive results from motivational engagement early in the patient journey. Future research should consider using qualitative methods to explore whether there are longer-term benefits of motivational engagement in hearing aid users.
A closer look at the uncertainty relation of position and momentum
We consider particles prepared by the von Neumann-Lüders projection. For
those particles the standard deviation of the momentum is discussed. We show
that infinite standard deviations are not exceptions but rather typical. A
necessary and sufficient condition for finite standard deviations is given.
Finally, a new uncertainty relation is derived and it is shown that the latter
cannot be improved.
Comment: 3 pages, introduction shortened, content unchanged
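The abstract's central point, that states prepared by a sharp position projection typically have infinite momentum spread, can be illustrated numerically. This is a sketch using an illustrative box state of my own choosing (hbar = 1), not the paper's construction.

```python
import numpy as np

def second_moment(p_max, n=200001):
    """Truncated second moment <p^2>_{|p| <= p_max} of momentum for the
    box state psi(x) = 1 on [-1/2, 1/2], i.e. a state cut off sharply
    by a position projection.  Analytically the result grows linearly
    with p_max, so the full momentum standard deviation is infinite."""
    p, dp = np.linspace(-p_max, p_max, n, retstep=True)
    # |phi(p)|^2 = (sin(p/2) / (p/2))^2 / (2*pi); np.sinc handles p = 0.
    phi2 = np.sinc(p / (2 * np.pi)) ** 2 / (2 * np.pi)
    return float(np.sum(p**2 * phi2) * dp)
```

Doubling the momentum cutoff roughly doubles the truncated moment, which is the numerical signature of a divergent second moment.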
The Muonium Atom as a Probe of Physics beyond the Standard Model
To date, the observed interactions between particles are not fully explained by
the successful theoretical description of the standard model. Due to the
close confinement of the bound state, muonium (μ+e−) can be used as
an ideal probe of quantum electrodynamics and weak interaction and also for a
search for additional interactions between leptons. Of special interest is the
lepton number violating process of spontaneous conversion of muonium to
antimuonium.
Comment: 15 pages, 6 figures
Gemini Observations of Disks and Jets in Young Stellar Objects and in Active Galaxies
We present first results from the Near-infrared Integral Field Spectrograph
(NIFS) located at Gemini North. For the active galaxies Cygnus A and Perseus A
we observe rotationally supported accretion disks, infer the existence of
massive central black holes, and estimate their masses. In Cygnus A we also see
remarkable high-excitation ionization cones dominated by photoionization from
the central engine. In the T-Tauri stars HV Tau C and DG Tau we see
highly-collimated bipolar outflows in the [Fe II] 1.644 micron line, surrounded
by a slower molecular bipolar outflow seen in the H_2 lines, in accordance with
the model advocated by Pyo et al. (2002).
Comment: Invited paper presented at the 5th Stromlo Symposium. 9 pages, 7 figures. Accepted for publication in Astrophysics & Space Science
Langevin Simulations of Two Dimensional Vortex Fluctuations: Anomalous Dynamics and a New IV-exponent
The dynamics of two dimensional (2D) vortex fluctuations are investigated
through simulations of the 2D Coulomb gas model in which vortices are
represented by soft disks with logarithmic interactions. The simulations
strongly support a recent suggestion that 2D vortex fluctuations obey an
intrinsic anomalous dynamics manifested in a long-range 1/t tail in the vortex
correlations. A new non-linear IV-exponent a, which differs from the
commonly used AHNS exponent a_AHNS and is given by a = 2a_AHNS - 3, is
confirmed by the simulations. The results are discussed in the context of
earlier simulations, experiments, and a phenomenological description.
Comment: Submitted to PRB, RevTeX format, 28 pages and 13 figures; figures in
postscript format are available at http://www.tp.umu.se/~holmlund/papers.htm
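A single update of the kind of Langevin dynamics the abstract describes, soft-core charges with logarithmic (2D Coulomb) interactions, might look as follows. This is an illustrative sketch; the parameters, the soft-core form, and the absence of periodic boundaries are my own simplifications, not the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def langevin_step(pos, charge, dt=1e-3, temperature=0.9, core=0.5):
    """One overdamped Langevin update for 2D Coulomb-gas charges with
    pair potential -q_i * q_j * ln(r); the force on charge i from j is
    q_i * q_j * (r_i - r_j) / r^2, with a soft core taming short range."""
    n = len(pos)
    force = np.zeros_like(pos)
    for i in range(n):
        d = pos[i] - pos                          # displacements to all charges
        r2 = (d**2).sum(axis=1) + core**2         # softened squared distances
        r2[i] = np.inf                            # no self-force
        force[i] = (charge[i] * charge[:, None] * d / r2[:, None]).sum(axis=0)
    noise = rng.normal(scale=np.sqrt(2 * temperature * dt), size=pos.shape)
    return pos + dt * force + noise

# Example: a neutral mixture of +/- "vortices" started at random positions.
pos = rng.uniform(-5.0, 5.0, size=(20, 2))
charge = np.resize([1.0, -1.0], 20)
pos = langevin_step(pos, charge)
```

With the noise switched off (temperature 0), opposite charges drift together and like charges apart, as the logarithmic interaction dictates.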
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory form a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one stop shop toward the understanding of the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of ℓ2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
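The forward-backward proximal splitting scheme mentioned in point (iii) can be sketched for the ℓ1-regularized least-squares problem. This is a minimal, unaccelerated version; the matrix A, data y, and weight lam are generic placeholders, not objects from the chapter.

```python
import numpy as np

def forward_backward(A, y, lam, n_iter=500):
    """Forward-backward (proximal-gradient) iterations for
    min_x 0.5*||Ax - y||^2 + lam*||x||_1: an explicit gradient step on
    the smooth term, then the soft-thresholding prox of the l1 term."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                              # forward step
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # backward (prox)
    return x
```

Swapping the soft-thresholding line for another proximal map handles the other low-complexity priors the chapter reviews (group sparsity, total variation, nuclear norm), which is exactly the modularity that makes the scheme attractive.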
Templates for Convex Cone Problems with Applications to Sparse Signal Recovery
This paper develops a general framework for solving a variety of convex cone
problems that frequently arise in signal processing, machine learning,
statistics, and other fields. The approach works as follows: first, determine a
conic formulation of the problem; second, determine its dual; third, apply
smoothing; and fourth, solve using an optimal first-order method. A merit of
this approach is its flexibility: for example, all compressed sensing problems
can be solved via this approach. These include models with objective
functionals such as the total-variation norm, ||Wx||_1 where W is arbitrary, or
a combination thereof. In addition, the paper also introduces a number of
technical contributions such as a novel continuation scheme, a novel approach
for controlling the step size, and some new results showing that the smooth and
unsmoothed problems are sometimes formally equivalent. Combined with our
framework, these lead to novel, stable and computationally efficient
algorithms. For instance, our general implementation is competitive with
state-of-the-art methods for solving intensively studied problems such as the
LASSO. Further, numerical experiments show that one can solve the Dantzig
selector problem, for which no efficient large-scale solvers exist, in a few
hundred iterations. Finally, the paper is accompanied with a software release.
This software is not a single, monolithic solver; rather, it is a suite of
programs and routines designed to serve as building blocks for constructing
complete algorithms.
Comment: The TFOCS software is available at http://tfocs.stanford.edu. This version has updated references.
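The recipe above (constrained formulation, dual, smoothing, first-order solve) can be illustrated on basis pursuit, min ||x||_1 subject to Ax = y. The sketch below uses plain dual gradient ascent rather than the optimal first-order methods TFOCS employs, and all parameters are illustrative assumptions.

```python
import numpy as np

def smoothed_bp(A, y, mu=0.1, step=None, n_iter=2000):
    """Smoothed-dual sketch of basis pursuit: adding (mu/2)*||x||^2 makes
    the dual differentiable, the inner minimization over x has the
    closed form x = soft(A^T lam, 1) / mu, and the dual gradient is
    y - A x, ascended with a fixed step."""
    m, n = A.shape
    lam_dual = np.zeros(m)
    if step is None:
        step = mu / np.linalg.norm(A, 2) ** 2   # dual gradient is (||A||^2/mu)-Lipschitz
    for _ in range(n_iter):
        z = A.T @ lam_dual
        x = np.sign(z) * np.maximum(np.abs(z) - 1.0, 0.0) / mu   # inner argmin
        lam_dual += step * (y - A @ x)                            # dual ascent
    return x
```

Replacing the plain ascent with an accelerated (Nesterov-type) update is what turns this sketch into the kind of optimal first-order solver the framework is built around.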
