Estimating the Causal Effects of Marketing Interventions Using Propensity Score Methodology
Propensity score methods were proposed by Rosenbaum and Rubin [Biometrika 70
(1983) 41--55] as central tools to help assess the causal effects of
interventions. Since their introduction more than two decades ago, they have
found wide application in a variety of areas, including medical research,
economics, epidemiology and education, especially in those situations where
randomized experiments are either difficult to perform, or raise ethical
questions, or would require extensive delays before answers could be obtained.
In the past few years, the number of published applications using propensity
score methods to evaluate medical and epidemiological interventions has
increased dramatically. Nevertheless, thus far, we believe that there have been
few applications of propensity score methods to evaluate marketing
interventions (e.g., advertising, promotions), where the tradition is to use
generally inappropriate techniques, which focus on the prediction of an outcome
from background characteristics and an indicator for the intervention using
statistical tools such as least-squares regression, data mining, and so on.
With these techniques, an estimated parameter in the model is used to estimate
some global ``causal'' effect. This practice can generate grossly incorrect
answers that can be self-perpetuating: polishing the Ferraris rather than the
Jeeps ``causes'' them to continue to win more races than the Jeeps; visiting
the high-prescribing doctors rather than the low-prescribing doctors
``causes'' them to continue to write more
prescriptions. This presentation will take ``causality'' seriously, not just as
a casual concept implying some predictive association in a data set, and will
illustrate why propensity score methods are generally superior in practice to
the standard predictive approaches for estimating causal effects.

Comment: Published at http://dx.doi.org/10.1214/088342306000000259 in
Statistical Science (http://www.imstat.org/sts/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
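The contrast the abstract draws can be illustrated with a small simulation (a hedged sketch, not the authors' method: the data-generating process, the logistic propensity model, and the five-stratum adjustment are all invented here). A confounder drives both treatment and outcome, so the naive difference in means is biased upward, while averaging contrasts within propensity-score strata recovers the true effect:

```python
import math
import random

random.seed(0)
TAU = 1.0    # true causal effect of the intervention (set by the simulation)
N = 20000

# A confounder x drives both treatment assignment (via a logistic propensity
# score) and the outcome, mimicking e.g. promotions targeted at good accounts.
xs, ts, ys = [], [], []
for _ in range(N):
    x = random.random()
    p = 1.0 / (1.0 + math.exp(-4.0 * (x - 0.5)))   # propensity score e(x)
    t = 1 if random.random() < p else 0
    y = 2.0 * x + TAU * t + random.gauss(0.0, 0.5)
    xs.append(x); ts.append(t); ys.append(y)

def mean(v):
    return sum(v) / len(v)

# Naive predictive contrast: raw difference in mean outcomes, which absorbs
# the confounding ("polishing the Ferraris") and overstates the effect.
naive = (mean([y for y, t in zip(ys, ts) if t == 1])
         - mean([y for y, t in zip(ys, ts) if t == 0]))

# Propensity-score stratification: compare treated and control only within
# strata of e(x) (here, quintiles of x, since e(x) is monotone in x), then
# average the within-stratum contrasts weighted by stratum size.
strata = 5
effects, weights = [], []
for s in range(strata):
    lo, hi = s / strata, (s + 1) / strata
    idx = [i for i in range(N) if lo <= xs[i] < hi]
    y1 = [ys[i] for i in idx if ts[i] == 1]
    y0 = [ys[i] for i in idx if ts[i] == 0]
    if y1 and y0:
        effects.append(mean(y1) - mean(y0))
        weights.append(len(idx))
stratified = sum(e * w for e, w in zip(effects, weights)) / sum(weights)

print(f"naive: {naive:.2f}  stratified: {stratified:.2f}  truth: {TAU:.2f}")
```

With these settings the naive contrast lands well above the truth while the stratified estimate lands close to it, which is the practical point of the abstract.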
A model for the anisotropic response of fibrous soft tissues using six discrete fibre bundles
The development of accurate constitutive models of fibrous soft tissues is a challenging problem. Many consider the tissue to be a collection of fibres with a continuous distribution function representing their orientations. A novel discrete fibre model is presented, consisting of six weighted fibre bundles. Each bundle is oriented so that it passes through opposing vertices of a regular icosahedron. A novel aspect of the model is the use of simple analytical distribution functions to simulate the undulated collagen fibres. This approach yields a closed-form analytical expression for the strain energy function of the collagen fibre bundle that avoids the sometimes costly numerical integration of some statistical distribution functions. The elastin fibres are characterized by a neo-Hookean strain energy function. The model accurately simulates the biaxial stretching of rabbit skin (error-of-fit 8.7%), the uniaxial stretching of pig skin (error-of-fit 7.6%), equibiaxial loading of aortic valve cusp (error-of-fit 0.8%), and the simple shear of rat septal myocardium (error-of-fit 9.1%). The proposed model compares favourably with previously published soft-tissue models and alternative methods of representing undulated collagen fibres. The stiffness of collagen fibres predicted by the model ranges from 8.0 MPa to 0.93 GPa. The stiffness of elastin fibres ranges from 2.5 kPa to 154.4 kPa. The anisotropy of the model resulting from the representation of the fibre field with a discrete number of fibres is also explored.
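The bundle geometry described above can be sketched directly (an illustration, not the paper's code): the six axes through opposing vertex pairs of a regular icosahedron are built from the golden ratio, and every pair of axes meets at the same angle, arccos(1/√5) ≈ 63.4°, which is what makes the discrete set a reasonably even sample of fibre orientations:

```python
import itertools
import math

PHI = (1.0 + math.sqrt(5.0)) / 2.0   # golden ratio

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

# One vertex from each of the six antipodal pairs of a regular icosahedron
# (vertices are the cyclic permutations of (0, +/-1, +/-PHI)); each unit
# vector is the axis of one fibre bundle.
bundle_axes = [normalize(v) for v in [
    (0.0, 1.0, PHI), (0.0, 1.0, -PHI),
    (1.0, PHI, 0.0), (-1.0, PHI, 0.0),
    (PHI, 0.0, 1.0), (PHI, 0.0, -1.0),
]]

# Every pair of axes meets at the same angle, arccos(1/sqrt(5)) ~ 63.43 deg.
for a, b in itertools.combinations(bundle_axes, 2):
    dot = abs(sum(x * y for x, y in zip(a, b)))
    print(f"{math.degrees(math.acos(dot)):.2f} deg")
```

The equal pairwise angles mean no direction in orientation space is strongly favoured by the sampling itself; anisotropy then enters only through the per-bundle weights, as in the model above.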
Two-Particle Schroedinger Equation Animations of Wavepacket-Wavepacket Scattering (revised)
A simple and explicit technique for the numerical solution of the
two-particle, time-dependent Schrödinger equation is assembled and tested.
The technique can handle interparticle potentials that are arbitrary functions
of the coordinates of each particle, arbitrary initial and boundary conditions,
and multi-dimensional equations. Plots and animations are given here and on the
World Wide Web of the scattering of two wavepackets in one dimension.

Comment: 13 pages, 8 figures, animations at
http://nacphy.physics.orst.edu/ComPhys/PACKETS
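A minimal version of such an explicit scheme can be sketched for a single free particle in one dimension (an assumption-laden toy, not the paper's two-particle code: the grid, time step, and the real/imaginary split-update are illustrative choices). The wavefunction is split as ψ = R + iI and the two parts are advanced alternately, which keeps the scheme explicit and approximately norm-conserving:

```python
import math

# Grid and time step (hbar = m = 1); the explicit update is stable when
# dt * E_max < 2, with E_max ~ 2/dx^2 (illustrative values, not the paper's).
N, dx, dt, steps = 200, 0.1, 0.002, 500

# Gaussian wavepacket with momentum k0, split into real and imaginary parts.
k0, x0, sigma = 2.0, -4.0, 1.0
xs = [(i - N // 2) * dx for i in range(N)]
R = [math.exp(-(x - x0) ** 2 / (2 * sigma ** 2)) * math.cos(k0 * x) for x in xs]
I = [math.exp(-(x - x0) ** 2 / (2 * sigma ** 2)) * math.sin(k0 * x) for x in xs]

def laplacian(f, i):
    return (f[i - 1] - 2.0 * f[i] + f[i + 1]) / dx ** 2

norm0 = sum(r * r + im * im for r, im in zip(R, I)) * dx

for _ in range(steps):
    # dR/dt = H I and dI/dt = -H R, with H = -(1/2) d^2/dx^2 (free particle)
    # and hard-wall boundaries (endpoint values held fixed).  Updating R
    # first and then I from the new R gives the stable explicit variant.
    for i in range(1, N - 1):
        R[i] += dt * (-0.5 * laplacian(I, i))
    for i in range(1, N - 1):
        I[i] -= dt * (-0.5 * laplacian(R, i))

norm1 = sum(r * r + im * im for r, im in zip(R, I)) * dx
print(f"relative norm drift after {steps} steps: {abs(norm1 - norm0) / norm0:.2e}")
```

The same split-update structure extends to two particles by replacing the 1-D arrays with a 2-D grid over both coordinates and adding an interparticle potential term, at the cost of squaring the storage.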
ALLY: An operator's associate for satellite ground control systems
The key characteristics of an intelligent advisory system are explored. A central feature is that human-machine cooperation should be based on a metaphor of human-to-human cooperation. ALLY, a computer-based operator's associate which is based on a preliminary theory of human-to-human cooperation, is discussed. ALLY assists the operator in carrying out the supervisory control functions for a simulated NASA ground control system. Experimental evaluation of ALLY indicates that operators using ALLY performed at least as well as they did when using a human associate, and in some cases even better.
What College Students Should Know and Be Able to Do
This article discusses the issue of college students' communication skills and knowledge. The end of the 20th century provides educators and administrators with an opportunity to reflect on how well they have accomplished their goals. The communication discipline, since its beginning, has been concerned with skill achievement and knowledge generation. But not until the latter part of the century have scholars and national associations attempted to identify and agree upon what it is that students should know and be able to do. These efforts reflect maturity of the discipline and generation of a body of knowledge that allows such conclusions with increased certainty. We have recently written about the nature and importance of communication skills training and knowledge development at the college level, arguing that instruction should be required for all college students. College students need to develop skills, accumulate knowledge, and increase motivation to communicate in effective and appropriate ways. Basic skills are best taught by communication faculty, whereas advanced skills might be taught jointly with faculty from the major discipline. College and community college graduates need to be able to communicate effectively.
Evolution of Primordial Black Hole Mass Spectrum in Brans-Dicke Theory
We investigate the evolution of primordial black hole mass spectrum by
including both accretion of radiation and Hawking evaporation within
Brans-Dicke cosmology in radiation, matter and vacuum-dominated eras. We also
consider the effect of evaporation of primordial black holes on the expansion
dynamics of the universe. The analytic solutions describing the energy density
of the black holes in equilibrium with radiation are presented. We demonstrate
that these solutions act as attractors for the system ensuring stability for
both linear and nonlinear situations. We show, however, that inclusion of
accretion of radiation delays the onset of this equilibrium in the radiation-,
matter- and vacuum-dominated eras alike.

Comment: 18 pages, one figure
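The competition the abstract describes can be caricatured with a toy mass equation (arbitrary units and invented coefficients; this uses the standard radiation-era density scaling ρ ∝ t⁻², not the paper's Brans-Dicke dynamics): accretion of radiation grows the hole as M²ρ(t) while Hawking evaporation shrinks it as 1/M², so switching accretion on leaves a heavier hole whose mass loss sets in later:

```python
# Toy primordial-black-hole mass evolution (arbitrary units, illustrative
# coefficients -- a caricature, not the paper's Brans-Dicke equations).
def evolve(m0, acc, evap=1.0e-3, t0=1.0, t1=50.0, dt=1.0e-3):
    m, t = m0, t0
    while t < t1 and m > 0.0:
        rho = t ** -2.0                       # radiation-era density scaling
        m += dt * (acc * rho * m * m          # accretion of radiation
                   - evap / (m * m))          # Hawking evaporation
        t += dt
    return max(m, 0.0)

# With accretion on, the hole grows early (while rho is large) and then
# evaporates from a larger mass, delaying its mass loss.
m_no_acc = evolve(1.0, acc=0.0)
m_acc = evolve(1.0, acc=0.5)
print(f"final mass without accretion: {m_no_acc:.3f}, with accretion: {m_acc:.3f}")
```

Without accretion M³ simply decreases linearly in time; with accretion the early growth saturates (here near M ≈ 2) because ρ falls off faster than the hole can feed.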
Measurement and Compensation of Horizontal Crabbing at the Cornell Electron Storage Ring Test Accelerator
In storage rings, horizontal dispersion in the RF cavities introduces
horizontal-longitudinal (xz) coupling, contributing to beam tilt in the xz
plane. This coupling can be characterized by a "crabbing" dispersion term
ζa that appears in the normal-mode decomposition of the one-turn transfer
matrix. ζa is proportional to the RF cavity voltage and the horizontal
dispersion in the cavity. We report experiments at the Cornell Electron Storage
Ring Test Accelerator (CesrTA) where xz coupling was explored using three
lattices with distinct crabbing properties. We characterize the xz coupling for
each case by measuring the horizontal projection of the beam with a beam size
monitor. The three lattice configurations correspond to (a) a 16 mrad xz tilt
at the beam size monitor source point, (b) compensation of the ζa introduced
by one of the two pairs of RF cavities with the second, and (c) zero dispersion
in the RF cavities, eliminating ζa entirely. Additionally, intrabeam scattering
(IBS) is evident in our measurements of beam size vs. RF voltage.

Comment: 10 pages, 5 figures
Jet substructure as a new Higgs search channel at the LHC
It is widely considered that, for Higgs boson searches at the Large Hadron
Collider, WH and ZH production where the Higgs boson decays to b anti-b are
poor search channels due to large backgrounds. We show that at high transverse
momenta, employing state-of-the-art jet reconstruction and decomposition
techniques, these processes can be recovered as promising search channels for
the standard model Higgs boson around 120 GeV in mass.

Comment: 4 pages, 3 figures
Open Questions in Classical Gravity
We discuss some outstanding open questions regarding the validity and
uniqueness of the standard second order Newton-Einstein classical gravitational
theory. On the observational side we discuss the degree to which the realm of
validity of Newton's Law of Gravity can actually be extended to distances much
larger than the solar system distance scales on which the law was originally
established. On the theoretical side we identify some commonly accepted, but
actually still questionable, assumptions that go into formulating the
standard second-order Einstein theory in the first place. In particular, we
show that while the familiar second-order Poisson gravitational equation (and
accordingly its second-order covariant Einstein generalization) may be
sufficient to yield Newton's Law of Gravity, they are not in fact necessary. The
standard theory thus still awaits the identification of some principle which
would then make it necessary too. We show that current observational
information does not exclusively mandate the standard theory, and that the
conformal invariant fourth order theory of gravity considered recently by
Mannheim and Kazanas is also able to meet the constraints of data, and in fact
to do so without the need for any so far unobserved non-luminous or dark
matter.

Comment: UCONN-93-1, plain TeX format, 22 pages (plus 7 figures - send
requests to [email protected]). To appear in a special issue of
Foundations of Physics honoring Professor Fritz Rohrlich on the occasion of
his retirement, L. P. Horwitz and A. van der Merwe, Editors, Plenum Publishing
Company, N.Y., Fall 199
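The sufficient-but-not-necessary claim can be made concrete with a standard comparison (a sketch: the fourth-order exterior solution shown is the known static, spherically symmetric form used in the conformal theory, with β and γ as integration constants and the constant and quadratic homogeneous pieces dropped):

```latex
% Second order: Poisson's equation yields the Newtonian exterior potential
\nabla^{2}\phi = 4\pi G\rho
  \quad\Longrightarrow\quad
  \phi(r) = -\frac{GM}{r},
% Fourth order: the Newtonian term survives, joined by a linear potential
\nabla^{4}\phi \propto \rho
  \quad\Longrightarrow\quad
  \phi(r) = -\frac{\beta}{r} + \frac{\gamma\,r}{2}.
```

At solar-system scales the γr/2 term is negligible, so both equations reproduce Newton's Law of Gravity there; only data at much larger distances can distinguish the two, which is why the second-order theory is sufficient but not necessary.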
A Sequence of Declining Outbursts from GX339-4
The flux and spectrum of the black hole candidate GX339-4 have been monitored
by the Burst and Transient Source Experiment (BATSE) on the Compton Gamma-ray
Observatory (CGRO) since the observatory became operational in May 1991.
Between the summer of 1991 and the fall of 1996, eight outbursts from GX339-4
were observed. The history of these outbursts is one of declining fluence or
total energy release, as well as a shortening of the time between outbursts. A
rough linear correlation exists between the fluence emitted during an outburst
and the time elapsed between the end of the previous outburst and the beginning
of the current one. The peak flux is also roughly linearly correlated with
outburst fluence. The lightcurves of the earlier, more intense, outbursts
(except for the second one) can be modeled by a fast exponential (time constant
~ 10 days) followed by a slower exponential (~ 100 days) on the rise and a fast
exponential decay (~ 5 days) on the fall. The later, weaker, outbursts are
modeled with a single rising time constant (~ 20 days) and a longer decay on
the fall (~ 50 days). An exponential model gives a marginally better fit than a
power law to the rise/decay profiles. GX339-4 is unique in having more
frequent outbursts than other low-mass X-ray binary black hole candidates.
These observations can be used to constrain models of the behavior of the
accretion disk surrounding the compact object.

Comment: Accepted for publication in the Astrophysical Journal Letters, AASTE
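The piecewise-exponential description of the later, weaker outbursts can be written as a toy lightcurve model (illustrative placeholder values, not the fitted constants). For pure exponential profiles the fluence integrates to peak × (τ_rise + τ_decay), which shows one simple way a linear correlation between peak flux and outburst fluence can arise when the time constants are similar across outbursts:

```python
import math

def outburst_flux(t, t_peak=60.0, tau_rise=10.0, tau_decay=5.0, peak=1.0):
    # Single exponential rise up to the peak, faster exponential decay after
    # it (times in days; all parameter values here are placeholders).
    if t <= t_peak:
        return peak * math.exp((t - t_peak) / tau_rise)
    return peak * math.exp(-(t - t_peak) / tau_decay)

# Fluence (time-integrated flux) by trapezoidal quadrature; for exponential
# profiles it tends to peak * (tau_rise + tau_decay), so at fixed time
# constants fluence is proportional to peak flux.
dt = 0.1
ts = [i * dt for i in range(3001)]   # 0..300 days
fluence = sum(0.5 * (outburst_flux(a) + outburst_flux(a + dt)) * dt
              for a in ts[:-1])
print(f"fluence ~ {fluence:.2f}  (peak * (tau_rise + tau_decay) = 15.0)")
```

Fitting such a parametrized profile to each outburst is what lets the rise and decay constants quoted in the abstract be compared across the eight events.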