Shareholder liability and bank failure
Does enhanced shareholder liability reduce bank failure? We compare the performance of around 4,200 state-regulated banks of similar size in neighboring U.S. states with different liability regimes during the Great Depression. The distress rate of limited liability banks was 29% higher than that of banks with enhanced liability. Results are robust to a diff-in-diff analysis incorporating nationally-regulated banks (which faced the same regulations everywhere) and are not driven by other differences in state regulations, Fed membership, local characteristics, or differential selection into state-regulated banks. Our results suggest that exposing shareholders to more downside risk can successfully reduce bank failure.
Minimizing Flow Time in the Wireless Gathering Problem
We address the problem of efficient data gathering in a wireless network
through multi-hop communication. We focus on the objective of minimizing the
maximum flow time of a data packet. We prove that no polynomial time algorithm
for this problem can have approximation ratio less than \Omega(m^{1/3}) when m
packets have to be transmitted, unless P = NP. We then use resource
augmentation to assess the performance of a FIFO-like strategy. We prove that
this strategy is 5-speed optimal, i.e., its cost remains within the optimal
cost if we allow the algorithm to transmit data at a speed 5 times higher than
that of the optimal solution we compare to.
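As a toy illustration of FIFO-like gathering, the sketch below measures the maximum flow time of packets forwarded hop by hop toward a sink on a path network. This is a strong simplification of the paper's model (synchronized rounds, no interference constraints), and the function name and setup are ours, not the authors':

```python
from collections import deque

def fifo_max_flow_time(releases, n):
    """Maximum flow time under FIFO forwarding on a path 0..n (sink = node 0).

    `releases` is a list of (release_time, node) pairs. In each round, every
    node forwards the packet at the head of its FIFO queue one hop toward the
    sink. Interference is ignored, which simplifies the model in the paper.
    Returns max(arrival_time - release_time) over all packets.
    """
    queues = [deque() for _ in range(n + 1)]
    pending = sorted(releases)
    max_ft, arrived, total, t = 0, 0, len(releases), 0
    while arrived < total:
        # inject packets released at (or before) the current round
        while pending and pending[0][0] <= t:
            rt, node = pending.pop(0)
            queues[node].append(rt)
        # each node forwards one head-of-queue packet one hop toward the sink
        for node in range(1, n + 1):
            if queues[node]:
                rt = queues[node].popleft()
                if node - 1 == 0:
                    max_ft = max(max_ft, t + 1 - rt)
                    arrived += 1
                else:
                    queues[node - 1].append(rt)
        t += 1
    return max_ft
```

A lone packet k hops from the sink arrives with flow time k; when two packets contend for the same links, FIFO queueing delays the later one, which is the effect the paper's lower bound exploits at scale.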
The HST Key Project on the Extragalactic Distance Scale. XV. A Cepheid Distance to the Fornax Cluster and Its Implications
Using the Hubble Space Telescope (HST), 37 long-period Cepheid variables have
been discovered in the Fornax Cluster spiral galaxy NGC 1365. The resulting V
and I period-luminosity relations yield a true distance modulus of 31.35 +/-
0.07 mag, which corresponds to a distance of 18.6 +/- 0.6 Mpc. This measurement
provides several routes for estimating the Hubble Constant. (1) Assuming this
distance for the Fornax Cluster as a whole yields a local Hubble Constant of 70
+/-18_{random} [+/-7]_{systematic} km/s/Mpc. (2) Nine Cepheid-based distances
to groups of galaxies out to and including the Fornax and Virgo clusters yield
Ho = 73 (+/-16)_r [+/-7]_s km/s/Mpc. (3) Recalibrating the I-band Tully-Fisher
relation using NGC 1365 and six nearby spiral galaxies, and applying it to 15
galaxy clusters out to 100 Mpc gives Ho = 76 (+/-3)_r [+/-8]_s km/s/Mpc. (4)
Using a broad-based set of differential cluster distance moduli ranging from
Fornax to Abell 2147 gives Ho = 72 (+/-)_r [+/-6]_s km/s/Mpc. And finally, (5)
Assuming the NGC 1365 distance for the two additional Type Ia supernovae in
Fornax and adding them to the SnIa calibration (correcting for light curve
shape) gives Ho = 67 (+/-6)_r [+/-7]_s km/s/Mpc out to a distance in excess of
500 Mpc. All five of these Ho determinations agree to within their statistical
errors. The resulting estimate of the Hubble Constant combining all these
determinations is Ho = 72 (+/-5)_r [+/-12]_s km/s/Mpc.
Comment: Accepted for publication in the Astrophysical Journal, Apr. 10 issue;
28 pages, 3 tables, 12 figures (corrected figures and abstract).
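The quoted distance follows from the standard distance modulus relation, mu = 5 log10(d_pc) - 5. A quick check of the conversion (the recession velocity below is a hypothetical round number chosen for illustration, not a value from the paper):

```python
def modulus_to_mpc(mu):
    """Convert a true distance modulus to a distance in Mpc:
    mu = 5*log10(d_pc) - 5  =>  d_pc = 10**((mu + 5) / 5)."""
    return 10 ** ((mu + 5.0) / 5.0) / 1.0e6

d_fornax = modulus_to_mpc(31.35)   # ~18.6 Mpc, as quoted for NGC 1365
# A Hubble constant then follows from a recession velocity v in km/s:
v = 1300.0                         # hypothetical round number, for illustration
H0 = v / d_fornax                  # ~70 km/s/Mpc
```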
A very low mass of Ni-56 in the ejecta of SN 1994W
We present spectroscopic and photometric observations of the luminous narrow-
line Type IIP (plateau) supernova 1994W. After the plateau phase (t >120 days),
the light curve dropped by 3.5 mag in V in only 12 days. Between 125 and 197
days after explosion the supernova faded substantially faster than the decay
rate of Co-56, and by day 197 it was 3.6 magnitudes less luminous in R compared
to SN 1987A. The low R-luminosity could indicate less than 0.0026 {+0.0017}/
{-0.0011} Msun of Ni-56 ejected at the explosion, but the emission between 125
and 197 days must then have been dominated by an additional power source,
presumably circumstellar interaction. Alternatively, the late light curve was
dominated by Co-56 decay. In this case, the mass of the ejected Ni-56 was 0.015
{+0.012}/{-0.008} Msun, and the rapid fading between 125 and 197 days was most
likely due to dust formation. Though this value of the mass is higher than in
the case with the additional power source, it is still lower than estimated for
any previous Type II supernova. Only progenitors with M(ZAMS) = 8-10 Msun and
M(ZAMS) > 25 Msun are expected to eject such low masses of Ni-56. If M(ZAMS) =
8-10 Msun, the plateau phase indicates a low explosion energy, while for a
progenitor with M(ZAMS) > 25 Msun the energy can be the canonical 1.0E{51}
ergs. As SN 1994W was unusually luminous, the low-mass explosion may require an
uncomfortably high efficiency in converting explosion energy into radiation.
This favors a M(ZAMS) > 25 Msun progenitor. The supernova's narrow (roughly
1000 km s^{-1}) emission lines were excited by the hot supernova spectrum,
rather than a circumstellar shock. The thin shell from which the lines
originated was most likely accelerated by the radiation from the supernova.
Comment: 19 pages AASTeX v.4.0, including 5 Postscript figures; ApJ, in press.
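The comparison with the Co-56 decay rate is a fixed-slope check: a fully Co-56-powered tail fades at 2.5/(tau ln 10) mag per day, where tau is the e-folding time set by the 77.2-day half-life (a standard nuclear value, not taken from the paper):

```python
import math

T_HALF_CO56 = 77.2                  # Co-56 half-life in days (standard value)
tau = T_HALF_CO56 / math.log(2)     # e-folding time, ~111.4 d
rate = 2.5 / (tau * math.log(10))   # fading rate, ~0.0097 mag/day
drop_co56 = rate * (197 - 125)      # ~0.70 mag expected between days 125 and 197
# SN 1994W faded substantially faster than this over the same interval, which
# is why Co-56 decay alone cannot power the late light curve without either an
# additional power source or dust formation.
```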
Math saves the forest
Wireless sensor networks are decentralised networks consisting of sensors that can detect events and transmit data to neighbouring sensors. Ideally, this data is eventually gathered in a central base station. Wireless sensor networks have many possible applications. For example, they can be used to detect gas leaks in houses or fires in a forest.
In this report, we study data gathering in wireless sensor networks with the objective of minimising the time to send event data to the base station. We focus on sensors with a limited cache and take into account both node and transmission failures. We present two cache strategies and analyse the performance of these strategies for specific networks. For the case without node failures we give the expected arrival time of event data at the base station for both a line and a 2D grid network. For the case with node failures we study the expected arrival time on two-dimensional networks through simulation, as well as the influence of the broadcast range.
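For intuition on the failure model, consider the simplest toy version of this setting (our simplification, not the report's full model): a line network in which each transmission succeeds independently with probability p. Each hop then takes a geometrically distributed number of time slots with mean 1/p, so a packet k hops from the base station arrives after k/p slots in expectation:

```python
import random

def expected_arrival_time(k, p):
    """Expected arrival time of a packet k hops from the base station when
    each transmission succeeds independently with probability p. Each hop
    is geometric with mean 1/p, so the expectation is k / p. A toy model
    that ignores caching, contention, and node failures."""
    return k / p

def simulate_arrival_time(k, p, trials=20000, rng=random.Random(1)):
    """Monte Carlo check of the analytic expectation above."""
    total = 0
    for _ in range(trials):
        t = 0
        for _ in range(k):          # retransmit each hop until it succeeds
            while True:
                t += 1
                if rng.random() < p:
                    break
        total += t
    return total / trials
```

With k = 5 and p = 0.5 the analytic value is 10 slots, and the simulation agrees closely; the report's analysis extends this kind of reasoning to grids, caches, and node failures.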
Diffuse-interface model for rapid phase transformations in nonequilibrium systems
A thermodynamic approach to rapid phase transformations within a diffuse
interface in a binary system is developed. Assuming an extended set of
independent thermodynamic variables formed by the union of the classic set of
slow variables and the space of fast variables, we introduce finiteness of
the heat and solute diffusive propagation at the finite speed of the
advancing interface. To describe the transformation within the diffuse
interface, we use
the phase-field model which allows us to follow the steep but smooth change of
phases within the width of the diffuse interface. The governing equations of
the phase-field model are derived for the hyperbolic model, for a model with
memory, and for a model of nonlinear evolution of the transformation within
the diffuse interface. The consistency of the model is proved by the condition of
positive entropy production and by the outcomes of the fluctuation-dissipation
theorem. A comparison with the existing sharp-interface and diffuse-interface
versions of the model is given.
Comment: 15 pages, regular article submitted to Physical Review
Modulational Instability in Equations of KdV Type
It is a matter of experience that nonlinear waves in dispersive media,
propagating primarily in one direction, may appear periodic in small space and
time scales, but their characteristics --- amplitude, phase, wave number, etc.
--- slowly vary in large space and time scales. In the 1970's, Whitham
developed an asymptotic (WKB) method to study the effects of small
"modulations" on nonlinear periodic wave trains. Since then, there has been a
great deal of work aiming at rigorously justifying the predictions from
Whitham's formal theory. We discuss recent advances in the mathematical
understanding of the dynamics, in particular, the instability of slowly
modulated wave trains for nonlinear dispersive equations of KdV type.
Comment: 40 pages. To appear in an upcoming title in Lecture Notes in Physics.
Unenhanced CT imaging is highly sensitive to exclude pheochromocytoma: A multicenter study
Background: A substantial proportion of all pheochromocytomas is currently detected during the evaluation of an adrenal incidentaloma. Recently, it has been suggested that biochemical testing to rule out pheochromocytoma is unnecessary in case of an adrenal incidentaloma with an unenhanced attenuation value ≤10 Hounsfield Units (HU) at computed tomography (CT).
Objectives: We aimed to determine the sensitivity of the 10 HU threshold value to exclude a pheochromocytoma.
Methods: Retrospective multicenter study with systematic reassessment of preoperative unenhanced CT scans performed in patients in whom a histopathologically proven pheochromocytoma had been diagnosed. Unenhanced attenuation values were determined independently by two experienced radiologists. Sensitivity of the 10 HU threshold was calculated, and interobserver consistency was assessed using the intraclass correlation coefficient (ICC).
Results: 214 patients were identified, harboring a total of 222 pheochromocytomas. Maximum tumor diameter was 51 (39–74) mm. The mean attenuation value within the region of interest was 36 ± 10 HU. Only one pheochromocytoma demonstrated an attenuation value ≤10 HU, resulting in a sensitivity of 99.6% (95% CI: 97.5–99.9). The ICC was 0.81 (95% CI: 0.75–0.86), with a standard error of measurement of 7.3 HU between observers.
Conclusion: The likelihood of a pheochromocytoma with an unenhanced attenuation value ≤10 HU on CT is very low. The interobserver consistency in attenuation measurement is excellent. Our study supports the recommendation that in patients with an adrenal incidentaloma, biochemical testing to rule out pheochromocytoma is only indicated in adrenal tumors with an unenhanced attenuation value >10 HU.
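The reported confidence interval is reproducible from 221 of 222 tumors lying above the threshold, assuming a Wilson score interval (our choice of method; the abstract does not state which interval was used):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - half) / denom, (center + half) / denom

# 221 of the 222 pheochromocytomas had an attenuation value > 10 HU
lo, hi = wilson_ci(221, 222)
# lo ~ 0.975, hi ~ 0.999, matching the reported 95% CI of 97.5-99.9%
```

The Wilson interval is preferred over the normal approximation here because the proportion is close to 1, where the simple Wald interval misbehaves.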