Reconstruction of Koettlitz Glacier, Southern McMurdo Sound, Antarctica, During the Last Glacial Maximum and Termination
Accurate reconstructions of the Antarctic Ice Sheet (AIS) are important for evaluating past, present, and future sea-level change. Insight into future changes of the AIS and its tolerances to various climate variables can come from investigation of its past behavior. During the last glacial maximum (LGM), ice grounded in the Ross Sea, reaching close to the continental shelf edge. One hypothesis is that this event was caused largely by changing sea level that led to widespread grounding of floating portions of the ice sheet. This grounding buttressed the inflowing East Antarctic outlet glaciers and caused thickening on the lower reaches of these glaciers; interior ice remained the same or even thinned because of reduced accumulation. The Holocene was characterized by rapid recession of marine portions and possible thickening of interior ice and growth of local glaciers in response to accumulation increase. In contrast, an alternate hypothesis is that expansion of grounded Ross Sea ice was due to growth of local glaciers and East Antarctic outlets. These glaciers are thought to have receded to their present positions in the Holocene despite relatively high accumulation. These hypotheses have very different implications for the future of the ice sheet under global warming. Koettlitz Glacier, a large local glacier, flows from the Royal Society Range into McMurdo Sound (78°S, 163°E) and is ideal for testing these two hypotheses. Competing hypotheses as to how this glacier behaved during the LGM range from minor recession to significant expansion. Today, Koettlitz Glacier blocks the mouth of ice-free Pyramid Trough. However, based on surficial mapping, I infer that grounded Ross Sea ice blocked the valley mouth at the LGM. Radiocarbon dates of subfossil lacustrine algae from a lake dammed in Pyramid Trough by the Ross Sea ice date to 11-23 ka, suggesting the ice dam existed throughout that time period. 
The stratigraphic position and geometry of moraines indicate that Koettlitz Glacier was smaller at the LGM than it is at present. A single radiocarbon age suggests Koettlitz Glacier has advanced within the last 3 ka. Altogether, existing data suggest that Koettlitz Glacier, and by inference other local glaciers in the region, retreated during the LGM and advanced in the Holocene, probably because of fluctuations in accumulation. My work favors the first hypothesis, in which local glaciers and at least the terrestrial portions of the ice sheet grow during times of high accumulation, which correspond to warm periods in the Antarctic. In contrast, marine-based areas of the ice sheet, such as in the Ross Sea, appear to have advanced during the LGM and retreated in the Holocene, likely in response to changing sea level. This bimodal response of the ice sheet to climate change has implications for future ice-sheet behavior and implies that the future of the ice sheet will depend on the interaction between accumulation-caused thickening and retreat due to marine instabilities.
Assessment of stochastic and deterministic models of 6304 quasar lightcurves from SDSS Stripe 82
The optical light curves of many quasars show variations of tenths of a
magnitude or more on time scales of months to years. This variation often
cannot be described well by a simple deterministic model. We perform a Bayesian
comparison of over 20 deterministic and stochastic models on 6304 QSO light
curves in SDSS Stripe 82. We include the damped random walk (or
Ornstein-Uhlenbeck [OU] process), a particular type of stochastic model which
recent studies have focused on. Further models we consider are single and
double sinusoids, multiple OU processes, higher order continuous autoregressive
processes, and composite models. We find that only 29 out of 6304 QSO
lightcurves are described significantly better by a deterministic model than a
stochastic one. The OU process is an adequate description of the vast majority
of cases (6023). Indeed, the OU process is the best single model for 3462 light
curves, with the composite OU process/sinusoid model being the best in 1706
cases. The latter model is the dominant one for brighter/bluer QSOs.
Furthermore, a non-negligible fraction of QSO lightcurves show evidence that
not only the mean is stochastic but the variance is stochastic, too. Our
results confirm earlier work that QSO light curves can be described with a
stochastic model, but place this on a firmer footing, and further show that the
OU process is preferred over several other stochastic and deterministic models.
Of course, there may well exist yet better (deterministic or stochastic) models
which have not been considered here. Comment: accepted by A&A, 12 pages, 11 figures, 4 tables
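The damped random walk at the center of this comparison can be simulated exactly, even at the irregular sampling of a survey like Stripe 82, because the OU process has a closed-form Gaussian conditional update. The sketch below is illustrative, not the paper's fitting code; the mean magnitude, relaxation timescale tau (days), and stationary amplitude sigma are assumed values, not fitted ones.

```python
import numpy as np

def simulate_ou(times, mu=19.0, tau=200.0, sigma=0.1, seed=0):
    """Simulate an Ornstein-Uhlenbeck (damped random walk) light curve
    at irregular observation times, using the exact conditional update:
    x(t+dt) ~ N(mu + (x(t)-mu)*exp(-dt/tau), sigma^2*(1-exp(-2dt/tau)))."""
    rng = np.random.default_rng(seed)
    mag = np.empty(len(times))
    # Start from the stationary distribution N(mu, sigma^2).
    mag[0] = mu + sigma * rng.standard_normal()
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        a = np.exp(-dt / tau)              # decay of memory toward the mean
        sd = sigma * np.sqrt(1.0 - a * a)  # conditional standard deviation
        mag[i] = mu + a * (mag[i - 1] - mu) + sd * rng.standard_normal()
    return mag

# Irregular epochs over ~8 observing seasons (days), as in a survey cadence.
t = np.sort(np.random.default_rng(1).uniform(0, 3000, 100))
m = simulate_ou(t)
```

Because the update is exact, no small-step discretization is needed; each gap between epochs is bridged in a single draw.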
Modest policy interventions
This paper brings together identification and forecasting in a positive econometric analysis of policy. We contend that a broad range of important policy questions is consistent with the existing policy process and is not subject to Lucas's critique. We analyze the economics of "business as usual" and show that modest policy interventions, whose effects can be projected even if expectations are modeled as depending solely on past policy, can address routine questions like those raised at regular policy meetings. And modest interventions matter: they can shift the projected paths and probability distributions of macro variables in economically meaningful ways.
Keywords: Monetary policy; Forecasting; Vector autoregression; Econometrics
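The mechanics of projecting a modest intervention can be sketched with a small VAR: estimate the dynamics from past data, then compare the baseline forecast path with the path under a one-period policy shift. Everything below is illustrative (hypothetical two-variable data for inflation and a policy rate, a made-up 25 bp intervention), not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: y_t = [inflation_t, policy_rate_t] from a stable VAR(1).
A_true = np.array([[0.7, -0.2],
                   [0.3,  0.8]])
T = 400
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + 0.1 * rng.standard_normal(2)

# OLS estimate of the VAR(1) coefficient matrix from the sample.
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

def project(A, y0, horizon, shock=None):
    """Iterate the estimated VAR forward; optionally add a one-time
    intervention to the first projected period."""
    path = [y0]
    for h in range(horizon):
        nxt = A @ path[-1]
        if shock is not None and h == 0:
            nxt = nxt + shock
        path.append(nxt)
    return np.array(path)

baseline = project(A_hat, y[-1], 12)
# "Modest intervention": 25 bp added to the policy rate in period 1 only.
intervened = project(A_hat, y[-1], 12, shock=np.array([0.0, 0.25]))
shift = intervened - baseline  # projected effect of the intervention
```

The `shift` array shows the point the abstract makes: a modest, one-time intervention propagates through the estimated dynamics and moves the whole projected path, not just the period in which it occurs.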
Why Catastrophic Organizational Failures Happen
Excerpt from the introduction:
The purpose of this chapter is to examine the major streams of research about catastrophic failures, describing what we have learned about why these failures occur as well as how they can be prevented. The chapter begins by describing the most prominent sociological school of thought with regard to catastrophic failures, namely normal accident theory. That body of thought examines the structure of organizational systems that are most susceptible to catastrophic failures. Then, we turn to several behavioral perspectives on catastrophic failures, assessing a stream of research that has attempted to understand the cognitive, group and organizational processes that develop and unfold over time, leading ultimately to a catastrophic failure. For an understanding of how to prevent such failures, we then assess the literature on high reliability organizations (HRO). These scholars have examined why some complex organizations operating in extremely hazardous conditions manage to remain nearly error free. The chapter closes by assessing how scholars are trying to extend the HRO literature to develop more extensive prescriptions for managers trying to avoid catastrophic failures
Classification and Verification of Online Handwritten Signatures with Time Causal Information Theory Quantifiers
We present a new approach for online handwritten signature classification and
verification based on descriptors stemming from Information Theory. The
proposal uses the Shannon Entropy, the Statistical Complexity, and the Fisher
Information evaluated over the Bandt and Pompe symbolization of the horizontal
and vertical coordinates of signatures. These six features are easy and fast to
compute, and they are the input to a One-Class Support Vector Machine
classifier. The results produced surpass state-of-the-art techniques that
employ higher-dimensional feature spaces which often require specialized
software and hardware. We assess the consistency of our proposal with respect
to the size of the training sample, and we also use it to classify the
signatures into meaningful groups. Comment: Submitted to PLOS ONE
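The Bandt and Pompe symbolization behind these descriptors maps each length-d window of a coordinate series to its ordinal pattern; the normalized Shannon entropy of the resulting pattern distribution is then one of the six features. A minimal sketch (the embedding dimension and test series are illustrative choices, not the paper's settings):

```python
import math
from itertools import permutations
import numpy as np

def bandt_pompe_entropy(x, d=3, tau=1):
    """Normalized Shannon (permutation) entropy of the Bandt-Pompe
    symbolization of a 1-D series x, embedding dimension d, delay tau."""
    x = np.asarray(x, dtype=float)
    counts = {p: 0 for p in permutations(range(d))}
    for i in range(len(x) - (d - 1) * tau):
        window = x[i : i + d * tau : tau]
        # The ordinal pattern is the ranking of values within the window.
        counts[tuple(np.argsort(window))] += 1
    n = sum(counts.values())
    probs = [c / n for c in counts.values() if c > 0]
    H = -sum(p * math.log(p) for p in probs)
    return H / math.log(math.factorial(d))  # normalize to [0, 1]

# Sanity checks: white noise is near maximal entropy, a monotone ramp minimal.
noise = np.random.default_rng(0).standard_normal(5000)
ramp = np.arange(100.0)
```

Applied to the horizontal and vertical pen coordinates of a signature, this entropy (together with the statistical complexity and Fisher information over the same pattern distribution) yields the low-dimensional feature vector described above.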
An Assessment to Benchmark the Seismic Performance of a Code-Conforming Reinforced-Concrete Moment-Frame Building
This report describes a state-of-the-art performance-based earthquake engineering methodology
that is used to assess the seismic performance of a four-story reinforced concrete (RC) office
building that is generally representative of low-rise office buildings constructed in highly seismic
regions of California. This “benchmark” building is considered to be located at a site in the Los
Angeles basin, and it was designed with a ductile RC special moment-resisting frame as its
seismic lateral system that was designed according to modern building codes and standards. The
building’s performance is quantified in terms of structural behavior up to collapse, structural and
nonstructural damage and associated repair costs, and the risk of fatalities and their associated
economic costs. To account for different building configurations that may be designed in
practice to meet requirements of building size and use, eight structural design alternatives are
used in the performance assessments.
Our performance assessments account for important sources of uncertainty in the ground
motion hazard, the structural response, structural and nonstructural damage, repair costs, and
life-safety risk. The ground motion hazard characterization employs a site-specific probabilistic
seismic hazard analysis and the evaluation of controlling seismic sources (through
disaggregation) at seven ground motion levels (encompassing return periods ranging from 7 to
2475 years). Innovative procedures for ground motion selection and scaling are used to develop
acceleration time history suites corresponding to each of the seven ground motion levels.
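The correspondence between the exceedance probabilities and return periods quoted here follows from the usual Poisson occurrence assumption; as a quick check (a generic conversion, not a calculation from the report):

```python
import math

def return_period(p_exceed, t_years):
    """Return period implied by probability p_exceed of at least one
    exceedance in t_years, assuming Poisson occurrences:
    p = 1 - exp(-rate * t)  =>  rate = -ln(1 - p) / t."""
    rate = -math.log(1.0 - p_exceed) / t_years  # annual exceedance frequency
    return 1.0 / rate

# The report's upper anchor: 2% in 50 years -> ~2475-year return period.
T = return_period(0.02, 50)
```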
Structural modeling utilizes both “fiber” models and “plastic hinge” models. Structural
modeling uncertainties are investigated through comparison of these two modeling approaches,
and through variations in structural component modeling parameters (stiffness, deformation
capacity, degradation, etc.). Structural and nonstructural damage (fragility) models are based on
a combination of test data, observations from post-earthquake reconnaissance, and expert
opinion. Structural damage and repair costs are modeled for the RC beams, columns, and slab-column connections. Damage and associated repair costs are considered for some nonstructural
building components, including wallboard partitions, interior paint, exterior glazing, ceilings,
sprinkler systems, and elevators. The risk of casualties and the associated economic costs are
evaluated based on the risk of structural collapse, combined with recent models on earthquake
fatalities in collapsed buildings and accepted economic modeling guidelines for the value of
human life in loss and cost-benefit studies.
The principal results of this work pertain to the building collapse risk, damage and repair
cost, and life-safety risk. These are discussed successively as follows.
When accounting for uncertainties in structural modeling and record-to-record variability
(i.e., conditional on a specified ground shaking intensity), the structural collapse probabilities of
the various designs range from 2% to 7% for earthquake ground motions that have a 2%
probability of exceedance in 50 years (2475 years return period). When integrated with the
ground motion hazard for the southern California site, the collapse probabilities result in mean
annual frequencies of collapse in the range of [0.4 to 1.4]×10⁻⁴ for the various benchmark
building designs. In the development of these results, we made the following observations that
are expected to be broadly applicable:
(1) The ground motions selected for performance simulations must consider spectral
shape (e.g., through use of the epsilon parameter) and should appropriately account for
correlations between motions in both horizontal directions;
(2) Lower-bound component models, which are commonly used in performance-based
assessment procedures such as FEMA 356, can significantly bias collapse analysis results; it is
more appropriate to use median component behavior, including all aspects of the component
model (strength, stiffness, deformation capacity, cyclic deterioration, etc.);
(3) Structural modeling uncertainties related to component deformation capacity and
post-peak degrading stiffness can impact the variability of calculated collapse probabilities and
mean annual rates to a similar degree as record-to-record variability of ground motions.
Therefore, including the effects of such structural modeling uncertainties significantly increases
the mean annual collapse rates. We found this increase to be roughly four to eight times relative
to rates evaluated for the median structural model;
(4) Nonlinear response analyses revealed at least six distinct collapse mechanisms, the
most common of which was a story mechanism in the third story (differing from the multi-story
mechanism predicted by nonlinear static pushover analysis);
(5) Soil-foundation-structure interaction effects did not significantly affect the structural
response, which was expected given the relatively flexible superstructure and stiff soils.
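The step from conditional collapse probabilities to mean annual collapse frequencies, described above, amounts to integrating a collapse fragility curve against the ground motion hazard curve. The sketch below shows that integration with a generic power-law hazard and lognormal fragility; every number in it is illustrative and none comes from the report.

```python
import math
from statistics import NormalDist

# Illustrative hazard curve: annual frequency of exceeding spectral
# acceleration Sa, lambda(Sa) = k0 * Sa**(-k). Values are made up.
k0, k = 1.0e-4, 2.5
def haz(sa):
    return k0 * sa ** (-k)

# Illustrative lognormal collapse fragility P(collapse | Sa),
# with an assumed median capacity and dispersion.
median, beta = 2.0, 0.6
def p_col(sa):
    return NormalDist().cdf(math.log(sa / median) / beta)

# Mean annual frequency of collapse:
#   lambda_c = integral of P(collapse | sa) * |d lambda / d sa| d sa
sas = [0.01 * i for i in range(1, 2000)]
lam_c = 0.0
for lo, hi in zip(sas[:-1], sas[1:]):
    mid = 0.5 * (lo + hi)
    lam_c += p_col(mid) * (haz(lo) - haz(hi))  # hazard decreases with Sa
```

Widening the fragility dispersion `beta` (as modeling uncertainty does in observation (3) above) inflates `lam_c` even when the median capacity is unchanged, which is the mechanism behind the reported four- to eight-fold increase.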
The potential for financial loss is considerable. Overall, the calculated expected annual losses (EAL) are on the order of $97,000 for the various code-conforming benchmark building designs, or roughly 1% of the replacement cost of the building. The fatality rate translates to an EAL due to fatalities of about $5,600 for the code-conforming designs. Relative to the overall EAL, the monetary value associated with life loss is small, suggesting that the governing factor in this respect will be the maximum permissible life-safety risk deemed by the public (or its representative government) to be appropriate for buildings.
Although the focus of this report is on one specific building, it can be used as a reference
for other types of structures. This report is organized in such a way that the individual core
chapters (4, 5, and 6) can be read independently. Chapter 1 provides background on the
performance-based earthquake engineering (PBEE) approach. Chapter 2 presents the
implementation of the PBEE methodology of the PEER framework, as applied to the benchmark
building. Chapter 3 sets the stage for the choices of location and basic structural design. The subsequent core chapters focus on the hazard analysis (Chapter 4), the structural analysis
(Chapter 5), and the damage and loss analyses (Chapter 6). Although the report is self-contained,
readers interested in additional details can find them in the appendices
JAPANESE FIRMS’ DEBT POLICY AND TAX POLICY
Understanding the effects of marginal tax rate on debt policy is crucial not only for considering various capital structure theories of firms but also for evaluating corporate tax reform proposals. In this empirical study, we have found a positive relation in most cases between the firm-specific marginal tax rates (simulated using the method of Shevlin (1990) and Graham (1996)) and the debt ratio increase of Japanese firms. This result shows that the marginal tax rates significantly affect the debt policies of Japanese firms. Corporate tax reform to produce equal treatment of equity and debt is desirable in Japan.
Keywords: debt, capital structure, marginal tax rate, corporate tax
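The simulated marginal tax rate of Shevlin (1990) and Graham (1996) is, in essence, the expected present value of the extra tax triggered by one more unit of income today, averaged over simulated future income paths. The sketch below is a heavily simplified version of that idea: it assumes a random-walk income process, a flat statutory rate, and loss carryforwards only (no carrybacks or other statutory details), with all parameter values invented for illustration.

```python
import numpy as np

def simulated_mtr(income0, mu, sigma, tax_rate=0.30, r=0.05,
                  horizon=15, n_paths=2000, seed=0):
    """Simplified Graham (1996)-style simulated marginal tax rate:
    expected present value of the tax on one extra unit of income
    earned today, when losses can only be carried forward."""
    rng = np.random.default_rng(seed)
    pv = np.zeros(n_paths)
    for j in range(n_paths):
        income = income0
        untaxed = True  # the extra unit has not yet been taxed
        for t in range(horizon):
            if t > 0:  # income follows a random walk after the base year
                income += mu + sigma * rng.standard_normal()
            if untaxed and income > 0:
                # Extra unit becomes taxable in the first profitable year.
                pv[j] = tax_rate / (1.0 + r) ** t
                untaxed = False
    return pv.mean()

# A reliably profitable firm faces roughly the statutory rate; a firm deep
# in losses faces a much lower effective marginal rate.
mtr_profitable = simulated_mtr(income0=10.0, mu=0.0, sigma=2.0)
mtr_loss = simulated_mtr(income0=-10.0, mu=0.0, sigma=2.0)
```

It is this firm-level variation in the simulated rate, driven by profitability and loss positions, that identifies the positive relation with debt-ratio changes reported above.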