Junction conditions in General Relativity with spin sources
The junction conditions for General Relativity in the presence of domain
walls with intrinsic spin are derived in three and higher dimensions. A stress
tensor and a spin current can be defined just by requiring the existence of a
well-defined volume element instead of an induced metric, so as to allow for
generic torsion sources. In general, when the torsion is localized on the
domain wall, it is necessary to relax the continuity of the tangential
components of the vielbein. In fact it is found that the spin current is
proportional to the jump in the vielbein and the stress-energy tensor is
proportional to the jump in the spin connection. The consistency of the
junction conditions implies a constraint between the direction of flow of
energy and the orientation of the spin. As an application, we derive the
circularly symmetric solutions for both the rotating string with tension and
the spinning dust string in three dimensions. The rotating string with tension
generates a rotating truncated cone outside and a flat space-time with
inevitable frame dragging inside. In the case of a string made of spinning
dust, in contrast to the previous case, no frame dragging is present inside;
in this sense, the dragging effect can be "shielded" by considering spinning
instead of rotating sources. Both solutions are consistently lifted as
cylinders in the four-dimensional case.

Comment: 24 pages, no figures, CECS style. References added and misprints
corrected. Published Version
Inference about Non-Identified SVARs
We propose a method for conducting inference on impulse responses in structural vector autoregressions (SVARs) when the impulse response is not point identified because the number of equality restrictions one can credibly impose is not sufficient for point identification and/or one imposes sign restrictions. We proceed in three steps. We first define the object of interest as the identified set for a given impulse response at a given horizon and discuss how inference is simple when the identified set is convex, as one can limit attention to the set's upper and lower bounds. We then provide easily verifiable conditions on the type of equality and sign restrictions that guarantee convexity. These cover most cases of practical interest, with exceptions including sign restrictions on multiple shocks and equality restrictions that make the impulse response locally, but not globally, identified. Second, we show how to conduct inference on the identified set. We adopt a robust Bayes approach that considers the class of all possible priors for the non-identified aspects of the model and delivers a class of associated posteriors. We summarize the posterior class by reporting the "posterior mean bounds", which can be interpreted as an estimator of the identified set. We also consider a "robustified credible region", which is a measure of the posterior uncertainty about the identified set. The two intervals can be obtained using a computationally convenient numerical procedure. Third, we show that the posterior bounds converge asymptotically to the identified set if the set is convex. If the identified set is not convex, our posterior bounds can be interpreted as an estimator of the convex hull of the identified set. Finally, a useful diagnostic tool delivered by our procedure is the posterior belief about the plausibility of the imposed identifying restrictions.
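The numerical procedure behind the "posterior mean bounds" can be sketched in a toy bivariate setting. Everything below (the covariance draws, the sign restriction, the grid over rotations) is invented for illustration and is not taken from the paper: for each posterior draw of the reduced-form covariance, scan orthogonal rotations satisfying the restriction, record the bounds of the impact response, and average the bounds across draws.

```python
import numpy as np

rng = np.random.default_rng(0)

def impulse_bounds(chol, n_grid=500):
    """For one reduced-form draw, approximate the identified set
    [min, max] of the impact response of variable 0 to shock 0 by
    scanning rotations Q that satisfy the sign restriction."""
    responses = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False):
        q = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        irf = chol @ q  # impact impulse-response matrix
        # hypothetical sign restriction: both variables respond
        # non-negatively on impact to shock 0
        if irf[0, 0] >= 0.0 and irf[1, 0] >= 0.0:
            responses.append(irf[0, 0])
    return min(responses), max(responses)

# stand-in posterior draws of the reduced-form error covariance
lows, highs = [], []
for _ in range(200):
    a = rng.standard_normal((2, 2))
    sigma = a @ a.T + np.eye(2)  # a positive-definite draw
    lo, hi = impulse_bounds(np.linalg.cholesky(sigma))
    lows.append(lo)
    highs.append(hi)

# "posterior mean bounds": an estimator of the identified set
posterior_mean_bounds = (float(np.mean(lows)), float(np.mean(highs)))
```

The averaging of the per-draw lower and upper bounds is the key step; the actual paper characterizes when this interval is a consistent estimator of the (convex) identified set.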
Robust Bayesian Inference for Set-Identified Models
This paper reconciles the asymptotic disagreement between Bayesian and frequentist inference in set-identified models by adopting a multiple-prior (robust) Bayesian approach. We propose new tools for Bayesian inference in set-identified models and show that they have a well-defined posterior interpretation in finite samples and are asymptotically valid from the frequentist perspective. The main idea is to construct a prior class that removes the source of the disagreement: the need to specify an unrevisable prior for the structural parameter given the reduced-form parameter. The corresponding class of posteriors can be summarized by reporting the "posterior lower and upper probabilities" of a given event and/or the "set of posterior means" and the associated "robust credible region". We show that the set of posterior means is a consistent estimator of the true identified set and the robust credible region has the correct frequentist asymptotic coverage for the true identified set if it is convex. Otherwise, the method provides posterior inference about the convex hull of the identified set. For impulse-response analysis in set-identified Structural Vector Autoregressions, the new tools can be used to overcome or quantify the sensitivity of standard Bayesian inference to the choice of an unrevisable prior.
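The two summaries can be illustrated in a scalar toy model. The distribution of the reduced-form parameter and the identified-set map below are made up for illustration: each posterior draw implies an interval, the set of posterior means averages its endpoints, and the robust credible region expands that interval until it contains the identified set with 95% posterior probability.

```python
import numpy as np

rng = np.random.default_rng(1)

# stand-in posterior draws of a reduced-form parameter phi, each of
# which maps to an interval [l(phi), u(phi)] -- the identified set
phi = rng.normal(loc=1.0, scale=0.2, size=5000)
lower, upper = phi - 0.5, phi + 0.5  # toy identified-set map

# the "set of posterior means" is the interval [E[l], E[u]]
set_of_means = (float(lower.mean()), float(upper.mean()))

def coverage(a, b):
    """Posterior probability that the identified set lies inside [a, b]."""
    return float(np.mean((lower >= a) & (upper <= b)))

# "robust credible region": the smallest symmetric expansion of the set
# of means that contains the identified set with probability >= 0.95
center = 0.5 * (set_of_means[0] + set_of_means[1])
half = 0.5 * (set_of_means[1] - set_of_means[0])
for extra in np.linspace(0.0, 2.0, 2001):
    a, b = center - half - extra, center + half + extra
    if coverage(a, b) >= 0.95:
        robust_credible_region = (a, b)
        break
```

The crude grid search stands in for the paper's computationally convenient procedure; the point is that the credible region covers the whole interval-valued object, not a single point.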
Number and Amplitude of Limit Cycles emerging from {\it Topologically Equivalent} Perturbed Centers
We consider three examples of weakly perturbed centers which do not have {\it
geometrical equivalence}: a linear center, a degenerate center and a
non-Hamiltonian center. In each case the number and amplitude of the limit
cycles emerging from the period annulus are calculated following the same
strategy: we reduce all of them to locally equivalent perturbed integrable
systems of the form: , with
. This reduction allows us to find the Melnikov
function, , associated with each particular problem. We
obtain the information on the bifurcation curves of the limit cycles by
explicitly solving the equation in each case.

Comment: 17 pages, 0 figures
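As a generic illustration of this strategy (not one of the paper's three examples, whose explicit systems do not survive in this extract), take the weakly perturbed linear center x' = y, y' = -x + eps*(1 - x^2)*y, i.e. the van der Pol perturbation. Along the unperturbed orbit x = r*cos(t), y = -r*sin(t), the first-order Melnikov function is M(r) = \int_0^{2\pi} (1 - r^2 cos^2 t) r^2 sin^2 t \, dt = \pi r^2 (1 - r^2/4), and its simple zero gives the amplitude of the limit cycle emerging from the period annulus:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def melnikov(r):
    """First-order Melnikov function of the van der Pol perturbation
    of the linear center, along the orbit of radius r."""
    integrand = lambda t: (1.0 - (r * np.cos(t))**2) * (r * np.sin(t))**2
    return quad(integrand, 0.0, 2.0 * np.pi)[0]

# M(r) = pi r^2 (1 - r^2/4) changes sign between r = 1 and r = 3,
# so a simple zero (the limit-cycle amplitude) lies in between: r = 2
amplitude = brentq(melnikov, 1.0, 3.0)
```

The numeric root agrees with the classical result that the van der Pol limit cycle has amplitude 2 to first order in the perturbation parameter.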
Uncertain identification
Uncertainty about the choice of identifying assumptions is common in causal studies, but is often ignored in empirical practice. This paper considers uncertainty over models that impose different identifying assumptions, which can lead to a mix of point- and set-identified models. We propose performing inference in the presence of such uncertainty by generalizing Bayesian model averaging. The method considers multiple posteriors for the set-identified models and combines them with a single posterior for models that are either point-identified or that impose non-dogmatic assumptions. The output is a set of posteriors (post-averaging ambiguous belief), which can be summarized by reporting the set of posterior means and the associated credible region. We clarify when the prior model probabilities are updated and characterize the asymptotic behavior of the posterior model probabilities. The method provides a formal framework for conducting sensitivity analysis of empirical findings to the choice of identifying assumptions. For example, we find that in a standard monetary model one would need to attach a prior probability greater than 0.28 to the validity of the assumption that prices do not react contemporaneously to a monetary policy shock, in order to obtain a negative response of output to the shock.
Uncertain identification
Uncertainty about the choice of identifying assumptions is common in causal studies, but is often ignored in empirical practice. This paper considers uncertainty over models that impose different identifying assumptions, which, in general, leads to a mix of point- and set-identified models. We propose performing inference in the presence of such uncertainty by generalizing Bayesian model averaging. The method considers multiple posteriors for the set-identified models and combines them with a single posterior for models that are either point-identified or that impose non-dogmatic assumptions. The output is a set of posteriors (post-averaging ambiguous belief) that are mixtures of the single posterior and any element of the class of multiple posteriors, with weights equal to the posterior model probabilities. We suggest reporting the range of posterior means and the associated credible region in practice, and provide a simple algorithm to compute them. We establish that the prior model probabilities are updated when the models are "distinguishable" and/or they specify different priors for reduced-form parameters, and characterize the asymptotic behavior of the posterior model probabilities. The method provides a formal framework for conducting sensitivity analysis of empirical findings to the choice of identifying assumptions. In a standard monetary model, for example, we show that, in order to support a negative response of output to a contractionary monetary policy shock, one would need to attach a prior probability greater than 0.32 to the validity of the assumption that prices do not react contemporaneously to such a shock. The method is general and allows for dogmatic and non-dogmatic identifying assumptions, multiple point-identified models, multiple set-identified models, and nested or non-nested models
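The post-averaging mechanics can be sketched with two models. All numbers below are hypothetical and the model probabilities are taken as given, whereas the paper derives them from the data: mixing a point-identified posterior mean with a set of posterior means yields an interval-valued averaged belief, and one can back out the smallest weight on the point-identifying restriction that signs the averaged upper bound.

```python
import numpy as np

# hypothetical posterior model probabilities (the paper updates these)
p = np.array([0.6, 0.4])

m1 = -0.30         # posterior mean under the point-identified model
set2 = (-0.50, 0.20)  # set of posterior means under the set-identified model

# the post-averaging set of posterior means is the probability-weighted
# mixture, so it is interval-valued whenever a set-identified model
# receives positive weight
avg_set = (p[0] * m1 + p[1] * set2[0],
           p[0] * m1 + p[1] * set2[1])

# sensitivity question of the kind asked in the abstract: the smallest
# weight on the point-identifying restriction that makes the averaged
# upper bound negative
def upper_bound(weight):
    return weight * m1 + (1.0 - weight) * set2[1]

threshold = set2[1] / (set2[1] - m1)  # solves upper_bound(w) = 0
```

With these invented numbers the threshold is 0.4; the paper's 0.32 figure comes from its actual estimated monetary model, not from this toy calculation.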
A SOA-Based Platform to Support Clinical Data Sharing
The eSource Data Interchange Group, part of the Clinical Data Interchange Standards Consortium, proposed five scenarios to guide stakeholders in the development of solutions for the capture of eSource data. The fifth scenario was subdivided into four tiers to adapt the functionality of electronic health records to support clinical research. In order to develop a system belonging to the "Interoperable" Tier, the authors decided to adopt the service-oriented architecture paradigm to support technical interoperability, Health Level Seven Version 3 messages combined with the LOINC (Logical Observation Identifiers Names and Codes) vocabulary to ensure semantic interoperability, and Healthcare Services Specification Project standards to provide process interoperability. The developed architecture enhances the integration between patient-care practice and medical research, allowing clinical data sharing between two hospital information systems and four clinical data management systems/clinical registries. The core is formed by a set of standardized cloud services connected through standardized interfaces, involving client applications. The system was approved by the medical staff, since it reduces the workload for the management of clinical trials. Although this architecture can realize the "Interoperable" Tier, the current solution actually covers the "Connected" Tier, due to local hospital policy restrictions.
Incentive-driven inattention
"Rational inattention" is becoming increasingly prominent in economic modeling, but there is little empirical evidence for its central premise: that the choice of attention results from a cost-benefit optimization. Observational data typically do not allow researchers to infer attention choices from observables. We fill this gap in the literature by exploiting a unique dataset of professional forecasters who update their inflation forecasts on days of their choice. In the data we observe how many forecasters update (the extensive margin of updating), the magnitude of the update (the intensive margin), and the objective of optimization (forecast accuracy). There are also "shifters" in incentives: a contest that increases the benefit of accurate forecasting, and the release of official data that reduces the cost of processing information. These features allow us to link observables to attention and incentive parameters. We structurally estimate a model where the decision to update and the magnitude of the update are endogenous and the latter is the outcome of a rational inattention optimization. The empirical findings provide support for the key implication of rational inattention that information-processing efforts react to changing incentives. Counterfactuals reveal that accuracy is maximized if the contest date coincides with the release of information, aligning higher benefits with lower costs of attention.
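The role of the two incentive shifters can be sketched with a deliberately simple updating rule. This is an invented stand-in, not the paper's structural estimator: a forecaster updates on a given day iff the expected accuracy gain exceeds the cost of processing information, so a contest (higher benefit) and a data release (lower cost) both raise the share of updaters.

```python
import numpy as np

rng = np.random.default_rng(2)

def update_share(benefit, cost, n=100_000):
    """Share of forecasters who update: each draws a heterogeneous
    expected accuracy gain and updates iff gain exceeds the cost."""
    gains = benefit * np.abs(rng.normal(size=n))
    return float(np.mean(gains > cost))

baseline = update_share(benefit=1.0, cost=1.0)
contest = update_share(benefit=2.0, cost=1.0)   # contest raises the benefit
release = update_share(benefit=1.0, cost=0.5)   # data release lowers the cost
```

Both shifters move the extensive margin in the same direction, which is why observing them in the data helps separate attention and incentive parameters.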