Towards a fully automated computation of RG-functions for the 3-d O(N) vector model: Parametrizing amplitudes
Within the framework of field-theoretical description of second-order phase
transitions via the 3-dimensional O(N) vector model, accurate predictions for
critical exponents can be obtained from (resummation of) the perturbative
series of Renormalization-Group functions, which are in turn derived
--following Parisi's approach-- from the expansions of appropriate field
correlators evaluated at zero external momenta.
Such a technique was fully exploited 30 years ago in two seminal works of
Baker, Nickel, Green and Meiron, which led to the knowledge of the
β-function up to the 6-loop level; they succeeded in obtaining a precise
numerical evaluation of all needed Feynman amplitudes in momentum space by
lowering the dimensionality of each integration with a cleverly arranged set
of computational simplifications. In fact, extending this computation is not
straightforward, due both to the factorial proliferation of relevant diagrams
and to the increasing dimensionality of their associated integrals; in any case,
this task can reasonably be carried out only within an automated
environment.
On the road towards the creation of such an environment, we here show how a
strategy closely inspired by that of Nickel and coworkers can be stated in
algorithmic form, and successfully implemented on the computer. As an
application, we plot the minimized distributions of residual integrations for
the sets of diagrams needed to obtain RG-functions to the full 7-loop level;
they provide a good measure of the computational effort that will be
required to improve the currently available estimates of critical exponents.

Comment: 54 pages, 17 figures and 4 tables
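As a schematic reminder of the quantities involved (our addition, not part of the abstract, and with normalization conventions that vary between references), the fixed-dimension RG functions have the form

    \beta(g) = \sum_{L \ge 1} b_L \, g^{L+1}, \qquad \beta(g^*) = 0, \qquad \eta = \gamma_\phi(g^*),

where each coefficient b_L is a weighted sum of L-loop Feynman amplitudes evaluated at zero external momenta, and the remaining exponents follow from anomalous-dimension functions such as \gamma_{\phi^2} at the fixed point g^*. Extending the series from 6 to 7 loops therefore requires every such amplitude for a factorially larger set of diagrams, which is the computational effort the plotted distributions quantify.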
Optimal Uncertainty Quantification
We propose a rigorous framework for Uncertainty Quantification (UQ) in which
the UQ objectives and the assumptions/information set are brought to the
forefront. This framework, which we call \emph{Optimal Uncertainty
Quantification} (OUQ), is based on the observation that, given a set of
assumptions and information about the problem, there exist optimal bounds on
uncertainties: these are obtained as values of well-defined optimization
problems corresponding to extremizing probabilities of failure, or of
deviations, subject to the constraints imposed by the scenarios compatible with
the assumptions and information. In particular, this framework does not
implicitly impose inappropriate assumptions, nor does it repudiate relevant
information. Although OUQ optimization problems are extremely large, we show
that under general conditions they have finite-dimensional reductions. As an
application, we develop \emph{Optimal Concentration Inequalities} (OCI) of
Hoeffding and McDiarmid type. Surprisingly, these results show that
uncertainties in input parameters, which propagate to output uncertainties in
the classical sensitivity analysis paradigm, may fail to do so if the transfer
functions (or probability distributions) are imperfectly known. We show how,
for hierarchical structures, this phenomenon may lead to the non-propagation of
uncertainties or information across scales. In addition, a general algorithmic
framework is developed for OUQ and is tested on the Caltech surrogate model for
hypervelocity impact and on the seismic safety assessment of truss structures,
suggesting the feasibility of the framework for important complex systems. The
introduction of this paper provides both an overview of the paper and a
self-contained mini-tutorial about basic concepts and issues of UQ.

Comment: 90 pages. Accepted for publication in SIAM Review (Expository
Research Papers). See SIAM Review for higher quality figures
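To make the central object concrete (a reconstruction from standard presentations of OUQ, not a quotation from the paper): if \mathcal{A} is the set of pairs (f, \mu) of response functions and probability measures compatible with the given assumptions and information, and failure is the event f(X) \ge a, the optimal bounds are

    \mathcal{L}(\mathcal{A}) = \inf_{(f,\mu) \in \mathcal{A}} \mu[f(X) \ge a]
    \qquad \text{and} \qquad
    \mathcal{U}(\mathcal{A}) = \sup_{(f,\mu) \in \mathcal{A}} \mu[f(X) \ge a].

Any bound tighter than these uses information not contained in \mathcal{A}; any looser bound wastes information that is. The OCI of McDiarmid type, for instance, replace the classical bound \exp(-2t^2/\sum_i c_i^2) with the exact extremum of the failure probability over all models sharing the same mean performance and componentwise oscillations c_i.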
Nonlinear non-extensive approach for identification of structured information
The problem of separating structured information representing phenomena of
differing natures is considered. A structure is assumed to be independent of
the others if it can be represented in a complementary subspace. When the
concomitant subspaces are well separated the problem is readily solvable by a
linear technique. Otherwise, the linear approach fails to correctly
discriminate the required information. Hence, a non-extensive approach is
proposed. The resulting nonlinear technique is shown to be suitable for dealing
with cases that cannot be tackled by the linear one.

Comment: Physica A, in press
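In the well-separated case mentioned above, the linear technique reduces to splitting a signal along complementary subspaces, i.e., to an oblique projection. A minimal sketch of that linear step (illustrative only: the subspaces and data below are made up, and this is not the authors' code):

    import numpy as np

    def oblique_separate(x, V1, V2):
        # Split x = x1 + x2 with x1 in span(V1) and x2 in span(V2).
        # Well conditioned when the subspaces are well separated; as they
        # close on each other the least-squares system degenerates, which
        # is the regime motivating the nonlinear, non-extensive approach.
        V = np.hstack([V1, V2])
        coeffs, *_ = np.linalg.lstsq(V, x, rcond=None)
        k = V1.shape[1]
        return V1 @ coeffs[:k], V2 @ coeffs[k:]

    # Toy usage: two one-dimensional subspaces of R^3.
    V1 = np.array([[1.0], [0.0], [0.0]])
    V2 = np.array([[0.0], [1.0], [0.1]])
    x = 2.0 * V1[:, 0] + 3.0 * V2[:, 0]
    x1, x2 = oblique_separate(x, V1, V2)
    print(x1, x2)  # recovers the two structured components exactly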
Conflicts of Interest in Derivatives Clearing
[Excerpt] The financial crisis implicated the over-the-counter (OTC) derivatives market as a source of systemic risk. In the wake of the crisis, lawmakers sought to reduce systemic risk to the financial system by regulating this market. One of the reforms that Congress introduced in the Dodd-Frank Act (P.L. 111-203) was mandatory clearing of OTC derivatives through clearinghouses, in an effort to remake the OTC market more in the image of the regulated futures exchanges. Clearinghouses require traders to put down cash or liquid assets, called margin, to cover potential losses and prevent any firm from building up a large uncapitalized exposure, as happened in the case of the American International Group (AIG). Clearinghouses thus limit the size of a cleared position based on a firm’s ability to post margin to cover its potential losses.
As lawmakers focused on clearing requirements to reduce systemic risk, concerns also arose as to whether the small number of large swaps dealers in existence (mostly the largest banks) might influence clearinghouses or trading platforms in ways that could undermine the efficacy of the approach. Concerns about conflicts of interest in clearing center on whether, if large swap dealers dominate a clearinghouse, they might directly or indirectly restrict access to the clearinghouse; whether they might limit the scope of derivatives products eligible for clearing; or whether they might influence a clearinghouse to lower margin requirements.
Trading in OTC derivatives is in fact concentrated around a dozen or so major dealers. The Office of the Comptroller of the Currency (OCC) estimated that, as of the third quarter of 2010, five large commercial banks in the United States represented 96% of the banking industry’s total notional amounts of all derivatives; and those five banks represented 81% of the industry’s net credit exposure to derivatives. The first group of Troubled Asset Relief Program (TARP) recipients included nearly all the large derivatives dealers. As a result of the high degree of market concentration, the failure of a large swaps dealer still has the potential to result in the nullification of tens of billions of dollars’ worth of contracts, which could pose a systemic threat.
An amendment proposed in 2009 to H.R. 4173 (the Lynch amendment), which passed the House, would have limited ownership interest in, and governance of, the new derivatives clearinghouses by certain large financial institutions and major swap participants. Sections 726 and 765 in the final version of the Dodd-Frank Act require the Commodity Futures Trading Commission (CFTC) and Securities and Exchange Commission (SEC), respectively, to adopt rules to mitigate conflicts of interest. However, the act allowed the agencies to decide whether those rules would include strict numerical limits on ownership or control. In its proposed rules to mitigate conflicts of interest, published on October 18, 2010, and January 6, 2011, the CFTC did choose to adopt strict ownership limits, along the lines of the Lynch amendment. The SEC’s proposed rule, published on October 13, 2010, does the same.
This report examines how conflicts of interest may arise and analyzes the measures that the CFTC and SEC proposed to address them. It discusses what effect, if any, ownership and control limits may have on derivatives clearing, and whether such limits effectively address the types of conflicts of interest that are of concern to some in the 112th Congress. These rulemakings may interest the 112th Congress as part of its oversight authority over the CFTC and SEC. Trends in clearing and trading derivatives, and the ownership of swap clearinghouses, are discussed in the Appendix.
Identifiability of parameters in latent structure models with many observed variables
While hidden class models of various types arise in many statistical
applications, it is often difficult to establish the identifiability of their
parameters. Focusing on models in which there is some structure of independence
of some of the observed variables conditioned on hidden ones, we demonstrate a
general approach for establishing identifiability utilizing algebraic
arguments. A theorem of J. Kruskal for a simple latent-class model with finite
state space lies at the core of our results, though we apply it to a diverse
set of models. These include mixtures of both finite and nonparametric product
distributions, hidden Markov models and random graph mixture models, and lead
to a number of new results and improvements to old ones. In the parametric
setting, this approach indicates that for such models, the classical definition
of identifiability is typically too strong. Instead, generic identifiability
holds, which implies that the set of nonidentifiable parameters has measure
zero, so that parameter inference is still meaningful. In particular, this
sheds light on the properties of finite mixtures of Bernoulli products, which
have been used for decades despite being known to have nonidentifiable
parameters. In the nonparametric setting, we again obtain identifiability only
when certain restrictions are placed on the distributions that are mixed, but
we explicitly describe the conditions.

Comment: Published at http://dx.doi.org/10.1214/09-AOS689 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
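For reference (our paraphrase of the classical result, not text from the paper), the theorem of J. Kruskal invoked above concerns three-way arrays

    T = \sum_{r=1}^{R} a_r \otimes b_r \otimes c_r

with factor matrices A, B, C: if the Kruskal ranks (the largest k such that every set of k columns is linearly independent) satisfy

    k_A + k_B + k_C \ge 2R + 2,

then the rank-one terms are unique up to permutation and rescaling. The identifiability results above are obtained by casting the parameters of each latent structure model into this three-way form.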