A computer code for forward calculation and inversion of the H/V spectral ratio under the diffuse field assumption
For a quarter of a century, the main characteristics of the
horizontal-to-vertical spectral ratio of ambient noise (HVSRN) have been
used extensively for site-effect assessment. Despite the uncertainties
about the optimum theoretical model to describe these observations, several
schemes for inversion of the full HVSRN curve for near-surface surveying have
been developed over the last decade.
In this work, a computer code for forward calculation of H/V spectra based on
the diffuse field assumption (DFA) is presented and tested. It takes advantage
of the recently established connection between the HVSRN and the elastodynamic
Green's function, which arises from ambient-noise interferometry theory.
The algorithm allows for (1) a natural calculation of the imaginary parts of
the Green's functions by means of suitable contour integrals in the complex
wavenumber plane, and (2) separate calculation of the contributions of
Rayleigh, Love, P-SV and SH waves. The stability of the algorithm at high
frequencies is preserved by adapting Wang's orthonormalization method
to the calculation of dispersion curves, surface-wave medium responses and
body-wave contributions.
This code has been combined with a variety of inversion methods to make up a
powerful tool for passive seismic surveying. Comment: Published in Computers & Geosciences 97, 67-7
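As an illustration of the quantity being modeled, a conventional HVSR estimate from three-component noise records can be sketched as below. This is a minimal sketch of the standard spectral-ratio computation, not the DFA forward code described in the paper; the function name and the synthetic data are hypothetical.

```python
import numpy as np

def hvsr(north, east, vertical, fs):
    """Conventional H/V spectral-ratio estimate from three-component
    ambient-noise records (not the DFA forward model of the paper)."""
    freqs = np.fft.rfftfreq(len(vertical), d=1.0 / fs)
    n_amp = np.abs(np.fft.rfft(north))
    e_amp = np.abs(np.fft.rfft(east))
    v_amp = np.abs(np.fft.rfft(vertical))
    # Quadratic mean of the two horizontal amplitude spectra
    h_amp = np.sqrt((n_amp**2 + e_amp**2) / 2.0)
    return freqs, h_amp / (v_amp + 1e-12)  # guard against zero bins

# Synthetic demo: a vertical component weaker than the horizontals,
# so the ratio sits above 1 on average
fs = 100.0
rng = np.random.default_rng(0)
n_samples = 6000
north = rng.standard_normal(n_samples)
east = rng.standard_normal(n_samples)
vertical = 0.5 * rng.standard_normal(n_samples)
freqs, ratio = hvsr(north, east, vertical, fs)
```

In practice the raw ratio would be smoothed (e.g. over frequency windows) and averaged over many noise segments before interpretation.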
On the Power and Limitations of Branch and Cut
The Stabbing Planes proof system [Paul Beame et al., 2018] was introduced to model the reasoning carried out in practical mixed integer programming solvers. As a proof system, it is powerful enough to simulate Cutting Planes and to refute the Tseitin formulas - certain unsatisfiable systems of linear equations mod 2 - which are canonical hard examples for many algebraic proof systems. In a recent (and surprising) result, Dadush and Tiwari [Daniel Dadush and Samarth Tiwari, 2020] showed that these short refutations of the Tseitin formulas could be translated into quasi-polynomial size and depth Cutting Planes proofs, refuting a long-standing conjecture. This translation raises several interesting questions. First, whether all Stabbing Planes proofs can be efficiently simulated by Cutting Planes. This would allow the substantial analysis done on the Cutting Planes system to be lifted to practical mixed integer programming solvers. Second, whether the quasi-polynomial depth of these proofs is inherent to Cutting Planes.
In this paper we make progress towards answering both of these questions. First, we show that any Stabbing Planes proof with bounded coefficients (SP*) can be translated into Cutting Planes. As a consequence of the known lower bounds for Cutting Planes, this establishes the first exponential lower bounds on SP*. Using this translation, we extend the result of Dadush and Tiwari to show that Cutting Planes has short refutations of any unsatisfiable system of linear equations over a finite field. Like the Cutting Planes proofs of Dadush and Tiwari, our refutations also incur a quasi-polynomial blow-up in depth, and we conjecture that this is inherent. As a step towards this conjecture, we develop a new geometric technique for proving lower bounds on the depth of Cutting Planes proofs. This allows us to establish the first lower bounds on the depth of Semantic Cutting Planes proofs of the Tseitin formulas.
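For concreteness, a Tseitin formula attaches one parity constraint per vertex of a graph, with one variable per edge; the system is unsatisfiable on a connected graph exactly when the total charge is odd. A toy construction with a brute-force satisfiability check can be sketched as follows (the helper names are hypothetical, and the check is exponential, for illustration only):

```python
from itertools import product

def tseitin_constraints(edges, charges):
    """One GF(2) parity constraint per vertex: the XOR of the variables
    of the edges incident to the vertex must equal the vertex's charge."""
    return [
        ([i for i, (u, v) in enumerate(edges) if w in (u, v)], charges[w])
        for w in sorted(charges)
    ]

def satisfiable(edges, charges):
    """Exponential brute-force check -- for toy instances only."""
    cons = tseitin_constraints(edges, charges)
    return any(
        all(sum(assign[i] for i in idx) % 2 == c for idx, c in cons)
        for assign in product([0, 1], repeat=len(edges))
    )

# Triangle graph: unsatisfiable iff the total charge is odd
triangle = [(0, 1), (1, 2), (0, 2)]
odd_charge = satisfiable(triangle, {0: 1, 1: 0, 2: 0})    # False: charges sum to 1
even_charge = satisfiable(triangle, {0: 1, 1: 1, 2: 0})   # True: charges sum to 0 mod 2
```

The unsatisfiable case follows by summing all the constraints: each edge variable appears twice and cancels mod 2, leaving 0 equal to the odd total charge.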
Clustering Scatter Plots Using Data Depth Measures.
Clustering is rapidly becoming a powerful data mining technique, and has been broadly applied to many domains such as bioinformatics and text mining. However, the existing methods can only deal with a data matrix of scalars. In this paper, we introduce a hierarchical clustering procedure that can handle a data matrix of scatter plots. To more accurately reflect the nature of data, we introduce a dissimilarity statistic based on "data depth" to measure the discrepancy between two bivariate distributions without oversimplifying the nature of the underlying pattern. We then combine hypothesis testing with hierarchical clustering to simultaneously cluster the rows and columns of the data matrix of scatter plots. We also propose novel painting metrics and construct heat maps to allow visualization of the clusters. We demonstrate the utility and power of our new clustering method through simulation studies and application to a microbe-host-interaction study.
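To illustrate the idea of a depth-based dissimilarity between two bivariate samples, the sketch below uses Mahalanobis depth as a simple stand-in for the depth statistic in the paper, and a Liu-Singh-style comparison of average depths; the function names and the choice of depth are assumptions, not the authors' actual statistic.

```python
import numpy as np

def mahalanobis_depth(points, sample):
    """Mahalanobis depth of each row of `points` with respect to the
    empirical distribution of `sample`: 1 / (1 + squared Mahalanobis
    distance to the sample mean)."""
    mu = sample.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(sample, rowvar=False))
    d = points - mu
    md2 = np.einsum('ij,jk,ik->i', d, inv_cov, d)
    return 1.0 / (1.0 + md2)

def depth_dissimilarity(x, y):
    """Symmetric dissimilarity: how much shallower each sample looks
    under the other's distribution than under its own."""
    dxx = mahalanobis_depth(x, x).mean()
    dxy = mahalanobis_depth(x, y).mean()
    dyy = mahalanobis_depth(y, y).mean()
    dyx = mahalanobis_depth(y, x).mean()
    return abs(dxx - dxy) + abs(dyy - dyx)

rng = np.random.default_rng(1)
a = rng.standard_normal((200, 2))
b = rng.standard_normal((200, 2))        # same distribution as a
c = rng.standard_normal((200, 2)) + 3.0  # shifted cloud
d_same = depth_dissimilarity(a, b)
d_shift = depth_dissimilarity(a, c)
```

Two draws from the same distribution should yield a near-zero dissimilarity, while the shifted cloud's points are shallow under the other distribution, driving the statistic up.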
Complexity of optimizing over the integers
In the first part of this paper, we present a unified framework for analyzing
the algorithmic complexity of any optimization problem, whether it be
continuous or discrete in nature. This helps to formalize notions like "input",
"size" and "complexity" in the context of general mathematical optimization,
avoiding context-dependent definitions, which are one of the sources of
difference in the treatment of complexity within continuous and discrete
optimization. In the second part of the paper, we employ the language developed
in the first part to study information theoretic and algorithmic complexity of
{\em mixed-integer convex optimization}, which contains as a special case
continuous convex optimization on the one hand and pure integer optimization on
the other. We strive for the maximum possible generality in our exposition.
We hope that this paper contains material that both continuous optimizers and
discrete optimizers find new and interesting, even though almost all of the
material presented is common knowledge in one or the other community. We see
the main merit of this paper as bringing together all of this information under
one unifying umbrella with the hope that this will act as yet another catalyst
for more interaction across the continuous-discrete divide. In fact, our
motivation behind Part I of the paper is to provide a common language for both
communities.
The Convex Hull Problem in Practice: Improving the Running Time of the Double Description Method
The double description method is a simple but widely used algorithm for computation of extreme points in polyhedral sets. One key aspect of its implementation is the question of how to efficiently test extreme points for adjacency. In this dissertation, two significant contributions related to adjacency testing are presented. First, the currently used data structures are revisited and various optimizations are proposed. Empirical evidence is provided to demonstrate their competitiveness. Second, a new adjacency test is introduced. It is a refinement of the well-known algebraic test, featuring a technique for avoiding redundant computations. Its correctness is formally proven. Its superiority in multiple degenerate scenarios is demonstrated through experimental results. Parallel computation is one further aspect of the double description method covered in this work. A recently introduced divide-and-conquer technique is revisited and considerable practical limitations are demonstrated.
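As background for the adjacency question, the standard combinatorial adjacency test in double description implementations compares zero sets of constraints: two extreme rays are adjacent iff no third ray is tight on every constraint tight at both. A minimal, unoptimized sketch (hypothetical names; the dissertation's refined algebraic test differs):

```python
def zero_set(ray, constraints, tol=1e-9):
    """Indices of the constraints a.x >= 0 that are tight (zero) at the ray."""
    return frozenset(
        i for i, a in enumerate(constraints)
        if abs(sum(ai * ri for ai, ri in zip(a, ray))) <= tol
    )

def combinatorially_adjacent(r, s, rays, constraints):
    """r and s are adjacent extreme rays iff no other ray's zero set
    contains the common zero set of r and s."""
    common = zero_set(r, constraints) & zero_set(s, constraints)
    return not any(
        t != r and t != s and common <= zero_set(t, constraints)
        for t in rays
    )

# Cone over a square: z >= |x| and z >= |y|, as four inequalities a.x >= 0
cons = [(-1, 0, 1), (1, 0, 1), (0, -1, 1), (0, 1, 1)]
rays = [(1, 1, 1), (1, -1, 1), (-1, -1, 1), (-1, 1, 1)]
edge = combinatorially_adjacent(rays[0], rays[1], rays, cons)      # neighbouring corners
diagonal = combinatorially_adjacent(rays[0], rays[2], rays, cons)  # opposite corners
```

Neighbouring corner rays of the square cone share a tight facet, so they are adjacent; diagonally opposite rays share no tight constraint, and every other ray trivially contains their empty common zero set, so the test rejects them.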
The Hilbert-Hankel transform and its application to shallow water ocean acoustics
Originally presented as the author's thesis (Sc. D.--Massachusetts Institute of Technology), 1986. Includes bibliographies. Supported in part by the Advanced Research Projects Agency, monitored by ONR under contract no. N00014-81-K-0742. Supported in part by the National Science Foundation under grant ECS-8407285. Michael S. Wengorvitz