233 research outputs found
Smoothing of orbital tracking data: Mission planning, mission analysis and software formulation
The problem created by the presence of wild or outlying data points among orbital tracking data is addressed. Consideration is given to the effects of such outliers on the orbit-determination process, and methods for minimizing or even eliminating these effects are proposed. Some preliminary efforts to implement these new methods are described, and the results obtained thus far are summarized. Based on these ideas and results, recommendations are made for future investigation.
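The report does not specify its outlier-handling method, so as a hedged illustration only, here is one standard robust test for flagging wild points in a residual series: a robust z-score built from the median absolute deviation (MAD). The function name and threshold are hypothetical, not from the report.

```python
import numpy as np

def flag_outliers(residuals, threshold=3.5):
    """Illustrative (not the report's method): mark points whose
    MAD-based robust z-score exceeds `threshold`."""
    r = np.asarray(residuals, dtype=float)
    med = np.median(r)
    mad = np.median(np.abs(r - med))
    if mad == 0.0:
        return np.zeros(r.shape, dtype=bool)
    # 0.6745 scales the MAD to match the standard deviation
    # for Gaussian data.
    robust_z = 0.6745 * (r - med) / mad
    return np.abs(robust_z) > threshold

# Toy tracking residuals with one wild point.
residuals = [0.1, -0.2, 0.05, 12.0, 0.0, -0.1]
print(flag_outliers(residuals))
```

Because the median and MAD are themselves insensitive to the outlier, the wild point cannot mask itself the way it would in a mean/standard-deviation test.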
Measurement-driven Quality Assessment of Nonlinear Systems by Exponential Replacement
We discuss the problem of how to determine the quality of a nonlinear system with respect to a measurement task. Due to amplification, filtering, quantization, and internal noise sources, physical measurement equipment in general exhibits a nonlinear and random input-to-output behaviour. This usually makes it impossible to describe the underlying statistical system model accurately. When the individual operations are all known and deterministic, one can resort to approximations of the input-to-output function. The problem becomes challenging when the processing chain is not exactly known or contains nonlinear random effects. Then one has to approximate the output distribution empirically. Here we show that by measuring the first two sample moments of an arbitrary set of output transformations in a calibrated setup, the output distribution of the actual system can be approximated by an equivalent exponential family distribution. This method has the property that the resulting approximation of the statistical system model is guaranteed to be pessimistic in an estimation-theoretic sense. We show this by proving that an equivalent exponential family distribution in general exhibits a lower Fisher information measure than the original system model. With various examples and a model-matching step, we demonstrate how this estimation-theoretic aspect can be exploited in practice to obtain a conservative measurement-driven quality assessment method for nonlinear measurement systems.
Comment: IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Taipei, Taiwan, 201
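A minimal sketch of the moment-matching idea, with an assumed Gaussian surrogate (the exponential family member fixed by the first two sample moments) and a toy saturating system standing in for unknown hardware; the specific model and numbers are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def nonlinear_system(x, n_samples):
    """Toy stand-in for unknown equipment: saturation plus internal noise."""
    return np.tanh(x + 0.1 * rng.standard_normal(n_samples))

# Calibrated setup: draw output samples at a known input.
samples = nonlinear_system(0.5, 100_000)
mu_hat = samples.mean()    # first sample moment
var_hat = samples.var()    # second central sample moment

# Equivalent Gaussian surrogate N(mu_hat, var_hat). For a location
# parameter its Fisher information is 1 / var_hat, which per the
# paper's result is a pessimistic (conservative) figure of merit
# relative to the true system model.
fisher_surrogate = 1.0 / var_hat
print(mu_hat, fisher_surrogate)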
Application of ARMA modeling to multicomponent signals
This paper investigates the problem of estimating the parameters of a multicomponent signal observed in noise. The process is modeled as a special nonstationary autoregressive moving average (ARMA) process. The parameters of the multicomponent signal are determined from the spectral estimate of the ARMA model. The spectral lines are closely spaced, and the ARMA model must be determined from very short data records. Two high-resolution ARMA algorithms are developed for determining the spectral estimates. The first ARMA algorithm modifies the extended Prony method to account for the nonstationary aspects of noise in the model. For multicomponent signals with a good signal-to-noise ratio (SNR) this algorithm provides excellent results, but for a lower SNR the performance degrades, resulting in a loss in resolution. The second algorithm is based on the work of Cadzow. The algorithm presented overcomes the difficulties of Cadzow's and Kaye's algorithms and provides the coefficients for the complete model, not just the spectral estimate. This algorithm performs well in resolving multicomponent signals when the SNR is low.
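For context, the classical (unmodified) Prony step that the paper builds on can be sketched as follows: fit a linear-prediction polynomial to a short noiseless record of two closely spaced sinusoids, then read the line frequencies off its roots. The signal, record length, and model order here are illustrative choices, not the paper's.

```python
import numpy as np

# Two closely spaced spectral lines at 0.20 and 0.22 cycles/sample.
n = np.arange(64)
x = np.cos(2 * np.pi * 0.20 * n) + np.cos(2 * np.pi * 0.22 * n)

p = 4  # two real sinusoids -> four complex exponentials
# Linear prediction: x[k] = -a1*x[k-1] - ... - ap*x[k-p].
A = np.column_stack([x[p - 1 - i : len(x) - 1 - i] for i in range(p)])
b = x[p:]
a, *_ = np.linalg.lstsq(A, -b, rcond=None)

# Roots of 1 + a1*z^-1 + ... + ap*z^-p give the line frequencies.
roots = np.roots(np.concatenate(([1.0], a)))
freqs = np.sort(np.abs(np.angle(roots)) / (2 * np.pi))
print(freqs)  # pairs near 0.20 and near 0.22
```

In the noiseless case this recovers both lines exactly even from a short record; the paper's contribution is precisely about keeping such resolution when noise is present.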
The Power of Quantum Fourier Sampling
A line of work initiated by Terhal and DiVincenzo, and by Bremner, Jozsa, and Shepherd, shows that quantum computers can efficiently sample from probability distributions that cannot be exactly sampled efficiently on a classical computer unless the polynomial hierarchy (PH) collapses. Aaronson and Arkhipov take this further by considering a distribution that can be sampled efficiently by linear optical quantum computation and that, under two plausible conjectures, cannot even be approximately sampled classically within bounded total variation distance unless the PH collapses.
In this work we use Quantum Fourier Sampling to construct a class of distributions that can be sampled by a quantum computer. We then argue that these distributions cannot be approximately sampled classically, unless the PH collapses, under variants of the Aaronson and Arkhipov conjectures.
In particular, we show a general class of quantumly sampleable distributions, each based on an "Efficiently Specifiable" polynomial, for which a classical approximate sampler implies an average-case approximation. This class of polynomials contains the Permanent but also includes, for example, the Hamiltonian Cycle polynomial and many other familiar #P-hard polynomials. Although our construction, unlike that proposed by Aaronson and Arkhipov, likely requires a universal quantum computer, we are able to use this additional power to weaken the conjectures needed to prove approximate-sampling hardness results.
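To make the flagship example concrete: the Permanent, the canonical #P-hard polynomial in the class above, can be computed exactly by Ryser's inclusion-exclusion formula. The sketch below is the naive O(2^n * n^2) version, and its exponential cost is the point: no polynomial-time classical evaluation is expected.

```python
from itertools import combinations

def permanent(A):
    """Permanent of a square matrix via Ryser's formula:
    perm(A) = (-1)^n * sum over nonempty column subsets S of
    (-1)^{|S|} * prod_i (sum_{j in S} A[i][j])."""
    n = len(A)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

A = [[1, 2], [3, 4]]
print(permanent(A))  # 1*4 + 2*3 = 10
```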
Coastal Altimetry and Applications
This report was prepared by Dr. Michael Anzenhofer of the Geo-Forschungs-Zentrum (GFZ) Potsdam, Germany, while visiting the Department of Civil and Environmental Engineering and Geodetic Science (CEEGS), Ohio State University, during 1997-1998. The visit was hosted by Prof. C.K. Shum of the Department of Civil and Environmental Engineering and Geodetic Science. This work was partially supported by NASA Grant No. 735366, Improved Ocean Radar Altimeter and Scatterometer Data Products for Global Change Studies and Coastal Application, and by a grant from GFZ, Prof. Christoph Reigber, Director.
Faster Algorithms for the Geometric Transportation Problem
Let R u B be a set of n points in R^d, for constant d, where the points of R have integer supplies, the points of B have integer demands, and the total supply equals the total demand. Let d(.,.) be a suitable distance function such as the L_p distance. The transportation problem asks for a map tau : R x B --> N such that sum_{b in B} tau(r,b) = supply(r), sum_{r in R} tau(r,b) = demand(b), and sum_{r in R, b in B} tau(r,b) d(r,b) is minimized. We present three new results for the transportation problem when d(.,.) is any L_p metric:
* For any constant epsilon > 0, an O(n^{1+epsilon}) expected time randomized algorithm that returns a transportation map with expected cost O(log^2(1/epsilon)) times the optimal cost.
* For any epsilon > 0, a (1+epsilon)-approximation in O(n^{3/2}epsilon^{-d}polylog(U)polylog(n)) time, where U is the maximum supply or demand of any point.
* An exact strongly polynomial O(n^2 polylog n)-time algorithm for d = 2.
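To make the objective concrete, here is a toy exact baseline under the assumption of unit supplies and demands, where the problem degenerates to min-cost geometric bipartite matching. The brute force over all n! maps is only for illustration; the point of the paper's results is to replace exactly this kind of exponential baseline with near-linear and O(n^2 polylog n)-time algorithms.

```python
from itertools import permutations
import math

def min_cost_transport(R, B):
    """Brute-force minimum L_2 transportation cost, assuming every
    point has unit supply/demand (i.e. a perfect-matching instance)."""
    n = len(R)
    return min(
        sum(math.dist(R[i], B[pi[i]]) for i in range(n))
        for pi in permutations(range(n))
    )

# Three unit-supply red points facing three unit-demand blue points.
R = [(0, 0), (1, 0), (2, 0)]
B = [(0, 1), (1, 1), (2, 1)]
print(min_cost_transport(R, B))  # straight-across matching, cost 3.0
```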
When do Models Generalize? A Perspective from Data-Algorithm Compatibility
One of the major open problems in machine learning is to characterize generalization in the overparameterized regime, where most traditional generalization bounds become inconsistent (Nagarajan and Kolter, 2019). In many scenarios, their failure can be attributed to obscuring the crucial interplay between the training algorithm and the underlying data distribution. To address this issue, we propose a concept named compatibility, which quantitatively characterizes generalization in a manner that is both data-relevant and algorithm-relevant. By considering the entire training trajectory and focusing on early-stopping iterates, compatibility exploits both the data and the algorithm information and is therefore a more suitable notion for generalization. We validate this by theoretically studying compatibility in the setting of solving overparameterized linear regression with gradient descent. Specifically, we perform a data-dependent trajectory analysis and derive a sufficient condition for compatibility in such a setting. Our theoretical results demonstrate that, in the sense of compatibility, generalization holds under significantly weaker restrictions on the problem instance than in previous last-iterate analyses.
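A minimal sketch of the setting studied, with illustrative sizes (not the paper's analysis): overparameterized linear regression with d >> n, solved by full-batch gradient descent started from zero, so the iterates considered along the training trajectory stay in the row space of the data matrix, the kind of algorithm-and-data interplay that compatibility is meant to capture.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                     # n samples, d >> n features
w_star = np.zeros(d)
w_star[:5] = 1.0                   # sparse ground-truth signal
X = rng.standard_normal((n, d)) / np.sqrt(d)
y = X @ w_star + 0.01 * rng.standard_normal(n)

w = np.zeros(d)                    # start at zero: iterate stays in row space of X
lr = 0.5
for t in range(500):
    w -= lr * X.T @ (X @ w - y) / n   # full-batch gradient step

train_loss = np.mean((X @ w - y) ** 2)
print(train_loss)
```

Stopping the loop earlier yields the early-stopped iterates the abstract refers to; which stopping times generalize, for which data distributions, is exactly what the compatibility condition is about.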