Generalization of polynomial chaos for estimation of angular random variables
“The state of a dynamical system will rarely be known perfectly, requiring the variable elements in the state to become random variables. More accurate estimation of the uncertainty in the random variable results in a better understanding of how the random variable will behave at future points in time. Many methods exist for representing a random variable within a system, including a polynomial chaos expansion (PCE), which expresses a random variable as a linear combination of basis polynomials.
Polynomial chaos expansions have been studied at length for the joint estimation of states that are purely translational (i.e. described in Cartesian space); however, many dynamical systems also include non-translational states, such as angles. Many methods of quantifying the uncertainty in a random variable are not capable of representing angular random variables on the unit circle and instead rely on projections onto a tangent line. Any element of any space V can be quantified with a PCE if V is spanned by the expansion’s basis polynomials. This implies that, as long as basis polynomials span the unit circle, an angular random variable (either real or complex) can be quantified using a PCE.
A generalization of the PCE is developed allowing for the representation of complex valued random variables, which includes complex representations of angles. Additionally, it is proposed that real valued polynomials that are orthogonal with respect to measures on the real valued unit circle can be used as basis polynomials in a chaos expansion, which reduces the additional numerical burden imposed by complex valued polynomials. Both complex and real unit circle PCEs are shown to accurately estimate angular random variables in independent and correlated multivariate dynamical systems”--Abstract, page iii
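The abstract above describes a PCE as a linear combination of basis polynomials. As an illustrative sketch only (not the thesis's method, which concerns angular variables on the unit circle), a one-dimensional Hermite PCE of Y = exp(ξ), with ξ a standard normal "germ," can be built by Monte Carlo projection onto the orthogonal basis; the sample count and the example variable are assumptions made here:

```python
import numpy as np

# Probabilists' Hermite polynomials, orthogonal with respect to the
# standard normal measure (the classical Wiener chaos basis)
hermite = [
    lambda x: np.ones_like(x),   # He_0
    lambda x: x,                 # He_1
    lambda x: x**2 - 1,          # He_2
    lambda x: x**3 - 3 * x,      # He_3
]
norms = [1.0, 1.0, 2.0, 6.0]     # E[He_i(xi)^2] = i!

rng = np.random.default_rng(0)
xi = rng.standard_normal(200_000)  # samples of the germ
y = np.exp(xi)                     # example random variable Y = exp(xi)

# Projection: c_i = E[Y * He_i(xi)] / E[He_i(xi)^2]
coeffs = [np.mean(y * He(xi)) / n for He, n in zip(hermite, norms)]

# For Y = exp(xi) the exact coefficients are e^{1/2} / i!;
# coeffs[0] (the mean of Y) should be near exp(0.5) ~ 1.6487
print(coeffs)
```

The same projection idea extends to other orthogonal bases; the thesis's contribution is choosing polynomials orthogonal with respect to measures on the unit circle so that the expansion can represent angular random variables directly.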
Uncertainty propagation and conjunction assessment for resident space objects
Presently, the catalog of Resident Space Objects (RSOs) in Earth orbit tracked by the U.S. Space Surveillance Network (SSN) is greater than 21,000 objects. The size of the catalog continues to grow due to an increasing number of launches, improved tracking capabilities, and in some cases, collisions. Simply propagating the states of these RSOs is a computational burden, while additionally propagating the uncertainty distributions of the RSOs and computing collision probabilities increases the computational burden by at least an order of magnitude.
Tools are developed that propagate the uncertainty of RSOs with Gaussian initial uncertainty from epoch until a close approach. The number of possible elements in a Gaussian Mixture Model (GMM), stored in the form of a precomputed library, has been increased, and the strategy for multivariate problems has been formalized. The accuracy of a GMM is increased by propagating each element with a Polynomial Chaos Expansion (PCE). Both techniques reduce the number of function evaluations required for uncertainty propagation and yield a sliding scale in which accuracy can be improved at the cost of increased computation time. A parallel implementation of the accurate benchmark Monte Carlo (MC) technique has been developed on the Graphics Processing Unit (GPU) that is capable of using samples from any uncertainty propagation technique to compute the collision probability. The GPU MC tool delivers speedups of up to two orders of magnitude compared to a serial CPU implementation. Finally, a CPU implementation of the collision probability computations using Cartesian coordinates requires orders of magnitude fewer function evaluations than an MC run.
Fast computation of the inherently nonlinear growth of the uncertainty distribution in orbital mechanics, together with accurate computation of the collision probability, is essential for maintaining a future space catalog and for preventing uncontrolled growth of the debris population. The uncertainty propagation and collision probability computation methods and algorithms developed here are capable of running on personal workstations and stand to benefit users ranging from national space surveillance agencies to private satellite operators. The developed techniques are also applicable to many general uncertainty quantification and nonlinear estimation problems.
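The benchmark MC collision-probability computation described above can be illustrated with a minimal serial sketch: sample the relative position of two objects at close approach from a Gaussian distribution and count the fraction of samples falling inside the combined hard-body radius. The 2D geometry, covariance, and radius below are invented for illustration and are not taken from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical close-approach geometry in the encounter plane
mean = np.array([0.05, 0.0])          # km, assumed mean miss distance
cov = np.diag([0.02**2, 0.03**2])     # km^2, assumed position covariance
R = 0.02                              # km, assumed combined hard-body radius

# Monte Carlo estimate: fraction of relative-position samples
# that fall inside the combined radius
samples = rng.multivariate_normal(mean, cov, size=n)
p_collision = np.mean(np.linalg.norm(samples, axis=1) < R)
print(p_collision)
```

A GPU implementation of this kind of estimator parallelizes naturally, since each sample's inside/outside test is independent, which is what makes the order-of-magnitude speedups reported above plausible.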
Mathematics & Statistics 2017 APR Self-Study & Documents
UNM Mathematics & Statistics APR self-study report, review team report, response report, and initial action plan for Spring 2017, fulfilling requirements of the Higher Learning Commission.
Handbook of Mathematical Geosciences
This Open Access handbook, published on the occasion of the IAMG's 50th anniversary, presents a compilation of invited path-breaking research contributions by award-winning geoscientists who have been instrumental in shaping the IAMG. It contains 45 chapters, categorized broadly into five parts: (i) theory, (ii) general applications, (iii) exploration and resource estimation, (iv) reviews, and (v) reminiscences. The chapters cover related topics such as mathematical geosciences, mathematical morphology, geostatistics, fractals and multifractals, spatial statistics, multipoint geostatistics, compositional data analysis, informatics, geocomputation, numerical methods, and chaos theory in the geosciences.
Better predictions when models are wrong or underspecified
Many statistical methods rely on models of reality in order to learn from data and to make predictions about future data. By necessity, these models usually do not match reality exactly, but are either wrong (none of the hypotheses in the model provides an accurate description of reality) or underspecified (the hypotheses in the model describe only part of the data). In this thesis, we discuss three scenarios involving models that are wrong or underspecified. In each case, we find that standard statistical methods may fail, sometimes dramatically, and present different methods that continue to perform well even if the models are wrong or underspecified. The first two of these scenarios involve regression problems and investigate AIC (Akaike's Information Criterion) and Bayesian statistics. The third scenario has the famous Monty Hall problem as a special case, and considers the question of how we can update our belief about an unknown outcome given new evidence when the precise relation between outcome and evidence is unknown.
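The classical Monty Hall problem mentioned above has a well-known answer (switching doors wins with probability 2/3), which a short simulation can verify. The sketch below covers only the classical game, not the thesis's generalization to an unknown outcome-evidence relation; the function name and trial count are choices made here:

```python
import random

def monty_hall(switch, trials=100_000, seed=0):
    """Simulate the classical Monty Hall game and return the win rate."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)    # prize door, placed uniformly
        pick = rng.randrange(3)   # contestant's initial pick
        # Host opens a goat door that is neither the pick nor the car
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=True))    # close to 2/3
print(monty_hall(switch=False))   # close to 1/3
```

The thesis's point is subtler: this clean answer depends on knowing exactly how the host's evidence relates to the outcome, and the work studies how to update beliefs when that relation is unknown.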