Statistical analysis of SSME system data
A statistical methodology for improving the accuracy of Space Shuttle Main Engine (SSME) performance prediction is proposed. The methodology was intended to be used in conjunction with existing SSME performance prediction computer codes, both to improve parameter prediction accuracy and to quantify that accuracy. After a review of the related literature, however, the researchers concluded that the problem as originally posed would require covering linear and nonlinear system theory, measurement theory, statistics, and stochastic estimation. Since state-space theory is the foundation for a more complete study of each of these areas, they chose to narrow the emphasis to the more specialized topic of state-vector estimation procedures. State-vector estimation was also selected because of NASA's current and future interest in SSME performance evaluation: improved procedures are sought for evaluating actual SSME post-flight performance as well as the post-static-test performance of a single engine, and analytical methods currently under investigation may improve test-stand failure detection. This paper considers post-flight/test state-variable reconstruction from observations made on the output of the Space Shuttle propulsion system. Rogers used the Kalman filtering procedure to reconstruct the state variables of the Space Shuttle propulsion system. One objective of this paper is to give the general setup of the Kalman filter and its connection to linear regression. A second objective is to examine the reconstruction methodology for application to reconstructing the state vector of a single SSME from static test-firing data.
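The state-vector reconstruction described above rests on the standard Kalman filter recurrence. The sketch below shows that recurrence for a generic linear state-space model; the comment in the update step notes the linear-regression connection the abstract mentions (the gain acts like a generalized least-squares coefficient). The model, dimensions, and parameters are illustrative, not the SSME model.

```python
import numpy as np

def kalman_filter(y, F, H, Q, R, x0, P0):
    """Reconstruct state estimates x_t from observations y_t for the
    linear state-space model x_t = F x_{t-1} + w_t, y_t = H x_t + v_t,
    with w_t ~ N(0, Q) and v_t ~ N(0, R)."""
    x, P = x0, P0
    estimates = []
    for yt in y:
        # Predict: propagate the state and its covariance forward.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update: the gain K weighs the innovation (yt - H x) much like
        # a generalized least-squares regression coefficient.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (yt - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```

With a nearly constant scalar state and noisy measurements, the filtered estimate converges toward the underlying level, which is the essence of reconstructing a state variable from test-firing observations.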
A Dynamic Approach to Linear Statistical Calibration with an Application in Microwave Radiometry
The problem of statistically calibrating a measuring instrument can be framed in both a statistical context and an engineering context. In the first, the problem is treated by distinguishing between the 'classical' approach and the 'inverse' regression approach. Both are static models, used to estimate exact measurements from measurements affected by error. In the engineering context, the variables of interest are considered at the time at which they are observed. The Bayesian time-series method of Dynamic Linear Models (DLMs) can be used to monitor the evolution of the measurements, thus introducing a dynamic approach to statistical calibration. The research presented here employs Bayesian methodology to perform statistical calibration; the DLM framework is used to capture time-varying parameters that may be changing or drifting over time. Two separate DLM-based models are presented in this paper. A simulation study is conducted in which the two models are compared with several well-known 'static' calibration approaches from the literature, from both the frequentist and Bayesian perspectives. The focus of the study is to understand how well the dynamic statistical calibration methods perform under various signal-to-noise ratios, r. The posterior distributions of the estimated calibration points, as well as their 95% coverage intervals, are compared through statistical summaries. The dynamic methods are then applied to a microwave radiometry data set.
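To make the idea of dynamic calibration concrete, the sketch below filters a calibration line y_t = a_t + b_t x_t whose coefficients drift as a random walk, then inverts the fitted line at a new response, classical-calibration style. This is a minimal illustration of the DLM mechanism, not either of the paper's two models; the function name and the evolution/observation variances W and V are assumptions.

```python
import numpy as np

def dlm_calibrate(x_train, y_train, y_new, W=1e-4, V=0.01):
    """Filter the dynamic calibration line y_t = a_t + b_t * x_t + v_t,
    where the coefficients (a_t, b_t) follow a random walk with
    evolution variance W, then invert the line at y_new."""
    theta = np.zeros(2)           # (intercept a, slope b)
    P = np.eye(2) * 1e3           # vague prior covariance
    for x, y in zip(x_train, y_train):
        P = P + W * np.eye(2)     # evolution step: allow drift
        H = np.array([1.0, x])    # time-varying regression vector
        S = H @ P @ H + V         # one-step forecast variance
        K = P @ H / S             # adaptive gain
        theta = theta + K * (y - H @ theta)
        P = P - np.outer(K, H @ P)
    a, b = theta
    # Classical-style inversion at the most recent coefficient estimate.
    return (y_new - a) / b
```

When the line is in fact static, the filtered coefficients settle near the true values and the inversion recovers the calibration point; when the instrument drifts, the random-walk evolution lets the line track it.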
MOMA: Visual Mobile Marker Odometry
In this paper, we present a cooperative odometry scheme based on the detection of mobile markers, in line with the idea of cooperative positioning for multiple robots [1]. To this end, we introduce a simple optimization scheme that realizes visual mobile marker odometry via accurate fixed-marker-based camera positioning, and we analyse the characteristics of the errors inherent to the method compared with classical fixed-marker-based navigation and visual odometry. In addition, we provide a specific UAV-UGV configuration that allows continuous movement of the UAV without stops, and a minimal caterpillar-like configuration that works with a single UGV. Finally, we present a real-world implementation and evaluation of the proposed UAV-UGV configuration.
Semiparametric Estimation of Task-Based Dynamic Functional Connectivity on the Population Level
Dynamic functional connectivity (dFC) estimates time-dependent associations between pairs of brain region time series as typically acquired during functional MRI. dFC changes are most commonly quantified by pairwise correlation coefficients between the time series within a sliding window. Here, we applied a recently developed bootstrap-based technique (Kudela et al., 2017) to robustly estimate subject-level dFC and its confidence intervals in a task-based fMRI study (24 subjects who tasted their most frequently consumed beer and Gatorade as an appetitive control). We then combined information across subjects and scans using semiparametric mixed models to obtain a group-level dFC estimate for each pair of brain regions, each flavor, and the difference between flavors. The proposed approach relies on the estimated group-level dFC accounting for the complex correlation structures of the fMRI data, multiple repeated observations per subject, the experimental design, and subject-specific variability. It also provides condition-specific dFC and confidence intervals for the whole brain at the group level. As a summary dFC metric, we used the proportion of time during which the estimated associations were either significantly positive or significantly negative. For both flavors, our fully data-driven approach yielded regional associations that reflected known, biologically meaningful brain organization as shown in prior work, and that closely resembled resting-state networks (RSNs). Specifically, beer flavor-potentiated associations were detected between several reward-related regions, including the right ventral striatum (VST), lateral orbitofrontal cortex, and ventral anterior insular cortex (vAIC). The enhancement of the right VST-vAIC association by a taste of beer independently validated the main activation-based finding (Oberlin et al., 2016). Most notably, our novel dFC methodology uncovered numerous associations undetected by the traditional static FC analysis.
The data-driven, novel dFC methodology presented here can be used for a wide range of task-based fMRI designs to estimate dFC at multiple levels (group-, individual-, and task-specific), utilizing a combination of well-established statistical methods.
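The sliding-window correlation that the abstract names as the most common dFC quantification can be sketched in a few lines, together with the paper's summary metric (the proportion of windows in which the association exceeds a threshold). This is the generic windowed-correlation baseline, not the bootstrap or mixed-model machinery of the study; the window length and threshold are assumptions.

```python
import numpy as np

def sliding_window_fc(ts1, ts2, window):
    """Pairwise dFC: Pearson correlation between two region time
    series within each sliding window (stride 1)."""
    n = len(ts1) - window + 1
    return np.array([np.corrcoef(ts1[i:i + window], ts2[i:i + window])[0, 1]
                     for i in range(n)])

def association_time_fraction(dfc, threshold=0.5):
    """Summary metric in the spirit of the paper: fraction of windows
    in which the association magnitude exceeds the threshold."""
    return np.mean(np.abs(dfc) > threshold)
```

In the actual study the threshold would be replaced by significance against bootstrap confidence intervals; the simple magnitude cutoff here just illustrates the 'proportion of time' summary.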
Dynamic VaR models and the Peaks over Threshold method for market risk measurement: an empirical investigation during a financial crisis
This paper presents a backtesting exercise involving several VaR models for measuring market risk in a dynamic context. The focus is on the comparison of standard dynamic VaR models, ad hoc fat-tailed models and the dynamic Peaks over Threshold (POT) procedure for VaR estimation with different volatility specifications. We introduce three different stochastic processes for the losses: two of them are of the GARCH-type and one is of the EWMA-type. In order to assess the performance of the models, we implement a backtesting procedure using the log-losses of a diversified sample of 15 financial assets. The backtesting analysis covers the period March 2004 - May 2009, thus including the turmoil period corresponding to the subprime crisis. The results show that the POT approach and a Dynamic Historical Simulation method, both combined with the EWMA volatility specification, are particularly effective at high VaR coverage probabilities and outperform the other models under consideration. Moreover, VaR measures estimated with these models react quickly to the turmoil of the last part of the backtesting period, so that they seem to be efficient in high-risk periods as well.
Keywords: market risk, Extreme Value Theory, Peaks over Threshold, Value at Risk, fat tails
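One of the two best-performing combinations reported above, Dynamic Historical Simulation with an EWMA volatility specification, can be sketched as follows: standardize losses by their EWMA volatility, take the empirical quantile of the standardized losses, and rescale by the one-step volatility forecast. This is a generic sketch of that idea under the RiskMetrics-style lambda = 0.94, not the paper's exact backtesting setup.

```python
import numpy as np

def ewma_vol(losses, lam=0.94):
    """EWMA volatility: s2_t = lam * s2_{t-1} + (1 - lam) * l_{t-1}^2,
    initialized at the sample variance."""
    s2 = np.empty(len(losses))
    s2[0] = np.var(losses)
    for t in range(1, len(losses)):
        s2[t] = lam * s2[t - 1] + (1 - lam) * losses[t - 1] ** 2
    return np.sqrt(s2)

def dynamic_hs_var(losses, alpha=0.99, lam=0.94):
    """Dynamic Historical Simulation VaR: standardize losses to unit
    EWMA volatility, take the empirical alpha-quantile of the
    standardized losses, then scale by the next-period volatility
    forecast."""
    vol = ewma_vol(losses, lam)
    z = losses / vol                                # devolatilized losses
    next_var = lam * vol[-1] ** 2 + (1 - lam) * losses[-1] ** 2
    return np.quantile(z, alpha) * np.sqrt(next_var)
```

Because the empirical quantile is taken on devolatilized losses and then rescaled by the current volatility forecast, the resulting VaR reacts quickly when volatility spikes, which is the property the backtest rewards during the crisis period.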