460 research outputs found
Numerical Computation of Multivariate Normal and Multivariate t Probabilities over Ellipsoidal Regions
An algorithm for the computation of multivariate normal and multivariate t probabilities over general hyperellipsoidal regions is given. A special case is the calculation of probabilities for central and noncentral F and χ² distributions. A FORTRAN 90 program, MVELPS.FOR, incorporates the algorithm.
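The χ² special case mentioned above is easy to illustrate: for X ~ N(0, Σ) in k dimensions, the probability of the ellipsoid {x : x′Σ⁻¹x ≤ c} equals the χ²_k CDF at c. A minimal Monte Carlo check of this identity (not the MVELPS algorithm itself, which integrates over general hyperellipsoids; the covariance and threshold below are arbitrary):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
k = 3                                  # dimension
# An arbitrary positive definite covariance matrix Sigma.
A = rng.standard_normal((k, k))
Sigma = A @ A.T + k * np.eye(k)
c = 5.0

# Monte Carlo: draw from N(0, Sigma) and test membership in the
# ellipsoid {x : x' Sigma^{-1} x <= c}.
X = rng.multivariate_normal(np.zeros(k), Sigma, size=200_000)
Sigma_inv = np.linalg.inv(Sigma)
quad = np.einsum("ij,jk,ik->i", X, Sigma_inv, X)   # quadratic form per sample
p_mc = np.mean(quad <= c)

# Central case: the probability is the chi-square CDF with k = 3
# degrees of freedom, which has the closed form below.
p_exact = math.erf(math.sqrt(c / 2)) - math.sqrt(2 * c / math.pi) * math.exp(-c / 2)
print(p_mc, p_exact)   # the two should agree to ~2 decimals
```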
Calculation of Critical Values for Somerville's FDR Procedures
A Fortran 95 program has been written to calculate critical values for the step-up and step-down FDR procedures developed by Somerville (2004). The program allows for arbitrary selection of number of hypotheses, FDR rate, one- or two-sided hypotheses, common correlation coefficient of the test statistics and degrees of freedom. An MCV (minimum critical value) may be specified, or the program will calculate a specified number of critical values or steps in an FDR procedure. The program can also be used to efficiently ascertain an upper bound to the number of hypotheses which the procedure will reject, given either the values of the test statistics, or their p values. Limiting the number of steps in an FDR procedure can be used to control the number or proportion of false discoveries (Somerville and Hemmelmann 2007). Using the program to calculate the largest critical values makes possible efficient use of the FDR procedures for very large numbers of hypotheses.
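The program itself is Fortran 95 and not reproduced here, but the way a set of ordered critical values drives a step-down procedure can be sketched as follows. The critical values below are made up for illustration; a real application would use values computed by the program for the chosen FDR rate, correlation and degrees of freedom:

```python
import numpy as np

def step_down_rejections(t_stats, crit_values):
    """Generic step-down multiple-testing procedure.

    Statistics are compared largest-first against ordered critical
    values c_1 >= c_2 >= ...; testing stops at the first
    non-rejection, and the number of rejections is returned.
    """
    t_sorted = np.sort(t_stats)[::-1]        # largest statistic first
    m = min(len(t_sorted), len(crit_values))
    rejections = 0
    for i in range(m):
        if t_sorted[i] >= crit_values[i]:
            rejections += 1
        else:
            break                            # step-down stops here
    return rejections

# Hypothetical one-sided test statistics and critical values.
t_stats = np.array([4.1, 3.2, 2.9, 1.4, 0.7])
crit = np.array([3.0, 2.8, 2.6, 2.5, 2.4])
print(step_down_rejections(t_stats, crit))   # -> 3
```

Truncating `crit` to its first s entries corresponds to limiting the procedure to s steps, which is the mechanism described above for bounding the number or proportion of false discoveries.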
FORTRAN 90 and SAS-IML Programs for Computation of Critical Values for Multiple Testing and Simultaneous Confidence Intervals
See paper for mathematical introduction.
A Fortran 90 Program for Evaluation of Multivariate Normal and Multivariate t Integrals Over Convex Regions
Let X′ = (X₁, X₂, …, X_k) have the multivariate normal distribution f(X) = MVN(μ, Σσ²), where Σ is a known positive definite matrix and σ² is a constant. Many problems in statistics require the integral of f(X) over some convex region A, that is, P = ∫_A f(X) dX.
If σ² is known then, without loss of generality, set μ = 0, σ = 1 and let Σ be the correlation matrix. For the case where the region A is rectangular, the problem has been addressed by many authors, including Gupta (1963), Milton (1972), Schervish (1984), Deak (1986), Wang and Kennedy (1990, 1992), Olson and Weissfeld (1991), Drezner (1992) and Genz (1992, 1993). However, the regions of integration for many statistical applications, for example multiple comparisons, are not rectangular.
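The quantity P = ∫_A f(X) dX over a non-rectangular convex region can always be approximated by naive Monte Carlo, which is a useful baseline even though the Fortran 90 program uses a purpose-built algorithm. A sketch for a polyhedral region A = {x : a_i′x ≤ b_i}, of the contrast-bounded kind that arises in multiple comparisons (the specific constraints are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 2
Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])       # correlation matrix (mu = 0, sigma = 1)

# Convex region A = {x : a_i' x <= b_i}: here |x1 - x2| <= 1.5
# and x1 + x2 <= 2 -- a non-rectangular region.
A_mat = np.array([[ 1.0, -1.0],
                  [-1.0,  1.0],
                  [ 1.0,  1.0]])
b = np.array([1.5, 1.5, 2.0])

# Naive Monte Carlo: sample from f(X) and count points inside A.
X = rng.multivariate_normal(np.zeros(k), Sigma, size=500_000)
inside = np.all(X @ A_mat.T <= b, axis=1)
P = inside.mean()
print(round(P, 3))   # ~0.76 for this region (exact value ~0.759)
```

For this particular region the answer can be checked analytically: X₁−X₂ and X₁+X₂ are independent normals here, so P = [2Φ(1.5)−1]·Φ(2/√3) ≈ 0.759.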
A Package to Study the Performance of Step-Up and Step-Down Test Procedures
The package can be used to analyze the performance of step-up and step-down procedures. It can be used to compare powers, to calculate the false discovery rate, to study the effects of reduced-step procedures, and to calculate P[U ≤ k], where U is the number of rejected true hypotheses. It can also determine the maximum number of steps that can be made while still guaranteeing, with a given probability, that the number of false rejections will not exceed some specified number. The test statistics are assumed to have a multivariate t distribution. Examples are included.
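A quantity like P[U ≤ k] can in principle be estimated by simulation: repeatedly draw test statistics under a known truth assignment, apply the procedure, and count how often at most k true hypotheses are rejected. A hedged numpy sketch, with independent normal statistics and invented critical values standing in for the multivariate t setting the package actually handles:

```python
import numpy as np

rng = np.random.default_rng(2)
m, m_true = 10, 6                  # hypotheses; the first m_true nulls are true
crit = np.linspace(3.0, 2.0, m)    # hypothetical step-down critical values
n_sim, k = 20_000, 1               # k = tolerated number of false rejections

def step_down_rejected(t, crit):
    """Indices of hypotheses rejected by a step-down procedure."""
    order = np.argsort(t)[::-1]    # largest statistic first
    r = 0
    for i in range(len(t)):
        if t[order[i]] >= crit[i]:
            r += 1
        else:
            break                  # step-down stops at first non-rejection
    return order[:r]

hits = 0
for _ in range(n_sim):
    t = rng.standard_normal(m)
    t[m_true:] += 3.0              # false nulls have shifted means
    rejected = step_down_rejected(t, crit)
    U = np.sum(rejected < m_true)  # number of rejected TRUE hypotheses
    hits += (U <= k)
p_est = hits / n_sim
print(p_est)                       # estimated P[U <= k]
```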
Earthquake Cycle Modelling of Multi-segmented Faults: Dynamic Rupture and Ground Motion Simulation of the 1992 M_w 7.3 Landers Earthquake
We perform earthquake cycle simulations with the goal of studying the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. The 1992 M_w 7.3 Landers earthquake is chosen as a target earthquake to validate our methodology. The model includes the fault geometry for the three-segmented Landers rupture from the SCEC community fault model, extended at both ends to a total length of 200 km, and limited to a depth of 15 km. We assume the faults are governed by rate-and-state (RS) friction, with a heterogeneous, correlated spatial distribution of the characteristic weakening distance Dc. Multiple earthquake cycles on this non-planar fault system are modeled with a quasi-dynamic solver based on the boundary element method, substantially accelerated by a hierarchical-matrix method. The resulting seismic ruptures are recomputed using a fully dynamic solver based on the spectral element method, with the same RS friction law. The simulated earthquakes nucleate on different sections of the fault and include events similar to the M_w 7.3 Landers earthquake. We obtain slip velocity functions, rupture times and magnitudes that can be compared to seismological observations. The simulated ground motions are validated by comparison of simulated and recorded response spectra.
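The standard rate-and-state friction law referenced here is μ = μ₀ + a·ln(V/V₀) + b·ln(V₀θ/Dc), with the aging-law state evolution dθ/dt = 1 − Vθ/Dc; friction is velocity-weakening when b > a. A minimal sketch of these two relations (parameter values are generic textbook choices, not those of the study):

```python
import numpy as np

def rs_friction(V, theta, mu0=0.6, a=0.010, b=0.015, V0=1e-6, Dc=0.01):
    """Rate-and-state friction coefficient (standard formulation)."""
    return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

def aging_law(V, theta, Dc=0.01):
    """Aging-law state evolution rate: d(theta)/dt = 1 - V*theta/Dc."""
    return 1.0 - V * theta / Dc

# At steady state theta_ss = Dc / V the state-evolution rate vanishes,
# and at V = V0 the friction reduces to mu0.
V = 1e-6
theta_ss = 0.01 / V
print(rs_friction(V, theta_ss))   # ~0.6
print(aging_law(V, theta_ss))     # ~0.0
```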
A Skeleton-based Approach For Rock Crack Detection Towards A Climbing Robot Application
Conventional wheeled robots are unable to traverse scientifically interesting, but dangerous, cave environments. Multi-limbed climbing robot designs, such as ReachBot, are able to grasp irregular surface features and execute climbing motions to overcome obstacles, given suitable grasp locations. To support grasp site identification, we present a method for detecting rock cracks and edges, the SKeleton Intersection Loss (SKIL). SKIL is a loss designed for thin object segmentation that leverages the skeleton of the label. A dataset of rock face images was collected, manually annotated, and augmented with generated data. A new group of metrics, LineAcc, has been proposed for thin object segmentation such that the impact of the object width on the score is minimized. In addition, the metric is less sensitive to translation, which can often lead to a score of zero when computing classical metrics such as Dice on thin objects. Our fine-tuned models outperform previous methods on similar thin object segmentation tasks, such as blood vessel segmentation, and show promise for integration onto a robotic system.
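The translation sensitivity of Dice on thin objects, which motivates both SKIL and the LineAcc metrics, is easy to demonstrate: a one-pixel-wide line shifted by a single pixel has zero overlap with its label, so Dice drops to 0 even though the prediction is visually near-perfect. A small numpy illustration (not the authors' code):

```python
import numpy as np

def dice(pred, label, eps=1e-8):
    """Classical Dice coefficient between two binary masks."""
    inter = np.logical_and(pred, label).sum()
    return 2.0 * inter / (pred.sum() + label.sum() + eps)

# A 1-pixel-wide vertical "crack" label...
label = np.zeros((32, 32), dtype=bool)
label[:, 16] = True

# ...and a prediction that is perfect except for a 1-pixel shift.
pred = np.zeros((32, 32), dtype=bool)
pred[:, 17] = True

print(dice(pred, label))    # -> 0.0: no overlap at all
print(dice(label, label))   # ~1.0 for the exact mask
```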