
    An investigation of DTNS2D for use as an incompressible turbulence modelling test-bed

    This paper documents an investigation of a two-dimensional, incompressible Navier-Stokes solver for use as a test-bed for turbulence modelling. DTNS2D is the code under consideration for use at the Center for Modelling of Turbulence and Transition (CMOTT). The code was created by Gorski at the David Taylor Research Center and incorporates the pseudo-compressibility method. Two laminar benchmark flows are used to measure the performance and implementation of the method: the classical Blasius boundary-layer solution is used to validate the flat-plate flow, while experimental data are used to validate the backward-facing step flow. Velocity profiles, convergence histories, and reattachment lengths are used to quantify these calculations. The organization and adaptability of the code are also examined in light of its role as a numerical test-bed.
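    For reference, the Blasius benchmark used above is easy to reproduce numerically. The sketch below is our illustration, not part of DTNS2D: it solves the Blasius equation f''' + 0.5 f f'' = 0 with f(0) = f'(0) = 0 and f'(inf) = 1 by a shooting method, bisecting on the wall shear f''(0).

```python
# Shooting-method solution of the Blasius equation (our sketch, not DTNS2D):
#   f''' + 0.5*f*f'' = 0,  f(0) = f'(0) = 0,  f'(inf) = 1.

def rk4_step(state, h):
    """One classical RK4 step for the state (f, f', f'')."""
    def deriv(s):
        f, fp, fpp = s
        return (fp, fpp, -0.5 * f * fpp)
    k1 = deriv(state)
    k2 = deriv([s + 0.5 * h * k for s, k in zip(state, k1)])
    k3 = deriv([s + 0.5 * h * k for s, k in zip(state, k2)])
    k4 = deriv([s + h * k for s, k in zip(state, k3)])
    return [s + h / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def shoot(fpp0, eta_max=10.0, h=0.01):
    """Integrate with a guessed wall shear f''(0); return f'(eta_max)."""
    state = [0.0, 0.0, fpp0]
    for _ in range(int(eta_max / h)):
        state = rk4_step(state, h)
    return state[1]

def blasius_wall_shear(lo=0.1, hi=1.0, tol=1e-8):
    """Bisect on f''(0) until the far-field condition f'(inf) = 1 holds."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if shoot(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    `blasius_wall_shear()` converges to the classical value f''(0) ≈ 0.33206, which fixes the similarity velocity profile used to validate flat-plate calculations.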

    Fuzzy uncertainty modelling for project planning; application to helicopter maintenance

    Maintenance is an activity of growing interest, especially for critical systems. In particular, aircraft maintenance costs are becoming an important issue in the aeronautical industry. Managing an aircraft maintenance center is a complex activity; one of the difficulties comes from the numerous uncertainties that affect the activity and disturb plans in the short and medium term. Based on a helicopter maintenance planning and scheduling problem, we study in this paper the integration of uncertainties into tactical and operational multi-resource, multi-project planning (Rough Cut Capacity Planning and the Resource-Constrained Project Scheduling Problem, respectively). Our main contributions are in modelling the periodic workload at the tactical level under uncertainty in macro-task work contents, and modelling the continuous workload at the operational level under uncertainty in task durations. We model uncertainties with a fuzzy/possibilistic approach rather than a stochastic approach, since very limited data are available. We refer to the resulting problems as the Fuzzy Rough Cut Capacity Problem (FRCCP) and the Fuzzy Resource-Constrained Project Scheduling Problem (FRCPSP). We apply our models to helicopter maintenance activity within the framework of the Helimaintenance project, an industrial project approved by the French Aerospace Valley cluster that aims at building a center for civil helicopter maintenance.
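    As a minimal illustration of the fuzzy/possibilistic idea (our sketch; the class, names, and numbers are hypothetical, not the authors' FRCCP model), each macro-task's work content can be modelled as a triangular fuzzy number and a period's workload aggregated by fuzzy addition:

```python
# Triangular fuzzy work contents aggregated into a period workload
# (hypothetical values; a sketch of the possibilistic approach only).

from dataclasses import dataclass

@dataclass
class TriFuzzy:
    lo: float    # optimistic work content (hours)
    mode: float  # most plausible work content
    hi: float    # pessimistic work content

    def __add__(self, other):
        # Fuzzy addition of triangular numbers is component-wise.
        return TriFuzzy(self.lo + other.lo,
                        self.mode + other.mode,
                        self.hi + other.hi)

    def possibility_exceeds(self, capacity):
        """Possibility that the fuzzy workload exceeds a crisp capacity."""
        if capacity <= self.mode:
            return 1.0
        if capacity >= self.hi:
            return 0.0
        return (self.hi - capacity) / (self.hi - self.mode)

# Three macro-tasks planned in the same period (illustrative values).
tasks = [TriFuzzy(10, 12, 18), TriFuzzy(20, 25, 40), TriFuzzy(5, 6, 9)]
period_load = sum(tasks[1:], tasks[0])   # TriFuzzy(lo=35, mode=43, hi=67)
```

    A planner can then compare the possibility of exceeding a period's capacity against a tolerance threshold instead of relying on probability distributions that the scarce data cannot support.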

    The test-retest reliability of different ankle joint center location techniques

    Accurate and reliable joint identification is imperative for the collection of meaningful kinetic and kinematic data. Within the lower kinetic chain, the hip and knee joints have received a considerable amount of attention in 3D modelling; however, the reliability of methods to define the ankle joint center has received very little attention. This study investigated the reliability of the two-marker method (TMM) and the functional ankle method (FAM) for estimating the ankle joint center. Furthermore, the reliability of the TMM for defining the ankle joint center when the ankle was covered with a brace or a protector was investigated. 3D kinematic data were collected from ten participants (8 female and 2 male) whilst walking. The ankle joint center was defined twice using each test condition: TMM (WITHOUT), FAM (FUNCTIONAL), TMM with the ankle covered with a brace (BRACE), and TMM with the ankle covered with a protector (PROTECTOR). Intraclass correlations (ICCs) were utilised to compare test and retest waveforms, and paired-samples t-tests were used to compare angular parameters. Significant differences were found in the test-retest angular parameters in the transverse and sagittal planes for the WITHOUT, BRACE, and FUNCTIONAL conditions. The strongest test-retest ICCs were observed in the WITHOUT and PROTECTOR conditions. The findings of the current investigation indicate that there are fewer errors using the TMM when the ankle is uncovered or covered with soft foam that is easy to palpate through.
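    The test-retest agreement statistic used above can be computed as follows. This is a generic ICC(2,1) sketch (two-way random effects, absolute agreement, single measure) of ours, not the study's analysis code:

```python
# ICC(2,1) for a test-retest design with two sessions per subject
# (generic sketch, not the study's code).

def icc_2_1(session1, session2):
    n, k = len(session1), 2
    rows = list(zip(session1, session2))
    grand = sum(sum(r) for r in rows) / (n * k)
    row_means = [sum(r) / k for r in rows]          # per-subject means
    col_means = [sum(c) / n for c in zip(*rows)]    # per-session means
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for r in rows for x in r)
    ms_r = ss_rows / (n - 1)                        # between-subjects
    ms_c = ss_cols / (k - 1)                        # between-sessions
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

    Because ICC(2,1) measures absolute agreement, a systematic offset between test and retest lowers the coefficient even when subjects keep the same ranking, which makes it a stricter reliability measure than a plain correlation.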

    Machine Learning in Falls Prediction; A cognition-based predictor of falls for the acute neurological in-patient population

    Background Information: Falls are associated with high direct and indirect costs, and with significant morbidity and mortality for patients. Pathological falls are usually the result of a compromised motor system and/or cognition. Very little research has been conducted on predicting falls based on this premise. Aims: To demonstrate that cognitive and motor tests can be used to create a robust predictive tool for falls. Methods: Three tests of attention and executive function (Stroop, Trail Making, and Semantic Fluency), a measure of physical function (Walk-12), a series of questions (concerning recent falls, surgery, and physical function), and demographic information were collected from a cohort of 323 patients at a tertiary neurological center. The principal outcome was a fall during the in-patient stay (n = 54). Data-driven predictive modelling was employed to identify the statistical modelling strategies that are most accurate in predicting falls and that yield the most parsimonious models of clinical relevance. Results: The Trail Making test was identified as the best predictor of falls. Moreover, adding any other variables to the results of the Trail Making test did not improve the prediction (Wilcoxon signed-rank p < .001). The best statistical strategy for predicting falls was the random forest (Wilcoxon signed-rank p < .001), based solely on results of the Trail Making test. Tuning of the model resulted in the following optimized values: 68% (±7.7) sensitivity and 90% (±2.3) specificity, with a positive predictive value of 60%, when the relevant data are available. Conclusion: Predictive modelling has identified a simple yet powerful machine learning prediction strategy based on a single clinical test, the Trail Making test. Predictive evaluation shows this strategy to be robust, suggesting predictive modelling and machine learning as the standard for future predictive tools.
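    The evaluation metrics reported above (sensitivity, specificity, positive predictive value) follow directly from a binary confusion matrix. A minimal sketch with made-up labels, not the study's pipeline:

```python
# Sensitivity, specificity, and positive predictive value for a binary
# falls classifier (our sketch with illustrative labels, not study data).

def fall_metrics(y_true, y_pred):
    """y_true/y_pred: 1 = faller, 0 = non-faller."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),  # fallers correctly flagged
        "specificity": tn / (tn + fp),  # non-fallers correctly cleared
        "ppv": tp / (tp + fp),          # flagged patients who actually fell
    }
```

    With a rare outcome such as in-patient falls (54 of 323 here), a high specificity is what keeps the positive predictive value clinically useful.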

    Galaxy-galaxy weak-lensing measurement from SDSS: II. host halo properties of galaxy groups

    As the second paper in a series studying galaxy-galaxy lensing signals using the Sloan Digital Sky Survey Data Release 7 (SDSS DR7), we present our measurement and modelling of the lensing signals around groups of galaxies. We divide the groups into four halo-mass bins and measure the signals around four different halo-center tracers: the brightest central galaxy (BCG), the luminosity-weighted center, the number-weighted center, and the X-ray peak position. For X-ray and SDSS DR7 cross-identified groups, we further split the groups into low and high X-ray emission subsamples, each of which is assigned two halo-center tracers: the BCG and the X-ray peak position. The galaxy-galaxy lensing signals show that BCGs are the best halo-center tracers among the four candidates. We model the lensing signals as a combination of four contributions: an off-centered NFW host-halo profile, a sub-halo contribution, a stellar contribution, and the projected 2-halo term. We sample the posterior of five parameters (halo mass, concentration, off-centering distance, sub-halo mass, and fraction of sub-halos) with an MCMC package using the galaxy-galaxy lensing signals. After taking sampling effects (e.g. Eddington bias) into account, we find that the best-fit halo masses obtained from the lensing signals are quite consistent with those obtained in the group catalog based on an abundance-matching method, except in the lowest mass bin.
    Subject headings: (cosmology:) gravitational lensing; galaxies: clusters: general
    Comment: 12 pages, 7 figures, submitted to Ap
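    The host-halo term of such a model is the NFW profile, whose enclosed mass depends only on the halo mass and concentration that the MCMC samples. A minimal sketch (ours, not the paper's pipeline, which adds off-centering, sub-halo, stellar, and 2-halo terms):

```python
import math

# Enclosed mass of an NFW halo (illustrative only).

def nfw_enclosed_mass(r, m200, c, r200):
    """Mass inside radius r for an NFW halo of mass m200, concentration c."""
    rs = r200 / c                                    # scale radius
    mu = lambda x: math.log(1.0 + x) - x / (1.0 + x)
    return m200 * mu(r / rs) / mu(c)
```

    By construction the profile encloses exactly m200 at r200, so mass and concentration are the two free shape parameters the lensing signal constrains.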

    NARX-based nonlinear system identification using orthogonal least squares basis hunting

    An orthogonal least squares technique for basis hunting (OLS-BH) is proposed to construct sparse radial basis function (RBF) models for NARX-type nonlinear systems. Unlike most existing RBF or kernel modelling methods, which place the RBF or kernel centers at the training input data points and use a fixed common variance for all the regressors, the proposed OLS-BH technique tunes the RBF center and diagonal covariance matrix of each individual regressor by minimizing the training mean square error. An efficient optimization method is adopted for this basis hunting to select regressors in an orthogonal forward selection procedure. Experimental results obtained using the OLS-BH technique demonstrate that it offers a state-of-the-art method for constructing parsimonious RBF models with excellent generalization performance.
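    The orthogonal forward selection step can be sketched as follows. This is our simplified reconstruction of the general OLS idea (greedy selection by error-reduction ratio with Gram-Schmidt orthogonalization), not the authors' code, and it fixes the candidate set rather than tuning each regressor's center and covariance as the paper does:

```python
import math

def rbf(x, center, width):
    """Gaussian radial basis function."""
    return math.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def ols_select(candidates, y, n_terms):
    """Greedy orthogonal forward selection: at each step pick the candidate
    regressor with the largest error reduction after orthogonalizing it
    against the already-selected basis."""
    selected, ortho = [], []
    for _ in range(n_terms):
        best = None
        for i, phi in enumerate(candidates):
            if i in selected:
                continue
            w = list(phi)
            for q in ortho:                      # project out chosen basis
                g = dot(w, q) / dot(q, q)
                w = [a - g * b for a, b in zip(w, q)]
            denom = dot(w, w)
            if denom < 1e-12:                    # numerically dependent
                continue
            err_red = dot(w, y) ** 2 / denom     # error reduction of regressor i
            if best is None or err_red > best[0]:
                best = (err_red, i, w)
        _, i, w = best
        selected.append(i)
        ortho.append(w)
    return selected

# Toy example: y is built from the RBFs centered at 0 and 2, and the
# selector must recover them from four candidates.
xs = [0.2 * i for i in range(21)]
centers = [0.0, 1.0, 2.0, 3.0]
cands = [[rbf(x, c, 0.5) for x in xs] for c in centers]
y = [2.0 * cands[0][j] + 3.0 * cands[2][j] for j in range(len(xs))]
chosen = ols_select(cands, y, 2)
```

    Orthogonalizing each candidate against the selected basis is what lets the error reduction of each new regressor be assessed independently, which keeps the final model sparse.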