    Parametric optimization of the femoropopliteal artery stent design based on numerical analysis

    High failure rates in Peripheral Arterial Disease (PAD) stenting have been reported because certain stent strut configurations cannot accommodate the severe biomechanical environment of the Femoro-Popliteal Artery (FPA), which bends, twists, and axially compresses during limb flexion. This unique mechanical deformation environment is considered one of the main factors limiting the durability and service life of FPA stents, and various optimization techniques have consequently been developed to improve their mechanical performance. In the present work, the top two of twelve FPA-like stent models were selected using the Pictorial Selection Method, with net scores of 3.65 (Model I) and 3.55 (Model II). A parameterization-based Finite Element Method (FEM) optimization study was then conducted on the stent strut dimensions, and the stents were compared in terms of their force-stress behavior. A Multi-Criteria Decision Making (MCDM) method was used to identify the best combination of strut dimensions. The strut thickness parameterization gave the relation T ∝ 1/σ (T is strut thickness) for both models under all mechanical loading modes, and the strut width parameterization likewise gave W ∝ 1/σ (W is strut width) for both models under all loading modes. For strut length (L), the relations under axial loading were L ∝ σ for Model I and L ∝ 1/σ for Model II; under three-point bending and torsion, L ∝ σ for both models; and under radial compression, L ∝ 1/σ for Model I and L ∝ σ for Model II. The best strut thickness was t4 = 230 µm for both models, the best strut widths were w3 = 0.180 mm (Model I) and w4 = 0.250 mm (Model II), and the best strut lengths were l2 = 1.40 mm (Model I) and l2 = 1.75 mm (Model II). In conclusion, a consistent mathematical selection approach based on MCDM is proposed, and the mechanical performance of the parameterized stent models is improved.
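
    As a rough illustration of the weighted-sum style of MCDM scoring described above, the following sketch ranks hypothetical strut-thickness candidates by their simulated peak stresses under four loading modes. The stress values, weights, and candidate labels are invented for illustration and are not taken from the study.

```python
# Hypothetical sketch: ranking candidate strut thicknesses with a simple
# weighted-sum MCDM score (illustrative values only, not the paper's data).
import numpy as np

# Candidate strut thicknesses and illustrative peak von Mises stresses (MPa)
# under four loading modes: axial, bending, torsion, radial compression.
candidates = {
    "t1=0.170 mm": [310.0, 120.0, 95.0, 210.0],
    "t2=0.190 mm": [295.0, 112.0, 90.0, 200.0],
    "t3=0.210 mm": [280.0, 105.0, 86.0, 192.0],
    "t4=0.230 mm": [268.0, 100.0, 83.0, 186.0],
}
weights = np.array([0.4, 0.2, 0.2, 0.2])  # assumed importance of each loading mode

def mcdm_score(stresses, weights):
    """Weighted-sum score on min-max normalized stresses (lower stress = better)."""
    s = np.asarray(list(stresses.values()), dtype=float)
    normalized = (s.max(axis=0) - s) / (s.max(axis=0) - s.min(axis=0) + 1e-12)
    return dict(zip(stresses, normalized @ weights))

scores = mcdm_score(candidates, weights)
print(scores)
print("best candidate:", max(scores, key=scores.get))
```

    For these invented numbers the thickest strut scores best, which is at least consistent with the T ∝ 1/σ trend reported above.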

    Condition assessment of timber utility poles based on a hierarchical data fusion model

    © 2016 American Society of Civil Engineers. This paper proposes a novel hierarchical data fusion technique for the non-destructive testing (NDT) and condition assessment of timber utility poles. The new method analyzes stress wave data from multisensor and multiexcitation guided wave testing using a hierarchical data fusion model consisting of feature extraction, data compression, pattern recognition, and decision fusion algorithms. The researchers validate the proposed technique using guided wave tests of a sample of in situ timber poles. The actual health states of these poles are known from autopsies conducted after the testing, forming a ground truth for supervised classification. In the proposed method, the data fusion level extracts the main features from the sampled stress wave signals using power spectrum density (PSD) estimation, the wavelet packet transform (WPT), and empirical mode decomposition (EMD). These features are then compiled into a feature vector via real-number encoding and sent to the next level for further processing. Principal component analysis (PCA) is adopted for feature compression and to minimize information redundancy and noise interference. In the feature fusion level, two classifiers based on support vector machines (SVM) are applied to sensor-separated data for the two excitation types and the pole condition is identified. In the decision-making fusion level, Dempster-Shafer (D-S) evidence theory is employed to integrate the results from the individual sensors and obtain a final decision. The results of the in situ timber pole testing show that the proposed hierarchical data fusion model was able to distinguish between healthy and faulty poles, demonstrating the effectiveness of the new method.
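
    A minimal sketch of the fusion pipeline described above is given below, covering only the PSD feature branch: Welch power spectra per sensor, PCA compression, one SVM per sensor, and a two-hypothesis Dempster-Shafer combination of the sensor outputs. The synthetic signals, the sampling rate, and the use of class probabilities as mass functions are assumptions for illustration, not the authors' implementation.

```python
# Simplified sketch of the hierarchical fusion idea (not the authors' code):
# PSD features per sensor -> PCA compression -> per-sensor SVM -> Dempster-Shafer fusion.
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def psd_features(signals, fs=20000, n_bands=16):
    """Average Welch PSD estimates into a few frequency bands as a crude feature vector."""
    feats = []
    for sig in signals:
        _, pxx = welch(sig, fs=fs, nperseg=256)
        bands = np.array_split(pxx, n_bands)
        feats.append([b.mean() for b in bands])
    return np.array(feats)

def dempster_combine(m1, m2):
    """Combine two mass functions over {healthy, faulty} (no mass on the full frame)."""
    k = m1[0] * m2[1] + m1[1] * m2[0]              # conflict between the two sources
    healthy = m1[0] * m2[0] / (1.0 - k + 1e-12)
    faulty = m1[1] * m2[1] / (1.0 - k + 1e-12)
    return np.array([healthy, faulty])

# Synthetic stress-wave records for two sensors on the same poles (0 = healthy, 1 = faulty).
n_poles, n_samples = 40, 2048
labels = rng.integers(0, 2, n_poles)
sensor_a = rng.normal(size=(n_poles, n_samples)) + labels[:, None] * 0.5
sensor_b = rng.normal(size=(n_poles, n_samples)) + labels[:, None] * 0.3

decisions = []
for sensor in (sensor_a, sensor_b):
    X = psd_features(sensor)
    clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(probability=True))
    clf.fit(X, labels)
    decisions.append(clf.predict_proba(X))          # class probabilities used as masses

fused = np.array([dempster_combine(p, q) for p, q in zip(*decisions)])
print("fused 'faulty' belief for first 5 poles:", fused[:5, 1].round(2))
```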

    Predicting diabetes mellitus using SMOTE and ensemble machine learning approach: The Henry Ford ExercIse Testing (FIT) project

    Machine learning is becoming a popular and important approach in the field of medical research. In this study, we investigate the relative performance of various machine learning methods, such as Decision Tree, Naïve Bayes, Logistic Regression, Logistic Model Tree, and Random Forests, for predicting incident diabetes using medical records of cardiorespiratory fitness. In addition, we apply different techniques to uncover potential predictors of diabetes. This FIT project study used data from 32,555 patients who were free of any known coronary artery disease or heart failure, who underwent clinician-referred exercise treadmill stress testing at Henry Ford Health Systems between 1991 and 2009, and who had a complete 5-year follow-up. By the end of the fifth year, 5,099 of those patients had developed diabetes. The dataset contained 62 attributes classified into four categories: demographic characteristics, disease history, medication use history, and stress test vital signs. We developed an ensemble-based predictive model using 13 attributes selected on the basis of clinical importance, Multiple Linear Regression, and Information Gain Ranking. The negative effect of class imbalance on the constructed model was handled with the Synthetic Minority Oversampling Technique (SMOTE). The overall performance of the predictive classifier was improved by an ensemble machine learning approach using the Vote method with three tree-based learners (Naïve Bayes Tree, Random Forest, and Logistic Model Tree), achieving high prediction accuracy (AUC = 0.92). The study shows the potential of ensembling and SMOTE approaches for predicting incident diabetes using cardiorespiratory fitness data.
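
    The following sketch reproduces the general pattern (SMOTE on the training data, then a soft-voting ensemble) on synthetic data. Logistic Model Tree and Naïve Bayes Tree are not available in scikit-learn, so LogisticRegression and GaussianNB are used as stand-ins; all data and settings are illustrative, not the study's.

```python
# Illustrative sketch: SMOTE oversampling followed by a soft-voting ensemble.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic imbalanced data standing in for the 13 selected clinical attributes.
X, y = make_classification(n_samples=5000, n_features=13,
                           weights=[0.85, 0.15], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training set so the test set keeps its natural imbalance.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("nb", GaussianNB()),
    ],
    voting="soft",
)
ensemble.fit(X_res, y_res)
print("AUC:", round(roc_auc_score(y_test, ensemble.predict_proba(X_test)[:, 1]), 3))
```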

    Conversational Agents and their Influence on the Well-being of Clinicians

    An increasing number of clinicians (i.e., nurses and physicians) suffer from mental health-related issues like depression and burnout. These, in turn, strain communication, collaboration, and decision-making, areas in which Conversational Agents (CAs) have been shown to be useful. Thus, in this work, we followed a mixed-method approach and systematically analysed the literature on factors affecting the well-being of clinicians and on the potential of CAs to improve that well-being by supporting communication, collaboration, and decision-making in hospitals. In doing so, we were guided by the model of factors influencing well-being of Brigham et al. (2018). Starting from an initial set of 840 articles, we analysed 52 papers in more detail and identified the influences of CAs’ fields of application on the external and individual factors affecting clinicians’ well-being. As our second method, we will conduct interviews with clinicians and experts on CAs to verify and extend these influencing factors.

    How can model comparison help improve species distribution models?

    Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing, and several authors call for the development of process-based SDMs. Still, each of these methods has strengths and weaknesses that have to be assessed if they are to be used reliably by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie along the continuum between correlative and process-based models for the current distributions of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate the current distributions of the three species relatively accurately. The process-based model performs almost as well as the correlative model, although the parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are driven, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based models could, however, be improved by integrating a more realistic representation of species resistance to water stress, for instance, which argues for pursuing efforts to understand and explicitly formulate the impact of climatic conditions and variations on these processes.
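
    As a loose illustration of the multi-scale idea behind such a comparison map profile, the sketch below computes the mean absolute difference between two synthetic suitability maps after smoothing them with windows of increasing size. The maps and window sizes are invented; this is not the method implementation used in the study.

```python
# Rough sketch of a multi-scale map comparison: disagreement between two
# suitability maps measured after smoothing at increasing window sizes.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(1)
base = uniform_filter(rng.random((200, 200)), size=25)       # shared "climate" signal
map_a = np.clip(base + 0.05 * rng.standard_normal(base.shape), 0, 1)  # e.g. correlative SDM
map_b = np.clip(base + 0.10 * rng.standard_normal(base.shape), 0, 1)  # e.g. process-based SDM

profile = {}
for window in (1, 5, 15, 45):
    a = uniform_filter(map_a, size=window)
    b = uniform_filter(map_b, size=window)
    profile[window] = float(np.abs(a - b).mean())

print(profile)  # disagreement typically shrinks as the comparison scale coarsens
```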

    Work Roll Cooling System Design Optimisation in Presence of Uncertainty

    Organised by: Cranfield University
    The paper presents a framework to optimise the design of a work roll based on its cooling performance. The framework builds meta-models from a set of Finite Element Analysis (FEA) simulations of the roll cooling, and a design-of-experiments technique is used to select the FEA runs. The research also identifies sources of uncertainty in the design process. A robust evolutionary multi-objective algorithm is then applied to the design optimisation in order to identify a set of good solutions in the presence of uncertainties in both the decision and objective spaces.
    Mori Seiki – The Machine Tool Company
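
    A hedged sketch of the framework's main ingredients follows: a space-filling design of experiments, Gaussian-process meta-models fitted to stand-in FEA responses, and a simple non-dominated filter applied to noise-perturbed predictions to mimic robustness to uncertainty. The design variables, responses, and noise level are invented for illustration and are not taken from the paper.

```python
# Sketch: DOE sample -> meta-models of (fake) FEA responses -> Pareto filter under noise.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Design of experiments over two normalized design variables (e.g. coolant flow, nozzle angle).
doe = qmc.LatinHypercube(d=2, seed=0).random(n=30)

def fake_fea(x):
    """Stand-in for an FEA run: returns [peak roll temperature, coolant usage] (both to minimize)."""
    flow, angle = x
    return np.array([1.0 - 0.7 * flow + 0.1 * angle, 0.2 + 0.8 * flow])

responses = np.array([fake_fea(x) for x in doe])

# One meta-model per objective, fitted to the DOE results.
metamodels = [GaussianProcessRegressor().fit(doe, responses[:, i]) for i in range(2)]

# Evaluate candidates, perturb predictions to represent uncertainty, keep non-dominated ones.
candidates = rng.random((500, 2))
preds = np.column_stack([m.predict(candidates) for m in metamodels])
preds += 0.02 * rng.standard_normal(preds.shape)   # crude uncertainty in the objective space

def non_dominated(points):
    """Boolean mask of points not dominated by any other point (minimization)."""
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        dominated = (points <= p).all(axis=1) & (points < p).any(axis=1)
        if dominated.any():
            keep[i] = False
    return keep

pareto = candidates[non_dominated(preds)]
print(f"{len(pareto)} approximately Pareto-optimal designs out of {len(candidates)}")
```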

    Sales Performance and Emotional Intelligence of Technology Sales Professionals

    United States business leaders spend $15 billion per year on sales training, but approximately 50% of salespeople still fail to reach their annual sales targets. Business leaders have limited understanding of the relationship between emotional intelligence and its central constructs (self-perception, self-expression, interpersonal, decision making, and stress management) and the sales performance of sales professionals based in the United States. The purpose of this correlational research study was to examine the relationship between emotional intelligence and sales performance via an online pre-existing emotional intelligence assessment. The theoretical framework incorporated emotional intelligence theory and job performance theory. The sample included 86 technology sales professionals working in the United States who were recruited through a nonrandom purposive sampling method. The correlation results showed that an association exists between decision making and sales performance (r = .310, n = 73, p < .01). With all 6 predictor variables, the regression model was not a significant predictor of sales performance, F(6,66) = 1.295, p = .272, R² = .105. With only decision making included, the linear regression model was a significant predictor of sales performance, F(1,71) = 7.550, p < .01, R² = .096. The results are not generalizable but suggest that decision making is significant in achieving sales performance and that higher decision-making skills lead to higher sales performance. Social implications for sales and business leaders include using these results to seek and hire emotionally intelligent sales professionals and to train existing sales professionals in emotional intelligence competencies to improve company-wide sales performance.
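
    The reported statistics follow a standard pattern: a Pearson correlation for a single predictor and an overall F-test for a multiple regression. The sketch below runs that pattern on simulated data with the same sample size; the generated scores and effect size are hypothetical, not the study's data.

```python
# Minimal sketch of the analysis pattern: Pearson r for one predictor,
# overall F-test and R^2 for a multiple regression (simulated data).
import numpy as np
import statsmodels.api as sm
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n = 73
decision_making = rng.normal(100, 15, n)
other_scales = rng.normal(100, 15, (n, 5))       # the other five EI composite scales
sales = 0.3 * (decision_making - 100) / 15 + rng.standard_normal(n)

r, p = pearsonr(decision_making, sales)
print(f"r = {r:.3f}, p = {p:.4f}")

X = sm.add_constant(np.column_stack([decision_making, other_scales]))
model = sm.OLS(sales, X).fit()
print(f"F({int(model.df_model)},{int(model.df_resid)}) = {model.fvalue:.3f}, "
      f"p = {model.f_pvalue:.3f}, R^2 = {model.rsquared:.3f}")
```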

    Random Forest as a Predictive Analytics Alternative to Regression in Institutional Research

    In institutional research, modern data mining approaches are seldom considered to address predictive analytics problems. The goal of this paper is to highlight the advantages of tree-based machine learning algorithms over classic (logistic) regression methods for data-informed decision making in higher education, and to stress the success of random forest in circumstances where the regression assumptions are often violated in big data applications. Random forest is a model averaging procedure in which each tree is constructed from a bootstrap sample of the data set. In particular, we emphasize the ease of application, low computational cost, high predictive accuracy, flexibility, and interpretability of random forest machinery. Our overall recommendation is that institutional researchers look beyond classical regression and single decision tree analytics tools, and consider random forest as the predominant method for prediction tasks. The proposed points of view are detailed and illustrated through a simulation experiment and analyses of data from real institutional research projects.
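
    A brief sketch of the kind of comparison the paper advocates is shown below: logistic regression versus a random forest on the same synthetic classification task, judged by cross-validated accuracy. The dataset and settings are placeholders, not the paper's institutional data.

```python
# Sketch: compare logistic regression and random forest by cross-validated accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for an institutional dataset (e.g. predicting student retention).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)

for name, model in [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("random forest", RandomForestClassifier(n_estimators=500, random_state=0)),
]:
    score = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {score:.3f}")
```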