10 research outputs found

    Burnout among surgeons before and during the SARS-CoV-2 pandemic: an international survey

    Background: The SARS-CoV-2 pandemic has had many significant impacts within the surgical realm, and surgeons have been obliged to reconsider almost every aspect of daily clinical practice. Methods: This is a cross-sectional study, reported in compliance with the CHERRIES guidelines and conducted through an online platform from June 14th to July 15th, 2020. The primary outcome was the burden of burnout during the pandemic, indicated by the validated Shirom-Melamed Burnout Measure. Results: Nine hundred fifty-four surgeons completed the survey. The median length of practice was 10 years; 78.2% of respondents were male, with a median age of 37 years; 39.5% were consultants, 68.9% were general surgeons, and 55.7% were affiliated with an academic institution. Overall, there was a significant increase in the mean burnout score during the pandemic; longer years of practice and older age were significantly associated with less burnout. There were significant reductions in the median number of outpatient visits, operated cases, on-call hours, emergency visits, and research work, and 48.2% of respondents felt that training resources were insufficient. The majority (81.3%) of respondents reported that their hospitals were involved in the management of COVID-19; 66.5% felt their roles had been minimized, 41% were asked to assist in non-surgical medical practices, and 37.6% were directly involved in COVID-19 management. Conclusions: There was significant burnout among trainees. Almost all aspects of clinical and research activity were affected, with significant reductions in the volume of research, outpatient clinic visits, surgical procedures, on-call hours, and emergency cases hindering training. Trial registration: The study was registered on clinicaltrials.gov (NCT04433286) on 16/06/2020.

    Adaptive CNN Ensemble to Handle Concept Drift in Online Image Classification

    No full text
    Analysis of data streams is an essential requirement in the current era of digitalization. However, the critical features of many real-world data streams (imagery streams), such as high dimensionality, large size, and a nonstationary nature, lead to concept drift, whereby the characteristics of the data streams can change arbitrarily over time. The presence of concept drift renders many classical machine learning approaches unsuitable; hence, the research community must address this critical issue and contribute new adaptive approaches in their place.
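    The abstract does not spell out the ensemble mechanics, but the basic pattern of drift-aware online classification can be sketched: track windowed accuracy on the stream and reset the learner when it collapses. The following minimal Python sketch illustrates that idea under stated assumptions: flattened image feature vectors arrive one at a time, a linear online learner (scikit-learn's SGDClassifier) stands in for the paper's CNN ensemble, and the window size and accuracy floor are illustrative.

    ```python
    # Minimal sketch of drift-aware online classification; names, window
    # size, and the accuracy floor are illustrative, not the paper's values.
    from collections import deque
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    class DriftAwareClassifier:
        def __init__(self, classes, window=200, threshold=0.65):
            self.model = SGDClassifier(loss="log_loss")
            self.classes = classes
            self.recent = deque(maxlen=window)   # sliding window of 0/1 hits
            self.threshold = threshold           # accuracy floor = drift signal
            self.fitted = False

        def update(self, x, y):
            """Consume one (feature vector, label) pair from the stream."""
            x = x.reshape(1, -1)
            if self.fitted:
                self.recent.append(int(self.model.predict(x)[0] == y))
                # A collapse in windowed accuracy is treated as concept drift:
                if (len(self.recent) == self.recent.maxlen
                        and np.mean(self.recent) < self.threshold):
                    self.model = SGDClassifier(loss="log_loss")  # reset learner
                    self.recent.clear()
                    self.fitted = False
            self.model.partial_fit(x, np.array([y]), classes=self.classes)
            self.fitted = True
    ```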

    An Adaptive Federated Machine Learning-Based Intelligent System for Skin Disease Detection: A Step toward an Intelligent Dermoscopy Device

    No full text
    The prevalence of skin diseases has increased dramatically in recent decades, and they are now considered major chronic diseases globally. People suffer from a broad spectrum of skin diseases, and skin tumors in particular are potentially aggressive and life-threatening. However, the severity of skin tumors can be managed with treatment if they are diagnosed early. Health practitioners usually apply manual or computer vision-based tools for skin tumor diagnosis, which may cause misinterpretation of the disease and lead to a longer analysis time. Cutting-edge technologies such as deep learning under a federated machine learning approach now enable health practitioners (dermatologists) to diagnose the type and severity level of skin diseases. Therefore, this study proposes an adaptive federated machine learning-based skin disease model (using an adaptive ensemble convolutional neural network as the core classifier) as a step toward an intelligent dermoscopy device for dermatologists. The proposed federated machine learning-based architecture consists of intelligent local edges (dermoscopy) and a global point (server). The proposed architecture can diagnose the type of disease and continuously improve its accuracy. Experiments were carried out in a simulated environment using the International Skin Imaging Collaboration (ISIC) 2019 dataset (dermoscopy images) to test and validate the model’s classification accuracy and adaptability. In the future, this study may lead to the development of a federated machine learning-based (hardware) dermoscopy device to assist dermatologists in skin tumor diagnosis.
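    The architecture is not described at code level, but its federated core follows the familiar pattern of local training at the edges and weight aggregation at the server. The sketch below illustrates federated averaging (FedAvg) rounds under stated assumptions: a logistic-regression model stands in for the adaptive CNN ensemble, and the simulated clients replace real dermoscopy edges.

    ```python
    # Hedged sketch of the FedAvg pattern behind the proposed architecture:
    # edges train on private data, and only weights reach the global server.
    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=5):
        """One edge device: a few gradient steps of logistic regression."""
        w = weights.copy()
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
            w -= lr * X.T @ (p - y) / len(y)      # gradient step
        return w

    def federated_round(global_w, clients):
        """Server step: average the locally updated weights (FedAvg)."""
        updates = [local_update(global_w, X, y) for X, y in clients]
        return np.mean(updates, axis=0)

    rng = np.random.default_rng(0)
    clients = [(rng.normal(size=(40, 8)), rng.integers(0, 2, 40).astype(float))
               for _ in range(3)]                 # 3 simulated dermoscopy edges
    w = np.zeros(8)
    for _ in range(10):                           # 10 communication rounds
        w = federated_round(w, clients)
    ```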

    Adaptive CNN Ensemble for Complex Multispectral Image Analysis

    No full text
    Multispectral image classification has long been the domain of static learning with a stationary input data assumption. The advent of Industry 4.0 has led to the need to perform real-time analysis (classification) in an online learning scenario. Due to the complexities (spatial, spectral, dynamic data sources, and temporal inconsistencies) of online and time-series multispectral image analysis, there is a high probability of variation in the spectral bands of an input stream, which degrades classification accuracy or renders classifiers ineffective. To highlight this critical issue, this study first formulates the problem of new spectral band arrival as virtual concept drift. Second, an adaptive convolutional neural network (CNN) ensemble framework is proposed and evaluated for new spectral band adaptation. The framework consists of five modules, including a dynamic ensemble classifier (DEC) module. The DEC applies weighted voting over multiple optimized CNN instances and can grow dynamically when a new spectral band arrives. The proposed ensemble approach in the DEC module (each spectral band handled by an individual classifier of the ensemble) contributes diversity to the ensemble system in a simple yet effective manner. The results show the effectiveness, and confirm the diversity, of the proposed framework in adapting to new spectral bands during online image classification. Moreover, an extensive training dataset, proper regularization, optimized hyperparameters (model and training), and a more appropriate CNN architecture significantly contributed to retaining classification accuracy.
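    The DEC module's band-wise weighted voting can be illustrated compactly. In the hedged sketch below, a per-band logistic learner stands in for each optimized CNN instance, ensemble weights come from held-out accuracy, and adapting to a newly arrived spectral band reduces to adding one member; all names and data shapes are illustrative.

    ```python
    # Illustrative sketch of the DEC idea: one classifier per spectral band,
    # combined by accuracy-weighted voting; a new band simply adds a member.
    from sklearn.linear_model import LogisticRegression

    class DynamicEnsembleClassifier:
        def __init__(self):
            self.members = {}   # band index -> (model, validation weight)

        def add_band(self, band, X_band, y, X_val, y_val):
            """Adapt to a newly arrived band by training one new member."""
            model = LogisticRegression(max_iter=1000).fit(X_band, y)
            weight = model.score(X_val, y_val)    # weight by held-out accuracy
            self.members[band] = (model, weight)

        def predict(self, bands):
            """Weighted vote over whichever bands are present in the stream."""
            votes = {}
            for band, X_band in bands.items():
                model, w = self.members[band]
                for i, c in enumerate(model.predict(X_band)):
                    votes.setdefault(i, {}).setdefault(c, 0.0)
                    votes[i][c] += w              # weighted vote per sample
            return [max(v, key=v.get) for _, v in sorted(votes.items())]
    ```

    Giving each band its own member is what supplies the ensemble diversity the abstract mentions: members disagree precisely because they see different spectral evidence.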

    Automatic Image Annotation for Small and Ad hoc Intelligent Applications using Raspberry Pi

    No full text
    The cutting-edge technology of Machine Learning (ML) is successfully applied for business intelligence. Among the various pre-processing steps of ML, Automatic Image Annotation (also known as automatic image tagging or linguistic indexing) is the process in which a computer system automatically assigns metadata, in the form of captions or keywords, to a digital image. Automatic Image Annotation (AIA) methods, which have appeared over the last several years, make extensive use of ML approaches; clustering and classification methods are most frequently applied to annotate images. In addition, these proposed solutions require substantial computational infrastructure. However, certain real-time applications (small and ad-hoc intelligent applications), for example autonomous small robots, gadgets, and drones, have limited computational processing capacity. These small and ad-hoc applications demand a more dynamic and portable way to automatically annotate data and then perform ML tasks (classification, clustering, etc.) in real time using limited computational power and hardware resources. Through a comprehensive literature study, we found that most image pre-processing algorithms and ML tasks are computationally intensive, and it can be challenging to run them on an embedded platform at acceptable frame rates. However, a Raspberry Pi is sufficient for the AIA and ML tasks relevant to small and ad-hoc intelligent applications, while a few critical intelligent applications (which require high computational resources, for example deep learning on huge datasets) are only feasible on more powerful hardware. In this study, we present the framework of “Automatic Image Annotation for Small and Ad-hoc Intelligent Applications using Raspberry Pi” and propose low-cost infrastructures (single-node and multi-node, using Raspberry Pi) and a software module (for the Raspberry Pi) to perform AIA and ML tasks in real time for small and ad-hoc intelligent applications. Integrating both AIA and ML tasks in a single software module (within the Raspberry Pi) is challenging. This study will help improve various practical application areas relevant to small intelligent autonomous systems.
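    As a rough illustration of the kind of lightweight AIA step that fits a Raspberry Pi's budget, the sketch below tags an image by nearest-neighbor matching on small color-histogram features. The file names and tag set are hypothetical, and a real deployment would read frames from a camera stream rather than from disk.

    ```python
    # Minimal on-device annotation sketch suitable for a Raspberry Pi:
    # cheap color-histogram features plus a nearest-neighbor tagger.
    import numpy as np
    from PIL import Image

    def histogram_features(path, bins=8):
        """Downsample, then build a small RGB histogram as the feature vector."""
        img = np.asarray(Image.open(path).convert("RGB").resize((64, 64)))
        feats = [np.histogram(img[..., c], bins=bins, range=(0, 255))[0]
                 for c in range(3)]
        v = np.concatenate(feats).astype(float)
        return v / v.sum()

    def annotate(path, labelled):
        """Tag an image with the keyword of its nearest labelled example."""
        v = histogram_features(path)
        return min(labelled, key=lambda item: np.linalg.norm(v - item[0]))[1]

    # Hypothetical usage with a tiny reference tag set:
    # labelled = [(histogram_features("ref_cat.jpg"), "cat"),
    #             (histogram_features("ref_car.jpg"), "car")]
    # print(annotate("frame_0001.jpg", labelled))   # -> keyword
    ```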

    Deterioration of Electrical Load Forecasting Models in a Smart Grid Environment

    No full text
    A Smart Grid (S.G.) is a digitally enabled power grid with an automatic capability to control electricity and information flow between utility and consumer. S.G. data streams are heterogeneous and form a dynamic environment, whereas existing machine learning methods are static and become obsolete in such environments. Since these models cannot handle the variations posed by S.G.s and utilities with different generation modalities (D.G.M.), a model with adaptive features is required to comply with the requirements and accommodate new data, features, and modalities. In this study, we considered two open-source datasets and one real-world dataset and observed the behavior of ARIMA, ANN, and LSTM models with respect to changes in input parameters. It was found that no model detected a change in input parameters until it was manually introduced, and that the considered models experienced performance degradation of 5 to 15% in accuracy under parameter changes. Therefore, to improve model accuracy and adapt to the parametric variations that are dynamic in nature and evident in S.G. and D.G.M. environments, the study proposes a novel adaptive framework to overcome the existing limitations of electrical load forecasting models.
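    The adaptive framework itself is not specified in the abstract, but the adaptation loop its findings motivate can be sketched: monitor rolling forecast error and retrain when accuracy degrades past a tolerance. In the illustration below, a linear model stands in for ARIMA/ANN/LSTM, and the warm-up length, error window, and 10% tolerance are invented for the example.

    ```python
    # Sketch of a drift-reactive forecasting loop; all thresholds illustrative.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    class AdaptiveForecaster:
        def __init__(self, warmup=24, window=48, tol=0.10):
            self.model = LinearRegression()
            self.X, self.y, self.errs = [], [], []
            self.warmup, self.window, self.tol = warmup, window, tol

        def step(self, x, y_true):
            """Score one interval against the forecast, then record it."""
            if len(self.X) >= self.warmup:            # model already fitted
                y_hat = self.model.predict(np.array([x]))[0]
                self.errs.append(abs(y_hat - y_true) / max(abs(y_true), 1e-9))
                self.errs = self.errs[-self.window:]  # rolling error window
                if np.mean(self.errs) > self.tol:     # degradation detected
                    self.model.fit(np.array(self.X), np.array(self.y))
                    self.errs.clear()                 # adapt: refit on all data
            self.X.append(x)
            self.y.append(y_true)
            if len(self.X) == self.warmup:            # initial fit
                self.model.fit(np.array(self.X), np.array(self.y))
    ```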

    Anomaly detection in laser powder bed fusion using machine learning: A review

    No full text
    Metal Additive Manufacturing (MAM) applications are growing rapidly in high-tech industries such as biomedical and aerospace, and in many other industries, including tooling, casting, automotive, and oil and gas, for production and prototyping. Laser Powder Bed Fusion (L-PBF) has proved to be an efficient technique that can turn metal additive manufacturing into a reformed process, provided the anomalies that occur during the process are eliminated. Industrial applications demand high accuracy and risk-free products, whereas prototyping with MAM demands shorter process and product development times. To address these challenges, Machine Learning (ML) experts and researchers are trying to adopt efficient methods for anomaly detection in L-PBF so that the MAM process can be optimized and the desired final part properties achieved. This review provides an overview of L-PBF and outlines the ML methods used for anomaly detection in L-PBF. The paper also explains how ML methods are being used as a step toward enabling real-time process control of MAM, so that the process can be optimized for higher accuracy, lower production time, and less material waste. The authors strongly believe that ML techniques can reform the MAM process, whereas research on anomaly detection using ML techniques is limited and needs attention. This review has been written in the hope that ML experts can easily find a direction and contribute to this field.
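    As one concrete instance of the kind of ML method this review surveys, the sketch below applies an unsupervised Isolation Forest to flag anomalous melt-pool signatures. The three features (intensity, area, aspect ratio) and the synthetic data are hypothetical stand-ins for real L-PBF monitoring signals.

    ```python
    # Unsupervised anomaly detection on simulated melt-pool features.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)
    # Nominal process: [intensity, area, aspect ratio], invented values.
    normal = rng.normal(loc=[1.0, 0.5, 1.2], scale=0.05, size=(500, 3))
    # Simulated defect events drifting away from the nominal envelope.
    porosity_events = rng.normal(loc=[0.6, 0.9, 2.0], scale=0.05, size=(5, 3))

    detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
    flags = detector.predict(porosity_events)   # -1 marks a suspected anomaly
    print(flags)                                # expected: mostly -1
    ```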

    Machine Learning Approach to Predict the Performance of a Stratified Thermal Energy Storage Tank at a District Cooling Plant Using Sensor Data

    No full text
    In the energy management of district cooling plants, the thermal energy storage (TES) tank is critical, so it is essential to keep track of TES performance. The performance of the TES has been measured using a variety of methodologies, both numerical and analytical. In this study, the performance of the TES tank in terms of thermocline thickness is predicted using an artificial neural network (ANN), a support vector machine (SVM), and k-nearest neighbors (KNN), an approach that has remained unexplored. One year of data was collected from a district cooling plant, with fourteen sensors used to measure the temperature at different points. With engineering judgement, 263 rows of data were selected and used to develop the prediction models. A total of 70% of the data were used for training and 30% for testing, and k-fold cross-validation was used. Sensor temperature data served as the model input and thermocline thickness as the model output. The data were normalized, and, in addition, moving average filter and median filter data smoothing techniques were applied while developing the KNN and SVM prediction models to allow a comparison. The hyperparameters of the three machine learning models were tuned by trial and error; on this basis, the optimum ANN architecture was 14-10-1, which gave the maximum R-squared value (0.9) and the minimum mean squared error. Finally, the prediction accuracies of the three techniques were compared: the ANN achieved 92%, the SVM 89%, and the KNN 96.3%, leading to the conclusion that KNN performs better than the others.
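    A hedged reconstruction of the described pipeline, shown for the KNN model only: median-filter smoothing, normalization, a 70/30 split, and k-fold cross-validation. The synthetic arrays stand in for the 14 sensor temperatures and the measured thermocline thickness, and the neighbor count is illustrative rather than the study's tuned value.

    ```python
    # Sketch of the TES prediction pipeline with synthetic stand-in data.
    import numpy as np
    from scipy.signal import medfilt
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.model_selection import train_test_split, cross_val_score
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(1)
    X = rng.normal(10, 3, size=(263, 14))              # 263 rows x 14 sensors
    y = X[:, :7].mean(axis=1) - X[:, 7:].mean(axis=1)  # proxy thermocline signal

    X = np.apply_along_axis(medfilt, 0, X, 3)          # median-filter each sensor
    X = MinMaxScaler().fit_transform(X)                # normalize features

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)    # 70/30 split
    knn = KNeighborsRegressor(n_neighbors=5)
    print("5-fold CV R^2:", cross_val_score(knn, X_tr, y_tr, cv=5).mean())
    print("held-out R^2:", knn.fit(X_tr, y_tr).score(X_te, y_te))
    ```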

    A review on Bayesian modeling approach to quantify failure risk assessment of oil and gas pipelines due to corrosion

    No full text
    Funding Information: The authors would like to thank Universiti Teknologi PETRONAS (UTP) Malaysia for the opportunity to conduct research under grant number 015LC0-381 for the project "Failure Prediction Model for Stress Corrosion Cracking Using Deep Learning Approach." To forecast safety and security measures, it is vital to evaluate the integrity of a pipeline used to carry oil and gas that has been subjected to corrosion. Corrosion is unavoidable, yet neglecting it can have serious personal, economic, and environmental repercussions. To predict the unanticipated behavior of corrosion, most research relies on probabilistic models (Petri nets, Markov chains, Monte Carlo simulation, fault trees, and bowtie analysis), even though such models have significant drawbacks, such as state-space explosion, dependence on unrealistic assumptions, and a static nature. For deteriorating oil and gas pipelines, machine learning-based models such as supervised learning models are preferred; nevertheless, these models are incapable of modeling the uncertainties of corrosion parameters and the dynamic nature of the process. In this case, Bayesian network approaches have proved to be a preferable choice for evaluating the integrity of corroded oil and gas pipelines. The literature contains no compilation of Bayesian modeling approaches for evaluating the integrity of hydrocarbon pipelines subjected to corrosion. Therefore, the objective of this study is to evaluate the current state of the Bayesian network approach, including methodology, influential parameters, and datasets for risk analysis, and to provide industry experts and academics with suggestions for future enhancements using content analysis. Although the study focuses on corroded oil and gas pipelines, the acquired knowledge may be applied to several other sectors.
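    To make the surveyed approach concrete, the toy Bayesian network below encodes CorrosionRate -> WallLoss -> Failure and computes failure probability by enumeration. The structure and all probabilities are invented for illustration; real models in this literature are far larger and data-driven.

    ```python
    # Toy Bayesian network for corrosion risk, with inference by enumeration.
    p_rate = {"high": 0.2, "low": 0.8}        # prior P(corrosion_rate)
    p_loss = {                                 # P(wall_loss | corrosion_rate)
        ("severe", "high"): 0.7, ("minor", "high"): 0.3,
        ("severe", "low"): 0.1,  ("minor", "low"): 0.9,
    }
    p_fail = {"severe": 0.4, "minor": 0.02}    # P(failure | wall_loss)

    def p_failure_given_rate(rate):
        """Marginalize the hidden wall-loss state: sum_l P(fail|l) P(l|rate)."""
        return sum(p_fail[loss] * p_loss[(loss, rate)]
                   for loss in ("severe", "minor"))

    for rate in ("high", "low"):
        print(f"P(failure | rate={rate}) = {p_failure_given_rate(rate):.3f}")
    ```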