    "Enhancing Power System Stability with Fuzzy-PI Cascade Controllers and TCSC: A Comprehensive Study"

    This study develops cascade controllers based on a Fuzzy-PI methodology to enhance power system stability, with a specific focus on generation control. The primary goal is to mitigate system instabilities and to improve the reliability and performance of contemporary power grids. The efficacy of the Fuzzy-PI controller is assessed through extensive simulations involving significant fluctuations in wind energy generation, with comparative analysis against traditional PI controllers. The research further strengthens the stability of modern power systems by deploying optimized controllers, targeting frequency and voltage stability challenges through controllers tailored to the Thyristor Controlled Series Capacitor (TCSC); the overarching objective is to enhance both frequency and voltage stability through the use of TCSC units. In addition, the research integrates hybrid optimization techniques, such as the improved meta-heuristic MPR-RSA, to fine-tune controller parameters and ensure optimal performance across diverse operating conditions. The controller gains are selected using the integral absolute time error function, which serves as the cost function of the optimization problem.
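
    As a rough illustration of the last point, the sketch below evaluates an ITAE-style cost (integral of time-weighted absolute error, one reading of the criterion named above) for a candidate pair of PI gains on a toy first-order frequency-deviation model. The plant, disturbance, gain names (Kp, Ki) and the MPR-RSA optimizer itself are all placeholders, not the study's actual models.

```python
import numpy as np

def itae_cost(Kp, Ki, t_end=20.0, dt=0.01):
    """ITAE-style cost for a hypothetical first-order frequency-deviation
    plant under PI control.  Plant, gains and disturbance are illustrative
    assumptions only, not the paper's power-system model."""
    t = np.arange(0.0, t_end, dt)
    x = 0.0             # frequency deviation (plant state)
    integ = 0.0         # integral of the control error
    cost = 0.0
    disturbance = 0.01  # step disturbance standing in for a wind fluctuation
    for ti in t:
        error = 0.0 - x                    # regulate the deviation to zero
        integ += error * dt
        u = Kp * error + Ki * integ        # PI control action
        # toy first-order plant with time constant 2 s: dx/dt = (-x + u + d) / tau
        x += dt * (-x + u + disturbance) / 2.0
        cost += ti * abs(error) * dt       # ITAE accumulation
    return cost

# Example: compare two candidate gain sets; an optimizer such as the hybrid
# MPR-RSA mentioned above would search this gain space instead.
print(itae_cost(Kp=2.0, Ki=1.0), itae_cost(Kp=0.5, Ki=0.1))
```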

    Regional optimum frequency analysis of resting-state fMRI data for early detection of Alzheimer’s disease biomarkers

    The blood-oxygen-level-dependent (BOLD) signal obtained from functional magnetic resonance imaging (fMRI) varies significantly among populations. Yet there is some agreement among researchers over the pace of blood flow within several brain regions relative to the subject’s age and cognitive ability. Our analysis further suggested that regional coherence among the BOLD fMRI voxels belonging to an individual brain region has some correlation with underlying pathology as well as cognitive performance, which points to potential biomarkers of the early onset of the disease. To capitalise on this, we propose a method called Regional Optimum Frequency Analysis (ROFA), which is based on finding the optimum synchrony frequency observed at each brain region for each of the resting-state BOLD frequency bands (Slow 5 (0.01–0.027 Hz), Slow 4 (0.027–0.073 Hz) and Slow 3 (0.073–0.198 Hz)) as well as the whole frequency band (0.01–0.167 Hz). ROFA is carried out on fMRI data comprising a total of 310 scans: 26, 175 and 109 scans from 21 young-healthy (YH), 69 elderly-healthy (EH) and 33 Alzheimer’s disease (AD) subjects respectively, including repeated scans from some subjects acquired at 3- to 6-month intervals. A 10-fold cross-validation procedure evaluated the performance of ROFA for classification between YH vs EH, YH vs AD and EH vs AD subjects. Based on the confusion-matrix parameters (accuracy, precision, sensitivity and Matthews correlation coefficient (MCC)), the proposed ROFA classification outperformed state-of-the-art methods based on group independent component analysis (Group-ICA), functional connectivity, graph metrics, eigenvector centrality, amplitude of low-frequency fluctuations (ALFF) and fractional amplitude of low-frequency fluctuations (fALFF), achieving more than 94.99% precision and 95.67% sensitivity for the different subject groups. The results demonstrate the effectiveness of the proposed ROFA parameters (frequencies) as adequate biomarkers of Alzheimer’s disease.
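
    The paper's precise definition of the regional optimum synchrony frequency is not reproduced above, so the sketch below only illustrates one plausible reading of the idea: for a region's mean BOLD time series, pick the frequency with the largest spectral power inside each of the named bands. The array shapes, repetition time and function names are assumptions for illustration.

```python
import numpy as np

# Frequency bands quoted in the abstract (Hz)
BANDS = {"slow5": (0.01, 0.027), "slow4": (0.027, 0.073),
         "slow3": (0.073, 0.198), "whole": (0.01, 0.167)}

def regional_peak_frequencies(region_ts, tr=2.0):
    """region_ts: (n_timepoints,) mean BOLD signal of one brain region.
    tr: repetition time in seconds (assumed).  Returns, per band, the
    frequency with the largest spectral power inside that band."""
    ts = region_ts - region_ts.mean()
    freqs = np.fft.rfftfreq(ts.size, d=tr)
    power = np.abs(np.fft.rfft(ts)) ** 2
    peaks = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs <= hi)
        if mask.any():
            peaks[name] = float(freqs[mask][np.argmax(power[mask])])
    return peaks

# Example with synthetic data: 200 volumes, TR = 2 s
rng = np.random.default_rng(0)
print(regional_peak_frequencies(rng.standard_normal(200), tr=2.0))
```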

    A Probabilistic Adaptive Cerebral Cortex Segmentation Algorithm for Magnetic Resonance Human Head Scan Images

    The overall efficiency of Magnetic Resonance Imaging (MRI) still depends on human involvement to correctly interpret the information contained in an image, and there has recently been a surge of interest in automated algorithms that can divide medical image structures into substructures more precisely than earlier attempts. Accurate segmentation of the cerebral cortex from MRI scans is difficult due to noise, Intensity Non-Uniformity (INU), Partial Volume Effects (PVE), the low resolution of MRI, and the very complicated architecture of the cortical folds. In this paper, a Probabilistic Adaptive Cerebral Cortex Segmentation (PACCS) approach is proposed for segmenting brain areas in T1-weighted MRI human head images. The suggested technique consists of three primary processes: Skull Stripping (SS), Brain Hemisphere Segmentation (BHS) and Cerebral Cortex Segmentation (CCS). In step 1, Non-Brain Cells (NBC) are eliminated by a Contour-Based Two-Stage Brain Extraction Method (CTS-BEM). Step 2 details a basic BHS technique based on Curve Fitting (CF) in MRI human head images, in which the left and right hemispheres are divided using the detected Mid-Sagittal Plane (MSP). Finally, a probabilistic CCS framework is enhanced with adjustments such as modifying the prior information to remove segmentation bias, creating explicit partial-volume classes, and adopting a segmentation model based on a regionally varying Gaussian Mixture Model–Hidden Markov Random Field–Expectation Maximization (GMM-HMRF-EM) approach. Within the GMM-HMRF-EM method, the underlying partial-volume classification and its interplay with the observed image intensities are represented as a spatially correlated HMRF, and the HMRF parameters are estimated using the EM technique. The segmentation outcomes are then evaluated in terms of precision, recall, specificity, Jaccard Similarity (JS) and Dice Similarity (DS). Experimental findings obtained by applying the proposed GMM-HMRF-EM methodology to brain images of 18 individuals show that it performs better and more consistently than the existing Locally Varying MRF (LV-MRF) method.
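
    A compact sketch of the flavour of the final step is given below, assuming a 2-D image and three tissue classes: a Gaussian mixture supplies the per-pixel data term, and a few iterated-conditional-modes passes add an HMRF-style neighbourhood smoothness penalty. The regionally varying parameters, bias handling and exact EM updates of the paper are not reproduced; the class count and the weight beta are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_hmrf_segment(img, n_classes=3, beta=1.0, icm_iters=5):
    """Toy GMM + HMRF-style segmentation of a 2-D intensity image.
    The data term is the negative log posterior from a fitted GMM; a
    Potts-like smoothness term weighted by beta is reduced with a few
    ICM passes.  All settings here are illustrative assumptions."""
    h, w = img.shape
    gmm = GaussianMixture(n_components=n_classes, random_state=0)
    gmm.fit(img.reshape(-1, 1))
    data_term = -np.log(gmm.predict_proba(img.reshape(-1, 1)) + 1e-12)
    data_term = data_term.reshape(h, w, n_classes)
    labels = data_term.argmin(axis=2)
    shifts = ((1, 0), (-1, 0), (0, 1), (0, -1))   # 4-neighbourhood (borders wrap)
    for _ in range(icm_iters):
        # smoothness term: how many neighbours disagree with each candidate class
        smooth = np.stack(
            [sum((np.roll(labels, s, axis=(0, 1)) != k).astype(float)
                 for s in shifts)
             for k in range(n_classes)], axis=2)
        labels = (data_term + beta * smooth).argmin(axis=2)
    return labels

# Example on a small synthetic "image" with three intensity classes
rng = np.random.default_rng(0)
toy = np.concatenate([rng.normal(m, 0.1, size=(32, 32)) for m in (0.0, 0.5, 1.0)], axis=1)
print(np.unique(gmm_hmrf_segment(toy)))
```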

    Optimizing Firefly Algorithm for Directional Overcurrent Relay Coordination: A Case Study on the Impact of Parameter Settings

    This paper investigates the application of the Firefly Algorithm to the coordination problem of the IEEE 3-bus network. It analyzes the impact of key parameters, including the number of generations, population size, absorption coefficient (γ), and randomization parameter (α), on the algorithm's performance. Through extensive experimentation, the study demonstrates the impact of these parameters on solution quality, feasibility, computational requirements, and efficiency. Results indicate that increasing the number of generations improves solution quality, but the benefits diminish beyond a certain point. Feasibility improves with more generations, but a trade-off between solution quality and feasibility becomes apparent at very high generation counts, and objective function evaluations and computation time increase linearly with the number of generations. Larger population sizes yield better solution quality and feasibility, with a similar trade-off at very large populations; objective function evaluations and computation time scale proportionally with population size. The randomization parameter has a modest influence on performance, with no significant changes observed, although extreme values do affect solution quality, feasibility, and computation time. The absorption coefficient significantly affects convergence and solution quality: lower values expedite convergence but may lead to suboptimal solutions, while higher values enhance exploration at the cost of increased computational effort. This study provides a comprehensive understanding of parameter selection and optimization in the Firefly Algorithm for the coordination problem of the IEEE 3-bus network, offering valuable guidance for future research on enhancing performance through parameter refinement and adaptive techniques.
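
    For reference, the sketch below shows the standard firefly position update, in which attractiveness decays with distance through the absorption coefficient gamma and the random step is scaled by alpha. The objective is a placeholder sphere function standing in for the relay-coordination cost, and the bounds and population settings are arbitrary.

```python
import numpy as np

def firefly_minimise(obj, dim, n_fireflies=20, n_gen=100,
                     alpha=0.2, gamma=1.0, beta0=1.0, seed=0):
    """Minimal Firefly Algorithm sketch; the objective and all settings
    are placeholders.  Brightness is the negative of the cost."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, size=(n_fireflies, dim))   # positions in [0, 1]^dim
    f = np.apply_along_axis(obj, 1, x)
    for _ in range(n_gen):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if f[j] < f[i]:                            # firefly j is brighter
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)     # attractiveness decays with distance
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                    x[i] = np.clip(x[i], 0.0, 1.0)
                    f[i] = obj(x[i])
    best = int(np.argmin(f))
    return x[best], f[best]

# Example on a simple sphere function (stand-in for the relay coordination cost)
print(firefly_minimise(lambda v: float(np.sum(v ** 2)), dim=4))
```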

    Deep Belief Neural Network Framework for an Effective Scalp Detection System Through Optimization

    In an era where technology rapidly enhances various sectors, medical services have greatly benefited, particularly in tackling the prevalent issue of hair loss, which affects individuals' self-esteem and social interactions. Acknowledging the need for advanced hair and scalp care, this paper introduces a cost-effective, technology-driven solution for diagnosing scalp conditions. Utilizing deep learning, we present the Grey Wolf-based Enhanced Deep Belief Neural (GW-EDBN) method, a novel approach trained on a large collection of internet-derived scalp images. This technique focuses on accurately identifying key symptoms such as dandruff, oily scalp, folliculitis, and hair loss. Through initial data cleansing with Adaptive Gradient Filtering (AGF) and subsequent feature extraction, the GW-EDBN isolates critical indicators of scalp health. By incorporating these features into its Enhanced Deep Belief Network (EDBN) and applying Grey Wolf Optimization (GWO), the system achieves unprecedented precision in diagnosing scalp ailments. The model not only surpasses existing alternatives in accuracy but also offers a more affordable option for individuals seeking hair and scalp analysis, backed by experimental validation across several performance metrics including precision, recall, and execution time. This advancement signifies a leap forward in accessible, high-accuracy medical diagnostics for hair and scalp health, potentially revolutionizing personal care practices.
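
    Grey Wolf Optimization itself follows a well-known position-update scheme, sketched below for a generic objective (for instance, a hypothetical validation loss over two network hyperparameters). This is not the paper's GW-EDBN training loop; the objective, bounds and population settings are all placeholders.

```python
import numpy as np

def grey_wolf_optimise(obj, dim, n_wolves=10, n_iter=50, lb=0.0, ub=1.0, seed=0):
    """Standard Grey Wolf Optimizer sketch; obj, bounds and sizes are placeholders."""
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lb, ub, size=(n_wolves, dim))
    fitness = np.apply_along_axis(obj, 1, wolves)
    for it in range(n_iter):
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[:3]]           # three best wolves lead the pack
        a = 2.0 - 2.0 * it / n_iter                      # coefficient decreasing linearly to 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                A = a * (2.0 * rng.random(dim) - 1.0)
                C = 2.0 * rng.random(dim)
                D = np.abs(C * leader - wolves[i])
                new_pos += (leader - A * D) / 3.0        # average of the three pulls
            wolves[i] = np.clip(new_pos, lb, ub)
            fitness[i] = obj(wolves[i])
    best = int(np.argmin(fitness))
    return wolves[best], fitness[best]

# Example: tune two hypothetical hyperparameters against a toy objective
print(grey_wolf_optimise(lambda v: float(np.sum((v - 0.3) ** 2)), dim=2))
```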

    Bayesian inference about outputs of computationally expensive algorithms with uncertainty on the inputs

    In the field of radiation protection, complex and computationally expensive algorithms are used to predict radiation doses to organs in the human body from exposure to internally deposited radionuclides. These algorithms contain many inputs, the true values of which are uncertain. Current methods for assessing the effects of the input uncertainties on the output of the algorithms are based on Monte Carlo analyses, i.e. sampling from subjective prior distributions that represent the uncertainty on each input, evaluating the output of the model and calculating sample statistics. For complex, computationally expensive algorithms, it is often not possible to obtain a large enough sample for a meaningful uncertainty analysis. This thesis presents an alternative general theory for uncertainty analysis, based on the use of stochastic process models in a Bayesian context. The measures provided by the Monte Carlo analysis are obtained, together with additional, more informative measures, using a far smaller sample. The theory is initially developed in a general form and then specifically for algorithms whose input uncertainty can be characterised by independent normal distributions. The Monte Carlo and Bayesian methodologies are then compared using two practical examples. The first example is based on a simple model developed to calculate doses due to radioactive iodine. This model has two normally distributed uncertain parameters and, due to its simplicity, an independent measurement of the true uncertainty on the output is available for comparison. This exercise appears to show that the Bayesian methodology is superior in this simple case. The purpose of the second example is to determine whether the methodology is practical in a 'real-life' situation and to compare it with a Monte Carlo analysis. A model for calculating doses due to plutonium contamination is used; this model is computationally expensive and has fourteen uncertain inputs. The Bayesian analysis compared favourably with the Monte Carlo analysis, indicating that it has the potential to provide more accurate uncertainty analyses for the parameters of computationally expensive algorithms.
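
    A minimal sketch of the emulator idea, using a Gaussian process as the stochastic process model: the expensive code is run at a small design of input points, the GP is fitted to those runs, and the input uncertainty is then propagated through the cheap emulator rather than the code itself. The dose model below is a stand-in with two normally distributed inputs, and the simple sampling of the emulator here replaces the thesis's analytical Bayesian measures.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_code(x):
    """Stand-in for the expensive dose calculation (two uncertain inputs)."""
    return np.exp(-x[0]) * np.sin(3.0 * x[1]) + 0.5 * x[0] * x[1]

rng = np.random.default_rng(1)

# Small design: only 20 runs of the "expensive" code
X_design = rng.normal(loc=[1.0, 0.5], scale=[0.2, 0.1], size=(20, 2))
y_design = np.array([expensive_code(x) for x in X_design])

# Fit the Gaussian-process emulator to the design runs
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_design, y_design)

# Propagate the (independent normal) input uncertainty through the emulator
X_inputs = rng.normal(loc=[1.0, 0.5], scale=[0.2, 0.1], size=(10_000, 2))
pred_mean, pred_sd = gp.predict(X_inputs, return_std=True)
print("output mean ~", pred_mean.mean(),
      "output sd ~", pred_mean.std(),
      "average emulator uncertainty ~", pred_sd.mean())
```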

    Model evaluation in relation to soil N2O emissions: An algorithmic method which accounts for variability in measurements and possible time lags

    The loss of nitrogen from fertilised soils in the form of nitrous oxide (N2O) is a side effect of modern agriculture and the focus of many model-based studies. Due to the spatial and temporal heterogeneity of soil N2O emissions, the measured data can limit the use of the statistical methods most commonly employed in the evaluation of model performance. In this paper, we describe these limitations and present an algorithm developed to address them. We implement the algorithm using simulated and measured N2O data from two UK arable sites. We show that possible time lags between the measured and simulated data can affect model evaluation, and that accounting for them in the evaluation process can reduce measures such as the Mean Squared Error (MSE) by 30%. We also analyse the algorithm's results to identify patterns in the estimated lags and to narrow down their possible causes.
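
    As a small illustration of the time-lag idea (not the paper's full algorithm, which also accounts for measurement variability), the sketch below recomputes the MSE between a measured and a simulated flux series for a window of candidate shifts and keeps the lowest value. The window size, alignment convention and synthetic data are assumptions.

```python
import numpy as np

def lag_aware_mse(measured, simulated, max_lag=5):
    """Return (best_lag, mse), shifting the series by up to max_lag steps
    in either direction and keeping the lowest MSE."""
    best = (0, float(np.mean((measured - simulated) ** 2)))
    n = len(measured)
    for lag in range(-max_lag, max_lag + 1):
        if lag == 0:
            continue
        if lag > 0:   # positive lag: the measurements lag behind the simulation
            m, s = measured[lag:], simulated[:n - lag]
        else:         # negative lag: the simulation lags behind the measurements
            m, s = measured[:n + lag], simulated[-lag:]
        mse = float(np.mean((m - s) ** 2))
        if mse < best[1]:
            best = (lag, mse)
    return best

# Example with synthetic fluxes where the simulation runs three steps late
rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 1.0, size=100)
sim = np.roll(obs, 3) + rng.normal(0.0, 0.1, size=100)
print(lag_aware_mse(obs, sim, max_lag=5))   # expected best lag: -3
```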