
    Neural network and genetic algorithm techniques for energy efficient relay node placement in smart grid

    Smart grid (SG) is an intelligent combination of computer science and the electricity system, whose main characteristics are measurement and real-time monitoring of utility and consumer behavior. An SG is made up of three main parts: the Home Area Network (HAN), the Field Area Network (FAN) and the Wide Area Network (WAN). Several techniques, such as fiber optics, are used for monitoring the SG, but they are very costly and difficult to maintain. One way to solve the monitoring problem is to use a Wireless Sensor Network (WSN). WSNs are widely researched because of their easy deployment, low maintenance requirements, small hardware and low cost. However, the SG is a harsh environment with high levels of magnetic field and background noise, and deploying a WSN in this area is challenging since these conditions directly affect WSN link quality. Optimal relay node placement, which has not yet been applied in a smart grid, can improve link quality significantly. To solve the link quality problem and achieve optimum relay node placement, the network life-time must be calculated, because a longer life-time indicates better relay placement. To calculate this life-time, it is necessary to estimate the packet reception rate (PRR). In this research, to achieve optimal relay node placement, firstly, a mathematical formula to measure the link quality of the network in a smart grid environment is proposed. Secondly, an algorithm based on a neural network to estimate the network life-time is developed. Thirdly, an algorithm based on a genetic algorithm for efficient positioning of relay nodes under different conditions to increase the network life-time is also developed. Simulation results showed that the neural network's life-time prediction achieves 91% accuracy. In addition, there was an 85% improvement in life-time compared to binary integer linear programming and weighted binary integer linear programming. The research has shown that relay node placement based on the developed genetic algorithm increases the network life-time, addresses the link quality problem and achieves optimum relay node placement.
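
    To make the relay-placement stage concrete, below is a minimal sketch of a genetic algorithm that evolves relay coordinates to maximize a surrogate life-time score. The sensor coordinates, the lifetime surrogate (a distance heuristic standing in for the paper's neural-network, PRR-based life-time estimator) and all parameters are illustrative assumptions, not the study's actual model.

```python
import random

# Illustrative sensor coordinates (not from the paper).
SENSORS = [(1.0, 2.0), (4.0, 1.5), (3.0, 4.0), (0.5, 4.5)]

def lifetime(relays):
    # Surrogate for the paper's neural-network life-time estimator:
    # life-time shrinks as the worst sensor-to-relay distance grows
    # (a stand-in for PRR-based link quality).
    worst = max(min(((sx - rx) ** 2 + (sy - ry) ** 2) ** 0.5
                    for rx, ry in relays)
                for sx, sy in SENSORS)
    return 1.0 / (1.0 + worst)

def random_layout(n_relays=2, size=5.0):
    return [(random.uniform(0, size), random.uniform(0, size))
            for _ in range(n_relays)]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(layout, rate=0.2, size=5.0):
    return [(random.uniform(0, size), random.uniform(0, size))
            if random.random() < rate else pos
            for pos in layout]

def evolve(pop_size=40, generations=60):
    pop = [random_layout() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lifetime, reverse=True)
        elite = pop[: pop_size // 4]          # keep the fittest quarter
        pop = elite + [mutate(crossover(random.choice(elite),
                                        random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=lifetime)

print(evolve())  # best relay coordinates found
```

    In the study itself, the fitness evaluation would call the trained neural network rather than a distance heuristic; the evolutionary loop would otherwise have this general shape.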

    Outlier detection in wireless sensor network based on time series approach

    Sensory data in a Wireless Sensor Network (WSN) are not always reliable because of open environmental factors such as noise, weak received signal strength or intrusion attacks. The process of detecting highly noisy data and noisy sensor nodes is called outlier detection. Outlier detection is one of the fundamental tasks of time series analysis and relates to predictive modeling, cluster analysis and association analysis. It has been widely researched in various disciplines besides WSNs. The challenge of noise detection in a WSN is that it has to be done inside a sensor with limited computational and communication capabilities. Furthermore, there are only a few outlier detection techniques for WSNs, and there are no algorithms that detect outliers on real data with a high level of accuracy locally and select the most effective neighbors for collaborative detection globally. Hence, this research designed local and global time series outlier detection for WSNs. First, the Local Outlier Detection Algorithm (LODA), a decentralized noise detection algorithm that runs on each sensor node, was developed; it identifies intrinsic features, determines the memory size of the data histogram to make effective use of the available memory, and performs classification to predict outlier data. Next, the Global Outlier Detection Algorithm (GODA) was developed using adaptive Gray Coding and entropy techniques to select the best neighbors for spatial correlation amongst sensor nodes; GODA also adopts the Adaptive Random Forest algorithm for best results. Finally, this research developed the Compromised Sensor Node Detection Algorithm (CSDA), a centralized algorithm processed at the base station for detecting compromised sensor nodes regardless of the specific cause of the anomalies. To measure the effectiveness and accuracy of these algorithms, a comprehensive scenario was simulated, with noisy data injected randomly into the data and the sensor nodes. The results showed that LODA achieved 89% accuracy in predicting outliers, GODA detected anomalies with up to 99% accuracy, and CSDA accurately identified up to 80% of the compromised sensor nodes. In conclusion, the proposed algorithms have proven effective for local and global anomaly detection and for compromised sensor node detection in WSNs.
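
    As an illustration of the local stage, here is a minimal, hypothetical sketch of LODA-style detection on a node: a memory-bounded window of recent readings stands in for the data histogram, and a reading far from the window statistics is classified as an outlier. The window size, warm-up rule and z-score threshold are assumptions for illustration; the paper's actual feature extraction and classifier are not reproduced.

```python
from collections import deque

class LocalOutlierDetector:
    """Sketch of node-local detection under a fixed memory budget."""

    def __init__(self, window=64, k=3.0):
        self.window = deque(maxlen=window)  # bounded memory, as on a mote
        self.k = k                          # z-score threshold (assumed)

    def is_outlier(self, value):
        if len(self.window) < 10:           # warm-up: accept everything
            self.window.append(value)
            return False
        mean = sum(self.window) / len(self.window)
        var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
        outlier = abs(value - mean) > self.k * (var ** 0.5)
        if not outlier:
            self.window.append(value)       # only learn from clean data
        return outlier

det = LocalOutlierDetector()
readings = [20.1, 20.3, 19.9, 20.2] * 5 + [35.0]
print([det.is_outlier(r) for r in readings][-3:])  # last value flagged
```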

    Heart failure: a prevalence-based and model-based cost analysis

    Introduction: Heart failure (HF) imposes a heavy economic burden on patients, their families, and society as a whole. Therefore, it is crucial to quantify the impact and dimensions of the disease in order to prioritize and allocate resources effectively. Methods: This study utilized a prevalence-based, bottom-up approach and an incidence-based Markov model to assess the cost of illness. A total of 502 HF patients (classes I–IV) were recruited from Madani Hospital in Tabriz between May and October 2022. Patients were followed up every two months for a minimum of two and a maximum of six months using a person-month measurement approach. The perspective of the study was societal, and both direct and indirect costs were estimated. Indirect costs were calculated using the Human Capital (HC) method. A two-part regression model, consisting of a Generalized Linear Model (GLM) and a Probit model, was used to analyze the relationship between HF costs and clinical and demographic variables. Results: The total cost per patient in one year was 261,409,854.9 Tomans (21,967.21 PPP). Of this amount, 207,147,805.8 Tomans (17,407.38 PPP) (79%) were indirect costs, while 54,262,049.09 Tomans (4,559.84 PPP) (21%) were direct costs. The mean lifetime cost was 2,173,961,178 Tomans. Premature death accounted for the highest share of lifetime costs (48%), while class III HF had the lowest share (2%). Gender, having basic insurance, and disease class significantly influenced the costs of HF, while comorbidity and age did not have a significant impact. The predicted amounts closely matched the observed amounts, indicating good predictive power. Conclusion: This study revealed that HF places a significant economic burden on patients in terms of both direct and indirect costs. The substantial contribution of indirect costs, which reflect the impact of the disease on other sectors of the economy, highlights the importance of unpaid work. Given the significant variation in HF costs among the assessed variables, social and financial support systems should consider these variations to provide efficient and fair support to HF patients.
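
    A hedged sketch of the two-part regression described in the Methods is shown below: a Probit model for whether any cost is incurred, and a Gamma GLM for the size of positive costs. The synthetic data and variable names (gender, insurance, HF class) are illustrative only, and the log link for the GLM is an assumption, as the abstract does not state the link function.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in for the study's data; names are illustrative.
rng = np.random.default_rng(0)
n = 500
X = sm.add_constant(np.column_stack([
    rng.integers(0, 2, n),          # gender
    rng.integers(0, 2, n),          # basic insurance
    rng.integers(1, 5, n),          # HF class I-IV
]))
any_cost = rng.binomial(1, 0.8, n)
cost = any_cost * rng.gamma(2.0, 1000.0 * (1 + X[:, 3]), n)

# Part 1: probability of incurring any cost (Probit).
part1 = sm.Probit(any_cost, X).fit(disp=False)

# Part 2: magnitude of positive costs (Gamma GLM, assumed log link).
pos = cost > 0
part2 = sm.GLM(cost[pos], X[pos],
               family=sm.families.Gamma(link=sm.families.links.Log())
               ).fit()

# Expected cost = P(cost > 0) * E[cost | cost > 0]
expected = part1.predict(X) * part2.predict(X)
print(round(expected.mean(), 1))
```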

    Determinants of Intention to Use Simulation-Based Learning in Computers and Networking Courses: An ISM and MICMAC Analysis

    Simulation-based learning (SBL) presents a wide variety of opportunities to practice complex computer and networking skills in higher education, employing various platforms to enhance educational outcomes. The integration of SBL tools in teaching computer networking courses is useful for both instructors and learners. Furthermore, the increasing importance of SBL in higher education highlights the necessity of further exploring the factors that affect the adoption of SBL technologies, particularly in the field of computer networking courses. Despite these advantages, minimal effort has been made to examine the factors that impact instructors' intentions to use SBL tools for computers and networking courses. The main objective of this study is to examine the factors that affect instructors' intentions to utilize SBL tools in computer networking courses offered by higher education institutions. By employing Interpretive Structural Modeling (ISM) and Matrice d'Impacts Croisés Multiplication Appliquée à un Classement (MICMAC) analysis, the research attempts to provide an in-depth understanding of the interdependencies and hierarchical associations among twelve identified factors. Results showed that system quality, self-efficacy, technological knowledge, and information quality have high driving power. This study offers valuable perspectives for higher education institutions and for upcoming empirical studies, and aids in comprehending the advantages of using SBL tools in teaching and higher education.
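
    The MICMAC step can be illustrated with a small sketch: given a final reachability matrix from ISM (1 meaning factor i influences factor j, including transitive links), driving power is the row sum, dependence power is the column sum, and factors are classified by quadrant. The 4x4 matrix and factor names below are a toy example, not the study's twelve-factor matrix.

```python
import numpy as np

# Toy final reachability matrix (illustrative, not the study's).
factors = ["system quality", "self-efficacy",
           "tech knowledge", "info quality"]
R = np.array([[1, 1, 1, 1],
              [0, 1, 1, 1],
              [0, 0, 1, 1],
              [0, 0, 0, 1]])

driving = R.sum(axis=1)      # how many factors each one drives
dependence = R.sum(axis=0)   # how many factors drive it

mid = (len(factors) + 1) / 2
for name, d, p in zip(factors, driving, dependence):
    quadrant = ("driver" if d > mid and p <= mid else
                "linkage" if d > mid else
                "dependent" if p > mid else "autonomous")
    print(f"{name}: driving={d}, dependence={p} -> {quadrant}")
```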

    Investigating factors influencing decision-makers’ intention to adopt green IT in Malaysian manufacturing industry

    Green IT has attracted policy makers and IT managers within organizations to use IT resources in cost-effective and energy-efficient ways. Investigating the factors that influence decision-makers' intention towards the adoption of Green IT is important in the development of strategies that promote the use of Green IT by organizations. Therefore, the objective of this study is to understand the potential factors that drive decision-makers in the Malaysian manufacturing sector to adopt Green IT. This research accordingly developed a model by integrating two theoretical models, the Theory of Planned Behavior and Norm Activation Theory, to explore the individual factors that influence decision-makers in the manufacturing sector in Malaysia to adopt Green IT via the mediation of personal norms. To determine the predictive factors that influence managerial intention toward Green IT adoption, the researchers conducted a comprehensive literature review. The data were collected from 183 decision-makers in the Malaysian manufacturing sector and analyzed using Structural Equation Modelling. This research provides important preliminary insights into the most significant factors that determine managerial intention towards Green IT adoption. The Green IT adoption model explains the factors that encourage individual decision-makers in Malaysian organizations to adopt Green IT initiatives for environmental sustainability.
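
    A minimal, hypothetical sketch of such a structural model is given below using the semopy package: personal norms mediate between antecedents and intention, echoing the TPB/NAT integration. The variable names, path structure and synthetic data are illustrative assumptions, not the study's measurement model.

```python
import numpy as np
import pandas as pd
from semopy import Model  # pip install semopy

# Synthetic data with an assumed mediation structure (illustrative).
rng = np.random.default_rng(1)
n = 183
attitude = rng.normal(size=n)
subj_norm = rng.normal(size=n)
pbc = rng.normal(size=n)                      # perceived behavioral control
personal_norm = 0.5 * attitude + 0.3 * subj_norm + rng.normal(size=n)
intention = (0.4 * attitude + 0.2 * pbc
             + 0.5 * personal_norm + rng.normal(size=n))
data = pd.DataFrame({"attitude": attitude, "subj_norm": subj_norm,
                     "pbc": pbc, "personal_norm": personal_norm,
                     "intention": intention})

# Structural part: personal norms as mediator of intention.
desc = """
personal_norm ~ attitude + subj_norm
intention ~ attitude + pbc + personal_norm
"""
model = Model(desc)
model.fit(data)
print(model.inspect())  # path estimates and p-values
```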

    Non-invasive diagnostic tests for Helicobacter pylori infection

    BACKGROUND: Helicobacter pylori (H pylori) infection has been implicated in a number of malignancies and non-malignant conditions including peptic ulcers, non-ulcer dyspepsia, recurrent peptic ulcer bleeding, unexplained iron deficiency anaemia, idiopathic thrombocytopaenia purpura, and colorectal adenomas. The confirmatory diagnosis of H pylori is by endoscopic biopsy, followed by histopathological examination using haematoxylin and eosin (H & E) stain or special stains such as Giemsa stain and Warthin-Starry stain. Special stains are more accurate than H & E stain. There is significant uncertainty about the diagnostic accuracy of non-invasive tests for diagnosis of H pylori. OBJECTIVES: To compare the diagnostic accuracy of urea breath test, serology, and stool antigen test, used alone or in combination, for diagnosis of H pylori infection in symptomatic and asymptomatic people, so that eradication therapy for H pylori can be started. SEARCH METHODS: We searched MEDLINE, Embase, the Science Citation Index and the National Institute for Health Research Health Technology Assessment Database on 4 March 2016. We screened references in the included studies to identify additional studies. We also conducted citation searches of relevant studies, most recently on 4 December 2016. We did not restrict studies by language or publication status, or by whether data were collected prospectively or retrospectively. SELECTION CRITERIA: We included diagnostic accuracy studies that evaluated at least one of the index tests (urea breath test using isotopes such as 13C or 14C, serology and stool antigen test) against the reference standard (histopathological examination using H & E stain, special stains or immunohistochemical stain) in people suspected of having H pylori infection. DATA COLLECTION AND ANALYSIS: Two review authors independently screened the references to identify relevant studies and independently extracted data. We assessed the methodological quality of studies using the QUADAS-2 tool. We performed meta-analysis using the hierarchical summary receiver operating characteristic (HSROC) model to estimate and compare SROC curves. Where appropriate, we used bivariate or univariate logistic regression models to estimate summary sensitivities and specificities. MAIN RESULTS: We included 101 studies involving 11,003 participants, of whom 5839 participants (53.1%) had H pylori infection. The prevalence of H pylori infection in the studies ranged from 15.2% to 94.7%, with a median prevalence of 53.7% (interquartile range 42.0% to 66.5%). Most of the studies (57%) included participants with dyspepsia, and 53 studies excluded participants who had recently had proton pump inhibitors or antibiotics. There was at least an unclear risk of bias or unclear applicability concern for each study. Of the 101 studies, 15 compared the accuracy of two index tests and two studies compared the accuracy of three index tests. Thirty-four studies (4242 participants) evaluated serology; 29 studies (2988 participants) evaluated stool antigen test; 34 studies (3139 participants) evaluated urea breath test-13C; 21 studies (1810 participants) evaluated urea breath test-14C; and two studies (127 participants) evaluated urea breath test but did not report the isotope used. The thresholds used to define test positivity and the staining techniques used for histopathological examination (reference standard) varied between studies. Due to the sparse data for each threshold reported, it was not possible to identify the best threshold for each test. Using data from 99 studies in an indirect test comparison, there was statistical evidence of a difference in diagnostic accuracy between urea breath test-13C, urea breath test-14C, serology and stool antigen test (P = 0.024). The diagnostic odds ratios for urea breath test-13C, urea breath test-14C, serology, and stool antigen test were 153 (95% confidence interval (CI) 73.7 to 316), 105 (95% CI 74.0 to 150), 47.4 (95% CI 25.5 to 88.1) and 45.1 (95% CI 24.2 to 84.1) respectively. The sensitivity, estimated at a fixed specificity of 0.90 (the median from studies across the four tests), was 0.94 (95% CI 0.89 to 0.97) for urea breath test-13C, 0.92 (95% CI 0.89 to 0.94) for urea breath test-14C, 0.84 (95% CI 0.74 to 0.91) for serology, and 0.83 (95% CI 0.73 to 0.90) for stool antigen test. This implies that, on average, given a specificity of 0.90 and prevalence of 53.7% (the median specificity and prevalence in the studies), out of 1000 people tested for H pylori infection, there will be 46 false positives (people without H pylori infection who will be diagnosed as having H pylori infection). In this hypothetical cohort, urea breath test-13C, urea breath test-14C, serology, and stool antigen test will give 30 (95% CI 15 to 58), 42 (95% CI 30 to 58), 86 (95% CI 50 to 140), and 89 (95% CI 52 to 146) false negatives respectively (people with H pylori infection for whom the diagnosis of H pylori will be missed). Direct comparisons were based on few head-to-head studies. The ratios of diagnostic odds ratios (DORs) were 0.68 (95% CI 0.12 to 3.70; P = 0.56) for urea breath test-13C versus serology (seven studies), and 0.88 (95% CI 0.14 to 5.56; P = 0.84) for urea breath test-13C versus stool antigen test (seven studies). The 95% CIs of these estimates overlap with those of the ratios of DORs from the indirect comparison. Data were limited or unavailable for meta-analysis of other direct comparisons. AUTHORS' CONCLUSIONS: In people without a history of gastrectomy and those who have not recently had antibiotics or proton pump inhibitors, urea breath tests had high diagnostic accuracy, while serology and stool antigen tests were less accurate for diagnosis of Helicobacter pylori infection. This is based on an indirect test comparison (with potential for bias due to confounding), as evidence from direct comparisons was limited or unavailable. The thresholds used for these tests were highly variable, and we were unable to identify specific thresholds that might be useful in clinical practice. We need further comparative studies of high methodological quality to obtain more reliable evidence of the relative accuracy of the tests. Such studies should be conducted prospectively in a representative spectrum of participants and clearly reported to ensure low risk of bias. Most importantly, studies should prespecify and clearly report the thresholds used, and should avoid inappropriate exclusions.
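
    The hypothetical-cohort arithmetic in the main results can be reproduced directly from the reported summary sensitivities, the fixed specificity of 0.90 and the median prevalence of 53.7%, as in the short sketch below. Small differences from the published false-negative point estimates (30, 42, 86, 89) arise because the review's figures are based on unrounded sensitivity estimates.

```python
# Reproducing the abstract's hypothetical 1000-person cohort.
cohort, prevalence, specificity = 1000, 0.537, 0.90
infected = cohort * prevalence                    # 537 people
uninfected = cohort - infected                    # 463 people
false_positives = uninfected * (1 - specificity)  # ~46, same for all tests

# Summary sensitivities reported at the fixed specificity of 0.90.
for test, sens in [("UBT-13C", 0.94), ("UBT-14C", 0.92),
                   ("serology", 0.84), ("stool antigen", 0.83)]:
    false_negatives = infected * (1 - sens)       # missed infections
    print(f"{test}: ~{false_negatives:.0f} false negatives, "
          f"~{false_positives:.0f} false positives")
```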

    A Systematic Literature Review on Outlier Detection in Wireless Sensor Networks

    A wireless sensor network (WSN) is defined as a set of spatially distributed and interconnected sensor nodes. WSNs allow one to monitor and recognize environmental phenomena such as soil moisture, air pollution, and health data. Because of the very limited resources available in sensors, the data collected from WSNs are often characterized as unreliable or uncertain. However, applications using WSNs demand precise readings, and uncertainty in the readings can cause serious damage (e.g., in health monitoring). Therefore, an efficient local/distributed data processing algorithm is needed to ensure: (1) the extraction of precise and reliable values from noisy readings; (2) the detection of anomalies in data reported by sensors; and (3) the identification of outlier sensors in a WSN. Several works have been conducted to achieve these objectives using techniques such as machine learning algorithms, mathematical modeling, and clustering. The purpose of this paper is to conduct a systematic literature review reporting the available works on outlier and anomaly detection in WSNs. The paper covers works published from January 2004 to October 2018. A total of 3520 papers are reviewed in the initial search process. These papers are then filtered by title, abstract, and contents, and a total of 117 papers are selected. The selected papers are examined to answer the defined research questions. The current paper presents an improved taxonomy of outlier detection techniques, which will help researchers and practitioners find the most relevant and recent studies related to outlier detection in WSNs. Finally, the paper identifies existing gaps that future studies can fill.

    A DFT/TDDFT study on dual doped bilayer graphene containing Se and X (Ga,P,S)

    The electronic, magnetic and optical properties of dual doped bilayer graphene (BLG), containing the impurity Se in one monolayer and X (= Ga, P, S) in the other, were calculated using the DFT approach. Based on the band structures and DOS diagrams, half-metallic behavior was observed for the Se-P doped BLG, metallic behavior for the Se-Ga case, and semiconducting behavior for the Se-S doped BLG. These properties were analyzed by plotting diagrams of the equipotential surfaces, spin polarization and PDOS. By calculating the optical properties using the TDDFT approach, we found some evidence for the formation of surface plasmons in the Se-Ga doped BLG. Owing to their impurities, the absorption spectra of the three structures in the UV range differ significantly from each other. Finally, in most of the optical variables, owing to the similarity of the closed-shell cores of the Se and Ga atoms, the peaks of the Se-Ga doped BLG are stronger and sharper.
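
    For readers unfamiliar with bilayer graphene band structure, the following is a conceptual tight-binding sketch of pristine AB-stacked BLG bands at the K point; it is not the paper's spin-polarized DFT/TDDFT treatment of Se/X doping, which requires a full ab initio code. The hopping values are textbook estimates.

```python
import numpy as np

a = 1.42                       # C-C bond length (angstrom)
t, t_perp = -2.7, 0.4          # intra-/interlayer hopping (eV), textbook values
# Nearest-neighbor vectors of the honeycomb lattice.
deltas = a * np.array([[0.0, 1.0],
                       [np.sqrt(3) / 2, -0.5],
                       [-np.sqrt(3) / 2, -0.5]])

def bands(k):
    # Basis (A1, B1, A2, B2); B1-A2 are the dimer sites of AB stacking.
    f = t * np.exp(1j * (deltas @ k)).sum()
    H = np.array([[0, f, 0, 0],
                  [np.conj(f), 0, t_perp, 0],
                  [0, t_perp, 0, f],
                  [0, 0, np.conj(f), 0]])
    return np.linalg.eigvalsh(H)

K = np.array([4 * np.pi / (3 * np.sqrt(3) * a), 0.0])  # Dirac point
print(np.round(bands(K), 3))   # two bands at 0, two split by +/- t_perp
```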

    Global outliers detection in wireless sensor networks: A novel approach integrating time-series analysis, entropy, and random forest-based classification

    Wireless sensor networks (WSNs) have recently attracted greater attention worldwide due to their practicality in monitoring, communicating, and reporting specific physical phenomena. The data collected by WSNs are often inaccurate as a result of unavoidable environmental factors, which may include noise, signal weakness, or intrusion attacks, depending on the specific situation. Sending high-noise data has negative effects not just on data accuracy and network reliability, but also on decision-making processes at the base station. Anomaly detection, or outlier detection, is the process of detecting noisy data in such contexts. The literature contains relatively few noise detection techniques in the context of WSNs, particularly outlier-detection algorithms that apply time series analysis and consider effective neighbors to ensure global, collaborative detection. Hence, the research presented in this article is intended to design and implement a global outlier-detection approach that finds and selects appropriate neighbors to ensure adaptive collaborative detection based on time-series analysis and entropy techniques. The proposed approach applies a random forest algorithm to identify the best results. To measure the effectiveness and efficiency of the proposed approach, a comprehensive, real scenario provided by the Intel Berkeley Research Laboratory was simulated, with noisy data injected randomly into the collected data. The results obtained from the experiments demonstrate that our approach can detect anomalies with up to 99% accuracy.
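
    A hedged sketch of the two stages described above follows: candidate neighbors are ranked by the relative entropy (KL divergence) between their reading histograms and the target node's, and a random forest is then trained on the target plus selected-neighbor readings to classify outliers. The synthetic series, bin counts and injected anomalies are illustrative assumptions, not the Intel Berkeley data or the article's exact algorithm.

```python
import numpy as np
from scipy.stats import entropy
from sklearn.ensemble import RandomForestClassifier

# Stage 1: entropy-based neighbor selection (synthetic series).
rng = np.random.default_rng(2)
target = rng.normal(20, 1, 500)                       # target node series
neighbors = {f"n{i}": rng.normal(20 + d, 1, 500)
             for i, d in enumerate([0.1, 0.2, 5.0])}  # n2 is dissimilar

bins = np.histogram_bin_edges(target, bins=20)
p = np.histogram(target, bins=bins)[0] + 1            # +1: Laplace smoothing
scores = {name: entropy(p, np.histogram(s, bins=bins)[0] + 1)
          for name, s in neighbors.items()}
best = sorted(scores, key=scores.get)[:2]             # keep 2 best neighbors
print("selected neighbors:", best)

# Stage 2: random forest over target + selected-neighbor readings.
X = np.column_stack([target] + [neighbors[n] for n in best])
y = np.zeros(500, dtype=int)
X[::50] += 8.0                                        # inject outliers
y[::50] = 1
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```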

    Deep learning algorithm for supervision process in production using acoustic signal

    In an industrial environment, accurate fault diagnosis of machines is crucial to prevent shutdowns, failures, maintenance costs, and production downtime. Existing methods for preventing system failure are often unsatisfactory and expensive, prompting the need for alternative approaches. Acoustic signals have emerged as a new means of predicting machine component lifespan, but recognizing relevant features and distinguishing them from noise remains challenging. To address these challenges, we present a comprehensive model that integrates various components to enhance the accuracy and effectiveness of machine process identification. The proposed model incorporates a deep learning algorithm, which enables the forecasting of machine operation based on acoustic signals. In addition, we employ a customized Continuous Wavelet Transformation (CWT) technique to convert the acoustic signals into CWT images, preserving vital information such as signal amplitude. This transformation allows for a more comprehensive analysis and representation of the acoustic data. Furthermore, a Convolutional Neural Network (CNN) is utilized as a powerful classifier to accurately classify and differentiate between machine processes based on the features extracted from the CWT images. By combining these elements, our model provides a robust and efficient framework for machine process identification using acoustic signals. Testing our model on a dataset generated at the Institute for Manufacturing Technology and Machine Tools (IFW) for the Gildemeister CTX420 linear machine, we achieve over 97% accuracy in the discovery and early detection of emerging faults and machine processes based on acoustic signals.
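
    The pipeline described above (acoustic window -> CWT scalogram -> CNN classifier) can be sketched as follows. The synthetic two-class signals stand in for the IFW recordings, and the tiny network and training loop are illustrative assumptions rather than the paper's architecture.

```python
import numpy as np
import pywt
import torch
import torch.nn as nn

def scalogram(signal, scales=np.arange(1, 33)):
    # Continuous wavelet transform (Morlet) -> 2D "image" of |coefficients|.
    coef, _ = pywt.cwt(signal, scales, "morl")    # shape (32, len(signal))
    return np.abs(coef).astype(np.float32)

# Synthetic stand-in signals: two "machine processes" at different tones.
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 256)
X, y = [], []
for label, freq in [(0, 25.0), (1, 60.0)]:
    for _ in range(32):
        sig = np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=t.size)
        X.append(scalogram(sig))
        y.append(label)
X = torch.tensor(np.stack(X)).unsqueeze(1)        # (N, 1, 32, 256)
y = torch.tensor(y)

# A deliberately small CNN classifier over the scalogram images.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(30):                               # brief full-batch training
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(X), y)
    loss.backward()
    opt.step()
print("train accuracy:", (model(X).argmax(1) == y).float().mean().item())
```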