18 research outputs found

    Predicting the security threats of internet rumors and spread of false information based on sociological principle

    With the fast growth of the IoT, routine connectivity through a range of heterogeneous intelligent devices across Social Online Networks (SON) makes it feasible and effective to analyze sociological principles. Increased user contributions, including web posts, videos, and reviews, have gradually shaped people's lives in recent years, triggering volatile knowledge dissemination and undermining protection through rumor spreading, disinformation, and offensive online debate. Based on the early diffusion status, the goal of this research is to reliably forecast the future popularity of online content. Conventional prediction models focus primarily on discovering or integrating network features into a time-varying mechanism, which has remained an unresolved issue; it is addressed here using the Predicting the Security Threats of Internet Rumors (PSTIR) and Spread of False Information Based on Sociological principles (SFIBS) models, grounded in sociological concepts. In this paper, the proportion of trustworthy Facebook fans who post regularly is analyzed linearly against early and future popularity using the PSTIR and SFIBS methods. The Facebook statistics indicate that mainstream fatigue is an important prediction principle, and the experimental study demonstrates the effectiveness of PSTIR and SFIBS
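
    The abstract's linear analysis of early versus future popularity can be illustrated with a small sketch. The following is not the paper's PSTIR/SFIBS implementation; the engagement counts and the log-linear form are illustrative assumptions.

```python
# Minimal sketch (not the paper's PSTIR/SFIBS implementation): fit a linear
# relation between a post's early engagement and its later popularity, the
# kind of early-diffusion analysis the abstract describes.
import numpy as np

# Hypothetical data: engagement counts after 1 hour and after 24 hours.
early = np.array([12, 40, 7, 95, 23, 60, 150, 33], dtype=float)
future = np.array([85, 310, 42, 900, 150, 480, 1400, 260], dtype=float)

# Ordinary least squares on log counts (popularity growth is roughly multiplicative).
x, y = np.log1p(early), np.log1p(future)
slope, intercept = np.polyfit(x, y, 1)

def predict_future(early_count):
    """Predict the 24-hour count from the 1-hour count under the fitted log-linear model."""
    return np.expm1(intercept + slope * np.log1p(early_count))

print(f"slope={slope:.2f}, intercept={intercept:.2f}")
print(f"predicted 24h count for 50 early engagements: {predict_future(50):.0f}")
```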

    MTEDS: Multivariant Time Series-Based Encoder-Decoder System for Anomaly Detection

    Intrusion detection systems examine a computer or network for potential security vulnerabilities. Time-series data are real-valued, and the nature of the data influences the type of anomaly detection applied. Network anomalies are operations that deviate from the norm; they can cause a wide range of device malfunctions, overloads, and network intrusions, disrupting the network's normal operation and services. This paper proposes a new multivariant time-series-based encoder-decoder system for dealing with anomalies in time-series data with multiple variables. To update the network weights via backpropagation, a radical loss function is defined, and anomaly scores are used to evaluate performance. According to the findings, the anomaly score is more stable and traceable, with fewer false positives and negatives. The proposed system's efficiency is compared against three existing approaches: Multiscale Convolutional Recurrent Encoder-Decoder, Autoregressive Moving Average, and Long Short-Term Memory Encoder-Decoder. The results show that the proposed technique achieves the highest precision of 1 at a noise level of 0.2 and greater precision at noise factors of 0.25, 0.3, 0.35, and 0.4, demonstrating its effectiveness
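
    As a rough illustration of reconstruction-based anomaly scoring for multivariate time series, the sketch below trains a tiny GRU encoder-decoder and flags windows with high reconstruction error. It is not the MTEDS architecture, and the plain MSE loss stands in for the paper's radical loss function; all sizes and data are synthetic.

```python
# Minimal sketch: encoder-decoder anomaly scoring for multivariate windows.
# Window size, layer sizes, and the MSE loss are illustrative assumptions.
import torch
import torch.nn as nn

class SeqAutoencoder(nn.Module):
    """A tiny encoder-decoder over multivariate time-series windows."""
    def __init__(self, n_features=4, hidden=16):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                               # x: (batch, time, features)
        _, h = self.encoder(x)                          # h: (1, batch, hidden)
        z = h.transpose(0, 1).repeat(1, x.size(1), 1)   # repeat latent over time steps
        dec, _ = self.decoder(z)
        return self.out(dec)

model = SeqAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
normal = torch.randn(64, 30, 4)                         # synthetic "normal" windows

for _ in range(50):                                     # train to reconstruct normal behaviour
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(normal), normal)
    loss.backward()
    opt.step()

# Anomaly score = per-window reconstruction error; high scores flag anomalies.
test = torch.cat([normal[:4], normal[:4] + 5.0])        # last 4 windows are shifted (anomalous)
scores = ((model(test) - test) ** 2).mean(dim=(1, 2))
print(scores)
```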

    Opportunities of IoT in Fog Computing for High Fault Tolerance and Sustainable Energy Optimization

    Today, the importance of enhanced quality of service and energy optimization has promoted research into sensor applications such as pervasive health monitoring and distributed computing. In general, the resulting sensor data are stored on a cloud server for future processing. For this purpose, fog computing has recently emerged from a real-world perspective, utilizing end-user nodes and neighboring edge devices to perform computation and communication. This paper aims to develop a quality-of-service-based energy optimization (QoS-EO) scheme for wireless sensor environments deployed in fog computing. Fog nodes deployed in specific geographical areas cover the sensor activity performed in those areas and report the logical state of the entire system. The implemented techniques enable services in a fog-collaborated WSN environment; thus, the proposed scheme performs quality-of-service placement and optimizes the network energy. The results show a maximum turnaround time of 8 ms, a minimum turnaround time of 1 ms, and an average turnaround time of 3 ms. The calculated costs indicate that as the number of iterations increases, the path cost decreases, demonstrating the efficacy of the proposed technique. The CPU execution delay was reduced to a minimum of 0.06 s. In comparison, the proposed QoS-EO scheme has a lower network usage of 611,643.3 and a lower execution cost of 83,142.2. Thus, the results show the best cost estimation, reliability, and performance of data transfer in a short time, with a high level of network availability, throughput, and performance guarantee
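
    The following toy sketch only illustrates the general idea of QoS-aware placement with an energy trade-off and turnaround-time reporting; it is not the QoS-EO scheme, and all node figures, weights, and task sizes are invented.

```python
# Toy sketch: greedily send each sensor task to the fog/cloud node with the lowest
# weighted latency + energy cost, then report turnaround times. Figures are made up.
nodes = {"fog-A": {"link_ms": 2, "energy_per_ms": 1.0},
         "fog-B": {"link_ms": 1, "energy_per_ms": 3.0},
         "cloud": {"link_ms": 8, "energy_per_ms": 0.5}}
queue_ms = {n: 0 for n in nodes}                        # work already queued per node
tasks = [("t1", 2), ("t2", 5), ("t3", 1), ("t4", 3)]    # (task id, CPU time in ms)
W_LAT, W_EN = 0.7, 0.3                                  # QoS vs energy trade-off weights

def cost(node, cpu_ms):
    """Weighted cost: expected response time plus energy to run this task there."""
    latency = nodes[node]["link_ms"] + queue_ms[node] + cpu_ms
    energy = nodes[node]["energy_per_ms"] * cpu_ms
    return W_LAT * latency + W_EN * energy

turnaround = []
for tid, cpu in tasks:
    best = min(nodes, key=lambda n: cost(n, cpu))
    turnaround.append(nodes[best]["link_ms"] + queue_ms[best] + cpu)
    queue_ms[best] += cpu                               # the chosen node is now busier
    print(f"{tid} -> {best}")

print(f"turnaround (ms): min={min(turnaround)}, max={max(turnaround)}, "
      f"avg={sum(turnaround) / len(turnaround):.1f}")
```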

    NIPUNA: A Novel Optimizer Activation Function for Deep Neural Networks

    In recent years, deep neural networks with different learning paradigms have been widely employed in applications including medical diagnosis, image analysis, self-driving vehicles, and others. The activation functions used in a deep neural network have a huge impact on training and on the reliability of the model. The Rectified Linear Unit (ReLU) has emerged as the most popular and extensively utilized activation function, but it has flaws: during back-propagation it is active only when a unit's input is positive and zero otherwise, which causes neurons to die (the dying-ReLU problem) and shifts the bias. Unlike ReLU, the Swish activation function is smooth and non-monotonic. This research proposes a new activation function named NIPUNA for deep neural networks. We test this activation by training customized convolutional neural networks (CCNN). On benchmark datasets (Fashion-MNIST images of clothing and the MNIST dataset of handwritten digits), the contributions are examined and compared against various activation functions. The proposed activation function outperforms traditional activation functions
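
    Since the abstract does not give NIPUNA's formula, the sketch below only illustrates the dying-ReLU behaviour it mentions, contrasting ReLU's zero gradient for negative inputs with Swish written as a drop-in custom activation; everything here is illustrative rather than the paper's method.

```python
# Illustrative only: NIPUNA's formula is not given in the abstract, so this sketch
# contrasts ReLU (zero gradient for negative inputs -> "dying ReLU") with Swish,
# written as a drop-in custom activation for a PyTorch model.
import torch
import torch.nn as nn

class Swish(nn.Module):
    def forward(self, x):
        return x * torch.sigmoid(x)        # smooth, non-monotonic near zero

x = torch.linspace(-4, 4, 9, requires_grad=True)

relu_out = torch.relu(x)
relu_out.sum().backward()
relu_grad = x.grad.clone()
x.grad.zero_()

swish_out = Swish()(x)
swish_out.sum().backward()
swish_grad = x.grad

# ReLU's gradient is exactly 0 for all negative inputs (units can stop learning);
# Swish keeps a small nonzero gradient there.
print("ReLU grad :", relu_grad.tolist())
print("Swish grad:", swish_grad.tolist())
```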

    Exploring shared work values and work collaboration with a network approach: A case study from Italy

    A real-life case study presented in this chapter reports on how an organizational network analysis approach was used in a medium-sized Italian company of circa 100 employees to examine how employees were connected by shared values at work, what these values are, and whether and how their value connectedness affected the quality of their collaboration. The findings indicate a positive correlation between shared work values and work collaboration, present benchmarks for network parameters, and propose macro-categories of work values. To the best of the authors' knowledge, this is the first study to use the network-analysis approach to explore shared values and employee collaboration at work. The chapter should be of substantial interest not only to academic scholars but also to organizational leaders and HR practitioners
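
    A minimal sketch of the general network-analysis idea, using hypothetical employees and values rather than the chapter's survey data: compare value overlap among collaborating versus non-collaborating pairs.

```python
# Minimal sketch (hypothetical data, not the chapter's survey): relate how much two
# employees share work values to whether they collaborate, using networkx.
import networkx as nx
from itertools import combinations

# Hypothetical employees with the work values they endorsed.
values = {
    "ana":   {"quality", "teamwork", "innovation"},
    "bruno": {"quality", "teamwork"},
    "carla": {"innovation", "autonomy"},
    "dario": {"teamwork", "quality", "autonomy"},
}
collab = nx.Graph([("ana", "bruno"), ("ana", "dario"), ("carla", "dario")])

# For every pair, compute value overlap (Jaccard) and whether they collaborate.
pairs = []
for a, b in combinations(values, 2):
    jacc = len(values[a] & values[b]) / len(values[a] | values[b])
    pairs.append((jacc, collab.has_edge(a, b)))

# Compare mean value overlap among collaborating vs non-collaborating pairs.
collab_overlap = [j for j, c in pairs if c]
other_overlap = [j for j, c in pairs if not c]
print("mean overlap, collaborators:", sum(collab_overlap) / len(collab_overlap))
print("mean overlap, others:       ", sum(other_overlap) / len(other_overlap))
```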

    An application for the earthquake spectral and source parameters and prediction using adaptive neuro fuzzy inference system and machine learning

    Parameters related to earthquake origins can be broken down into two broad classes: source location and source dimension. Scientists use distance-versus-average-slowness curves to approximate the epicentre of an earthquake; the shape of these curves is a complex function of the epicentral distance, the geological structure of the Earth, and the path taken by the seismic waves. Brune's source model is fitted to the displacement spectrum of the measured seismic wave in order to estimate the source size by optimising the spectral parameters. The use of ANFIS to determine earthquake magnitude has the potential to significantly alter the playing field: ANFIS can learn, much like a person, from only the data that has already been collected, which improves predictions without requiring elaborate infrastructure. For this investigation's FIS development, we used a machine running Python 3.x on an 11th-generation Core i5 with an NVIDIA GeForce RTX 3050 Ti GPU. Moreover, the research demonstrates that assuming a large number of inputs to the membership function is not necessarily the best option, and the quality of inferences generated from data can vary greatly depending on how the data are organised. Subtractive clustering, which does not necessitate any type of normalisation, can be used to predict earthquake magnitude with a high degree of accuracy. This study has the potential to improve our ability to foresee quakes larger than magnitude 5; a solution is not promised to the practitioner, but the research is expected to lead in the right direction. Using Brune's source model and a high cut-off frequency factor, this article proposes machine learning techniques and a Brune-Based Application (BBA) in Python. The application accepts input in the SESAME ASCII format (SAF) and calculates the low-frequency displacement spectral level (Ω0), the corner frequency fc above which the spectrum decays at a rate of 2, the high cut-off frequency fmax above which the spectrum decays again, and the rate of decay N above fmax. Seismic moment, stress drop, source dimension, etc. have been estimated using these spectral characteristics and scaling laws; as with the maximum frequency fmax, its origin can be determined through careful experimentation and study. At some sites, the moment magnitude was 4.7 ± 0.09 and the seismic moment was on the order of (1.07 ± 0.19) × 10^23 dyne·cm; the stress drop is 76.3 ± 11.5 bars and the source radius is 850.0 ± 38.0 m. The ANFIS approach made fairly accurate predictions, as evidenced by residuals distributed consistently close to the centerlines. The R², RMSE, and MAE indices demonstrate that the accuracy of ANFIS is superior to that of the ANN
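
    The spectral-to-source step described above follows standard Brune-model relations, sketched below; the density, shear-wave velocity, radiation-pattern and free-surface constants are generic textbook values, not necessarily those used in the paper's BBA application, and the input values are illustrative.

```python
# Hedged sketch of standard Brune-model relations for source parameters.
# Constants (density, shear velocity, radiation pattern, free-surface factor)
# are generic textbook values, not those of the paper's BBA application.
import math

def brune_source_parameters(omega0_m_s, fc_hz, hypo_dist_m,
                            rho=2700.0, beta=3500.0,
                            rad_pattern=0.55, free_surface=2.0):
    """omega0: low-frequency displacement spectral level (m*s);
    fc: corner frequency (Hz); hypo_dist: hypocentral distance (m)."""
    # Seismic moment from the low-frequency plateau (SI units: N*m).
    m0 = 4.0 * math.pi * rho * beta**3 * hypo_dist_m * omega0_m_s \
         / (rad_pattern * free_surface)
    # Brune source radius from the corner frequency.
    radius = 2.34 * beta / (2.0 * math.pi * fc_hz)
    # Static stress drop (Pa) and moment magnitude (Hanks & Kanamori).
    stress_drop = 7.0 * m0 / (16.0 * radius**3)
    mw = (2.0 / 3.0) * math.log10(m0 * 1e7) - 10.7     # 1 N*m = 1e7 dyne*cm
    return m0, radius, stress_drop, mw

m0, r, dsigma, mw = brune_source_parameters(omega0_m_s=2.7e-4, fc_hz=1.5, hypo_dist_m=40e3)
print(f"M0={m0:.2e} N*m, radius={r:.0f} m, stress drop={dsigma/1e5:.1f} bar, Mw={mw:.2f}")
```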

    Effects of Integrated Fuzzy Logic PID Controller on Satellite Antenna Tracking System

    An antenna is an electrical device that converts electric power into radio waves and vice versa; it is deployed at both the transmitter and the receiver. During transmission, the radio transmitter supplies, at the antenna terminals, a current oscillating at the radio frequency, and the energy is radiated as electromagnetic waves. At the receiving end, a small voltage is induced in the antenna by the incoming electromagnetic wave and is then amplified by the receiver. This study focuses on the analysis of a satellite system to aid mobile antenna tracking and examines fuzzy control techniques built on conventional networks. Initially, a basic stabilized antenna tracking loop was proposed in light of the requirements on phase margin and bandwidth; however, the tracking gain is reduced when attributes and throughput change. In addition, fuzzy regulators and PID components are used to enhance the loop. The results indicate that the upper and lower antenna tracking gains within the loop were the best fit, and the loop's fluctuations are reduced. A fuzzy-logic-based controller can be the most efficient due to its simplicity and robustness, and fuzzy logic controllers are evaluated by their relative behavior. This paper presents an evaluation of fuzzy logic controllers integrated with conventional controllers. The PID regulator has three gains, and each gain can be used to control the input and output variables. The responses were analyzed and compared, and similar results were found for π/6 and π/3 with an increase in settling time; stability of the PID regulator can be improved by this system, and the settling time is reduced. Furthermore, the settling time may be reduced further through the fusion of PID and fuzzy control. The effectiveness of the system could be enhanced by implementing a neural network, and the two types of control could be designed to control the proposed solid platform
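
    A minimal sketch of the fuzzy-PID combination described above: a PID loop whose output is scaled by a simple fuzzy-style rule on the tracking error. The gains, rule breakpoints, and first-order antenna model are illustrative assumptions, not the paper's design.

```python
# Minimal sketch of a PID loop with a crude fuzzy-style gain adjustment, in the
# spirit of the fuzzy-PID combination the abstract evaluates. All numbers are
# illustrative assumptions.
def fuzzy_scale(error):
    """Piecewise 'fuzzy' rule: push harder when the pointing error is large."""
    e = abs(error)
    if e > 1.0:
        return 1.5      # large error -> aggressive gains
    if e > 0.2:
        return 1.0      # medium error -> nominal gains
    return 0.6          # small error -> gentle gains to limit oscillation

def simulate(kp=2.0, ki=0.5, kd=0.1, dt=0.01, steps=500, target=1.0):
    angle, rate, integ, prev_err = 0.0, 0.0, 0.0, target
    for _ in range(steps):
        err = target - angle
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = fuzzy_scale(err) * (kp * err + ki * integ + kd * deriv)  # fuzzy-scaled PID command
        rate += (u - rate) * dt * 5.0          # crude first-order actuator model
        angle += rate * dt
        prev_err = err
    return angle

print(f"final pointing angle after 5 s: {simulate():.3f} (target 1.0)")
```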

    Clinical Data Analysis for Prediction of Cardiovascular Disease Using Machine Learning Techniques

    Cardiovascular disease is difficult to detect due to several risk factors, including high blood pressure, high cholesterol, and an abnormal pulse rate. Accurate decision-making and optimal treatment are required to address cardiac risk. As machine learning technology advances, clinical practice in the healthcare industry is likely to change, so researchers and clinicians must recognize the importance of machine learning techniques. The main objective of this research is to recommend a highly accurate machine learning-based cardiovascular disease prediction system. Machine learning algorithms such as REP Tree, M5P Tree, Random Tree, Linear Regression, Naive Bayes, J48, and JRIP are used to classify popular cardiovascular datasets. The proposed CDPS's performance was evaluated using a variety of metrics to identify the most suitable machine learning model. The Random Tree model performed admirably in predicting cardiovascular disease patients, with the highest accuracy of 100%, the lowest MAE of 0.0011, the lowest RMSE of 0.0231, and the fastest prediction time of 0.01 seconds
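
    The paper's classifiers are WEKA algorithms (REP Tree, J48, JRIP, Random Tree, and so on); the sketch below evaluates rough scikit-learn analogues with the same style of metrics on synthetic data, purely for illustration.

```python
# Illustrative only: rough scikit-learn analogues of the abstract's WEKA classifiers,
# evaluated with accuracy, MAE, and RMSE on synthetic "clinical" data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, mean_absolute_error, mean_squared_error

# 13 features, loosely mirroring common heart-disease datasets (data are synthetic).
X, y = make_classification(n_samples=500, n_features=13, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "DecisionTree (J48-like)": DecisionTreeClassifier(random_state=0),
    "NaiveBayes": GaussianNB(),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    acc = accuracy_score(y_te, pred)
    mae = mean_absolute_error(y_te, pred)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    print(f"{name}: accuracy={acc:.3f}, MAE={mae:.4f}, RMSE={rmse:.4f}")
```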

    A new median-average round Robin scheduling algorithm: An optimal approach for reducing turnaround and waiting time

    A variety of algorithms handle processes on the CPU. The round-robin algorithm is an efficient CPU scheduling mechanism for time-sharing operating systems. The system processes tasks based on a time slice; however, determining the time slice has proven highly challenging, so researchers have presented a variety of dynamic time quantum scheduling techniques to address this challenge. This study aims to determine how best to schedule resources to maximize efficiency. The round-robin mechanism rotates between processes after the static quantum time expires, and the choice of quantum affects how effectively and efficiently processes can be scheduled. The average waiting time, turnaround time, and number of context switches of the round-robin scheduling algorithm are high enough to affect system performance. To overcome round robin's drawbacks, the authors of this study propose an improved algorithm, Median-Average Round Robin (MARR), which derives a dynamic time quantum from the median and average of the processes' burst times. The authors compared the proposed model with four other scheduling algorithms, and the results clearly show that the proposed algorithm gives effective results with reduced average turnaround and waiting times. In the future, cost and RAM utilization will be considered to enhance the algorithm
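
    The abstract derives the dynamic time quantum from the median and average burst times but does not state the exact formula; the sketch below assumes quantum = (median + mean) / 2 and simulates round robin to report average waiting and turnaround times, with illustrative burst times.

```python
# Hedged sketch: the exact MARR quantum formula is not given in the abstract, so this
# simulation assumes quantum = (median + mean) / 2. Burst times are illustrative.
from statistics import mean, median

def round_robin(bursts, quantum):
    """Simulate round robin (all arrivals at t=0); return avg waiting/turnaround time."""
    n = len(bursts)
    remaining = list(bursts)
    finish = [0] * n
    t = 0
    while any(r > 0 for r in remaining):
        for i in range(n):
            if remaining[i] > 0:
                run = min(quantum, remaining[i])
                t += run
                remaining[i] -= run
                if remaining[i] == 0:
                    finish[i] = t
    turnaround = finish                              # arrival time is 0 for every process
    waiting = [ta - b for ta, b in zip(turnaround, bursts)]
    return mean(waiting), mean(turnaround)

bursts = [24, 3, 3, 12, 7]
quantum = (median(bursts) + mean(bursts)) / 2        # dynamic quantum (assumed formula)
w, ta = round_robin(bursts, quantum)
print(f"quantum={quantum:.1f}, avg waiting={w:.1f}, avg turnaround={ta:.1f}")
```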

    Intelligent Decision-Making of Load Balancing Using Deep Reinforcement Learning and Parallel PSO in Cloud Environment
