Modelling the Spread of Botnet Malware in IoT-Based Wireless Sensor Networks
The propagation approach of a botnet largely dictates its formation, establishing a foundation of bots for future exploitation. The chosen propagation method determines the attack surface and, consequently, the degree of network penetration, the overall size of the botnet, and its eventual attack potency. It is therefore essential to understand propagation behaviours and influential factors in order to better secure vulnerable systems. Whilst botnet propagation is generally well studied, newer technologies like IoT have unique characteristics which are yet to be thoroughly explored. In this paper, we apply the principles of epidemic modelling to IoT networks consisting of wireless sensor nodes. We build IoT-SIS, a novel propagation model which considers the impact of IoT-specific characteristics, such as limited processing power, energy restrictions, and node density, on the formation of a botnet. Focusing on worm-based propagation, this model is used to explore the dynamics of spread using numerical simulations and the Monte Carlo method, and to discuss the real-life implications of our findings.
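The SIS dynamics underlying this family of models can be sketched numerically. The snippet below integrates the classic SIS equations with illustrative parameters; it is not the paper's IoT-SIS model, which additionally accounts for processing power, energy restrictions, and node density.

```python
# Minimal SIS (Susceptible-Infected-Susceptible) sketch of worm spread.
# beta (infection rate), gamma (disinfection rate) are illustrative values.
def simulate_sis(n_nodes=1000, beta=0.3, gamma=0.1, i0=10, steps=500, dt=0.1):
    """Euler integration of dI/dt = beta*S*I/N - gamma*I."""
    s, i = float(n_nodes - i0), float(i0)
    history = [i]
    for _ in range(steps):
        new_infections = beta * s * i / n_nodes * dt
        disinfections = gamma * i * dt
        s += disinfections - new_infections
        i += new_infections - disinfections
        history.append(i)
    return history

hist = simulate_sis()
# Infections settle near the endemic equilibrium I* = N * (1 - gamma/beta).
```

Because recovered (disinfected) nodes return to the susceptible pool, the infection does not die out but approaches an endemic equilibrium whenever beta exceeds gamma.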
Benchmarking Change Detector Algorithms from Different Concept Drift Perspectives
The stream mining paradigm has become increasingly popular due to the vast number of algorithms and methodologies it provides to address the current challenges of Internet of Things (IoT) and modern machine learning systems. Change detection algorithms, which focus on identifying drifts in the data distribution during the operation of a machine learning solution, are a crucial aspect of this paradigm. However, selecting the best change detection method for different types of concept drift can be challenging. This work aimed to provide a benchmark of four drift detection algorithms (EDDM, DDM, HDDMW, and HDDMA) for abrupt, gradual, and incremental drift types. To shed light on the capacity and possible trade-offs involved in selecting a concept drift algorithm, we compare their detection capability, detection time, and detection delay. The experiments were carried out using synthetic datasets, where various attributes, such as stream size, the number of drifts, and drift duration, can be controlled and manipulated with our synthetic stream generator. Our results show that HDDMW provides the best trade-off among all performance indicators, demonstrating superior consistency in detecting abrupt drifts, but it has suboptimal time consumption and a limited ability to detect incremental drifts. However, it outperforms the other algorithms in detection delay for both abrupt and gradual drifts, while maintaining efficient detection and detection-time performance.
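The core idea shared by detectors such as DDM can be sketched compactly: monitor the stream of prediction errors and flag drift once the error rate rises well above its historical minimum. The snippet below is an illustrative re-implementation of this DDM-style rule, not the reference code of any of the benchmarked algorithms.

```python
import math

class SimpleDDM:
    """DDM-style drift detector over a stream of 0/1 prediction errors.

    Tracks the running error rate p and its standard deviation s, remembers
    the minimum of p + s, and signals drift when p + s exceeds
    p_min + drift_level * s_min (illustrative re-implementation).
    """
    def __init__(self, drift_level=3.0):
        self.n = 0
        self.p = 0.0                 # running error rate
        self.p_min = float("inf")
        self.s_min = float("inf")
        self.drift_level = drift_level

    def update(self, error):         # error: 1 = misclassified, 0 = correct
        self.n += 1
        self.p += (error - self.p) / self.n
        s = math.sqrt(self.p * (1 - self.p) / self.n)
        if self.n > 30 and self.p + s < self.p_min + self.s_min:
            self.p_min, self.s_min = self.p, s
        return (self.n > 30 and
                self.p + s > self.p_min + self.drift_level * self.s_min)

det = SimpleDDM()
drift_at = None
for t in range(2000):
    # error rate 10% before t = 1000, then an abrupt jump to 60%
    err = int(t % 10 == 0) if t < 1000 else int(t % 10 < 6)
    if det.update(err):
        drift_at = t
        break
# drift_at lands shortly after the abrupt change at t = 1000
```

The gap between the change point (t = 1000) and the detection point is exactly the detection delay that the benchmark measures.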
Detecting and mitigating adversarial examples in regression tasks: A photovoltaic power generation forecasting case study
With data collected by Internet of Things sensors, deep learning (DL) models can forecast the generation capacity of photovoltaic (PV) power plants. This functionality is especially relevant for PV power operators and users, as PV plants exhibit irregular behavior related to environmental conditions. However, DL models are vulnerable to adversarial examples, which may lead to increased predictive error and wrong operational decisions. This work proposes a new scheme to detect adversarial examples and mitigate their impact on DL forecasting models. The approach is based on one-class classifiers and features extracted from the data inputted to the forecasting models. Tests were performed using data collected from a real-world PV power plant along with adversarial samples generated by the Fast Gradient Sign Method under multiple attack patterns and magnitudes. One-class Support Vector Machine and Local Outlier Factor were evaluated as detectors of attacks on Long Short-Term Memory and Temporal Convolutional Network forecasting models. According to the results, the proposed scheme showed a high capability of detecting adversarial samples, with an average F1-score close to 90%. Moreover, the detection and mitigation approach strongly reduced the prediction error increase caused by adversarial samples.
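The Fast Gradient Sign Method perturbs each input feature by a small step in the direction of the loss gradient's sign. The sketch below demonstrates the mechanics against a toy linear forecaster; the paper attacks LSTM/TCN models, and all weights and inputs here are synthetic stand-ins.

```python
import random

# FGSM sketch: x_adv = x + eps * sign(grad_x of the squared forecasting error),
# applied to a toy linear model y_hat = w . x (hypothetical weights/inputs).
random.seed(0)
w = [random.gauss(0, 1) for _ in range(8)]   # "trained" weights (stand-in)
x = [random.gauss(0, 1) for _ in range(8)]   # clean input window
dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
y = dot(w, x) + 0.5                          # target with a small residual

def sign(v):
    return (v > 0) - (v < 0)

def fgsm(x, w, y, eps):
    residual = dot(w, x) - y                 # gradient of (w.x - y)^2 wrt x
    return [xi + eps * sign(2 * residual * wi) for xi, wi in zip(x, w)]

x_adv = fgsm(x, w, y, eps=0.1)
clean_err = abs(dot(w, x) - y)               # 0.5 on the clean input
adv_err = abs(dot(w, x_adv) - y)             # strictly larger after the attack
```

Even though every feature moves by at most eps, the signs are chosen adversarially, so the per-feature nudges accumulate into a noticeably worse forecast, which is what the one-class detectors in the paper aim to catch.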
Time series segmentation based on stationarity analysis to improve new samples prediction
A wide range of applications based on sequential data, named time series, have become increasingly popular in recent years, mainly those based on the Internet of Things (IoT). Several different machine learning algorithms exploit the patterns extracted from sequential data to support multiple tasks. However, this data can suffer from unreliable readings that can lead to low-accuracy models due to the low-quality training sets available. Detecting the change point between highly representative segments is an important ally in finding and treating biased subsequences. By constructing a framework based on the Augmented Dickey-Fuller (ADF) test for data stationarity, two proposals to automatically segment subsequences in a time series were developed. The former proposal, called Change Detector segmentation, relies on change detection methods from data stream mining. The latter, called ADF-based segmentation, is built on a new change detector derived from the ADF test alone. Experiments over real-life IoT databases and benchmarks showed the improvement provided by our proposals for prediction tasks with traditional Autoregressive Integrated Moving Average (ARIMA) and Deep Learning (Long Short-Term Memory and Temporal Convolutional Network) methods. Results obtained by the Long Short-Term Memory predictive model reduced the relative prediction error from 1 to 0.67, compared to time series without segmentation.
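The segmentation idea can be illustrated with a much simpler stationarity heuristic than the ADF test: split the series wherever the windowed mean shifts by more than a few pooled standard deviations. This is only a stand-in for the paper's ADF-based change detector, with illustrative window and threshold values.

```python
import statistics

def segment_by_mean_shift(series, window=30, threshold=1.5):
    """Split a series where the mean of adjacent windows shifts by more than
    `threshold` pooled standard deviations (a crude stationarity check,
    standing in for the paper's ADF-based detector)."""
    boundaries = [0]
    i = window
    while i + window <= len(series):
        left = series[i - window:i]
        right = series[i:i + window]
        pooled_sd = statistics.pstdev(left + right) or 1e-9
        if abs(statistics.mean(right) - statistics.mean(left)) > threshold * pooled_sd:
            boundaries.append(i)
            i += window                 # skip past the detected change point
        else:
            i += 1
    boundaries.append(len(series))
    return [series[a:b] for a, b in zip(boundaries, boundaries[1:])]

# Two regimes: a flat level at 0, then an abrupt shift to 5.
data = [0.0] * 100 + [5.0] * 100
segs = segment_by_mean_shift(data)      # two segments, split near index 100
```

Each resulting segment is closer to stationary than the full series, so a predictive model trained on the most recent segment is less biased by the old regime.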
Recommended from our members
A privacy-aware authentication and usage-controlled access protocol for IIoT decentralized data marketplace
Data is ubiquitous, powerful and valuable today. With vast instalments of Industrial Internet-of-Things (IIoT) infrastructure, data is in abundance, albeit sitting in organizational silos. Data marketplaces have emerged to allow monetization of data by trading it with interested buyers. While centralized marketplaces are common, they are controlled by a few parties and are non-transparent. Decentralized data marketplaces allow the democratization of rates and trading terms and give fine-grained control to participants. However, in such a marketplace, ensuring privacy and security is crucial. Existing data exchange schemes depend on a trusted third party for key management during authentication and rely on a one-time approach to authorization. This paper proposes a user-empowered, privacy-aware, authentication and usage-controlled access protocol for an IIoT data marketplace. The proposed protocol leverages the concept of Self-Sovereign Identity (SSI) and is based on the standards of Decentralized Identifier (DID) and Verifiable Credential (VC). DIDs empower buyers and give them complete control over their identities. The buyers authenticate and prove claims to access data securely using VCs. The proposed protocol also implements a dynamic user-revocation policy. Usage-controlled access provides secure, ongoing authorization during data exchange. A detailed performance and security analysis is provided to show its feasibility.
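The buyer-authentication step of such a protocol can be sketched as a challenge-response exchange with a revocation check. Real SSI/DID flows use asymmetric signatures over Verifiable Credentials; the HMAC over a shared secret below is only a stand-in so the sketch runs on the standard library, and all identifiers are hypothetical.

```python
import hashlib
import hmac
import json
import os

# Challenge-response sketch of buyer authentication with dynamic revocation.
# HMAC with a shared secret stands in for a real VC signature scheme.
def issue_challenge():
    return os.urandom(16).hex()             # fresh nonce from the seller

def present_credential(challenge, secret, claims):
    """Buyer binds its (hypothetical) VC claims to the seller's challenge."""
    payload = json.dumps({"claims": claims, "challenge": challenge},
                         sort_keys=True)
    tag = hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()
    return payload, tag

def verify_presentation(payload, tag, secret, challenge, revoked):
    data = json.loads(payload)
    if data["challenge"] != challenge:      # replay protection
        return False
    if data["claims"].get("did") in revoked:  # dynamic revocation policy
        return False
    expected = hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

secret = os.urandom(32)
revoked = set()
ch = issue_challenge()
payload, tag = present_credential(ch, secret, {"did": "did:example:buyer1"})
ok = verify_presentation(payload, tag, secret, ch, revoked)        # accepted
revoked.add("did:example:buyer1")
ok_after_revocation = verify_presentation(payload, tag, secret, ch, revoked)
```

Binding the presentation to a fresh nonce prevents replay, and checking a revocation set on every verification is what makes the authorization ongoing rather than one-time.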
Information and telecommunications project for a digital city: a brazilian case study
Making information and telecommunications available is a permanent challenge for cities concerned with their social, urban and local planning and development, focused on their citizens' quality of life and on the effectiveness of public management. Such a challenge requires the involvement of everyone in the city. The objective of this paper is to describe the information and telecommunications project from the planning of a digital city carried out in Vinhedo-SP, Brazil. It was built as a telecommunications infrastructure of the "open-access metropolitan area network" kind, which enables the integration of citizens into a single telecommunications environment. The research methodology was a case study that evolved into action research, comprising the municipal administration and its local units. The results describe, by means of a methodology, the phases, sub-phases, activities, approval points and resulting products, and formalize their respective challenges and difficulties. The contributions concern the practical feasibility of the project and the execution of its methodology. The conclusion reiterates the importance of the project, collectively implemented and accepted, as a tool to help the management of cities, in the implementation of Strategic Digital City Projects, in the decisions of public administration managers, and in the quality of life of their citizens.
Local Differential Privacy-Based Data-Sharing Scheme for Smart Utilities
The manufacturing sector is a vital component of most economies, which makes its organisations frequent targets of cyberattacks, where disruption of operations may lead to significant economic consequences. Adversaries aim to disrupt the production processes of manufacturing companies, gain financial advantages, and steal intellectual property by getting unauthorised access to sensitive data. Access to sensitive data helps organisations enhance their production and management processes. However, the majority of existing data-sharing mechanisms are either susceptible to different cyberattacks or heavy in terms of computation overhead. In this paper, a privacy-preserving data-sharing scheme for smart utilities is proposed. First, a customer privacy adjustment mechanism is proposed to ensure that end-users have control over their privacy, as required by the latest government regulations, such as the General Data Protection Regulation. Secondly, a local differential privacy-based mechanism is proposed to ensure the privacy of end-users by hiding real data according to end-user preferences. The proposed scheme may be applied to different industrial control systems; in this study, it is validated for an energy utility use case consisting of smart intelligent devices. The results show that the proposed scheme can guarantee the required level of privacy with an expected relative error in utility.
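The basic local differential privacy mechanism that such a scheme builds on can be sketched in a few lines: each reading is perturbed on-device with Laplace noise calibrated to sensitivity/epsilon before it leaves the household. The paper layers a user-controlled privacy adjustment on top; the parameters below are illustrative.

```python
import random

# Local DP sketch: Laplace(0, sensitivity/epsilon) noise, sampled as the
# difference of two exponential variates, added to each reading on-device.
def randomize(value, epsilon, sensitivity=1.0, rng=random):
    scale = sensitivity / epsilon
    return value + rng.expovariate(1 / scale) - rng.expovariate(1 / scale)

random.seed(1)
readings = [3.2, 4.1, 2.8, 5.0] * 250        # 1000 synthetic meter readings
noisy = [randomize(r, epsilon=2.0) for r in readings]

true_mean = sum(readings) / len(readings)
noisy_mean = sum(noisy) / len(noisy)         # noise is zero-mean, so the
                                             # aggregate stays accurate
```

Individual perturbed readings reveal little about any single household, yet because the noise has zero mean, utility-scale aggregates such as the average demand remain usable, which is the utility/privacy trade-off the scheme quantifies.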
A Privacy-Preserving User-Centric Data-Sharing Scheme
Using raw sensitive data of end-users helps service providers manage their operations efficiently and provide high-quality services to end-users. Although access to sensitive information benefits both parties, it poses several challenges concerning end-user privacy. Most data-sharing schemes based on differential privacy allow control of the level of privacy, which is not straightforward for end-users and leads to unpredictable utility. To address this issue, a novel locally differentially private data-sharing scheme is proposed, featuring a bimodal probability distribution that allows determining the range of random variables from which the noise is drawn with high probability. Additionally, a locally differentially private mechanism is introduced to regulate the amount of noise injected into the data to control data utility. These components are combined into a user-centric data-sharing scheme which gives the end-user control over the utility of their data, with the level of privacy calculated from individual utility preferences. The simulation results show that the proposed scheme keeps the utility within the boundaries defined by the end-user while providing the maximum possible level of privacy. Furthermore, it allows injecting more noise into the data for the same utility error compared to the Laplace mechanism.
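The bimodal-noise idea can be illustrated with a mixture of two narrow Gaussians centred at +/-mu: the noise magnitude then stays close to mu with high probability, making the utility loss predictable, unlike heavy-tailed Laplace noise. This is an illustrative distribution; the parameters are not those derived in the paper.

```python
import random

# Bimodal noise sketch: a 50/50 mixture of N(+mu, sigma) and N(-mu, sigma).
# With sigma << mu, |noise| concentrates tightly around mu, so the error
# injected into each shared value is predictable (bounded with high prob.).
def bimodal_noise(mu, sigma, rng=random):
    centre = mu if rng.random() < 0.5 else -mu
    return rng.gauss(centre, sigma)

random.seed(7)
samples = [bimodal_noise(mu=1.0, sigma=0.1) for _ in range(1000)]

# Nearly every draw has magnitude within mu +/- 3*sigma = [0.7, 1.3].
within_band = sum(0.7 < abs(s) < 1.3 for s in samples) / len(samples)
mean_noise = sum(samples) / len(samples)     # symmetric mixture: zero mean
```

Because the two modes are symmetric, aggregates remain unbiased, while the concentrated magnitude is what lets the scheme promise the end-user a utility error within stated boundaries.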
Evaluating the Four-Way Performance Trade-Off for Data Stream Classification in Edge Computing
Edge computing (EC) is a promising technology capable of bridging the gap between Cloud computing services and the demands of emerging technologies such as the Internet of Things (IoT). Most EC-based solutions, from wearable devices to smart city architectures, benefit from Machine Learning (ML) methods to perform various tasks, such as classification. In these cases, ML solutions need to deal efficiently with a huge amount of data while balancing predictive performance, memory and time costs, and energy consumption. The fact that these data usually come in the form of a continuous and evolving data stream makes the scenario even more challenging. Many algorithms have been proposed to cope with data stream classification, e.g., the Very Fast Decision Tree (VFDT) and Strict VFDT (SVFDT). Recently, Online Local Boosting (OLBoost) has also been introduced to improve predictive performance without modifying the underlying structure of the decision tree produced by these algorithms. In this work, we evaluated the four-way trade-off among time efficiency, energy consumption, predictive performance, and memory costs, tuning the hyperparameters of VFDT and the two versions of SVFDT, with and without OLBoost. Experiments over six benchmark datasets using an EC device revealed that VFDT and SVFDT-I were the most energy-friendly algorithms, with SVFDT-I also significantly reducing memory consumption. OLBoost, as expected, improved predictive performance but caused a deterioration in memory and energy consumption.
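The statistical core of VFDT-style trees is the Hoeffding bound, which decides when enough stream examples have been seen to trust an observed difference between candidate split attributes. The sketch below computes that bound with illustrative inputs (binary classification with information gain, so the value range is 1).

```python
import math

# Hoeffding bound: after n examples, a difference between the two best
# splits is trusted once it exceeds eps = sqrt(R^2 * ln(1/delta) / (2n)).
def hoeffding_bound(value_range, delta, n):
    return math.sqrt(value_range ** 2 * math.log(1 / delta) / (2 * n))

def should_split(gain_best, gain_second, value_range, delta, n):
    return (gain_best - gain_second) > hoeffding_bound(value_range, delta, n)

# Binary-class info gain (range 1.0), confidence delta = 1e-7:
eps_200 = hoeffding_bound(1.0, 1e-7, 200)     # ~0.20: still too uncertain
eps_5000 = hoeffding_bound(1.0, 1e-7, 5000)   # ~0.04: small gaps now decide
```

A gain gap of 0.1 is therefore ignored after 200 examples but triggers a split after 5000, which is exactly how these trees grow incrementally from a stream without storing it.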