150 research outputs found

    Elliptic curve cryptosystem over optimal extension fields for computationally constrained devices

    Data security will play a central role in the design of future IT systems. The PC has been a major driver of the digital economy. Recently, there has been a shift towards IT applications realized as embedded systems, because they have proved to be good solutions for many applications, especially those which require data processing in real time. Examples include security for wireless phones, wireless computing, pay-TV, and copy protection schemes for audio/video consumer products and digital cinemas. Most of these embedded applications will be wireless, which makes the communication channel vulnerable. The implementation of cryptographic systems presents several requirements and challenges. For example, the performance of algorithms is often crucial, and guaranteeing security is a formidable challenge. Encryption algorithms need to run at the transmission rates of the communication links, at speeds that are achieved through custom hardware devices. Public-key cryptosystems such as RSA, DSA and DSS have traditionally been used to accomplish secure communication via insecure channels. Elliptic curves are the basis for a relatively new class of public-key schemes. It is predicted that elliptic curve cryptosystems (ECCs) will replace many existing schemes in the near future. The main reason for the attractiveness of ECC is that significantly smaller parameters can be used in ECC than in other competing systems, but with equivalent levels of security. The benefits of a smaller key size include faster computations and reductions in processing power, storage space and bandwidth. This makes ECC ideal for constrained environments where resources such as power, processing time and memory are limited. The implementation of ECC requires several choices, such as the type of the underlying finite field, algorithms for implementing the finite field arithmetic, the type of the elliptic curve, algorithms for implementing the elliptic curve group operation, and elliptic curve protocols. Many of these selections may have a major impact on overall performance. In this dissertation a finite field from a special class called the Optimal Extension Field (OEF) is chosen as the underlying finite field for implementing ECC. OEFs utilize the fast integer arithmetic available on modern microcontrollers to produce very efficient results without resorting to multiprecision operations or arithmetic using polynomials of large degree. This dissertation discusses the theoretical and implementation issues associated with the development of this finite field in a low-end embedded system. It also presents various improvement techniques for OEF arithmetic. The main objectives of this dissertation are to: implement the functions required to perform the finite field arithmetic operations; implement the functions required to generate an elliptic curve and to embed data on that elliptic curve; and implement the functions required to perform the elliptic curve group operation. All of these functions constitute a library that could be used to implement any elliptic curve cryptosystem. In this dissertation this library is implemented on an 8-bit Atmel AVR microcontroller. Dissertation (MEng (Computer Engineering))--University of Pretoria, 2006. Electrical, Electronic and Computer Engineering. Unrestricted.
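
    As a rough illustration of the kind of arithmetic such a library provides, the sketch below implements addition and multiplication in an OEF GF(p^m), where p fits in a single machine word and reduction uses a binomial x^m - omega. The parameters in the example (p = 239, m = 2, omega = 7) are illustrative placeholders, not the field chosen in the dissertation.

```python
# Minimal sketch (not the dissertation's library): arithmetic in an Optimal
# Extension Field GF(p^m), with p a word-sized prime and reduction by the
# irreducible binomial x^m - omega. Parameters are illustrative only.

class OEF:
    def __init__(self, p, m, omega):
        self.p, self.m, self.omega = p, m, omega   # field is GF(p^m), x^m = omega

    def add(self, a, b):
        # coefficient-wise addition mod p; a and b are lists of m subfield elements
        return [(x + y) % self.p for x, y in zip(a, b)]

    def mul(self, a, b):
        # schoolbook polynomial multiplication followed by binomial reduction
        t = [0] * (2 * self.m - 1)
        for i, x in enumerate(a):
            for j, y in enumerate(b):
                t[i + j] = (t[i + j] + x * y) % self.p
        for k in range(2 * self.m - 2, self.m - 1, -1):
            t[k - self.m] = (t[k - self.m] + self.omega * t[k]) % self.p  # x^m = omega
        return t[:self.m]

# Toy field: p = 2^8 - 17 = 239 fits in one byte, and x^2 - 7 is irreducible
# over GF(239), so GF(239^2) is a small but valid OEF-style field.
F = OEF(p=239, m=2, omega=7)
print(F.add([5, 7], [11, 3]))   # (5 + 7x) + (11 + 3x)
print(F.mul([5, 7], [11, 3]))   # (5 + 7x) * (11 + 3x), reduced modulo x^2 - 7
```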

    Accurate and efficient localisation in wireless sensor networks using a best-reference selection

    Many wireless sensor network (WSN) applications depend on knowing the position of nodes within the network if they are to function efficiently. Location information is used, for example, in item tracking, routing protocols and controlling node density. Configuring each node with its position manually is cumbersome, and not feasible in networks with mobile nodes or dynamic topologies. WSNs, therefore, rely on localisation algorithms for the sensor nodes to determine their own physical location. The basis of several localisation algorithms is the theory that the higher the number of reference nodes (called “references”) used, the greater the accuracy of the estimated position. However, this approach makes computation more complex and increases the likelihood that the location estimate may be inaccurate. Such inaccuracy could be due to including data from nodes with a large measurement error, or from nodes that intentionally aim to undermine the localisation process. This approach also has limited success in networks with sparse references, or where data cannot always be collected from many references (due, for example, to communication obstructions or bandwidth limitations). These situations require a method for achieving reliable and accurate localisation using a limited number of references. Designing a localisation algorithm that could estimate node position with high accuracy using a low number of references is not a trivial problem. As the number of references decreases, more statistical weight is attached to each reference’s location estimate. The overall localisation accuracy therefore greatly depends on the robustness of the selection method that is used to eliminate inaccurate references. Various localisation algorithms and their performance in WSNs were studied. Information-fusion theory was also investigated and a new technique, rooted in information-fusion theory, was proposed for defining the best criteria for the selection of references. The researcher chose selection criteria to identify only those references that would increase the overall localisation accuracy. Using these criteria also minimises the number of iterations needed to refine the accuracy of the estimated position. This reduces bandwidth requirements and the time required for a position estimate after any topology change (or even after initial network deployment). The resultant algorithm achieved two main goals simultaneously: accurate location discovery and information fusion. Moreover, the algorithm fulfils several secondary design objectives: self-organising nature, simplicity, robustness, localised processing and security. The proposed method was implemented and evaluated using a commercial network simulator. This evaluation of the proposed algorithm’s performance demonstrated that it is superior to the other localisation algorithms evaluated; using fewer references, the algorithm performed better in terms of accuracy, robustness, security and energy efficiency. These results confirm that the proposed selection method and associated localisation algorithm allow reliable and accurate location information to be gathered using a minimum number of references. This decreases the computational burden of gathering and analysing location data from the high number of references previously believed to be necessary. Thesis (PhD(Eng))--University of Pretoria, 2011. Electrical, Electronic and Computer Engineering. Unrestricted.
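
    To make the reference-selection idea concrete, the sketch below (an illustrative stand-in, not the thesis algorithm) ranks references by how consistent their range measurements are with a provisional position estimate, keeps only the best few, and re-estimates the position by linearised least squares.

```python
# Illustrative sketch: best-reference selection followed by least-squares
# multilateration. The selection criterion here is a simple residual ranking,
# assumed for illustration only.
import numpy as np

def trilaterate(refs, dists):
    # linearise ||x - r_i||^2 = d_i^2 against the first reference r_0
    r0, d0 = refs[0], dists[0]
    A = 2 * (refs[1:] - r0)
    b = d0**2 - dists[1:]**2 + np.sum(refs[1:]**2, axis=1) - np.sum(r0**2)
    return np.linalg.lstsq(A, b, rcond=None)[0]

def select_best_references(refs, dists, k):
    # keep the k references whose measured range best matches a provisional estimate
    est = trilaterate(refs, dists)
    residuals = np.abs(np.linalg.norm(refs - est, axis=1) - dists)
    keep = np.argsort(residuals)[:k]
    return refs[keep], dists[keep]

refs = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 9.0]])
true_pos = np.array([3.0, 4.0])
dists = np.linalg.norm(refs - true_pos, axis=1) + np.random.normal(0, 0.1, len(refs))
sel_refs, sel_d = select_best_references(refs, dists, k=4)
print(trilaterate(sel_refs, sel_d))   # position estimate from the selected references
```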

    An energy-efficient sensing matrix for wireless multimedia sensor networks

    DATA AVAILABILITY STATEMENT: No datasets were created during this study, and all relevant datasets are already publicly available. A measurement matrix is essential to compressed sensing frameworks. The measurement matrix can establish the fidelity of a compressed signal, reduce the sampling rate demand, and enhance the stability and performance of the recovery algorithm. Choosing a suitable measurement matrix for Wireless Multimedia Sensor Networks (WMSNs) is demanding because energy efficiency must be weighed carefully against image quality. Many measurement matrices have been proposed to deliver low computational complexity or high image quality, but only some achieve both, and even fewer have been proven beyond doubt. A Deterministic Partial Canonical Identity (DPCI) matrix is proposed that has the lowest sensing complexity of the leading energy-efficient sensing matrices while offering better image quality than the Gaussian measurement matrix. The proposed matrix is based on the simplest sensing matrix, with the random numbers replaced by a chaotic sequence and the random permutation replaced by random sample positions. The novel construction significantly reduces the computational complexity as well as the time complexity of the sensing matrix. The DPCI has lower recovery accuracy than other deterministic measurement matrices such as the Binary Permuted Block Diagonal (BPBD) and Deterministic Binary Block Diagonal (DBBD), but offers a lower construction cost than the BPBD and a lower sensing cost than the DBBD. This matrix offers the best balance between energy efficiency and image quality for energy-sensitive applications. https://www.mdpi.com/journal/sensors am2024 Electrical, Electronic and Computer Engineering Non
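
    The sketch below shows, under stated assumptions, what a partial-canonical-identity-style sensing matrix driven by a chaotic sequence can look like: each row selects a single sample position, and both the nonzero value and the position are derived from a logistic-map sequence. The logistic-map parameters and the index mapping are assumptions for illustration, not the exact DPCI construction of the paper.

```python
# Hedged sketch of a deterministic, identity-like sensing matrix built from a
# chaotic sequence instead of random numbers. Not the paper's DPCI definition.
import numpy as np

def chaotic_sequence(length, x0=0.37, r=3.99):
    xs, x = [], x0
    for _ in range(length):
        x = r * x * (1.0 - x)        # logistic map in the chaotic regime
        xs.append(x)
    return np.array(xs)

def dpci_like_matrix(m, n, seed=0.37):
    seq = chaotic_sequence(2 * m, x0=seed)
    cols = np.unique((seq[:m] * n).astype(int))[:m]          # deterministic sample positions
    phi = np.zeros((len(cols), n))
    phi[np.arange(len(cols)), cols] = np.sign(seq[m:m + len(cols)] - 0.5)  # +/-1 entries
    return phi

phi = dpci_like_matrix(m=64, n=256)
x = np.random.randn(256)          # stand-in for a vectorised image block
y = phi @ x                       # compressed measurements: one picked sample per row
print(phi.shape, y.shape)
```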

    Real time security assessment of the power system using a hybrid support vector machine and multilayer perceptron neural network algorithms

    Abstract: In today’s grid, technology-based cyber-physical systems continue to be plagued by cyberattacks and intrusions. Any intrusive action on the power system’s Optimal Power Flow (OPF) modules can cause a series of operational instabilities, failures, and financial losses. Real-time intrusion detection has become a major challenge for the power community and energy stakeholders. Current conventional methods have continued to exhibit shortfalls in tackling these security issues. To address this security issue, this paper proposes a hybrid Support Vector Machine and Multilayer Perceptron Neural Network (SVMNN) algorithm that combines Support Vector Machine (SVM) and multilayer perceptron neural network (MLPNN) algorithms for predicting and detecting cyber intrusion attacks on power system networks. In this paper, a modified version of the IEEE Garver 6-bus test system and a 24-bus system were used as case studies. The IEEE Garver 6-bus test system was used to describe the attack scenarios, whereas load flow analysis was conducted on real-time data of a modified Nigerian 24-bus system to generate the bus voltage dataset that considered several cyberattack events for the hybrid algorithm. Using various performance metrics and load/generator injections (included in the manuscript), the simulation results showed the relevant influences of cyberattacks on power systems in terms of voltage, power, and current flows. To demonstrate the performance of the proposed hybrid SVMNN algorithm, the results are compared with other models in related studies. The results demonstrated that the hybrid algorithm achieved a detection accuracy of 99.6%, which is better than recently proposed schemes.
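
    A minimal sketch of one way such a hybrid could be assembled with scikit-learn is shown below; the paper does not spell out its fusion rule here, so soft voting over an SVM and an MLP is purely an illustrative assumption, and the data are synthetic stand-ins for the bus-voltage dataset.

```python
# Hedged sketch: combine an SVM and an MLP by soft voting for intrusion
# detection on bus-voltage features. Illustrative only, not the paper's SVMNN.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# stand-in data: rows = operating snapshots, columns = bus-voltage features,
# label = 1 for a cyberattack event, 0 for normal operation
X, y = make_classification(n_samples=2000, n_features=24, weights=[0.7, 0.3], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
mlp = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500))
hybrid = VotingClassifier([("svm", svm), ("mlp", mlp)], voting="soft")

hybrid.fit(X_tr, y_tr)
print("detection accuracy:", hybrid.score(X_te, y_te))
```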

    Computational and experimental study for the desalination of petrochemical industrial effluents using direct contact membrane distillation

    Abstract: The petrochemical, mining and power industries have reacted to the recent South African water crisis by focussing on improved brine treatment for water and salt recovery, with the aim of achieving zero liquid effluent discharge. The purpose of this novel study was to compare experimentally obtained results from the treatment of synthetic NaCl solutions and petrochemical industrial brines, such as spent ion exchange regenerant brines and reverse osmosis (RO) brines, with the classical, well-known Knudsen diffusion, molecular diffusion and transition predictive models. The predictive models were numerically solved using a developed mathematical algorithm coded in MATLAB® software. The impact of experimentally varying the inlet feed temperature on the process performance of the system is presented here and compared to simulated results. Good agreement was found between the experimentally obtained results and the model predictions, for both the synthetic NaCl solution and the industrial brines. The mean absolute percentage error (MAPE) was found to be 7.9% for the synthetic NaCl solutions when compared to the Knudsen model. The Knudsen/molecular diffusion transition theoretical model best predicted the performance of the membrane for the industrial spent ion exchange regenerant brine, with a MAPE of 13.3%. The Knudsen model best predicted the performance of the membrane (MAPE of 10.5%) for the industrial RO brine. Overall, the models were able to successfully predict the water flux and can be used as potential process design tools.
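
    For reference, the error metric quoted above can be computed as in the short sketch below; the flux values are invented placeholders, not data from the study.

```python
# Small sketch of the mean absolute percentage error (MAPE) used to compare
# model predictions with measured permeate flux. Placeholder values only.
import numpy as np

def mape(measured, predicted):
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    return 100.0 * np.mean(np.abs((measured - predicted) / measured))

measured_flux  = [12.1, 15.4, 18.9, 22.7]   # e.g. flux at rising feed temperatures
predicted_flux = [11.2, 14.6, 17.5, 24.1]   # e.g. Knudsen-model predictions
print(f"MAPE = {mape(measured_flux, predicted_flux):.1f}%")
```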

    Connect2NFT : a web-based, blockchain enabled NFT application with the aim of reducing fraud and ensuring authenticated social, non-human verified digital identity

    As of 2022, non-fungible tokens, or NFTs, the smart contract powered tokens that represent ownership of a specific digital asset, have become a popular investment vehicle. In 2021, NFT trading reached USD 17.6 billion and entered mainstream media, with several celebrities and major companies launching tokens within the space. The rapid rise in popularity of NFTs has brought with it a number of risks and concerns, two of which will be discussed and addressed in this technical paper. Data storage of the underlying digital asset connected to an NFT is, in most cases, held off-chain and is therefore out of the NFT holder’s control. This issue will be discussed and addressed using a theoretical workflow, developed and presented for a system that converges NFTs and verifiable credentials, with the aim of storing underlying NFT digital assets in a decentralized manner. The second issue focuses on the rise of NFT infringements and fraud within the overall NFT space. This will be discussed and addressed through the development of a practical application, named “Connect2NFT”. The main functionality of this practical application will enable users to connect their Twitter social media accounts to the NFTs they own, ensuring that potential buyers or viewers of an NFT can conclusively establish who its authentic owner is. An individual performance analysis of the proposed solution will be conducted, in addition to comparing and evaluating it against similar applications. Thorough development, implementation, and testing have been performed in order to establish a practical solution that can be tested and applied to current NFT use cases. The theoretical NFT storage solution is a minor but equally important contribution in comparison. https://www.mdpi.com/journal/mathematics am2023 Electrical, Electronic and Computer Engineering
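
    The on-chain primitive behind this kind of ownership check is the ERC-721 ownerOf call. The sketch below is a hedged illustration using web3.py, not Connect2NFT's actual code; the RPC URL, contract address and token id are placeholders.

```python
# Hedged illustration: read ERC-721 ownerOf for a token and compare it with a
# claimed wallet address. Placeholder endpoints; not the application's code path.
from web3 import Web3

ERC721_ABI = [{
    "name": "ownerOf", "type": "function", "stateMutability": "view",
    "inputs": [{"name": "tokenId", "type": "uint256"}],
    "outputs": [{"name": "", "type": "address"}],
}]

def owns_token(rpc_url, contract_addr, token_id, claimed_wallet):
    w3 = Web3(Web3.HTTPProvider(rpc_url))
    nft = w3.eth.contract(address=Web3.to_checksum_address(contract_addr), abi=ERC721_ABI)
    owner = nft.functions.ownerOf(token_id).call()   # current on-chain owner
    return owner.lower() == claimed_wallet.lower()

# usage (placeholder values):
# owns_token("https://rpc.example.org", "0x...", 42, "0x...")
```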

    Long short term memory water quality predictive model discrepancy mitigation through genetic algorithm optimisation and ensemble modeling

    A predictive long short-term memory (LSTM) model developed on a particular water quality dataset will only apply to that dataset and may fail to make accurate predictions on another dataset. This paper focuses on improving LSTM model tolerance by mitigating discrepancies in model prediction capability that arise when a model is applied to different datasets. Two predictive LSTM models are developed from the water quality datasets, Baffle and Burnett, and are optimised using the metaheuristic genetic algorithm (GA) to create hybrid GA-optimised LSTM models, which are subsequently combined with a linear weight-based technique to develop a tolerant predictive ensemble model. The models successfully predict river water quality in terms of dissolved oxygen concentration. After GA optimisation, the RMSE values of the Baffle and Burnett models decrease by 42.42% and 10.71%, respectively. Furthermore, two ensemble models are developed from the GA-hybrid models, namely the average ensemble and the optimal weighted ensemble. The GA-Baffle RMSE values decrease by 5.05% for the average ensemble and 6.06% for the weighted ensemble, and the GA-Burnett RMSE values decrease by 7.84% and 8.82%, respectively. When tested on unseen and unrelated datasets, the models make accurate predictions, indicating the applicability of the models in domains outside the water sector. The consistent and similar performance of the models on any dataset illustrates the successful mitigation of discrepancies in the predictive capacity of individual LSTM models by the proposed ensemble scheme. The observed model performance highlights the datasets on which the models could potentially make accurate predictions. Funded in part by the Department of Science and Innovation-Council for Scientific and Industrial Research (DSI-CSIR) Inter-bursary Support Programme, and in part by the National Research Foundation (NRF), South Africa. https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=6287639 Electrical, Electronic and Computer Engineering
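
    The linear weight-based combination can be pictured with the short sketch below, which chooses the mixing weight by a grid search that minimises validation RMSE; the grid search is an illustrative stand-in for the paper's GA-optimised procedure, and the numbers are placeholders.

```python
# Hedged sketch of a weighted ensemble of two models' dissolved-oxygen
# predictions. Weight selection here is a simple grid search, for illustration.
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def best_weight(y_val, pred_a, pred_b):
    weights = np.linspace(0.0, 1.0, 101)
    scores = [rmse(y_val, w * np.asarray(pred_a) + (1 - w) * np.asarray(pred_b)) for w in weights]
    return float(weights[int(np.argmin(scores))])

# placeholder predictions from two trained LSTM models on a validation split
y_val  = np.array([7.9, 8.1, 8.4, 7.6, 7.2])
pred_a = np.array([7.7, 8.3, 8.2, 7.8, 7.1])   # e.g. one GA-optimised model
pred_b = np.array([8.2, 8.0, 8.6, 7.4, 7.5])   # e.g. the other GA-optimised model

w = best_weight(y_val, pred_a, pred_b)
ensemble = w * pred_a + (1 - w) * pred_b
print(f"w = {w:.2f}, average RMSE = {rmse(y_val, (pred_a + pred_b) / 2):.3f}, "
      f"weighted RMSE = {rmse(y_val, ensemble):.3f}")
```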

    An updated survey on the convergence of distributed ledger technology and artificial intelligence : current state, major challenges and future direction

    In recent times, Artificial Intelligence (AI) and Distributed Ledger Technology (DLT) have become two of the most discussed sectors in Information Technology, with each having made a major impact. This has created space for further innovation in the convergence of the two technologies. In this paper, we gather, analyse, and present a detailed review of the convergence of AI and DLT in both directions. We review how AI impacts DLT by focusing on AI-based consensus algorithms, smart contract security, selfish mining, decentralized coordination, DLT fairness, non-fungible tokens, decentralized finance, decentralized exchanges, decentralized autonomous organizations, and blockchain oracles. In terms of the impact DLT has on AI, the areas covered include AI data privacy, explainable AI, smart contract-based AIs, parachains, decentralized neural networks, the Internet of Things, 5G technology, and data markets and sharing. Furthermore, we identify research gaps and discuss open research challenges in developing future directions. https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=6287639 Electrical, Electronic and Computer Engineering

    A fuzzy-logic based adaptive data rate scheme for energy-efficient LoRaWAN communication

    Long Range Wide Area Network (LoRaWAN) technology is rapidly expanding as a technology offering long-distance connectivity, low power consumption and low data rates to a large number of end devices (EDs) that connect to the Internet of Things (IoT) network. Due to the heterogeneity of several applications with varying Quality of Service (QoS) requirements, energy is expended as the EDs communicate with applications. The LoRaWAN Adaptive Data Rate (ADR) manages resource allocation to optimize energy efficiency. The performance of the ADR algorithm gradually deteriorates in dense networks, and efforts have been made in various studies to improve the algorithm’s performance. In this paper, we propose a fuzzy-logic based adaptive data rate (FL-ADR) scheme for energy-efficient LoRaWAN communication. The scheme is implemented on the network server (NS), which receives sensor data from the EDs via the gateway (GW) node and computes network parameters (such as the spreading factor and transmission power) to optimize the energy consumption of the EDs in the network. The performance of the algorithm is evaluated in ns-3 using a multi-gateway LoRa network with EDs sending data packets at various intervals. Our simulation results are analyzed and compared to the traditional ADR and the ns-3 ADR. The proposed FL-ADR outperforms both the traditional ADR algorithm and the ns-3 ADR, minimizing the interference rate and energy consumption. Funded in part by Telkom SA. https://www.mdpi.com/journal/jsan am2023 Electrical, Electronic and Computer Engineering
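
    As a toy picture of the fuzzy-logic step on the network server, the sketch below fuzzifies a single link-quality input with triangular membership functions, applies three rules, and defuzzifies to a spreading factor. The membership shapes, rule base and parameter ranges are assumptions made for illustration and are not the FL-ADR design.

```python
# Hedged toy sketch of fuzzy-logic rate adaptation: fuzzify an SNR margin,
# apply simple rules, defuzzify to a spreading factor. Illustrative only.

def tri(x, a, b, c):
    # triangular membership function peaking at b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fl_adr_step(snr_margin_db):
    # fuzzify: how "poor", "fair" and "good" the link margin is
    poor = tri(snr_margin_db, -20.0, -10.0, 0.0)
    fair = tri(snr_margin_db, -5.0, 5.0, 15.0)
    good = tri(snr_margin_db, 10.0, 20.0, 30.0)
    # rules: poor -> SF12 (robust), fair -> SF9, good -> SF7 (fast, low airtime)
    num = poor * 12 + fair * 9 + good * 7
    den = (poor + fair + good) or 1.0
    sf = round(num / den)                    # centroid-style defuzzification
    return min(max(sf, 7), 12)

for margin in (-12.0, 4.0, 22.0):
    print(margin, "dB -> SF", fl_adr_step(margin))
```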

    Blockchain-Enabled Vaccination Registration and Verification System in Healthcare Management

    Client-server-based healthcare systems cannot handle high data volumes and suffer from a single point of failure, limited scalability, and weak data integrity. In particular, several measures introduced to help curb the spread of Covid-19 were not effective, and patient records were not adequately managed and maintained. Most proof-of-vaccination certificates were forged by unauthorized parties, and no standard verification medium exists. Therefore, this paper proposes a blockchain-enabled vaccination management system (VMS). VMS utilizes smart contracts to store encrypted patient records, generate vaccination certificates, and verify the legitimacy of a certificate using a QR code. A VMS prototype is implemented on Ethereum, a public blockchain, and simulations are performed with Apache JMeter and Hyperledger Caliper to assess its performance in terms of throughput, latency, response time, and average time per transaction. Results show that VMS achieved an average response time of 132.24 ms, a throughput of 379.89 tps, a latency of 204.60 ms, and a transaction time of 10-12 s for 1000 transactions. A comparison with a centralized database shows that the traditional database is effective in transaction processing but lacks the data privacy and security strengths of the blockchain approach. We therefore recommend the use of blockchain in the healthcare system and in other related sectors, such as elections and student records management, to ensure data privacy and security and to rid the system of a single point of failure.
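
    The certificate-verification flow can be illustrated with the short sketch below: fingerprint a vaccination record, treat the digest as what would be anchored on-chain, and let a verifier recompute and compare it from the QR payload. The field names and flow are illustrative assumptions, not the VMS implementation.

```python
# Hedged sketch of certificate verification: hash a vaccination record, publish
# the digest (on-chain in the real system), and verify by recomputation.
import hashlib, json

def record_fingerprint(record: dict) -> str:
    # canonical JSON so the same record always hashes to the same digest
    payload = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode()).hexdigest()

record = {"patient_id": "P-0001", "vaccine": "X", "dose": 2, "date": "2022-03-01"}
digest = record_fingerprint(record)          # this digest would be stored on-chain
qr_payload = f"vms:cert:{digest}"            # string a QR code would carry

def verify(presented_record: dict, on_chain_digest: str) -> bool:
    return record_fingerprint(presented_record) == on_chain_digest

print(qr_payload)
print("valid:", verify(record, digest))
```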