11 research outputs found

    A Unification of Fog-Cloud Computing for Data Agitation and Guard Intensification in Industrial Health Care Security

    Fog computing provides a decentralized environment in which data processing, storage, and applications are distributed between edge devices and servers located in the cloud. With the growth of the Internet of Things (IoT) and remote storage, communication with the cloud has become more sophisticated, and data must be processed safely and securely. Healthcare data is particularly sensitive: it contains medical information, is processed centrally in a public environment through the IoT, and, with security breaches on the rise, must be protected by appropriate security mechanisms. Centralized data accessed in a virtual environment through remote protocols does not provide adequate safety for the healthcare industry. To resolve this problem, a Unification of fog-cloud computing for Data Agitation and Guard Intensification (DA-GI) in industrial healthcare security is proposed. Medical Data Health-Care (MDHC) records are stored in cloud datacenters and in the fog layer according to the guard intensity, and a key is generated for ingress to each file. Activity logs are controlled and monitored from the cloud server, which maintains the Activity Log, the Risk Table, and the Health Records. A cryptographic approach based on the Advanced Encryption Standard (AES) is introduced to protect data, together with an authenticity verification server. Key verification is performed at a security gateway, and the fog-cloud server grants access according to the access rights assigned to the user. During key validation, the user's role and permissions are verified before access to the requested service is allowed. The proposed system provides higher security than comparable systems across role management, authentication, and verification, enabling data to be processed safely.
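
    The abstract names AES and a gateway-side access-rights check but does not fix an AES mode or key-management details. The sketch below is a minimal illustration under those assumptions: AES-GCM (an assumed mode choice) encrypts a health record, and a hypothetical release_key helper with an invented role table stands in for the security gateway's permission check. It uses the third-party "cryptography" package and is not the authors' implementation.

```python
# Minimal sketch, not the paper's implementation: AES-GCM encryption of a health
# record plus a hypothetical gateway-side role check before the key is released.
# The mode (GCM), role table, and helper names are assumptions; requires the
# third-party "cryptography" package.
import os
from typing import Optional

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical access-rights table kept by the security gateway.
ACCESS_RIGHTS = {"doctor": {"read", "write"}, "nurse": {"read"}}


def encrypt_record(record: bytes, key: bytes) -> tuple:
    """Encrypt a medical record with AES-GCM; returns (nonce, ciphertext)."""
    nonce = os.urandom(12)  # 96-bit nonce, the recommended size for GCM
    ciphertext = AESGCM(key).encrypt(nonce, record, None)
    return nonce, ciphertext


def release_key(role: str, requested_right: str, key: bytes) -> Optional[bytes]:
    """Return the decryption key only if the role holds the requested right."""
    return key if requested_right in ACCESS_RIGHTS.get(role, set()) else None


key = AESGCM.generate_key(bit_length=256)
nonce, ct = encrypt_record(b"patient-42: blood pressure 120/80", key)
granted = release_key("nurse", "read", key)
if granted is not None:
    print(AESGCM(granted).decrypt(nonce, ct, None))
```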

    Machine learning for Internet of Things data analysis: A survey

    Rapid developments in hardware, software, and communication technologies have allowed the emergence of Internet-connected sensory devices that provide observations and data measurements from the physical world. By 2020, it is estimated that the total number of Internet-connected devices in use will be between 25 and 50 billion. As these numbers grow and technologies become more mature, the volume of data published will increase. The technology of Internet-connected devices, referred to as the Internet of Things (IoT), continues to extend the current Internet by providing connectivity and interaction between the physical and cyber worlds. In addition to increased volume, the IoT generates Big Data characterized by velocity in terms of time and location dependency, with a variety of multiple modalities and varying data quality. Intelligent processing and analysis of this Big Data are the key to developing smart IoT applications. This article assesses the different machine learning methods that deal with the challenges presented by IoT data by considering smart cities as the main use case. The key contribution of this study is the presentation of a taxonomy of machine learning algorithms explaining how different techniques are applied to the data in order to extract higher-level information. The potential and challenges of machine learning for IoT data analytics are also discussed. A use case of applying a Support Vector Machine (SVM) to Aarhus Smart City traffic data is presented for a more detailed exploration. (Comment: Digital Communications and Networks, 2017)
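
    As a rough illustration of the SVM use case mentioned above, the sketch below trains a scikit-learn SVM on synthetic two-feature traffic data; the features, labels, and hyperparameters are placeholders, not the Aarhus dataset or the authors' setup.

```python
# Illustrative sketch only: a scikit-learn SVM on synthetic traffic-flow features,
# in the spirit of the Aarhus smart-city use case. The features, labels, and
# hyperparameters are placeholders, not the actual dataset or the authors' setup.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical features per observation: [average vehicle count, average speed]
X = rng.normal(loc=[30.0, 50.0], scale=[10.0, 15.0], size=(500, 2))
y = ((X[:, 0] > 35) & (X[:, 1] < 45)).astype(int)  # 1 = congested, 0 = free-flowing

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```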

    An anomaly mitigation framework for IoT using fog computing

    The advancement of IoT has prompted its application in areas such as smart homes and smart cities, which has aided its exponential growth. However, alongside this development, IoT networks are experiencing a rise in security challenges such as botnet attacks, which often appear as network anomalies. Similarly, providing security solutions has been challenging because of the low resources that characterize the devices in IoT networks. To overcome these challenges, the fog computing paradigm provides an enabling environment that offers additional resources for deploying security solutions such as anomaly mitigation schemes. In this paper, we propose a hybrid anomaly mitigation framework for IoT using fog computing to ensure fast and accurate anomaly detection. The framework employs signature-based and anomaly-based detection methodologies for its two modules, respectively. The signature-based module uses a database of attack sources (blacklisted IP addresses) to ensure faster detection when attacks are executed from a blacklisted IP address, while the anomaly-based module uses an extreme gradient boosting algorithm for accurate classification of network traffic flows as normal or abnormal. We evaluated the performance of both modules using an IoT-based dataset, in terms of response time for the signature-based module and accuracy in binary and multiclass classification for the anomaly-based module. The results show that the signature-based module detects attacks at least six times faster than the anomaly-based module for every number of instances evaluated. The anomaly-based module, using the XGBoost classifier, detects attacks with an accuracy of 99% and at least 97% average recall, average precision, and average F1-score for binary and multiclass classification. Additionally, it recorded a false-positive rate of 0.05.
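
    The two-stage flow described above can be pictured with the hedged sketch below: a set-membership check against a hypothetical blacklist as the signature stage, followed by an XGBoost classifier trained on placeholder flow features as the anomaly stage. This is not the authors' code and the data is synthetic.

```python
# Sketch of the two-stage idea described above, not the authors' code: a fast
# blacklist lookup (signature stage) followed by an XGBoost classifier on
# placeholder flow features (anomaly stage). Requires the xgboost package.
import numpy as np
from xgboost import XGBClassifier

BLACKLIST = {"203.0.113.7", "198.51.100.23"}  # hypothetical attack-source database


def signature_check(src_ip: str) -> bool:
    """Stage 1: set lookup against known attack sources for a fast decision."""
    return src_ip in BLACKLIST


# Stage 2: train on synthetic numeric flow features (e.g. packet count,
# mean inter-arrival time, byte rate); the data here is purely illustrative.
rng = np.random.default_rng(1)
X = rng.random((1000, 3))
y = (X[:, 2] > 0.8).astype(int)  # toy label: 1 = anomalous flow
clf = XGBClassifier(n_estimators=100, max_depth=4)
clf.fit(X, y)


def classify_flow(src_ip: str, features: np.ndarray) -> str:
    if signature_check(src_ip):  # fast path: blacklisted source
        return "blocked (blacklisted source)"
    return "anomalous" if clf.predict(features.reshape(1, -1))[0] == 1 else "normal"


print(classify_flow("203.0.113.7", X[0]))
print(classify_flow("192.0.2.10", X[1]))
```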

    Enhanced non-parametric sequence learning scheme for internet of things sensory data in cloud infrastructure

    The Internet of Things (IoT) Cloud is an emerging technology that enables machine-to-machine, human-to-machine, and human-to-human interaction through the Internet. IoT sensor devices tend to generate sensory data known for its dynamic and heterogeneous nature, which makes the data difficult for the sensor devices themselves to manage given their limited computational power and storage space. However, Cloud Infrastructure as a Service (IaaS) compensates for the limitations of IoT devices by making its computational power and storage resources available for processing IoT sensory data. In IoT-Cloud IaaS, resource allocation is the process of distributing optimal resources to execute data request tasks that comprise data filtering operations. Recently, machine learning, non-heuristic, multi-objective, and hybrid algorithms have been applied for efficient resource allocation to execute IoT sensory data filtering request tasks in IoT-enabled Cloud IaaS. However, the filtering task is still prone to several challenges: global search entrapment in event and error outlier detection as the dimension of the dataset increases, the inability to recover missing data for effective redundant data elimination, and local search entrapment that leads to unbalanced workloads on the resources available for task execution. In this thesis, enhancements of the Non-Parametric Sequence Learning (NPSL), Perceptually Important Point (PIP), and Efficient Energy Resource Ranking-Virtual Machine Selection (ERVS) algorithms are proposed. The Non-Parametric Sequence-based Agglomerative Gaussian Mixture Model (NPSAGMM) technique is first used to improve the detection of event and error outliers in the global space as the dimension of the dataset increases. Then, the Perceptually Important Points K-means-enabled Cosine and Manhattan (PIP-KCM) technique is employed to recover missing data and thereby improve the elimination of duplicate sensed data records. Finally, an Efficient Resource Balance Ranking-based Glowworm Swarm Optimization (ERBV-GSO) technique is used to resolve local search entrapment in finding near-optimal solutions and to reduce workload imbalance on the resources available for task execution in the IoT-Cloud IaaS platform. Experiments were carried out using the NetworkX simulator, and the results of the NPSAGMM, PIP-KCM, and ERBV-GSO techniques were compared with the NPSL, PIP, ERVS, and Resource Fragmentation Aware (RF-Aware) algorithms. The experimental results showed that the proposed NPSAGMM, PIP-KCM, and ERBV-GSO techniques yielded performance improvements of 3.602%/6.74% in precision, 9.724%/8.77% in recall, and 5.350%/4.42% in area under the curve for the detection of event and error outliers. Furthermore, the results indicated a 94.273% F1-score, a 0.143 reduction ratio, and a minimum root mean squared error of 0.149% for redundant data elimination, as well as a minimum of 608 virtual machine migrations, 47.62% resource utilization, and a 41.13% load-balancing degree for the allocation of the resources deployed to execute sensory data filtering tasks. Therefore, the proposed techniques have proven effective at improving outlier (event and error) detection, redundant data elimination, and load-balanced allocation of the resources required to execute filtering tasks in the IoT-based Cloud IaaS infrastructure.
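
    As a small illustration of the Gaussian-mixture building block underlying the NPSAGMM step above (not the thesis's algorithm itself), the sketch below scores simulated sensor readings with a scikit-learn Gaussian mixture and flags low-likelihood points as candidate event/error outliers; the data and threshold are invented.

```python
# Generic sketch of Gaussian-mixture outlier scoring, the building block behind
# the NPSAGMM step above; this is not the thesis's algorithm, and the data and
# threshold are invented.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
normal_readings = rng.normal(loc=25.0, scale=1.0, size=(500, 1))  # e.g. temperature
outliers = np.array([[40.0], [-5.0]])                             # injected error/event values
readings = np.vstack([normal_readings, outliers])

gmm = GaussianMixture(n_components=2, random_state=0).fit(normal_readings)
log_density = gmm.score_samples(readings)  # low log-likelihood => outlier candidate
threshold = np.percentile(gmm.score_samples(normal_readings), 1)
flagged = readings[log_density < threshold]
print("flagged outliers:", flagged.ravel())
```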

    Octopus++: an enhanced mutual authentication security protocol and lightweight encryption and decryption algorithm based on DNA in fog computing

    The Internet of Things (IoT) envisions a world wherein everyday objects can connect to the Internet and exchange, analyse, store, and gather data from their environment and act on it efficiently. Fog computing, located closer to the IoT devices, handles data processing, filtering, aggregation, and storage. In a fog IoT network, one of the main challenges is security. Existing security solutions based on modern cryptographic algorithms are computationally complex, which slows the fog IoT network down. Therefore, operations in fog IoT must be lightweight as well as secure. The security considerations, including attacks (especially the man-in-the-middle (MitM) attack), challenges, requirements, and existing solutions, are analysed and reviewed in depth. Hence, omega network key generation based on deoxyribonucleic acid (ONDNA) is proposed, which provides lightweight encryption and decryption in fog computing. The security level of ONDNA is tested using the NIST test suite, and ONDNA passes all 17 recommended NIST tests. Next, we propose a modified security protocol based on ONDNA and a hash-based message authentication code with Secure Hash Algorithm 2. The modified protocol is denoted OCTOPUS++. We prove that OCTOPUS++ provides confidentiality, mutual authentication, and resistance to MitM attacks using the widely accepted Burrows-Abadi-Needham (BAN) logic. OCTOPUS++ is evaluated in terms of execution time: the average over 20 executions of OCTOPUS++ is 1.018917 milliseconds, compared with 2.444324, 20.1638, and 14.1152 milliseconds for Octopus, LAMAS, and Amor, respectively. The results show that OCTOPUS++ has a lower execution time than the other existing protocols.
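
    The HMAC-with-SHA-2 step mentioned above can be sketched with the Python standard library as below; the shared key is a random placeholder rather than an ONDNA-derived key, and the message format is invented.

```python
# Minimal sketch of the HMAC-with-SHA-2 step used by OCTOPUS++, built on the
# Python standard library; the shared key is a random placeholder rather than
# an ONDNA-derived key, and the message format is invented.
import hashlib
import hmac
import os

shared_key = os.urandom(32)  # stands in for the ONDNA-generated session key


def make_tag(key: bytes, message: bytes) -> bytes:
    """Compute an HMAC-SHA-256 authentication tag over the message."""
    return hmac.new(key, message, hashlib.sha256).digest()


def verify_tag(key: bytes, message: bytes, tag: bytes) -> bool:
    """Constant-time comparison, avoiding timing side channels."""
    return hmac.compare_digest(make_tag(key, message), tag)


msg = b"fog-node-7:sensor-reading:23.5C"
tag = make_tag(shared_key, msg)
print(verify_tag(shared_key, msg, tag))          # True
print(verify_tag(shared_key, b"tampered", tag))  # False
```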

    Security architecture for Fog-To-Cloud continuum system

    Nowadays, as the number of devices connected to the Internet increases rapidly, cloud computing cannot handle real-time processing on its own. Fog computing therefore emerged to provide data processing, filtering, aggregation, storage, networking, and computation closer to the users. Fog computing provides real-time processing with lower latency than the cloud. However, fog computing did not come to compete with the cloud; it comes to complement it. A hierarchical Fog-to-Cloud (F2C) continuum system was therefore introduced, bringing collaboration between distributed fogs and the centralized cloud. In F2C systems, one of the main challenges is security. The traditional cloud is not suitable as the security provider for an F2C system because it constitutes a single point of failure, and the increasing number of devices at the edge of the network also raises scalability issues. Furthermore, traditional cloud security cannot be applied to fog devices because of their lower computational power. On the other hand, treating fog nodes as security providers for the edge of the network raises Quality of Service (QoS) issues, since security algorithms consume a large share of a fog device's computational power. Some security solutions exist for fog computing, but they do not consider the hierarchical fog-to-cloud characteristics, which can lead to insecure collaboration between fog and cloud. In this thesis, the security considerations, attacks, challenges, requirements, and existing solutions are analysed and reviewed in depth. Finally, a decoupled security architecture is proposed to provide the required security in a hierarchical and distributed fashion with less impact on QoS.
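
    Purely as an illustration of the hierarchical, decoupled idea (the thesis does not prescribe this code), the sketch below shows a fog node that answers an authentication request from a local cache when it can and defers to a cloud-side registry otherwise; all names, tokens, and the caching policy are hypothetical.

```python
# Illustration only of the hierarchical, decoupled idea; the thesis does not
# prescribe this code. A fog node answers an authentication request from a local
# cache when possible and defers to a cloud-side registry otherwise. All names,
# tokens, and the caching policy are hypothetical.
CLOUD_REGISTRY = {"device-17": "token-A9", "device-42": "token-7F"}  # authoritative store


class FogSecurityNode:
    def __init__(self):
        self.local_cache = {}  # credentials already delegated to this fog node

    def authenticate(self, device_id: str, token: str) -> bool:
        # 1) try the local, low-latency decision first
        if self.local_cache.get(device_id) == token:
            return True
        # 2) fall back to the cloud layer, then cache the delegated credential
        if CLOUD_REGISTRY.get(device_id) == token:
            self.local_cache[device_id] = token
            return True
        return False


fog = FogSecurityNode()
print(fog.authenticate("device-17", "token-A9"))  # resolved via the cloud, then cached
print(fog.authenticate("device-17", "token-A9"))  # resolved locally on the repeat request
```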

    A security and privacy scheme based on node and message authentication and trust in fog-enabled VANET

    Security and privacy are the most important concerns in a vehicular ad hoc network (VANET), as it is an open-access and self-organized network. The presence of ‘selfish’ nodes distributed in the network is an important challenge and a security threat in VANETs. A selfish node is a legitimate vehicle node that tries to gain the most benefit from the network by broadcasting false information. An efficient and well-designed security model can help counter attackers as well as selfish nodes. In this study, a privacy-preserving node and message authentication scheme, along with a trust model, was developed. The proposed node authentication ensures the legitimacy of the vehicle nodes, whereas the message authentication ensures message integrity. To deal with selfish nodes, an experience-based trust model was also designed. Additionally, to preserve privacy, each vehicle was mapped to a different pseudo-identity. In this paper, fog nodes, rather than road-side units (RSUs), were distributed along the roadside, mainly because fog computing reduces latency and increases throughput. Security analysis indicated that our scheme meets the security requirements of VANETs. In addition, the performance analysis showed that the proposed scheme has lower communication and computation overhead compared to related works. Monte Carlo simulation was used to estimate the false-positive rate (FPR), which further confirmed the validity of the proposed security scheme.
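
    The general pattern of pseudonym-based message authentication combined with an experience-based trust score can be sketched as below; this is an illustrative stand-in, not the paper's exact scheme, using ECDSA from the "cryptography" package, with invented pseudo-identities, message format, and trust increments.

```python
# Illustrative stand-in, not the paper's exact scheme: a vehicle signs a beacon
# under a pseudo-identity with ECDSA, a fog node verifies it, and an
# experience-based trust score is adjusted. The pseudo-identities, message
# format, and trust increments are invented; requires the "cryptography" package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())  # bound to a pseudo-identity
public_key = private_key.public_key()
trust = {}  # experience-based trust scores per pseudo-identity

message = b"pseudo-id-0x3fa2|lat=55.68|lon=12.57|speed=43"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))


def verify_and_update(pseudo_id: str, msg: bytes, sig: bytes) -> bool:
    """Verify message integrity, then nudge the sender's trust score."""
    score = trust.get(pseudo_id, 0.5)  # neutral starting trust
    try:
        public_key.verify(sig, msg, ec.ECDSA(hashes.SHA256()))
        trust[pseudo_id] = min(1.0, score + 0.05)  # reward consistent behaviour
        return True
    except InvalidSignature:
        trust[pseudo_id] = max(0.0, score - 0.20)  # penalise bad messages
        return False


print(verify_and_update("pseudo-id-0x3fa2", message, signature), trust)
```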