2,968 research outputs found

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, owing to the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks.
    Comment: 46 pages, 22 figures
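    As a toy illustration of how reinforcement learning can be invoked for a wireless decision-making task, the sketch below applies an epsilon-greedy, bandit-style learner to channel selection in a hypothetical cognitive-radio setting; the channel count, reward model and hyperparameters are illustrative assumptions, not taken from the article.

```python
# Minimal sketch: epsilon-greedy bandit learning for channel selection in a
# toy cognitive-radio setting. Channel count, reward model and hyperparameters
# are illustrative assumptions, not values from the article.
import random

N_CHANNELS = 4             # hypothetical number of candidate channels
EPSILON, ALPHA = 0.1, 0.1  # exploration rate and learning rate
q = [0.0] * N_CHANNELS     # estimated value (success rate) of each channel

def transmit(channel: int) -> float:
    """Toy environment: channel 2 is usually idle, the others mostly busy."""
    idle_prob = 0.9 if channel == 2 else 0.2
    return 1.0 if random.random() < idle_prob else 0.0  # reward = successful transmission

for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < EPSILON:
        a = random.randrange(N_CHANNELS)
    else:
        a = max(range(N_CHANNELS), key=lambda c: q[c])
    r = transmit(a)
    q[a] += ALPHA * (r - q[a])  # incremental value update

print("learned channel preferences:", [round(v, 2) for v in q])
```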

    SHARKS: Smart Hacking Approaches for RisK Scanning in Internet-of-Things and Cyber-Physical Systems based on Machine Learning

    Cyber-physical systems (CPS) and Internet-of-Things (IoT) devices are increasingly being deployed across multiple functionalities, ranging from healthcare devices and wearables to critical infrastructures, e.g., nuclear power plants, autonomous vehicles, smart cities, and smart homes. These devices are inherently not secure across their comprehensive software, hardware, and network stacks, thus presenting a large attack surface that can be exploited by hackers. In this article, we present an innovative technique for detecting unknown system vulnerabilities, managing these vulnerabilities, and improving incident response when such vulnerabilities are exploited. The novelty of this approach lies in extracting intelligence from known real-world CPS/IoT attacks, representing them in the form of regular expressions, and employing machine learning (ML) techniques on this ensemble of regular expressions to generate new attack vectors and security vulnerabilities. Our results show that the approach successfully generates 10 new attack vectors and 122 new vulnerability exploits with the potential to compromise a CPS or IoT ecosystem. The ML methodology achieves an accuracy of 97.4% and enables us to predict these attacks efficiently with an 87.2% reduction in the search space. We demonstrate the application of our method to the hacking of the in-vehicle network of a connected car. To defend against the known attacks and possible novel exploits, we discuss a defense-in-depth mechanism for various classes of attacks and the classification of data targeted by such attacks. This defense mechanism optimizes the cost of security measures based on the sensitivity of the protected resource, thus incentivizing its adoption in real-world CPS/IoT by cybersecurity practitioners.
    Comment: This article has been accepted in IEEE Transactions on Emerging Topics in Computing. 17 pages, 12 figures, IEEE copyright
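    As a rough sketch of the flavor of this idea (not the authors' SHARKS pipeline), the snippet below encodes known attack chains as regex-like step strings, vectorizes them with character n-grams, and scores candidate sequences by their similarity to the known-attack set; the example attack strings and the decision threshold are illustrative assumptions.

```python
# Hedged sketch, not the SHARKS implementation: flag candidate attack chains
# that resemble known CPS/IoT attacks encoded as regex-like step strings.
# Attack strings and the 0.3 threshold are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_attacks = [
    "scan->fingerprint(ecu)->spoof(can_id)->inject(brake_cmd)",
    "scan->default_creds(camera)->pivot(lan)->exfiltrate(video)",
    "phish->implant(gateway)->replay(sensor_data)",
]
candidates = [
    "scan->fingerprint(ecu)->replay(sensor_data)",  # recombination of known steps
    "update_firmware->reboot->rotate_logs",         # benign-looking sequence
]

vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(3, 5))
known_vectors = vectorizer.fit_transform(known_attacks)
candidate_vectors = vectorizer.transform(candidates)

# Candidates close to the known-attack set are treated as plausible new vectors.
for cand, sims in zip(candidates, cosine_similarity(candidate_vectors, known_vectors)):
    verdict = "candidate attack vector" if sims.max() > 0.3 else "likely benign"
    print(f"{cand!r}: max similarity {sims.max():.2f} -> {verdict}")
```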

    Improve Quality of Service for the Internet of Things using Blockchain & Machine Learning Algorithms.

    The quality of service (QoS) parameters of IoT applications play a prominent role in determining the performance of an application. Considering the significance and popularity of IoT systems, the number of users and IoT devices can be expected to grow exponentially in the near future. Therefore, it is extremely important to improve the QoS provided by IoT applications to increase their adoption. The majority of IoT systems are characterized by their heterogeneous and diverse nature. It is challenging for these systems to provide high-quality access to all connecting devices with uninterrupted connectivity, and, given their heterogeneity, it is equally difficult to achieve good QoS parameters. Artificial-intelligence-based machine learning (ML) tools are considered promising for improving the QoS parameters of IoT applications. This research proposes a novel approach for enhancing QoS parameters in IoT using ML and Blockchain techniques. The IoT network with Blockchain technology is simulated using the NS2 simulator. Different QoS parameters such as delay, throughput, packet delivery ratio, and packet drop are analyzed. The obtained QoS values are classified using different ML models such as Naive Bayes (NB), Decision Tree (DT), and Ensemble learning techniques. Results show that the Ensemble classifier achieves the highest classification accuracy of 83.74% compared to the NB and DT classifiers.
    Open-access publication funded by the Consorcio de Bibliotecas Universitarias de Castilla y León (BUCLE), under Operational Programme 2014ES16RFOP009 FEDER 2014-2020 of Castilla y León, Action: 20007-CL - Apoyo Consorcio BUCLE
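    The classification stage described above can be sketched as follows: synthetic QoS samples (delay, throughput, packet delivery ratio, packet drop) are labelled as acceptable or degraded and then classified with NB, DT, and an ensemble model. The synthetic data, the labelling rule, and the choice of random forest as the ensemble are illustrative assumptions rather than the paper's simulation setup.

```python
# Illustrative sketch of classifying QoS measurements with NB, DT, and an
# ensemble (random forest). The synthetic data and the "acceptable QoS" rule
# are assumptions for demonstration, not the paper's NS2 simulation output.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(1, 200, n),    # delay (ms)
    rng.uniform(0.1, 10, n),   # throughput (Mbps)
    rng.uniform(0.5, 1.0, n),  # packet delivery ratio
    rng.uniform(0.0, 0.3, n),  # packet drop rate
])
# Hypothetical rule defining an "acceptable" QoS sample
y = ((X[:, 0] < 100) & (X[:, 2] > 0.85) & (X[:, 3] < 0.1)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, clf in [("Naive Bayes", GaussianNB()),
                  ("Decision Tree", DecisionTreeClassifier(random_state=0)),
                  ("Ensemble (RF)", RandomForestClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    print(f"{name}: accuracy {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```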

    Identification of User Behavioural Biometrics for Authentication using Keystroke Dynamics and Machine Learning

    This thesis focuses on effectively classifying the behavior of users accessing computing devices in order to authenticate them. The authentication is based on keystroke dynamics, which captures the users' behavioral biometrics and applies machine learning concepts to classify them. The users type a strong passcode ".tie5Roanl" to record their typing pattern. To confirm identity, anonymous data from 94 users were collected to carry out the research. Given the raw data, features were extracted from the attributes based on the button-press and action-timestamp events. The support vector machine classifier uses multi-class classification with a one-versus-one decision function to classify the different users. To reduce the classification error, it is essential to identify the important features in the raw data, so an efficient feature extraction algorithm has been developed to generate features from the attributes and obtain high classification performance. To handle the multi-class problem, the random forest classifier is used to identify the users effectively. In addition, mRMR feature selection has been applied to increase the classification performance metrics and to confirm the identity of the users based on the way they access computing devices. From the results, we conclude that device information and touch pressure contribute effectively to identifying each user. Among these, the features that contain device information are responsible for increasing the performance metrics of the system by adding a token-based authentication layer. Based upon the results, random forest yields better classification results for this dataset. The research will contribute significantly to the field of cyber-security by forming a robust authentication system using machine learning algorithms.
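    A minimal sketch of the keystroke-dynamics pipeline described above follows: hold (dwell) times and flight times are derived from per-key press/release timestamps and fed to a random forest classifier. The event format, the toy timings, and the two hypothetical users are illustrative assumptions, not the thesis dataset.

```python
# Minimal sketch: keystroke-dynamics features (hold and flight times) for a
# random forest classifier. Event format, timings and users are hypothetical,
# not taken from the thesis dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def timing_features(events):
    """events: list of (key, press_time, release_time) for one passcode entry."""
    holds = [release - press for _, press, release in events]  # dwell times
    flights = [events[i + 1][1] - events[i][2]                  # release-to-next-press gaps
               for i in range(len(events) - 1)]
    return np.array(holds + flights)

# Two hypothetical users typing a 4-key excerpt of the passcode
user_a = [[("t", 0.00, 0.09), ("i", 0.21, 0.29), ("e", 0.40, 0.47), ("5", 0.66, 0.75)]] * 10
user_b = [[("t", 0.00, 0.14), ("i", 0.33, 0.44), ("e", 0.60, 0.71), ("5", 0.95, 1.08)]] * 10

X = np.array([timing_features(sample) for sample in user_a + user_b])
y = np.array([0] * len(user_a) + [1] * len(user_b))

clf = RandomForestClassifier(random_state=0).fit(X, y)
probe = timing_features([("t", 0.00, 0.13), ("i", 0.31, 0.43), ("e", 0.58, 0.70), ("5", 0.93, 1.06)])
print("predicted user:", clf.predict([probe])[0])  # slower cadence resembles user 1
```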