
    Timely Classification of Encrypted or Protocol-Obfuscated Internet Traffic Using Statistical Methods

    Internet traffic classification aims to identify the type of application or protocol that generated a particular packet or stream of packets on the network. Through traffic classification, Internet Service Providers (ISPs), governments, and network administrators can support basic functions and several solutions, including network management, advanced network monitoring, network auditing, and anomaly detection. Traffic classification is essential, as it ensures the Quality of Service (QoS) of the network and allows efficient resource planning. With the increase of encrypted or protocol-obfuscated traffic on the Internet and of multilayer data encapsulation, some classical classification methods have lost the interest of the scientific community. The limitations of traditional classification methods based on port numbers and payload inspection for classifying encrypted or obfuscated Internet traffic have led to significant research efforts focused on Machine Learning (ML) based classification approaches using statistical features from the transport layer. In an attempt to increase classification performance, ML strategies have gained interest from the scientific community and have shown promise for the future of traffic classification, especially for recognizing encrypted traffic. However, the ML approach also has its own limitations, as some of these methods have high computational resource consumption, which limits their application when classifying large traffic volumes or real-time flows. The limitations of ML have led to the investigation of alternative approaches, including feature-based procedures and statistical methods. In this sense, statistical analysis methods, such as distances and divergences, have been used to classify traffic in large flows and in real time. The main objective of a statistical distance is to differentiate flows and find a pattern in traffic characteristics through statistical properties that enable classification. Divergences are functional expressions, often related to information theory, which measure the degree of discrepancy between any two distributions. This thesis focuses on proposing a new methodological approach to classify encrypted or obfuscated Internet traffic based on statistical methods, enabling the evaluation of network traffic classification performance, including the use of computational resources in terms of CPU and memory. A set of traffic classifiers based on the Kullback-Leibler and Jensen-Shannon divergences and on the Euclidean, Hellinger, Bhattacharyya, and Wootters distances is proposed. The following are the four main contributions to the advancement of scientific knowledge reported in this thesis. First, an extensive literature review on the classification of encrypted and obfuscated Internet traffic was conducted. The results suggest that port-based and payload-based methods are becoming obsolete due to the increasing use of traffic encryption and multilayer data encapsulation. ML-based methods are also becoming limited due to their computational complexity. As an alternative, the Support Vector Machine (SVM), which is also an ML method, and the Kolmogorov-Smirnov and Chi-squared tests can be used as references for statistical classification. In parallel, the possibility of using statistical methods for Internet traffic classification has emerged in the literature, with the potential for good classification results without the need for large computational resources.
The potential statistical methods are the Euclidean, Hellinger, Bhattacharyya, and Wootters distances, as well as the Kullback-Leibler (KL) and Jensen-Shannon divergences. Second, we present a proposal and implementation of an SVM-based classifier for P2P multimedia traffic, comparing the results with the Kolmogorov-Smirnov (KS) and Chi-squared tests. The results suggest that SVM classification with a Linear kernel leads to better classification performance than the KS and Chi-squared tests, depending on the value assigned to the Self C parameter. The SVM method with a Linear kernel and suitable values for the Self C parameter may be a good choice for identifying encrypted P2P multimedia traffic on the Internet. Third, we present a proposal and implementation of two classifiers based on the KL divergence and the Euclidean distance, compared against SVM with a Linear kernel configured with the standard Self C parameter, which showed a reduced ability to classify flows based solely on packet sizes relative to the KL and Euclidean distance methods. The KL and Euclidean methods were able to classify all tested applications, particularly streaming and P2P, which in almost all cases they identified efficiently and with high accuracy, with reduced consumption of computational resources. Based on the obtained results, it can be concluded that the KL and Euclidean distance methods are an alternative to SVM, as these statistical approaches can operate in real time and do not require retraining every time a new type of traffic emerges. Fourth, we present a proposal and implementation of a set of classifiers for encrypted Internet traffic based on the Jensen-Shannon divergence and the Hellinger, Bhattacharyya, and Wootters distances, with their respective results compared to those obtained with methods based on the Euclidean distance, KL, KS, and Chi-squared. Additionally, we present a comparative qualitative analysis of the tested methods based on Kappa values and Receiver Operating Characteristic (ROC) curves. The results suggest average accuracy values above 90% for all statistical methods, classified as "almost perfect reliability" in terms of Kappa values, with the exception of KS. This result indicates that these methods are viable options for classifying encrypted Internet traffic, especially the Hellinger distance, which showed the best Kappa values compared to the other classifiers. We conclude that the considered statistical methods can be accurate and cost-effective in terms of computational resource consumption for classifying network traffic. Our approach was based on the classification of Internet network traffic, focusing on statistical distances and divergences. We have shown that it is possible to classify traffic and obtain good results with statistical methods, balancing classification performance and the use of computational resources in terms of CPU and memory. The validation of the proposal supports the argument of this thesis, which proposes statistical methods as a viable alternative for Internet traffic classification compared to methods based on port numbers, payload inspection, and ML.
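To illustrate the kind of computation these classifiers perform, the following minimal sketch (not the thesis code; the function names, the binning choice, and the smoothing constant are assumptions) compares the packet-size histograms of two flows with several of the distances and divergences named above.

```python
import numpy as np

def normalized_histogram(packet_sizes, bins=64, max_size=1500):
    """Empirical packet-size distribution of a flow, normalized to sum to 1."""
    hist, _ = np.histogram(packet_sizes, bins=bins, range=(0, max_size))
    hist = hist.astype(float) + 1e-12          # avoid zero bins in log/ratio terms
    return hist / hist.sum()

def kl_divergence(p, q):
    return float(np.sum(p * np.log(p / q)))

def js_divergence(p, q):
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

def hellinger(p, q):
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

def bhattacharyya(p, q):
    return float(-np.log(np.sum(np.sqrt(p * q))))

def euclidean(p, q):
    return float(np.linalg.norm(p - q))

# Usage: compare an observed flow against a known application profile and
# classify it as that application when the distance falls below a threshold.
profile = normalized_histogram([1450] * 80 + [120] * 20)   # e.g. streaming-like
flow    = normalized_histogram([1420] * 70 + [100] * 30)
print(hellinger(profile, flow), js_divergence(profile, flow))
```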
Thesis prepared at Instituto de Telecomunicações Delegação da Covilhã and at the Department of Computer Science of the University of Beira Interior, and submitted to the University of Beira Interior for discussion in public session to obtain the Ph.D. Degree in Computer Science and Engineering.
This work has been funded by Portuguese FCT/MCTES through national funds and, when applicable, co-funded by EU funds under the project UIDB/50008/2020, and by operation Centro-01-0145-FEDER-000019 - C4 - Centro de Competências em Cloud Computing, co-funded by the European Regional Development Fund (ERDF/FEDER) through the Programa Operacional Regional do Centro (Centro 2020). This work has also been funded by CAPES (Brazilian Federal Agency for Support and Evaluation of Graduate Education) within the Ministry of Education of Brazil under a scholarship supported by the International Cooperation Program CAPES/COFECUB Project 9090134/2013 at the University of Beira Interior.

    On the cyber security issues of the internet infrastructure

    The Internet network has received huge attention from the research community. At first glance, network optimization and scalability issues dominate the efforts of researchers and vendors. Many results have been obtained in the last decades: the Internet's architecture is optimized to be cheap, robust, and ubiquitous. In contrast, such a network has never been perfectly secure. Throughout its evolution, the security threats of the Internet have persisted as a transversal and endless topic. Nowadays, the Internet network hosts a multitude of mission-critical activities. Electronic voting systems and financial services are carried out through it. Governmental institutions and financial and business organizations depend on the performance and security of the Internet. This role confers a critical characterization on the Internet network. At the same time, the Internet network is a vector of malicious activities, like Denial of Service attacks; many reports of attacks can be found in both academic outcomes and daily news. In order to mitigate this wide range of issues, many research efforts have been carried out in the past decades; unfortunately, the complex architecture and the scale of the Internet make the evaluation and adoption of such proposals hard. In order to improve the security of the Internet, the research community can benefit from sharing real network data. Unfortunately, privacy and security concerns inhibit the release of these data: it suffices to imagine the large amount of private information (e.g., political preferences or religious beliefs) one can obtain by reading the Internet packets exchanged between users and web services. This scenario motivates my research and represents the context of this dissertation, which contributes to the analysis of the security issues of the Internet infrastructure and describes relevant security proposals. In particular, the main outcomes described in this dissertation are: • the definition of a secure routing protocol for the Internet network able to provide cryptographic guarantees against false route announcements and invalid path attacks; • the definition of a new obfuscation technique that allows the research community to publicly release their real network flows with formal guarantees of security and privacy; • the evidence of a new kind of leakage of sensitive information obtained by attacking the models used by sundry Machine Learning algorithms

    Performance Evaluation of Network Anomaly Detection Systems

    Nowadays, there is a huge and growing concern about security in information and communication technology (ICT) among the scientific community, because any attack or anomaly in the network can greatly affect many domains, such as national security, private data storage, social welfare, and economic issues. Therefore, anomaly detection is a broad research area, and many different techniques and approaches for this purpose have emerged through the years. Attacks, problems, and internal failures, when not detected early, may badly harm an entire network system. Thus, this thesis presents an autonomous profile-based anomaly detection system based on the statistical method Principal Component Analysis (PCADS-AD). This approach creates a network profile, called Digital Signature of Network Segment using Flow Analysis (DSNSF), that denotes the predicted normal behavior of network traffic activity through historical data analysis. That digital signature is used as a threshold for volume anomaly detection, to detect disparities in the normal traffic trend. The proposed system uses seven traffic flow attributes: bits, packets, and number of flows to detect problems, and source and destination IP addresses and ports to provide the network administrator with the information necessary to solve them. Via evaluation techniques, the addition of a different anomaly detection approach, and comparisons to other methods performed in this thesis using real network traffic data, the results showed good traffic prediction by the DSNSF and encouraging false alarm generation and detection accuracy in the detection scheme. The observed results seek to contribute to the advance of the state of the art in methods and strategies for anomaly detection, aiming to surpass some challenges that emerge from the constant growth in complexity, speed, and size of today's large-scale networks; moreover, the low complexity and agility of the proposed system support its application to detection in real time.
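To make the baseline-plus-threshold idea concrete, here is a minimal sketch, assuming historical traffic arranged as a days-by-time-bins matrix; it illustrates a PCA-derived daily signature and a volume-deviation check, not the thesis implementation, and the function names, component count, and 30% tolerance are assumptions.

```python
import numpy as np

def dsnsf_baseline(history, n_components=2, recent=7):
    """history: (days, bins) matrix of one traffic attribute, e.g. bits/s."""
    mean = history.mean(axis=0)
    centered = history - mean
    # Principal traffic patterns via SVD of the centered history.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    top = vt[:n_components]
    # Denoise the most recent days by projecting onto the top components,
    # then average them into one predicted daily profile (the signature).
    recent_proj = (centered[-recent:] @ top.T) @ top
    return mean + recent_proj.mean(axis=0)

def detect_volume_anomalies(today, signature, tolerance=0.3):
    """Flag time bins deviating more than `tolerance` from the signature."""
    deviation = np.abs(today - signature) / np.maximum(signature, 1e-9)
    return np.where(deviation > tolerance)[0]

# Usage with synthetic data: 30 days of 5-minute bins, one injected anomaly.
rng = np.random.default_rng(0)
history = rng.poisson(1000, size=(30, 288)).astype(float)
signature = dsnsf_baseline(history)
today = rng.poisson(1000, size=288).astype(float)
today[100:110] *= 3                       # injected volume anomaly
print(detect_volume_anomalies(today, signature))
```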

    Tapjacking Threats and Mitigation Techniques for Android Applications

    With the increased dependency on web applications through mobile devices, malicious attack techniques have now shifted from traditional web applications running on desktops or laptops (allowing mouse-click-based interactions) to mobile applications running on mobile devices (allowing touch-based interactions). Clickjacking is a type of malicious attack originating in web applications, where victims are lured into clicking on seemingly benign objects in web pages. However, when clicked, unintended actions are performed without the user's knowledge. In particular, users are lured into touching an object of an application, triggering actions not actually intended by the victims. This new form of clickjacking on mobile devices is called tapjacking. There is little research that thoroughly investigates tapjacking attacks and mitigation techniques on mobile devices. In this thesis, we identify coding practices that can help software practitioners avoid malicious attacks, and we define a detection technique to protect end users from the consequences of such attacks. We first determine where the tapjacking attack type falls within the broader literature on malware, in particular Android malware. In this direction, we propose a classification of Android malware. Then, we propose a novel technique based on the Kullback-Leibler Divergence (KLD) to identify possible tapjacking behavior in applications. We validate the approach with a set of benign and malicious Android applications. We also implemented a prototype tool for detecting tapjacking attack symptoms using the KLD-based measurement. The evaluation results show that tapjacking can be detected effectively with KLD
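As an illustration of the idea (not the authors' prototype), the sketch below compares the distribution of UI event features observed in an app against a benign baseline using KLD and flags the app when the divergence exceeds a threshold; the feature choice, the baseline, and the threshold value are assumptions.

```python
import math
from collections import Counter

def kld(p, q, eps=1e-9):
    """KL divergence between two discrete distributions given as dicts."""
    keys = set(p) | set(q)
    return sum(
        p.get(k, eps) * math.log(p.get(k, eps) / q.get(k, eps)) for k in keys
    )

def to_distribution(events):
    counts = Counter(events)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

# Hypothetical event streams: view types touched during a session.
benign  = to_distribution(["button", "list", "text", "button", "list"])
suspect = to_distribution(["overlay", "overlay", "button", "overlay"])

THRESHOLD = 0.5   # would be tuned on labeled benign/malicious samples
if kld(suspect, benign) > THRESHOLD:
    print("possible tapjacking behavior")
```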

    A precise bare simulation approach to the minimization of some distances. Foundations

    In information theory -- as well as in the adjacent fields of statistics, machine learning, artificial intelligence, signal processing and pattern recognition -- many flexibilizations of the omnipresent Kullback-Leibler information distance (relative entropy) and of the closely related Shannon entropy have become frequently used tools. The main goal of this paper is to tackle the corresponding constrained minimization (respectively, maximization) problems by a newly developed dimension-free bare (pure) simulation method. Almost no assumptions (like convexity) on the set of constraints are needed within our discrete setup of arbitrary dimension, and our method is precise (i.e., it converges in the limit). As a side effect, we also derive an innovative way of constructing new useful distances/divergences. To illustrate the core of our approach, we present numerous examples. The potential for widespread applicability is indicated, too; in particular, we deliver many recent references for uses of the involved distances/divergences and entropies in various research fields (which may also serve as an interdisciplinary interface)
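For reference, the two central quantities the paper flexibilizes, written in the discrete setting it works in:

```latex
% Kullback-Leibler information distance (relative entropy) of P from Q,
% and Shannon entropy of P, over a countable alphabet \mathcal{X}.
\[
  D(P \,\|\, Q) \;=\; \sum_{x \in \mathcal{X}} p(x) \,\log \frac{p(x)}{q(x)},
  \qquad
  H(P) \;=\; -\sum_{x \in \mathcal{X}} p(x) \,\log p(x).
\]
```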

    Network Traffic Analysis Using Stochastic Grammars

    Network traffic analysis is widely used to infer information from Internet traffic. This is possible even if the traffic is encrypted. Previous work uses traffic characteristics, such as port numbers, packet sizes, and frequency, without looking for more subtle patterns in the network traffic. In this work, we use stochastic grammars, namely hidden Markov models (HMMs) and probabilistic context-free grammars (PCFGs), as pattern recognition tools for traffic analysis. HMMs are widely used for pattern recognition and detection. We use an HMM inference approach. With inferred HMMs, we use confidence intervals (CIs) to detect whether a data sequence matches the HMM. To compare HMMs, we define a normalized Markov metric. A statistical test is used to determine model equivalence. Our metric systematically removes the least likely events from both HMMs until the remaining models are statistically equivalent; this defines the distance between the models. We extend the use of HMMs to PCFGs, which have more expressive power. We estimate PCFG production probabilities from data. A statistical test is used for detection. We present three applications of HMM and PCFG detection to network traffic analysis. First, we infer the presence of protocol tunneling through the Tor (The Onion Router) anonymization network. The Markov metric quantifies the similarity of network traffic HMMs in Tor to identify the protocol. It also measures communication noise in the Tor network. We use HMMs to detect centralized botnet traffic. We infer HMMs from botnet traffic data and detect botnet infections. Experimental results show that HMMs can accurately detect Zeus botnet traffic. To hide their locations better, newer botnets have P2P control structures. Hierarchical P2P botnets contain recursive and hierarchical patterns. We use PCFGs to detect P2P botnet traffic. Experimentation on real-world traffic data shows that PCFGs can accurately differentiate between P2P botnet traffic and normal Internet traffic
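A minimal sketch of the detection step, assuming the hmmlearn library rather than the authors' own inference code: fit an HMM to features of known traffic, then accept a new sequence only if its per-symbol log-likelihood falls inside a confidence interval built from the training sequences. The feature choice, model size, and interval construction are assumptions.

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
# Stand-in features (e.g. packet sizes); real work would use observed traffic.
train_seqs = [rng.normal(500, 50, size=(200, 1)) for _ in range(20)]

model = hmm.GaussianHMM(n_components=3, n_iter=50, random_state=0)
model.fit(np.vstack(train_seqs), lengths=[len(s) for s in train_seqs])

# Per-symbol log-likelihoods of the training sequences define the CI.
scores = np.array([model.score(s) / len(s) for s in train_seqs])
lo, hi = np.percentile(scores, [2.5, 97.5])

def matches(seq):
    """True if the sequence is consistent with the inferred HMM."""
    return lo <= model.score(seq) / len(seq) <= hi

print(matches(rng.normal(500, 50, size=(200, 1))))   # same traffic: True
print(matches(rng.normal(80, 10, size=(200, 1))))    # different traffic: False
```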

    Systematic Characterization of Power Side Channel Attacks for Residual and Added Vulnerabilities

    Power Side Channel Attacks have continued to be a major threat to cryptographic devices. Hence, it is useful for designers of cryptographic systems to systematically identify which types of power Side Channel Attacks their designs remain vulnerable to after implementation. It is also useful to determine which additional vulnerabilities they have exposed their devices to after implementing a countermeasure or a feature. The goal of this research is to develop a characterization of power side channel attacks on implementations of different encryption algorithms, to create metrics and methods to evaluate their residual vulnerabilities and added vulnerabilities. This research studies the characteristics that influence power side leakage, classifies them, and identifies both the residual vulnerabilities and the added vulnerabilities. Residual vulnerabilities are defined as the traits that leave the implementation of the algorithm still vulnerable to power Side Channel Attacks (SCA), sometimes despite the designers' attempt at implementing countermeasures. Added vulnerabilities to power SCA are defined as vulnerabilities created or enhanced by the algorithm implementations and/or modifications. The three buckets into which we categorize the encryption algorithm implementations are: i. countermeasures against power side channel attacks; ii. the impact of the IC power delivery network on power leakage (including voltage regulators); iii. lightweight ciphers and applications for the Internet of Things (IoT). From the characterization of masking countermeasures, one example outcome is that masking schemes, even when uniformly distributed random masks are used, are still vulnerable to collision power attacks. Another example outcome is that masked AES, when glitches occur, is still vulnerable to Differential Power Analysis (DPA). We have developed a characterization of power side-channel attacks on the hardware implementations of different symmetric encryption algorithms to provide a detailed analysis of the effectiveness of state-of-the-art countermeasures against local and remote power side-channel attacks. The characterization is accomplished by studying the attributes that influence power side-channel leaks, classifying them, and identifying both residual vulnerabilities and added vulnerabilities. The evaluated countermeasures include masking, hiding, and power delivery network scrambling. However, vulnerability to DPA depends largely on the quality of the leaked power signal, which is affected by the characteristics of the device's power delivery network. Countermeasures and deterrents to power side-channel attacks targeting the alteration or scrambling of the power delivery network have been shown to be effective against local attacks, where the malicious agent has physical access to the target system. However, remote attacks that capture the leaked information from within the IC power grid are shown herein to be nonetheless effective at uncovering the secret key in the presence of these countermeasures/deterrents. Theoretical studies and experimental analysis are carried out to define and quantify the impact of integrated voltage regulators, voltage noise injection, and the integration of on-package decoupling capacitors for both remote and local attacks. One outcome of these studies is that the use of an integrated voltage regulator as a countermeasure is effective against a local attack.
However, remote attacks are still effective and hence break the integrated voltage regulator countermeasure. From experimental analysis, it is observed that, within the range of practical design values, the adoption of on-package decoupling capacitors provides only a 1.3x increase in the minimum number of traces required to discover the secret key. However, the injection of noise into the IC power delivery network yields a 37x increase in the minimum number of traces required to discover the secret key. Thus, increasing the number of on-package decoupling capacitors or the impedance between the local probing site and the IC power grid should not be relied on as a countermeasure to remote power side-channel attacks. Noise injection should be considered, as it is more effective at scrambling the leaked signal and eliminating sensitive identifying information. However, the analysis and experiments carried out herein apply to regular symmetric ciphers, which are not suitable for protecting Internet of Things (IoT) devices. The protection of communications between IoT devices is of great concern because the information exchanged contains vital sensitive data. Malicious agents seek to exploit those data to extract secret information about the owners or the system. Power side channel attacks are of great concern for these devices because their power consumption unintentionally leaks information correlatable to the device's secret data. Several studies have demonstrated the effectiveness of authenticated encryption with associated data (AEAD) in protecting communications with these devices. In this research, we propose a comprehensive evaluation of the ten algorithm finalists of the National Institute of Standards and Technology (NIST) IoT lightweight cipher competition. The study shows that some nonetheless still present residual vulnerabilities to power side channel attacks (SCA). For five ciphers, we propose an attack methodology as well as the leakage function needed to perform correlation power analysis (CPA). We assert that the security vulnerability of Ascon, Sparkle, and PHOTON-Beetle can generally be assessed under the security assumptions "chosen-ciphertext attack with leakage in encryption only, with nonce-misuse-resilient adversary" (CCAmL1) and "chosen-ciphertext attack with leakage in encryption only, with nonce-respecting adversary" (CCAL1), respectively. However, the security vulnerability of GIFT-COFB, Grain, Romulus, and TinyJambu can be evaluated more straightforwardly with publicly available leakage models and solvers. They can also be assessed simply by increasing the number of traces collected to launch the attack
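To make the CPA step concrete, the following sketch recovers a key byte from simulated traces by correlating predicted leakage with measured power. The random permutation stands in for a real cipher S-box, and the Hamming-weight leakage model and noise level are assumptions, so this illustrates the generic technique rather than an attack on any of the named ciphers.

```python
import numpy as np

rng = np.random.default_rng(1)
SBOX = rng.permutation(256)                    # stand-in for a cipher's S-box
HW = np.array([bin(x).count("1") for x in range(256)])

SECRET_KEY = 0x5A
N_TRACES = 2000
plaintexts = rng.integers(0, 256, N_TRACES)

# Simulated power: Hamming weight of the S-box output plus Gaussian noise.
traces = HW[SBOX[plaintexts ^ SECRET_KEY]] + rng.normal(0, 2.0, N_TRACES)

def cpa_recover_key(plaintexts, traces):
    """For each key guess, correlate predicted leakage with the traces."""
    best_key, best_corr = 0, 0.0
    for guess in range(256):
        predicted = HW[SBOX[plaintexts ^ guess]].astype(float)
        corr = abs(np.corrcoef(predicted, traces)[0, 1])   # Pearson correlation
        if corr > best_corr:
            best_key, best_corr = guess, corr
    return best_key, best_corr

key, corr = cpa_recover_key(plaintexts, traces)
print(f"recovered key byte: {key:#04x} (corr={corr:.2f})")   # expect 0x5a
```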