
    Security of HyperLogLog (HLL) Cardinality Estimation: Vulnerabilities and Protection

    Count distinct or cardinality estimates are widely used in network monitoring for security. They can be used, for example, to detect malware spread, network scans, or denial of service attacks. There are many algorithms to estimate cardinality. Among them, HyperLogLog (HLL) has been one of the most widely adopted. HLL is simple, provides good cardinality estimates over a wide range of values, requires a small amount of memory, and allows merging of estimates from different sources. However, as HLL is increasingly used to detect attacks, it can itself become the target of attackers that want to avoid detection. To the best of our knowledge, the security of HLL has not been studied before. In this letter, we take an initial step in its study by first exposing a vulnerability of HLL that allows an attacker to manipulate its estimate. This shows the importance of designing secure HLL implementations. In the second part of the letter, we propose an efficient protection technique to detect and avoid HLL manipulation. The results presented strongly suggest that the security of HLL should be further studied, given that it is widely adopted in many networking and computing applications.
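
    To make the estimator concrete, the following is a minimal HyperLogLog sketch in Python. All names and parameters are our own illustrative choices, not the letter's, and the small- and large-range bias corrections of the full algorithm are omitted.

        import hashlib

        B = 10                                  # index bits; m = 2^B registers
        M_REGS = 1 << B
        ALPHA = 0.7213 / (1 + 1.079 / M_REGS)   # standard bias constant (m >= 128)
        regs = [0] * M_REGS

        def _hash64(item: str) -> int:
            # 64-bit hash derived from SHA-1 (an assumption; any good hash works)
            return int.from_bytes(hashlib.sha1(item.encode()).digest()[:8], "big")

        def add(item: str) -> None:
            h = _hash64(item)
            j = h >> (64 - B)                     # first B bits select a register
            w = h & ((1 << (64 - B)) - 1)         # remaining bits
            rho = (64 - B) - w.bit_length() + 1   # position of the leftmost 1-bit
            regs[j] = max(regs[j], rho)           # registers only ever grow

        def estimate() -> float:
            return ALPHA * M_REGS * M_REGS / sum(2.0 ** -r for r in regs)

    Note that each register keeps only a running maximum, so a single inserted item can raise a register permanently; this property hints at why estimate manipulation of the kind studied in the letter is possible at all.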

    Breaking Cuckoo Hash: Black Box Attacks

    Introduced less than twenty years ago, cuckoo hashing has a number of attractive features, such as a constant worst-case number of memory accesses for queries and close to full memory utilization. Cuckoo hashing has been widely adopted to perform exact matching of an incoming key with a set of stored (key, value) pairs in both software and hardware implementations. This widespread adoption makes it important to consider the security of cuckoo hashing. Most hash-based data structures can be attacked by generating collisions that reduce their performance. In fact, for cuckoo hashing, collisions can lead to insertion failures, which in some systems would cause a system failure. For example, if cuckoo hashing is used to perform Ethernet lookup and a given MAC address cannot be added to the cuckoo hash, the switch would not be able to correctly forward frames to that address. Previous works have shown that this can be done when the attacker knows the hash functions used in the implementation. However, in many cases the attacker would not have that information and would only have access to the cuckoo hash operations to perform insertions, removals, or queries. This article considers the security of a cuckoo hash against an attacker that has only black-box access to it. The analysis shows that by carefully performing user operations on the cuckoo hash, the attacker can force insertion failures with a small set of elements. The proposed attack has been implemented and tested for different configurations to demonstrate its feasibility. The fact that a cuckoo hash can be broken with access only to its user functions should be taken into account when implementing it in critical systems. The article also discusses potential approaches to mitigate this vulnerability. This work was supported by the ACHILLES project (PID2019-104207RB-I00) and the Go2Edge network (RED2018-102585-T) funded by the Spanish Ministry of Science and Innovation and by the Madrid Community project TAPIR-CM (P2018/TCS-4496).
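
    A minimal sketch of cuckoo insertion in Python shows where insertion failures come from. The table sizes, seeds, and kick budget below are illustrative assumptions, not the configurations attacked in the article.

        CAPACITY = 16           # buckets per table (illustrative size)
        MAX_KICKS = 32          # displacement budget before declaring failure
        tables = [[None] * CAPACITY, [None] * CAPACITY]
        seeds = [0x9E3779B9, 0x85EBCA6B]     # arbitrary per-table hash seeds

        def _h(i: int, key: str) -> int:
            return (hash(key) ^ seeds[i]) % CAPACITY

        def insert(key: str, value) -> bool:
            entry, i = (key, value), 0
            for _ in range(MAX_KICKS):
                slot = _h(i, entry[0])
                if tables[i][slot] is None:
                    tables[i][slot] = entry
                    return True
                # Evict the resident entry and try it in the other table
                tables[i][slot], entry = entry, tables[i][slot]
                i ^= 1
            return False   # insertion failure: the event an attacker tries to force

    An attacker whose keys share candidate buckets can exhaust the kick budget; the article's point is that such sets can be found even without knowing the hash functions, using only the insert, remove, and query operations.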

    Selective Neuron Re-Computation (SNRC) for Error-Tolerant Neural Networks

    Artificial Neural Networks (ANNs) are widely used to solve classification problems in many machine learning applications. When errors occur in the computational units of an ANN implementation, due to, for example, radiation effects, the result of an arithmetic operation can be changed and, therefore, the predicted classification class may be erroneously affected. This is not acceptable when ANNs are used in safety-critical applications, because an incorrect classification may result in a system failure. Existing error-tolerant techniques usually rely on physically replicating parts of the ANN implementation or incur a significant computation overhead. Therefore, efficient protection schemes are needed for ANNs that run on a processor in resource-limited platforms. A technique referred to as Selective Neuron Re-Computation (SNRC) is proposed in this paper. Based on the ANN structure and algorithmic properties, SNRC can identify the cases in which errors have no impact on the outcome; therefore, errors only need to be handled by re-computation when the classification result is detected as unreliable. Compared with existing temporal redundancy-based protection schemes, SNRC saves more than 60 percent of the re-computation overhead (more than 90 percent in many cases) while achieving complete error protection, as assessed over a wide range of datasets. Different activation functions are also evaluated. This research was supported by the National Science Foundation Grants CCF-1953961 and 1812467, by the ACHILLES project PID2019-104207RB-I00 and the Go2Edge network RED2018-102585-T funded by the Spanish Ministry of Science and Innovation, and by the Madrid Community research project TAPIR-CM P2018/TCS-4496.
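
    The selective idea can be sketched in a few lines of Python. The reliability test below (the margin between the two highest class scores) is our own stand-in for illustration; the paper derives its actual criterion from the ANN structure and algorithmic properties.

        import numpy as np

        def classify_selective(forward, x, margin_thr=0.05):
            # forward(x) -> vector of class scores (hypothetical callable)
            scores = forward(x)
            top2 = np.sort(scores)[-2:]
            if top2[1] - top2[0] >= margin_thr:
                return int(np.argmax(scores))    # confident: accept the first pass
            # Unreliable: re-compute and keep the result only if both runs agree
            scores2 = forward(x)
            a, b = int(np.argmax(scores)), int(np.argmax(scores2))
            return a if a == b else int(np.argmax(forward(x)))  # third run breaks ties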

    Codes for Limited Magnitude Error Correction in Multilevel Cell Memories

    Multilevel cell (MLC) memories have been advocated for increasing density at low cost in next-generation memories. However, storing several bits in a cell reduces the distance between levels; this reduced margin makes such memories more vulnerable to defective phenomena and parameter variations, leading to errors in the stored data. These errors are typically of limited magnitude, because the induced change causes the stored value to cross only a few of the level boundaries. To protect these memories from such errors and ensure that the stored data is not corrupted, Error Correction Codes (ECCs) are commonly used. However, most existing codes have been designed to protect memories in which each cell stores a single bit and are thus not efficient for protecting MLC memories. In this paper, an efficient scheme that can correct errors of magnitude up to 3 is presented and evaluated. The scheme is based on combining ECCs that are commonly used to protect traditional memories. In particular, Interleaved Parity (IP) bits and Single Error Correction and Double Adjacent Error Correction (SEC-DAEC) codes are utilized; both codes are combined in the proposed IP-DAEC scheme to efficiently provide a strong coding function for correction, thus exceeding the capabilities of most existing coding schemes for limited magnitude errors. The SEC-DAEC code is used to detect the cell in error and correct some bits, while the IP bits identify the remaining erroneous bits in the memory cell. The use of these simple codes results in an efficient implementation of the decoder compared to existing techniques, as shown by the evaluation results presented in this paper. The proposed scheme is also competitive in terms of the number of parity check bits and memory redundancy. Therefore, the proposed IP-DAEC scheme is a very efficient alternative for protecting MLC memories from limited magnitude errors. Pedro Reviriego was partially supported by the TEXEO project (TEC2016-80339-R) funded by the Spanish Research Plan and by the Madrid Community research project TAPIR-CM grant no. P2018/TCS-4496.
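
    As an illustration of the interleaved parity half of the scheme, the sketch below computes k interleaved parity bits over a data word in Python. The pairing with a SEC-DAEC code, which the paper uses to locate the cell in error, is omitted, and all names and sizes are our own.

        def interleaved_parity(word_bits, k=3):
            # Parity bit j covers the data bits at positions p with p % k == j,
            # so an error confined to k adjacent bits touches each parity at most once
            par = [0] * k
            for p, b in enumerate(word_bits):
                par[p % k] ^= b
            return par

        # Example: a 12-bit word and the 3 interleaved parity bits stored with it
        word = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
        print(interleaved_parity(word))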

    Performance analysis of Energy Efficient Ethernet on video streaming servers

    Current traffic growth trends foresee a steady increase of video streaming services and the subsequent development of the associated infrastructure to host and distribute such contents. One of the operational costs associated with this infrastructure is the power bill, so any mechanism that decreases it, also reducing the associated carbon footprint, is welcome. In this work we investigate the suitability of the recently standardized IEEE 802.3az Energy Efficient Ethernet (EEE) for traffic generated by video-streaming servers. The conclusion of the analysis is positive about the achievable energy savings, due to the inherent features of the traffic patterns of video-streaming servers, which help reduce the number of transitions between active and low-power modes in EEE. Part of the research leading to these results received funding from the European Community's Seventh Framework Programme (FP7-ICT-2009-5) under Grant agreement No. 258053 (MEDIEVAL project). Additionally, the authors would like to acknowledge the support of the CAM-UC3M Greencom Research Grant (code CCG10-UC3M/TIC-5624), the FIERRO Spanish project (TEC2010-12250-E) and the Google Research Award "New Protocol Semantics and Scheduling Primitives for Energy Efficiency: Burst Coalescing at the Link and Application Layers". De La Oliva, A.; Vargas Hernández, TR.; Guerri Cebollada, JC.; Alberto Hernandez, J.; Reviriego, P. (2012). Performance analysis of Energy Efficient Ethernet on video streaming servers. Computer Networks, 57(3), 599-608. https://doi.org/10.1016/j.comnet.2012.09.019
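
    A toy energy model makes the role of mode transitions explicit. The power figures and transition times below are illustrative assumptions, loosely inspired by commonly cited 10GBASE-T numbers, not measurements from the paper.

        def eee_energy(busy_time, idle_time, transitions,
                       p_active=1.0, p_sleep=0.1,
                       t_wake=4.48e-6, t_sleep=2.88e-6):
            # Each sleep/wake cycle burns (t_wake + t_sleep) seconds at active
            # power, so fewer transitions (as with smooth video-server traffic)
            # means more time genuinely spent in the low-power mode
            overhead = transitions * (t_wake + t_sleep)
            active = busy_time + overhead
            sleep = max(idle_time - overhead, 0.0)
            return p_active * active + p_sleep * sleep

        # One second at 20% load: 10 transitions vs 10,000 transitions
        print(eee_energy(0.2, 0.8, 10), eee_energy(0.2, 0.8, 10_000))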

    Latin and Greek in Computing: Ancient Words in a New World

    In computing, old words are reused by giving them new meanings. However, many people may not be aware of the origin of ancient words that have found new uses in computing. We advocate for basic computing science and engineering courses to cover the origins of the most common ancient words used in computing. We thank Mariano Arnal for his comments and suggestions on words and their origins. This work was supported in part by the ACHILLES project PID2019-104207RB-I00 funded by the Spanish Agencia Estatal de Investigación 10.13039/501100011033. (We can see that Greek mythology also influences researchers when naming their projects; in our case, we took the name from the famous character in Homer's Iliad.)

    Protecting Memories against Soft Errors: The Case for Customizable Error Correction Codes

    As technology scales, radiation-induced soft errors create more complex error patterns in memories, with a single particle corrupting several bits. This poses a challenge to the Error Correction Codes (ECCs) traditionally used to protect memories, which can correct only single-bit errors. During the last decade, a number of codes have been developed to correct the emerging error patterns, focusing initially on double adjacent errors and later on three-bit burst errors. However, as memory cells get smaller and smaller, the error patterns created by radiation will continue to change, and thus new codes will be needed. In addition, the memory layout and the technology used may also make some patterns more likely than others. For example, in some memories there may be elements that separate blocks of bits in a word, making errors that affect two blocks less likely. Finally, for a given memory, depending on the data stored, some error patterns may be more critical than others. For example, if numbers are stored in the memory, errors on the more significant bits usually have a larger impact. Therefore, for a given memory and application, to achieve optimal protection we would like a code that corrects a given set of patterns. This is not possible today, as there is a limited number of code choices available in terms of correctable error patterns and word lengths. However, most of the codes used to protect memories are linear block codes that have a regular structure and whose design can be automated. In this paper, we propose the automation of error correction code design for memory protection. To that end, we introduce a software tool that, given a word length and the error patterns that need to be corrected, produces a linear block code described by its parity check matrix and also the bit placement. The benefits of this automated design approach are illustrated with several case studies. Finally, the tool is made available so that designers can easily produce custom error correction codes for their specific needs. Jiaqiang Li and Liyi Xiao would like to acknowledge the support of the Fundamental Research Funds for the Central Universities (Grant No. HIT.KISTP.201404), the Harbin science and innovation research special fund (2015RAXXJ003), and the Special fund for the development of Shenzhen strategic emerging industries (JCYJ20150625142543456). Pedro Reviriego would like to acknowledge the support of the TEXEO project TEC2016-80339-R funded by the Spanish Ministry of Economy and Competitiveness and of the Madrid Community research project TAPIR-CM Grant No. P2018/TCS-4496.
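
    The core feasibility test behind such a tool is compact: a linear block code corrects a set of error patterns if and only if their syndromes are nonzero and pairwise distinct. The Python sketch below checks this over GF(2); the matrix search itself is not shown, and the Hamming(7,4) example used to exercise it is our own, not one of the paper's case studies.

        def syndrome(H, e):
            # Syndrome H * e^T over GF(2); H is a list of rows, e an error vector
            return tuple(sum(r & b for r, b in zip(row, e)) % 2 for row in H)

        def corrects(H, patterns):
            seen, zero = set(), (0,) * len(H)
            for e in patterns:
                s = syndrome(H, e)
                if s == zero or s in seen:   # pattern invisible or ambiguous
                    return False
                seen.add(s)
            return True

        H = [[1, 0, 1, 0, 1, 0, 1],          # Hamming(7,4) parity-check matrix
             [0, 1, 1, 0, 0, 1, 1],
             [0, 0, 0, 1, 1, 1, 1]]
        singles = [[int(j == i) for j in range(7)] for i in range(7)]
        print(corrects(H, singles))          # True: all single-bit errors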

    Round Trip Time (RTT) Delay in the Internet: Analysis and Trends

    Both capacity and latency are crucial performance metrics for the optimal operation of most networking services and applications, from online gaming to futuristic holographic-type communications. Networks worldwide have witnessed important breakthroughs in terms of capacity, including fibre deployment everywhere, new radio technologies and faster core networks. However, the impact of these capacity upgrades on end-to-end delay is not straightforward, as traffic has also grown exponentially. This article overviews the current status of end-to-end latency in different regions and continents worldwide and how far it is from the theoretical minimum baseline given by the speed of light propagation over an optical fibre. We observe that the trend in the last decade is toward latency reduction (in spite of the ever-increasing annual traffic growth), but there are still important differences between countries.
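
    The fibre propagation baseline the article compares against is easy to reproduce in Python. The refractive index and the example distance below are typical textbook values we assume for illustration, not the article's data.

        C_VACUUM_KM_S = 299_792.458
        FIBRE_INDEX = 1.468        # typical refractive index of silica fibre

        def min_rtt_ms(distance_km: float) -> float:
            # Theoretical minimum RTT over a fibre path of the given one-way
            # length, ignoring routing detours, queueing and processing delays
            v = C_VACUUM_KM_S / FIBRE_INDEX      # ~204,000 km/s in fibre
            return 2.0 * distance_km / v * 1000.0

        # Example: Madrid to New York is roughly 5,800 km great-circle,
        # giving a baseline of about 57 ms
        print(round(min_rtt_ms(5800), 1))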