42 research outputs found

    Analysis and design of distributed antenna aided twin-layer femto- and macro-cell networks relying on fractional frequency-reuse

    Distributed Antenna Systems (DAS) and femtocells are capable of improving the attainable performance in the cell-edge area and in indoor residential areas, respectively. In order to achieve a high spectral efficiency, both the Distributed Antenna Elements (DAEs) and Femtocell Base Stations (FBSs) may have to reuse the spectrum of the macrocellular network. As a result, the performance of both outdoor macrocell users and indoor femtocell users suffers from Co-Channel Interference (CCI). Hence, in this paper, heterogeneous cellular networks are investigated, where the DAS-aided macrocells and femtocells co-exist within the same area.
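The CCI problem described in this abstract can be illustrated with a minimal downlink SINR calculation. This is a hypothetical sketch (the power values and noise floor are illustrative, not from the paper): with full spectrum reuse, every co-channel DAE/FBS transmitter adds to the interference sum, whereas fractional frequency reuse assigns cell-edge users orthogonal sub-bands and so removes those interferers from the sum.

```python
import math

def sinr_db(signal_mw, interferers_mw, noise_mw=1e-9):
    """Downlink SINR in dB for a user receiving `signal_mw` of useful power
    while the co-channel transmitters listed in `interferers_mw` interfere.
    Under fractional frequency reuse, interferers moved to an orthogonal
    sub-band are simply dropped from the list."""
    return 10 * math.log10(signal_mw / (sum(interferers_mw) + noise_mw))

# Full reuse: a femtocell reusing the macro band leaks 0.1 mW onto the user.
full_reuse = sinr_db(1.0, [0.1])
# Fractional reuse: the interferer is on an orthogonal sub-band.
fractional = sinr_db(1.0, [])
```

With these illustrative numbers the cell-edge user goes from roughly 10 dB under full reuse to a noise-limited SINR once the co-channel interferer is removed, which is the trade-off (interference vs. spectral efficiency) the paper analyzes.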

    How to Choose Interesting Points for Template Attacks?

    Template attacks are widely accepted to be the most powerful side-channel attacks from an information-theoretic point of view. Many papers have suggested a guideline for choosing interesting points for template attacks which remains unproven: one should choose only one interesting point per clock cycle. Many different methods of choosing interesting points have been introduced, yet it is still unclear which approach leads to the best classification performance for template attacks. In this paper, we comprehensively evaluate and compare the classification performance of template attacks under different methods of choosing interesting points. Evaluation results show that the classification performance of template attacks differs markedly depending on the method used; the CPA-based and the SOST-based methods lead to the best classification performance. Moreover, we find that some methods of choosing interesting points provide the same results in the same circumstance. Finally, we verify that the guideline for choosing interesting points for template attacks is correct by presenting a new way of conducting template attacks.
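The SOST-based point selection mentioned above, combined with the one-point-per-clock-cycle guideline, can be sketched as follows. This is a minimal illustration, not the paper's code; the clock period and the synthetic trace layout are assumptions for the example.

```python
import numpy as np

def sost(traces, labels):
    """Sum Of Squared pairwise T-differences per time sample.
    traces: (n_traces, n_samples) power measurements
    labels: (n_traces,) intermediate values used for profiling"""
    classes = np.unique(labels)
    stats = []
    for c in classes:
        t = traces[labels == c]
        # per-class mean and variance of the mean, per sample
        stats.append((t.mean(axis=0), t.var(axis=0) / len(t)))
    score = np.zeros(traces.shape[1])
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            mi, vi = stats[i]
            mj, vj = stats[j]
            score += (mi - mj) ** 2 / (vi + vj + 1e-12)
    return score

def pick_points(score, n_points, clock_period):
    """Apply the guideline: keep at most one interesting point per clock
    cycle (the highest-scoring sample in each cycle), then return the
    n_points best cycles' points in time order."""
    n_cycles = len(score) // clock_period
    best_in_cycle = [
        c * clock_period + np.argmax(score[c * clock_period:(c + 1) * clock_period])
        for c in range(n_cycles)
    ]
    best_in_cycle.sort(key=lambda idx: -score[idx])
    return sorted(best_in_cycle[:n_points])

# Synthetic example: 400 traces of 20 samples; sample 5 leaks the label.
rng = np.random.default_rng(0)
labels = rng.integers(0, 4, 400)
traces = rng.normal(0.0, 1.0, (400, 20))
traces[:, 5] += labels * 2.0
points = pick_points(sost(traces, labels), n_points=1, clock_period=10)
```

On this synthetic data the selection recovers sample 5, the only leaking point, as the interesting point of its clock cycle.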

    Towards Optimal Leakage Exploitation Rate in Template Attacks

    Under the assumption that one has a reference device identical or similar to the target device, and is thus capable of characterizing the power leakages of the target device, Template Attacks are widely accepted to be the most powerful side-channel attacks. However, whether Template Attacks are really optimal in terms of the leakage exploitation rate has remained unclear. In this paper, we give a negative answer to this crucial question by introducing a normalization process into classical Template Attacks. Specifically, our contributions are twofold. On the theoretical side, we prove that Normalized Template Attacks are better in terms of the leakage exploitation rate than Template Attacks; on the practical side, we evaluate the key-recovery efficiency of Normalized Template Attacks and Template Attacks in the same attacking scenario. Evaluation results show that, compared with Template Attacks, Normalized Template Attacks are more effective. We note that the computational price of the normalization process is extremely low, and thus it is very easy to implement in practice. Therefore, the normalization process should be integrated into Template Attacks as a necessary step, so that one can better understand the practical threats of Template Attacks.
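A classical Gaussian template attack with a pre-pended normalization step can be sketched as below. The per-trace standardization shown is a plausible, hypothetical instance of the normalization process; the paper's exact normalization may differ. In the normalized variant, `normalize` is applied to both the profiling and the attack traces before building and matching templates.

```python
import numpy as np

def normalize(traces):
    """Hypothetical normalization step: standardize each trace to zero mean
    and unit variance. Cheap (one pass over the trace), as noted above."""
    mu = traces.mean(axis=1, keepdims=True)
    sd = traces.std(axis=1, keepdims=True) + 1e-12
    return (traces - mu) / sd

def build_templates(traces, labels):
    """Classical Gaussian templates: a per-class mean vector plus a pooled
    covariance matrix estimated from the class-centered traces."""
    classes = np.unique(labels)
    means = {c: traces[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([traces[labels == c] - means[c] for c in classes])
    cov = np.atleast_2d(np.cov(centered.T))
    return means, np.linalg.inv(cov)

def classify(trace, means, inv_cov):
    """Return the class whose template gives the smallest Mahalanobis
    distance, i.e. the highest Gaussian likelihood."""
    return max(means, key=lambda c: -(trace - means[c]) @ inv_cov @ (trace - means[c]))

# Synthetic check: 200 traces, 3 samples, sample 0 carries the class signal.
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 200)
traces = rng.normal(0.0, 0.1, (200, 3))
traces[:, 0] += labels.astype(float)
means, inv_cov = build_templates(traces, labels)
```

Comparing key-recovery success with and without the `normalize` step on real traces is then the experiment the paper's evaluation performs.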

    Cryptosystems Resilient to Both Continual Key Leakages and Leakages from Hash Functions

    Yoneyama et al. introduced the Leaky Random Oracle Model (LROM for short) at ProvSec 2008 in order to discuss the security (or insecurity) of cryptographic schemes which use hash functions as building blocks when leakages from pairs of input and output of hash functions occur. Such leakages arise from various attacks caused by sloppy usage or implementation, and their results showed that these leakages may threaten the security of some cryptographic schemes. However, an important fact is that such attacks would leak not only pairs of input and output of hash functions, but also the secret key. LROM is therefore rather limited in the sense that it considers leakages from pairs of input and output of hash functions alone, rather than also taking into consideration possible leakages from the secret key. On the other hand, many other leakage models concentrate mainly on leakages from the secret key and ignore leakages from hash functions when a cryptographic scheme exploits hash functions. Examples show that these drawbacks of LROM and of the other leakage models may render insecure some schemes that are secure in each of the two kinds of leakage model separately. In this paper, we present a model augmenting both LROM and these leakage models, in which both the secret key and pairs of input and output of hash functions can be leaked. Furthermore, the secret key can be leaked continually during the whole life cycle of a cryptographic scheme. Hence, our new model is more universal and stronger than LROM and some existing leakage models (e.g. the only-computation-leaks model and the bounded-memory-leakage model). As an application example, we also present a public key encryption scheme which is provably IND-CCA secure in our new model.

    On the Impacts of Mathematical Realization over Practical Security of Leakage Resilient Cryptographic Schemes

    In the real world, in order to transform an abstract and generic cryptographic scheme into an actual physical implementation, one usually undergoes two processes: mathematical realization at the algorithmic level and physical realization at the implementation level. In the former process, the abstract and generic cryptographic scheme is transformed into an exact and specific mathematical scheme, while in the latter process the output of the mathematical realization is transformed into a physical cryptographic module that runs as a piece of software, hardware, or a combination of both. In the black-box model (i.e. the leakage-free setting), a cryptographic scheme can be mathematically realized without affecting its theoretical security as long as the mathematical components meet the required cryptographic properties. However, up to now, no previous work has formally shown whether one can mathematically realize a leakage resilient cryptographic scheme in the existing ways without affecting its practical security. Our results give a negative answer to this important question by introducing attacks against several kinds of mathematical realization of a practical leakage resilient cryptographic scheme. Our results show that there may exist a big gap between the theoretically tolerated leakage rate and the practically tolerated leakage rate of the same leakage resilient cryptographic scheme if the mathematical components in the mathematical realization are not provably secure in the leakage setting. Therefore, on one hand, we suggest that all (practical) leakage resilient cryptographic schemes should at least come with a mathematical realization for which practical security can be guaranteed. On the other hand, our results inspire cryptographers to design advanced leakage resilient cryptographic schemes whose practical security is independent of the specific details of their mathematical realization.

    Weak-Key Leakage Resilient Cryptography

    In traditional cryptography, the standard way of examining the security of a scheme is to analyze it in a black-box manner, which captures no side channel attacks; such attacks exploit various forms of unintended information leakage and do threaten the practical security of the scheme. One way to protect against such attacks is to extend the traditional models so as to capture them. Early models rely on the assumption that only computation leaks information and are incapable of capturing memory attacks such as cold boot attacks. Thus, Akavia et al. (TCC '09) formalized the general model of key-leakage attacks to cover them. However, most key-leakage attacks in reality tend to be weak key-leakage attacks, which can be viewed as a nonadaptive version of key-leakage attacks. Powerful as the adaptive attacks may be, the existing constructions of cryptographic schemes in the adaptive key-leakage attack model still have drawbacks: they are quite inefficient, or they can only tolerate a small amount of leakage. Therefore, we mainly consider models that cover weak key-leakage attacks and the corresponding constructions in them. We extend the transformation paradigm presented by Naor and Segev that transforms any chosen-plaintext secure public-key encryption (PKE) scheme into a chosen-plaintext weak key-leakage secure PKE scheme. Our extensions are twofold. Firstly, we extend the paradigm into chosen-ciphertext attack scenarios and prove that its properties still hold in these scenarios; we also give an instantiation based on the DDH assumption in this setting. Additionally, we extend the paradigm to cover more side channel attacks by considering different types of leakage functions. We further consider attacks which require that the secret key still have enough min-entropy after leaking, and prove that the original paradigm is still applicable in this case with chosen-ciphertext attacks. Attacks that require that the secret key be computationally infeasible to recover given the leakage information are taken into consideration as well, and we formalize the informal discussion by Naor and Segev (Crypto '09) on how to adapt the original paradigm to these new models.

    Enhancing methane sensing with NDIR technology: Current trends and future prospects

    This study presents an in-depth review of non-dispersive infrared (NDIR) sensors for methane detection, focusing on their principles of operation, performance characteristics, advanced signal processing techniques, multi-gas detection capabilities, and applications in various industries. NDIR sensors offer significant advantages in methane sensing, including high sensitivity, selectivity, and long-term stability. The underlying principles of NDIR sensors involve measuring the absorption of infrared radiation by the target gas molecules, leading to precise and reliable methane concentration measurements. Advanced signal processing techniques, such as single-frequency filtering and wavelet filtering algorithms, have been explored to improve sensor performance by reducing noise, enhancing the signal-to-noise ratio, and achieving more accurate results. In the context of multi-gas detection, NDIR sensors face challenges due to overlapping absorption spectra. However, various solutions, including narrow-band optical bandpass filters, gas filter correlation techniques, and machine learning algorithms, have been proposed to address these issues effectively. This study delves into specific applications of NDIR sensors in various industries, such as coal mines, wastewater treatment plants, and agriculture. In these settings, NDIR sensors have demonstrated their reliability, accuracy, and real-time monitoring capabilities, contributing to environmental protection, safety, and energy recovery. Furthermore, the anticipated future trends and developments in NDIR methane detection technology are explored, including increased miniaturization, integration with artificial intelligence, improvements in power efficiency, and the development of multi-gas NDIR sensors. These advancements are expected to further enhance the capabilities and widespread adoption of NDIR sensors in methane detection applications.
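The absorption principle underlying NDIR sensing can be sketched with the idealized Beer-Lambert law, which relates the transmitted intensity to the gas concentration. This is a simplified model for illustration only; the coefficient and path-length values below are hypothetical, and real NDIR sensors add zero/span calibration and temperature-pressure compensation on top of it.

```python
import math

def methane_concentration(I, I0, epsilon, path_length):
    """Invert the idealized Beer-Lambert law I = I0 * exp(-epsilon * c * L)
    for the gas concentration c.

    I            transmitted IR intensity with gas in the cell
    I0           reference intensity (no absorbing gas)
    epsilon      absorption coefficient at the 3.3 um methane band
    path_length  optical path length L of the gas cell
    """
    absorbance = -math.log(I / I0)     # natural-log absorbance
    return absorbance / (epsilon * path_length)

# Illustrative round trip: a known concentration produces an intensity,
# and inverting the law recovers that concentration.
c_true = 0.02
I = 1.0 * math.exp(-0.5 * c_true * 10.0)
c_est = methane_concentration(I, 1.0, epsilon=0.5, path_length=10.0)
```

The signal-processing techniques surveyed above (single-frequency and wavelet filtering) operate on `I` before this inversion, since noise in the measured intensity propagates directly into the concentration estimate.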