
    Side Channel Leakage Analysis - Detection, Exploitation and Quantification

    Nearly twenty years ago, the discovery of side channel attacks warned the world that security is more than just a mathematical problem: serious consideration must be given to the implementation and its physical media. Today, ever-growing ubiquitous computing calls for security solutions that develop apace. Although physical security has attracted increasing public attention, side channel security remains a problem that is far from completely solved. An important question is how much expertise a side channel adversary requires; the essential interest is to explore whether detailed knowledge of the implementation and the leakage model is indispensable for a successful side channel attack. If such knowledge is not a prerequisite, attacks can be mounted even by inexperienced adversaries, and the threat from physical observables may therefore be underestimated. Another urgent question is how to secure a cryptographic system in the presence of unavoidable leakage. Although many countermeasures have been developed, their effectiveness awaits empirical verification, and side channel security needs to be evaluated systematically. The research in this dissertation focuses on two topics, leakage-model-independent side channel analysis and security evaluation, described from three perspectives: leakage detection, exploitation, and quantification. To free side channel analysis from the complicated procedure of leakage modeling, an observation-to-observation comparison approach is proposed. Several attacks presented in this work follow this approach; they exhibit efficient leakage detection and exploitation under various leakage models and implementations, and, more importantly, they no longer rely on, or even require, precise leakage modeling. For the security evaluation, a weak maximum likelihood approach is proposed. It quantifies the loss of full key security due to the presence of side channel leakage. A constructive algorithm is developed following this approach; it can be used by a security lab to measure leakage resilience, and by a side channel adversary to determine whether limited side channel information suffices for full key recovery at affordable expense.
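
    The abstract does not spell out the observation-to-observation method itself, so for orientation only, here is a minimal sketch of a standard model-free leakage detection baseline (a fixed-vs-random Welch's t-test); the trace arrays and the threshold value are illustrative assumptions, not the author's algorithm:

        import numpy as np

        def welch_t(traces_fixed, traces_random):
            """Per-sample Welch's t-statistic between two sets of power traces,
            each of shape (n_traces, n_samples)."""
            m1, m2 = traces_fixed.mean(axis=0), traces_random.mean(axis=0)
            v1, v2 = traces_fixed.var(axis=0, ddof=1), traces_random.var(axis=0, ddof=1)
            n1, n2 = len(traces_fixed), len(traces_random)
            return (m1 - m2) / np.sqrt(v1 / n1 + v2 / n2)

        # Convention: samples with |t| > 4.5 are flagged as leaking, without
        # assuming any leakage model for the device under test.
        # t = welch_t(traces_fixed, traces_random)
        # leaking = np.where(np.abs(t) > 4.5)[0]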

    Estimation and Modeling Problems in Parametric Audio Coding


    Entropy in Image Analysis III

    Image analysis can be applied to rich and assorted scenarios; the aim of this young research field is therefore not only to mimic the human vision system. Image analysis is one of the main methods by which computers make sense of data today, and it forms a body of knowledge that, thanks to artificial intelligence, they will eventually be able to manage in a fully unsupervised manner. The articles published in this book clearly point toward such a future.

    Speaker Recognition in Unconstrained Environments

    Speaker recognition is applied in smart home devices, interactive voice response systems, call centers, online banking and payment solutions, as well as in forensic scenarios. This dissertation is concerned with speaker recognition systems in unconstrained environments. Before this dissertation, research on making better decisions in unconstrained environments was insufficient. Aside from decision making, unconstrained environments imply two other subjects: security and privacy. Within the scope of this dissertation, these are treated as security against short-term replay attacks and as privacy preservation within state-of-the-art biometric voice comparators in light of a potential leak of biometric data. These research subjects are united in this dissertation to sustain good decision making under the uncertainty of varying signal quality, to strengthen security, and to preserve privacy.

    Conventionally, biometric comparators are trained to classify between mated and non-mated reference-probe pairs under idealistic conditions, but they are expected to operate well in the real world. However, the more the voice signal quality degrades, the more erroneous decisions are made; the severity of their impact depends on the requirements of the biometric application. In this dissertation, quality estimates are proposed and employed to make better decisions on average in a formalized way (a quantitative method), while the decision requirements of a specific biometric application remain unknown. Using the Bayesian decision framework, the specification of application-dependent decision requirements is formalized, outlining the operating points: the decision thresholds. The assessed quality conditions combine ambient and biometric noise, both of which occur in commercial as well as forensic application scenarios; dual-use (civil and governmental) technology is investigated. As it seems infeasible to train systems for every possible signal degradation, a small number of quality conditions is used. After examining the impact of degrading signal quality on biometric feature extraction, the extraction is assumed ideal in order to conduct a fair benchmark. This dissertation proposes and investigates methods for propagating information about quality to decision making. By employing quality estimates, a biometric system's output (comparison scores) is normalized so that each score encodes the least-favorable decision trade-off in its value. Application development is thereby segregated from requirement specification, and class discrimination and score calibration performance are improved across all decision requirements for real-world applications. In contrast to the ISO/IEC 19795-1:2006 standard on biometric performance (error rates), this dissertation is based on biometric inference for probabilistic decision making (subject to prior probabilities and cost terms), and it elaborates on the paradigm shift from requirements by error rates to requirements by beliefs in priors and costs. Binary decision error trade-off plots are proposed, interrelating error rates with prior and cost beliefs, i.e., formalized decision requirements. Verbal tags are introduced to summarize categories of least-favorable decisions; the plot's canvas follows from Bayesian decision theory, and empirical error rates are plotted, encoding categories of decision trade-offs by line styles. Performance is visualized in the latent decision subspace for evaluating empirical performance with respect to changes in prior- and cost-based decision requirements.

    Security against short-term audio replay attacks (a collage of sound units such as phonemes and syllables) is strengthened. The unit-selection attack was posed by the ASVspoof 2015 challenge (English speech data) and was the voice presentation attack of that challenge that proved most difficult to detect. In this dissertation, unit-selection attacks are created for German speech data, and support vector machine and Gaussian mixture model classifiers are trained to detect collage edges in speech representations based on wavelet and Fourier analyses. Competitive results are reached compared to the challenge submissions.

    Homomorphic encryption is proposed to preserve the privacy of biometric information in the case of database leakage. In this dissertation, log-likelihood ratio scores, which represent biometric evidence objectively, are computed in the latent biometric subspace. Whereas conventional comparators rely on the feature extraction to ideally represent biometric information, latent subspace comparators are trained to find ideal representations of the biometric information in the voice reference and probe samples to be compared. Two protocols are proposed for the two-covariance comparison model, a special case of probabilistic linear discriminant analysis. Log-likelihood ratio scores are computed in the encrypted domain based on encrypted representations of the biometric reference and probe. As a consequence, the biometric information conveyed in voice samples is, in contrast to many existing protection schemes, stored protected and without information loss. The first protocol preserves the privacy of end-users, requiring one public/private key pair per biometric application; the second protocol preserves the privacy of end-users and comparator vendors with two key pairs. Comparators estimate the biometric evidence in the latent subspace, so the subspace model requires data protection as well. In both protocols, log-likelihood-ratio-based decision making meets the requirements of the ISO/IEC 24745:2011 biometric information protection standard in terms of the unlinkability, irreversibility, and renewability of the protected voice data.
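
    As a minimal sketch of the Bayesian decision framework the abstract formalizes, the snippet below derives the log-likelihood-ratio decision threshold from prior and cost beliefs; the prior and cost values are hypothetical:

        import math

        def bayes_threshold(p_mated, cost_miss, cost_fa):
            """Log-LR threshold implied by Bayesian decision theory: decide
            'mated' iff llr > log((C_fa * P(non-mated)) / (C_miss * P(mated)))."""
            return math.log((cost_fa * (1.0 - p_mated)) / (cost_miss * p_mated))

        # Hypothetical requirements: mated trials are rare, false accepts are costly.
        theta = bayes_threshold(p_mated=0.01, cost_miss=1.0, cost_fa=10.0)

        def decide(llr):
            return "mated" if llr > theta else "non-mated"

    Changing the prior or cost beliefs moves the threshold, which is exactly the dependence that the proposed decision error trade-off plots visualize.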

    Security of Ubiquitous Computing Systems

    The chapters in this open access book arise out of the EU COST Action project Cryptacus, whose objective was to improve and adapt existing cryptanalysis methodologies and tools to the ubiquitous computing framework. The cryptanalysis lies along four axes: cryptographic models, cryptanalysis of building blocks, hardware and software security engineering, and security assessment of real-world systems. The authors are top-class researchers in security and cryptography, and the contributions are of value to researchers and practitioners in these domains. This book is open access under a CC BY license.

    Biometric Systems

    Because of the accelerating progress in biometrics research and the latest nation-state threats to security, this book's publication is not only timely but also much needed. The volume contains seventeen peer-reviewed chapters reporting the state of the art in biometrics research: security issues, signature verification, fingerprint identification, wrist vascular biometrics, ear detection, face detection and identification (including a new survey of face recognition), person re-identification, electrocardiogram (ECG) recognition, and several multi-modal systems. This book will be a valuable resource for graduate students, engineers, and researchers interested in understanding and investigating this important field of study.

    Feature extraction using random matrix theory

    Representing complex data in a concise and accurate way is a crucial stage in data mining methodology. Redundant and noisy data affect the generalization power of any classification algorithm, undermine the results of any clustering algorithm, and encumber the monitoring of large dynamic systems. This work provides several efficient approaches to all of the aforementioned sides of the analysis. We established that a notable difference can be made if results from the theory of ensembles of random matrices are employed. A particularly important result of our study is a family of methods based on projecting a data set onto different subsets of the correlation spectrum. Generally, we start with the traditional correlation matrix of a given data set. We perform a singular value decomposition and establish the boundaries between essential and unimportant eigencomponents of the spectrum. Then, depending on the nature of the problem at hand, we use either the former or the latter part for the projection. Projecting a spectrum of interest is a common technique in linear and non-linear spectral methods such as Principal Component Analysis, Independent Component Analysis, and Kernel Principal Component Analysis. Usually the part of the spectrum to project is defined by the amount of variance it explains in the data or, in the non-linear case, in the feature space. The applicability of these spectral methods is limited by the assumption that larger variance carries the important dynamics, i.e., that the data have a high signal-to-noise ratio. If this holds, projection onto the principal components targets two problems in data mining: reduction in the number of features and selection of the more important features. Our methodology does not assume a high signal-to-noise ratio; instead, using the rigorous instruments of Random Matrix Theory (RMT), it identifies the presence of noise and establishes its boundaries. Knowledge of the structure of the spectrum gives us the possibility to make more insightful projections. For instance, in the application to router network traffic, the reconstruction-error procedure for anomaly detection is based on projection onto the noisy part of the spectrum, whereas in the bioinformatics application of clustering different types of leukemia, implicit denoising of the correlation matrix is achieved by decomposing the spectrum into random and non-random parts. For temporal high-dimensional data, the spectrum and eigenvectors of the correlation matrix are another representation of the data. Thus eigenvalues, components of the eigenvectors, the inverse participation ratio of eigenvector components, and other operators of eigenanalysis are spectral features of the dynamic system. In our work we proposed to extract such spectral features using RMT. We demonstrated that the extracted spectral features allow us to monitor the changing dynamics of network traffic; experimenting with delayed correlation matrices of network traffic and extracting their spectral features, we visualized the delayed processes in the system. We demonstrated that a broad range of applications in feature extraction can benefit from this novel RMT-based approach to the spectral representation of data.
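
    A minimal sketch of the spectrum-splitting step described above, assuming standardized data and the pure-noise Marchenko-Pastur upper edge as the boundary (function and variable names are illustrative):

        import numpy as np

        def split_spectrum(X):
            """Split the correlation spectrum of X (T observations x N variables)
            into noise and signal parts via the Marchenko-Pastur upper bound."""
            T, N = X.shape
            Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize columns
            C = (Z.T @ Z) / T                          # empirical correlation matrix
            eigvals, eigvecs = np.linalg.eigh(C)       # ascending eigenvalues
            lam_max = (1.0 + np.sqrt(N / T)) ** 2      # MP edge for pure noise
            signal = eigvals > lam_max
            return eigvals, eigvecs, signal

        # Projecting onto the noise eigenvectors (eigvecs[:, ~signal]) supports the
        # reconstruction-error anomaly detection described above, while the signal
        # part supports the implicitly denoised clustering.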

    Towards understanding human locomotion

    The central question investigated in this thesis is how the complicated dynamics of human locomotion can be better understood. In the scientific literature, minimalistic "template" models are often used to describe locomotion (walking and running). These very simple models describe only a selected part of the dynamics, usually the trajectory of the center of mass. This thesis attempts to advance the understanding of locomotion by means of template models. Analyzing the center-of-mass motion with template models requires a precise experimental determination of the center-of-mass trajectory; to this end, Chapter 2.3 presents a new method that is particularly robust against the measurement errors typical of locomotion experiments. The most frequently used template models are the spring-mass model and the inverted pendulum, which are intended to describe the motion of the body's center of mass and neglect the torque about it. To describe the stabilization of body posture (and thus the angular-momentum balance), Section 3.3 introduces the "virtual pendulum" template model for human walking and compares it with experimental data. The discussion of possible realization mechanisms suggests that the upright posture of human gait was not a major mechanical hurdle in the course of evolution. The literature often tries to explain properties of the movement, such as stability, by properties of the template models, and this is done in modified form in the present thesis as well. First, an experimentally determined center-of-mass motion is transferred to the spring-mass model. Then the control law of the step-to-step adaptation of the model parameters is identified, and a suitable approximation is given in order to analyze the stability of the model equipped with this control law. A comparison with a direct determination of stability from a Floquet model shows good qualitative agreement. Both approaches lead to the result that, in slow human running, perturbations are largely eliminated within two steps. In summary, it was shown how template models can contribute to the understanding of locomotion. In particular, identifying the individual control law at the abstraction level of the spring-mass model opens new ways of controlling active prostheses or orthoses in a human-like manner and paves the way for transferring human running to robots.

    Human locomotion is part of our everyday life, yet its mechanisms are not well enough understood to be transferred into technical devices such as orthoses, prostheses, or humanoid robots. In biomechanics, minimalistic so-called template models are often used to describe locomotion. While these abstract models in principle offer a language to describe both human behavior and technical control input, the relation between human locomotion and the locomotion of these templates was long unclear. This thesis focuses on how human locomotion can be described and analyzed using template models. Human running is often described using the SLIP template. Here, it is shown that SLIP (possibly equipped with any controller) cannot show human-like disturbance reactions, and an appropriate extension of SLIP is proposed. Further, a new template to describe postural stabilization is proposed. Summarizing, this thesis shows how simple template models can be used to enhance the understanding of human gait.
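
    For orientation, a minimal sketch of the stance-phase dynamics of the SLIP (spring-loaded inverted pendulum) template mentioned above; the parameter values are hypothetical, and a plain Euler step is used only for brevity:

        import numpy as np

        # Point mass on a massless spring leg, foot fixed at the origin.
        # Hypothetical parameters: mass [kg], stiffness [N/m], rest length [m], gravity [m/s^2].
        m, k, l0, g = 80.0, 2.0e4, 1.0, 9.81

        def slip_step(state, dt=1e-4):
            x, y, vx, vy = state
            l = np.hypot(x, y)                  # current leg length
            f = k * (l0 - l)                    # spring force along the leg axis
            ax = f * x / (l * m)
            ay = f * y / (l * m) - g
            return np.array([x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt])

        # Integrate from touch-down until the leg regains its rest length (take-off).
        state = np.array([-0.2, np.sqrt(l0**2 - 0.2**2), 3.0, 0.0])
        state = slip_step(state)                # one step into leg compression
        while np.hypot(state[0], state[1]) < l0:
            state = slip_step(state)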

    Non-acyclicity of coset lattices and generation of finite groups
