
    Embers of Autoregression: Understanding Large Language Models Through the Problem They are Trained to Solve

    The widespread adoption of large language models (LLMs) makes it important to recognize their strengths and limitations. We argue that in order to develop a holistic understanding of these systems we need to consider the problem that they were trained to solve: next-word prediction over Internet text. By recognizing the pressures that this task exerts we can make predictions about the strategies that LLMs will adopt, allowing us to reason about when they will succeed or fail. This approach - which we call the teleological approach - leads us to identify three factors that we hypothesize will influence LLM accuracy: the probability of the task to be performed, the probability of the target output, and the probability of the provided input. We predict that LLMs will achieve higher accuracy when these probabilities are high than when they are low - even in deterministic settings where probability should not matter. To test our predictions, we evaluate two LLMs (GPT-3.5 and GPT-4) on eleven tasks, and we find robust evidence that LLMs are influenced by probability in the ways that we have hypothesized. In many cases, the experiments reveal surprising failure modes. For instance, GPT-4's accuracy at decoding a simple cipher is 51% when the output is a high-probability word sequence but only 13% when it is low-probability. These results show that AI practitioners should be careful about using LLMs in low-probability situations. More broadly, we conclude that we should not evaluate LLMs as if they are humans but should instead treat them as a distinct type of system - one that has been shaped by its own particular set of pressures. Comment: 50 pages plus 11 pages of references and 23 pages of appendices
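To make the probe concrete, the sketch below shows how such a cipher-decoding test might be set up, pairing a high-probability target sentence with a low-probability scramble of the same words. The rot13 shift and the example sentences are assumptions for illustration only; the abstract says merely "a simple cipher".

```python
# Minimal sketch of the kind of cipher-decoding probe described above.
# Assumption: a rot13-style shift cipher is used purely for illustration.
import codecs

def encode_rot13(text: str) -> str:
    """Apply a rot13 shift so an LLM can be asked to decode it."""
    return codecs.encode(text, "rot13")

# High-probability target: a natural English sentence.
high_prob_target = "The old man sat quietly by the fire."
# Low-probability target: the same words scrambled into an unlikely order.
low_prob_target = "Fire the by quietly sat man old the."

for target in (high_prob_target, low_prob_target):
    ciphertext = encode_rot13(target)
    prompt = f"Decode the following rot13 text: {ciphertext}"
    # The prompt would be sent to the LLM; accuracy is then measured by
    # comparing the model's answer against `target`.
    print(prompt)
```

Because the decoding rule is deterministic, both targets are equally "decodable"; any accuracy gap between them reflects the output-probability effect the abstract describes.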

    On the security of embedded systems against side-channel attacks

    Side-Channel Analysis (SCA) represents a serious threat to the security of the millions of smart devices that form part of the so-called Internet of Things (IoT). At the same time, selecting the "right-fitting" cryptographic code for the IoT is a highly challenging task, given the tight resource constraints of most IoT devices and the variety of cryptographic algorithms available. An important criterion for assessing the suitability of a lightweight cipher implementation from the SCA point of view is the amount of energy leakage available to an adversary. In this thesis, the efficiency of a selection function commonly used in AES implementations is analyzed from the perspective of Correlation Power Analysis (CPA) attacks, focusing on the very common situation where the exact time of the sensitive processing is drowned in a large number of leakage points. For statistical attacks in particular, much of the existing literature develops the theory under the assumption that the exact sensitive time is known, and it cannot be directly applied when that assumption is relaxed; this aspect is especially relevant for simple Differential Power Analysis (DPA) in contrast with CPA. To deal with this issue, an improvement is proposed that makes the statistical attack a real alternative to simple DPA. Under the Hamming-weight power consumption model, simple DPA attacks are rewritten in terms of correlation coefficients between Boolean functions, exposing the S-box properties on which CPA attacks rely and showing that these properties run counter to the non-linearity and propagation criteria assumed for the earlier DPA. To this end, the study is illustrated by various attack experiments performed on several implementations of a lightweight AES cipher on a well-known educational microcontroller platform with an 8-bit processor architecture fabricated in a 350 nm CMOS technology. The side-channel attacks presented in this work were set up under idealized conditions and do not capture the full complexity of an attack performed in real-world settings, yet they show that certain implementation aspects can influence the leakage levels. In addition, practical improvements are proposed for specific contexts by exploring the relationship between the non-linearity of the studied selection function and the measured leakages, with the sole aim of bridging the gap between theory and practice. The results shed new light on the resilience of basic operations executed by common lightweight cipher implementations against CPA attacks
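Since the thesis centers on CPA under the Hamming-weight model, the following minimal sketch illustrates the core distinguisher step: for each key-byte guess, the Hamming weight of the hypothetical S-box output is correlated against the measured traces. The function name, array shapes, and S-box table are illustrative assumptions, not the thesis's actual code or data.

```python
# A minimal CPA sketch under the Hamming-weight leakage model.
import numpy as np

# Hamming weights of all byte values 0..255.
HW = np.array([bin(v).count("1") for v in range(256)])

def cpa_key_byte(traces: np.ndarray, plaintexts: np.ndarray, sbox: np.ndarray) -> int:
    """Return the key-byte guess whose predicted leakage correlates best.

    traces:     (n_traces, n_samples) measured power traces
    plaintexts: (n_traces,) plaintext byte fed into the targeted S-box
    sbox:       (256,) S-box lookup table (e.g. the AES S-box)
    """
    best_guess, best_corr = 0, -1.0
    for guess in range(256):
        # Hypothetical leakage: Hamming weight of the S-box output.
        hyp = HW[sbox[plaintexts ^ guess]].astype(float)
        # Pearson correlation of the hypothesis with every sample point,
        # keeping the best point since the sensitive time is unknown.
        hyp_c = hyp - hyp.mean()
        tr_c = traces - traces.mean(axis=0)
        num = tr_c.T @ hyp_c
        den = np.sqrt((tr_c ** 2).sum(axis=0) * (hyp_c ** 2).sum())
        corr = np.abs(num / den).max()
        if corr > best_corr:
            best_guess, best_corr = guess, corr
    return best_guess
```

Taking the maximum correlation over all sample points mirrors the situation the abstract highlights, where the exact moment of the sensitive processing is drowned among many leakage points.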

    A Review of the Teaching and Learning on Power Electronics Course

    In this review, we describe various kinds of problems and solutions related to teaching and learning in power electronics courses around the world. The method used was a literature study of journal articles and proceedings published by reputable international organizations. Thirty-nine papers were obtained using Boolean operators, according to the specified criteria. The problems generally identified were low student learning motivation, teaching approaches that are still teacher-centered, a broad curriculum scope, and the physical limitations of laboratory equipment. The solutions offered are very diverse, ranging from models, strategies, and methods to learning techniques supported by information and communication technology

    Static power analysis of cryptographic devices

    Side-channel attacks are proven to be efficient tools for attacking cryptographic devices. Dynamic power leakage has been used as a source for many well-known side-channel attack algorithms. As process technology size shrinks, the relative amount of static power consumption increases accordingly and reaches a significant level in sub-100-nm chips, potentially changing the nature of side-channel analysis based on power consumption. In this thesis, we demonstrate our work on side-channel attacks exploiting static power leakage. Our research interest is particularly focused on profiled attacks. Firstly, we present recent developments in static power analysis and provide our results to further support some of the conclusions in existing publications. We also give a description of the template attack we developed for static power analysis of block ciphers. This template attack uses new distinguishers previously applied in other data analysis fields. The results of our study are obtained using simulations in 45-nm and 65-nm CMOS environments, and demonstrate the viability of static-power-based template attacks. Secondly, we bring kernel density estimation into the scenario of static power analysis. We compare the performance of the kernel method and the conventional Gaussian distinguisher. Our experiments demonstrate that the static power leakage may not satisfy a multivariate Gaussian distribution, in which case the kernel method yields better attack outcomes. Thirdly, we perform template attacks on a masked S-box circuit using static and dynamic power leakage. We are the first to compare static power and dynamic power in the scenario of profiled attacks against masked devices. The attacks are shown to be successful, and by performing multiple attacks and adding Gaussian noise, we conclude that in the 45-nm environment, dynamic power analysis requires a high sampling rate for the oscilloscopes, while the results of static-power-based attacks are more sensitive to additive noise. Lastly, we attempt to combine static and dynamic power leakage in order to take advantage of both leakage sources. With the help of deep learning technology, we are able to propose more complex schemes for combining different leakage sources. Three combining schemes are proposed and evaluated using a masked S-box circuit simulated with a 45-nm library. The experimental results show that the hierarchical LSTM proposal performs best or close to best in all test cases
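The contrast between the Gaussian distinguisher and kernel density estimation can be sketched as follows, assuming univariate static-power samples per intermediate-value class; the names and data layout are illustrative only, not the thesis's implementation.

```python
# A minimal sketch contrasting a Gaussian template with a kernel-density
# distinguisher for static-power leakage. Profiling data and labels are
# placeholders for whatever intermediate value the attack targets.
import numpy as np
from scipy.stats import gaussian_kde, norm

def build_templates(profiling_leakage):
    """Build one Gaussian and one KDE model per intermediate-value class.

    profiling_leakage: dict mapping class label -> 1-D array of leakage samples
    """
    gauss, kde = {}, {}
    for label, samples in profiling_leakage.items():
        gauss[label] = (samples.mean(), samples.std(ddof=1))  # mean, std
        kde[label] = gaussian_kde(samples)                    # non-parametric
    return gauss, kde

def classify(x, gauss, kde):
    """Return the most likely class under each distinguisher for leakage x."""
    g_best = max(gauss, key=lambda c: norm.logpdf(x, *gauss[c]))
    k_best = max(kde, key=lambda c: kde[c].logpdf(x))
    return g_best, k_best
```

The design point is that `gaussian_kde` makes no normality assumption, which matches the motivation given above for preferring the kernel method when the static power leakage departs from a Gaussian shape.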

    Finger Vein Verification with a Convolutional Auto-encoder


    Proceedings of the 2021 Symposium on Information Theory and Signal Processing in the Benelux, May 20-21, TU Eindhoven


    Identification through Finger Bone Structure Biometrics


    Postmodernism of reaction: the decade of the 80s in Portugal

    The phenomenon of the "return to painting" of the 1980s is usually linked with emerging neo-conservative politics and a booming art market. Consequently, historiographical literature frequently presents this international trend as an "embarrassment" to art history, synonymous with the term "reactionary postmodernism" (Foster, 1983). However, analysis of the critical history of the phenomenon allows us to recognize the paradoxical role played by figurative painting in the theoretical debate regarding the exhaustion of modernist discourse. On the one hand, painting was accused of taking a reactionary position and attempting to return to pre-modernist ideals of representation. On the other hand, it seemed to break with those ideals through the hybridisation of painterly discourse and advances in the practices of appropriation and deconstruction. This dissertation focuses on the postmodern painterly strategies and critical discourse in the works of the artists associated with the "return to painting" phenomenon. The work brings together critical and historiographical analyses of the paintings associated with trends such as neo-expressionism, the transavantgarde, New Image Painting, and the return to painting in Portugal. These premises allow us not only to develop already asserted ideas, but also to distance the "return to painting" phenomenon from its pejorative image. At the same time, the work attempts to contribute to the discussion regarding the return to painting in Portugal, a phenomenon that vividly marked the local artistic scene yet remains mostly unexamined. The dissertation therefore aims to enrich the discussion regarding postmodern painting in Portugal