
    EM Attack Is Non-Invasive? - Design Methodology and Validity Verification of EM Attack Sensor

    This paper presents a standard-cell-based, semi-automatic design methodology for a new conceptual countermeasure against electromagnetic (EM) analysis and fault-injection attacks. The countermeasure, an EM attack sensor, utilizes LC oscillators that detect variations in the EM field around a cryptographic LSI caused by a micro probe brought near the LSI. A dual-coil sensor architecture with LUT-programming-based digital calibration can prevent a variety of microprobe-based EM attacks that cannot be thwarted by conventional countermeasures. All components of the sensor core are semi-automatically designed by standard EDA tools with a fully digital standard-cell library, and hence at minimum design cost; the sensor can therefore be scaled together with the cryptographic LSI it protects. A sensor prototype was designed with the proposed methodology alongside a 128-bit-key composite AES processor in 0.18 µm CMOS, with overheads of only 1.9% in area, 7.6% in power, and 0.2% in performance. Its validity against a variety of EM attack scenarios has been verified successfully.
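
    The sensing principle can be illustrated with a small behavioral model. The Python sketch below is a rough illustration, not the paper's design: a micro probe brought near a coil perturbs its inductance, which shifts the LC oscillation frequency f = 1/(2*pi*sqrt(L*C)); comparing the sensing coil against the reference coil and a LUT-calibrated threshold flags the probe. All numeric values are hypothetical.

        import math

        # Hypothetical tank parameters; real values depend on the LSI and coils.
        L_NOMINAL = 10e-9   # nominal coil inductance (H)
        C_NOMINAL = 1e-12   # tank capacitance (F)

        def lc_frequency(L, C):
            """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
            return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

        def probe_detected(f_sense, f_ref, threshold_lut, code):
            """Flag a probe when the sense/reference frequency mismatch exceeds
            a threshold read from a calibration lookup table (LUT)."""
            return abs(f_sense - f_ref) > threshold_lut[code]

        # A micro probe near the sensing coil perturbs its inductance by ~2%.
        f_ref = lc_frequency(L_NOMINAL, C_NOMINAL)
        f_sense = lc_frequency(L_NOMINAL * 0.98, C_NOMINAL)
        threshold_lut = {0: 0.005 * f_ref}  # hypothetical calibration entry
        print(probe_detected(f_sense, f_ref, threshold_lut, 0))  # True -> alarm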

    Dynamic Polymorphic Reconfiguration to Effectively “CLOAK” a Circuit’s Function

    Today's society has become more dependent on the integrity and protection of digital information used in daily transactions, resulting in an ever-increasing need for information security. Additionally, the need for faster and more secure cryptographic algorithms to provide this information security has become paramount. Hardware implementations of cryptographic algorithms provide the necessary increase in throughput, but at the cost of leaking critical information. Side Channel Analysis (SCA) attacks allow an attacker to exploit the regular and predictable power signatures leaked by cryptographic functions used in algorithms such as RSA. This research focuses on a means to counteract this vulnerability by creating a Critically Low Observable Anti-Tamper Keeping Circuit (CLOAK) capable of continuously changing the way it functions in both power and timing. It has determined that a polymorphic circuit design capable of varying circuit power consumption and timing can protect a cryptographic device from Electromagnetic Analysis (EMA) attacks. In essence, we are effectively CLOAKing the circuit's functions from an attacker.
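
    The polymorphic idea can be caricatured in software. The Python sketch below is a loose analogy to the hardware reconfiguration, not the dissertation's circuit: each call randomly picks one of several functionally equivalent implementations whose differing operation mixes would yield different power and timing signatures.

        import random

        # Three functionally equivalent ways to XOR two bytes; each exercises
        # a different mix of operations, hence a different power/timing profile.
        def xor_direct(a, b):
            return a ^ b

        def xor_via_or_and(a, b):
            return (a | b) & ~(a & b) & 0xFF

        def xor_bit_by_bit(a, b):
            r = 0
            for i in range(8):
                r |= (((a >> i) ^ (b >> i)) & 1) << i
            return r

        VARIANTS = [xor_direct, xor_via_or_and, xor_bit_by_bit]

        def cloaked_xor(a, b):
            """Pick a random equivalent variant per call, so repeated
            executions do not expose one predictable signature."""
            return random.choice(VARIANTS)(a, b)

        assert all(f(0x3A, 0xC5) == 0x3A ^ 0xC5 for f in VARIANTS)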

    Null Convention Logic applications of asynchronous design in nanotechnology and cryptographic security

    This dissertation presents two Null Convention Logic (NCL) applications of asynchronous logic circuit design in nanotechnology and cryptographic security. The first application is the Asynchronous Nanowire Reconfigurable Crossbar Architecture (ANRCA); the second is an asynchronous S-Box design for cryptographic systems resistant to Side-Channel Attacks (SCA). The contributions of the first application are: 1) Proposed a diode- and resistor-based ANRCA (DR-ANRCA). Three configurable logic block (CLB) structures were designed to efficiently reconfigure a given DR-PGMB as any of the 27 arbitrary NCL threshold gates. A hierarchical architecture was also proposed to implement higher-level logic that requires a large number of DR-PGMBs, such as multiple-bit NCL registers. 2) Proposed a memristor look-up-table-based ANRCA (MLUT-ANRCA). An equivalent circuit simulation model has been presented in VHDL and simulated in Quartus II, and the comparison between the two ANRCAs has been analyzed numerically. 3) Presented defect-tolerance and repair strategies for both DR-ANRCA and MLUT-ANRCA. The contributions of the second application are: 1) Designed an NCL-based S-Box for the Advanced Encryption Standard (AES). Functional verification was done using ModelSim and a Field-Programmable Gate Array (FPGA). 2) Implemented two different power analysis attacks on both the NCL S-Box and a conventional synchronous S-Box. 3) Developed a novel approach based on stochastic logic to enhance resistance against DPA and CPA attacks. The functionality of the proposed design has been verified using an 8-bit AES S-Box design. The effects of decision weight, bitstream length, and input repetition count on error rates have also been studied. Experimental results show that the proposed approach enhances resistance against the CPA attack by successfully protecting the hidden key.
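
    The bitstream-length/error-rate trade-off mentioned above can be illustrated with a minimal sketch, assuming a simple unipolar stochastic encoding in which a value is represented by the probability of ones in a random bitstream; this illustrates only the principle, not the dissertation's NCL design.

        import random

        def to_stochastic(value, length, max_value=255):
            """Encode an integer as a bitstream with P(bit = 1) = value/max."""
            p = value / max_value
            return [1 if random.random() < p else 0 for _ in range(length)]

        def from_stochastic(bits, max_value=255):
            """Decode by counting ones; the error shrinks as the stream grows."""
            return round(sum(bits) / len(bits) * max_value)

        value = 0x63  # e.g. the AES S-Box output for input 0x00
        for length in (64, 1024, 16384):
            decoded = from_stochastic(to_stochastic(value, length))
            print(length, abs(decoded - value))  # error drops with length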

    Introduction to Electromagnetic Information Security

    On the Easiness of Turning Higher-Order Leakages into First-Order

    Applying random and uniform masks to the processed intermediate values of cryptographic algorithms is arguably the most common countermeasure to thwart side-channel analysis attacks. So-called masking schemes exist in various shapes but are mostly used to prevent side-channel leakages up to a certain statistical order. Thus, to learn any information about the key-involving computations, a side-channel adversary has to estimate higher-order statistical moments of the leakage distributions. However, the complexity of this approach increases exponentially with the statistical order to be estimated, and the precision of the estimation suffers from an enormous sensitivity to the noise level. In this work we present an alternative procedure for exploiting higher-order leakages that stands out for its simplicity and effectiveness. Our approach, which focuses on (but is not limited to) univariate leakages of hardware masking schemes, is based on categorizing the power traces according to the distribution of leakage points. In particular, at each sample point an individual subset of traces is considered to mount ordinary first-order attacks. We present the theoretical concept of our approach based on simulated traces and examine its efficiency on noisy real-world measurements taken from a first-order secure threshold implementation of the block cipher PRESENT-80, implemented on a 150nm CMOS ASIC prototype chip. Our analyses verify that the proposed technique is indeed a worthy alternative to conventional higher-order attacks and suggest that it might be able to relax the sensitivity of higher-order evaluations to the noise level.
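
    The categorization idea can be demonstrated on a toy simulation: at a sample point whose first-order leakage is removed by masking, splitting the traces by their leakage value (here simply at the median) exposes first-order dependence inside each subset. The Python sketch below uses a hypothetical leakage model and is a simplified illustration of the concept, not the paper's exact procedure.

        import numpy as np

        rng = np.random.default_rng(0)

        def first_order_cpa(traces, hypo):
            """Ordinary first-order attack: Pearson correlation between a
            leakage hypothesis and the measured samples."""
            t = traces - traces.mean()
            h = hypo - hypo.mean()
            return (t @ h) / (np.linalg.norm(t) * np.linalg.norm(h) + 1e-12)

        def categorized_attack(traces, hypo):
            """Split the traces at this sample point according to their
            leakage values, then mount a first-order attack per subset."""
            median = np.median(traces)
            scores = []
            for subset in (traces <= median, traces > median):
                if subset.sum() > 1:
                    scores.append(abs(first_order_cpa(traces[subset],
                                                      hypo[subset])))
            return max(scores)

        # Toy univariate masked leakage: two shares sum into one sample point.
        n = 20000
        secret = rng.integers(0, 2, n)              # sensitive bit
        mask = rng.integers(0, 2, n)                # uniform random mask
        leak = (secret ^ mask) + mask + 0.5 * rng.standard_normal(n)
        print("plain first-order:", abs(first_order_cpa(leak, secret.astype(float))))
        print("categorized      :", categorized_attack(leak, secret.astype(float)))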

    RAKSHA: Reliable and Aggressive frameworK for System design using High-integrity Approaches

    Advances in fabrication technology have been a major driving force behind the unprecedented increase in computing capabilities over the last several decades. Despite huge reductions in the switching energy of transistors, two major issues have emerged with decreasing fabrication technology scales: 1) the increased impact of process, voltage, and temperature (PVT) variation on transistor performance, and 2) the increased susceptibility of transistors to soft errors induced by high-energy particles. In the presence of PVT variation, as transistor sizes continue to decrease, the design margins used to guarantee correct operation under worst-case scenarios have been increasing. Systems run at a clock frequency determined by accounting for worst-case timing paths, operating conditions, and process variations. Timing-speculation-based reliable and aggressive clocking advocates going beyond worst-case limits to achieve the best performance while not avoiding, but detecting and correcting, a modest number of timing errors. Such a design methodology exploits the fact that timing-critical paths are rarely exercised in a design, and typical execution happens much faster than the timing requirements dictated by worst-case scenarios. Better-than-worst-case design is advocated by several recent research pursuits, which propose to exploit built-in fault tolerance mechanisms to enhance computer system performance. Recent works have also shown that the performance loss due to overprovisioning based on worst-case design margins is upwards of 20% in terms of operating frequency and upwards of 50% in terms of power efficiency. The threat of soft-error-induced system failure in computing systems has become more prominent as we adopt ultra-deep submicron process technologies. With respect to soft error susceptibility, decreasing transistor geometries lower the energy threshold needed by high-energy particles to induce errors. As this trend continues, the need for fault tolerance mechanisms to counteract this effect has moved from a nice-to-have to a requirement in current and future systems. In this dissertation, RAKSHA (meaning "to protect and save" in Sanskrit), we take a multidimensional look at the challenges of system design with scaled technologies using high-integrity techniques. In RAKSHA, to mitigate soft errors, we propose lightweight high-integrity mechanisms as basic system building blocks, which allow the system to offer performance levels comparable to a non-fault-tolerant system. In addition, we propose to exploit the availability of these fault tolerance mechanisms to tolerate data-dependent failures, thus setting systems to operate at typical-case circuit delays and enhancing system performance. We also propose the use of novel high-integrity cells for increasing system energy efficiency and potentially increasing system security by combating power-analysis-based side-channel attacks. Such an approach allows balancing of performance, power, and security with no further overhead beyond the resources needed to incorporate fault tolerance. Using our framework, instead of designing circuits to meet worst-case requirements, circuits can be designed to meet typical-case requirements. In RAKSHA, we propose two efficient soft error mitigation schemes, namely Soft Error Mitigation (SEM) and Soft and Timing Error Mitigation (STEM), using the approach of multiple clocking of data to protect combinational logic blocks from soft errors.
Our first technique, SEM, based on distributed and temporal voting of three registers, unloads the soft error detection overhead from the critical path of the system. SEM is also capable of ignoring false errors and recovers from soft errors using in-situ fast recovery, avoiding recomputation. Our second technique, STEM, while tolerating soft errors, adds timing error detection capability to guarantee reliable execution in aggressively clocked designs that enhance system performance by operating beyond the worst-case clock frequency. We also present a specialized low-overhead clock phase management scheme that ably supports our proposed techniques. Timing-annotated gate-level simulations, using 45nm libraries, of a pipelined adder-multiplier and a DLX processor show that both our techniques achieve near-100% fault coverage. For the DLX processor, even under severe fault injection campaigns, SEM achieves an average performance improvement of 26.58% over a conventional triple-modular-redundancy voter-based soft error mitigation scheme, while STEM outperforms SEM by 27.42%. We refer to systems built with SEM and STEM cells as reliable and aggressive systems. Energy consumption minimization in computing systems has attracted a great deal of attention and has become critical due to battery life considerations and environmental concerns. To address this problem, many task scheduling algorithms have been developed using dynamic voltage and frequency scaling (DVFS). The majority of these algorithms involve two passes: schedule generation and slack reclamation. Using this approach, a linear combination of frequencies has been proposed to achieve near-optimal energy for systems operating with discrete and traditional voltage-frequency pairs. In RAKSHA, we propose a new slack reclamation algorithm, aggressive dynamic voltage and frequency scaling (ADVFS), using reliable and aggressive systems. ADVFS exploits the enhanced voltage-frequency spectrum offered by reliable and aggressive designs to improve energy efficiency. Formal proofs are provided to show that optimal energy for reliable and aggressive designs is achieved either by a single frequency or by a linear combination of frequencies. ADVFS has been evaluated using random task graphs, and our results show an 18% reduction in energy when compared with continuous DVFS and more than 33% when compared with a scheme using a linear combination of traditional voltage-frequency pairs. Recent events have indicated that attackers are banking on side-channel attacks, such as differential power analysis (DPA) and correlation power analysis (CPA), to exploit information leaks from physical devices. Random dynamic voltage and frequency scaling (RDVFS) has been proposed to prevent such attacks and has very little area, power, and performance overhead. But due to the one-to-one mapping between the voltage and frequency of DVFS voltage-frequency pairs, RDVFS cannot prevent power attacks. In RAKSHA, we propose a novel countermeasure that uses reliable and aggressive designs to break this one-to-one mapping. Our experiments show that our technique significantly reduces the correlation for the actual key and also reduces the risk of power attacks by increasing the probability of incorrect keys exhibiting maximum correlation. Moreover, our scheme also enables systems to operate beyond the worst-case estimates to offer improved power and performance benefits.
For the experiments conducted on an AES S-Box implemented using 45nm CMOS technology, our approach increased performance by 22% over the worst-case estimates. It also decreased the correlation for the correct key by an order of magnitude and increased by almost 3.5X the probability that wrong keys, rather than the correct key, exhibit maximum correlation. Overall, RAKSHA offers a new way to balance the intricate interplay between the various design constraints of systems designed with small scaled technologies.
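
    The temporal-voting principle behind SEM can be sketched behaviorally. The Python snippet below illustrates 2-of-3 voting over time-shifted samples, not the actual standard-cell design, and all timing values are hypothetical: the same combinational output is captured at three skewed instants, so a transient glitch corrupts at most one sample and is voted out.

        def majority(a, b, c):
            """Bitwise 2-of-3 majority vote over three captured samples."""
            return (a & b) | (b & c) | (a & c)

        def sem_sample(combinational_output, skews=(0.0, 0.5, 1.0)):
            """Capture the same combinational output at three time-shifted
            clock edges and vote; a short transient corrupts at most one."""
            samples = [combinational_output(t) for t in skews]
            return majority(*samples)

        # A soft-error glitch flips one bit only around t = 0.5.
        def glitched(t):
            return 0b1010 ^ (0b0100 if abs(t - 0.5) < 0.2 else 0)

        print(bin(sem_sample(glitched)))  # 0b1010: the transient is masked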

    Security for Service-Oriented On-Demand Grid Computing

    Grid computing has become an established standard for distributed high-performance computing. While the first generation of grid middleware systems still worked with proprietary interfaces, the introduction of service-oriented standards such as WSDL and SOAP through the Open Grid Services Architecture (OGSA) significantly increased the interoperability of grids. This paved the way for several national and international grid projects in which a large number of academic and a growing number of industrial applications are executed on the grid, requiring on-demand provisioning and use of resources. On-demand grids are characterized by strong fluctuation in both the software and the users. Furthermore, both the software and the data it operates on are usually proprietary and of high financial value. This stands in strong contrast to today's academic grid applications, which are mostly open source or freely available. To meet the demands of on-demand grid use, the grid must offer administrative components with which users can install software autonomously, even when it requires root privileges. At the same time, the security of the grid must be increased to protect the software, data, and metadata of commercial users. This would also allow the grid to serve as a base technology for the emerging field of cloud computing, where similar requirements exist. As with most complex IT systems, traditional grid middleware contains vulnerabilities, which the demanded extension of administrative capabilities potentially turns into an even bigger problem. The vulnerabilities in the grid middleware open a homogeneous attack vector on the otherwise heterogeneous and mostly private cluster environments. In addition, unlike private cluster environments and small academic grid projects, the envisioned large and open grid landscapes confront administrators with entirely unknown users and behavior patterns, which makes detecting malicious behavior far harder. As a consequence, grid systems become ever more attractive targets for attackers, since standardized access mechanisms enable attacks on a large number of machines and on data of potentially high financial value. While computing capacity, bandwidth, and storage can be attractive targets in themselves, the software contained in the grid and the stored data are far more critical resources. Model data for the latest crash-test simulations, an industrial fluid simulation, or customer billing data have considerable value and must be protected. If a grid provider cannot ensure the security of software, data, and metadata, industrial adoption of open grid technology will not take place. The need for strict security mechanisms must be reconciled with the diametrically opposed demand for simple and fast integration of new software and new customers.
This thesis presents new approaches for improving the security and usability of service-oriented on-demand grid computing. They enable the autonomous and secure installation and use of complex service-oriented and traditional software on shared resources. New security mechanisms protect users' software, data, and metadata from other users and from external attackers. The system is based on operating-system virtualization technologies and offers dynamic creation and installation functionality for virtual images in a secure environment, in which automated mechanisms set user-specific firewall rules to create per-user network partitions. The grid environment itself is divided into several zones so that the compromise of individual components cannot easily endanger the entire system. The grid head node and the image creation server are each placed in separate segments of this demilitarized zone. To enable the secure integration of existing business applications, the BPEL standard (Business Process Execution Language) and a workflow execution engine are extended with grid security concepts. The extension allows seamless integration of protected grid services with existing web services. The workflow execution engine provides the creation and, for long-running applications, the renewal of proxy certificates. The approach enables the secure joint execution of new, fine-grained, service-oriented grid applications together with traditional batch and job-farming applications. This is achieved by integrating the presented grid sandboxing system into existing cluster scheduling systems. An innovative server rotation strategy provides further security for the grid head node server by transparently renewing its virtual server image, thereby also neutralizing unknown and undetected attacks. To recognize the attacks that could not be prevented, a novel intrusion detection system built on data stream database systems is presented. As the final contribution of this thesis, an extension of the model-driven software development process is introduced that enables the automated generation of secure grid services, replacing their complex and therefore insecure manual creation. A prototype implementation of the concepts is presented, based on Globus Toolkit 4, the Sun Grid Engine, and the ActiveBPEL engine. The model-driven development environment was realized in Eclipse for Globus Toolkit 4. Experimental results and an evaluation of the critical components of the presented new grid are given. The presented security mechanisms are intended to enable the next phase of the evolution of grid computing in a secure environment.
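
    The per-user network partitioning described above can be sketched as rule generation. The following Python snippet is a hedged illustration, not the thesis's implementation; user names and VM addresses are hypothetical. It emits iptables commands that permit traffic only between virtual machines of the same user and drop everything else.

        # Hypothetical mapping of grid users to their sandbox VM addresses.
        USER_VMS = {
            "alice": ["10.0.1.10", "10.0.1.11"],
            "bob":   ["10.0.2.10"],
        }

        def partition_rules(user_vms):
            """Allow traffic only between VMs of the same user; a final DROP
            rule isolates all remaining (cross-user) forwarding."""
            rules = []
            for user, vms in user_vms.items():
                for src in vms:
                    for dst in vms:
                        if src != dst:
                            rules.append(
                                f"iptables -A FORWARD -s {src} -d {dst} -j ACCEPT")
            rules.append("iptables -A FORWARD -j DROP")
            return rules

        for rule in partition_rules(USER_VMS):
            print(rule)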

    The application of digital techniques to an automatic radar track extraction system

    'Modern' radar systems have come in for much criticism in recent years, particularly in the aftermath of the Falklands campaign. There have also been notable failures in commercial designs, including the well-publicised 'Nimrod' project, which was abandoned due to a persistent inability to meet signal processing requirements. There is clearly a need for improvement in radar signal processing techniques, as many designs rely on technology dating from the late 1970s, much of which is obsolete by today's standards. The Durham Radar Automatic Track Extraction System (RATES) is a practical implementation of current microprocessor technology, applied to plot extraction of surveillance radar data. In addition to suggestions for the design of such a system, results are quoted for the predicted performance when compared with a similar product using 1970s design methodology. Suggestions are given for the use of other VLSI techniques in plot extraction, including logic arrays and digital signal processors. In conclusion, there is an illustrated discussion concerning the use of systolic arrays in RATES and a prediction that this will represent the optimum architecture for future high-speed radar signal processors.
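
    As a minimal sketch of what plot extraction involves (a generic threshold-and-cluster approach; the abstract does not describe RATES at this level of detail), the Python snippet below thresholds a range/azimuth map, groups adjacent over-threshold cells by flood fill, and reports one centroid per group as a plot.

        import numpy as np

        def extract_plots(video, threshold):
            """Threshold the range/azimuth map, cluster adjacent hit cells,
            and return one (range, azimuth) centroid per cluster."""
            hits = {tuple(c) for c in np.argwhere(video > threshold)}
            plots = []
            while hits:
                stack, group = [hits.pop()], []
                while stack:  # flood fill over 4-connected neighbours
                    r, a = stack.pop()
                    group.append((r, a))
                    for n in ((r - 1, a), (r + 1, a), (r, a - 1), (r, a + 1)):
                        if n in hits:
                            hits.remove(n)
                            stack.append(n)
                rs, azs = zip(*group)
                plots.append((sum(rs) / len(rs), sum(azs) / len(azs)))
            return plots

        # Noise floor plus one bright target spread over adjacent cells.
        video = np.abs(np.random.default_rng(1).normal(0, 1, (64, 64)))
        video[30:32, 40:43] += 10.0
        print(extract_plots(video, threshold=5.0))  # roughly [(30.5, 41.0)]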