133 research outputs found

    Optimization on fixed low latency implementation of GBT protocol in FPGA

    In the upgrade of the ATLAS experiment, the front-end electronics are exposed to a large radiation background, while high-speed optical links are required for data transmission between the on-detector and off-detector electronics. The GBT architecture and the Versatile Link (VL) project were developed at CERN to support bidirectional high-speed data transmission at a 4.8 Gbps line rate, known as the GBT link. In the ATLAS upgrade, besides the link to the on-detector electronics, the GBT link is also used between different off-detector systems. The GBTX ASIC is designed for the on-detector front-end; for the off-detector electronics, the GBT architecture is correspondingly implemented in Field Programmable Gate Arrays (FPGAs). CERN launched the GBT-FPGA project to provide reference implementations for different types of FPGA. In the ATLAS upgrade framework, the Front-End LInk eXchange (FELIX) system is used to interface the front-end electronics of several ATLAS subsystems. The GBT link is used between them to transfer the detector data and the timing, trigger, control and monitoring information. The trigger signal distributed in the down-link from FELIX to the front-end requires a fixed and low latency. In this paper, several optimizations of the GBT-FPGA IP core are introduced to achieve a lower fixed latency. For FELIX, a common firmware will be used to interface different front-ends with support for both GBT modes: the forward error correction mode and the wide mode. The modified GBT-FPGA core can switch between the GBT modes without FPGA reprogramming. The system clock distribution of the multi-channel FELIX firmware is also discussed in this paper.
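
    As a rough illustration of the two GBT modes mentioned above, the following sketch tallies the frame budget and the resulting user bandwidth. The field widths follow the published GBT frame format; the script itself is illustrative and not part of the paper.

    ```python
    # GBT frame budget: 120 bits every 25 ns (40 MHz frame clock) -> 4.8 Gbps line rate.
    FRAME_BITS = 120
    HEADER_BITS = 4        # frame header
    SLOW_CTRL_BITS = 4     # internal + external control (IC/EC) fields
    FRAME_CLOCK_HZ = 40e6

    MODES = {
        "GBT (FEC) mode": {"payload": 80,  "fec": 32},  # Reed-Solomon protected
        "wide mode":      {"payload": 112, "fec": 0},   # larger payload, no FEC
    }

    for name, m in MODES.items():
        assert HEADER_BITS + SLOW_CTRL_BITS + m["payload"] + m["fec"] == FRAME_BITS
        gbps = m["payload"] * FRAME_CLOCK_HZ / 1e9
        print(f"{name}: user payload {m['payload']} bits/frame -> {gbps:.2f} Gbps")
    ```

    The trade-off motivating the runtime mode switch is visible in the numbers: the FEC mode spends 32 bits per frame on error correction (3.20 Gbps of user bandwidth), while the wide mode reclaims them for payload (4.48 Gbps) at the cost of radiation tolerance.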

    Synchronization for capacity-approaching coded communication systems

    The dissertation concentrates on synchronization of capacity-approaching error-correction codes that are deployed in noisy channels with very low signal-to-noise ratio (SNR). The major topics are symbol timing synchronization and frame synchronization.

    Capacity-approaching error-correction codes, like turbo codes and low-density parity-check (LDPC) codes, are capable of reaching very low bit error rates and frame error rates in noisy channels by iterative decoding. To fully achieve the potential decoding capability of turbo codes and LDPC codes, proper symbol timing synchronization, frame synchronization and channel state estimation are required. The dissertation proposes a joint estimator of symbol time delay and channel SNR for symbol timing recovery, and a maximum a posteriori (MAP) frame synchronizer for frame synchronization.

    Symbol timing recovery is implemented by sampling and interpolation. The received signal is sampled multiple times per symbol period with unknown delay and unknown SNR. A joint estimator estimates the time delay and the SNR, and the signal is rebuilt by interpolating the available samples at the estimated time delay. The intermediate decoding results enable decision-feedback estimation, so the estimates of time delay and SNR are refined by iterative processing. This refinement improves the system performance significantly.

    Usually the sampling rate is assumed to be a strict integer multiple of the symbol rate. In a practical system, however, the local oscillators in the transmitter and the receiver may have random drifts. The sampling rate is then no longer an exact multiple of the symbol rate, and the sampling time follows a random walk, which may harm the system performance severely. The dissertation analyzes the effect of random time walks and proposes to mitigate it by overlapped sliding windows and iterative processing.

    Frame synchronization is required to find the correct boundaries of codewords. MAP frame synchronization, in the sense of minimizing the frame sync failure rate, is investigated. The MAP frame synchronizer exploits the low-density parity-check attributes of the capacity-approaching codes. The accuracy of frame synchronization is adequate for the considered coded systems to work reliably under very low SNR.
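
    For readers unfamiliar with interpolation-based timing recovery, here is a minimal feedforward sketch. It uses the classic Oerder-Meyr square-law delay estimator as a stand-in for the dissertation's joint delay/SNR estimator, and assumes a real-valued baseband signal oversampled at sps samples per symbol with a pulse that has excess bandwidth.

    ```python
    import numpy as np

    def estimate_delay(x, sps):
        """Oerder-Meyr square-law estimate of the fractional symbol delay, in [0, 1).

        x: real baseband samples at sps samples/symbol. Note the estimate also
        absorbs any pulse group delay, modulo one symbol period."""
        k = np.arange(len(x))
        c = np.sum(np.abs(x) ** 2 * np.exp(-2j * np.pi * k / sps))
        return float((-np.angle(c) / (2 * np.pi)) % 1.0)

    def interpolate_symbols(x, sps, tau):
        """Rebuild symbol-rate samples by linear interpolation at the estimated delay."""
        t = (np.arange(len(x) // sps - 1) + tau) * sps
        return np.interp(t, np.arange(len(x)), x)
    ```

    In the dissertation's decision-feedback scheme, the delay and SNR estimates would additionally be refined across iterations using intermediate decoding results, rather than computed once as above.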

    A study of high density bit transition requirements versus the effects on BCH error correcting coding

    Several methods for increasing bit transition densities in a data stream are summarized, discussed in detail, and compared against the constraints imposed by the 2 MHz data link of the space shuttle high rate multiplexer unit. These methods include the use of alternate pulse code modulation waveforms, data stream modification by insertion, alternate bit inversion, differential encoding, error encoding, and the use of bit scramblers. The pseudo-random cover sequence generator was chosen for application to the 2 MHz data link of the space shuttle high rate multiplexer unit. This method is fully analyzed and a design implementation is proposed.
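
    A cover-sequence scrambler of the kind chosen here XORs the data with a pseudo-random sequence; applying the same XOR at the receiver restores the data, while the line signal inherits the transition density of the PN sequence. The sketch below uses an illustrative 7-bit LFSR (x^7 + x^6 + 1), not the actual generator of the shuttle design.

    ```python
    def lfsr_cover_sequence(n_bits, state=0x7F):
        """PN cover sequence from a 7-bit Fibonacci LFSR, x^7 + x^6 + 1 (period 127).

        Width and taps are illustrative, not those of the shuttle HRMU design."""
        out = []
        for _ in range(n_bits):
            out.append(state & 1)
            fb = ((state >> 6) ^ (state >> 5)) & 1   # taps at bit positions 7 and 6
            state = (state >> 1) | (fb << 6)
        return out

    def scramble(bits, cover):
        """XOR with the cover sequence; the identical XOR at the receiver descrambles."""
        return [b ^ c for b, c in zip(bits, cover)]

    data = [0] * 32                                   # worst case: no transitions at all
    line = scramble(data, lfsr_cover_sequence(32))    # now carries the PN transitions
    assert scramble(line, lfsr_cover_sequence(32)) == data
    ```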

    A study of high density bit transition requirements versus the effects on BCH error correcting codes

    The use of PN sequence generators to guarantee a minimum number of transitions in an NRZ bit stream is described. The cover sequence generator (CSG) encoder/decoder design was constructed and demonstrated.

    Multilevel sequences and line codes

    M.Ing. (Electrical Engineering)

    As the demand for high-speed data communications over conventional channels such as coaxial cables and twisted pairs grows, it becomes necessary to optimize every aspect of the communication system at reasonable cost to meet this demand effectively. The choice of a line code is one of the most important aspects in the design of a communications system, as the line code determines the complexity, and thus also the cost, of several circuits in the system. It has become known in recent years that a multilevel line code is preferable to a binary code in cases where high-speed communication is desired. Apart from ternary codes, not many multilevel codes are available, and some of the existing line codes suffer from serious drawbacks: failure to comply with input restrictions, low efficiency, and high code complexity. In this study, Markov models and values of channel capacity are presented for several classes of restricted multilevel sequences which are thought to be of practical importance in view of the channel input restrictions that these codes satisfy. Different coding methods are used to construct low-complexity encoders and decoders for generating and decoding these sequences with high efficiency, good error behaviour and favourable power spectral densities.
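
    The channel capacity values referred to here follow Shannon's result for constrained sequences: C = log2(lambda_max) bits per symbol, where lambda_max is the largest eigenvalue of the adjacency matrix of the constraint graph. A minimal sketch, using a made-up three-level constraint rather than any code from the thesis:

    ```python
    import numpy as np

    # Toy constraint (an assumption, not a code from the thesis): three levels
    # {-1, 0, +1}, where the same nonzero level may not occur twice in a row.
    # States = last symbol sent; A[i][j] = 1 if level j may follow level i.
    A = np.array([[0, 1, 1],    # after -1: anything but -1
                  [1, 1, 1],    # after  0: anything
                  [1, 1, 0]])   # after +1: anything but +1

    lam = max(abs(np.linalg.eigvals(A)))                  # lambda_max = 1 + sqrt(2)
    print(f"capacity = {np.log2(lam):.4f} bits/symbol")   # ~1.27 < log2(3) ~ 1.58
    ```

    The gap between the constrained capacity and log2(3) is the price of the restriction; a code's efficiency is its rate divided by this capacity.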

    Implementation of FPGA in the Design of Embedded Systems

    The use of FPGAs (Field Programmable Gate Arrays) and configurable processors is an interesting new phenomenon in embedded development. FPGAs offer all of the features needed to implement most complex designs. Clock management is facilitated by on-chip PLL (phase-locked loop) or DLL (delay-locked loop) circuitry. Dedicated memory blocks can be configured as basic single-port RAMs, ROMs, FIFOs, or CAMs. Data processing, as embodied in the devices' logic fabric, varies widely. The ability to link the FPGA with backplanes, high-speed buses, and memories is afforded by support for various single-ended and differential I/O standards. Also found on today's FPGAs are system-building resources such as high-speed serial I/Os, arithmetic modules, embedded processors, and large amounts of memory. In this project we have applied such FPGAs to the design of embedded systems that can be designed, programmed, and deployed at the site of operation to handle many kinds of applications. We deal with two such applications: a prioritized traffic light controller, and a speech encrypting and decrypting system.
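
    For a flavour of the first application, a prioritized traffic light controller reduces to a small finite state machine. The sketch below models one in software; the phases, dwell times, and priority rule are all assumed for illustration (the thesis targets an FPGA implementation).

    ```python
    # Illustrative sketch only: a prioritized traffic-light state machine of the
    # kind the abstract describes; behavior and timings are assumptions, not the
    # thesis design. The main road gets the longest green and keeps it while its
    # queue sensor reports traffic.
    from dataclasses import dataclass

    @dataclass
    class Phase:
        lights: str      # e.g. "main=G side=R"
        ticks: int       # dwell time in clock ticks

    PHASES = [
        Phase("main=G side=R", ticks=8),   # priority: longest green for main road
        Phase("main=Y side=R", ticks=2),
        Phase("main=R side=G", ticks=4),
        Phase("main=R side=Y", ticks=2),
    ]

    def controller(sensor_main_busy):
        """Cycle phases; extend the main-road green while its sensor stays busy."""
        i = t = 0
        while True:
            yield PHASES[i].lights
            t += 1
            extend = (i == 0 and sensor_main_busy())
            if t >= PHASES[i].ticks and not extend:
                i, t = (i + 1) % len(PHASES), 0

    # usage: step the FSM one clock tick at a time
    fsm = controller(lambda: False)
    print(next(fsm))   # "main=G side=R"
    ```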

    RAPID CLOCK RECOVERY ALGORITHMS FOR DIGITAL MAGNETIC RECORDING AND DATA COMMUNICATIONS

    SIGLE. Available from British Library Document Supply Centre, DSC:DXN024293 / BLDSC - British Library Document Supply Centre, GB, United Kingdom.

    Design and Analysis of Cryptographic Algorithms for Authentication

    During the previous decades, the growing demand for security in the digital world, e.g., the Internet, led to numerous groundbreaking research topics in the field of cryptography. This thesis focuses on the design and analysis of cryptographic primitives and schemes to be used for authentication of data and communication endpoints, i.e., users. It is structured into three parts. In the first part we present the first freely scalable multi-block-length block-cipher-based compression function (Counter-bDM), accompanied by a thorough security analysis regarding its preimage and collision security.

    The second and major part is devoted to password hashing. It is motivated by the large number of passwords leaked during the last years and by our discovery of side-channel attacks on scrypt, the first modern password scrambler that allowed the amount of memory required to compute a password hash to be parameterized. After summarizing which properties we expect from a modern password scrambler, we (1) describe a cache-timing attack on scrypt based on its password-dependent memory-access pattern and (2) outline an additional attack vector, garbage-collector attacks, which exploits optimizations that may neglect to overwrite the internally used memory. Based on our observations, we introduce Catena, the first memory-demanding password-scrambling framework that allows a password-independent memory-access pattern for resistance to the aforementioned attacks. Catena was submitted to the Password Hashing Competition (PHC) and, after two years of rigorous analysis, ended up as a finalist, gaining special recognition for its agile framework approach and side-channel resistance. We provide six instances of Catena suitable for a variety of applications. We close the second part of this thesis with an overview of modern password scramblers regarding their functional, security, and general properties, supported by a brief analysis of their resistance to garbage-collector attacks.

    The third part of this thesis is dedicated to the integrity (authenticity of data) of nonce-based authenticated encryption schemes (NAE). We introduce the so-called j-IV-Collision Attack, which yields an upper bound for an adversary that is provided with a first successful forgery and tries to efficiently compute j additional forgeries for a particular NAE scheme (in short: reforgeability). Additionally, we introduce the corresponding security notion j-INT-CTXT and provide a comparative analysis (regarding j-INT-CTXT security) of the third-round submissions to the CAESAR competition and the four classical and widely used NAE schemes CWC, CCM, EAX, and GCM.
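
    The distinction at the heart of the cache-timing attack and of Catena's defence is whether the sequence of memory addresses read during hashing depends on the password. The following sketch contrasts the two behaviours; both functions are illustrative toys, not scrypt's or Catena's actual algorithms.

    ```python
    # A cache-timing adversary observes which memory lines are touched. If the
    # read indices are derived from secret state, the trace leaks information
    # about the password; if they are fixed, it leaks nothing.
    import hashlib

    def h(*parts):
        return hashlib.sha256(b"".join(parts)).digest()

    def mix_password_dependent(pw, salt, n=1 << 10, rounds=1 << 10):
        """scrypt-like behaviour: read indices derived from secret state."""
        v = [h(pw, salt, i.to_bytes(4, "big")) for i in range(n)]
        x = h(pw, salt)
        for _ in range(rounds):
            j = int.from_bytes(x[:4], "big") % n   # index depends on the password
            x = h(x, v[j])
        return x

    def mix_password_independent(pw, salt, n=1 << 10, rounds=1 << 10):
        """Catena-style property: the access pattern is fixed for all passwords."""
        v = [h(pw, salt, i.to_bytes(4, "big")) for i in range(n)]
        x = h(pw, salt)
        for r in range(rounds):
            j = r % n                              # public, password-independent index
            x = h(x, v[j])
        return x

    tag = mix_password_independent(b"hunter2", b"salt")
    ```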

    Development of a field-deployable error rate tester for 10 Gbps fiber optic transmission utilizing field programmable gate array technology

    The bit error rate of a digital signal is the number of bits received incorrectly divided by the total number of bits received. A Bit Error Rate Tester (BERT) is a piece of equipment designed to calculate the bit error rate of a communication medium. The typical method of bit error rate testing is to produce a pseudorandom binary sequence that can be transmitted over a fiber optic line and verified on the receiving end. The current equipment for bit error rate testing of 10 Gbps fiber optic data equipment is generally large, lab-bench-based equipment, which is not feasible for use in the field. Utilizing current FPGA technology, the design and manufacture of BERT equipment for 10 Gbps fiber optic data lines that is compact enough for field use should be feasible. This thesis investigates prevailing methods of bit error testing and compares their relative strengths and weaknesses. Based on this analysis, a proposed process will allow BERT implementation in a compact package. Successful development and implementation of this design will facilitate the productization of a portable and cheaper alternative to the more expensive and stationary desktop BERTs in current use.
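
    The PRBS-based test loop described here is simple to express in software; a real BERT implements the same loop in hardware at line rate. A minimal sketch using the standard PRBS-31 polynomial (x^31 + x^28 + 1, per ITU-T O.150); the injected error probability is arbitrary.

    ```python
    import random

    def prbs31(n_bits, state=0x7FFFFFFF):
        """Generate n_bits of the PRBS-31 pattern, x^31 + x^28 + 1."""
        out = []
        for _ in range(n_bits):
            fb = ((state >> 30) ^ (state >> 27)) & 1   # taps at bits 31 and 28
            out.append(fb)
            state = ((state << 1) | fb) & 0x7FFFFFFF
        return out

    tx = prbs31(100_000)
    rx = [b ^ (random.random() < 1e-3) for b in tx]    # inject ~1e-3 error rate
    errors = sum(t != r for t, r in zip(tx, rx))
    print(f"BER = {errors / len(tx):.2e}")
    ```

    On hardware, the receiver regenerates the same PRBS locally (or self-synchronizes from the incoming stream) and counts mismatches, so no reference copy of the transmitted data needs to be stored.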