
    Self-timed field programmable gate array architectures


    Reversible Computation: Extending Horizons of Computing

    This open access State-of-the-Art Survey presents the main recent scientific outcomes in the area of reversible computation, focusing on those that emerged during COST Action IC1405 "Reversible Computation - Extending Horizons of Computing", a European research network that operated from May 2015 to April 2019. Reversible computation is a new paradigm that extends the traditional forwards-only mode of computation with the ability to execute in reverse, so that computation can run backwards as easily and naturally as forwards. It aims to deliver novel computing devices and software, and to enhance existing systems by equipping them with reversibility. There are many potential applications of reversible computation, including languages and software tools for reliable and recovery-oriented distributed systems, and revolutionary reversible logic gates and circuits, but these can be realised and have a lasting effect only if firm conceptual and theoretical foundations are established first.
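To make the paradigm concrete, here is a minimal sketch of my own (not taken from the survey): the Toffoli gate is a universal reversible gate and is its own inverse, so a circuit built from it can be run backwards to recover its input exactly.

```python
# Minimal illustration of reversible computation: the Toffoli (CCNOT)
# gate is self-inverse, so replaying a circuit's gates in reverse order
# undoes the whole computation.

def toffoli(a, b, c):
    """Flip target bit c iff both control bits a and b are 1."""
    return a, b, c ^ (a & b)

def run(circuit, bits):
    """Apply a list of (gate, wire-indices) steps forwards."""
    bits = list(bits)
    for gate, wires in circuit:
        vals = gate(*(bits[w] for w in wires))
        for w, v in zip(wires, vals):
            bits[w] = v
    return bits

def run_backwards(circuit, bits):
    """Undo the circuit: apply the self-inverse gates in reverse order."""
    return run(list(reversed(circuit)), bits)

circuit = [(toffoli, (0, 1, 2)), (toffoli, (1, 2, 3))]
state = [1, 1, 0, 1]
forward = run(circuit, state)
assert run_backwards(circuit, forward) == state  # input recovered exactly
```

Because every gate is a bijection on the state, no information is discarded at any step, which is the property the survey's logic-gate and language-level applications build on.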


    System support for modern memory technologies

    Trust and scalability are the two major factors that impede the adoption of clouds. The possibility of privileged access to customer data by a cloud provider limits the use of clouds for processing security-sensitive data. Low-latency cloud services rely on in-memory computation and are therefore limited by several characteristics of Dynamic RAM (DRAM), such as capacity, density, and energy consumption. Two technological areas address these factors. Mainstream server platforms, such as Intel Software Guard eXtensions (SGX) and AMD Secure Encrypted Virtualisation (SEV), offer extensions for trusted execution in untrusted environments. Various technologies of Non-Volatile RAM (NV-RAM) offer better capacity and density than DRAM and can thus be considered future DRAM alternatives. However, these technologies and extensions require new programming approaches and system support, since they add features to the system architecture: new system components (Intel SGX) and data persistence (NV-RAM). This thesis is devoted to the programming and architectural aspects of persistent and trusted systems. For trusted systems, an in-depth analysis of the new architectural extensions was performed, and a novel framework named EActors and a database engine named STANlite were developed to use the capabilities of trusted execution effectively. For persistent systems, an in-depth analysis of prospective memory technologies, their features, and their possible impact on system architecture was performed. A new persistence model, the hypervisor-based model of persistence, was developed and evaluated with the NV-Hypervisor. This offers transparent persistence for legacy and proprietary software, and supports virtualisation of persistent memory.

    Microkernel mechanisms for improving the trustworthiness of commodity hardware

    The thesis presents microkernel-based software-implemented mechanisms for improving the trustworthiness of computer systems based on commercial off-the-shelf (COTS) hardware that can malfunction when the hardware is impacted by transient hardware faults. The hardware anomalies, if undetected, can cause data corruptions, system crashes, and security vulnerabilities, significantly undermining system dependability. Specifically, we adopt the single event upset (SEU) fault model and address transient CPU or memory faults. We take advantage of the functional correctness and isolation guarantee provided by the formally verified seL4 microkernel and hardware redundancy provided by multicore processors, design the redundant co-execution (RCoE) architecture that replicates a whole software system (including the microkernel) onto different CPU cores, and implement two variants, loosely-coupled redundant co-execution (LC-RCoE) and closely-coupled redundant co-execution (CC-RCoE), for the ARM and x86 architectures. RCoE treats each replica of the software system as a state machine and ensures that the replicas start from the same initial state, observe consistent inputs, perform equivalent state transitions, and thus produce consistent outputs during error-free executions. Compared with other software-based error detection approaches, the distinguishing feature of RCoE is that the microkernel and device drivers are also included in redundant co-execution, significantly extending the sphere of replication (SoR). Based on RCoE, we introduce two kernel mechanisms, fingerprint validation and kernel barrier timeout, detecting fault-induced execution divergences between the replicated systems, with the flexibility of tuning the error detection latency and coverage. The kernel error-masking mechanisms built on RCoE enable downgrading from triple modular redundancy (TMR) to dual modular redundancy (DMR) without service interruption. 
We run synthetic benchmarks and system benchmarks to evaluate the performance overhead of the approach, observe that the overhead varies with the characteristics of the workloads and the variant used (LC-RCoE or CC-RCoE), and conclude that the approach is applicable to real-world applications. The effectiveness of the error detection mechanisms is assessed by conducting fault injection campaigns on real hardware, and the results demonstrate a compelling improvement.
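The fingerprint-comparison idea can be sketched abstractly. The following is my own hypothetical model, not seL4 or RCoE code: replicas of one deterministic state machine consume identical inputs in lock-step, and a barrier compares per-step state fingerprints to flag an SEU-induced divergence.

```python
# Hypothetical sketch of fingerprint-based divergence detection between
# replicated state machines (names and structure are illustrative only).
import hashlib

class Replica:
    def __init__(self):
        self.state = 0

    def step(self, inp):                 # deterministic state transition
        self.state = (self.state * 31 + inp) & 0xFFFFFFFF

    def fingerprint(self):               # compact digest of replica state
        return hashlib.sha256(self.state.to_bytes(4, "big")).hexdigest()

def co_execute(replicas, inputs, fault_at=None, faulty=None):
    """Run replicas in lock-step; return the step index where their
    fingerprints diverge, or None if they stay consistent."""
    for i, inp in enumerate(inputs):
        for r in replicas:
            r.step(inp)
        if i == fault_at:                # model an SEU: flip one state bit
            replicas[faulty].state ^= 1 << 7
        prints = {r.fingerprint() for r in replicas}
        if len(prints) > 1:              # barrier: replicas disagree
            return i
    return None

assert co_execute([Replica(), Replica(), Replica()], [5, 9, 2]) is None
assert co_execute([Replica(), Replica(), Replica()], [5, 9, 2],
                  fault_at=1, faulty=0) == 1
```

With three replicas, the disagreeing minority can be outvoted, which is the intuition behind the TMR-to-DMR downgrade described above.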

    Hybrid pulse interval modulation-code-division multiple-access for optical wireless communications.

    The work in this thesis investigates the properties of the IR diffuse wireless link with regard to: the use of sets of signature sequences with good message-separation properties (hence providing a low BER); the suitability of a hPIM-CDMA scheme for IR diffuse wireless systems under the constraint of eye-safety regulations (i.e. when all users are transmitting simultaneously); and the quality of message separation under multipath propagation. The suitability of current DS-CDMA systems using other modulation techniques is also investigated and compared with hPIM-CDMA in terms of power efficiency, data throughput and error rate. A new algorithm is also proposed for generating large sets of (n,3,1,1)-OOC practically with reduced computation time. The algorithm introduces five well-refined conditions that speed up the code construction process. Elapsed computation times for constructing codes with the proposed algorithm are compared with theory and show a significant improvement. Models for hPIM-CDMA and hPPM-CDMA systems based on passive devices only were also studied. The technique used in hPIM-CDMA, which uses a variable and shorter symbol duration to achieve higher data throughput, is presented in detail. An in-depth analysis of the BER performance is presented, and the results show that a lower BER and a higher data throughput can be achieved. A corrected BER expression for hPPM-CDMA is presented and justified in detail. The analyses also show that for DS-CDMA systems using certain sets of signature sequences, the BER performance cannot be approximated by a Gaussian function.
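The correlation constraints that an (n,3,1,1) optical orthogonal code must satisfy can be checked by brute force. The example codewords below are my own illustration, not the thesis's construction algorithm:

```python
# Verify the defining properties of an (n, 3, 1, 1) optical orthogonal
# code: cyclic autocorrelation <= 1 at every nonzero shift, and cyclic
# cross-correlation <= 1 at every shift. Codewords are given as the sets
# of positions of their 1-chips.

def correlation(marks_a, marks_b, n, shift):
    """Number of coincident 1-chips when code b is cyclically shifted."""
    return len(marks_a & {(m + shift) % n for m in marks_b})

def is_ooc(codewords, n, la=1, lc=1):
    for a in codewords:                  # autocorrelation constraint
        if any(correlation(a, a, n, s) > la for s in range(1, n)):
            return False
    for i, a in enumerate(codewords):    # cross-correlation constraint
        for b in codewords[i + 1:]:
            if any(correlation(a, b, n, s) > lc for s in range(n)):
                return False
    return True

# {0,1,4} and {0,2,7} form a (13, 3, 1, 1)-OOC: each codeword's cyclic
# differences are distinct, and the two difference sets are disjoint.
assert is_ooc([{0, 1, 4}, {0, 2, 7}], 13)
assert not is_ooc([{0, 1, 2}], 13)       # adjacent marks break lambda_a = 1
```

Conditions like these are what a practical construction algorithm must enforce while searching; checking them cheaply per candidate is what reduces the computation time.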

    Design and implementation of a downlink MC-CDMA receiver

    This thesis presents a study of a complete downlink transmission system using Multi-Carrier Code Division Multiple Access (MC-CDMA). The study covers synchronisation and channel estimation for a downlink MC-CDMA system, as well as the FPGA implementation of a baseband downlink MC-CDMA receiver. MC-CDMA combines Orthogonal Frequency Division Multiplexing (OFDM) with Code Division Multiple Access (CDMA) so as to integrate the two technologies. The MC-CDMA system is designed to operate within a 5 MHz bandwidth constraint for the indoor/outdoor pedestrian and vehicular channel models described by the Third Generation Partnership Project (3GPP). The OFDM component of the MC-CDMA system was simulated in MATLAB to obtain the basic parameters. Orthogonal Variable Spreading Factor (OVSF) codes of length 8 were chosen as the spreading codes for our MC-CDMA system, supporting maximum transmission rates of up to 20.6 Mbps and 22.875 Mbps (uncoded data, full load of 8 users) for the indoor/outdoor pedestrian and vehicular channels, respectively. An analytical study of bit error rate expressions for MC-CDMA in a multipath Rayleigh channel was carried out to allow fast and accurate performance evaluation. Decision-directed channel estimation techniques were studied to further improve the bit error rate performance of the downlink MC-CDMA system.
The decision-directed channel estimator using the linear minimum mean square error criterion with a 64 x 64 channel correlation matrix was chosen as a good trade-off between performance and complexity for an FPGA implementation. A new training sequence was designed for the receiver in the indoor/outdoor pedestrian configuration to obtain coarse estimates of the synchronisation time and the fractional carrier frequency offset in the time domain. Fine estimates of the synchronisation time and the carrier frequency offset were obtained in the frequency domain using pilot subcarriers. A complete MC-CDMA downlink receiver for the indoor/outdoor pedestrian channel, with closed-loop time and frequency synchronisation, was simulated before proceeding to the hardware implementation. The baseband downlink receiver for the indoor/outdoor pedestrian channel was implemented on a development system manufactured by Nallatech using the Xilinx XtremeDSP device. A transmitter compatible with the receiver was also built. Functional tests of the receiver were performed in a static laboratory wireless environment. A more dynamic test environment, including mobility of the transmitter, the receiver or scattering objects, would have been desirable but could not be realised owing to inherent logistical difficulties. The bit error rates measured with various numbers of active users and different modulations are close to computer simulations for an additive white Gaussian noise channel.
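The length-8 OVSF spreading codes mentioned above follow a standard recursive construction (this sketch is the textbook code tree, not the thesis's implementation): each level doubles every code c into (c, c) and (c, -c), which keeps all codes of a given length mutually orthogonal.

```python
# Generate OVSF spreading codes by the standard code-tree doubling rule
# and verify that distinct codes of the same length are orthogonal.

def ovsf_codes(length):
    codes = [[1]]
    while len(codes[0]) < length:
        codes = [branch for c in codes
                 for branch in (c + c, c + [-x for x in c])]
    return codes

codes = ovsf_codes(8)                    # the 8 length-8 spreading codes
assert len(codes) == 8
for i, a in enumerate(codes):            # distinct codes: zero dot product
    for j, b in enumerate(codes):
        dot = sum(x * y for x, y in zip(a, b))
        assert dot == (8 if i == j else 0)
```

Orthogonality is what lets a full load of 8 synchronous downlink users share the same subcarriers without mutual interference in an ideal channel.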

    Adaptive PN code synchronisation in DS-CDMA systems

    Spread Spectrum (SS) communication, initially designed for military applications, is now the basis for many of today's advanced communication systems such as Code Division Multiple Access (CDMA), the Global Positioning System (GPS) and Wireless Local Loop (WLL). For effective communication to take place in systems using SS modulation, the Pseudo-random Noise (PN) code used at the receiver to despread the received signal must be identical to, and synchronised with, the PN code that was used to spread the signal at the transmitter. Synchronisation is done in two steps: coarse synchronisation, or acquisition, and fine synchronisation, or tracking. Acquisition obtains a coarse estimate of the phase shift between the transmitted PN code and that at the receiver, so that the received PN code is aligned with the locally generated PN code. After acquisition, tracking maintains the alignment of the two PN codes. This thesis presents results of research carried out on a proposed adaptive PN code acquisition circuit designed to improve the synchronisation process in Direct Sequence CDMA (DS-CDMA) systems. The acquisition circuit is implemented using a Matched Filter (MF) for the correlation operation, and the threshold-setting device is an adaptive processor known as the Cell Averaging Constant False Alarm Rate (CA-CFAR) processor. It is a double-dwell acquisition circuit in which the second dwell is implemented by Post Detection Integration (PDI). Depending on the application, PDI can be used to mitigate the effect of frequency offset in non-coherent detectors and/or in the implementation of multiple-dwell acquisition systems. Equations relating the performance measures of the circuit, namely the probability of false alarm (Pfa), the probability of detection (Pd) and the mean acquisition time (E{Tacq}), are derived.
Monte Carlo simulation was used for independent validation of the theoretical results, and the strong agreement between the two shows the accuracy of the derived equations for the proposed circuit. Owing to the combination of the PDI and CA-CFAR processor in the implementation of the circuit, the results show that it can provide a good measure of robustness to frequency offset and noise power variations in a mobile environment, leading to improved acquisition time performance. The complete synchronisation circuit is realised by using this circuit in conjunction with a conventional code tracking circuit; therefore, a study of a Non-coherent Delay-Locked Loop (NDLL) code tracking circuit is also carried out.
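The two building blocks of the abstract's acquisition circuit can be sketched in a few lines. This is an illustrative toy with parameters of my own choosing, not the thesis design: a matched filter correlates the received chips against the local PN code, and a cell-averaging CFAR rule derives the detection threshold from neighbouring correlation cells instead of using a fixed value.

```python
# Toy matched-filter PN acquisition with a CA-CFAR adaptive threshold.
# Code length, noise level, and CFAR parameters are illustrative only.
import random

random.seed(1)
N = 31                                           # PN code length in chips
pn = [random.choice((-1, 1)) for _ in range(N)]  # local PN code replica
true_phase = 11                                  # unknown code phase

# Received chips: the PN code delayed by true_phase, plus Gaussian noise.
rx = [pn[(i - true_phase) % N] + random.gauss(0, 0.3) for i in range(N)]

# Matched-filter output magnitude for every candidate code phase.
cells = [abs(sum(rx[(i + s) % N] * pn[i] for i in range(N)))
         for s in range(N)]

def ca_cfar_threshold(cells, cut, guard=1, ref=8, scale=3.0):
    """CA-CFAR: threshold = scale * mean of the reference cells around
    the cell under test, excluding the guard cells next to it."""
    n = len(cells)
    refs = [cells[(cut + d) % n]
            for d in range(-(guard + ref), guard + ref + 1)
            if abs(d) > guard]
    return scale * sum(refs) / len(refs)

# The correlation peak at the true phase exceeds its adaptive threshold.
assert cells.index(max(cells)) == true_phase
assert cells[true_phase] > ca_cfar_threshold(cells, true_phase)
```

Because the threshold scales with the local noise estimate, the false alarm rate stays roughly constant when the noise power varies, which is the robustness property the abstract refers to.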

    Collaborative coding multiple access communications

    This thesis investigates collaborative coding multiple access (CCMA) channel communication schemes. The CCMA schemes potentially permit efficient simultaneous transmission by several users sharing a common channel, without subdivision in time, frequency or orthogonal codes. The main areas of investigation include the information transmission capacity for single and multiple access channels, coding/decoding techniques and practical system design for CCMA schemes. The information transmission capacity of a sampled and quantised single access AWGN channel is developed. It is determined and optimised when the channel input and output are limited by certain practical constraints. These investigations have led to the development and determination of the information transmission capacity of multiple access channels. The capacity of a multiple access channel is studied for two different classes of T-user channel models from both theoretical and practical points of view. It is shown, in principle, that higher transmission rates or, equivalently, more reliable communication than with time sharing is achievable employing the same signalling alphabet. The CCMA schemes, in addition to providing the multiple access function, can also incorporate a certain degree of error control capability. Two main decoding techniques, hard decision and maximum likelihood soft decision, are presented with uniquely decodable CCMA schemes. A new low complexity maximum likelihood decoding technique is described and analysed. Reliability performance of various collaborative codes is studied by simulation employing these decoding techniques. It is shown that uniquely decodable schemes permit the multiple access function to be combined with forward error correction. It is also found that soft decision decoding can provide an energy gain over hard decision decoding. The final area of investigation is a practical CCMA modem system design to combine collaborative coding and modulation. 
An M-ary frequency shift keying based modulation scheme is described for the T-user CCMA schemes. Three particular types of demodulation technique, square-law, zero-crossing counting and quadrature receiver, are described. These techniques are developed in software, then tested and evaluated over noiseless and noisy channels.
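A small worked example, of my own in the spirit of the schemes studied, shows why CCMA can beat time sharing: over a two-user binary adder channel the code pair C1 = {00, 11}, C2 = {00, 01, 10} is uniquely decodable, giving a sum rate of (1 + log2 3)/2, roughly 1.29 bits per channel use, versus 1 for time sharing.

```python
# Brute-force check that a two-user collaborative code pair is uniquely
# decodable over the binary adder channel (output = chip-wise integer sum).
from itertools import product
from math import log2

C1 = [(0, 0), (1, 1)]
C2 = [(0, 0), (0, 1), (1, 0)]

sums = {}
for c1, c2 in product(C1, C2):
    y = tuple(a + b for a, b in zip(c1, c2))
    assert y not in sums, "not uniquely decodable"
    sums[y] = (c1, c2)                   # decoding is a table lookup

assert len(sums) == 6                    # all 6 pairs give distinct outputs
assert sums[(1, 2)] == ((1, 1), (0, 1)) # example hard-decision decode
assert (log2(len(C1)) + log2(len(C2))) / 2 > 1  # sum rate beats time sharing
```

The same exhaustive-sum construction extends to T users, though the table-lookup decoder grows quickly, which is why the thesis studies lower-complexity maximum likelihood decoding.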