Emerging Security Threats in Modern Digital Computing Systems: A Power Management Perspective
Computing systems, from pocket-sized smartphones to massive cloud-based data centers, share one daunting design challenge: minimizing power consumption. In this effort, the power management sector is undergoing a rapid and profound transformation to promote clean and energy-proportional computing. At the hardware end of system design, specialized, feature-rich, and complex power management components are proliferating. Similarly, at the software layer, complex power management suites are growing rapidly. Concurrently, the integration of third-party components has surged to counter the pressure of shorter time-to-market. These trends collectively raise serious concerns about the trust and security of power management solutions.
In recent times, problems such as overheating, performance degradation, and poor battery life have dogged the mobile device market, including the infamous recall of the Samsung Note 7. A power outage in the data center of a major airline left innumerable passengers stranded, with thousands of canceled flights costing over 100 million dollars. This research examines whether such unintentional reliability failures can be replicated through targeted attacks that exploit security loopholes in the complex power management infrastructure of a computing system.
At its core, this research answers an imminent question: how can system designers ensure the secure and reliable operation of third-party power management units? Specifically, this work investigates possible attack vectors, along with novel non-invasive detection and defense mechanisms that safeguard systems against malicious power attacks. By jointly exploring the threat model and techniques to seamlessly detect and protect against power attacks, this project can have a lasting impact by enabling the design of secure and cost-effective next-generation hardware platforms.
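The kind of non-invasive detection alluded to above can be illustrated with a toy sketch: a moving-average detector that flags power samples deviating sharply from recent history. The function name, window size, and threshold here are hypothetical illustrations, not the mechanism proposed in the work itself.

```python
from collections import deque

def detect_power_anomalies(samples, window=8, threshold=1.5):
    """Flag sample indices whose power draw exceeds the moving
    average of the previous `window` samples by more than
    `threshold` times (a hypothetical detection rule)."""
    history = deque(maxlen=window)
    anomalies = []
    for i, watts in enumerate(samples):
        if len(history) == window:
            avg = sum(history) / window
            if watts > threshold * avg:
                anomalies.append(i)
        history.append(watts)
    return anomalies

# A steady ~10 W trace with one sudden spike, as a malicious
# power virus might cause; the spike at index 8 is flagged.
trace = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9, 25.0, 10.0]
print(detect_power_anomalies(trace))  # [8]
```

A real detector would read hardware energy counters rather than a fixed list, but the thresholding idea is the same.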
Mobile graphics: SIGGRAPH Asia 2017 course
Peer Reviewed. Postprint (published version).
Automatic Test Systems for NG-PON2 Transceivers
Optical communications have played a fundamental role in connecting people
worldwide. More than ever, there is an incessant need to make technology
more ubiquitous.
With recent advancements in optical technology, it has become possible
to keep up with the demand for higher upstream and downstream transmission
rates and higher bandwidth while still guaranteeing Quality of Service (QoS)
among numerous users.
This emerging need has driven telecommunication companies to innovate in
the development of optical equipment that addresses these requirements.
For this to happen, good quality control, calibration, and testing of
produced parts are of paramount importance.
The work of this dissertation focuses on improving and adding functionality
to a test board designed to measure Bit Error Ratio (BER) levels and to
calibrate and maintain parts according to the newest optical standard,
New Gigabit Passive Optical Network 2 (NGPON2), which operates at maximum
rates of 10 Gb/s per channel.
In the first part of this work, emphasis is given to the development of a
slave Inter-Integrated Circuit (I2C) module that connects the test board
to the user, supplying BER values measured by a block dedicated to
measuring BER levels. Later, the same module will allow access to all
micro-controllers of the test board, enabling calibration functions.
In the second part, transceivers of different Field Programmable Gate
Arrays (FPGAs) are characterized through an eye-diagram analysis and,
where possible, 10 Gb/s continuous-mode tests whose BER curves assess
their response.
Finally, all transceivers are compared; the responses obtained, along with
all the respective results, will contribute to the source project of the
automatic test board developed at PICadvanced, with the intent of
evaluating the production of 10 Gigabit Small Form Factor Pluggables (XFP).
Mestrado em Engenharia Eletrónica e Telecomunicações (Master's in
Electronics and Telecommunications Engineering).
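As a rough illustration of the BER measurements described above, the sketch below compares a transmitted and a received bit pattern, and also evaluates the textbook relation BER ≈ ½·erfc(Q/√2) that links BER to the Q-factor read from an eye diagram. This is a generic sketch, not code from the PICadvanced test board.

```python
import math

def bit_error_ratio(sent, received):
    """BER = number of errored bits / total bits compared."""
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / len(sent)

def ber_from_q(q):
    """Textbook estimate linking eye-diagram Q-factor to BER:
    BER ~= 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2))

sent     = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
received = [0, 1, 0, 0, 1, 0, 0, 1, 1, 1]  # two bit flips
print(bit_error_ratio(sent, received))      # 0.2
print(ber_from_q(7.0))                      # ~1.3e-12
```

A hardware BER block does the same comparison against a known pseudo-random bit sequence, accumulating errors over far longer captures than this toy example.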
Systemunterstützung für moderne Speichertechnologien (System Support for Modern Memory Technologies)
Trust and scalability are the two significant factors that impede the dissemination of clouds.
The possibility of privileged access to customer data by a cloud provider limits the use of clouds for processing security-sensitive data.
Low-latency cloud services rely on in-memory computation and are thus limited by several characteristics of Dynamic RAM (DRAM), such as capacity, density, and energy consumption.
Two technological areas address these factors.
Mainstream server platforms, such as Intel Software Guard eXtensions (SGX) and AMD Secure Encrypted Virtualisation (SEV), offer extensions for trusted execution in untrusted environments.
Various technologies of Non-Volatile RAM (NV-RAM) have better capacity and density than DRAM and can therefore be considered DRAM alternatives in the future.
However, these technologies and extensions require new programming approaches and system support, since they add features to the system architecture: new system components (Intel SGX) and data persistence (NV-RAM).
This thesis is devoted to the programming and architectural aspects of persistent and trusted systems.
For trusted systems, an in-depth analysis of the new architectural extensions was performed.
A novel framework named EActors and a database engine named STANlite were developed to effectively use the capabilities of trusted execution.
For persistent systems, an in-depth analysis of prospective memory technologies, their features, and their possible impact on system architecture was performed.
A new persistence model, called the hypervisor-based model of persistence, was developed and evaluated with the NV-Hypervisor.
It offers transparent persistence for legacy and proprietary software and supports virtualisation of persistent memory.
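The persistence concern NV-RAM raises can be illustrated with an ordinary-file analogue: an update is durable only once it has been flushed to stable storage and atomically published. The sketch below uses the classic write-temp-file, fsync, rename pattern; it is a generic illustration of the transparent-persistence goal, not the NV-Hypervisor's actual mechanism.

```python
import os
import tempfile

def persist_atomically(path, data: bytes):
    """Durably replace the contents of `path`:
    1. write to a temp file in the same directory,
    2. fsync so the bytes reach stable storage,
    3. rename over the target, which POSIX makes atomic."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        # Readers see either the old or the new state, never a mix.
        os.rename(tmp, path)
    except BaseException:
        os.unlink(tmp)
        raise

persist_atomically("state.bin", b"checkpoint-1")
print(open("state.bin", "rb").read())  # b'checkpoint-1'
```

With byte-addressable NV-RAM, the same ordering problem reappears at cache-line granularity, which is exactly why new system support is needed.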
Dependable Embedded Systems
This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, most of which have emerged within the last five years. It presents the most prominent reliability concerns from today's point of view and briefly recapitulates the progress in the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or the system level alone, this book addresses reliability challenges across levels, from the physical level all the way to the system level (cross-layer approaches). It aims to demonstrate how new hardware/software co-design solutions can effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, and soft errors. The book provides readers with the latest insights into novel cross-layer methods and models for the dependability of embedded systems; describes cross-layer approaches that improve reliability through techniques pro-actively designed with respect to techniques at other layers; and explains run-time adaptation and concepts of self-organization for achieving error resiliency in complex, future many-core systems.
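One classic soft-error mitigation from the cross-layer toolbox named above is triple modular redundancy (TMR): execute a computation three times and majority-vote the results so a single corrupted replica is masked. The sketch below is a generic illustration of the technique, not code from this specific book.

```python
def tmr(replicas):
    """Majority vote over three redundant results; one faulty
    replica is outvoted by the two agreeing ones."""
    a, b, c = replicas
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("no majority: replicas pairwise disagree")

def run_redundantly(f, x):
    """Run f three times (in a real system: on three independent
    hardware units) and vote on the outputs."""
    return tmr([f(x), f(x), f(x)])

# A replica hit by a simulated bit flip (42 ^ 0x04 = 46) is masked:
print(tmr([42, 42, 42 ^ 0x04]))  # 42
```

TMR triples the hardware or time cost, which is why cross-layer designs apply it selectively to the most critical state.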