Flexible Hardware-based Security-aware Mechanisms and Architectures
For decades, software security has been the primary focus in securing our computing platforms. Hardware was always assumed trusted and inherently served as the foundation, and thus the root of trust, of our systems. This assumption has been further leveraged to develop hardware-based dedicated security extensions and architectures that protect software from attacks exploiting software vulnerabilities such as memory corruption. However, the recent outbreak of microarchitectural attacks has entirely shaken these long-established trust assumptions in hardware, thereby threatening the security of all of our computing platforms and bringing hardware and microarchitectural security under scrutiny. These attacks have undeniably revealed the grave consequences of hardware/microarchitectural security flaws for the security of the entire platform, and how they can even subvert the security guarantees promised by dedicated security architectures. Furthermore, they shed light on the sophisticated challenges particular to hardware/microarchitectural security: it is more critical (and more challenging) to extensively analyze the hardware for security flaws prior to production, since hardware, unlike software, cannot be patched or updated once fabricated.
Hardware can no longer reliably serve as the root of trust unless we develop and adopt new design paradigms in which security is proactively addressed and scrutinized across the full stack of our computing platforms, at all hardware design and implementation layers. Furthermore, novel flexible security-aware design mechanisms must be incorporated into processor microarchitectures and hardware-assisted security architectures to practically address the inherent conflict between performance and security, by allowing the trade-off to be configured to match the desired requirements.
In this thesis, we investigate the prospects and implications at the intersection of hardware and security that emerge across the full stack of our computing platforms and Systems-on-Chip (SoCs). On one front, we investigate how we can leverage hardware and its advantages over software to build more efficient and effective security extensions that serve security architectures, e.g., by providing execution attestation and enforcement, to protect software from attacks exploiting software vulnerabilities. We further propose making them microarchitecturally configurable at runtime to provide different types of security services, thus adapting flexibly to different deployment requirements. On another front, we investigate how we can protect these hardware-assisted security architectures and extensions themselves from microarchitectural and software attacks that exploit design flaws originating in the hardware, e.g., insecure resource sharing in SoCs. In particular, we focus on cache-based side-channel attacks and propose cache designs that fundamentally mitigate these attacks while preserving performance, by making the performance/security trade-off configurable by design. We also investigate how these designs can be incorporated into flexible and customizable security architectures, thus complementing them to further support a wide spectrum of emerging applications with different performance/security requirements. Lastly, we inspect our computing platforms further beneath the design layer, scrutinizing how the actual implementation of these mechanisms is yet another potential attack surface. We explore how the security of hardware designs and implementations is currently analyzed prior to fabrication, shedding light on how state-of-the-art hardware security analysis techniques are fundamentally limited, and on the potential for improved and scalable approaches.
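To make the configurable performance/security trade-off concrete, here is a minimal toy model of one well-known mitigation style, cache way-partitioning by security domain. The class name, domain labels, and way counts are illustrative assumptions, not the thesis's actual designs.

```python
# Toy way-partitioned set-associative cache: each security domain is
# confined to its own ways, so one domain cannot evict another's lines,
# removing the cross-domain eviction interference that cache
# side-channel attacks such as Prime+Probe rely on. The number of ways
# granted per domain is the configurable performance/security knob.
class PartitionedCache:
    def __init__(self, n_sets=64, ways_per_domain=None):
        # illustrative default: 6 ways for trusted code, 2 for untrusted
        self.capacity = ways_per_domain or {"trusted": 6, "untrusted": 2}
        self.n_sets = n_sets
        # per domain, per set: list of resident tags in LRU order
        self.lines = {d: [[] for _ in range(n_sets)] for d in self.capacity}

    def access(self, domain, addr):
        tags = self.lines[domain][addr % self.n_sets]
        if addr in tags:          # hit: move tag to MRU position
            tags.remove(addr)
            tags.append(addr)
            return "hit"
        if len(tags) >= self.capacity[domain]:
            tags.pop(0)           # evict this domain's LRU line only
        tags.append(addr)
        return "miss"

cache = PartitionedCache()
cache.access("trusted", 0x1000)
# The untrusted domain sees a miss: it cannot probe trusted lines.
print(cache.access("untrusted", 0x1000))  # -> "miss"
```

Shrinking or growing a domain's way budget trades isolation overhead against hit rate, which is exactly the kind of design-time or runtime knob the abstract argues for.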
The International Handbook of Social Enterprise Law
This open-access book brings together international experts who shed new light on the status of social enterprises, benefit corporations and other purpose-driven companies. The respective chapters take a multidisciplinary approach (combining law, philosophy, history, sociology and economics) and provide valuable insights on fostering social entrepreneurship and advancing the common good. In recent years, we have witnessed a significant shift in how business activities are conducted, mainly through the rise of social enterprises. In an effort to target social problems at their roots, social entrepreneurs create organizations that bring about transformative social change by considering, among others, ethical, social, and environmental factors. A variety of social enterprise models are emerging internationally and are proving their vitality and importance. But what does the term “social enterprise” mean? What are its roots? And how does it work in practice within the legal framework of any country? This handbook attempts to answer these questions from a theoretical, historical, and comparative perspective, bringing together 44 contributions written by 71 expert researchers and practitioners in this field. The first part provides an overview of the social enterprise movement, its evolution, and the different forms entities can take to meet global challenges, overcoming the limits of what governments and states can do. The second part focuses on the emergence of benefit corporations and the growing importance of sustainability and societal values, while also analyzing their different legal forms and adaptation to their regulatory environment. In turn, the last part presents the status quo of purpose-driven companies in 36 developed and emerging economies worldwide. This handbook offers food for thought and guidance for everyone interested in this field. It will benefit practitioners and decision-makers involved in social and community organizations, as well as those working in international development and, more generally speaking, the social sciences and economics.
External Nonparametric Memory in Deep Learning
Deep Neural Networks are limited in their ability to access and manipulate external knowledge after training. This capability is desirable: information access can be localized for interpretability, the external information itself may be modified, improving editability, and external systems can be used for retrieval and storage, freeing up internal parameters that would otherwise be required to memorize knowledge. This dissertation presents three approaches that augment deep neural networks with various forms of external memory, achieving state-of-the-art results across multiple benchmarks and sub-fields.
First, we examine the limits of retrieval alone in the sample-efficient Reinforcement Learning (RL) setting. We propose a method, NAIT, that is purely memory-based but achieves performance comparable with the best neural models on the ATARI100k benchmark. Because NAIT does not use parametric function approximation, and instead approximates only locally, it is extremely computationally efficient, reducing the run-time for a full sweep over ATARI100k from days to minutes. NAIT provides a strong counterpoint to the prevailing notion that retrieval-based lazy learning approaches are too slow to be practically useful in RL.
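As a rough illustration of what "purely memory-based, locally approximating" means, here is a minimal k-nearest-neighbour action-value sketch; the class and method names are hypothetical, and this is not NAIT's actual algorithm.

```python
import numpy as np

class NonParametricQ:
    """Lazy value estimator: Q(s, a) is the mean return of the k
    nearest stored states for action a; no parameters are trained."""

    def __init__(self, n_actions, k=5):
        self.k = k
        self.keys = [[] for _ in range(n_actions)]     # state embeddings
        self.returns = [[] for _ in range(n_actions)]  # observed returns

    def store(self, state, action, ret):
        self.keys[action].append(np.asarray(state, dtype=np.float32))
        self.returns[action].append(float(ret))

    def value(self, state, action):
        keys = self.keys[action]
        if not keys:
            return 0.0  # neutral default for actions never taken here
        dists = np.linalg.norm(np.stack(keys) - state, axis=1)
        nearest = np.argsort(dists)[: self.k]
        return float(np.mean(np.asarray(self.returns[action])[nearest]))

    def act(self, state):
        # greedy action under the local, lazy estimate
        return int(np.argmax([self.value(state, a)
                              for a in range(len(self.keys))]))
```

Because there is no gradient step, "learning" is just appending to the memory, which is why a full benchmark sweep can run orders of magnitude faster than training a network.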
Next, we combine the promising non-parametric retrieval approach of NAIT with large image and text encoders for the task of Long-Tail Visual Recognition. This method, Retrieval Augmented Classification (RAC), achieves state-of-the-art performance on the highly competitive long-tail datasets iNaturalist2018 and Places365-LT. This work is one of the first systems to effectively combine parametric and non-parametric approaches in Computer Vision. Most promisingly, we observe that RAC's retrieval component achieves its highest per-class accuracies on sparse, infrequent classes, indicating that non-parametric memory is an effective mechanism for modelling the `long-tail' of world knowledge.
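A minimal sketch of the general parametric-plus-retrieval pattern follows; the function name and the simple linear blend are illustrative assumptions rather than RAC's actual (learned) fusion.

```python
import numpy as np

def retrieval_augmented_predict(query_emb, logits, mem_embs, mem_labels,
                                n_classes, k=10, alpha=0.5):
    """Blend a parametric classifier's distribution with a
    non-parametric vote from the k nearest items in memory.
    mem_embs: (N, d) array; mem_labels: (N,) int array."""
    # cosine similarity of the query against the retrieval memory
    sims = mem_embs @ query_emb / (
        np.linalg.norm(mem_embs, axis=1) * np.linalg.norm(query_emb) + 1e-8)
    nearest = np.argsort(-sims)[:k]
    # class distribution implied by the retrieved neighbours' labels
    votes = np.bincount(mem_labels[nearest], minlength=n_classes) / k
    # softmax over the parametric logits
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(np.argmax(alpha * probs + (1 - alpha) * votes))
```

For a rare class with only a handful of training images, the neighbour vote can dominate even when the parametric head is undertrained, which matches the reported behaviour on infrequent classes.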
Finally, we move beyond standard single-step retrieval and investigate multi-step retrieval over graphs of sentences for the task of Reading Comprehension. We first propose a mechanism to effectively construct such graphs from collections of documents, and then learn a general, query-conditioned traversal policy over them. We demonstrate that combining this retriever with existing models both consistently boosts accuracy and reduces training time by 2-3x.
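To give a feel for multi-step retrieval, here is a greedy traversal sketch over a sentence graph; the thesis learns the traversal policy, whereas this stand-in simply follows query similarity, and all names are hypothetical.

```python
import numpy as np

def traverse(graph, embed, query_vec, start, max_hops=3):
    """Multi-hop retrieval: from `start`, repeatedly move to the
    unvisited neighbouring sentence most similar to the query.
    graph: {sentence_id: [neighbour ids]}; embed: {sentence_id: vector}."""
    path, node, visited = [start], start, {start}
    for _ in range(max_hops):
        candidates = [n for n in graph.get(node, []) if n not in visited]
        if not candidates:
            break
        scores = [float(embed[c] @ query_vec) for c in candidates]
        node = candidates[int(np.argmax(scores))]
        visited.add(node)
        path.append(node)
    return path  # the chain of sentences handed to the reader model
```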
Data acquisition for Germanium-detector arrays
The conversion of analog to digital signals and the subsequent online/offline processing is the technological prerequisite for numerous experiments. So-called analog-to-digital converters (ADCs) and FPGAs (field-programmable gate arrays) are commonly used for these tasks. This thesis describes the evaluation of the FPGA and ADC components for the planned FlashCAM 2.0 DAQ (FC2.0 DAQ). Development of the first FlashCAM (1.0) DAQ (FC1.0 DAQ) began in 2012 under the lead of the Max-Planck-Institut für Kernphysik and was originally an exclusive development for the Cherenkov Telescope Array (CTA) experiment. FlashCAM is meanwhile deployed in numerous experiments (HESS, HAWK, LEGEND-200, etc.) covering both photomultiplier tubes (PMTs) and High Purity Germanium (HPGe) detectors. The two detector types differ massively in their requirements, and both are to be covered by the new DAQ as well.
The thesis spans the entire functional scope of a modern DAQ. Modern DAQ systems require the highest possible read-out performance between the DAQ board and the server controlling it. The implementation of a high-performance firmware design, and the design of a hardware/software interface tailored to it, is presented using the Zynq family as an example. The Zynq family from Xilinx is of particular interest because the hardware manufacturer Trenz Elektronik offers a flexible, simply pluggable module concept with various SoCs of the Zynq series. Besides the read-out performance of a DAQ, its resolution limit is of decisive importance for the success of the final experiment. The FADC card used must therefore exhibit excellent SNR and linearity characteristics. Evaluating such FADC cards requires a test setup whose signal purity and stability exceed the high requirements of the devices under test. In practice, these conditions can only be met at great (cost) expense. Within this thesis, alternative test concepts were therefore also developed that, with acceptable compromises in accuracy, can enable measurements in an experimental environment. Since the two topics differ considerably in content, this thesis is divided into two parts. The first part deals with the use of the Zynq family in the planned FlashCAM successor DAQ. The second part is devoted to the determination of ADC nonlinearity.
The main results of the thesis can be summarized as follows:
▪ The "High Performance" (HP) interfaces of the Zynq UltraScale+ provide a dropout-free bandwidth of 2.4 GB/s into the external memory of the Trenz modules. If the 1 Gb PS Ethernet connection present by default is operated in addition, the CPU still retains a bandwidth of at least 0.5 GB/s into the external memory. For the Zynq-7000 series, an efficient implementation of the HP interfaces is difficult, since its CPU only achieves comparatively low memory access rates. The HP interfaces are an important design alternative, because continuous data transfer into external memory would enable a design that is less constrained by the available FPGA-internal memory. This would be particularly desirable for applications in HPGe spectroscopy, where the practical usefulness of the design employed depends strongly on the available buffer size.
▪ The "Accelerator Coherency Port" (ACP) enables direct data transfer from the FPGA into the cache of the Zynq CPU. The ACP-CMA designed here achieves a bandwidth of up to 2.4 GB/s and still leaves sufficient headroom for cache-CPU accesses. It is crucial that the Zynq CPU can process the cached data without stalling the ACP-CMA; otherwise, the CPU could not handle the preparatory work for the Ethernet transfer ("event building") when Ethernet and the ACP-CMA operate in parallel. The evaluation measured a maximum event-building bandwidth of 0.7 GB/s; the true maximum bandwidth is probably considerably higher. It must be stressed, however, that in practical applications additional constraints come into play that de facto make continuous operation of the ACP-CMA impossible. These constraints, which are not of a fundamental nature, were not considered in the measurements performed. Since all Zynq FPGAs have a cache, the ACP-CMA is a design solution that can be meaningfully implemented on every available Zynq FPGA. This distinguishes it from the HP-DMA developed here, which is often only of interest for implementations on a Zynq UltraScale FPGA.
▪ The newly developed FC2.0 prototype has already been deployed in experimental setups. As an application example, the measurement and analysis of a γ-ray spectrum from an HPGe detector is presented.
▪ The success of an ADC nonlinearity determination depends strongly on the signal purity of the input signal used. Simulations showed that the newly developed methods are only relatively weakly biased by pulser nonlinearities. A practical comparison between the new methods and a classical method found no significant difference. The investigated methods can therefore be recommended for a future implementation in FC2.0.
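For orientation, here is a sketch of the classical histogram (code-density) approach that nonlinearity measurements of this kind are typically compared against; this is a generic textbook procedure, not the thesis's newly developed technique.

```python
import numpy as np

def dnl_inl_from_ramp(codes, n_bits=12):
    """Code-density DNL/INL estimate from a slow, full-scale ramp:
    an ideal ADC hits every code equally often, so each code's
    deviation from the mean hit count gives the DNL in LSB, and the
    running sum of the DNL gives the INL."""
    n_codes = 2 ** n_bits
    hist = np.bincount(np.asarray(codes), minlength=n_codes).astype(float)
    # drop the end codes, which also collect out-of-range samples
    inner = hist[1:-1]
    dnl = inner / inner.mean() - 1.0
    inl = np.cumsum(dnl)
    return dnl, inl
```

The method's weakness is exactly the one named in the last bullet above: any nonlinearity or impurity in the ramp/pulser signal shows up directly in the histogram and is indistinguishable from ADC nonlinearity.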
Collected Papers (on Physics, Artificial Intelligence, Health Issues, Decision Making, Economics, Statistics), Volume XI
This eleventh volume of Collected Papers includes 90 papers comprising 988 pages on Physics, Artificial Intelligence, Health Issues, Decision Making, Economics, Statistics, written between 2001 and 2022 by the author alone or in collaboration with the following 84 co-authors (alphabetically ordered) from 19 countries: Abhijit Saha, Abu Sufian, Jack Allen, Shahbaz Ali, Ali Safaa Sadiq, Aliya Fahmi, Atiqa Fakhar, Atiqa Firdous, Sukanto Bhattacharya, Robert N. Boyd, Victor Chang, Victor Christianto, V. Christy, Dao The Son, Debjit Dutta, Azeddine Elhassouny, Fazal Ghani, Fazli Amin, Anirudha Ghosha, Nasruddin Hassan, Hoang Viet Long, Jhulaneswar Baidya, Jin Kim, Jun Ye, Darjan Karabašević, Vasilios N. Katsikis, Ieva Meidutė-Kavaliauskienė, F. Kaymarm, Nour Eldeen M. Khalifa, Madad Khan, Qaisar Khan, M. Khoshnevisan, Kifayat Ullah, Volodymyr Krasnoholovets, Mukesh Kumar, Le Hoang Son, Luong Thi Hong Lan, Tahir Mahmood, Mahmoud Ismail, Mohamed Abdel-Basset, Siti Nurul Fitriah Mohamad, Mohamed Loey, Mai Mohamed, K. Mohana, Kalyan Mondal, Muhammad Gulfam, Muhammad Khalid Mahmood, Muhammad Jamil, Muhammad Yaqub Khan, Muhammad Riaz, Nguyen Dinh Hoa, Cu Nguyen Giap, Nguyen Tho Thong, Peide Liu, Pham Huy Thong, Gabrijela Popović, Surapati Pramanik, Dmitri Rabounski, Roslan Hasni, Rumi Roy, Tapan Kumar Roy, Said Broumi, Saleem Abdullah, Muzafer Saračević, Ganeshsree Selvachandran, Shariful Alam, Shyamal Dalapati, Housila P. Singh, R. Singh, Rajesh Singh, Predrag S. Stanimirović, Kasan Susilo, Dragiša Stanujkić, Alexandra Şandru, Ovidiu Ilie Şandru, Zenonas Turskis, Yunita Umniyati, Alptekin Ulutaș, Maikel Yelandi Leyva Vázquez, Binyamin Yusoff, Edmundas Kazimieras Zavadskas, Zhao Loon Wang.
Online learning on the programmable dataplane
This thesis makes the case for managing computer networks with data-driven methods, i.e., automated statistical inference and control based on measurement data and runtime observations, and argues for their tight integration with programmable dataplane hardware to make management decisions faster and from more precise data. Optimisation, defence, and measurement of networked infrastructure are each challenging tasks in their own right, and are currently dominated by hand-crafted heuristic methods. These become harder to reason about and deploy as networks scale in rates and number of forwarding elements, and their design requires expert knowledge and care around unexpected protocol interactions. This makes tailored, per-deployment or per-workload solutions infeasible to develop. Recent advances in machine learning offer capable function approximation and closed-loop control which suit many of these tasks. New, programmable dataplane hardware enables more agility in the network: runtime reprogrammability, precise traffic measurement, and low-latency on-path processing. The synthesis of these two developments allows complex decisions to be made on previously unusable state, and made quicker by offloading inference to the network.
To justify this argument, I advance the state of the art in data-driven defence of networks, novel dataplane-friendly online reinforcement learning algorithms, and in-network data reduction to allow classification of switch-scale data. Each requires co-design aware of the network, and of the failure modes of systems and carried traffic. To make online learning possible in the dataplane, I use fixed-point arithmetic and modify classical (non-neural) approaches to take advantage of the SmartNIC compute model and make use of rich device-local state. I show that data-driven solutions still require great care to design correctly, but with the right domain expertise they can improve on pathological cases in DDoS defence, such as protecting legitimate UDP traffic. In-network aggregation into histograms is shown to enable accurate classification from fine temporal effects, and allows hosts to scale such classification to far larger flow counts and traffic volumes. Moving reinforcement learning to the dataplane is shown to offer substantial benefits to state-action latency and online learning throughput versus host machines, allowing policies to react faster to fine-grained network events. The dataplane environment is key in making reactive online learning feasible; to port further algorithms and learnt functions, I collate and analyse the strengths of current and future hardware designs, as well as individual algorithms.
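As a flavour of what online learning looks like without floating-point hardware, here is a tabular Q-learning update written entirely in integer arithmetic; the Q16.16 format and all names are illustrative assumptions, not the thesis's exact design.

```python
# Q-learning update in Q16.16 fixed point, emulating the integer-only
# add/shift/multiply operations available on dataplane hardware.
FRAC_BITS = 16
ONE = 1 << FRAC_BITS

def to_fix(x: float) -> int:
    return int(round(x * ONE))

def fix_mul(a: int, b: int) -> int:
    return (a * b) >> FRAC_BITS   # rescale the product back to Q16.16

def q_update(q, state, action, reward_fx, next_state,
             alpha_fx=to_fix(0.1), gamma_fx=to_fix(0.9)):
    """One tabular Q-learning step using only integer arithmetic."""
    best_next = max(q[next_state])                  # max_a' Q(s', a')
    target = reward_fx + fix_mul(gamma_fx, best_next)
    td_error = target - q[state][action]
    q[state][action] += fix_mul(alpha_fx, td_error)

# usage: a tiny 2-state, 2-action table, all entries in Q16.16
q = [[0, 0], [0, 0]]
q_update(q, state=0, action=1, reward_fx=to_fix(1.0), next_state=1)
print(q[0][1] / ONE)  # ~0.1 after one update
```

Keeping every quantity in a fixed integer format is what lets the update run at line rate on match-action or SmartNIC pipelines, at the cost of bounded precision.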
Web-IDE for Low-Code Development in OutSystems
Due to the growing popularity of cloud computing and its numerous benefits, many desktop applications have been, and will continue to be, migrated into the cloud and made available through the web. These applications can then be accessed through any device that has a browser and an internet connection, eliminating the need for installation or dependency management. Moreover, the process of introduction to the product becomes much simpler and faster, and collaboration is facilitated.
OutSystems is a company that provides software that enables users, through an Integrated Development Environment (IDE) and a specific Low-Code language, to securely and rapidly build robust applications. However, only desktop versions of this IDE are available. For this reason, the objective of the proposed thesis is to understand the best path for developing a Web-based version of the IDE.
To achieve this, it is important not only to understand the OutSystems Platform and, more specifically, the architecture of the Service Studio IDE, which is the main IDE component provided by the product, but also to explore state-of-the-art technologies that could prove beneficial for the development of the project.
The goal of this work is to debate different architectural possibilities for implementing the project in question and to conclude what the adequate course of action is, given the context of the problem. After identifying the biggest uncertainties and most relevant points, a proof of concept is presented, accompanied by the respective implementation details.
Finally, this work intends to determine a viable technological architecture for building a Web-based IDE that maintains performance comparable to the Service Studio IDE, while also ensuring that the system is scalable, in order to serve a large number of users. That is to say, to present a conclusion regarding the feasibility of the proposed project.