Serverless Cloud Computing: A Comparative Analysis of Performance, Cost, and Developer Experiences in Container-Level Services
Serverless cloud computing is a subset of cloud computing widely adopted for building modern web applications, in which the underlying server and infrastructure management duties are shifted from customers to cloud vendors. In serverless computing, customers pay only for the runtime their services consume and are exempt from paying for idle time. Before serverless containers, customers had to provision, scale, and manage servers themselves, which was a bottleneck for rapidly growing customer-facing applications where latency and scaling were a concern.
This thesis studies the viability of adopting a serverless platform for a web application in terms of performance, cost, and developer experience. Three serverless container-level services from AWS and GCP are employed: GCP Cloud Run, GKE AutoPilot, and AWS EKS with AWS Fargate. The first is underpinned by Platform as a Service (PaaS), the other two by Container as a Service (CaaS). A single-page web application was created to run incremental and spike load tests against these services and assess their performance differences. The cost differences are then compared and analyzed. Finally, developer experience is evaluated through the complexity of using each service during the project implementation.
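The two load shapes mentioned above can be sketched as simple requests-per-second schedules: an incremental test ramps linearly to a peak, while a spike test jumps to the peak for a short burst. The function, its parameters, and the exact shapes below are illustrative assumptions, not the thesis's actual test configuration.

```python
def load_profile(kind, peak_rps, duration_s):
    """Requests-per-second schedule for the two test shapes:
    'incremental' ramps linearly to the peak; 'spike' holds a low
    baseline except for a burst in the middle third of the run."""
    if kind == "incremental":
        # Linear ramp from peak/duration up to the peak rate.
        return [round(peak_rps * (t + 1) / duration_s) for t in range(duration_s)]
    if kind == "spike":
        # Baseline of 1 rps, with the peak applied in the middle third.
        burst = range(duration_s // 3, 2 * duration_s // 3)
        return [peak_rps if t in burst else 1 for t in range(duration_s)]
    raise ValueError(kind)

ramp = load_profile("incremental", peak_rps=100, duration_s=10)
spike = load_profile("spike", peak_rps=100, duration_s=10)
```

Driving such schedules against each service and recording per-second latency is one simple way to compare how PaaS and CaaS offerings absorb gradual versus sudden load.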
Based on the results of this research, PaaS-based solutions were found to be a high-performing, affordable alternative to CaaS-based solutions in circumstances where high levels of traffic are periodically anticipated and occasional latency spikes are not a concern. Given the limitations of this study, the author recommends additional research to strengthen its findings.
Automatic Generation of Personalized Recommendations in eCoaching
This thesis addresses eCoaching for real-time personal lifestyle support using information and communication technology. The challenge is to design, develop, and technically evaluate a prototype of an intelligent eCoach that automatically generates personalized, evidence-based recommendations for a better lifestyle. The developed solution focuses on improving physical activity. The prototype uses wearable medical activity sensors. The collected data are semantically represented, and artificial-intelligence algorithms automatically generate meaningful, personalized, and context-based recommendations for reducing sedentary time. The thesis applies the well-established design-science research methodology to develop theoretical foundations and practical implementations. Overall, this research focuses on technological verification rather than clinical evaluation.
Security Technologies and Methods for Advanced Cyber Threat Intelligence, Detection and Mitigation
The rapid growth of Internet interconnectivity and the complexity of communication systems have led to a significant increase in cyberattacks globally, often with severe and disastrous consequences. The swift development of more innovative and effective (cyber)security solutions and approaches that can detect, mitigate, and prevent these serious consequences is vital. Cybersecurity is gaining momentum and scaling up in many areas. This book builds on the experience of the Cyber-Trust EU project's methods, use cases, technology development, testing, and validation, and extends into broader science, the leading IT industry market, and applied research with practical cases. It offers new perspectives on advanced (cyber)security innovation (eco)systems from several key viewpoints. The book provides insights into new security technologies and methods for advanced cyber threat intelligence, detection, and mitigation. Topics covered include cybersecurity and AI, cyber-threat intelligence, digital forensics, moving target defense, intrusion detection systems, post-quantum security, privacy and data protection, security visualization, smart contracts security, software security, blockchain, security architectures, system and data integrity, trust management systems, distributed systems security, dynamic risk management, and privacy and ethics.
No-Regret Caching with Noisy Request Estimates
Online learning algorithms have been successfully used to design caching policies with regret guarantees. Existing algorithms assume that the cache knows the exact request sequence, but this may not be feasible in high-load and/or memory-constrained scenarios, where the cache may have access only to sampled requests or to approximate request counters. In this paper, we propose the Noisy-Follow-the-Perturbed-Leader (NFPL) algorithm, a variant of the classic Follow-the-Perturbed-Leader (FPL) for noisy request estimates, and we show that the proposed solution has sublinear regret under specific conditions on the request estimator. The experimental evaluation compares the proposed solution against classic caching policies and validates the proposed approach under both synthetic and real request traces.
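The FPL-style update at the core of such a policy can be sketched in a few lines: perturb the (possibly noisy) cumulative request counters and cache the files with the highest perturbed counts. The Gaussian perturbation and all names below are illustrative assumptions; the paper's exact perturbation distribution and the conditions on the estimator are not reproduced here.

```python
import random

def nfpl_cache(request_estimates, cache_size, eta=1.0, seed=0):
    """One FPL-style cache update over noisy counters: add i.i.d.
    perturbations to the estimated cumulative request counts and
    cache the top-`cache_size` files by perturbed count.

    request_estimates: dict mapping file id -> estimated cumulative
    request count (possibly noisy, e.g. built from sampled requests).
    """
    rng = random.Random(seed)
    perturbed = {
        f: c + rng.gauss(0.0, eta)  # additive perturbation, as in classic FPL
        for f, c in request_estimates.items()
    }
    ranked = sorted(perturbed, key=perturbed.get, reverse=True)
    return set(ranked[:cache_size])

# Toy run: noisy counters over five files, cache of size 2.
estimates = {"a": 10.2, "b": 7.9, "c": 3.1, "d": 9.8, "e": 1.0}
cache = nfpl_cache(estimates, cache_size=2)
```

Re-running the update after each batch of (noisy) observations yields the sequence of cache configurations whose regret the paper analyzes.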
Flexible Hardware-based Security-aware Mechanisms and Architectures
For decades, software security has been the primary focus in securing our computing platforms. Hardware was always assumed trusted, and inherently served as the foundation, and thus the root of trust, of our systems. This has been further leveraged in developing hardware-based dedicated security extensions and architectures to protect software from attacks exploiting software vulnerabilities such as memory corruption. However, the recent outbreak of microarchitectural attacks has shaken these long-established trust assumptions in hardware entirely, thereby threatening the security of all of our computing platforms and bringing hardware and microarchitectural security under scrutiny. These attacks have undeniably revealed the grave consequences of hardware/microarchitecture security flaws to the entire platform security, and how they can even subvert the security guarantees promised by dedicated security architectures. Furthermore, they shed light on the sophisticated challenges particular to hardware/microarchitectural security; it is more critical (and more challenging) to extensively analyze the hardware for security flaws prior to production, since hardware, unlike software, cannot be patched/updated once fabricated.
Hardware cannot reliably serve as the root of trust anymore, unless we develop and adopt new design paradigms in which security is proactively addressed and scrutinized across the full stack of our computing platforms, at all hardware design and implementation layers. Furthermore, novel flexible security-aware design mechanisms must be incorporated into processor microarchitecture and hardware-assisted security architectures to practically address the inherent conflict between performance and security, by allowing the trade-off to be configured to match the desired requirements.
In this thesis, we investigate the prospects and implications at the intersection of hardware and security that emerge across the full stack of our computing platforms and Systems-on-Chip (SoCs). On one front, we investigate how we can leverage hardware and its advantages over software to build more efficient and effective security extensions that serve security architectures, e.g., by providing execution attestation and enforcement, to protect software from attacks exploiting software vulnerabilities. We further propose that these extensions be microarchitecturally configured at runtime to provide different types of security services, thus adapting flexibly to different deployment requirements. On another front, we investigate how we can protect these hardware-assisted security architectures and extensions themselves from microarchitectural and software attacks that exploit design flaws originating in the hardware, e.g., insecure resource sharing in SoCs. More particularly, we focus on cache-based side-channel attacks, proposing sophisticated cache designs that fundamentally mitigate these attacks while still preserving performance, by enabling the performance-security trade-off to be configured by design. We also investigate how these designs can be incorporated into flexible and customizable security architectures, complementing them to further support a wide spectrum of emerging applications with different performance/security requirements. Lastly, we inspect our computing platforms beneath the design layer, scrutinizing how the actual implementation of these mechanisms is yet another potential attack surface. We explore how the security of hardware designs and implementations is currently analyzed prior to fabrication, shedding light on how state-of-the-art hardware security analysis techniques are fundamentally limited, and on the potential for improved and scalable approaches.
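One concrete example of a cache design whose security/performance trade-off is fixed by configuration is way partitioning: each security domain fills only its own cache ways, so one domain can never evict another's lines, closing eviction-based side channels at the cost of effective capacity. The toy model below is an illustrative sketch under that assumption, not any specific design from the thesis.

```python
class PartitionedCache:
    """Toy set-associative cache whose ways are statically split between
    security domains. The per-domain way count is the configuration knob
    that trades capacity (performance) for isolation (security)."""

    def __init__(self, num_sets, ways_per_domain):
        self.num_sets = num_sets
        self.ways_per_domain = ways_per_domain   # domain -> number of ways
        self.lines = {}                          # (domain, set, way) -> tag

    def access(self, domain, addr):
        idx = addr % self.num_sets
        ways = self.ways_per_domain[domain]
        # Hit check is confined to the domain's own ways.
        for w in range(ways):
            if self.lines.get((domain, idx, w)) == addr:
                return "hit"
        # Miss: fill within the domain's own partition only, so a fill
        # never evicts another domain's line (no cross-domain channel).
        victim = addr % ways
        self.lines[(domain, idx, victim)] = addr
        return "miss"

cache = PartitionedCache(num_sets=64, ways_per_domain={"trusted": 6, "untrusted": 2})
assert cache.access("trusted", 0x40) == "miss"
assert cache.access("trusted", 0x40) == "hit"
assert cache.access("untrusted", 0x40) == "miss"  # separate partition: no hit
```

Making `ways_per_domain` reconfigurable at runtime is one simple way to picture the flexible, deployment-dependent trade-off the thesis argues for.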
Co-designing reliability and performance for datacenter memory
Memory is one of the key components affecting the reliability and performance of datacenter servers. Memory in today's servers is organized and shared in several ways to provide the most performant and efficient access to data: for example, cache hierarchies in multi-core chips reduce access latency, non-uniform memory access (NUMA) in multi-socket servers improves scalability, and disaggregation increases memory capacity. In all these organizations, hardware coherence protocols are used to maintain the consistency of this shared memory and implicitly move data to the requesting cores.
This thesis aims to provide fault tolerance against newer failure models in the organization of memory in datacenter servers. While designing for improved reliability, it explores solutions that can also enhance application performance. The solutions build on modern coherence protocols to achieve these properties.
First, we observe that DRAM memory system failure rates have increased, demanding stronger forms of memory reliability. To combat this, the thesis proposes Dvé, a hardware-driven replication mechanism in which data blocks are replicated across two different memory controllers in a cache-coherent NUMA system. Data blocks are accompanied by a code with strong error-detection capabilities, so that when an error is detected, correction is performed using the replica. Dvé's organization offers two independent points of access to data, which enables: (a) strong error correction that can recover from a range of faults affecting any of the components in the memory, and (b) higher performance by providing another, nearer point of memory access. Dvé's coherent replication keeps the replicas in sync for reliability and also provides coherent access to read replicas during fault-free operation for improved performance. Dvé can flexibly provide these benefits on demand at runtime.
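The detect-then-correct-from-replica idea can be modeled in a few lines of software: each copy carries a strong error-detecting code, reads take the preferred (nearer) copy on the fast path, and a detected error is repaired from the other replica. This is only a sketch of the hardware mechanism; CRC32 stands in for the strong detection code, and all names are illustrative.

```python
import zlib

def write_block(data: bytes):
    """Replicate a block across two 'memory controllers', each copy
    tagged with an error-detecting code (CRC32 as a stand-in)."""
    code = zlib.crc32(data)
    return [(data, code), (data, code)]

def read_block(replicas, prefer=0):
    """Read the preferred (nearer) replica; on detected corruption,
    correct it using the other replica, as in replica-based correction."""
    data, code = replicas[prefer]
    if zlib.crc32(data) == code:
        return data                              # fault-free fast path
    other, other_code = replicas[1 - prefer]
    if zlib.crc32(other) == other_code:
        replicas[prefer] = (other, other_code)   # repair the bad copy
        return other
    raise RuntimeError("uncorrectable: both replicas failed detection")

replicas = write_block(b"cache line")
replicas[0] = (b"cXche line", replicas[0][1])    # inject a fault in copy 0
assert read_block(replicas) == b"cache line"     # corrected via the replica
```

The fault-free fast path also hints at the performance benefit: a reader can be served by whichever of the two independent copies is nearer.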
Next, we observe that the coherence protocol itself must be hardened against failures. Memory in datacenter servers is being disaggregated from compute servers into dedicated memory servers, driven by standards like CXL. CXL specifies the coherence protocol semantics for compute servers to access and cache data from a shared region in the disaggregated memory. However, the CXL specification lacks the level of fault tolerance necessary to operate at inter-server scale within the datacenter. Compute servers can fail or become unresponsive, and it is therefore important that the coherence protocol remain available in the presence of such failures.
The thesis proposes Āpta, a CXL-based shared disaggregated memory system that keeps cached data consistent without compromising availability in the face of compute server failures. Āpta architects a high-performance, fault-tolerant, object-granular memory server that significantly improves performance for stateless function-as-a-service (FaaS) datacenter applications.
Autonomy, Efficiency, Privacy and Traceability in Blockchain-enabled IoT Data Marketplace
Personal data generated from IoT devices is a new economic asset that individuals can trade to generate revenue on emerging data marketplaces. Blockchain technology can disrupt the data marketplace and make trading more democratic, trustworthy, transparent, and secure. Nevertheless, adopting blockchain to create an IoT data marketplace requires consideration of autonomy and efficiency, privacy, and traceability.
Conventional centralized approaches are built around a trusted third party that conducts and controls all management operations, such as managing contracts, pricing, billing, and reputation mechanisms, raising the concern that providers lose control over their data. To tackle this issue, an efficient, autonomous, and fully functional marketplace system is needed, with no trusted third party involved in operational tasks. Moreover, inefficient allocation of buyers' demands onto battery-operated IoT devices makes it challenging for providers to serve multiple buyers' demands simultaneously in real time without violating their service level agreements (SLAs). Furthermore, a poor privacy decision that makes personal data accessible to unknown or arbitrary buyers may have adverse consequences and cause privacy violations for providers. Lastly, a buyer could buy data from one marketplace and, without the provider's knowledge, resell it to users registered in other marketplaces, leading to monetary loss or privacy violations for the provider. To address these issues, a data ownership traceability mechanism is essential, one that can track changes in data ownership as data is traded within and across marketplace systems. However, data ownership traceability is hard because of ownership ambiguity, undisclosed reselling, and the dispersal of ownership across multiple marketplaces.
This thesis makes the following novel contributions. First, we propose an autonomous and efficient IoT data marketplace, MartChain, offering key marketplace mechanisms that leverage smart contracts to record agreement details, participant ratings, and data prices on the blockchain without involving any mediator. Second, MartChain is underpinned by an Energy-aware Demand Selection and Allocation (EDSA) mechanism for optimally selecting and allocating buyers' demands onto the provider's IoT devices while satisfying battery, quality, and allocation constraints. EDSA maximizes the provider's revenue while meeting the buyers' requirements and ensuring that selected demands complete without interruption. The proof-of-concept implementation on the Ethereum blockchain shows that our approach is viable and benefits both provider and buyer by creating an autonomous and efficient real-time data trading model.
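Energy-aware demand selection of this kind is essentially a constrained optimization over a battery budget. A minimal way to picture it is a greedy heuristic that admits demands in decreasing revenue-per-energy order until the budget is exhausted; this is an illustrative sketch under assumed inputs, not EDSA's actual formulation, which also handles quality and allocation constraints.

```python
def select_demands(demands, battery_budget):
    """Greedy sketch of energy-aware demand selection: admit buyer
    demands in decreasing revenue-per-energy order while the device's
    remaining battery budget allows.

    demands: list of (buyer_id, revenue, energy_cost) tuples.
    Returns the admitted buyers and the total revenue collected.
    """
    chosen, revenue, used = [], 0.0, 0.0
    for buyer, rev, energy in sorted(demands, key=lambda d: d[1] / d[2], reverse=True):
        if used + energy <= battery_budget:  # demand fits in the budget
            chosen.append(buyer)
            revenue += rev
            used += energy
    return chosen, revenue

# Toy run: three buyer demands against a battery budget of 3 units.
demands = [("b1", 5.0, 2.0), ("b2", 4.0, 1.0), ("b3", 3.0, 3.0)]
chosen, revenue = select_demands(demands, battery_budget=3.0)
```

A greedy pass like this is fast enough to run per trading round on the provider side; an exact knapsack-style solver would trade latency for optimality.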
Next, we propose KYBChain, a Know-Your-Buyer mechanism for a privacy-aware decentralized IoT data marketplace that performs a multi-faceted assessment of buyers' characteristics and evaluates their privacy rating. The privacy rating empowers providers to make informed, privacy-aware decisions about data sharing. Quantitative analysis of the utility of the privacy rating shows that its use by providers decreases both data leakage risk and generated revenue, in line with the classical risk-utility trade-off. Evaluation of KYBChain on Ethereum reveals that the overheads in gas consumption, throughput, and latency introduced by our privacy rating mechanism, compared to a marketplace without one, are insignificant relative to its privacy gains.
Finally, we propose TrailChain, which generates a trusted trade trail for tracking data ownership across multiple decentralized marketplaces. Our solution includes mechanisms for detecting unauthorized data reselling, to prevent privacy violations, and a fair resell-payment sharing scheme that distributes payment among data owners for authorized reselling. We performed qualitative and quantitative evaluations to demonstrate the effectiveness of TrailChain in tracking data ownership using four private Ethereum networks. Qualitative security analysis demonstrates that TrailChain is resilient against several malicious activities and security attacks. Simulations show that our method detects undisclosed reselling both within the same marketplace and across different marketplaces. It also identifies whether the provider authorized the reselling and fairly distributes the revenue among the data owners at marginal overhead.
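The trade-trail idea can be pictured as an append-only log of sales in which a resale is accepted only if the original provider authorized the seller. Everything below (class name, methods, the authorization set) is an illustrative assumption rather than TrailChain's actual on-chain mechanism, which spans multiple marketplaces and blockchains.

```python
class TradeTrail:
    """Toy model of a cross-marketplace ownership trail: every sale is
    appended as a link, and a resale is flagged as undisclosed unless
    the original provider authorized the seller to resell."""

    def __init__(self, provider):
        self.provider = provider
        self.links = []            # (seller, buyer, marketplace) tuples
        self.authorized = set()    # owners the provider allowed to resell

    def authorize(self, owner):
        self.authorized.add(owner)

    def record_sale(self, seller, buyer, marketplace):
        if seller != self.provider and seller not in self.authorized:
            raise PermissionError(f"undisclosed resale by {seller}")
        self.links.append((seller, buyer, marketplace))

trail = TradeTrail(provider="alice")
trail.record_sale("alice", "bob", "market-1")       # primary sale: accepted
trail.authorize("bob")
trail.record_sale("bob", "carol", "market-2")       # authorized resale
try:
    trail.record_sale("carol", "dave", "market-3")  # carol was never authorized
except PermissionError:
    pass                                            # undisclosed resale detected
```

Because each link names the marketplace, walking the trail also reveals resales that crossed marketplace boundaries, which is the case a single marketplace cannot detect on its own.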
EKKO: an open-source RISC-V soft-core microcontroller
Master's dissertation in Industrial Electronics and Computers Engineering (specialization in Embedded Systems and Computers).
With the advent of the Internet of Things (IoT) in recent years, the number of connected "things"
is increasing quickly. These devices have rapidly become part of our daily lives and can be found in a wide range of application domains, such as telecommunications, health care, agriculture, and industrial automation. With this exponential growth, the demand for embedded devices is increasing, bringing several challenges to the development of these systems. Among these challenges, time-to-market and development costs are undeniably important, so choosing a suitable development platform is essential when designing an embedded system.
Due to this new paradigm, the University of Minho research group in which this dissertation fits has been developing applications in this domain. However, the development platforms currently used are complex, have associated costs, and are closed-source. For these reasons, the research group is interested in having its own development platform.
To solve these problems, this dissertation aims to build a development platform for both hardware and software. The platform must be simple to use and open-source, reducing development costs and simplifying license management. Moreover, due to its open nature, the system can be easily extended and modified according to the application's needs.
In this context, this dissertation presents EKKO, an open-source soft-core microcontroller that contains a RISC-V core, an on-chip RAM, a debug unit, a timer, an I2C peripheral, and an AXI bus. In addition, it includes a Software Development Kit (SDK) comprising a debugger, the option to use the Azure RTOS ThreadX operating system, and other crucial tools, making the development cycle easier, faster, and safer.