61 research outputs found
X-Attack 2.0: The Risk of Power Wasters and Satisfiability Don’t-Care Hardware Trojans to Shared Cloud FPGAs
Cloud computing environments increasingly provision field-programmable gate arrays (FPGAs) for their programmability and hardware-level parallelism. While FPGAs are typically used by one tenant at a time, multitenant schemes supporting spatial sharing of cloud FPGA resources have been proposed in the literature. However, the spatial multitenancy of FPGAs opens up new attack surfaces. Investigating potential security threats to multitenant FPGAs is thus essential for better understanding and eventually mitigating the security risks. This work makes a notable step forward by systematically analyzing the combined threat of FPGA power wasters and satisfiability don’t-care hardware Trojans in shared cloud FPGAs. We demonstrate a successful remote undervolting attack that activates a hardware Trojan concealed within a victim FPGA design and exploits the payload. The attack is carried out entirely remotely, assuming two spatially colocated FPGA users isolated from one another. The victim user’s circuit is infected with a Trojan, triggered by a pair of don’t-care signals that never reach the combined trigger condition during regular operation. The adversary, targeting the exploitation of the Trojan, deploys power waster circuits to lower the supply voltage of the FPGA. The assumption is that, under the effect of the lowered voltage, don’t-care signals may reach the particular state that triggers the Trojan. We name this exploit X-Attack and demonstrate its feasibility on an embedded FPGA and real-world cloud FPGA instances. Additionally, we study the effects of various attack tuning parameters on the exploit’s success. Finally, we discuss potential countermeasures against this security threat and present a lightweight self-calibrating countermeasure. To the best of our knowledge, this is the first work on undervolting-based fault-injection attacks in multitenant FPGAs to demonstrate the attack on commercially available cloud FPGA instances.
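The trigger mechanism described in the abstract can be illustrated with a minimal, hypothetical sketch; the toy netlist and single-bit fault model below are invented for illustration and are not the paper's actual circuit:

```python
from itertools import product

# Hypothetical 3-input netlist. Internal signals s1 and s2 form a
# satisfiability don't-care pair: under fault-free operation the
# combination (s1, s2) == (1, 1) is unreachable (it would require
# y = 0 and y = 1 at once), so a Trojan gated on "s1 AND s2" never
# fires during normal use or logic testing.
def internal_signals(x, y, z):
    s1 = x and not y          # s1 = x AND (NOT y)
    s2 = y and z              # s2 = y AND z
    return s1, s2

def trojan_triggered(s1, s2):
    return bool(s1 and s2)    # Trojan trigger condition

# Exhaustive check: no fault-free input vector reaches the trigger.
assert not any(trojan_triggered(*internal_signals(x, y, z))
               for x, y, z in product([0, 1], repeat=3))

# Undervolting-induced timing faults may latch a flipped value for a
# signal. Modelling a single bit flip on s2 shows the "impossible"
# don't-care state becomes reachable and the Trojan fires.
def faulty_signals(x, y, z):
    s1, s2 = internal_signals(x, y, z)
    return s1, 1 - s2         # fault: s2 flipped

assert any(trojan_triggered(*faulty_signals(x, y, z))
           for x, y, z in product([0, 1], repeat=3))
```

The sketch only models the logical effect of a fault; the paper's contribution is inducing such faults remotely via power-waster circuits.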
Cyberattacks and security of cloud computing: a complete guideline
Cloud computing is an innovative technology that offers shared resources for storage, caching, and server management. Cloud computing saves time and monitoring costs for any organization and turns technological solutions for large-scale systems into server-to-service frameworks. However, just like any other technology, cloud computing opens up many forms of security threats and problems. In this work, we first discuss the different cloud models and cloud services. Next, we discuss the security trends in the cloud models. Taking these security trends into account, we move to security problems, including data breaches, data confidentiality, data access controllability, authentication, inadequate diligence, phishing, key exposure, auditing, privacy preservability, and cloud-assisted IoT applications. We then describe security attacks and propose countermeasures specifically for the different cloud models, based on the identified security trends and problems. In the end, we pinpoint some of the future directions and implications relevant to the security of cloud models. These future directions will help researchers in academia and industry work toward cloud computing security.
Models, methods, and tools for developing MMOG backends on commodity clouds
Online multiplayer games have grown to unprecedented scales, attracting millions of players
worldwide. The revenue from this industry has already eclipsed well-established entertainment
industries like music and films and is expected to continue its rapid growth in the future.
Massively Multiplayer Online Games (MMOGs) have also been extensively used in research
studies and education, further motivating the need to improve their development process.
The development of resource-intensive, distributed, real-time applications like MMOG backends
involves a variety of challenges. Past research has primarily focused on the development and
deployment of MMOG backends on dedicated infrastructures such as on-premise data centers
and private clouds, which provide more flexibility but are expensive and hard to set up and
maintain. A limited set of works has also focused on utilizing the Infrastructure-as-a-Service
(IaaS) layer of public clouds to deploy MMOG backends. These clouds can offer various advantages
like a lower barrier to entry, a larger set of resources, etc. but lack resource elasticity,
standardization, and focus on development effort, from which MMOG backends can greatly
benefit.
Meanwhile, other research has also focused on solving various problems related to consistency,
performance, and scalability. Despite major advancements in these areas, there is no standardized
development methodology to facilitate these features and assimilate the development of
MMOG backends on commodity clouds. This thesis is motivated by the results of a systematic
mapping study that identifies a gap in research, evident from the fact that only a handful
of studies have explored the possibility of utilizing serverless environments within commodity
clouds to host these types of backends. These studies are mostly vision papers and do
not provide any novel contributions in terms of methods of development or detailed analyses
of how such systems could be developed. Using the knowledge gathered from this mapping
study, several hypotheses are proposed and a set of technical challenges is identified, guiding
the development of a new methodology.
The peculiarities of MMOG backends have so far constrained their development and deployment
on commodity clouds despite rapid advancements in technology. To explore whether such
environments are viable options, a feasibility study is conducted with a minimalistic MMOG
prototype to evaluate a limited set of public clouds in terms of hosting MMOG backends. Following
encouraging results from this study, this thesis first motivates and then presents
a set of models, methods, and tools with which scalable MMOG backends can be developed
for and deployed on commodity clouds. These are encapsulated into a software development
framework called Athlos which allows software engineers to leverage the proposed development
methodology to rapidly create MMOG backend prototypes that utilize the resources of
these clouds to attain scalable states and runtimes. The proposed approach is based on a dynamic
model which aims to abstract the data requirements and relationships of many types of
MMOGs. Based on this model, several methods are outlined that aim to solve various problems
and challenges related to the development of MMOG backends, mainly in terms of performance
and scalability. Using a modular software architecture, and standardization in common development
areas, the proposed framework aims to improve and expedite the development process
leading to higher-quality MMOG backends and a lower time to market. The models and methods
proposed in this approach can be utilized through various tools during the development
lifecycle.
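As a rough illustration of the kind of data-model abstraction described above, consider the following sketch; the classes and method names are invented for illustration and do not reproduce Athlos's actual API:

```python
from dataclasses import dataclass, field

# Hypothetical MMOG backend data model: entities with replicated
# state held in a world container, the kind of abstraction a
# framework can map onto commodity-cloud storage and serverless
# request handlers.
@dataclass
class Entity:
    entity_id: str
    state: dict = field(default_factory=dict)

@dataclass
class GameWorld:
    entities: dict = field(default_factory=dict)   # id -> Entity

    def spawn(self, entity_id: str, **state) -> None:
        self.entities[entity_id] = Entity(entity_id, dict(state))

    def apply(self, entity_id: str, **updates) -> None:
        # A backend tick applies validated client updates to the
        # shared authoritative state.
        self.entities[entity_id].state.update(updates)

world = GameWorld()
world.spawn("player-1", x=0, y=0, hp=100)
world.apply("player-1", x=5)
assert world.entities["player-1"].state == {"x": 5, "y": 0, "hp": 100}
```

Keeping the model declarative in this way is what lets a framework decide, per deployment, how state is partitioned and persisted.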
The proposed development framework is evaluated qualitatively and quantitatively. The thesis
presents three case study MMOG backend prototypes that validate the suitability of the proposed
approach. These case studies also provide a proof of concept and are subsequently used
to further evaluate the framework. The propositions in this thesis are assessed with respect to
the performance, scalability, development effort, and code maintainability of MMOG backends
developed using the Athlos framework, using a variety of methods such as small and large-scale
simulations and more targeted experimental setups. The results of these experiments uncover
useful information about the behavior of MMOG backends. In addition, they provide evidence
that MMOG backends developed using the proposed methodology and hosted on serverless
environments can: (a) support a very high number of simultaneous players under a given latency
threshold, (b) elastically scale both in terms of processing power and memory capacity
and (c) significantly reduce the amount of development effort. The results also show that this
methodology can accelerate the development of high-performance, distributed, real-time applications
like MMOG backends, while also exposing the limitations of Athlos in terms of code
maintainability.
Finally, the thesis provides a reflection on the research objectives, considerations on the hypotheses
and technical challenges, and outlines plans for future work in this domain.
Cloud adoption and cyber security in public organizations: an empirical investigation on Norwegian municipalities
The public sector in Norway, particularly municipalities, is currently transforming through the adoption of cloud solutions. This multiple case study investigates cloud adoption and the security challenges that come along with it. The objective is to identify the security challenges that cloud solutions present and the techniques or strategies that can be used to mitigate them. A Systematic Literature Review (SLR) provided valuable insights into the prevalent challenges and associated mitigation techniques in cloud adoption. The thesis also uses a qualitative approach, gathering insight into informants’ experiences with cloud adoption and its security challenges through Semi-Structured Interviews (SSI). The study’s empirical data is based on interviews with six different Norwegian municipalities, providing a unique and broad perspective. The analysis of the empirical findings, combined with the literature, reveals several security challenges and mitigation techniques in adopting cloud solutions. The security challenges encompass organizational, environmental, legal, and technical aspects of cloud adoption in the municipalities. Based on the findings, it is recommended that Norwegian municipalities act on these issues to ensure a more secure transition to cloud solutions.
‘Responsibility to detect?’: autonomous threat detection and its implications for due diligence in cyberspace
Private and public organizations have long relied on intrusion detection systems to alert them of malicious activity in their digital networks. These systems were designed to detect threat signatures in static networks or infer anomalous activity based on their security ‘logs’. They are, however, of limited use to detect threats across heterogeneous, modern-day networks, where computing resources are distributed across cloud or routing services. Recent advancements in machine learning (ML) have led to the development of autonomous threat detection (ATD) applications that monitor, evaluate, and respond to malicious activity with minimal human intervention. The use of ‘intelligent’ and programmable algorithms for ATD will reduce incident response times and enhance the capacity of states to detect threats originating from any layer of their territorial information and communications technologies (ICT) infrastructure. This paper argues that ATD technologies will influence the evolution of a due diligence rule for cyberspace by raising the standard of care owed by states to prevent their networks from being used for malicious, transboundary ICT activities. This paper comprises five sections. Section 1 introduces the paper and its central argument. Section 2 outlines broad trends and operational factors pushing public and private entities towards the adoption of ATD. Section 3 offers an overview of a typical ATD application. Section 4 analyses the impact of ATD on the due diligence obligations of states. Section 5 presents the paper’s conclusions.
Internet of Things and the Law: Legal Strategies for Consumer-Centric Smart Technologies
Internet of Things and the Law: Legal Strategies for Consumer-Centric Smart Technologies is the most comprehensive and up-to-date analysis of the legal issues in the Internet of Things (IoT). For decades, the decreasing importance of tangible wealth and power – and the increasing significance of their disembodied counterparts – has been the subject of much legal research. For some time now, legal scholars have grappled with how laws drafted for tangible property and predigital ‘offline’ technologies can cope with dematerialisation, digitalisation, and the internet. As dematerialisation continues, this book aims to illuminate the opposite movement: rematerialisation, namely, the return of data, knowledge, and power within a physical ‘smart’ world. This development frames the book’s central question: can the law steer rematerialisation in a human-centric and socially just direction? To answer it, the book focuses on the IoT, the sociotechnological phenomenon that is primarily responsible for this shift. After a thorough analysis of how existing laws can be interpreted to empower IoT end users, Noto La Diega leaves us with the fundamental question of what happens when the law fails us and concludes with a call for collective resistance against ‘smart’ capitalism.
Online learning on the programmable dataplane
This thesis makes the case for managing computer networks with data-driven methods, that is, automated statistical inference and control based on measurement data and runtime observations, and argues for their tight integration with programmable dataplane hardware to make management decisions faster and from more precise data. Optimisation, defence, and measurement of networked infrastructure are each challenging tasks in their own right, which are currently dominated by the use of hand-crafted heuristic methods. These become harder to reason about and deploy as networks scale in rates and number of forwarding elements, while their design requires expert knowledge and care around unexpected protocol interactions. This makes tailored, per-deployment or per-workload solutions infeasible to develop. Recent advances in machine learning offer capable function approximation and closed-loop control which suit many of these tasks. New, programmable dataplane hardware enables more agility in the network: runtime reprogrammability, precise traffic measurement, and low-latency on-path processing. The synthesis of these two developments allows complex decisions to be made on previously unusable state, and made quicker by offloading inference to the network.
To justify this argument, I advance the state of the art in data-driven defence of networks, novel dataplane-friendly online reinforcement learning algorithms, and in-network data reduction to allow classification of switch-scale data. Each requires co-design aware of the network, and of the failure modes of systems and carried traffic. To make online learning possible in the dataplane, I use fixed-point arithmetic and modify classical (non-neural) approaches to take advantage of the SmartNIC compute model and make use of rich device-local state. I show that data-driven solutions still require great care to design correctly, but with the right domain expertise they can improve on pathological cases in DDoS defence, such as protecting legitimate UDP traffic. In-network aggregation to histograms is shown to enable accurate classification from fine temporal effects, and allows hosts to scale such classification to far larger flow counts and traffic volumes. Moving reinforcement learning to the dataplane is shown to offer substantial benefits to state-action latency and online learning throughput versus host machines, allowing policies to react faster to fine-grained network events. The dataplane environment is key in making reactive online learning feasible; to port further algorithms and learnt functions, I collate and analyse the strengths of current and future hardware designs, as well as individual algorithms.
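The fixed-point adaptation of classical online learning mentioned above can be sketched as follows; the Q16.16 format, learning rate, and simple value-update rule are illustrative assumptions, not the thesis's actual implementation:

```python
# Fixed-point (Q16.16) version of a classical online value update,
# Q <- Q + alpha * (target - Q), using only integer add/subtract,
# multiply, and shift, since dataplane hardware typically lacks
# floating-point units.
SHIFT = 16                      # 16 fractional bits
ONE = 1 << SHIFT

def to_fixed(x: float) -> int:
    return int(round(x * ONE))

def fx_mul(a: int, b: int) -> int:
    return (a * b) >> SHIFT     # multiply, then renormalise

def online_update(q: int, target: int, alpha: int) -> int:
    # q, target, alpha are Q16.16 integers.
    return q + fx_mul(alpha, target - q)

q = to_fixed(0.0)
alpha = to_fixed(0.5)
for _ in range(8):              # repeated updates converge on target
    q = online_update(q, to_fixed(1.0), alpha)
assert abs(q / ONE - 1.0) < 0.01
```

The same integer-only pattern extends to richer learnt functions, at the cost of managing rounding error explicitly.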
Risk Assessment Method of Cloud Environment
Cloud technology usage in companies grows constantly every year. Moreover, the COVID-19 situation accelerated cloud adoption even further. A higher portion of deployed cloud services, however, also means a higher number of exploitable attack vectors. For that reason, risk assessment of the cloud environment plays a significant role for companies. The goal of this paper is to present a risk assessment method specialized for the cloud environment that supports companies in the identification and assessment of cloud risks. The method itself is based on the ISO/IEC 27005 standard and addresses a list of predefined cloud risks. Besides, the paper also presents the risk score calculation definition. The risk assessment method is then applied to an accounting company in the form of a case study. As a result, 24 risks are identified and assessed within the case study, each accompanied by exemplary countermeasures. Further, this paper includes a description of the selected cloud risks.
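The abstract mentions a risk score calculation but does not reproduce it. A common ISO/IEC 27005-style scheme multiplies likelihood by impact; the scales and level thresholds below are illustrative assumptions, not the paper's actual definition:

```python
# Generic qualitative risk scoring in the style often used with
# ISO/IEC 27005: score = likelihood x impact, each rated on an
# ordinal 1..5 scale.
def risk_score(likelihood: int, impact: int) -> int:
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

def risk_level(score: int) -> str:
    # Illustrative banding; a concrete method defines its own bands.
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Example cloud risk: a misconfigured storage bucket rated as both
# fairly likely and damaging.
score = risk_score(likelihood=4, impact=4)
assert score == 16 and risk_level(score) == "high"
```

High-level risks would then be prioritised for countermeasures, as the case study does for its 24 identified risks.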