A software architecture for electro-mobility services: a milestone for sustainable remote vehicle capabilities
To face tough competition and changing markets and technologies in the automotive industry, automakers have to be highly innovative. In recent decades, innovations have been electronics- and IT-driven, which has exponentially increased the complexity of the vehicle's internal network. Furthermore, the growing expectations and preferences of customers oblige these manufacturers to adapt their business models and also to offer mobility-based services. On the other hand, there is increasing pressure from regulators to significantly reduce the environmental footprint of transportation and mobility, down to zero in the foreseeable future.
This dissertation investigates an architecture for communication and data exchange within a complex and heterogeneous ecosystem. This communication takes place between various third-party entities on one side, and between these entities and the infrastructure on the other. The proposed solution considerably reduces the complexity of vehicle communication and of the interactions among the parties involved in the ODX life cycle. In such a heterogeneous environment, particular attention is paid to the protection of confidential and private data. Confidential data here refers to the OEM's know-how enclosed in vehicle projects, while the data delivered by a car during a vehicle communication session might contain customers' private data. Our solution ensures that every entity in this ecosystem has access only to the data it is entitled to. We designed our solution to be technology-agnostic so that it can be implemented on any platform, allowing each task to run in the environment best suited to it. We also proposed a data model for vehicle projects that improves query time during a vehicle diagnostic session. Scalability and backwards compatibility were also taken into account during the design phase of our solution.
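The access-control property described above, where each party sees only the data it is entitled to, can be illustrated with a minimal sketch. All record fields, labels, and roles below are hypothetical illustrations, not taken from the dissertation:

```python
# Minimal sketch of per-entity access filtering for vehicle diagnostic data.
# Field names, labels, and roles are invented for illustration.

# Each record carries a sensitivity label: OEM know-how vs. customer-private.
RECORDS = [
    {"field": "ecu_fault_codes",    "label": "public"},
    {"field": "calibration_params", "label": "oem_confidential"},
    {"field": "gps_trace",          "label": "customer_private"},
]

# Which labels each role may read.
POLICY = {
    "workshop": {"public"},
    "oem":      {"public", "oem_confidential"},
    "customer": {"public", "customer_private"},
}

def visible_records(role):
    """Return only the record fields the given role is entitled to see."""
    allowed = POLICY.get(role, set())
    return [r["field"] for r in RECORDS if r["label"] in allowed]

print(visible_records("workshop"))  # ['ecu_fault_codes']
```

In a real deployment the policy check would sit in the exchange infrastructure itself, so no entity ever receives data outside its entitlement.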
We proposed the necessary algorithms and workflow to perform efficient vehicle diagnostics with considerably lower latency and substantially better time and space complexity than current solutions. To demonstrate the practicality of our design, we presented a prototypical implementation. We then analyzed the results of a series of tests performed on several vehicle models and projects, and evaluated the prototype against software engineering quality attributes.
Security Scenario Generator (SecGen): A Framework for Generating Randomly Vulnerable Rich-scenario VMs for Learning Computer Security and Hosting CTF Events
Computer security students benefit from hands-on experience applying security tools and techniques to attack and defend vulnerable systems. Virtual machines (VMs) provide an effective way of sharing targets for hacking. However, developing these hacking challenges is time consuming, and once created, they are essentially static. That is, once the challenge has been "solved" there is no remaining challenge for the student, and if the challenge was created for a competition or assessment, it cannot be reused without risking plagiarism and collusion. Security Scenario Generator (SecGen) can build complex VMs based on randomised scenarios, with a number of diverse use-cases, including: building networks of VMs with randomised services, in-the-wild vulnerabilities, and themed content, which can form the basis of penetration testing activities; VMs for educational lab use; and VMs with randomised CTF challenges. SecGen has a modular architecture which can dynamically generate challenges by nesting modules, and a hints generation system, which is designed to provide scaffolding for novice security students to make progress on complex challenges. SecGen has been used for teaching at universities, and for hosting a recent UK-wide CTF event.
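SecGen itself is implemented in Ruby with XML-based scenario specifications; the following Python sketch only illustrates the general idea of composing a randomised scenario by selecting interchangeable modules, with an invented catalogue (none of these names come from SecGen's actual module set):

```python
import random

# Hypothetical module catalogue: each capability maps to interchangeable
# implementations, loosely mirroring the idea of randomised services,
# vulnerabilities, and challenge encoders. All names are invented.
CATALOGUE = {
    "service":       ["ftp_server", "web_app", "smb_share"],
    "vulnerability": ["weak_creds", "sql_injection", "path_traversal"],
    "encoder":       ["base64_flag", "rot13_flag"],
}

def generate_scenario(seed=None):
    """Pick one module per capability to assemble a randomised VM scenario."""
    rng = random.Random(seed)
    return {capability: rng.choice(options)
            for capability, options in CATALOGUE.items()}

# The same seed reproduces a scenario; different seeds vary the challenge,
# which is what lets each student receive a distinct but comparable VM.
print(generate_scenario(seed=42))
```

Seeded generation also addresses the reuse problem the abstract mentions: a fresh seed per student yields a fresh challenge without hand-authoring each one.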
Models, methods, and tools for developing MMOG backends on commodity clouds
Online multiplayer games have grown to unprecedented scales, attracting millions of players
worldwide. The revenue from this industry has already eclipsed well-established entertainment
industries like music and films and is expected to continue its rapid growth in the future.
Massively Multiplayer Online Games (MMOGs) have also been extensively used in research
studies and education, further motivating the need to improve their development process.
The development of resource-intensive, distributed, real-time applications like MMOG backends
involves a variety of challenges. Past research has primarily focused on the development and
deployment of MMOG backends on dedicated infrastructures such as on-premise data centers
and private clouds, which provide more flexibility but are expensive and hard to set up and
maintain. A limited set of works has also focused on utilizing the Infrastructure-as-a-Service
(IaaS) layer of public clouds to deploy MMOG backends. These clouds can offer various advantages, such as a lower barrier to entry and a larger pool of resources, but they lack the resource elasticity, standardization, and focus on development effort from which MMOG backends could greatly benefit.
Meanwhile, other research has also focused on solving various problems related to consistency,
performance, and scalability. Despite major advancements in these areas, there is no standardized
development methodology to facilitate these features and assimilate the development of
MMOG backends on commodity clouds. This thesis is motivated by the results of a systematic
mapping study that identifies a gap in research, evident from the fact that only a handful
of studies have explored the possibility of utilizing serverless environments within commodity
clouds to host these types of backends. These studies are mostly vision papers and do
not provide any novel contributions in terms of methods of development or detailed analyses
of how such systems could be developed. Using the knowledge gathered from this mapping
study, several hypotheses are proposed and a set of technical challenges is identified, guiding
the development of a new methodology.
The peculiarities of MMOG backends have so far constrained their development and deployment
on commodity clouds despite rapid advancements in technology. To explore whether such
environments are viable options, a feasibility study is conducted with a minimalistic MMOG
prototype to evaluate a limited set of public clouds in terms of hosting MMOG backends. Following encouraging results from this study, this thesis first motivates and then presents
a set of models, methods, and tools with which scalable MMOG backends can be developed
for and deployed on commodity clouds. These are encapsulated into a software development
framework called Athlos which allows software engineers to leverage the proposed development
methodology to rapidly create MMOG backend prototypes that utilize the resources of
these clouds to attain scalable states and runtimes. The proposed approach is based on a dynamic
model which aims to abstract the data requirements and relationships of many types of
MMOGs. Based on this model, several methods are outlined that aim to solve various problems
and challenges related to the development of MMOG backends, mainly in terms of performance
and scalability. Using a modular software architecture, and standardization in common development
areas, the proposed framework aims to improve and expedite the development process
leading to higher-quality MMOG backends and a lower time to market. The models and methods
proposed in this approach can be utilized through various tools during the development
lifecycle.
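The dynamic model mentioned above, which abstracts the data requirements and relationships of many types of MMOGs, might be sketched roughly as follows. The class and method names are illustrative assumptions, not Athlos's actual API:

```python
from dataclasses import dataclass, field

# Illustrative sketch of a generic entity/state model for an MMOG backend.
# A schema-free attribute dict lets different game genres reuse the same
# model; all names here are invented, not taken from Athlos.

@dataclass
class Entity:
    entity_id: str
    attributes: dict = field(default_factory=dict)  # per-game, schema-free

class WorldState:
    """In-memory world state; a real backend would partition and persist
    this across serverless/cloud resources to scale elastically."""
    def __init__(self):
        self._entities = {}

    def upsert(self, entity: Entity):
        self._entities[entity.entity_id] = entity

    def apply_update(self, entity_id, **changes):
        self._entities[entity_id].attributes.update(changes)

    def snapshot(self):
        # Copy attributes so callers cannot mutate live state.
        return {eid: dict(e.attributes) for eid, e in self._entities.items()}

world = WorldState()
world.upsert(Entity("player1", {"x": 0, "y": 0, "hp": 100}))
world.apply_update("player1", x=5, y=3)
print(world.snapshot())  # {'player1': {'x': 5, 'y': 3, 'hp': 100}}
```

Keeping game-specific data behind a generic attribute map is one plausible way a single framework could serve many MMOG types while standardizing the surrounding update and snapshot machinery.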
The proposed development framework is evaluated qualitatively and quantitatively. The thesis
presents three case study MMOG backend prototypes that validate the suitability of the proposed
approach. These case studies also provide a proof of concept and are subsequently used
to further evaluate the framework. The propositions in this thesis are assessed with respect to
the performance, scalability, development effort, and code maintainability of MMOG backends
developed using the Athlos framework, using a variety of methods such as small and large-scale
simulations and more targeted experimental setups. The results of these experiments uncover
useful information about the behavior of MMOG backends. In addition, they provide evidence
that MMOG backends developed using the proposed methodology and hosted on serverless
environments can: (a) support a very high number of simultaneous players under a given latency
threshold, (b) elastically scale both in terms of processing power and memory capacity
and (c) significantly reduce the amount of development effort. The results also show that this
methodology can accelerate the development of high-performance, distributed, real-time applications
like MMOG backends, while also exposing the limitations of Athlos in terms of code
maintainability.
Finally, the thesis provides a reflection on the research objectives, considerations on the hypotheses and technical challenges, and outlines plans for future work in this domain.
On Evaluating Commercial Cloud Services: A Systematic Review
Background: Cloud Computing is booming in industry, with many competing providers and services. Accordingly, evaluation of commercial Cloud services is necessary. However, the existing evaluation studies are relatively chaotic: there is considerable confusion and a gap between practice and theory in Cloud services evaluation. Aim: To help relieve this chaos, this work aims to synthesize the existing evaluation implementations to outline the state of the practice and identify research opportunities in Cloud services evaluation. Method: Based on a conceptual
evaluation model comprising six steps, the Systematic Literature Review (SLR)
method was employed to collect relevant evidence to investigate the Cloud
services evaluation step by step. Results: This SLR identified 82 relevant
evaluation studies. The overall data collected from these studies essentially
represent the current practical landscape of implementing Cloud services
evaluation, and in turn can be reused to facilitate future evaluation work.
Conclusions: Evaluation of commercial Cloud services has become a world-wide
research topic. Some of the findings of this SLR identify several research gaps
in the area of Cloud services evaluation (e.g., the Elasticity and Security
evaluation of commercial Cloud services could be a long-term challenge), while
some other findings suggest the trend of applying commercial Cloud services
(e.g., compared with PaaS, IaaS seems more suitable for customers and is
particularly important in industry). This SLR study itself also confirms some
previous experiences and reveals new Evidence-Based Software Engineering (EBSE)
lessons.
CloudSkulk: Design of a Nested Virtual Machine Based Rootkit-in-the-Middle Attack
Virtualized cloud computing services are a crucial facet of the software industry today, and their usage is clearly accelerating. Market research forecasts cloud workloads to more than triple (3.3-fold) from 2014 to 2019 [33]. System security is thus an intrinsic concern for cloud platform system administrators, and with the growth of cloud usage it is becoming increasingly relevant. People working in the cloud demand security more than ever. In this paper, we take an offensive, malicious approach to targeting such cloud environments, in the hope that both cloud platform system administrators and the software developers of these infrastructures can advance their systems' security.
A vulnerability could exist in any layer of a computer system. It is commonly believed in the security community that the battle between attackers and defenders is determined by which side can exploit these vulnerabilities and then gain control at the lower layer of the system [22]. Because of this perception, kernel-level defenses have been proposed to defend against user-level malware [25], hypervisor-level defenses to detect kernel-level malware or rootkits [36, 47, 41], and hardware-level defenses to protect hypervisors [4, 51, 45].
Once attackers find a way to exploit a particular vulnerability and obtain a certain level of control over the victim system, retaining that control and avoiding detection becomes their top priority. To achieve this goal, various rootkits have been developed. However, existing rootkits share a common weakness: they remain detectable as long as defenders can gain control at a lower level, such as the operating system level, the hypervisor level, or the hardware level. In this paper, we present a new type of rootkit called CloudSkulk, a nested virtual machine (VM) based rootkit. While nested virtualization has attracted considerable attention from the security and cloud communities, to the best of our knowledge we are the first to reveal and demonstrate that nested virtualization can be used by attackers to build malicious rootkits. By impersonating the original hypervisor when communicating with the original guest operating system (OS), and impersonating the original guest OS when communicating with the hypervisor, CloudSkulk is hard to detect, regardless of whether defenders are at a lower level (e.g., in the original hypervisor) or at a higher level (e.g., in the original guest OS).
We perform a variety of performance experiments to evaluate how stealthy the proposed rootkit is, since introducing one more layer of virtualization inevitably incurs extra overhead. Our performance characterization data shows that an installation of our novel rootkit in a targeted nested virtualization environment is likely to remain undetected unless the guest user performs I/O-intensive workloads.
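The detection observation above, that the extra virtualization layer mainly shows up under I/O-intensive work, can be illustrated with a naive timing probe. The probe, the fixed comparison factor, and the idea of a previously recorded baseline are all simplifying assumptions for illustration, not the paper's actual methodology:

```python
import os
import tempfile
import time

def io_probe(n_writes=200, block=4096):
    """Time repeated small synced writes; an added virtualization layer
    tends to inflate this kind of I/O latency more than CPU-bound work."""
    buf = os.urandom(block)
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        for _ in range(n_writes):
            os.write(fd, buf)
            os.fsync(fd)  # force the write through the full storage stack
        return time.perf_counter() - start
    finally:
        os.close(fd)
        os.unlink(path)

def looks_nested(measured, baseline, factor=2.0):
    """Flag if I/O time far exceeds a baseline recorded before any
    suspected compromise; the threshold factor is an assumption."""
    return measured > factor * baseline

elapsed = io_probe()
print(f"I/O probe took {elapsed:.3f}s")
```

A guest running mostly CPU-bound workloads would rarely trip such a probe, which matches the abstract's claim that the rootkit stays unnoticed outside I/O-intensive use.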
A Case Study on Cloud Migration and Improvement of Twelve-Factor App
The Twelve-Factor app methodology was introduced in 2011, intending to raise awareness, provide a shared vocabulary, and offer broad conceptual solutions. In this thesis, a case study was conducted on two software implementations of the same business idea. The implementations were introduced and then analyzed with Twelve-Factor. Hevner's Information Systems Research Framework was used to assess the implementations, and Twelve-Factor's theoretical methodology was combined with it to derive the results.
The implementations were found to fulfill most of the twelve factors, although in different ways. The use of containers in the new implementation explained most of the differences. Some factors were also revealed to be standard practices today, which showed the need to abstract factors like Dependencies, Processes, Port binding, Concurrency, and Disposability.
In addition, the methodology itself was analyzed, and additions to it were introduced, conforming to the modern needs of applications that most often run containerized on cloud platforms. The new additions are API First, Telemetry, Security, and Automation. API First instructs developers to prioritize building APIs at the start of the development cycle, while Telemetry holds that as much information as possible should be collected from the app to improve performance and help solve bugs. Security introduces two practical solutions and a guideline of following the principle of least privilege, and lastly, Automation is emphasized to free up developer time.
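The Telemetry addition, collecting as much runtime information as possible, can be illustrated with a minimal structured-logging sketch. The event shape and field names are illustrative assumptions, not part of the thesis:

```python
import json
import logging
import time

# Minimal structured telemetry: emit machine-parseable JSON events that a
# (hypothetical) collector aggregates for performance and bug analysis.
logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("telemetry")

def emit(event, **fields):
    """Log one telemetry event as a single JSON line."""
    record = {"event": event, "ts": time.time(), **fields}
    log.info(json.dumps(record))
    return record  # returned to ease inspection in tests

def handle_request(path):
    start = time.perf_counter()
    # ... real request handling would go here ...
    return emit("request_handled", path=path,
                duration_ms=round((time.perf_counter() - start) * 1000, 3))

handle_request("/api/v1/users")
```

Writing events as JSON lines keeps the app itself simple while letting a cloud platform's log pipeline do the aggregation, which fits the containerized deployments the thesis targets.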