The Construction of Verification Models for Embedded Systems
The usefulness of verification hinges on the quality of the verification model. Verification is useful if it increases our confidence that an artefact behaves as expected. As modelling inherently contains non-formal elements, the quality of models cannot be captured by purely formal means. Still, we argue that modelling is not an act of irrationalism and unpredictable genius, but follows rational arguments that often remain implicit. In this paper we try to identify the tacit rationale in model construction as performed by most people who build models for verification. By explicating the different phases, arguments, and design decisions in model construction, we try to develop guidelines that help to improve the process of model construction and the quality of the resulting models.
Exploring a resource allocation security protocol for secure service migration in commercial cloud environments
Recently, there has been a significant increase in the popularity of cloud computing systems that offer Cloud services such as Networks, Servers, Storage, Applications, and other on-demand or pay-as-you-go resources with different speeds and Qualities of Service. These cloud computing environments share resources through virtualization techniques that enable a single user to access various Cloud Services. Thus, cloud users have access to effectively infinite computing resources, allowing them to increase or decrease their resource consumption capacity as needed. However, an increasing number of Commercial Cloud Services are available in the marketplace from a wide range of Cloud Service Providers (CSPs). As a result, most CSPs must deal with dynamic resource allocation, in which mobile services migrate from one cloud environment to another to provide heterogeneous resources based on user requirements. Sardis has proposed a new service framework describing how services can be migrated in Cloud Infrastructure. However, it does not address security and privacy issues in the migration process. Furthermore, there is still a lack of heuristic algorithms that can check requested and available resources to allocate and deallocate before the secure migration begins. The advent of virtual machine technology, for example VMware, and container technology, such as Docker, LXD, and Unikernels, has made the migration of services possible. As Cloud services, such as Vehicular Cloud, are now increasingly offered in highly mobile environments, Y-Comm, a new framework for building future mobile systems, has developed proactive handover to support the mobile user.
Though there are many mechanisms in place to support mobile services, one way of addressing the challenges arising from these emerging applications is to move the computing resources closer to the end-users and determine how much computing resource should be allocated to meet the performance requirements. This work addresses the above challenges by proposing resource allocation security protocols for secure service migration that allow the safe transfer of servers and the monitoring of the capacity of requested resources across different Cloud environments. In this thesis, we propose and analyse a Resource Allocation Security Protocol for secure service migration that allows resources to be allocated efficiently. In our research, we use two different formal modelling and verification techniques to verify an abstract protocol and validate security properties such as secrecy, authentication, and key exchange for secure service migration. The new protocol has been verified with the AVISPA and ProVerif formal verifiers and is being implemented in a new Service Management Framework prototype to securely manage and allocate resources in Commercial Cloud Environments. A Capability-Based Secure Service Protocol (SSP) was then developed to ensure secrecy, authentication, and authorization, and to show that it can be applied to any service. A basic prototype was then developed to test these ideas using a block storage system known as the Network Memory Service. This service was used as the backend of a FUSE filesystem. The results show that this approach can be safely implemented and should perform well in real environments.
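The capability-based approach described above can be sketched in miniature. This is a hypothetical illustration, not the thesis's actual SSP: the key name, claim layout, and token format are all invented here, and a real deployment would need key distribution and replay protection on top.

```python
import base64
import hashlib
import hmac
import json

# Assumed pre-established key shared by the capability issuer and the service.
SHARED_KEY = b"issuer-and-service-shared-secret"

def issue_capability(service, rights):
    """Mint a token granting `rights` on `service`, sealed with an HMAC."""
    claims = json.dumps({"service": service, "rights": sorted(rights)})
    body = base64.urlsafe_b64encode(claims.encode()).decode()
    tag = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + tag

def verify_capability(token, service, right):
    """Check the HMAC first, then check the requested right is granted."""
    body, _, tag = token.rpartition(".")
    expected = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False  # forged or tampered token
    claims = json.loads(base64.urlsafe_b64decode(body.encode()))
    return claims["service"] == service and right in claims["rights"]
```

Because authorization travels with the token, a service such as the Network Memory Service can check a request locally without consulting a central policy store, which is what makes the scheme service-agnostic.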
Virtual patient-specific treatment verification using machine learning methods to assist the dose deliverability evaluation of radiotherapy prostate plans
Machine Learning (ML) methods represent a potential tool to support and optimize virtual patient-specific plan verifications within radiotherapy workflows. However, previously reported applications did not consider the actual physical implications for predictor quality and model performance, and did not report on the pertinence of the implementation or its limitations. Therefore, the main goal of this thesis was to predict dose deliverability using different ML models and input predictor features, analysing the physical aspects involved in the predictions to propose a reliable decision-support tool for virtual patient-specific plan verification protocols. Among the principal predictors explored in this thesis, numerical and high-dimensional features based on modulation complexity, treatment-unit parameters, and dosimetric plan parameters were all implemented by designing random forest (RF), extreme gradient boosting (XG-Boost), neural network (NN), and convolutional neural network (CNN) models to predict gamma passing rates (GPR) for prostate treatments. Accordingly, this research highlights three principal findings. (1) The heterogeneity of the dataset composition directly impacts the quality of the predictor features and, subsequently, the model performance. (2) The models based on automatic feature-extraction methods (CNN models) applied to multi-leaf-collimator modulation maps (MM) presented a more independent and transferable prediction performance. Furthermore, (3) ML algorithms incorporated in radiotherapy workflows for virtual plan verification are required to retrieve the treatment plan parameters associated with the prediction in order to support the model's reliability and stability. Finally, this thesis presents how the most relevant features automatically extracted from the activation maps were used to suggest an alternative decision-support tool for comprehensively evaluating the causes of the predicted dose deliverability.
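The gamma passing rate that these models predict can be illustrated with a deliberately simplified sketch on a 1-D dose profile. The 3%/3 mm criteria, grid spacing, and global normalization below are common defaults but are assumptions here; clinical gamma analysis operates on 2-D/3-D dose grids with interpolation.

```python
import math

def gamma_profile(ref, meas, spacing_mm=1.0, dd=0.03, dta_mm=3.0):
    """Per-point gamma index: the minimum combined dose/distance disagreement
    between a measured point and any point of the reference profile."""
    d_max = max(ref)  # global normalization for the dose-difference criterion
    out = []
    for i, dm in enumerate(meas):
        best = math.inf
        for j, dr in enumerate(ref):
            dose = ((dm - dr) / (dd * d_max)) ** 2
            dist = ((i - j) * spacing_mm / dta_mm) ** 2
            best = min(best, math.sqrt(dose + dist))
        out.append(best)
    return out

def passing_rate(gammas, threshold=1.0):
    """GPR: percentage of points whose gamma is within tolerance."""
    return 100.0 * sum(g <= threshold for g in gammas) / len(gammas)
```

A plan whose measured profile matches the reference everywhere scores a GPR of 100%; the ML models in the thesis aim to predict this number from plan features before any measurement is made.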
Transaction behaviour in large database environments: A methodological approach
This thesis presents the CITY benchmark, a database benchmark that fairly represents On-Line Transaction Processing (OLTP) environments. It analyses the most widely used benchmarks, with particular emphasis on the Wisconsin benchmark and the Transaction Processing Council (TPC) benchmarks (TPC-A, TPC-B and TPC-C). It also presents an empirical approach to examining the workload of those benchmarks, which uncovered several technical limitations in their scripts. The thesis also presents an investigation of on-line transactions in large database environments. The tested environments were three of the largest organisations in the UK, differing in both objectives and activities. The investigation identified on-line transaction behaviour and defined the salient characteristics of databases in high-volume transaction environments. The findings from those studies established the basis for a representative transaction and set of tables. The CITY benchmark design is directly driven by the findings of the empirical studies. The design took into consideration all the critiques directed at the TPC benchmarks A, B and C. It is the first benchmark designed as a result of studying the behaviour of on-line transactions and databases in large database environments. The CITY benchmark is mainly designed to test and compare database system performance in high-volume transaction environments (OLTP).
The work revealed the salient characteristics of large database environments and identified the typical behaviour of on-line transactions in OLTP environments. This research has clearly shown that the TPC benchmarks are not representative of the domain of high-volume transaction environments (OLTP), and it explains why they could be misleading if used to test database management systems in this domain. Additionally, this thesis presents a database performance evaluation methodology based on in-depth studies of large database environments.
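The kind of throughput measurement such a benchmark performs can be sketched with an in-memory SQLite database. The account-transfer schema and transaction mix below are illustrative stand-ins, not CITY's actual workload, which is derived from the empirical studies described above.

```python
import sqlite3
import time

def run_benchmark(n_txns=1000, n_accounts=100):
    """Run a debit/credit transaction mix and report throughput plus an
    integrity check that total balance is conserved across all commits."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
    con.executemany("INSERT INTO account VALUES (?, ?)",
                    [(i, 1000) for i in range(n_accounts)])
    con.commit()
    start = time.perf_counter()
    for i in range(n_txns):  # each loop iteration is one committed transfer
        src, dst = i % n_accounts, (i + 1) % n_accounts
        con.execute("UPDATE account SET balance = balance - 1 WHERE id = ?", (src,))
        con.execute("UPDATE account SET balance = balance + 1 WHERE id = ?", (dst,))
        con.commit()
    tps = n_txns / (time.perf_counter() - start)
    total = con.execute("SELECT SUM(balance) FROM account").fetchone()[0]
    con.close()
    return tps, total
```

Reporting transactions per second alongside an invariant check mirrors what OLTP benchmarks measure: raw commit throughput under a workload whose transactions must remain correct.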
A Case Study using SysML for safety-critical systems
This thesis presents a case study based on our experience and lessons learnt from modelling a control system using the state-of-the-art modelling language for systems engineering, the Systems Modelling Language (SysML). The goals of this thesis are to (1) capture the structure and behaviour of a control system using SysML, (2) handle the development of safety requirements, (3) support the generation of safety cases (structured collections of arguments for system safety) by creating traceability links between requirements and model elements, and (4) assess SysML's capabilities in modelling control systems and supporting the generation of safety cases.
This case study is part of the “ModelME!” project which is conducted at Simula Research Laboratory with industry partners. The aim of the “ModelME!” project is to devise better software engineering practices for Integrated Software-Dependent Systems in the Maritime & Energy sectors.
Based on the experience of this and other concurrent projects within "ModelME!", a methodology for modelling control systems to support safety certification has been proposed. We use this methodology to present the SysML model developed in this case study. The methodology takes a systematic approach and guides us through the process of designing a control system: from the first steps of capturing requirements, system functionality, and environmental assumptions, through the development of structural and behavioural diagrams, and, last but not least, the modelling of safety design, developing the requirements to avoid ambiguity and tracing model structures to the requirements.
In this thesis we create a comprehensive set of models to capture our case study from the requirement, structure, and behaviour points of view, and present these models following the methodology mentioned above. We create traceability links between the requirements and design model elements/slices with the goal of assisting safety engineers in the generation of safety cases. Then we discuss the capabilities of SysML and our chosen tool regarding the creation of models for control systems and support for safety case generation. Further, we summarize lessons learned, potential improvements, and directions for future work.
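The traceability links at the heart of this approach can be pictured as a simple requirement-to-model-element map. The requirement IDs and SysML element names below are invented for illustration; in practice the links live inside the modelling tool.

```python
# Hypothetical traceability matrix: each safety requirement maps to the
# SysML model elements/slices that are claimed to satisfy it.
trace_links = {
    "REQ-001": ["Block:Controller", "StateMachine:EmergencyShutdown"],
    "REQ-002": ["Block:PressureSensor"],
    "REQ-003": [],  # requirement not yet traced to any model element
}

def untraced_requirements(links):
    """Requirements with no supporting model element: gaps a safety case
    reviewer would flag before certification."""
    return sorted(req for req, elems in links.items() if not elems)

def elements_for(links, requirement):
    """Model elements a safety engineer would cite as evidence for `requirement`."""
    return links.get(requirement, [])
```

Queries like these are what make the links useful for safety cases: an argument for system safety can point at concrete model elements, and coverage gaps surface mechanically rather than by inspection.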
Low-frequency gravitational-wave science with eLISA/NGO
We review the expected science performance of the New Gravitational-Wave Observatory (NGO, a.k.a. eLISA), a mission under study by the European Space Agency for launch in the early 2020s. eLISA will survey the low-frequency gravitational-wave sky (from 0.1 mHz to 1 Hz), detecting and characterizing a broad variety of systems and events throughout the Universe, including the coalescences of massive black holes brought together by galaxy mergers; the inspirals of stellar-mass black holes and compact stars into central galactic black holes; several millions of ultracompact binaries, both detached and mass transferring, in the Galaxy; and possibly unforeseen sources such as the relic gravitational-wave radiation from the early Universe. eLISA's high signal-to-noise measurements will provide new insight into the structure and history of the Universe, and they will test general relativity in its strong-field dynamical regime.
Comment: 20 pages, 8 figures, proceedings of the 9th Amaldi Conference on Gravitational Waves. Final journal version. For a longer exposition of the eLISA science case, see http://arxiv.org/abs/1201.362
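A back-of-the-envelope check connects the quoted band to the sources listed: for a circular binary, the dominant quadrupole gravitational-wave frequency is twice the orbital frequency, f_GW = 2 / P_orb, so eLISA's 0.1 mHz to 1 Hz band corresponds to orbital periods from a couple of seconds up to several hours. The example periods below are illustrative.

```python
def gw_frequency_hz(orbital_period_s):
    """Dominant (quadrupole) GW frequency of a circular binary: f = 2 / P."""
    return 2.0 / orbital_period_s

def in_elisa_band(f_hz, low_hz=1e-4, high_hz=1.0):
    """True if the frequency lies inside eLISA's 0.1 mHz - 1 Hz band."""
    return low_hz <= f_hz <= high_hz
```

An ultracompact Galactic binary with a 10-minute orbit radiates near 3 mHz, squarely in band, while a binary with a year-long orbit radiates far below eLISA's reach.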