Technical Report on Deploying a highly secured OpenStack Cloud Infrastructure using BradStack as a Case Study
Cloud computing has emerged as a popular paradigm and an attractive model for
providing reliable distributed computing. It is attracting increasing
attention in both academic research and industrial initiatives. Cloud
deployments are paramount for institutions and organizations of all scales. The
availability of a flexible, free, open-source cloud platform designed with no
proprietary software, and the ability to integrate it with legacy systems and
third-party applications, are fundamental. OpenStack is free and open-source
software released under the terms of the Apache license, with a fragmented and
distributed architecture that makes it highly flexible. This project was
initiated with the aim of designing a secured cloud infrastructure called
BradStack, built on OpenStack in the Computing Laboratory at the University of
Bradford. In this report, we present and discuss the steps required to deploy
a secured BradStack multi-node cloud infrastructure and to conduct
penetration testing on OpenStack services to validate the effectiveness of the
security controls on the BradStack platform. This report serves as a practical
guideline, focusing on security and practical infrastructure-related issues. It
also serves as a reference for institutions looking at the possibilities of
implementing a secured cloud solution.

Comment: 38 pages, 19 figures
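One way to validate security controls of the kind the report describes is to check that a node exposes only the service ports it is supposed to. The following is a minimal sketch, not taken from the report: the port numbers are the usual OpenStack defaults, and the function name is illustrative.

```python
# Expected OpenStack service ports (common defaults; adjust per deployment).
EXPECTED_PORTS = {
    22: "SSH (management)",
    5000: "Keystone (identity API)",
    8774: "Nova (compute API)",
    9292: "Glance (image API)",
    9696: "Neutron (networking API)",
}

def audit_open_ports(observed_ports):
    """Return open ports that are not in the expected service map."""
    return sorted(p for p in observed_ports if p not in EXPECTED_PORTS)

# Example: ports a scan found open on a controller node.
scan_result = [22, 5000, 8774, 3306, 9292]
print(audit_open_ports(scan_result))  # prints [3306], an unexpected exposure
```

A real penetration test would feed this whitelist check with output from an actual scanner rather than a hard-coded list.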
Distributed manufacturing of open source medical hardware for pandemics
Distributed digital manufacturing offers a solution to medical supply and technology shortages during pandemics. To prepare for the next pandemic, this study reviews the state of the art of open hardware designs needed in a COVID-19-like pandemic. It evaluates the readiness of the top twenty technologies requested by the Government of India. The results show that the majority of the actual medical products have some open source development; however, only 15% of the supporting technologies required to produce them are freely available. The results show there is still considerable research needed to provide open source paths for the development of all the medical hardware needed during pandemics. Five core areas of future research are discussed: (i) technical development of a wide range of open source solutions for all medical supplies and devices, (ii) policies that protect the productivity of laboratories, makerspaces, and fabrication facilities during a pandemic, (iii) streamlining the regulatory process, (iv) developing Good-Samaritan laws to protect makers and designers of open medical hardware, as well as to compel those with knowledge that will save lives to share it, and (v) requiring all citizen-funded research to be released with free and open source licenses.
Optimising Fault Tolerance in Real-time Cloud Computing IaaS Environment
Fault tolerance is the ability of a system to respond
swiftly to an unexpected failure. Failures in a cloud computing
environment are the norm rather than the exception, but fault
detection and system recovery in a real-time cloud system are
crucial issues. To deal with this problem and to minimize the risk
of failure, an optimal fault tolerance mechanism was introduced
in which fault tolerance is achieved using a combination of the
Cloud Master, compute nodes, Cloud load balancer, selection
mechanism, and Cloud fault handler. In this paper, we propose
an optimized fault tolerance approach in which a model is designed
to tolerate faults based on the reliability of each compute node
(virtual machine), which can be replaced if its performance is not
optimal. Preliminary tests of our algorithm indicate that the
increase in pass rate exceeds the decrease in failure rate; the
approach also considers forward and backward recovery using diverse
software tools. Our results are demonstrated through
experimental validation, laying a foundation for a fully
fault-tolerant IaaS cloud environment and suggesting good
performance of our model compared with existing approaches.

Petroleum Technology Development Fund (PTDF)
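The reliability-driven replacement idea in the abstract above can be sketched as follows. This is an illustrative reading, not the paper's algorithm: the pass-rate update rule, the threshold, and all names are assumptions.

```python
# Hedged sketch: each compute node (virtual machine) carries a pass rate;
# nodes whose rate falls below a threshold are marked for replacement.

def update_pass_rate(rate, passed, weight=0.2):
    """Exponentially weighted pass rate after one task outcome."""
    return (1 - weight) * rate + weight * (1.0 if passed else 0.0)

def select_nodes(nodes, threshold=0.5):
    """Split nodes into those to keep and those to replace."""
    keep = {n: r for n, r in nodes.items() if r >= threshold}
    replace = {n: r for n, r in nodes.items() if r < threshold}
    return keep, replace

# Example: vm2 has been failing tasks and drops below the threshold.
fleet = {"vm1": 0.9, "vm2": update_pass_rate(0.5, passed=False)}
healthy, failing = select_nodes(fleet)
```

A scheduler built on this would route new tasks only to `healthy` and provision fresh virtual machines to replace the members of `failing`.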
Development of an open technology sensor suite for assisted living: a student-led research project.
Many countries have a rapidly ageing population, placing strain on health services and creating a growing market for assistive technology for older people. We have, through a student-led, 12-week project for 10 students from a variety of science and engineering backgrounds, developed an integrated sensor system to enable older people, or those at risk, to live independently in their own homes for longer, while providing reassurance for their family and carers. We provide details on the design procedure and performance of our sensor system and the management and execution of a short-term, student-led research project. Detailed information on the design and use of our devices, including a door sensor, power monitor, fall detector, general in-house sensor unit and easy-to-use location-aware communications device, is given, with our open designs being contrasted with closed proprietary systems. A case study is presented for the use of our devices in a real-world context, along with a comparison with commercially available systems. We discuss how the system could lead to improvements in the quality of life of older users and increase the effectiveness of their associated care network. We reflect on how recent developments in open source technology and rapid prototyping increase the scope and potential for the development of powerful sensor systems and, finally, conclude with a student perspective on this team effort and highlight learning outcomes, arguing that open technologies will revolutionize the way in which technology will be deployed in academic research in the future.

This is the final version of the article. It first appeared from Royal Society Publishing via http://dx.doi.org/10.1098/rsfs.2016.001
Comparison on OpenStack and OpenNebula performance to improve multi-Cloud architecture on cosmological simulation use case
With the increasing number of Cloud Service Providers and the migration of Grid applications to the Cloud paradigm, it is necessary to be able to leverage these new resources. Moreover, a large class of High Performance Computing (HPC) applications can run on these resources without (or with minor) modifications. But using these resources comes at the cost of being able to interact with these new resource providers. In this paper we introduce the design of an HPC middleware that is able to use resources coming from an environment composed of multiple Clouds as well as classical HPC resources. Using the DIET middleware, we are able to deploy a large-scale, distributed HPC platform that spans a large pool of resources aggregated from different providers. Furthermore, we hide from the end users the difficulty and complexity of selecting and using these new resources, even when new Cloud Service Providers are added to the pool. Finally, we validate the architecture concept through the cosmological simulation RAMSES, and give a comparison of two well-known Cloud computing software stacks: OpenStack and OpenNebula.
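The core idea of hiding provider selection from end users, as the abstract describes, can be sketched with a provider-agnostic submission interface. The class and function names below are illustrative assumptions, not DIET's actual API.

```python
# Hedged sketch: jobs are submitted to a pool; the caller never names a
# provider, so adding OpenStack, OpenNebula, or any other backend to the
# pool requires no change to user code.

class Provider:
    def __init__(self, name, free_slots):
        self.name = name
        self.free_slots = free_slots

    def launch(self, job):
        self.free_slots -= 1
        return f"{job} on {self.name}"

def submit(job, providers):
    """Pick the provider with the most spare capacity."""
    best = max(providers, key=lambda p: p.free_slots)
    if best.free_slots <= 0:
        raise RuntimeError("no capacity in the pool")
    return best.launch(job)

pool = [Provider("OpenStack", 3), Provider("OpenNebula", 5)]
print(submit("ramses-sim", pool))  # prints "ramses-sim on OpenNebula"
```

A real middleware would replace the `launch` stub with provider-specific API calls, which is exactly the complexity the abstraction hides.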
Design and implementation aspects of open source next generation networks (NGN) test-bed software toolkits
Information and Communication Technologies have long provided the backbone of telecommunication networks, such that communication services represent an elementary foundation of today's globally connected economy. The telecommunication landscape has experienced dramatic transformations through the convergence of the Telecom and the Internet worlds. The previously closed telecommunication domain is currently transforming itself, through the so-called Next Generation Network (NGN) evolution, into a highly dynamic multiservice infrastructure, supporting rich multimedia applications as well as providing comprehensive support for various access technologies.
The control layer of such NGNs is of paramount importance, as it represents the convergent mediator between access networks and services. The use and optimization of the IP Multimedia Subsystem (IMS) has been researched and discussed in this domain for many years, and today it represents the world-wide recognized control platform for fixed and mobile NGNs.
Research on protocols and services for such NGN architectures is highly complex, due to the convergence of technologies, applications, and business models, as well as the highly dynamic but short innovation cycles, and it requires early access to vendor-independent, yet close-to-production, validation environments: the so-called open technology test-beds.
The present thesis describes the extensive research of the author over the last nine years in the field of open NGN test-beds. It focuses on the design, development and deployment of the Open Source IMS Core project, which has for years been the foundation of numerous NGN test-beds and countless NGN research and development projects in academia as well as industry around the globe. A major emphasis is placed on ensuring flexibility, performance, reference functionality and interoperability, as well as on satisfying elementary design principles of such test-bed toolkits.
The study also describes and evaluates the use of Open Source principles, highlighting the advantages of this approach for the creation, impact and sustainability of a global Open Source IMS Core (OpenIMSCore) research community.
Moreover, the work documents that the essential design principles and methodology employed can be reused in a generic way to create test-bed toolkits in other technology domains. This is shown by introducing the Open Evolved Packet Core (OpenEPC) project, which provides seamless integration of different mobile broadband technologies.
iSEA: IoT-based smartphone energy assistant for prompting energy-aware behaviors in commercial buildings
Providing personalized energy-use information to individual occupants enables the adoption of energy-aware behaviors in commercial buildings. However, the implementation of individualized feedback remains challenging due to the difficulties in collecting personalized data, tracking personal behaviors, and delivering tailored information to individual occupants. Nowadays, Internet of Things (IoT) technologies are used in a variety of applications, including real-time monitoring, control, and decision-making, due to their flexibility in fusing different data streams. In this paper, we propose a novel IoT-based smartphone energy assistant (iSEA) framework which prompts energy-aware behaviors in commercial buildings. iSEA tracks individual occupants through their smartphones, uses a deep learning approach to identify their energy usage, and delivers personalized feedback to influence their usage. In particular, iSEA uses an energy-use efficiency index (EEI) to understand behaviors and categorize them as efficient or inefficient. The iSEA architecture includes four layers: physical, cloud, service, and communication. The results of implementing iSEA in a commercial building with ten occupants over a twelve-week period demonstrate the validity of this approach in enhancing individualized energy-use behaviors. An average of 34% energy savings was measured by tracking occupants' EEI by the end of the experimental period. In addition, the results demonstrate that commercial building occupants often neglect to switch off lighting systems when they depart, which wastes energy during non-working hours. By utilizing the existing IoT devices in commercial buildings, iSEA contributes significantly to research efforts into sensing and enhancing energy-aware behaviors at minimal cost.
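The abstract above does not give the EEI formula, so the following sketch assumes a simple ratio of energy used while the occupant is present to total energy used, purely to illustrate how an index could drive the efficient/inefficient categorization it describes. All names and the threshold are assumptions.

```python
# Hedged sketch: a hypothetical energy-use efficiency index (EEI) and a
# two-way classifier of the kind the framework's feedback loop would need.

def energy_efficiency_index(occupied_kwh, total_kwh):
    """Fraction of total energy consumed while the occupant was present."""
    if total_kwh <= 0:
        raise ValueError("total energy must be positive")
    return occupied_kwh / total_kwh

def classify(eei, threshold=0.8):
    """Label a behavior pattern from its index value."""
    return "efficient" if eei >= threshold else "inefficient"

# Example: 5 of 10 kWh were used while away (e.g. lights left on overnight).
print(classify(energy_efficiency_index(5, 10)))  # prints "inefficient"
```

In a feedback system of this kind, an "inefficient" label would trigger a tailored smartphone prompt, such as a reminder to switch off lighting at departure.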
Fail Over Strategy for Fault Tolerance in Cloud Computing Environment
Cloud fault tolerance is an important issue in cloud computing platforms and applications. In the event of an unexpected
system failure or malfunction, a robust fault-tolerant design may allow the cloud to continue functioning correctly,
possibly at a reduced level, instead of failing completely. Various fault-tolerant techniques exist for building
self-autonomous cloud systems that ensure high availability of critical cloud services, application execution, and
hardware performance. In comparison to current approaches, this paper proposes a more robust and reliable architecture using
an optimal checkpointing strategy to ensure high system availability and reduced task service finish time. Using
pass rates and virtualised mechanisms, the proposed Smart Failover Strategy (SFS) scheme uses components such as the
Cloud fault manager, Cloud controller, Cloud load balancer and a selection mechanism, providing fault tolerance via
redundancy, optimized selection and checkpointing. In our approach, the Cloud fault manager repairs faults generated
before the task deadline is reached, blocking unrecoverable faulty nodes as well as their virtual nodes. The scheme
is also able to remove temporary software faults from recoverable faulty nodes, thereby making them available for future
requests. We argue that the proposed SFS algorithm makes the system highly fault tolerant by considering forward and
backward recovery using diverse software tools. Compared to existing approaches, preliminary experiments with the SFS
algorithm indicate an increase in pass rates and a consequent decrease in failure rates, showing overall good
performance in task allocation. We present these results using experimental validation tools, with comparison to other
techniques, laying a foundation for a fully fault-tolerant IaaS cloud environment.
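The checkpointing idea at the heart of the abstract above can be illustrated with a small simulation: a task persists its progress periodically, and after a node failure it resumes from the last checkpoint instead of restarting from scratch. This is a generic sketch of checkpoint/rollback recovery, not the SFS algorithm itself; all names and parameters are illustrative.

```python
# Hedged sketch: a task of `total_steps` steps checkpoints every `interval`
# steps and survives one simulated node failure at step `fail_at`.

def run_with_checkpoints(total_steps, interval=2, fail_at=None):
    checkpoint = 0
    step = 0
    failed_once = False
    work_done = 0  # counts executed steps, including redone work
    while step < total_steps:
        if fail_at is not None and step == fail_at and not failed_once:
            failed_once = True
            step = checkpoint        # roll back to the last saved state
            continue
        step += 1
        work_done += 1
        if step % interval == 0:
            checkpoint = step        # persist state every `interval` steps
    return step, work_done

# A failure at step 3 costs only one redone step with interval=2.
print(run_with_checkpoints(5, interval=2, fail_at=3))  # prints (5, 6)
```

The gap between `work_done` and `total_steps` is the recovery overhead; choosing the checkpoint interval trades that overhead against the cost of writing checkpoints, which is the optimization a strategy like the one described would target.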
Distributed mobility management solutions for next mobile network architectures
The architecture of current operator infrastructures is being challenged by the ever-growing demand of the data-hungry services appearing every day. While currently deployed operator networks have been able to cope with traffic demands so far, the architectures for the 5th generation of mobile networks (5G) are expected to support unprecedented traffic loads while decreasing the costs associated with network deployment and operation. Distributed Mobility Management (DMM) helps in this direction by flattening the network, hence improving its scalability, and by enabling local access to the Internet and other communication services, such as mobile-edge clouds. Initial proposals have been based on extending existing IP mobility protocols, such as Mobile IPv6 and Proxy Mobile IPv6, but these need to evolve further to comply with the requirements of future networks, which include, among others, higher flexibility. Software Defined Networking (SDN) appears as a powerful tool for operators seeking increased flexibility and reduced costs. In this article, we first propose a Proxy Mobile IPv6 based DMM solution which serves as a baseline for exploring the evolution of DMM towards SDN, including the identification of DMM design principles and challenges. Based on this investigation, we propose an SDN-based DMM solution, which is evaluated against our baseline from analytic and experimental viewpoints.

This work has been funded by the European Union's Horizon 2020 programme under grant agreement no. 671598 "5GCrosshaul: the 5G integrated fronthaul/backhaul".