
    Using OSM for real-time redeployment of VNFs based on network status

    In this thesis we examine Network Functions Virtualisation (NFV) as a suitable framework for implementing a network appropriate for the Internet of Things (IoT), which needs to be flexible and scalable. More precisely, we focus on how Open Source MANO (OSM) can be efficiently utilized in a solution that monitors the network status of Virtual Network Functions (VNFs) and, in case of degraded network status (e.g. network congestion), triggers the redeployment of the affected VNFs to another Virtual Infrastructure Manager (VIM) to prevent the underperformance of running services.
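
    As an illustrative aside (not taken from the thesis): the Python sketch below shows the general shape of such a monitor-and-redeploy loop. The function names, the latency metric, and the threshold are assumptions made purely for illustration; they are stand-ins for the monitoring and orchestration calls an OSM-based solution would actually use.

        # Illustrative sketch only: poll per-VNF network metrics and ask the
        # orchestrator to move a VNF to a backup VIM when status looks degraded.
        # Names, metric, and threshold are invented for this example.
        import random
        import time

        CONGESTION_THRESHOLD_MS = 200  # assumed latency threshold for "bad" status

        def get_vnf_metrics(vnf_id: str) -> dict:
            # Stand-in for a query to the monitoring stack.
            return {"latency_ms": random.uniform(10, 400)}

        def redeploy_vnf(vnf_id: str, target_vim: str) -> None:
            # Stand-in for the orchestrator call that re-instantiates the VNF on another VIM.
            print(f"Redeploying {vnf_id} to {target_vim}")

        def monitor_and_redeploy(vnf_ids, backup_vim="vim-backup", interval_s=30, cycles=3):
            for _ in range(cycles):  # bounded loop so the demo terminates
                for vnf_id in vnf_ids:
                    if get_vnf_metrics(vnf_id)["latency_ms"] > CONGESTION_THRESHOLD_MS:
                        redeploy_vnf(vnf_id, backup_vim)  # degraded status: move the VNF
                time.sleep(interval_s)

        if __name__ == "__main__":
            monitor_and_redeploy(["vnf-a", "vnf-b"], interval_s=1)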

    Private cloud computing platforms. Analysis and implementation in a Higher Education Institution

    The constant evolution of the Internet, its increasing use, and its consequent entanglement with private and public activities, with a strong impact on their survival, has given rise to an emerging technology. Through cloud computing, it is possible to abstract users from the layers below the business, focusing only on what is most important to manage, with the added advantage of being able to grow (or shrink) resources as needed. IT resources have evolved considerably over the last decade, triggering a new awareness of optimization from which the cloud computing paradigm emerged. In this regard, after a study of the most common cloud platforms, a case study is presented of the technologies currently implemented at the Institute of Biomedical Sciences of Abel Salazar and the Faculty of Pharmacy of the University of Porto, followed by a proposed evolution using cloud platforms to address certain requirements of the case study. Distributions produced specifically for implementing private clouds are available today, and their configuration has been greatly simplified. However, for a well-implemented architecture to be viable, in terms of hardware, network, security, efficiency and effectiveness, the necessary infrastructure must be considered as a whole. An in-depth multidisciplinary study of all the topics surrounding this technology is intrinsically linked to the architecture of a cloud system; otherwise, a deficient system results. A broader view is needed, beyond the necessary equipment and the software used, one that effectively weighs implementation costs and also takes into account the human resources specialized in the various areas involved. The construction of a new data centre, the result of joining the buildings of the Institute of Biomedical Sciences of Abel Salazar and the Faculty of Pharmacy of the University of Porto, made it possible to share technological resources. Given the existing, fully scalable infrastructure, built on a growth and virtualization approach, the implementation of a private cloud is considered, since the existing resources are perfectly adaptable to this emerging reality. The virtualization technology adopted, together with the corresponding hardware (storage and processing), was planned for an implementation based on XEN Server; given the heterogeneity of the server estate and the orientation of the available technologies (open and proprietary), an approach distinct from the existing Microsoft-based implementation is studied.
Given the nature of the institution, and depending on the resources required and the approach taken in developing a private cloud, integration with public clouds (for example Google Apps) may be taken into account, and the solutions adopted may be based on open and/or commercial technologies (or both). The ultimate objective of this work is to review the technologies currently in use and identify potential solutions that, together with the current infrastructure, can provide a private cloud service. The work begins with a concise explanation of the cloud concept, comparing it with other forms of computing, presenting its characteristics, reviewing its history, and explaining its layers, deployment models and architectures. Next, the state-of-the-art chapter covers the main cloud computing platforms, focusing on Microsoft Azure, Google Apps, Cloud Foundry, Delta Cloud and OpenStack; other emerging platforms are also covered, giving a broader view of the technological solutions currently available. After the state of the art, a particular case study is addressed: the implementation of the IT scenario of the new building shared by two organic units of the University of Porto, the Institute of Biomedical Sciences Abel Salazar and the Faculty of Pharmacy, and its private cloud architecture using shared resources. The case study is followed by a suggested evolution of the implementation, using cloud computing technologies to meet the necessary requirements and to integrate and streamline the existing infrastructure.

    A framework for flexible integration in robotics and its applications for calibration and error compensation

    Robotics has been considered a viable automation solution for the aerospace industry to address manufacturing cost. Many of the existing robot systems augmented with guidance from a large-volume metrology system have proved to meet the high dimensional-accuracy requirements of aero-structure assembly. However, they have mainly been deployed as costly and dedicated systems, which may not be ideal for aerospace manufacturing with its low production rates and long cycle times. The work described in this thesis provides technical solutions to improve the flexibility and cost-efficiency of such metrology-integrated robot systems. To address flexibility, a software framework that supports reconfigurable system integration is developed. The framework provides a design methodology for composing distributed software components which can be integrated dynamically at runtime. This allows the automation devices (robots, metrology, actuators, etc.) controlled by these software components to be assembled on demand for various assembly applications. To reduce the cost of deployment, this thesis proposes a two-stage error compensation scheme for industrial robots that requires only intermittent metrology input, thus allowing one expensive metrology system to be shared by a number of robots. Robot calibration is employed in the first stage to remove the majority of robot inaccuracy; the metrology then corrects the residual errors. In this work, a new calibration model for serial robots with a parallelogram linkage is developed that takes into account both geometric errors and joint deflections induced by link masses and the weight of the end-effector. Experiments are conducted to evaluate the two pieces of work presented above. The proposed framework is used to create a distributed control system that implements calibration and error compensation for a large industrial robot with a parallelogram linkage; the control system is formed by hot-plugging the control applications of the robot and the metrology system. Experimental results show that the developed error model improved the positional accuracy of the loaded robot from several millimetres to less than one millimetre and halved the time previously required to correct the errors using the metrology alone. The experiments also demonstrate the capability of sharing one metrology system among more than one robot.
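
    As an illustrative aside (not the thesis's implementation): the Python sketch below shows the two-stage idea in miniature, with a model-based correction applied first and an intermittent metrology measurement trimming the residual. All function names and numbers are invented for illustration.

        # Illustrative sketch of two-stage error compensation: stage 1 applies the
        # calibrated error model, stage 2 removes the residual seen by metrology.
        import numpy as np

        def nominal_forward_kinematics(joint_angles):
            # Stand-in for the robot's nominal kinematic model (TCP position in mm).
            return np.array([1000.0, 200.0, 500.0])

        def predicted_error(joint_angles):
            # Stage 1: error predicted by the identified model (geometric errors
            # plus joint deflection under load), in mm.
            return np.array([-2.1, 0.8, -1.5])

        joints = np.deg2rad([10, -30, 45, 0, 60, 0])
        target = nominal_forward_kinematics(joints)

        # Stage 1: command a pose offset by the predicted error.
        command = target - predicted_error(joints)

        # Stage 2: an intermittent metrology measurement reveals the remaining error.
        measured = target + np.array([0.3, -0.2, 0.1])  # simulated measurement of the TCP
        residual = measured - target
        command -= residual  # trim the residual for the next correction cycle

        print("Commanded pose after two-stage compensation:", command)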

    GENERIC AND ADAPTIVE METADATA MANAGEMENT FRAMEWORK FOR SCIENTIFIC DATA REPOSITORIES

    Rapid technological progress has led, across many research disciplines, to major advances in data acquisition and processing. This in turn has produced an immense growth in the data and metadata generated by scientific experiments. Regardless of the specific field, scientific practice is increasingly characterized by data and metadata. As a result, universities, research communities and funding agencies are intensifying their efforts to curate, store and analyse scientific data efficiently. The essential goals of scientific data repositories are to establish long-term storage, provide access to data, make data available for reuse and citation, capture data provenance for reproducibility, and provide the metadata, annotations or references that convey the domain-specific knowledge required to interpret the data. Scientific data repositories are highly complex systems, built from elements of different research fields, such as algorithms for data compression and long-term archiving, frameworks for metadata and annotation management, workflow provenance and provenance interoperability between heterogeneous workflow systems, authorization and authentication infrastructures, and visualization tools for data interpretation. This thesis describes a modular architecture for a scientific data repository that supports research communities in orchestrating their data and metadata across the respective life cycle. The architecture consists of components representing four research fields. The first component is a data transfer client, which offers a generic interface for capturing data from, and accessing data in, scientific data acquisition systems. The second component is the MetaStore framework, an adaptive metadata management framework that can handle both static and dynamic metadata models. To be able to handle arbitrary metadata schemas, the MetaStore framework is built on the component-based dynamic composition design pattern; MetaStore is also equipped with an annotation framework for handling dynamic metadata. The third component is an extension of the MetaStore framework for the automated handling of provenance metadata for BPEL-based workflow management systems. The Prov2ONE algorithm that we designed and implemented automatically translates the structure and execution traces of BPEL workflow definitions into the ProvONE provenance model. The availability of the complete BPEL provenance data in ProvONE not only enables an aggregated analysis of a workflow definition together with its execution trace, but also ensures the compatibility of provenance data originating from different specification languages. The fourth component of our scientific data repository is the ProvONE Provenance Interoperability Framework (P-PIF), which ensures the interoperability of provenance data from the heterogeneous provenance models of different workflow management systems.
P-PIF consists of two components: the Prov2ONE algorithm for SCUFL and MoML workflow specifications, and workflow-management-system-specific adapters for extracting, translating and modelling retrospective provenance data into the ProvONE provenance model. P-PIF can translate both control flow and data flow into ProvONE. The availability of heterogeneous provenance traces in ProvONE makes it possible to compare, analyse and query provenance data from different workflow systems. We evaluated the components of the scientific data repository presented in this thesis as follows. For the data transfer client, we examined data transfer performance with the standard protocol for nanoscopy data sets. We evaluated the MetaStore framework with respect to two aspects: first, we tested metadata ingest and full-text search performance under different database configurations; second, we show the comprehensive functional coverage of MetaStore through a feature-based comparison with existing metadata management systems. For the evaluation of P-PIF, we first proved the correctness and completeness of our Prov2ONE algorithm and, in addition, evaluated the prospective ProvONE graph patterns generated by the Prov2ONE BPEL algorithm against existing BPEL control-flow patterns. To show that P-PIF is a sustainable framework that adheres to standards, we also compare the features of P-PIF with those of existing provenance interoperability frameworks. These evaluations demonstrate the superiority and the advantages of the individual components developed in this thesis over existing systems.
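
    As an illustrative aside (not the actual Prov2ONE algorithm): the toy Python sketch below shows the kind of mapping involved, turning a simplified workflow description (activities plus control links) into a ProvONE-style prospective-provenance structure. The class and field names are assumptions made for the example only.

        # Toy mapping of a simplified workflow definition onto ProvONE-style
        # prospective provenance (a program, its sub-programs, and control links).
        from dataclasses import dataclass, field

        @dataclass
        class ProvONEProgram:
            name: str
            sub_programs: list = field(default_factory=list)
            control_links: list = field(default_factory=list)  # (from, to) pairs

        def workflow_to_provone(workflow: dict) -> ProvONEProgram:
            # Translate activities into sub-programs and keep the control links.
            prog = ProvONEProgram(name=workflow["name"])
            prog.sub_programs = [ProvONEProgram(name=a) for a in workflow["activities"]]
            prog.control_links = list(workflow.get("links", []))
            return prog

        wf = {
            "name": "AnalysisWorkflow",
            "activities": ["ReceiveInput", "Transform", "StoreResult"],
            "links": [("ReceiveInput", "Transform"), ("Transform", "StoreResult")],
        }
        print(workflow_to_provone(wf))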

    A Virtual University Infrastructure For Orthopaedic Surgical Training With Integrated Simulation

    This thesis pivots around the fulcrum of surgical, educational and technological factors. While no single conclusion is drawn, it is a multidisciplinary thesis exploring the juxtaposition of different academic domains that significantly influence each other. The relationship centres on the engineering and computer science factors in learning technologies for surgery. Following a brief introduction to previous efforts in developing surgical simulation, the thesis considers education and learning in orthopaedics and the design and building of a simulator for shoulder surgery. It then considers the assessment of such tools and their embedding into a virtual learning environment, and explains how the experiments performed clarified the issues and their actual significance. This leads to a discussion of the work, and conclusions are drawn regarding the progress of integrating distributed simulation within the healthcare environment, suggesting how future work can proceed.

    Network Processors and Next Generation Networks: Design, Applications, and Perspectives

    Network Processors (NPs) are hardware platforms that emerged as appealing solutions for packet-processing devices in networking applications. Nowadays a plethora of solutions exists, with no agreement on a common architecture: each vendor has proposed its own solution and no official standard yet exists. The features common to all proposals are a hierarchy of processors, with a general-purpose processor and several units specialized for packet processing; a series of memory devices with different sizes and latencies; and low-level programmability. The target is a platform for networking applications with low time to market and long time in market, thanks to high flexibility and programmability simpler than that of ASICs, for example. About ten years after the "birth" of network processors, this research activity takes analytical stock of their development and usage. Many authoritative opinions suggest that NPs have been made obsolete by multicore or manycore systems, which provide general-purpose environments plus some specialized cores. The main reasons for these negative opinions are the difficult programmability of NPs, which often requires knowledge of proprietary microcode, and their severe architectural limits, such as small memories and a minimal instruction store. Our research shows that Network Processors can be appealing for different applications in the networking area, and that many interesting solutions can be obtained which deliver very high performance, outperforming current solutions. However, the issues of difficult programming and notable limits do exist, and they can be alleviated only by providing a reasonably comprehensive programming environment and a proper design in terms of processing and memory resources. More efficient solutions can surely be provided, but the experience of network processors has produced an important legacy for developing packet-processing engines. In this work we have realized many devices for networking purposes based on NP platforms, in order to understand the complexity of programming, the flexibility of design, the complexity of the tasks that can be implemented, the maximum depth of packet processing, the performance of such devices, and the real usefulness of NPs in network devices. All these aspects have been carefully analyzed and are illustrated in this thesis. Many remarkable results have been obtained, which confirm Network Processors as appealing solutions for network devices. Moreover, the research on NPs has led us to analyze and solve more general issues, related for instance to multiprocessor systems or to processors with little available memory. In particular, the latter issue led us to design several interesting data structures for set representation and membership query, which are based on randomized techniques and allow for large memory savings.
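
    As an illustrative aside: the abstract does not name the randomized membership structures it develops, so the Bloom filter below is shown only as a representative example of the idea, i.e. compact set representation and membership query with large memory savings at the cost of occasional false positives.

        # A minimal Bloom filter: compact, randomized set membership with
        # false positives but no false negatives.
        import hashlib

        class BloomFilter:
            def __init__(self, num_bits=1024, num_hashes=3):
                self.num_bits = num_bits
                self.num_hashes = num_hashes
                self.bits = bytearray(num_bits // 8 + 1)

            def _positions(self, item: str):
                # Derive k bit positions from salted SHA-256 digests.
                for i in range(self.num_hashes):
                    digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
                    yield int(digest, 16) % self.num_bits

            def add(self, item: str):
                for pos in self._positions(item):
                    self.bits[pos // 8] |= 1 << (pos % 8)

            def might_contain(self, item: str) -> bool:
                # False means definitely absent; True means possibly present.
                return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

        bf = BloomFilter()
        bf.add("10.0.0.1")
        print(bf.might_contain("10.0.0.1"), bf.might_contain("10.0.0.2"))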

    Demystifying Internet of Things Security

    Break down the misconceptions of the Internet of Things by examining the different security building blocks available in Intel Architecture (IA) based IoT platforms. This open access book reviews the threat pyramid, secure boot, chain of trust, and the software stack leading up to defense-in-depth. The IoT presents unique challenges in implementing security, and Intel has both CPU and Isolated Security Engine capabilities to simplify it. The book explores the challenges of securing these devices to make them immune to different threats originating from within and outside the network. The requirements and robustness rules to protect the assets vary greatly, and there is no single blanket approach to implementing security. Demystifying Internet of Things Security provides clarity to industry professionals and gives an overview of different security solutions.
    What You'll Learn: secure devices, immunizing them against different threats originating from inside and outside the network; get an overview of the different security building blocks available in Intel Architecture (IA) based IoT platforms; understand the threat pyramid, secure boot, chain of trust, and the software stack leading up to defense-in-depth.
    Who This Book Is For: strategists, developers, architects, and managers in the embedded and Internet of Things (IoT) space trying to understand and implement security in IoT devices/platforms.
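
    As an illustrative aside (a toy example, not Intel's secure-boot implementation): the sketch below shows the chain-of-trust idea named among the book's topics, where each boot stage is measured and verified against a provisioned value before control is handed over.

        # Toy chain-of-trust check: verify each stage's hash against a provisioned
        # "known good" measurement before handing off; halt on any mismatch.
        import hashlib

        trusted_hashes = {
            "bootloader": hashlib.sha256(b"bootloader-image-v1").hexdigest(),
            "kernel": hashlib.sha256(b"kernel-image-v1").hexdigest(),
        }

        def verify_stage(name: str, image: bytes) -> bool:
            return hashlib.sha256(image).hexdigest() == trusted_hashes[name]

        boot_chain = [("bootloader", b"bootloader-image-v1"), ("kernel", b"kernel-image-v1")]
        for name, image in boot_chain:
            if not verify_stage(name, image):
                raise SystemExit(f"Measurement mismatch at {name}: halting boot")
            print(f"{name} verified, handing off to next stage")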

    Mapping Scholarly Communication Infrastructure: A Bibliographic Scan of Digital Scholarly Communication Infrastructure

    This bibliographic scan covers a lot of ground. In it, I have attempted to capture relevant recent literature across the whole of the digital scholarly communication infrastructure. I have used that literature to identify significant projects and then document them with descriptions and basic information. Structurally, this review has three parts. In the first, I begin with a diagram showing how the projects reviewed fit into the research workflow; I then cover a number of topics and functional areas related to digital scholarly communication. I make no attempt to be comprehensive, especially regarding the technical literature; rather, I have tried to identify major articles and reports, particularly those addressing the library community. The second part of this review is a list of projects or programs arranged by broad functional category. The third part lists individual projects and the organizations, both commercial and nonprofit, that support them. I have identified 206 projects. Of these, 139 are nonprofit and 67 are commercial. There are 17 organizations that support multiple projects, and six of these are commercial: Artefactual Systems, Atypon/Wiley, Clarivate Analytics, Digital Science, Elsevier, and MDPI. The remaining 11 are nonprofit: Center for Open Science, Collaborative Knowledge Foundation (Coko), LYRASIS/DuraSpace, Educopia Institute, Internet Archive, JISC, OCLC, OpenAIRE, Open Access Button, Our Research (formerly Impactstory), and the Public Knowledge Project. Funding: Andrew W. Mellon Foundation.

    Collaborative, Trust-Based Security Mechanisms for a National Utility Intranet

    This thesis investigates security mechanisms for utility control and protection networks using IP-based protocol interaction. It proposes flexible, cost-effective solutions in strategic locations to protect both transitioning legacy and full IP-standards architectures. It also demonstrates how operational signatures can be defined to enact organizationally unique standard operating procedures for zero failure in environments with varying levels of uncertainty and trust. The research evaluates layering encryption, authentication, traffic filtering, content checks, and event correlation mechanisms over time-critical primary and backup control/protection signaling to prevent disruption by internal and external malicious activity or errors. Finally, it shows how a regional/national implementation can protect private communities of interest and foster a mix of centralized and distributed emergency prediction, mitigation, detection, and response, with secure, automatic peer-to-peer notifications that share situational awareness across control, transmission, and reliability boundaries and prevent widespread, catastrophic power outages.
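
    As an illustrative aside (purely conceptual, not the thesis's design): the sketch below shows the layered-checks idea, where a control message must pass every defensive layer before being admitted. All names and checks are invented for the example.

        # Conceptual layered admission check for a control message: traffic filter,
        # authentication, and content check in sequence; any failure blocks it.
        def traffic_filter(msg): return msg["src"] in {"substation-7", "control-center"}
        def authenticate(msg):   return msg.get("hmac") == "expected-hmac"  # stand-in check
        def content_check(msg):  return msg.get("command") in {"TRIP", "CLOSE", "STATUS"}

        LAYERS = [traffic_filter, authenticate, content_check]

        def admit(msg) -> bool:
            # Failures would also feed an event-correlation engine (not shown).
            return all(layer(msg) for layer in LAYERS)

        print(admit({"src": "substation-7", "hmac": "expected-hmac", "command": "TRIP"}))
        print(admit({"src": "unknown-host", "hmac": "expected-hmac", "command": "TRIP"}))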

    Third International Symposium on Space Mission Operations and Ground Data Systems, part 2

    Under the theme 'Opportunities in Ground Data Systems for High Efficiency Operations of Space Missions,' the SpaceOps '94 symposium included presentations of more than 150 technical papers spanning five topic areas: Mission Management, Operations, Data Management, System Development, and Systems Engineering. The symposium papers focus on improvements in the efficiency, effectiveness, and quality of data acquisition, ground systems, and mission operations. New technology, methods, and human systems are discussed. Accomplishments are also reported in the application of information systems to improve data retrieval, reporting, and archiving; the management of human factors; the use of telescience and teleoperations; and the design and implementation of logistics support for mission operations. This volume covers expert systems, systems development tools and approaches, and systems engineering issues.