
    Blockchain in maritime cybersecurity

    Blockchain technologies can be used for many purposes, from handling large amounts of data to providing better solutions for privacy protection, user authentication and a tamper-proof ledger, which has led to growing interest among industries. Smart contracts, fog nodes and different consensus methods create a scalable environment for securing multi-party connections with equal trust in the participating nodes' identities. Different blockchains offer multiple methodological options for different environments. This thesis focuses on the Ethereum-based open-source solutions that best fit the remote-pilotage environment. Autonomous vehicular networks and remotely operated devices have been a popular research topic in recent years. Remote pilotage in the maritime environment is presumed to reach its full potential with fully autonomous vessels within ten years, which makes the topic interesting for researchers. However, cybersecurity in these environments is especially important because incidents can lead to financial loss, reputational damage, loss of customer and industry trust, and environmental damage. These complex environments also present multiple attack vectors because of the systems' wireless nature. Denial-of-service (DoS), man-in-the-middle (MITM), message or executable-code injection, authentication tampering and GPS spoofing are among the most common attacks against large IoT systems. Blockchain can therefore be used to create a tamper-proof environment with no single point of failure. After extensive research into the best-performing blockchain technologies, Ethereum appeared the most suitable for a decentralised maritime environment. In contrast to most blockchain technologies of 2021, which have focused on the financial industry and cryptocurrencies, Ethereum has focused on decentralising applications across many different industries. This thesis provides three Ethereum-based blockchain protocol solutions and one operating system for these protocols. All add different features to the base blockchain technology, but after extensive comparison two of these protocols perform better in terms of concurrency and privacy. Hyperledger Fabric and Quorum provide many ways of tackling privacy, concurrency and parallel-execution issues with consistently high throughput. However, Hyperledger Fabric has far better throughput and concurrency management. This makes the combination of the Firefly operating system with the Hyperledger Fabric blockchain protocol the most suitable solution for a complex remote-pilotage fairway environment.
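
    The abstract names Hyperledger Fabric as the best-performing protocol but includes no code. As a hedged illustration of what a tamper-evident event log could look like as Fabric chaincode, the following Go sketch uses the fabric-contract-api-go library; the PilotageEvent type and its fields are invented for illustration and are not from the thesis.

    ```go
    // Minimal Hyperledger Fabric chaincode sketch: records remote-pilotage
    // events on the ledger so that any later modification is detectable.
    // PilotageEvent and RecordEvent are hypothetical names for illustration.
    package main

    import (
    	"encoding/json"
    	"fmt"

    	"github.com/hyperledger/fabric-contract-api-go/contractapi"
    )

    // PilotageEvent is one immutable log entry, e.g. a command sent to a vessel.
    type PilotageEvent struct {
    	VesselID  string `json:"vesselId"`
    	Command   string `json:"command"`
    	Timestamp string `json:"timestamp"`
    }

    // EventContract exposes the chaincode transactions.
    type EventContract struct {
    	contractapi.Contract
    }

    // RecordEvent stores an event under a caller-supplied key; the ordering
    // service and endorsement policy make the entry effectively tamper-proof.
    func (c *EventContract) RecordEvent(ctx contractapi.TransactionContextInterface, key, vesselID, command, ts string) error {
    	event := PilotageEvent{VesselID: vesselID, Command: command, Timestamp: ts}
    	data, err := json.Marshal(event)
    	if err != nil {
    		return err
    	}
    	return ctx.GetStub().PutState(key, data)
    }

    // ReadEvent returns a previously recorded event.
    func (c *EventContract) ReadEvent(ctx contractapi.TransactionContextInterface, key string) (*PilotageEvent, error) {
    	data, err := ctx.GetStub().GetState(key)
    	if err != nil || data == nil {
    		return nil, fmt.Errorf("event %s not found: %v", key, err)
    	}
    	var event PilotageEvent
    	if err := json.Unmarshal(data, &event); err != nil {
    		return nil, err
    	}
    	return &event, nil
    }

    func main() {
    	chaincode, err := contractapi.NewChaincode(&EventContract{})
    	if err != nil {
    		panic(err)
    	}
    	if err := chaincode.Start(); err != nil {
    		panic(err)
    	}
    }
    ```

    Because every write goes through endorsement and ordering, no single node can rewrite history, which is the "no single point of failure" property the abstract emphasizes.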

    Performance Test Suite for MIT Kerberos

    The aim of this thesis is to develop a performance test suite that makes it possible to test the MIT Kerberos system infrastructure, assess its performance characteristics and detect potential bottlenecks. The thesis summarizes the necessary theoretical background of the Kerberos protocol and analyzes potential performance problems across different MIT Kerberos configurations. It then describes the design and implementation of a distributed test suite. Several performance problems were discovered using this suite; these are described together with proposed solutions.
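
    The abstract does not show the suite itself. As a hedged sketch of the kind of measurement it describes, the following Go program times concurrent authentication exchanges against a KDC by invoking kinit from a pool of workers; the principal name and keytab path are placeholders, and it assumes the MIT Kerberos client tools are installed.

    ```go
    // Rough sketch of one KDC load-test worker pool: each worker repeatedly
    // obtains a TGT with kinit and the durations are aggregated.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"sync"
    	"time"
    )

    func main() {
    	const workers = 10
    	const requestsPerWorker = 20

    	var mu sync.Mutex
    	var total time.Duration
    	var failures int

    	var wg sync.WaitGroup
    	for w := 0; w < workers; w++ {
    		wg.Add(1)
    		go func() {
    			defer wg.Done()
    			for i := 0; i < requestsPerWorker; i++ {
    				start := time.Now()
    				// One AS-REQ/AS-REP round trip: authenticate with a keytab.
    				cmd := exec.Command("kinit", "-k", "-t", "/etc/test.keytab", "test/host@EXAMPLE.COM")
    				err := cmd.Run()
    				elapsed := time.Since(start)
    				mu.Lock()
    				if err != nil {
    					failures++
    				} else {
    					total += elapsed
    				}
    				mu.Unlock()
    			}
    		}()
    	}
    	wg.Wait()

    	ok := workers*requestsPerWorker - failures
    	if ok > 0 {
    		fmt.Printf("%d successful authentications, mean latency %v\n", ok, total/time.Duration(ok))
    	}
    	fmt.Printf("%d failures\n", failures)
    }
    ```

    A distributed suite like the one the thesis describes would run many such worker pools on separate hosts so that client-side limits do not mask KDC bottlenecks.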

    Implementation of a Private Cloud

    The exponential growth of hardware requirements, coupled with the development costs of online services, has created the need for dynamic and resilient systems with networks able to handle high-density traffic. One of the emerging paradigms to achieve this is Cloud Computing: it proposes an elastic and modular computing architecture that allows dynamic allocation of hardware and network resources to meet the needs of applications. The creation of a private cloud based on the OpenStack platform implements this idea. This solution decentralizes the institution's resources, making it possible to aggregate resources that are physically spread across several areas of the globe, and allows computing and network resources to be optimized. With this in mind, this thesis implements a private cloud system that can elastically lease and release computing resources, allows the creation of public and private networks that connect compute instances, supports the launch of virtual machines that instantiate servers and services, and isolates projects within the same system. Expansion of the system should start with the addition of extra nodes and the modernization of existing ones; this expansion will also give rise to network problems, which can be overcome by integrating Software Defined Network controllers.
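
    The abstract includes no code. As a minimal sketch of the elastic leasing and releasing it describes, the following Go program boots and then deletes one instance on an OpenStack cloud using the gophercloud SDK; the region name and the image and flavor identifiers are placeholders.

    ```go
    // Minimal sketch: lease one compute instance from an OpenStack private
    // cloud with the gophercloud SDK, then release it. Credentials come from
    // the usual OS_* environment variables.
    package main

    import (
    	"fmt"

    	"github.com/gophercloud/gophercloud"
    	"github.com/gophercloud/gophercloud/openstack"
    	"github.com/gophercloud/gophercloud/openstack/compute/v2/servers"
    )

    func main() {
    	// Reads OS_AUTH_URL, OS_USERNAME, OS_PASSWORD, OS_PROJECT_NAME, ...
    	opts, err := openstack.AuthOptionsFromEnv()
    	if err != nil {
    		panic(err)
    	}
    	provider, err := openstack.AuthenticatedClient(opts)
    	if err != nil {
    		panic(err)
    	}
    	compute, err := openstack.NewComputeV2(provider, gophercloud.EndpointOpts{Region: "RegionOne"})
    	if err != nil {
    		panic(err)
    	}

    	// Boot a VM: this is the "elastic lease" of a computing resource.
    	server, err := servers.Create(compute, servers.CreateOpts{
    		Name:      "demo-instance",
    		ImageRef:  "IMAGE-UUID-PLACEHOLDER",
    		FlavorRef: "FLAVOR-UUID-PLACEHOLDER",
    	}).Extract()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("launched instance", server.ID)

    	// Releasing the resource when it is no longer needed is a single call.
    	if err := servers.Delete(compute, server.ID).ExtractErr(); err != nil {
    		panic(err)
    	}
    }
    ```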

    EMI Security Architecture

    This document describes the architectures of the three middlewares that comprise the EMI software stack. It also outlines the common efforts in the security area that allow interoperability between these middlewares. The assessment of EMI security presented in this document was performed internally by members of the Security Area of the EMI project.

    WEB Based Monitoring System for SFP Interface Traffic, Case Study in Riyad Network Banyuwangi

    PT Riyad Network Multi Teknologi is a company in Banyuwangi that provides internet services, commonly known as an ISP (Internet Service Provider). The company uses MikroTik for network management, with routers that have an SFP interface through which the fiber-optic network is connected. Traffic monitoring is therefore needed to maintain and observe the condition of the internet services provided. In addition, the company has no notification system for problems on its network. This research designs and implements a monitoring system for the SFP interface combined with a troubleshooting notification system. Such monitoring is essential for continuous maintenance and observation of the network, so research on a web-based SFP interface monitoring system for MikroTik routers at this company will be very useful. The monitoring system was built with the PHP framework CodeIgniter 3. Its purpose is to make it easier to view traffic-monitoring data and SFP Rx/Tx power. Testing of the built system shows that it can monitor problems occurring in the network well, and that notifications can inform the maintenance team when a problem occurs.
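
    The thesis implements its monitor as a PHP/CodeIgniter 3 web application, which is not shown in the abstract. As a hedged illustration of the polling step only, the following Go sketch queries a MikroTik RouterOS v7 REST endpoint for SFP diagnostics; the endpoint path, the sfp-rx-power field name, the router address and the credentials are all assumptions for illustration, not the thesis code.

    ```go
    // Hedged sketch: poll a RouterOS v7 REST API for an SFP diagnostics
    // snapshot and alert when the Rx power reading is missing.
    package main

    import (
    	"bytes"
    	"encoding/json"
    	"fmt"
    	"net/http"
    )

    func main() {
    	// RouterOS v7 exposes console commands under /rest; "monitor ... once"
    	// is assumed here to return a single snapshot of interface diagnostics.
    	body := bytes.NewBufferString(`{"numbers":"sfp1","once":""}`)
    	req, err := http.NewRequest("POST", "https://192.0.2.1/rest/interface/ethernet/monitor", body)
    	if err != nil {
    		panic(err)
    	}
    	req.SetBasicAuth("admin", "password") // placeholder credentials
    	req.Header.Set("Content-Type", "application/json")

    	resp, err := http.DefaultClient.Do(req)
    	if err != nil {
    		panic(err) // a real system would trigger the trouble notification here
    	}
    	defer resp.Body.Close()

    	var stats []map[string]string
    	if err := json.NewDecoder(resp.Body).Decode(&stats); err != nil {
    		panic(err)
    	}
    	for _, s := range stats {
    		rx, ok := s["sfp-rx-power"]
    		if !ok {
    			fmt.Println("ALERT: no Rx power reading, link may be down")
    			continue
    		}
    		fmt.Println("sfp1 Rx power (dBm):", rx)
    	}
    }
    ```

    A web dashboard such as the thesis describes would run a poll like this on a schedule, store the readings, and push a notification to the maintenance team when a reading is absent or out of range.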

    Automated Injection of Curated Knowledge Into Real-Time Clinical Systems: CDS Architecture for the 21st Century

    Clinical Decision Support (CDS) is primarily associated with alerts, reminders, order entry, rule-based invocation, diagnostic aids, and on-demand information retrieval. While valuable, these foci have been in production use for decades and do not provide a broader, interoperable means of plugging structured clinical knowledge into live electronic health record (EHR) ecosystems to orchestrate the user experiences of patients and clinicians. To date, the gap between knowledge representation and user-facing EHR integration has been treated as an "implementation concern" requiring unscalable manual human effort and governance coordination. Drafting a questionnaire engineered to meet the HL7 CDS Knowledge Artifact specification, for example, carries no reasonable expectation that it can be imported and deployed into a live system without significant burdens. A dramatic reduction of this time-and-effort gap in the research and application cycle could be revolutionary. Doing so, however, requires both a floor-to-ceiling precoordination of functional boundaries in the knowledge-management lifecycle and a formalization of the human processes by which this occurs. This research introduces ARTAKA, an Architecture for Real-Time Application of Knowledge Artifacts, as a concrete floor-to-ceiling technological blueprint for both provider health IT (HIT) and vendor organizations to incrementally introduce value into existing systems dynamically. This is made possible by the service-ization of curated knowledge artifacts, which are then injected into a highly scalable backend infrastructure through automated orchestration via public marketplaces. Supplementary examples of client-app integration are also provided. Compilation of knowledge into platform-specific form is left flexible, as long as implementations comply with ARTAKA's Context Event Service (CES) communication and Health Services Platform (HSP) Marketplace service-packaging standards. Toward the goal of interoperable human processes, ARTAKA's treatment of knowledge artifacts as a specialized form of software allows knowledge engineering to operate as a type of software engineering practice. Thus, decades of software development processes, tools, policies, and lessons offer immediate benefit, in some cases with remarkable parity. Analyses of experimentation are provided, with guidelines on how chosen aspects of software development life cycles (SDLCs) apply to knowledge artifact development in an ARTAKA environment. Portions of this culminating document have also been initiated with Standards Developing Organizations (SDOs), intended ultimately to produce normative standards, as have active relationships with other bodies.
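
    ARTAKA is described only architecturally here. As a purely hypothetical illustration of the injection point, the following Go sketch shows a service that receives a context event and resolves it to a marketplace artifact reference; every type, field, and URL is invented, and the dissertation defines the actual CES and HSP Marketplace contracts.

    ```go
    // Hypothetical sketch of the "injection" step ARTAKA describes: a context
    // event arriving from the EHR is matched to a curated knowledge artifact
    // from a marketplace-style registry. All names are invented.
    package main

    import (
    	"encoding/json"
    	"log"
    	"net/http"
    )

    // ContextEvent is a minimal stand-in for a CES message.
    type ContextEvent struct {
    	PatientID string `json:"patientId"`
    	EventType string `json:"eventType"` // e.g. "encounter-start"
    }

    // Artifact is a stand-in for a packaged knowledge-artifact reference.
    type Artifact struct {
    	ID  string `json:"id"`
    	URL string `json:"url"` // where the runtime can fetch the package
    }

    // registry maps event types to artifacts; a real system would query a
    // marketplace service instead of a static table.
    var registry = map[string]Artifact{
    	"encounter-start": {ID: "intake-questionnaire-v2", URL: "https://marketplace.example/artifacts/intake-questionnaire-v2"},
    }

    func handleEvent(w http.ResponseWriter, r *http.Request) {
    	var ev ContextEvent
    	if err := json.NewDecoder(r.Body).Decode(&ev); err != nil {
    		http.Error(w, err.Error(), http.StatusBadRequest)
    		return
    	}
    	artifact, ok := registry[ev.EventType]
    	if !ok {
    		w.WriteHeader(http.StatusNoContent) // no knowledge applies to this event
    		return
    	}
    	// Hand the artifact reference back for orchestration into the user experience.
    	json.NewEncoder(w).Encode(artifact)
    }

    func main() {
    	http.HandleFunc("/ces/events", handleEvent)
    	log.Fatal(http.ListenAndServe(":8080", nil))
    }
    ```

    The point of the sketch is the decoupling: the event consumer never hard-codes clinical content, so new artifacts can be deployed through the registry without modifying the live system.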

    Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure

    This deliverable includes a compilation and evaluation of available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain virtual network environment. The scope of this deliverable is mainly the virtualisation of resources within a network and at processing nodes. The virtualisation of the FEDERICA infrastructure allows its available resources to be provisioned to users by means of FEDERICA slices. A slice is seen by the user as a real physical network under his or her domain, although it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit, to the highest degree, all the principles applicable to a physical network (isolation, reproducibility, manageability, ...). Currently there are no standard definitions for network virtualisation or its associated architectures, so this deliverable proposes the Virtual Network layer architecture and evaluates a set of management and control planes that can be used to partition and virtualise the FEDERICA network resources. This evaluation has been performed against an initial set of FEDERICA requirements; possible extensions of the selected tools will be evaluated in future deliverables. The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need was recognised to establish a new set of basic definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (which is virtual with regard to the abstracted view of the building blocks of the FEDERICA infrastructure) and its architectural-plane representation; these definitions will be established as a common nomenclature for the FEDERICA project. Another important aspect when defining a new architecture is the set of user requirements: it is crucial that the resulting architecture fits the demands users may have. Since this deliverable was produced at the same time as the user-contact process conducted by the project activities related to the Use Case definitions, JRA1 has proposed a set of basic Use Cases as a starting point for its internal studies. When researchers want to experiment with their developments, they need not only network resources on their slices but also a slice of the processing resources. These processing slice resources are understood as virtual machine instances that users can make behave as software routers or end nodes, onto which they can download the software protocols or applications they have produced and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA's needs.
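
    The slice taxonomy is described in prose only. The following Go sketch is an invented, illustrative data model of how its building blocks (a slice, its virtual nodes, and its virtual links) might be represented; all type names and fields are assumptions, not FEDERICA definitions.

    ```go
    // Illustrative data model for the deliverable's slice taxonomy: a slice
    // is a logical partition of physical resources that the user sees as a
    // complete network, some nodes being VM instances that can act as
    // software routers or end nodes.
    package main

    import "fmt"

    // VirtualNode is a slice building block mapped onto a physical resource.
    type VirtualNode struct {
    	Name         string
    	PhysicalHost string // substrate resource this node is carved from
    	IsVM         bool   // true: a VM the user can load software onto
    }

    // VirtualLink connects two virtual nodes, isolated from other slices.
    type VirtualLink struct {
    	From, To  string
    	Bandwidth int // Mbit/s reserved on the substrate path
    }

    // Slice is the virtual network instantiation handed to a researcher.
    type Slice struct {
    	Owner string
    	Nodes []VirtualNode
    	Links []VirtualLink
    }

    func main() {
    	s := Slice{
    		Owner: "researcher-a",
    		Nodes: []VirtualNode{
    			{Name: "r1", PhysicalHost: "pop-1", IsVM: true}, // software router
    			{Name: "h1", PhysicalHost: "pop-2", IsVM: true}, // end node
    		},
    		Links: []VirtualLink{{From: "r1", To: "h1", Bandwidth: 100}},
    	}
    	fmt.Printf("slice for %s: %d nodes, %d links\n", s.Owner, len(s.Nodes), len(s.Links))
    }
    ```

    Keeping the substrate mapping (PhysicalHost, reserved bandwidth) inside the slice model is what lets the management plane enforce the isolation and reproducibility properties the deliverable lists.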

    Web Based Candidate Assessment System

    Devplex Technologies Limited is a privately owned company based in Galway, Ireland. It has been operating for over two years and currently undertakes contract projects for the travel and financial industries. The projects are varied, and a wide range of skills is necessary. Devplex Technologies is currently expanding and intends to hire a number of new employees with varying levels of experience. The company also employs a high number of contractors with varying skills, on contracts ranging from one month to twenty-four months. The current technical leaders are all very busy with project work. The human resource (HR) manager actively advertises positions both on the internet and in local newspapers, which results in a large number of responses. It is difficult to sort through all the applicants because a high level of technical knowledge is required to vet them. When the HR manager selects a number of potential candidates from the vetted curricula vitae, phone interviews are conducted. The HR manager pools questions submitted by employees with experience in the relevant technologies and has to decide whether a candidate's answers are satisfactory. The most successful candidates are then invited to a formal interview. When candidates present for interview, they are asked to take a short ten-minute written exam of five questions relevant to the position they are applying for. Regardless of the outcome of the exam, each candidate then proceeds to a formal interview, where two or more Devplex Technologies employees interview the candidate and take note of their findings. Once the candidate has left, the HR manager and interviewers meet to discuss the exam and interview and decide whether the candidate should be brought in for a second interview. If the second interview is successful, the candidate is hired. Devplex Technologies interviews a high number of unsuccessful candidates, resulting in wasted time and effort, and employees who are not technically strong enough are sometimes erroneously hired. Devplex Technologies wishes to reduce this workload and hire more suitable people by implementing an enterprise candidate assessment system. The system should allow the remote assessment of potential candidates and let the HR manager easily retrieve questions and answers on a selected topic. It should test candidates only on subjects that apply to the role they would be hired for, and the questions should get progressively harder as the candidate answers more questions correctly, allowing a truly strong candidate to achieve the highest score; a sketch of this selection rule follows below. The overall aim of the system is to reduce workload and help find the best possible candidate for Devplex Technologies.
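
    The one concrete algorithmic requirement in the abstract is adaptive difficulty: questions get harder after correct answers so a strong candidate can climb to the top score. A minimal Go sketch of such a selection rule follows; the five-level difficulty scale and the scoring weights are invented for illustration.

    ```go
    // Sketch of an adaptive selection rule: difficulty moves one level up
    // after a correct answer and one level down after a miss, clamped to
    // the 1..5 range, and harder questions are worth more points.
    package main

    import "fmt"

    // nextDifficulty adjusts the level based on the last answer.
    func nextDifficulty(current int, correct bool) int {
    	if correct && current < 5 {
    		return current + 1
    	}
    	if !correct && current > 1 {
    		return current - 1
    	}
    	return current
    }

    func main() {
    	// Simulated answer sequence for one candidate session.
    	answers := []bool{true, true, false, true, true}
    	difficulty, score := 1, 0
    	for i, correct := range answers {
    		if correct {
    			score += difficulty // weight answers by difficulty
    		}
    		fmt.Printf("Q%d at difficulty %d, correct=%v, score=%d\n", i+1, difficulty, correct, score)
    		difficulty = nextDifficulty(difficulty, correct)
    	}
    	fmt.Println("final score:", score)
    }
    ```

    Weighting correct answers by difficulty is one simple way to ensure that only a candidate who sustains correct answers at the hardest level can reach the maximum score.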