10 research outputs found

    Distributed EaaS simulation using TEEs: A case study in the implementation and practical application of an embedded computer cluster

    Internet of Things (IoT) devices with limited resources struggle to generate the high-quality entropy required for strong randomness. This results in weak cryptographic keys. As keys are a single point of failure in modern cryptography, IoT devices performing cryptographic operations may be susceptible to a variety of attacks. To address this issue, we develop an Entropy as a Service (EaaS) simulation. The purpose of EaaS is to provide IoT devices with high-quality entropy as a service so that they can use it to generate strong keys. Additionally, we utilise Trusted Execution Environments (TEEs) in the simulation. A TEE is a secure processor component that provides data protection, integrity, and confidentiality for selected applications by isolating them from other processes running on the system (including the OS), thereby enhancing system security. The EaaS simulation is performed on the Magi cluster, a private computer cluster that has been designed, built, configured, and tested as part of this thesis to meet the requirements of Tampere University's Network and Information Security Group (NISEC). In this thesis, we explain how the Magi cluster is implemented and how it is utilised to conduct a distributed EaaS simulation using TEEs.
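
    As a rough illustration of the client side of such a service, the sketch below shows an IoT device mixing entropy obtained from a remote EaaS server with its own (possibly weak) local pool before deriving a key. The transport, message format, and mixing scheme are illustrative assumptions, not the protocol implemented in the thesis.

```python
# Hypothetical sketch of an Entropy-as-a-Service client: fetch high-quality
# entropy from a remote service, mix it with the local pool, derive a key.
# In a real deployment the remote bytes would come from the EaaS server
# (e.g. over an attested channel to code running in a TEE); here os.urandom
# stands in for that call.
import hashlib
import hmac
import os


def mix_entropy(remote: bytes, local: bytes) -> bytes:
    """Combine remote and local entropy; the result is at least as
    unpredictable as the stronger of the two inputs."""
    # HKDF-like extraction step: HMAC the remote bytes keyed by the local pool.
    return hmac.new(local, remote, hashlib.sha256).digest()


def derive_key(seed: bytes, context: bytes = b"eaas-demo-key") -> bytes:
    """Derive a 256-bit symmetric key from the mixed seed."""
    return hashlib.sha256(seed + context).digest()


if __name__ == "__main__":
    remote_entropy = os.urandom(32)   # placeholder for the EaaS response
    local_entropy = os.urandom(16)    # the device's own (possibly weak) pool
    key = derive_key(mix_entropy(remote_entropy, local_entropy))
    print(key.hex())
```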

    A new fault-tolerance protocol for application-level multicast networks

    Application-level multicast networks are notable for their ease of deployment, as they require no changes to the network layer. Data is delivered over an overlay tree built from unicast connections between end hosts, which are free to join and leave whenever they wish, or even to leave without notifying any other node. When a node departs in this way, its children are cut off from the tree and must request to rejoin; until they do, they cannot receive data. This disrupts the constructed tree and causes the loss of many data packets, which can significantly affect the user. One of the key challenges in building an efficient and effective application-level multicast protocol is therefore to provide a robust mechanism for handling the sudden departure of a node from the overlay tree without significantly degrading the performance of the constructed tree. In this research, we propose a new protocol to solve the problems described above.
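
    To make the failure scenario concrete, the sketch below models an overlay multicast tree in which the children of a departed node reattach to their grandparent. Grandparent-based recovery is a common generic strategy used here purely for illustration; it is not necessarily the mechanism of the protocol proposed in this research.

```python
# Toy overlay multicast tree illustrating orphan recovery after an ungraceful
# departure: orphaned children rejoin at the grandparent. Illustrative only.
class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def depart_without_notice(self):
        """Simulate an ungraceful leave: the node simply disappears."""
        orphans = list(self.children)
        if self.parent is not None:
            self.parent.children.remove(self)
        # Each orphaned subtree reattaches to the grandparent (if any).
        for child in orphans:
            child.parent = self.parent
            if self.parent is not None:
                self.parent.children.append(child)
        return orphans


if __name__ == "__main__":
    root = Node("source")
    a = Node("A", root)
    b = Node("B", a)
    c = Node("C", a)
    a.depart_without_notice()
    print([child.name for child in root.children])  # ['B', 'C']
```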

    A Policy-Based Resource Brokering Environment for Computational Grids

    With the advances in networking infrastructure in general, and the Internet in particular, we can build grid environments that allow users to utilize a diverse set of distributed and heterogeneous resources. Since the focus of such environments is the efficient usage of the underlying resources, a critical component is the resource brokering environment that mediates the discovery, access and usage of these resources. With the consumer's constraints, the provider's rules, distributed heterogeneous resources and the large number of scheduling choices, the resource brokering environment needs to decide where to place the user's jobs and when to start their execution in a way that yields the best performance for the user and the best utilization for the resource provider. As brokering and scheduling are very complicated tasks, most current resource brokering environments are either specific to a particular grid environment or have limited features. This makes them unsuitable for large applications with heterogeneous requirements. In addition, most of these resource brokering environments lack flexibility. Policies at the resource, application, and system levels cannot be specified and enforced to provide commitment to a guaranteed level of allocation, which could help attract grid users and contribute to establishing credibility for existing grid environments. In this thesis, we propose and prototype a flexible and extensible Policy-based Resource Brokering Environment (PROBE) that can be utilized by various grid systems. In designing PROBE, we follow a policy-based approach that provides PROBE with the intelligence not only to match the user's request with the right set of resources, but also to assure the guaranteed level of allocation. PROBE treats task allocation as a Service Level Agreement (SLA) that needs to be enforced between the resource provider and the resource consumer. The policy-based framework is useful in a typical grid environment where resources, most of the time, are not dedicated. In implementing PROBE, we have utilized a layered architecture and façade design patterns. These, along with a well-defined API, make the framework independent of any architecture and allow for the incorporation of different types of scheduling algorithms, applications and platform adaptors as the underlying environment requires. We have utilized XML as a base for all specification needs. This provides a flexible mechanism to specify the heterogeneous resources and users' requests along with their allocation constraints. We have developed XML-based specifications by which the high-level internal structures of resources, jobs and policies can be specified. This provides interoperability, allowing a grid system to utilize PROBE to discover and use resources controlled by other grid systems. We have implemented a prototype of PROBE to demonstrate its feasibility. We also describe a test bed environment and the evaluation experiments that we have conducted to demonstrate the usefulness and effectiveness of our approach.
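
    The flavour of XML-based specification and policy-driven matching described here can be sketched as follows. The element and attribute names are invented for illustration and are much simpler than PROBE's actual schemas.

```python
# Minimal sketch of matching an XML job request against XML resource
# descriptions under a simple capacity/policy rule. Schemas are invented
# for illustration only.
import xml.etree.ElementTree as ET

RESOURCES = ET.fromstring("""
<resources>
  <resource id="clusterA" cpus="64" max_hours="12"/>
  <resource id="clusterB" cpus="16" max_hours="48"/>
</resources>
""")

REQUEST = ET.fromstring("""
<job user="alice" cpus="32" hours="8"/>
""")


def matching_resources(request, resources):
    """Yield resources whose capacity and local policy satisfy the request."""
    cpus = int(request.get("cpus"))
    hours = int(request.get("hours"))
    for res in resources.findall("resource"):
        if int(res.get("cpus")) >= cpus and int(res.get("max_hours")) >= hours:
            yield res.get("id")


if __name__ == "__main__":
    print(list(matching_resources(REQUEST, RESOURCES)))  # ['clusterA']
```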

    Economic-based Distributed Resource Management and Scheduling for Grid Computing

    Computational Grids, emerging as an infrastructure for next-generation computing, enable the sharing, selection, and aggregation of geographically distributed resources for solving large-scale problems in science, engineering, and commerce. The resources in the Grid are heterogeneous and geographically distributed, with varying availability, a variety of usage and cost policies for diverse users at different times, and priorities and goals that change over time. The management of resources and application scheduling in such a large and distributed environment is therefore a complex task. This thesis proposes a distributed computational economy as an effective metaphor for the management of resources and application scheduling. It proposes an architectural framework that supports resource trading and quality-of-service based scheduling. It enables the regulation of supply and demand for resources, provides an incentive for resource owners to participate in the Grid, and motivates users to trade off deadline, budget, and the required level of quality of service. The thesis demonstrates the capability of economic-based systems for peer-to-peer distributed computing by developing scheduling strategies and algorithms driven by users' quality-of-service requirements. It demonstrates their effectiveness by performing scheduling experiments on the World-Wide Grid for solving parameter sweep applications.
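
    A deadline-and-budget constrained selection of the kind such scheduling involves can be pictured with the sketch below. The resource figures and the greedy cheapest-feasible rule are illustrative assumptions, not the thesis's actual strategies or data.

```python
# Toy deadline-and-budget constrained resource selection: among resources
# fast enough to meet the deadline, pick the cheapest allocation within
# budget. All numbers and the greedy rule are illustrative only.
from dataclasses import dataclass


@dataclass
class Resource:
    name: str
    jobs_per_hour: float     # processing rate
    price_per_job: float     # cost charged per job (grid dollars)


def cheapest_feasible(resources, n_jobs, deadline_h, budget):
    feasible = []
    for r in resources:
        time_needed = n_jobs / r.jobs_per_hour
        cost = n_jobs * r.price_per_job
        if time_needed <= deadline_h and cost <= budget:
            feasible.append((cost, time_needed, r.name))
    return min(feasible) if feasible else None   # cheapest option that fits


if __name__ == "__main__":
    pool = [
        Resource("fast-expensive", jobs_per_hour=200, price_per_job=0.05),
        Resource("slow-cheap", jobs_per_hour=50, price_per_job=0.01),
    ]
    print(cheapest_feasible(pool, n_jobs=400, deadline_h=10, budget=25))
    # -> (4.0, 8.0, 'slow-cheap'): cheapest choice that still meets the deadline
```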

    Robust applications in time-shared distributed systems


    The Impact of Novel Computing Architectures on Large-Scale Distributed Web Information Retrieval Systems

    Web search engines are the most popular means of interaction with the Web. Realizing a search engine that scales to the size of the Web presents many challenges. Fast crawling technology is needed to gather the Web documents. Indexing has to process hundreds of gigabytes of data efficiently. Queries have to be handled quickly, at a rate of thousands per second. As a solution, within a datacenter, services are built up from clusters of common homogeneous PCs. However, Information Retrieval (IR) has to face the issues raised by the growing amount of Web data as well as the growing number of users. In response to these issues, cost-effective specialized hardware is available nowadays. In our opinion, this hardware is ideal for migrating distributed IR systems to computer clusters comprising heterogeneous processors in order to meet their need for computing power. Toward this end, we introduce K-model, a computational model for properly evaluating algorithms designed for such hardware. We study the impact of K-model's rules on algorithm design. To evaluate the benefits of using K-model, we compare the complexity of solutions built with our purpose-designed techniques against that of existing ones. Although in theory the competing solutions are more efficient than ours, our solutions empirically prove faster than the state-of-the-art implementations, demonstrating the value of K-model.
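
    To give a flavour of what evaluating an algorithm against such a hardware model means, the sketch below scores a memory access pattern by how many SIMD-wide transactions it needs. This is a simplified, generic coalescing metric of the kind such models capture; it is not the formal definition of K-model.

```python
# Generic illustration of a hardware-model-style cost metric: count how many
# memory transactions a group of k lockstep threads needs for one access each.
# Contiguous (coalesced) accesses cost one transaction per group; scattered
# accesses cost up to k. NOT the formal K-model, only the flavour of analysis.
def memory_transactions(addresses, k=32, segment=32):
    """Cost, in memory transactions, of one access per thread for k-wide groups."""
    cost = 0
    for group_start in range(0, len(addresses), k):
        group = addresses[group_start:group_start + k]
        segments = {addr // segment for addr in group}
        cost += len(segments)
    return cost


if __name__ == "__main__":
    coalesced = list(range(128))               # thread i reads element i
    strided = [i * 32 for i in range(128)]     # thread i reads element 32*i
    print(memory_transactions(coalesced))      # 4   (one transaction per group)
    print(memory_transactions(strided))        # 128 (one transaction per thread)
```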

    An interface for security-policy-based search refinement in grid services environments

    Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Ciência da Computação. Grid computing is a form of distributed computing whose main focus is coordinated, large-scale resource sharing and problem solving in dynamic, multi-institutional virtual organizations. Such sharing, however, must be tightly controlled in order to guarantee the security of the resources involved. This work is based on the OGSA (Open Grid Services Architecture) specification proposed by the GGF (Global Grid Forum), and in particular on Globus Toolkit 3, which implements it. It proposes an extension to the resource monitoring and discovery module (MDS) that filters the returned results based on the user's attributes and the resource's policies.
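
    The filtering step described here can be pictured roughly as follows: resource entries returned by a discovery query are kept only if the requesting user's attributes satisfy each resource's policy. The attribute names and policy format below are invented for illustration; they are not the actual schema of the proposed MDS extension.

```python
# Toy illustration of policy-based filtering of discovery results: keep only
# resources whose access policy is satisfied by the user's attributes.
# Attribute names and policy structure are invented for illustration.
def satisfies(policy, user_attrs):
    """A policy is a dict mapping an attribute to the set of allowed values."""
    return all(user_attrs.get(attr) in allowed for attr, allowed in policy.items())


def filter_results(results, user_attrs):
    return [r["name"] for r in results if satisfies(r["policy"], user_attrs)]


if __name__ == "__main__":
    results = [
        {"name": "storage01", "policy": {"vo": {"physics", "bio"}}},
        {"name": "cluster02", "policy": {"vo": {"physics"}, "role": {"admin"}}},
    ]
    alice = {"vo": "physics", "role": "researcher"}
    print(filter_results(results, alice))  # ['storage01']
```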

    Component performance modeling and scheduling strategies on grids

    Joint doctorate: Institut für Roboterforschung Dortmund and the University of Pisa

    Design and optimization of optical grids and clouds


    A distributed intelligent network based on CORBA and SCTP

    The telecommunications services marketplace is undergoing radical change due to the rapid convergence and evolution of telecommunications and computing technologies. Traditionally, telecommunications service providers have delivered network services through Intelligent Network (IN) platforms. The IN may be characterised as envisioning centralised processing of distributed service requests from a limited number of quasi-proprietary nodes, with inflexible connections to the network management system and third-party networks. The nodes are inter-linked by the operator's highly reliable but expensive SS7 network. To leverage this technology as the core of new multimedia services, several key technical challenges must be overcome. These include: integration of the IN with new technologies for service delivery, enhanced integration with network management services, enabling third-party service providers, and reducing operating costs by using more general-purpose computing and networking equipment. In this thesis we present a general architecture that defines the framework and techniques required to realise an open, flexible, middleware (CORBA)-based distributed intelligent network (DIN). This extensible architecture naturally encapsulates the full range of traditional service network technologies, for example IN (fixed network), GSM-MAP and CAMEL. Fundamental to this architecture are mechanisms for inter-working with the existing IN infrastructure, to enable gradual migration within a domain and inter-working between IN and DIN domains. The DIN architecture complements current research on third-party service provision, service management, and integration with Internet-based servers. Given the dependence of such a distributed service platform on the transport network that links computational nodes, this thesis also includes a detailed study of the emerging IP-based telecommunications transport protocol of choice, the Stream Control Transmission Protocol (SCTP). In order to comply with the rigorous performance constraints of this domain, prototyping, simulation and analytic modelling of the DIN based on SCTP have been carried out. This includes the first detailed analysis of the operation of SCTP congestion control under a variety of network conditions, leading to a number of suggested improvements in the operation of the protocol. Finally, we describe a new analytic framework for dimensioning networks with competing multi-homed SCTP flows in a DIN. This framework can be used for any multi-homed SCTP network, e.g. one transporting SIP or HTTP.
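
    As a toy picture of the kind of congestion behaviour under study, the sketch below simulates two AIMD (additive-increase, multiplicative-decrease) flows sharing a bottleneck; SCTP's congestion avoidance is AIMD-based, but the parameters and loss model here are simplifications for illustration, not the thesis's analytic framework or SCTP's full state machine.

```python
# Toy simulation of two AIMD flows sharing a bottleneck link. It omits slow
# start, SACK handling, multi-homing and everything else; it only illustrates
# how competing windows converge towards a fair share. Parameters are arbitrary.
def simulate(capacity=100, rounds=200):
    cwnd = [10.0, 60.0]                     # initial windows (packets)
    for _ in range(rounds):
        if sum(cwnd) > capacity:            # bottleneck overflow: both back off
            cwnd = [w / 2 for w in cwnd]    # multiplicative decrease
        else:
            cwnd = [w + 1 for w in cwnd]    # additive increase per round
    return cwnd


if __name__ == "__main__":
    print([round(w, 1) for w in simulate()])  # the two windows end up roughly equal
```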