    A Probabilistic Approach for the System-Level Design of Multi-ASIP Platforms

    Design and Implementation of a System for Data Traffic Management in a Real-Time Processing Farm Operated at 1 MHz

    The majority of contemporary high-energy physics experiments study rare phenomena, which necessitates real-time high-throughput data processing to reduce the raw detector data rate of several Tbyte/s to a rate feasible for storage and detailed analysis. Unique trigger systems select the physical events relevant to the experiment. Typically, data fragments corresponding to the same event and originating from multiple detector data sources need to be assembled in a specific location before being processed further. The resulting communication model can lead to congestion and inefficient system utilization if data are transferred without supervision, since numerous sources attempt to use common interconnect and computing resources concurrently. This thesis deals with the measures taken to ensure congestion-free, load-balanced operation of a real-time trigger farm processing data packets as small as a few kbytes at a megahertz rate. The input data are initially split among multiple data feeds and need to be assembled and processed within a few milliseconds. The processing farm is built around commodity PCs interconnected with a commercial high-speed, low-latency network implementing a torus topology. The thesis presents a system for data traffic management based on a global traffic supervisor and a dedicated control network. The former allocates distributed computing resources dynamically in order to avoid network congestion and to balance the load of the system. The latter communicates supervising information to all data feeds in order to initiate a controlled data transfer. Congestion-free system operation is demonstrated in a farm prototype with an integrated hardware-based implementation of the traffic-shaping system. Based on parameters measured in the prototype, simulation results of a large-scale processing farm are presented. Both the prototype and the simulation results demonstrate that the system is capable of transferring input data initially split among multiple PCI-based feeding nodes, each one transmitting sub-fragments of 128 bytes, to a specific remote shared-memory location at a rate beyond 2 MHz. The obtained results demonstrate the applicability of multicomputer systems based on commodity components for high-rate, low-latency trigger processing, provided that care is taken in organizing the actual data transfers. This organization has to ensure efficient event building and appropriate allocation of the available processing resources.
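
    As an illustration of the traffic-shaping idea, the sketch below shows a central supervisor that assigns each event to a destination node in round-robin order and only when that node has a free buffer, so all feeds push fragments of one event to one place without contending for the same link. The class, buffer counts and scheduling policy are illustrative assumptions, not the hardware-based implementation described in the thesis.

```python
# Minimal sketch (not the thesis implementation) of a global traffic supervisor
# that serializes event-building destinations to avoid congestion.
from collections import deque

class TrafficSupervisor:
    def __init__(self, nodes, buffers_per_node):
        self.free = {n: buffers_per_node for n in nodes}   # free buffer slots per node
        self.ring = deque(nodes)                           # round-robin order

    def assign(self, event_id):
        """Return the destination node for event_id, or None if all nodes are busy."""
        for _ in range(len(self.ring)):
            node = self.ring[0]
            self.ring.rotate(-1)
            if self.free[node] > 0:
                self.free[node] -= 1        # reserve a buffer slot
                return node                 # broadcast (event_id -> node) to all feeds
        return None                         # back-pressure: feeds must wait

    def release(self, node):
        """Called when a node finishes processing an event and frees a buffer."""
        self.free[node] += 1

sup = TrafficSupervisor(nodes=["pc0", "pc1", "pc2"], buffers_per_node=4)
print(sup.assign(event_id=1))   # e.g. 'pc0'
```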

    Design of protocols for high performance in a networked computing environment

    Development of a centralized building management system

    A building management system has user comfort and convenience, as well as reduced energy consumption, as its main goals. To accomplish this, it is necessary to integrate sensors and actuators in order to control and retrieve information about the physical processes of a building. These processes include control over the illumination and temperature of a room, and even access control. This information, once processed, allows the electronic and mechanical systems of a building, such as HVAC and lighting, to be controlled more intelligently and efficiently, while also reducing energy expenditure. The emergence of the IoT has increased the number of low-level devices in these systems, thanks to their reduced cost, increased performance and improved connectivity. To make better use of this new paradigm, a modern system with multi-protocol capabilities is required, along with tools for data processing and presentation. Therefore, the most relevant industrial and building automation technologies were studied in order to define a modern, IoT-compatible architecture and to choose the software platforms that constitute it. InfluxDB, EdgeX Foundry and Node-Red were selected for the database, gateway and dashboard, respectively, as they most closely matched the requirements set. A demonstrator was then developed to assess the operation of a system built on these technologies and to evaluate EdgeX's performance in terms of jitter and latency. The results show that, although versatile and complete, the platform underperforms for real-time applications and for workloads requiring high sensor reading rates.
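
    A minimal sketch of the kind of latency and jitter measurement described above, assuming a stubbed read_sensor() in place of a real request to the EdgeX gateway; the polling period and numbers are illustrative, not the dissertation's benchmark.

```python
# Estimate the latency of individual sensor reads and the jitter of the polling
# period. read_sensor() is a stand-in for the gateway round trip.
import time
import statistics

def read_sensor():
    time.sleep(0.002)          # placeholder for the gateway round trip
    return 21.5                # dummy temperature value

PERIOD = 0.010                 # 10 ms polling period (illustrative)
latencies, timestamps = [], []

for _ in range(200):
    t0 = time.perf_counter()
    read_sensor()
    t1 = time.perf_counter()
    latencies.append(t1 - t0)
    timestamps.append(t0)
    time.sleep(PERIOD)

intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
jitter = statistics.pstdev(intervals)           # spread of the effective period
print(f"mean latency: {statistics.mean(latencies)*1e3:.2f} ms")
print(f"jitter (stddev of period): {jitter*1e3:.2f} ms")
```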

    Reliable and energy efficient resource provisioning in cloud computing systems

    Cloud computing has revolutionized the Information Technology sector by giving computing the perspective of a service. Cloud services can be accessed through easy-to-use portals by users who know nothing about the underlying system. To provide such an abstract view, cloud computing systems have to perform many complex operations besides managing a large underlying infrastructure. These complex operations confront service providers with many challenges, such as security, sustainability, reliability, energy consumption and resource management. Among these, reliability and energy consumption are the two key challenges addressed in this thesis because of their conflicting nature. Current solutions focus either on reliability techniques or on energy efficiency methods, yet mechanisms providing reliability in cloud computing systems can worsen energy consumption. Adding backup resources and running replicated systems provide strong fault tolerance but also increase energy consumption. Reducing energy consumption by running resources at low power scaling levels or by reducing the number of active but idle resources, such as backups, reduces system reliability. This creates a critical trade-off between the two metrics, which is investigated in this thesis. To address this problem, the thesis presents novel resource management policies that provision the best resources in terms of reliability and energy efficiency and allocate them to suitable virtual machines. A mathematical framework capturing the interplay between reliability and energy consumption is also proposed, together with a formal method to calculate the finishing time of tasks running in a cloud computing environment affected by independent and correlated failures. The proposed policies adopt various fault tolerance mechanisms while satisfying constraints such as task deadlines and utility values. The thesis also provides a novel failure-aware VM consolidation method, which takes the failure characteristics of resources into consideration before performing VM consolidation. All the proposed resource management methods are evaluated using real failure traces collected from various distributed computing sites. To perform the evaluation, a cloud computing framework, 'ReliableCloudSim', capable of simulating failure-prone cloud computing systems was developed. The key research findings and contributions of this thesis are:
    1. If the emphasis is placed only on energy optimization without considering reliability in a failure-prone cloud computing environment, the results can be contrary to intuitive expectations: rather than reducing energy consumption, the system ends up consuming more energy due to the losses incurred by failure overheads.
    2. While performing VM consolidation in a failure-prone cloud computing environment, a significant improvement in both energy efficiency and reliability can be achieved by considering the failure characteristics of physical resources.
    3. By considering the correlated occurrence of failures during resource provisioning and VM allocation, service downtime or interruption is reduced significantly, by 34%, in comparison to environments that assume independent occurrence of failures. Moreover, as measured by the proposed mathematical model, the ratio of reliability to energy consumption is improved by 14%.
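
    The counter-intuitive first finding can be illustrated with a toy model, which is an assumption for illustration only and not the thesis's mathematical framework: a task needing T seconds of uninterrupted work under Poisson failures with rate lam and restart-from-scratch recovery has expected completion time (e^(lam*T) - 1)/lam, so stretching T by running slower at lower power can increase the total expected energy.

```python
# Illustrative sketch only: expected completion time and energy of a task that
# must run T seconds without interruption, with failures arriving at rate lam
# (Poisson) and restart from scratch. Lower-power, slower execution stretches T
# and can end up costing MORE energy once failure overheads are included.
import math

def expected_time(T, lam):
    """E[completion time] with restart-from-scratch under Poisson failures."""
    return (math.exp(lam * T) - 1.0) / lam

def expected_energy(T, power, lam):
    return power * expected_time(T, lam)

lam = 1.0 / 3600.0                 # one failure per hour on average (assumed)
work = 2 * 3600.0                  # 2 hours of work at full speed

fast = expected_energy(T=work,       power=200.0, lam=lam)  # full speed, 200 W
slow = expected_energy(T=work / 0.6, power=120.0, lam=lam)  # DVFS-scaled: 60% speed, 120 W

print(f"fast: {fast/3.6e6:.2f} kWh   slow: {slow/3.6e6:.2f} kWh")
```

    With these assumed numbers the slower, lower-power run consumes roughly two and a half times more energy than the full-speed run, because the longer exposure to failures dominates the per-second power savings.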

    Framework for simulation of fault tolerant heterogeneous multiprocessor system-on-chip

    Due to the ever-growing demand for high-performance data computation, current uniprocessor systems fall short of meeting critical real-time performance demands in (i) high throughput, (ii) faster processing time, (iii) low power consumption, (iv) design cost and time-to-market and, more importantly, (v) fault-tolerant processing. Shifting the design trend to MPSOCs is a work-around to meet these challenges. However, developing efficient fault-tolerant task scheduling and mapping techniques requires optimized algorithms that consider the various scenarios in multiprocessor environments. Several works in the past few years have proposed simulation-based frameworks for scheduling and mapping strategies that considered homogeneous systems and error avoidance techniques. However, most of these works inadequately describe today's MPSOC trend because they focused on the network domain and did not consider heterogeneous systems with fault-tolerant capabilities. To address these issues, this work proposes (i) a performance-driven scheduling algorithm (PD SA) based on the simulated annealing technique, (ii) an optimized Homogenous-Workload-Distribution (HWD) multiprocessor task mapping algorithm which considers the dynamic workload on processors, and (iii) a dynamic fault-tolerant (FT) scheduling/mapping algorithm for robust application processing. The implementation is accompanied by a heterogeneous multiprocessor system simulation framework developed in SystemC/C++. The proposed framework reads user data, sets up the architecture, executes the input task graph and finally generates performance variables. This framework addresses the shortcomings of previous work with respect to (i) architectural flexibility in the number of processors, processor types and topology, (ii) optimized scheduling and mapping strategies, and (iii) fault-tolerant processing capability, focusing more on the computational domain. A set of random as well as application-specific STG benchmark suites was run on the simulator to evaluate and verify the performance of the proposed algorithms. The simulations covered (i) scheduling policy evaluation, (ii) fault tolerance evaluation, (iii) topology evaluation, (iv) number-of-processors evaluation, (v) mapping policy evaluation and (vi) processor type evaluation. The results showed that the PD scheduling algorithm performed marginally better than EDF with respect to utilization, execution time and power. The dynamic fault-tolerant implementation proved to be a viable and efficient strategy for meeting real-time constraints without causing significant system performance degradation. The torus topology gave better performance than the tile topology with respect to task completion time and power. Executing highly heterogeneous tasks resulted in higher power consumption and execution time. Finally, increasing the number of processors decreased average utilization but improved task completion time and power consumption. Based on the simulation results, the system designer can compare trade-offs between various design choices with respect to the performance requirement specifications. In general, designing an optimized multiprocessor scheduling and mapping strategy with added fault-tolerant capability will enable the development of efficient multiprocessor systems that meet future performance goals. This is the substance of this work.
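
    For illustration, the sketch below applies plain simulated annealing to a task-to-processor mapping problem on heterogeneous processors; it is not the PD SA algorithm itself, and the execution-time matrix and cooling parameters are made-up assumptions.

```python
# Generic simulated-annealing task-to-processor mapping: minimize the makespan
# of independent tasks, where exec_time[t][p] is the runtime of task t on
# processor p. All numbers are illustrative.
import math
import random

exec_time = [
    [4, 6, 3],     # task 0 on processors 0..2
    [2, 5, 7],
    [8, 3, 4],
    [5, 5, 2],
]
NUM_TASKS, NUM_PROCS = len(exec_time), len(exec_time[0])

def makespan(mapping):
    load = [0.0] * NUM_PROCS
    for t, p in enumerate(mapping):
        load[p] += exec_time[t][p]
    return max(load)

def anneal(t_start=10.0, t_end=0.01, alpha=0.95, moves=50):
    mapping = [random.randrange(NUM_PROCS) for _ in range(NUM_TASKS)]
    best, best_cost = mapping[:], makespan(mapping)
    temp = t_start
    while temp > t_end:
        for _ in range(moves):
            cand = mapping[:]
            cand[random.randrange(NUM_TASKS)] = random.randrange(NUM_PROCS)
            delta = makespan(cand) - makespan(mapping)
            if delta < 0 or random.random() < math.exp(-delta / temp):
                mapping = cand                      # accept better or, occasionally, worse moves
                if makespan(mapping) < best_cost:
                    best, best_cost = mapping[:], makespan(mapping)
        temp *= alpha                               # cool down
    return best, best_cost

print(anneal())   # e.g. ([2, 0, 1, 2], 5.0)
```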

    A Multi-Layered Secure Framework for Vehicular Systems

    In recent years, significant developments have been introduced in the vehicular domain, evolving the vehicle into a network of many embedded systems distributed throughout the car, known as Electronic Control Units (ECUs). Each of these ECUs runs a number of software components that collaborate with each other to perform various vehicle functions. Modern vehicles are also equipped with wireless communication technologies, such as WiFi and Bluetooth, giving them the capability to interact with other vehicles and roadside infrastructure. While these improvements have increased the safety of the automotive system, they have vastly expanded the attack surface of the vehicle and opened the door to new potential security risks. The situation is made worse by the lack of security mechanisms in the vehicular system, which allows a compromise of one of the non-critical sub-systems to escalate and threaten the safety of the entire vehicle and its passengers. This dissertation focuses on providing a comprehensive framework that ensures the security of the vehicular system during its whole life-cycle. The framework aims to prevent cyber-attacks against different components by ensuring secure communication among them, to detect attacks that were not prevented successfully, and finally to respond to these attacks properly in order to ensure a high degree of safety and stability of the system.
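
    As a minimal illustration of the prevention layer, the sketch below authenticates messages exchanged between ECUs with a truncated HMAC so that a compromised non-critical node cannot inject forged frames. This is a generic example, not the dissertation's framework; key provisioning and replay protection are only hinted at by the counter field.

```python
# Authenticate inter-ECU messages with a truncated HMAC-SHA256 tag.
import hmac
import hashlib

SHARED_KEY = b"per-link key provisioned at manufacturing time"  # assumption

def send_frame(payload: bytes, counter: int) -> bytes:
    msg = counter.to_bytes(4, "big") + payload
    tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()[:8]  # truncated MAC
    return msg + tag

def receive_frame(frame: bytes) -> bytes:
    msg, tag = frame[:-8], frame[-8:]
    expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: drop frame and raise an alert")
    return msg[4:]                                                # strip counter

frame = send_frame(b"\x01\x42", counter=7)
print(receive_frame(frame))    # b'\x01B'
```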

    Conserve and Protect Resources in Software-Defined Networking via the Traffic Engineering Approach

    Software Defined Networking (SDN) is revolutionizing the architecture and operation of computer networks and promises more agile and cost-efficient network management. SDN centralizes the network control logic and separates the control plane from the data plane, thus enabling flexible management of networks. A network based on SDN consists of a data plane and a control plane. To assist in the management of devices and data flows, a network also has an independent monitoring plane. These coexisting network planes rely on various types of resources, such as the bandwidth used to transmit monitoring data, the energy spent powering data forwarding devices, and the computational resources needed to control the network. Unwise management, or even abusive utilization, of these resources degrades network performance and increases the Operating Expenditure (Opex) of the network owner. Conserving and protecting limited network resources is thus among the key requirements for efficient networking. However, the heterogeneity of network hardware and traffic workloads expands the configuration space of SDN, making it a challenging task to operate a network efficiently. Furthermore, existing approaches usually lack the capability to automatically adapt network configurations to handle network dynamics and diverse optimization requirements. Additionally, a centralized SDN controller has to run in an environment protected against certain attacks. This thesis builds upon the centralized management capability of SDN and uses cross-layer network optimizations to perform joint traffic engineering, e.g., routing, hardware and software configurations. The overall goal is to overcome the management complexity of conserving and protecting resources in multiple functional planes of SDN in the face of network heterogeneity and system dynamics. The thesis presents four contributions: (1) resource-efficient network monitoring, (2) resource-efficient data forwarding, (3) self-adaptive algorithms to improve network resource efficiency, and (4) mitigation of abusive usage of network control resources. The first contribution is a resource-efficient network monitoring solution. We consider one specific type of virtual network management function: flow packet inspection. This type of network monitoring application requires duplicating packets of target flows and sending them to packet monitors for in-depth analysis. To avoid competition for resources between the original and duplicated data, network operators can transmit the data flows through physically (e.g., different communication media) or virtually (e.g., separate network slices) separated channels with different resource consumption properties. We propose REMO, Resource Efficient distributed Monitoring, to reduce the overall network resource consumption incurred by both types of data by jointly considering the locations of the packet monitors, the selection of devices forking the data packets, and flow path scheduling strategies. In the second contribution, we investigate the resource efficiency problem in hybrid, server-centric data center networks equipped with both traditional wired connections (e.g., InfiniBand or Ethernet) and advanced high-data-rate wireless links (e.g., directional 60 GHz wireless technology). The configuration space of hybrid SDN equipped with both wired and wireless communication technologies is massive due to the complexity introduced by device heterogeneity. To tackle this problem, we present the ECAS framework, which reduces power consumption while maintaining network performance. Approaches based on optimization models and heuristic algorithms are the traditional way to reduce operational and facility resource consumption in SDN, but they are either difficult to solve directly or specific to a particular problem space. As the third contribution, we investigate the use of Deep Reinforcement Learning (DRL) to improve the adaptivity of the management modules for network resource and data flow scheduling. The goal of the DRL agent is to reduce the power consumption of SDN networks without severely degrading network performance. The fourth contribution is a protection mechanism based on flow rate limiting to mitigate abusive usage of control plane resources. Due to the centralized architecture of SDN and its mechanism for handling new data flows, the network controller can become a point of failure under crafted cyber-attacks, especially the Control-Plane-Saturation (CPS) attack. We propose an In-Network Flow mAnagement Scheme (INFAS) that effectively reduces the generation of malicious control packets, depending on the parameters configured for the proposed mitigation algorithm. In summary, the contributions of this thesis address various unique challenges in constructing resource-efficient and secure SDN. This is achieved by designing and implementing novel, intelligent models and algorithms that configure networks and perform network traffic engineering in the protected centralized network controller.
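
    To illustrate the flow-rate-limiting idea behind the fourth contribution, the sketch below applies a per-port token bucket to new-flow (packet-in) requests before they reach the controller; the rates, bucket sizes and helper names are assumptions for illustration, not the INFAS design.

```python
# Per-port token-bucket rate limiting of new-flow requests, protecting the
# control plane from saturation by a single abusive port.
import time

class TokenBucket:
    def __init__(self, rate_per_s: float, burst: int):
        self.rate = rate_per_s
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True        # forward the request to the controller
        return False           # drop or defer: protects the control plane

limiters = {}                  # one bucket per ingress port

def handle_new_flow(port: int) -> bool:
    bucket = limiters.setdefault(port, TokenBucket(rate_per_s=50.0, burst=20))
    return bucket.allow()

print(handle_new_flow(1))      # True while the port stays within its budget
```

    In a deployment, a check like handle_new_flow would run in the switch or at an edge element before table-miss events are forwarded, so legitimate ports keep their budget while a saturating port is throttled.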