
    A data-oriented network architecture

    In the 25 years since becoming commercially available, the Internet has grown into a global communication infrastructure connecting a significant part of mankind and has become an important part of modern society. Its impressive growth has been fostered by innovative applications, many of which were completely unforeseen by the Internet's inventors. While fully acknowledging the ingenuity and creativity of application designers, it is equally impressive how little the core architecture of the Internet has evolved during this time. However, the ever-evolving applications and growing importance of the Internet have resulted in an increasing discordance between the Internet's current use and its original design. In this thesis, we focus on four sources of discomfort caused by this divergence. First, the Internet was developed around host-to-host applications, such as telnet and ftp, but the vast majority of its current usage is service access and data retrieval. Second, while the freedom to connect from any host to any other host was a major factor behind the success of the Internet, it provides little protection for connected hosts today. As a result, distributed denial of service attacks against Internet services have become a common nuisance and are difficult to resolve within the current architecture. Third, Internet connectivity is becoming nearly ubiquitous and increasingly reaches mobile devices; moreover, it is expected to extend even to the most extreme places. Hence, applications' view of the network has changed radically: it is now commonplace that they are offered intermittent connectivity at best and are expected to be smart enough to use heterogeneous network technologies. Finally, modern networks deploy so-called middleboxes both to improve performance and to provide protection. In doing so, however, the middleboxes have to impose themselves between the communication end-points, which is against the design principles of the original Internet and a source of complications both for the management of networks and for the design of application protocols. In this thesis, we design a clean-slate network architecture that better fits the current use of the Internet. We present a name resolution system based on name-based routing. It matches the service-access and data-retrieval oriented usage of the Internet and takes network-imposed middleboxes properly into account. We then propose modest addressing-related changes to the network layer as a remedy for denial of service attacks. Finally, we take steps towards a data-oriented communications API that decouples applications from the network stack better than the original Sockets API does. The improved decoupling both simplifies applications and allows them to be unaffected by evolving network technologies: in this architecture, coping with intermittent connectivity and heterogeneous network technologies is a burden of the network stack.
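
    As a rough illustration of the decoupling such a data-oriented API aims at, the following minimal sketch shows an application that publishes and fetches data by name instead of opening host-to-host connections. The names publish and fetch and the in-memory "stack" are purely illustrative assumptions, not the API defined in the thesis.

        # Hypothetical sketch of a data-oriented communications API: applications name
        # the data they want, and the network stack decides when and over which
        # technology to move it.
        from dataclasses import dataclass, field
        from typing import Dict, Optional

        @dataclass
        class DataOrientedStack:
            """Toy in-memory stand-in for a name-based network stack."""
            store: Dict[str, bytes] = field(default_factory=dict)

            def publish(self, name: str, data: bytes) -> None:
                # The stack, not the application, would choose routes, caches and timing.
                self.store[name] = data

            def fetch(self, name: str) -> Optional[bytes]:
                # Could be served from a cache or over any available connectivity;
                # the application never names a host or opens a socket.
                return self.store.get(name)

        stack = DataOrientedStack()
        stack.publish("/weather/helsinki/today", b"cloudy, 3 C")
        print(stack.fetch("/weather/helsinki/today"))

    In such a model, intermittent connectivity and heterogeneous network technologies are handled below this interface, so application code like the above would not need to change when the underlying networks do.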

    Analysis and design of power delivery networks exploiting simulation tools and numerical optimization techniques

    Higher performance is demanded of computing systems year after year, driving the digital industry to compete fiercely to offer the fastest computer system at the lowest cost. In addition, as computing system performance grows, power delivery network (PDN) and power integrity (PI) design become increasingly relevant due to the faster speeds and greater parallelism required to achieve that performance growth. The largest data throughput at the lowest power consumption is a common goal for most commercial computing systems. As a consequence of this performance growth and of power delivery tradeoffs, the complexity involved in analyzing and designing PDNs in digital systems has increased. This complexity leads to longer design cycle times when traditional design tools are used. For this reason, more efficient design methods are needed in order to keep designing and launching products to market quickly. This trend pushes PDN designers to look for methodologies that simplify analysis and reduce design cycle times. The main objective of this Master's thesis is to propose alternative methods that exploit reliable simulation approaches and efficient numerical optimization techniques to analyze and design PDNs and thereby ensure power integrity. The thesis explores the use of circuit models and electromagnetic (EM) field solvers in combination with numerical optimization methods, including parameter extraction (PE) formulations. It also establishes a sound basis for using space mapping (SM) methodologies in future developments, so that the advantages of the most accurate and powerful models, such as 3D full-wave EM simulators, can be exploited while preserving the simplicity and low computational cost of analytical, circuit, and empirical models.
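
    As a hedged sketch of what a parameter extraction (PE) formulation can look like in practice, the example below fits a simple series R-L-C decoupling-capacitor model to a target impedance-versus-frequency curve with a least-squares optimizer. The target data is synthetic and the model is deliberately simple; the thesis combines such extractions with EM-solver data and space mapping, which is not reproduced here.

        # Minimal parameter-extraction sketch: fit a series R-L-C model to a target
        # PDN impedance response. In practice the target would come from measurements
        # or a full-wave EM simulation rather than being synthesized as done here.
        import numpy as np
        from scipy.optimize import least_squares

        f = np.logspace(4, 8, 200)                  # 10 kHz .. 100 MHz
        w = 2 * np.pi * f

        def z_rlc(params, w):
            r, l, c = params
            return r + 1j * w * l + 1.0 / (1j * w * c)

        true_params = (5e-3, 1e-9, 100e-6)          # 5 mOhm, 1 nH, 100 uF
        z_target = np.abs(z_rlc(true_params, w))

        def residuals(params):
            # Fit on a log scale so the high-impedance region does not dominate.
            return np.log10(np.abs(z_rlc(params, w))) - np.log10(z_target)

        fit = least_squares(residuals, x0=(1e-2, 5e-9, 10e-6),
                            bounds=([1e-4, 1e-11, 1e-7], [1, 1e-7, 1e-3]))
        print("extracted R, L, C:", fit.x)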

    Streamlining the Design and Use of Array Coils for In Vivo Magnetic Resonance Imaging of Small Animals

    Small-animal models such as rodents and non-human primates play an important pre-clinical role in the study of human disease, with particular application to cancer, cardiovascular, and neuroscience models. To study these animal models, magnetic resonance imaging (MRI) is advantageous as a non-invasive technique due to its versatile contrast mechanisms, large and flexible field of view, and straightforward comparison/translation to human applications. However, signal-to-noise ratio (SNR) limits the practicality of achieving the high resolution necessary to image the smaller features of animals in an amount of time suitable for in vivo animal MRI. In human MRI, it is standard to increase SNR through the use of array coils; however, the design, construction, and use of array coils for animal imaging remain challenging due to copper-loss-related issues from small array elements and the design complexity of incorporating multiple elements and associated array hardware in a limited space. In this work, a streamlined strategy for animal coil array design, construction, and use is presented and demonstrated for multiple animal models. New matching network circuits, materials, assembly techniques, body-restraining systems, and integrated mechanical designs are demonstrated for streamlining high-resolution MRI of both anesthetized and awake animals. The increased SNR achieved with the arrays is shown to enable high-resolution in vivo imaging of mice and common marmosets with a reduced time for experimental setup.
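
    As a toy illustration of the combining gain behind array coils, the sketch below compares a single element with a root-sum-of-squares (RSS) combination of several elements, assuming identical sensitivities and independent, uncorrelated noise. RSS is a common array reconstruction used here purely for illustration; it is not necessarily the reconstruction used in this work, and the model ignores coil sensitivity profiles and noise correlation.

        # Toy demonstration of array-coil combining: N elements seeing the same
        # signal with independent noise are combined by root-sum-of-squares (RSS).
        import numpy as np

        rng = np.random.default_rng(0)
        n_coils, n_pixels, sigma = 8, 100_000, 0.5

        # Each element sees the same unit signal plus independent thermal noise.
        coil_images = 1.0 + sigma * rng.standard_normal((n_coils, n_pixels))

        single = coil_images[0]
        rss = np.sqrt((coil_images ** 2).sum(axis=0))   # RSS combination

        print(f"single element SNR ~ {single.mean() / single.std():.1f}")
        print(f"RSS-combined  SNR ~ {rss.mean() / rss.std():.1f}")   # markedly higher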

    NASA SpaceCube Next-Generation Artificial-Intelligence Computing for STP-H9-SCENIC on ISS

    Artificial Intelligence (AI) and Machine Learning (ML) capabilities have recently seen an exponential increase in interest from academia and industry and can be a disruptive, transformative development for future missions. Specifically, AI/ML concepts for edge computing can be integrated into future missions for autonomous operation, constellation missions, and onboard data analysis. However, using commercial AI software frameworks onboard spacecraft is challenging because traditional radiation-hardened processors and common spacecraft processors cannot provide the onboard processing capability necessary to deploy complex AI models effectively. Advantageously, embedded AI microchips being developed for the mobile market demonstrate remarkable capability and follow size, weight, and power constraints similar to those that would be imposed on a space-based system. Unfortunately, many of these devices have not been qualified for use in space. Therefore, Space Test Program - Houston 9 - SpaceCube Edge-Node Intelligent Collaboration (STP-H9-SCENIC) will demonstrate in flight cutting-edge AI applications on multiple space-based devices for next-generation onboard intelligence. SCENIC will characterize several embedded AI devices in a relevant space environment and will provide NASA and the DoD with flight heritage data and lessons learned for developers seeking to enable AI/ML on future missions. Finally, SCENIC also includes new CubeSat form-factor GPS and SDR cards for guidance and navigation.

    An Open Framework for Highly Concurrent Real-Time Hardware-in-the-Loop Simulation

    Hardware-in-the-loop (HIL) real-time simulation is becoming a significant tool in prototyping complex, highly available systems. The HIL approach permits testing of hardware prototypes of components that would be extremely costly or difficult to test in the deployed environment. In power system simulation, the key issues are the ability to wrap the systems of equations (such as partial differential equations) describing the deployed environment into real-time software models, to provide low synchronization overhead between the hardware and software, and to reduce reliance on proprietary platforms. This paper introduces an open-source HIL simulation framework that can be ported to any standard Unix-like system on any shared-memory multiprocessor computer, requires minimal operating system scheduler controls, enables an asynchronous user interface, and allows for an arbitrary number of secondary control components. The framework is implemented in a soft real-time HIL simulation of a power transmission network with physical Flexible AC Transmission System (FACTS) devices. Performance results are given that demonstrate the low synchronization overhead of the framework.
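
    The following minimal, single-threaded sketch illustrates the kind of soft real-time stepping loop at the core of such a framework: each period the software model advances one step and exchanges samples with the hardware under test, and missed deadlines are counted rather than treated as fatal. The read/write placeholder functions are assumptions for illustration; the actual framework runs the model, hardware I/O, and user interface concurrently over shared memory.

        # Sketch of a soft real-time HIL stepping loop with a fixed period.
        import time

        STEP = 0.001  # 1 ms simulation period

        def read_from_hardware() -> float:      # placeholder for the FACTS device input
            return 0.0

        def write_to_hardware(value: float):    # placeholder for the actuation output
            pass

        def advance_model(state: float, u: float, dt: float) -> float:
            # Stand-in for integrating the power-network equations one time step.
            return state + dt * (u - state)

        def run(duration_s: float = 1.0):
            state, next_deadline, overruns = 0.0, time.monotonic(), 0
            for _ in range(int(duration_s / STEP)):
                u = read_from_hardware()
                state = advance_model(state, u, STEP)
                write_to_hardware(state)
                next_deadline += STEP
                slack = next_deadline - time.monotonic()
                if slack > 0:
                    time.sleep(slack)           # hold the real-time period
                else:
                    overruns += 1               # soft real time: count missed deadlines
            print(f"missed deadlines: {overruns}")

        run()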

    ZeroComm: Decentralized, Secure and Trustful Group Communication

    In the context of computer networks, decentralization is a network architecture that distributes both workload and control of a system among a set of coequal participants. Applications based on such networks enhance trust involved in communication by eliminating the external authorities with self-interests, including governments and tech companies. The decentralized model delegates the ownership of data to individual users and thus mitigates undesirable behaviours such as harvesting personal information by external organizations. Consequently, decentralization has been adopted as the key feature in the next generation of the Internet model which is known as Web 3.0. DIDComm is a set of abstract protocols which enables secure messaging with decentralization and thus serves for the realization of Web 3.0 networks. It standardizes and transforms existing network applications to enforce secure, trustful and decentralized communication. Prior work on DIDComm has only been restricted to pair-wise communication and hence it necessitates a feasible strategy for adapting the Web 3.0 concepts in group-oriented networks. Inspired by the demand for a group communication model in Web 3.0, this study presents ZeroComm which preserves decentralization, security and trust throughout the fundamental operations of a group such as messaging and membership management. ZeroComm is built atop the publisher-subscriber pattern which serves as a messaging architecture for enabling communication among multiple members based on the subjects of their interests. This is realized in our implementation through ZeroMQ, a low-level network library that facilitates the construction of advanced and distributed messaging patterns. The proposed solution leverages DIDComm protocols to deliver safe communication among group members at the expense of performance and efficiency. ZeroComm offers two different modes of group communication based on the organization of relationships among members with a compromise between performance and security. Our quantitative analysis shows that the proposed model performs efficiently for the messaging operation whereas joining a group is a relatively exhaustive procedure due to the establishment of secure and decentralized relationships among members. ZeroComm primarily serves as a low-level messaging framework but can be extended with advanced features such as message ordering, crash recovery of members and secure routing of messages.
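
    The sketch below shows only the underlying publisher-subscriber messaging pattern over ZeroMQ (using the pyzmq bindings); ZeroComm itself layers DIDComm-based encryption, relationships, and membership management on top of this, which is omitted here.

        # Plain ZeroMQ pub/sub over TCP: members receive messages on the topics
        # (subjects) they subscribe to. Requires pyzmq (pip install pyzmq).
        import threading
        import time
        import zmq

        def publisher():
            ctx = zmq.Context.instance()
            pub = ctx.socket(zmq.PUB)
            pub.bind("tcp://127.0.0.1:5556")
            time.sleep(0.2)                                 # let the subscription propagate
            pub.send_multipart([b"group.demo", b"hello group"])

        def subscriber():
            ctx = zmq.Context.instance()
            sub = ctx.socket(zmq.SUB)
            sub.connect("tcp://127.0.0.1:5556")
            sub.setsockopt(zmq.SUBSCRIBE, b"group.demo")    # subject of interest
            topic, payload = sub.recv_multipart()
            print(topic.decode(), payload.decode())

        t = threading.Thread(target=subscriber)
        t.start()
        publisher()
        t.join()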

    NoCs: a Short History of Success and a Long Future

    The broad application of NoCs in IC design has been enabled by NoC synthesis tools that evolved from university prototypes to full commercial synthesis flows. NoC embodiments are ubiquitously present in circuits and systems. As systems evolve to include new components and features, NoCs will play an even more important role as the smart interconnect that can enable heterogeneity. Thus this field will evolve both in the diversity of implementations and in the search for higher-performance and lower-power solutions.

    Co-simulation techniques based on virtual platforms for SoC design and verification in power electronics applications

    In recent decades, investment in the energy sector has increased considerably. Today, numerous companies are developing equipment such as power converters and electrical machines with state-of-the-art control systems. The current trend is to use System-on-Chips and Field Programmable Gate Arrays to implement the entire control system. These devices enable more complex and efficient control algorithms, improving equipment efficiency and enabling the integration of renewable systems into the electrical grid. However, the complexity of the control systems has also increased considerably, and with it the difficulty of their verification. Hardware-in-the-loop (HIL) systems have emerged as a solution for non-destructive verification of power equipment, avoiding accidents and costly tests on test benches. HIL systems simulate, in real time, the behavior of the power plant and its interface so that tests with the control board can be carried out in a safe environment. This thesis focuses on improving the verification process of control systems in power electronics applications. The overall contribution is to provide an alternative to the use of HIL for verifying the hardware/software of the control board. The alternative is based on the Software-in-the-loop (SIL) technique and seeks to overcome or address the limitations found in SIL to date. To improve the qualities of SIL, a software tool named COSIL has been developed that allows co-simulating the final implementation and integration of the control system, whether software (CPU), hardware (FPGA), or a mixture of software and hardware, together with its interaction with the power plant. This platform can work at multiple levels of abstraction and includes support for mixed co-simulation in different languages such as C or VHDL. Throughout the thesis, emphasis is placed on improving one of the limitations of SIL: its low simulation speed. Different solutions are proposed, such as the use of software emulators, different levels of abstraction of the software and hardware, or local clocks in the FPGA modules. In particular, an external synchronization mechanism is contributed for the QEMU software emulator, enabling its multi-core emulation. This contribution enables the use of QEMU in virtual co-simulation platforms such as COSIL. The entire COSIL platform, including the use of QEMU, has been evaluated on different types of applications and on a real industrial project. Its use has been critical to developing and verifying the software and hardware of the control system of a 400 kVA converter.
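
    As a generic, hedged sketch of the lock-step synchronization a co-simulation platform of this kind performs, the example below advances a software-side model (standing in for a CPU emulator such as QEMU) and a hardware-side model (standing in for an HDL simulation) by the same time quantum and exchanges signals at every synchronization point. The CpuModel and FpgaModel classes are illustrative stand-ins, not the actual COSIL or QEMU interfaces.

        # Lock-step co-simulation sketch: both sides advance by the same quantum
        # and exchange signals at each synchronization point.
        class CpuModel:
            def __init__(self):
                self.duty = 0.0
            def step(self, dt: float, feedback: float) -> float:
                # Toy control law: drive the plant measurement towards 1.0.
                self.duty += 0.5 * dt * (1.0 - feedback)
                return self.duty

        class FpgaModel:
            def __init__(self):
                self.output = 0.0
            def step(self, dt: float, duty: float) -> float:
                # Toy plant/modulator response to the commanded duty cycle.
                self.output += dt * (duty - self.output)
                return self.output

        def cosimulate(quantum: float = 1e-4, t_end: float = 0.1):
            cpu, fpga, feedback, t = CpuModel(), FpgaModel(), 0.0, 0.0
            while t < t_end:
                duty = cpu.step(quantum, feedback)      # software side advances one quantum
                feedback = fpga.step(quantum, duty)     # hardware side advances the same quantum
                t += quantum
            print(f"final plant output: {feedback:.3f}")

        cosimulate()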