173 research outputs found

    High-level automation of custom hardware design for high-performance computing

    Get PDF
    This dissertation focuses on the efficient generation of custom processors from high-level language descriptions. Our work exploits compiler-based optimizations and transformations in tandem with high-level synthesis (HLS) to build high-performance custom processors. The goal is to offer a common multiplatform, high-abstraction programming interface for heterogeneous compute systems, where the benefits of custom reconfigurable (or fixed) processors can be exploited by application developers. The research presented in this dissertation supports the following thesis: in an increasingly heterogeneous compute environment, it is important to leverage the compute capabilities of each heterogeneous processor efficiently. In the case of FPGA and ASIC accelerators, this can be achieved through HLS-based flows that (i) extract parallelism at granularities coarser than basic blocks, (ii) leverage common high-level parallel programming languages, and (iii) employ high-level source-to-source transformations to generate high-throughput custom processors. First, we propose a novel HLS flow that extracts instruction-level parallelism beyond the boundary of basic blocks from C code. Subsequently, we describe FCUDA, an HLS-based framework for mapping the fine-grained and coarse-grained parallelism of parallel CUDA kernels onto spatial parallelism. FCUDA provides a common programming model for acceleration on heterogeneous devices (i.e., GPUs and FPGAs). Moreover, the FCUDA framework balances multilevel-granularity parallelism synthesis using efficient techniques that leverage fast and accurate estimation models (i.e., it does not rely on lengthy physical implementation tools). Finally, we describe an advanced source-to-source transformation framework for throughput-driven parallelism synthesis (TDPS), which appropriately restructures CUDA kernel code to maximize throughput on FPGA devices.
We have integrated the TDPS framework into the FCUDA flow to enable automatic performance porting of CUDA kernels designed for the GPU architecture onto the FPGA architecture.
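The kernel-to-spatial-parallelism mapping described above can be illustrated with a toy sketch. This is not FCUDA's actual output (FCUDA emits annotated C for HLS tools); the function names and the SAXPY example are invented to show the "thread-loop" idea: the implicit per-thread parallelism of a CUDA kernel is rewritten as explicit nested loops over thread-blocks (coarse grain) and threads within a block (fine grain), which an HLS tool can then unroll or pipeline into parallel hardware.

```python
def saxpy_kernel(tid, a, x, y):
    """Body of a per-thread CUDA-style kernel: y[tid] = a * x[tid] + y[tid]."""
    y[tid] = a * x[tid] + y[tid]

def saxpy_thread_loops(a, x, y, block_dim=4):
    """Serialized form of the kernel launch: thread-blocks become an outer
    loop (coarse-grained parallelism), threads within a block an inner loop
    (fine-grained parallelism that an HLS tool can unroll into hardware)."""
    n = len(x)
    grid_dim = (n + block_dim - 1) // block_dim
    for block in range(grid_dim):          # coarse grain: one block at a time
        for thread in range(block_dim):    # fine grain: HLS-unrollable
            tid = block * block_dim + thread
            if tid < n:                    # guard for partial final block
                saxpy_kernel(tid, a, x, y)
    return y
```

In an actual HLS flow the inner loop would carry unroll/pipeline pragmas; here the serialization merely makes the two granularities of parallelism explicit.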

    Design of Attitude Control Actuators for a Simulated Spacecraft

    Get PDF
    The Air Force Institute of Technology's attitude dynamics simulator, SimSat, is used for hardware-in-the-loop validation of new satellite control algorithms. To provide the capability to test algorithms for control moment gyroscopes, SimSat needed a control moment gyroscope array. The goal of this research was to design, construct, test, and validate a control moment gyroscope array for SimSat. The array was required to interface with SimSat's existing structure, power supply, and electronics. The array was also required to meet maneuver specifications and disturbance rejection specifications. First, the array was designed with initial sizing estimates based on the requirements and vehicle size. Next, the vehicle and control dynamics were modeled to determine control moment gyroscope requirements and provide a baseline for validation. Control moment gyroscopes were then built, calibrated, and installed on the vehicle. The actuators were then validated against the dynamics model. Testing shows minor deviation from the expected behavior as a result of small misalignments from the theoretical design. Once validation was complete, the array was tested against the performance specifications. The performance tests indicated that the control moment gyroscope array is capable of meeting specifications.
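As a sketch of the sizing arithmetic behind such an actuator (illustrative only; the actual SimSat array parameters are not given here), a single-gimbal control moment gyroscope produces torque by rotating the rotor's angular momentum vector rather than changing its magnitude:

```python
import math

def cmg_torque(h, delta, delta_dot):
    """Output torque of a single-gimbal CMG with gimbal axis along z:
    the rotor momentum vector (h*cos(delta), h*sin(delta), 0) is swept
    at gimbal rate delta_dot [rad/s], so tau = dh/dt."""
    return (-h * math.sin(delta) * delta_dot,
             h * math.cos(delta) * delta_dot,
             0.0)

def cmg_torque_magnitude(h, delta_dot):
    """|tau| = h * delta_dot: this torque amplification is why CMGs
    outperform reaction wheels, which must accelerate the rotor itself."""
    return abs(h * delta_dot)
```

For example, a rotor storing 0.5 N·m·s gimballed at 2 rad/s delivers 1 N·m, independent of the gimbal angle.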

    System-level management of hybrid memory hierarchies

    Get PDF
    Unpublished doctoral thesis of the Universidad Complutense de Madrid, Facultad de Informática, Departamento de Arquitectura de Computadoras y Automática, and KU Leuven, Arenberg Doctoral School, Faculty of Engineering Science, defended on 11/05/2017. In electronics and computer science, the term 'memory' generally refers to devices used to store information in appliances ranging from our PCs to hand-held devices, smart appliances, etc. Primary/main memory is used for storage systems that function at high speed (i.e., RAM). Primary memory is often associated with addressable semiconductor memory, i.e., integrated circuits consisting of silicon-based transistors, used not only as primary memory but also for other purposes in computers and other digital electronic devices. Secondary/auxiliary memory, in comparison, provides program and data storage that is slower to access but offers larger capacity. Examples include external hard drives, portable flash drives, CDs, and DVDs. These devices and media must be plugged in or inserted into a computer in order to be accessed by the system. Since secondary storage is not always connected to the computer, it is commonly used for backing up data, and the term storage is often used to describe it. Secondary memory stores a large amount of data at a lower cost per byte than primary memory; this makes secondary storage about two orders of magnitude less expensive than primary storage. There are two main types of semiconductor memory: volatile and non-volatile. Examples of non-volatile memory are Flash memory (sometimes used as secondary, sometimes as primary computer memory) and ROM/PROM/EPROM/EEPROM memory (used for firmware such as boot programs). Examples of volatile memory are primary memory (typically dynamic RAM, DRAM) and fast CPU cache memory (typically static RAM, SRAM, which is fast but energy-consuming and offers lower memory capacity per area unit than DRAM).
Non-volatile memory technologies in Si-based electronics date back to the 1990s. Flash memory is widely used in consumer electronic products such as cellphones and music players, and NAND Flash-based solid-state disks (SSDs) are increasingly displacing hard disk drives as the primary storage device in laptops, desktops, and even data centers. The integration limit of Flash memories is approaching, and many new types of memory to replace conventional Flash memories have been proposed. The rapid increase of leakage currents in silicon CMOS transistors with scaling poses a big challenge for the integration of SRAM memories, which are also increasingly susceptible to read/write failures under low-power schemes. As a result, over the past decade there has been an extensive pooling of time, resources and effort towards developing emerging memory technologies like Resistive RAM (ReRAM/RRAM), STT-MRAM, Domain Wall Memory and Phase Change Memory (PRAM). Emerging non-volatile memory technologies promise to store more data at less cost than the expensive-to-build silicon chips used by popular consumer gadgets, including digital cameras, cell phones and portable music players. These new memory technologies combine the speed of static random-access memory (SRAM), the density of dynamic random-access memory (DRAM), and the non-volatility of Flash memory, and so become very attractive candidates for future memory hierarchies. Research on these Non-Volatile Memory (NVM) technologies has matured over the last decade, and NVMs are now being thoroughly explored as viable replacements for conventional SRAM-based memories, even for the higher levels of the memory hierarchy.
Many other new classes of emerging memory technologies, such as transparent and plastic, three-dimensional (3-D), and quantum-dot memories, have also gained tremendous popularity in recent years. In computer science, the term 'memory' generally refers to devices used to store information that will later be used in a variety of appliances, from personal computers (PCs) to mobile phones, smart devices, etc. The system's main memory is used to store the data and instructions of running processes, so it must operate at high speed (for example, DRAM). Main memory is usually implemented with addressable semiconductor memories, DRAM and SRAM being the main exponents. Auxiliary or secondary memory, on the other hand, provides storage (for files, for example); it is slower but offers greater capacity. Typical examples of secondary memory are hard disks, portable flash memories, CDs and DVDs. Since these devices do not need to be permanently connected to the computer, they are widely used to store backup copies. Secondary memory stores large amounts of data at a lower cost per bit than main memory, typically being two orders of magnitude cheaper than primary memory. There are two types of semiconductor memory: volatile and non-volatile. Examples of non-volatile memories are Flash memories (sometimes used as secondary memory, sometimes as main memory) and ROM/PROM/EPROM/EEPROM memories (used for firmware such as boot programs). Examples of volatile memory are DRAM (dynamic RAM), currently the predominant option for implementing main memory, and SRAM (static RAM), faster and more expensive, used for the various cache levels.
Non-volatile memory technologies based on silicon electronics date back to the 1990s. A charge-storage memory variant known as Flash memory is used worldwide in consumer electronics such as mobile phones and music players, while NAND Flash solid-state disks (SSDs) are progressively displacing hard disk drives as the main storage unit in laptops, desktops and even data centers. Today, several factors threaten the predominance of charge-based (capacitive) semiconductor memories. On the one hand, the integration limit of Flash memories is being reached, which compromises their scaling in the medium term. On the other hand, the sharp increase in the leakage currents of current silicon CMOS transistors poses an enormous challenge for the integration of SRAM memories. Likewise, these memories are increasingly susceptible to read/write failures in low-power designs. As a result of these problems, which worsen with each new technology generation, efforts to develop new technologies that replace, or at least complement, the current ones have intensified in recent years. Ferroelectric field-effect transistors (FeFETs) are considered one of the most promising alternatives for replacing both Flash (thanks to their higher density) and DRAM (thanks to their higher speed), but they are still at a very early stage of development. There are other, somewhat more mature technologies in the field of resistive RAM, among which ReRAM (or RRAM), STT-RAM, Domain Wall Memory and Phase Change Memory (PRAM) stand out.
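The trade-off between hierarchy levels discussed above is commonly summarized by the average memory access time (AMAT) recurrence, AMAT = hit_time + miss_rate × AMAT_of_next_level. The latencies and miss rates below are illustrative placeholders, not measurements from the thesis, and merely show how a slower NVM main memory shifts the average:

```python
def amat(levels, memory_latency):
    """Average memory access time for a multi-level cache hierarchy.
    `levels` lists (hit_time, miss_rate) pairs from L1 downward;
    `memory_latency` is the main-memory access time.  Folds the
    recurrence AMAT_i = hit_i + miss_i * AMAT_{i+1} from the bottom up."""
    t = memory_latency
    for hit_time, miss_rate in reversed(levels):
        t = hit_time + miss_rate * t
    return t

# Illustrative latencies in ns: two SRAM cache levels in front of either
# a DRAM main memory or a (slower-read) NVM main memory.
sram_dram = amat([(1, 0.05), (10, 0.2)], 100)
sram_nvm  = amat([(1, 0.05), (10, 0.2)], 300)
```

With these made-up numbers the DRAM-backed hierarchy averages 2.5 ns per access versus 4.5 ns for the NVM-backed one, showing why the caches absorb most of an NVM's raw latency penalty.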

    Orbital research centrifuge. Experiment performance options and cost, volume 3. Space shuttle compatible Experiment Performance Options (EPOS)

    Get PDF
    Cost comparisons and experiment performance options for a space shuttle orbital research centrifuge.

    3rd Many-core Applications Research Community (MARC) Symposium. (KIT Scientific Reports ; 7598)

    Get PDF
    This manuscript includes recent scientific work on the Intel Single-Chip Cloud Computer and describes novel approaches to programming and run-time organization.

    Development of Digital Control Systems for Wearable Mechatronic Devices: Applications in Musculoskeletal Rehabilitation of the Upper Limb

    Get PDF
    The potential for wearable mechatronic systems to assist with musculoskeletal rehabilitation of the upper limb has grown with the technology. One factor limiting the benefits of these devices as motion therapy tools is the development of digital control solutions. Despite many device prototypes and research efforts in the surrounding fields, there is a lack of requirements, details, assessments, and comparisons of control system characteristics, components, and architectures in the literature. Pairing this with the complexity of humans, the devices, and their interactions makes it a difficult task for control system developers to determine the best solution for their desired applications. The objective of this thesis is to develop, evaluate, and compare control system solutions that are capable of tracking motion through the control of wearable mechatronic devices. Due to the immaturity of these devices, the design, implementation, and testing processes for their control systems are not well established. In order to improve the efficiency and effectiveness of these processes, control system development and evaluation tools have been proposed. The Wearable Mechatronics-Enabled Control Software framework was developed to enable the implementation and comparison of different control software solutions presented in the literature. This framework reduces the amount of restructuring and modification required to complete these development tasks. An integration testing protocol was developed to isolate different aspects of the control systems during testing. A metric suite is proposed that expands on the existing literature and allows for the measurement of more control characteristics. Together, these tools were used to develop, evaluate, and compare control system solutions. Using the developed control systems, a series of experiments were performed that involved tracking elbow motion using wearable mechatronic elbow devices.
The accuracy and repeatability of the motion tracking performances, the adaptability of the control models, and the resource utilization of the digital systems were measured during these experiments. Statistical analysis was performed on these metrics to compare between experimental factors. The results of the tracking performances show some of the highest accuracies for elbow motion tracking with these devices. The statistical analysis revealed many factors that significantly impact the tracking performance, such as visual feedback, motion training, constrained motion, motion models, motion inputs, actuation components, and control outputs. Furthermore, the completion of the experiments resulted in three first-time studies, including the comparison of muscle activation models and the quantification of control system task timing and data storage needs. The successes of these experiments highlight that accurate motion tracking, using biological signals of the user, is possible, but that many more efforts are needed to obtain control solutions that are robust to variations in the motion and characteristics of the user. To guide the future development of these control systems, a national survey of therapists was conducted regarding their patient data collection and analysis methods. From the results of this survey, a series of requirements were collected for software systems that allow therapists to interact with the control systems of these devices. Increasing the participation of therapists in the development processes of wearable assistive devices will help to produce better requirements for developers. This will allow the customization of control systems for specific therapies and patient characteristics, which will increase the benefit and adoption rate of these devices within musculoskeletal rehabilitation programs.
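The thesis does not reproduce its control code here; as a generic illustration of the motion-tracking task it evaluates, a minimal discrete PID loop driving a simplified first-order "elbow" plant might look like the sketch below. All gains and the plant model are invented for the example and are not the thesis's controllers:

```python
def pid_track(setpoints, kp, ki, kd, dt, plant_gain=1.0):
    """Track a sequence of joint-angle setpoints with a discrete PID
    controller acting on a toy first-order plant (angle integrates the
    commanded effort).  Returns the angle trajectory."""
    angle, integ, prev_err = 0.0, 0.0, 0.0
    trajectory = []
    for sp in setpoints:
        err = sp - angle
        integ += err * dt                      # integral of error
        deriv = (err - prev_err) / dt          # backward-difference derivative
        u = kp * err + ki * integ + kd * deriv # PID control effort
        angle += plant_gain * u * dt           # toy plant update
        prev_err = err
        trajectory.append(angle)
    return trajectory
```

A real wearable-device controller would replace the toy plant with motor drivers and sensor feedback, and tracking metrics (accuracy, repeatability) would be computed over `trajectory` against the setpoints.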

    Performance of modified jatropha oil in combination with hexagonal boron nitride particles as a bio-based lubricant for green machining

    Get PDF
    This study evaluates the machining performance of newly developed modified jatropha oils (MJO1, MJO3 and MJO5), both with and without hexagonal boron nitride (hBN) particles (ranging between 0.05 and 0.5 wt%), during turning of AISI 1045 steel using minimum quantity lubrication (MQL). The experimental results indicated that viscosity improved with increasing MJO molar ratio and hBN concentration. Excellent tribological behaviour, found to correlate with better machining performance, was achieved by MJO5a with 0.05 wt% hBN. The MJO5a sample showed the lowest cutting force, cutting temperature and surface roughness, with prolonged tool life and less tool wear, qualifying it as a potential alternative to synthetic esters with regard to environmental concerns.

    Development of a Python Library for Processing Seismic Time Series

    Get PDF
    Earthquakes occur around the world every day. This natural phenomenon can result in enormous destruction and loss of life. At the same time, however, it is the primary source for studying the Earth, the active planet. The seismic waves generated by earthquakes propagate deep into the Earth, carrying considerable information about the Earth's structure, from the shallow depths of the crust to the core. The information transferred by seismic waves needs advanced signal processing and inversion tools to be converted into useful information about the Earth's inner structure, from local to global scales. The ever-growing interest in investigating the terrestrial system more accurately has led to the development of advanced signal-processing algorithms that extract optimal information from the recorded seismic waveforms. These algorithms use advanced numerical modeling to extract optimal information from the different seismic phases generated by earthquakes. The development of algorithms from a mathematical-physical point of view is of great interest; on the other hand, developing a platform for their implementation is also significant. This research aims to build a bridge between the development of purely theoretical ideas in seismology and their functional implementation. In this dissertation, SeisPolPy, a high-quality Python-based library for processing seismic waveforms, is developed. It consists of the latest polarization analysis and filtering algorithms for extracting different seismic phases from recorded seismograms. The algorithms range from the most common in the literature to a newly developed method, sparsity-promoting time-frequency filtering. In addition, the work focuses on the generation of high-quality synthetic seismic data for testing and evaluating the algorithms. The SeisPolPy library aims to provide the seismology community with a tool for separating seismic phases using high-resolution polarization analysis and filtering techniques.
The research work is carried out within the framework of the Seismicity and HAzards of the sub-Saharan Atlantic Margin (SHAZAM) project, which requires high-quality algorithms able to process the limited seismic data available in the Gulf of Guinea, the study area of the SHAZAM project. Earthquakes occur every day all over the world. This natural phenomenon can result in enormous destruction and loss of life. At the same time, however, it is the main source for studying the Earth, the active planet. The seismic waves generated by earthquakes propagate deep into the Earth, carrying considerable information about the Earth's structure, from the shallowest zones of the crust to the core. The information carried by seismic waves requires advanced signal processing and inversion tools to be converted into useful information about the Earth's internal structure, from local to global scales. The ever-growing interest in investigating the terrestrial system with greater precision has led to the development of advanced signal-processing algorithms to extract optimal information from the recorded seismic waveforms. These algorithms make use of advanced numerical models to extract optimal information from the different seismic phases generated by earthquakes. The development of algorithms from a mathematical-physical point of view is of great interest; on the other hand, the development of a platform for their implementation is also significant. This research aims to build a bridge between the development of purely theoretical ideas in seismology and their functional implementation.
In the course of this dissertation, SeisPolPy, a high-quality Python-based library for processing seismic waveforms, was developed. It consists of the latest polarization analysis and filtering algorithms to extract different seismic phases from the recorded seismograms. The algorithms range from the most common in the literature to a newly developed method, sparsity-promoting time-frequency filtering. In addition, the focus of the work is the generation of high-quality synthetic seismic data for testing and evaluating the algorithms. The SeisPolPy library aims to provide the seismological community with a tool for separating seismic phases using high-resolution polarization analysis and filtering techniques. The research work is carried out within the framework of the SHAZAM project, which requires high-quality algorithms capable of processing the limited seismic data available in the Gulf of Guinea, the study area of the project.
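As a sketch of the classical covariance-based polarization analysis mentioned above (this is not SeisPolPy's actual API; the function names are invented and only the standard library is used), the dominant eigenvector of the three-component covariance matrix estimates the particle-motion direction of the strongest arriving phase:

```python
import math

def covariance3(x, y, z):
    """3x3 covariance matrix of a three-component seismic record; its
    dominant eigenvector points along the strongest particle motion."""
    n = len(x)
    comps = []
    for s in (x, y, z):
        m = sum(s) / n
        comps.append([v - m for v in s])   # remove the mean per component
    return [[sum(a * b for a, b in zip(comps[i], comps[j])) / n
             for j in range(3)] for i in range(3)]

def dominant_direction(c, iters=200):
    """Power iteration for the dominant eigenvector (unit vector) of the
    covariance matrix -- the estimated polarization direction."""
    v = [1.0, 0.7, 0.3]                    # arbitrary non-degenerate start
    for _ in range(iters):
        w = [sum(c[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = math.sqrt(sum(u * u for u in w))
        v = [u / norm for u in w]
    return v
```

For a purely rectilinear synthetic signal on one component, the recovered direction aligns with that component's axis; phase-separation filters then project the record onto (or away from) this direction.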

    Design and verification of Guidance, Navigation and Control systems for space applications

    Get PDF
    In the last decades, systems have strongly increased their complexity in terms of number of functions that can be performed and quantity of relationships between functions and hardware as well as interactions of elements and disciplines concurring to the definition of the system. The growing complexity remarks the importance of defining methods and tools that improve the design, verification and validation of the system process: effectiveness and costs reduction without loss of confidence in the final product are the objectives that have to be pursued. Within the System Engineering context, the modern Model and Simulation based approach seems to be a promising strategy to meet the goals, because it reduces the wasted resources with respect to the traditional methods, saving money and tedious works. Model Based System Engineering (MBSE) starts from the idea that it is possible at any moment to verify, through simulation sessions and according to the phase of the life cycle, the feasibility, the capabilities and the performances of the system. Simulation is used during the engineering process and can be classified from fully numerical (i.e. all the equipment and conditions are reproduced as virtual model) to fully integrated hardware simulation (where the system is represented by real hardware and software modules in their operational environment). Within this range of simulations, a few important stages can be defined: algorithm in the loop (AIL), software in the loop (SIL), controller in the loop (CIL), hardware in the loop (HIL), and hybrid configurations among those. The research activity, in which this thesis is inserted, aims at defining and validating an iterative methodology (based on Model and Simulation approach) in support of engineering teams and devoted to improve the effectiveness of the design and verification of a space system with particular interest in Guidance Navigation and Control (GNC) subsystem. 
The choice of focusing on GNC derives from the common interest and background of the groups involved in this research program (ASSET at Politecnico di Torino and AvioSpace, an EADS company). Moreover, the GNC system is sufficiently complex (demanding both specialist knowledge and systems-engineering skills) and vital for any spacecraft; last but not least, the verification of its behavior is difficult on the ground because of strong limitations on reproducing the dynamics and the environment. Considering that verification should be performed along the entire product life cycle, a tool and a facility, a simulator, independent of the complexity level of the test and the stage of the project, are needed. This thesis deals with the design of that simulator, called StarSim, which is the real heart of the proposed methodology. It has been entirely designed and developed, from requirements definition to software implementation and hardware construction, up to the assembly, integration and verification of the first simulator release. In addition, the development of this technology met modern standards on software development and project management. StarSim is a unique and self-contained platform: this feature mitigates the risk of incompatibility, misunderstanding and loss of information that may arise from using different software, simulation tools and facilities along the various phases. Modularity, flexibility, speed, connectivity, real-time operation, fidelity to the real world, ease of data management, and effectiveness and congruence of the outputs with respect to the inputs are the sought-after features of the StarSim design. For every iteration of the methodology, StarSim guarantees the possibility of verifying the behavior of the system under test thanks to the permanent availability of virtual models, which substitute for all those elements not yet available and all the non-reproducible dynamics and environmental conditions.
StarSim provides a rich and user-friendly database of models and interfaces that cover different levels of detail and fidelity, and it supports updating of the database by allowing the user to create custom models (following a few simple rules). Progressively, pieces of the on-board software and hardware can be introduced without stopping the process of design and verification, avoiding delays and loss of resources. StarSim has been used for the first time with the CubeSats belonging to the e-st@r program, an educational project carried out by students and researchers of the “CubeSat Team Polito”. StarSim has been mainly used for the payload development, an Active Attitude Determination and Control System (A-ADCS), but its capabilities have also been extended to evaluate the functionality, operations and performance of the entire satellite. AIL, SIL, CIL and HIL simulations have been performed along all the phases of the project, successfully verifying a great number of functional and operational requirements. In particular, attitude determination algorithms, control laws and modes of operation have been selected and verified; the software has been developed step by step and the bug-free executable files have been loaded on the microcontroller. All the interfaces and protocols, as well as data and command handling, have been verified. Actuators, logic and electrical circuits have been designed, built and tested, and sensor calibration has been performed. Problems such as real-time operation and synchronization have been solved, and a complete hardware-in-the-loop simulation test campaign, both for the standalone A-ADCS and for the entire satellite, has been performed, verifying the satisfaction of a great number of CubeSat functional and operational requirements. The case study represents the first validation of the methodology with the first release of StarSim.
It demonstrated that the methodology is effective and that improving the design and verification activities is a key point in increasing confidence in the success of a space mission.
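The AIL-to-HIL progression described above rests on keeping the control loop unchanged while the plant implementation is swapped underneath it. A minimal Python sketch of that pattern follows; the class names and single-axis dynamics are invented for illustration and are not StarSim code:

```python
class VirtualPlant:
    """Numerical attitude-rate model used when hardware is not available
    (AIL/SIL stage): a single axis with unit inertia, rate += torque * dt.
    In an HIL run, a class with the same step() interface would instead
    command the real actuators and read back the real sensors."""
    def __init__(self):
        self.rate = 0.0

    def step(self, torque, dt):
        self.rate += torque * dt
        return self.rate

class RateController:
    """Proportional rate controller; identical code runs against either
    the virtual model or the hardware interface."""
    def __init__(self, plant, gain):
        self.plant, self.gain = plant, gain

    def run(self, target, dt, steps):
        rate = 0.0
        for _ in range(steps):
            torque = self.gain * (target - rate)   # P control law
            rate = self.plant.step(torque, dt)
        return rate
```

Because the controller only depends on the `step()` interface, moving from AIL to HIL is a substitution of the plant object, not a rewrite of the loop, which is the property the methodology exploits.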

    A Contribution to Validation and Testing of Non-Compliant Docking Contact Dynamics of Small and Rigid Satellites Using Hardware-In-The-Loop Simulation

    Get PDF
    Spacecraft (S/C) docking is the last and most challenging phase in bringing two separately flying S/C into contact. The design and testing of S/C docking missions using software multibody simulations need to be complemented by Hardware-In-The-Loop (HIL) simulation using the real docking hardware. The software multibody docking simulation is challenged by the proper modeling of contact forces, whereas the HIL docking simulation is challenged by the proper inclusion of the real contact forces. Existing docking HIL simulators either ignore back-reaction force modeling, owing to the large S/C sizes, or use compliance devices to reduce impact, which alters the actual contact force. This dissertation aims to design a docking HIL testbed to verify docking contact dynamics for small and rigid satellites by simulating the real contact forces without artificial compliance. HIL simulations of docking contact dynamics are challenged mainly by:
I. HIL simulation quality: the quality of a realistic contact dynamics simulation relies fundamentally on the quality of the HIL testbed's actuation and sensing instrumentation (non-instantaneous response, time delays; see Fig. 1).
II. HIL testbed design: HIL design optimization requires a justified HIL performance prediction, based on a representative HIL testbed simulation (Fig. 2), where appropriate simulation of the contact dynamics is the most difficult and sophisticated task.
The goal of this dissertation is to carry out a systematic investigation of the technically achievable HIL docking contact dynamics simulation performance, in order to define an appropriate approach for testing the docking contact dynamics of small and rigid satellites without compliance, using HIL simulation. In addition, based on these investigations, the software simulation results shall be validated using an experimental HIL setup.
To achieve that, multibody dynamics models of the docking S/C were built, after carrying out extensive contact dynamics research to select the most representative contact model. Furthermore, performance analysis models of the HIL testbed were built. In the dissertation, a detailed parametric analysis was carried out on the available models' design spaces (e.g., spacecraft, HIL testbed building blocks and contact dynamics) to study their impact on HIL fidelity and errors (see Fig. 1). This was done using a generic HIL design tool, which was developed within this work. The results were then used to identify the technical requirements of an experimental 1-Degree-of-Freedom (DOF) HIL testbed, which was conceived, designed, implemented and finally utilized to test and validate the selected docking contact dynamics model. The results of this work showed that the generic multibody-dynamics spacecraft docking model is a practical tool to model, study and analyze docking missions and to identify the properties of successful and failed docking scenarios before docking takes place in space. Likewise, the 'Generic HIL Testbed Framework Analysis Tool' is an effective tool for carrying out performance analysis of an HIL testbed design, which allows estimating the testbed's fidelity and predicting HIL errors. Moreover, the results showed that in order to build a 6-DOF HIL docking testbed without compliance, it is important to study and analyze the error sources in an impact and compensate for them; otherwise, the required figures of merit of the HIL testbed's instruments would be extremely challenging to realize. In addition, the results of the experimental HIL simulation (i.e., real impacts between various specimens) serve as a useful contribution to the advancement of contact dynamics modeling.
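The contact-model selection mentioned above can be illustrated with one widely used candidate for rigid impacts, the Hunt-Crossley nonlinear spring-damper law (the dissertation does not name its selected model here, and the parameters below are purely illustrative):

```python
def hunt_crossley_force(penetration, velocity, k, n, lam):
    """Hunt-Crossley contact force F = k*x**n + lam*x**n*v: a nonlinear
    (Hertzian-style) spring term plus damping that scales with penetration
    depth x, so the force rises smoothly from zero at first touch.
    penetration x [m], penetration velocity v [m/s], stiffness k,
    exponent n (1.5 for Hertzian sphere contact), damping factor lam."""
    if penetration <= 0.0:
        return 0.0          # bodies not in contact
    xn = penetration ** n
    return k * xn + lam * xn * velocity
```

Compared with a linear spring-damper, the penetration-dependent damping avoids the nonphysical force jump at the instant of contact, which matters when an HIL testbed must reproduce the impact force with real actuators.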