340 research outputs found

    Machine Learning for Microprocessor Performance Bug Localization

    Full text link
    The validation process for microprocessors is a very complex task that consumes substantial engineering time during the design process. Bugs that degrade overall system performance, without affecting functional correctness, are particularly difficult to debug given the lack of a golden reference for bug-free performance. This work introduces two automated, machine-learning-based performance bug localization methodologies that aim to aid the debugging process. Our results show that, for the evaluated microprocessor core performance bugs whose average IPC impact is greater than 1%, our best-performing technique localizes the exact microarchitectural unit of the bug ~77% of the time, while achieving a top-3 unit accuracy (out of 11 possible locations) of over 90% for bugs with the same average IPC impact. In our simulation setup, the proposed system requires only a few seconds to perform a bug location inference, which reduces debugging time.
    Comment: 12 pages, 6 figures
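    As a rough illustration of the kind of evaluation the abstract reports, the sketch below trains a classifier on per-run performance-counter features and scores top-1 and top-3 unit accuracy over 11 candidate locations. The features, labels, and model choice are illustrative assumptions, not the paper's actual setup.

    ```python
    # Hypothetical sketch: localizing a performance bug to one of 11
    # microarchitectural units from per-run performance-counter features.
    # Data, feature count, and classifier choice are assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_runs, n_counters, n_units = 500, 32, 11

    X = rng.normal(size=(n_runs, n_counters))   # counter deltas per buggy run
    y = rng.integers(0, n_units, size=n_runs)   # unit containing the bug

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[:400], y[:400])

    # Top-k accuracy: the true unit appears among the k most probable units.
    proba = clf.predict_proba(X[400:])
    topk = clf.classes_[np.argsort(proba, axis=1)[:, ::-1]]
    for k in (1, 3):
        hits = np.mean([y[400 + i] in topk[i, :k] for i in range(len(topk))])
        print(f"top-{k} unit accuracy: {hits:.2f}")
    ```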

    Automated Debugging Methodology for FPGA-based Systems

    Get PDF
    Electronic devices are a vital part of our lives, from mobiles, laptops, and computers to home automation, to name a few. Modern designs comprise billions of transistors. With this evolution, however, ensuring that devices fulfill the designer's expectations under variable conditions has become a great challenge, requiring considerable design time and effort. Whenever an error is encountered, the process is restarted; it is therefore desirable to minimize the number of spins required to achieve an error-free product, as each spin costs time and effort. Software-based simulation is the main technique for verifying a design before fabrication, but a few design errors (bugs) are likely to escape the simulation process and subsequently appear in the post-silicon phase, where finding them is time-consuming due to the limited visibility of the hardware. Rather than simulating the design in software, post-silicon techniques let designers verify functionality on physical implementations of the design, which run many orders of magnitude faster than their pre-silicon counterparts and thus allow more exhaustive validation.

    This thesis presents five main contributions toward a fast and automated debugging solution for reconfigurable hardware. Throughout the work, an obstacle avoidance system for robotic vehicles serves as a use case illustrating how to apply the proposed debugging solution in practical environments. The first contribution is a debugging system that provides a lossless trace of debugging data, permitting cycle-accurate replay and capturing both permanent and intermittent errors in the implemented design. This contribution also enhances hardware observability: it proposes utilizing processor-configurable concentration networks, employing debug data compression to transmit the data more efficiently, and partially reconfiguring the debugging system at run-time to avoid design re-compilation and preserve timing closure. The second contribution presents a solution for communication-centric designs, along with solutions for designs with multiple clock domains. The third contribution is a priority-based signal selection methodology that identifies the signals most helpful during debugging, together with a connectivity generation tool that maps the identified signals to the debugging system. The fourth contribution is an automated error detection solution that captures permanent and intermittent errors without continuous monitoring of debugging data, and works even in the absence of a golden reference. The fifth contribution proposes using artificial intelligence for post-silicon debugging: a recurrent neural network is trained for debugging when a golden reference is available, and the idea is extended to designs where no golden reference exists.
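    The fifth contribution mentions training a recurrent neural network against a golden reference. A minimal sketch of that general idea, with an assumed trace format, model size, and anomaly threshold (none taken from the thesis), might look like this:

    ```python
    # Hypothetical sketch: fit a small recurrent model on golden-reference
    # signal traces, then flag cycles where an observed trace deviates from
    # the model's next-cycle prediction. Shapes and thresholds are assumed.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    seq_len, n_signals = 256, 8
    golden = torch.rand(1, seq_len, n_signals)   # golden-reference trace

    class TracePredictor(nn.Module):
        def __init__(self):
            super().__init__()
            self.rnn = nn.GRU(n_signals, 32, batch_first=True)
            self.head = nn.Linear(32, n_signals)
        def forward(self, x):
            h, _ = self.rnn(x)
            return self.head(h)

    model = TracePredictor()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):                         # fit next-cycle prediction
        pred = model(golden[:, :-1])
        loss = nn.functional.mse_loss(pred, golden[:, 1:])
        opt.zero_grad()
        loss.backward()
        opt.step()

    # On a captured trace, large prediction error marks suspicious cycles
    # (here: a synthetic glitch injected at cycle 100).
    observed = golden.clone()
    observed[0, 100] += 0.8
    err = (model(observed[:, :-1]) - observed[:, 1:]).pow(2).mean(dim=-1)
    flagged = torch.nonzero(err[0] > err.mean() + 3 * err.std()).flatten()
    print("suspicious cycles:", flagged.tolist())
    ```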

    Perfusion bioreactor for liver bioengineering

    Get PDF
    End-stage organ failure has grown to become one of the key challenges for the medical community because of the large number of patients on transplant waiting lists and the severe shortage of suitable organ donors. Together with population ageing, this has created an accumulation of patients that increases the severity of the problem. New techniques for organ preservation, recovery of organs not suitable for transplant, and organ recellularization attempt to tackle this problem and appear to be among the most promising solutions. The aim of this bachelor thesis is to continue the development of a complex liver perfusion bioreactor: to design and develop an efficient and repeatable method for organ perfusion, decellularization, and recellularization, with the final objective of creating a perfusion bioreactor for liver bioengineering that supports organ perfusion and preservation, decellularization, and recellularization; preserves cell structure, functionality, growth, and controlled differentiation for up to 4 weeks; avoids contamination; and automates the process as much as possible. To this end, the bioreactor will include numerous sensors and data acquisition systems, as well as control systems for pressure, flow rate, and temperature, among others.
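    One way to picture the control systems mentioned above is a simple discrete PID loop holding perfusion pressure at a setpoint by adjusting the pump drive. The gains, units, and toy plant model below are assumptions for illustration only, not the thesis's actual controller.

    ```python
    # Minimal PID sketch for one hypothetical bioreactor loop: regulate
    # perfusion pressure by driving the pump. All constants are assumed.
    def pid_step(error, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
        """One PID update; state carries the integral and previous error."""
        integral, prev_error = state
        integral += error * dt
        derivative = (error - prev_error) / dt
        output = kp * error + ki * integral + kd * derivative
        return output, (integral, error)

    setpoint_mmHg = 12.0      # illustrative target perfusion pressure
    pressure = 8.0            # measured pressure from the sensor
    state = (0.0, 0.0)

    for step in range(100):
        error = setpoint_mmHg - pressure
        pump, state = pid_step(error, state)
        # Toy first-order plant standing in for pump + vasculature dynamics.
        pressure += 0.05 * (pump - pressure)

    print(f"pressure after 10 s: {pressure:.2f} mmHg")
    ```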

    Static verification tool improvement in ASIC design flow: Tool Evaluation using a real design

    Get PDF
    Verification is now the most time-consuming step in the design flow for digital circuits. Design organizations constantly research improvements to accelerate verification tasks so that functional and efficient silicon can be released to a demanding market, improving the company's competitive position. Today, EDA (Electronic Design Automation) tools are part of the development of every designed circuit and contribute to the verification work. Automating and simplifying the verification flow helps focus on resolving the underlying system issues. The company is interested in improving its verification flow to take advantage of the features available in the EDA market: recognizing more synchronization structures, an improved hierarchical verification flow, and an incremental verification flow could potentially improve verification process throughput in the company.

    This study examines how changing the tool improves the company's static verification flow. It reviews the background of EDA tools and the most relevant theory to understand the tools' roles and the static rule checks they perform. In addition, the deployment of static verification tools is discussed, and clock-domain crossing and lint checking tools from an EDA toolkit are introduced. A modular inspection flow compatible with the server infrastructure is built for this purpose. The company's proprietary and problematic synchronization structures are implemented as an interface-level script, so that the inspection software understands the structures used in specific use-cases. Design constraints are developed to improve the accuracy of the results. The study also measures the performance of the programs: the use of computing resources is measured in the company's design environment and compared to the current verification flow, the quantity and quality of the reported messages are compared to the old flow, and the user experience and correctness of the results are briefly assessed. Configured software checks are performed on the company's own subsystem under development, and the suitability of the software for the organization's purposes is determined by examining the results. Flow performance, duration, and utilization of computational resources are measured with the generated script and software reports, and the functionality and user-friendliness of the software's graphical user interface are briefly reviewed.

    The research finds that the clock-domain crossing verification flow is accelerated by over three times and the lint verification flow by a quarter. The inspections detect almost three times more potential issues in the code. The new tool flow requires under a third of the disk space and system memory consumed by the old verification flow. In addition, the new software is overall more pleasant to use: it is perceived as somewhat more challenging to learn, but in return it provides more information for solving the underlying issues in the code. Finally, it is concluded that the company should consider adding the tool to complement its verification tool set, given its performance, verification thoroughness, and more moderate use of computation server resources.
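    The duration and resource measurements described above could be scripted along the following lines. This is a hedged sketch with placeholder commands, not the company's actual flow scripts; the `resource` module is Unix-only and `ru_maxrss` units are platform-dependent (kilobytes on Linux).

    ```python
    # Hypothetical benchmark wrapper: run each verification flow's batch
    # command and record wall-clock time and peak child memory.
    import resource
    import subprocess
    import time

    def run_flow(cmd):
        """Run one verification flow; report duration and peak child RSS."""
        start = time.monotonic()
        subprocess.run(cmd, shell=True, check=True)
        duration = time.monotonic() - start
        peak_kb = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
        return duration, peak_kb

    # Placeholder scripts standing in for the old and new CDC flows.
    for name, cmd in [("old_cdc_flow", "./run_old_cdc.sh"),
                      ("new_cdc_flow", "./run_new_cdc.sh")]:
        duration, peak_kb = run_flow(cmd)
        print(f"{name}: {duration:.1f} s, peak RSS ~{peak_kb / 1024:.0f} MB")
    ```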

    Second CLIPS Conference Proceedings, volume 1

    Get PDF
    Topics covered at the 2nd CLIPS Conference, held at the Johnson Space Center, September 23-25, 1991, are given. Topics include rule groupings, fault detection using expert systems, decision making using expert systems, knowledge representation, and computer-aided design and debugging of expert systems.

    Automated functional system integration testing of the Suomi 100 satellite

    Get PDF
    A large portion of launched CubeSats have failed early in their missions. Inadequate functional system integration testing has been identified from statistical data on CubeSat missions as a potential source of these failures. In this thesis, test automation was used to perform functional system integration testing for the Suomi 100 CubeSat. A reusable software library, called CubeSatAutomation, was developed for test automation, and testing was conducted with Robot Framework, a widely used open-source test automation framework. The performed tests verified proper functionality of essential satellite features such as radio communication, telemetry, safe resets, and battery recharging through the solar panels, among others. The testing, however, identified certain issues in the integration of the payload radio instrument. The tests included "day in the life" testing, and it can be anticipated that this test would increase the overall success rate of CubeSat missions. It is recommended that a testing guideline including this test be added to the CubeSat project.
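    For illustration, a Robot Framework keyword library in the spirit of CubeSatAutomation can be a plain Python class whose methods become test keywords. The class, keyword names, and serial protocol below are hypothetical, not the thesis's actual library.

    ```python
    # Hypothetical Robot Framework keyword library sketch. Assumes pyserial
    # and a line-oriented command protocol over a ground-station serial port.
    import serial  # pyserial

    class CubeSatAutomation:
        """Keywords callable from Robot Framework test cases."""
        ROBOT_LIBRARY_SCOPE = "SUITE"

        def open_radio_link(self, port="/dev/ttyUSB0", baud=9600):
            # One link per suite; kept on the instance for later keywords.
            self._link = serial.Serial(port, baud, timeout=5)

        def send_command(self, command):
            self._link.write((command + "\n").encode("ascii"))

        def telemetry_should_contain(self, field):
            reply = self._link.readline().decode("ascii", errors="replace")
            if field not in reply:
                raise AssertionError(f"'{field}' missing from telemetry: {reply!r}")
    ```

    A test suite could then call keywords such as `Open Radio Link`, `Send Command`, and `Telemetry Should Contain` directly from its test cases.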

    Observation mechanisms for in-field software-based self-test

    Get PDF
    When electronic systems are used in safety-critical applications, as in the space, avionic, automotive, or biomedical areas, a very low probability of failure due to faults of any kind must be maintained. Standards and regulations play a significant role, forcing companies to devise and adopt solutions able to achieve predefined dependability targets. Different techniques can be used to reduce fault occurrence or to minimize the probability that those faults produce critical failures (e.g., by introducing redundancy). Unfortunately, most of these techniques have a severe impact on the cost of the resulting product and, in some cases, the probability of failure remains too large anyway. Hence, a solution commonly used in several scenarios lies in periodically performing a test able to detect the occurrence of any fault before it produces a failure (in-field test). This solution is normally based on forcing the processor inside the Device Under Test to execute a properly written test program, able to activate possible faults and to make their effects visible in some observable locations. This approach is also called Software-Based Self-Test, or SBST. Compared with testing in an end-of-manufacturing scenario, in-field testing has strong limitations in terms of access to the system inputs and outputs, because Design for Testability structures and testing equipment are usually not available; consequently, there are fewer possibilities to activate the faults and observe their effects. This reduced observability particularly affects the ability to detect performance faults, i.e., faults that modify the timing but not the final value of computations. This kind of fault is hard to detect by only observing the final content of predefined memory locations, which is the test result observation method usually applied in-field.

    Initially, the present work focused on fault tolerance techniques against transient faults induced by ionizing radiation, the so-called Single Event Upsets (SEUs). The main contribution of this early stage of the thesis lies in the experimental validation of the feasibility of achieving a safe system by using an architecture that combines task-level redundancy with already available IP cores, thus minimizing development time. Task execution is replicated, and Memory Protection is used to guarantee that any SEU may affect one and only one of the replicas. A proof-of-concept implementation was developed and validated using fault injection. Results outline the effectiveness of the architecture, and the overhead analysis shows that the proposed architecture reduces resource occupation with respect to N-modular redundancy at an affordable cost in terms of application execution time.

    The main part of the thesis focuses on in-field software-based self-test of permanent faults. A set of observation methods exploiting existing or ad-hoc hardware is proposed, aimed at obtaining better coverage, in particular of performance faults. An extensive quantitative evaluation of the proposed methods is presented, including a comparison with the observation methods traditionally used in end-of-manufacturing and in-field testing. Results show that the proposed methods are a good complement to the traditionally used final memory content observation. Moreover, they show that an adequate combination of these complementary methods achieves nearly the same fault coverage as continuously observing all the processor outputs, an observation method commonly used for production test but usually not available in-field. A very interesting by-product of the above is a detailed description of how to compute the fault coverage achieved by functional in-field tests using a conventional fault simulator, a tool usually applied in an end-of-manufacturing testing scenario. Finally, another relevant result in the testing area is a method to detect permanent faults inside the cache coherence logic integrated in each cache controller of a multi-core system, based on the coordinated, concurrent execution of a test program by the different cores. By construction, the method achieves full fault coverage of the static faults in the addressed logic.
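    At its core, the fault-coverage computation mentioned as a by-product reduces to bookkeeping over per-fault outcomes reported by a fault simulator; the report format in this sketch is invented purely for illustration.

    ```python
    # Hypothetical fault-coverage bookkeeping: coverage is the fraction of
    # simulated faults whose effect reaches any observed location.
    from collections import Counter

    # (fault id, outcome) pairs as a fault simulator might classify them.
    report = [
        ("U1/stuck-at-0",   "detected_in_memory"),
        ("U1/stuck-at-1",   "detected_on_output"),
        ("U2/stuck-at-0",   "undetected"),
        ("U3/slow-to-rise", "detected_by_signature"),  # a performance fault
    ]

    counts = Counter(outcome for _, outcome in report)
    detected = sum(v for k, v in counts.items() if k.startswith("detected"))
    print(f"fault coverage: {100 * detected / len(report):.1f}%")
    ```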

    Design of electronic systems for automotive sensor conditioning

    Get PDF
    This thesis deals with the development of sensor systems for automotive applications, mainly targeting the exploitation of the new generation of Micro-Electro-Mechanical Systems (MEMS) sensors, which achieve a dramatic reduction of area and power consumption but at the same time require a more complex sensor conditioning interface. Several issues concerning the development of automotive ASICs are presented, together with an overview of the automotive electronics market and its main sensor applications. The state of the art in sensor interface design (the generic sensor interface concept) consists in sharing the same electronics among similar sensor applications, thus saving cost and time-to-market but implementing a sub-optimal system with area and power overheads. A Platform Based Design methodology is proposed to overcome the limitations of generic sensor interfaces, keeping the platform general at the highest design layers while pursuing maximum optimization and performance when customizing the platform for a specific sensor. A complete design flow is presented (up to the ASIC implementation for gyro sensor conditioning), together with examples of IP development for reuse and of low-power optimization of third-party designs. A further evolution of Platform Based Design has been achieved by implementing the ISIF (Intelligent Sensor InterFace) platform in silicon. ISIF is a highly programmable mixed-signal chip that substantially reduces design space exploration time, as it can implement a wide class of sensor conditioning architectures in a short time. It thus lets designers evaluate the impact of different architectural choices directly on silicon, as well as perform feasibility studies, sensor evaluations, and accurate estimation of the performance of the resulting dedicated ASIC. Several case studies of fast prototyping with ISIF are presented: a magneto-resistive position sensor, a biosensor (which produces pA currents in the presence of surface chemical reactions), and two capacitive inertial sensors, a gyro and a low-g YZ accelerometer. The accelerometer interface has also been implemented in miniboards of about 3 cm² (with the ISIF and sensor dies bonded together), and a series of automatic trimming and characterization procedures have been developed to evaluate sensor and interface behaviour over the automotive temperature range, providing valuable feedback for the implementation of a dedicated accelerometer interface.
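    The automatic trimming procedures mentioned at the end can be pictured as a sweep over digital trim codes that programs the code minimizing the measured offset. The sketch below is hypothetical, with a placeholder measurement function standing in for real ISIF/instrument access.

    ```python
    # Hypothetical trimming sweep: measure interface output offset at each
    # trim code and pick the best one. read_offset() is a stand-in.
    import random

    def read_offset(trim_code):
        """Placeholder measurement: offset shrinks as the code nears 37."""
        return abs(trim_code - 37) * 0.5 + random.gauss(0, 0.1)

    # Sweep a 6-bit trim code and program the code with the smallest offset.
    best_code = min(range(64), key=lambda code: abs(read_offset(code)))
    print(f"programming trim code {best_code}")
    ```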

    Analysis, design and implementation of a Node.js Web Application for personal link management using a single link in social networks.

    Get PDF
    [Abstract] The main objective of this project is the development of a Web Application, using a modern technology stack and a scalable system architecture, that gives social network users a place to store all their important links so that their followers can access them in a transparent manner from their user profiles. This is necessary in some cases because some social networks, of which Instagram is the prime example, block the use of links/URLs anywhere in their application, except for a single link serving as the user's webpage inside their profile information. The project also focuses on providing useful insights and analytics to the profile owner based on the metadata collected from visitors' clicks on the link collection.
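    The application itself is built on Node.js; purely to illustrate the click-metadata analytics described above, here is a small sketch (in Python, for illustration only) of the aggregations an analytics view might compute. The field names are assumptions.

    ```python
    # Hypothetical click-metadata aggregation for the analytics view:
    # counts per link, per visitor country, and per hour of day.
    from collections import Counter
    from datetime import datetime

    clicks = [  # one record per visitor click on a profile link
        {"link": "portfolio", "country": "ES", "ts": "2021-05-01T10:15:00"},
        {"link": "youtube",   "country": "US", "ts": "2021-05-01T11:02:00"},
        {"link": "portfolio", "country": "ES", "ts": "2021-05-02T09:30:00"},
    ]

    by_link = Counter(c["link"] for c in clicks)
    by_country = Counter(c["country"] for c in clicks)
    by_hour = Counter(datetime.fromisoformat(c["ts"]).hour for c in clicks)

    print("clicks per link:", dict(by_link))
    print("clicks per country:", dict(by_country))
    print("busiest hours:", by_hour.most_common(2))
    ```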