
    VIDEO KINEMATIC EVALUATION OF THE HEART (VI.KI.E.): AN IDEA, A PROJECT, A REALITY

    Introduction: The technological development of the last 20 years has intensified efforts to implement novel contactless imaging modalities that accelerate translation from the research bench to the patient bedside, especially in the cardiac field. In this work, a novel intraoperative cardiac imaging approach, named Video Kinematic Evaluation (Vi.Ki.E.), is presented and explained in detail. This technology monitors, without contact, cardiac mechanics and deformation in situ during heart surgery. Cardiac kinematics were evaluated in depth, ranging from experimental animal models to human myocardial pathologies in both the left and right ventricles. Methods: Vi.Ki.E. can be described as being as simple as it is innovative. It consists only of a high-speed camera placed above the exposed beating heart in situ to record cardiac cycles. Tracking software is then applied to the recorded video to follow the movement of the epicardial tissue. The tracker provides the trajectories of the epicardium and, through a custom-made algorithm, the technology derives mechanical parameters of the heart: force of contraction (cardiac fatigue), energy expenditure, contraction velocity, marker displacement, and epicardial torsion. The approach was tested on 21 rats (9 for ischemia/reperfusion studies and/or validation, 12 for the gender-difference study) and on 37 patients who underwent various surgeries between 2015 and 2019. In detail, 10 patients underwent coronary artery bypass grafting, 12 underwent valve replacement after tetralogy of Fallot correction surgery, 6 received a left ventricular assist device (1 is presented in the case study section), 6 patients with hypoplastic heart syndrome underwent Glenn or Fontan surgery, 2 underwent heart transplantation, and 1 underwent double valve replacement (also presented in the case study section). Results: The patient results demonstrated that Vi.Ki.E. discriminated, with statistical power, the kinematic differences before and after surgery in real time, suggesting possible clinical implications for patient management before chest closure and/or in the intensive care unit. The experimental animal results form the basis of the technology's validation; some served as accepted models for comparison with the Vi.Ki.E. results in patients. Conclusions: This study has shown that Vi.Ki.E. is a safe, contactless technology with promising clinical applications. The ease of evaluation and the algorithm-based approach make Video Kinematic Evaluation a broadly applicable technique, from the cellular level to human cases, covering the entire experimental field with in vivo evaluation and possibly Langendorff/working-heart preparations.
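
    The post-processing pipeline lends itself to a compact illustration. The sketch below is a hypothetical Python/NumPy rendering, not the authors' actual algorithm: it derives the named kinematic parameters from a single tracked marker trajectory, and the unit tissue mass and the force/energy proxies are assumptions.

        # Hypothetical sketch of the Vi.Ki.E. post-processing step: given the (x, y)
        # trajectory of one epicardial marker tracked across video frames, derive
        # the kinematic parameters named in the abstract. The unit mass and the
        # force/energy proxies are illustrative assumptions.
        import numpy as np

        def kinematics_from_trajectory(xy, fps):
            """xy: (n_frames, 2) marker positions in mm; fps: camera frame rate."""
            dt = 1.0 / fps
            disp = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # per-frame displacement (mm)
            velocity = disp / dt                                # contraction velocity (mm/s)
            accel = np.diff(velocity) / dt                      # acceleration (mm/s^2)
            force = 1.0 * np.abs(accel)                         # force proxy, assumed unit mass
            energy = np.sum(force * disp[1:])                   # energy-expenditure proxy
            return {
                "total_displacement_mm": float(disp.sum()),
                "peak_velocity_mm_s": float(velocity.max()),
                "peak_force": float(force.max()),
                "energy": float(energy),
            }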

    Voltage stacking for near/sub-threshold operation


    Decompose and Conquer: Addressing Evasive Errors in Systems on Chip

    Modern computer chips comprise many components, including microprocessor cores, memory modules, on-chip networks, and accelerators. Such system-on-chip (SoC) designs are deployed in a variety of computing devices: from the internet-of-things, to smartphones, to personal computers, to data centers. In this dissertation, we discuss evasive errors in SoC designs and how these errors can be addressed efficiently. In particular, we focus on two types of errors: design bugs and permanent faults. Design bugs originate from the limited amount of time allowed for design verification and validation, so they are often found in functional features that are rarely activated. Complete functional verification, which could eliminate design bugs, is extremely time-consuming and thus impractical for modern, complex SoC designs. Permanent faults are caused by failures of fragile transistors in nano-scale semiconductor manufacturing processes: weak transistors may wear out unexpectedly within the lifespan of the design. Hardware structures that reduce the occurrence of permanent faults incur significant silicon-area or performance overheads, so they are infeasible for most cost-sensitive SoC designs. To tackle these evasive errors efficiently, we propose to leverage the principle of decomposition to lower the complexity of the software analysis or the hardware structures involved. To this end, we present several decomposition techniques, each specific to a major SoC component. We first focus on microprocessor cores, presenting a lightweight bug-masking analysis that decomposes a program into individual instructions to identify whether a design bug would be masked by the program's execution. We then move to memory subsystems: there, we offer an efficient memory consistency testing framework that detects buggy memory-ordering behaviors by decomposing the memory-ordering graph into small components based on incremental differences. We also propose a microarchitectural patching solution for memory subsystem bugs, which augments each core node with small distributed programmable logic instead of a global patching module. In the context of on-chip networks, we propose two routing reconfiguration algorithms that bypass faulty network resources. The first computes short-term routes in a distributed fashion, localized to the fault region. The second decomposes application-aware routing computation into simple routing rules so as to quickly find deadlock-free, application-optimized routes in a fault-ridden network. Finally, we consider general accelerator modules in SoC designs. When a system includes many accelerators, the variety of interactions among them must be verified to catch buggy behavior. To this end, we decompose such inter-module communication into basic interaction elements, which can be reassembled into new, interesting tests. Overall, we show that decomposing complex software algorithms and hardware structures can significantly reduce overheads: up to three orders of magnitude in the bug-masking analysis and the application-aware routing, approximately 50 times in the routing reconfiguration latency, and 5 times on average in the memory-ordering graph checking. These overhead reductions come with losses in error coverage: 23% undetected bug-masking incidents, 39% non-patchable memory bugs, and occasionally overlooked rare patterns of multiple faults. In this dissertation, we discuss the ideas and their trade-offs, and present future research directions.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/147637/1/doowon_1.pd
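
    As a toy illustration of the instruction-level decomposition behind the bug-masking analysis, the Python sketch below checks whether corrupting one operand of a single instruction can change its result. The three masking rules are simplified assumptions for illustration, not the dissertation's actual rule set.

        # Toy instruction-level bug-masking check: a corrupted operand is "masked"
        # when the instruction's result cannot depend on it. Rules are illustrative.
        def is_masked(opcode, operands, corrupted_idx):
            """Return True if corrupting operands[corrupted_idx] cannot change the result."""
            other = operands[1 - corrupted_idx]
            if opcode == "AND" and other == 0:
                return True   # x & 0 == 0 for any x
            if opcode == "MUL" and other == 0:
                return True   # x * 0 == 0 for any x
            if opcode == "SHL" and corrupted_idx == 1 and operands[0] == 0:
                return True   # 0 << k == 0 for any shift amount k
            return False

        # Example: a bit flip in the second source of AND r1, r2, r3 is masked
        # whenever r2 holds zero.
        print(is_masked("AND", [0, 0xDEADBEEF], corrupted_idx=1))  # True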

    A sense of self for power side-channel signatures: instruction set disassembly and integrity monitoring of a microcontroller system

    Cyber-attacks are on the rise, costing billions of dollars annually in damages, response, and investment. Critical United States National Security and Department of Defense weapons systems are no exception; there, however, the stakes go well beyond the financial. Dependence upon a global supply chain without sufficient insight or control poses a significant issue. Additionally, systems are often designed with a presumption of trust, despite their microelectronic and software foundations being inherently untrustworthy. Achieving cybersecurity requires coordinated and holistic action across disciplines, commensurate with the specific systems, mission, and threat. This dissertation explores an existing gap in low-level cybersecurity and proposes a side-channel-based security monitor to support attack detection and the establishment of trusted foundations for critical embedded systems. Background is provided on side-channel origins, the more typical side-channel attacks, and microarchitectural exploits. A survey of related side-channel efforts is organized through a set of side-channel organizing principles, which enable comparison of dissimilar works across the side-channel spectrum. We find that the maturity of existing side-channel security monitors is insufficient, as key transition-to-practice considerations are often not accounted for or resolved. We then document the development, maturation, and assessment of a power side-channel disassembler, the Time-series Side-channel Disassembler (TSD), and extend it for use as a security monitor, the TSD-Integrity Monitor (TSD-IM). We also introduce a prototype microcontroller power side-channel collection fixture, with benefits for experimentation and transition to practice. TSD-IM is finally applied to a notional Point of Sale (PoS) application for a proof-of-concept evaluation. We find that TSD and TSD-IM advance the state of the art for side-channel disassembly and security monitoring in the open literature. In addition to our TSD and TSD-IM research on microcontroller signals, we explore beneficial side-channel measurement abstractions as well as the characterization of the underlying microelectronic circuits through Impulse Signal Analysis (ISA). While some positive results were obtained, we find that further research in these areas is necessary. Although the need for a non-invasive, on-demand microelectronics-integrity capability is supported, other methods may provide suitable near-term alternatives to ISA.
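
    The general idea behind a power side-channel disassembler such as TSD can be sketched as time-series template matching: learn a per-opcode power template from labeled traces, then attribute an unknown trace segment to the nearest template. The sketch below is a minimal, assumption-laden rendering (raw samples as features, Euclidean distance), not the TSD implementation.

        # Minimal template-matching sketch of power side-channel disassembly.
        # Feature choice (raw samples) and distance metric are assumptions.
        import numpy as np

        def build_templates(labeled_traces):
            """labeled_traces: dict opcode -> list of equal-length 1-D power traces."""
            return {op: np.mean(np.stack(traces), axis=0)
                    for op, traces in labeled_traces.items()}

        def disassemble_segment(segment, templates):
            """Attribute a trace segment to the opcode with the closest template."""
            return min(templates, key=lambda op: np.linalg.norm(segment - templates[op]))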

    Dependable Embedded Systems

    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems that have emerged particularly within the last five years. The book introduces the most prominent reliability concerns from today's point of view and roughly recapitulates the progress in the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or system level alone, this book addresses reliability challenges across different levels, starting from the physical level all the way to the system level (cross-layer approaches). It aims to demonstrate how new hardware/software co-design solutions can effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, and soft errors. The book provides readers with the latest insights into novel cross-layer methods and models for the dependability of embedded systems; describes cross-layer approaches that leverage reliability through techniques proactively designed with respect to techniques at other layers; and explains run-time adaptation and concepts/means of self-organization to achieve error resiliency in complex, future many-core systems.

    Aggressive undervolting of FPGAs: power & reliability trade-offs

    In this work, we evaluate aggressive undervolting, i.e., voltage underscaling below the nominal level, to reduce the energy consumption of Field Programmable Gate Arrays (FPGAs). Chip vendors usually add voltage guardbands to ensure operation under worst-case process and environmental scenarios. Through experiments on several FPGA architectures, we confirm a large voltage guardband for several FPGA components, which in turn delivers significant power savings. However, undervolting below the voltage guardband may cause reliability issues as circuit delay increases, and faults may begin to appear. We extensively characterize the behavior of these faults in terms of rate, location, type, and sensitivity to environmental temperature, focusing primarily on FPGA on-chip memories, or Block RAMs (BRAMs). Understanding this behavior enables efficient mitigation techniques, and in turn, FPGA-based designs can be improved for better energy, reliability, and performance trade-offs. Finally, as a case study, we evaluate a typical FPGA-based Neural Network (NN) accelerator when the FPGA voltage is underscaled; the substantial NN energy savings come at the cost of NN accuracy loss. To attain power savings without NN accuracy loss below the voltage guardband, we propose an application-aware technique and also evaluate the built-in Error-Correcting Code (ECC) mechanism. First, we develop an application-dependent BRAM placement technique that relies on the deterministic behavior of undervolting faults and mitigates them by mapping the most reliability-sensitive NN parameters to the BRAM blocks that are relatively more resistant to undervolting faults. Second, as a more general technique, we apply the built-in ECC of the BRAMs and observe significant fault coverage, thanks to the behavior of undervolting faults, with negligible power consumption overhead.
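
    A minimal sketch of the application-aware placement idea, assuming per-block fault rates have already been measured below the guardband: rank BRAM blocks by fault rate and assign the most reliability-sensitive NN parameter groups to the most robust blocks. The data structures and the sensitivity metric below are illustrative, not the paper's implementation.

        # Greedy, application-aware BRAM placement sketch: most-sensitive NN
        # parameter groups go to the BRAM blocks with the lowest measured
        # undervolting fault rates. Assumes one parameter group per block.
        def place_parameters(param_groups, bram_fault_rates):
            """param_groups: list of (name, sensitivity);
            bram_fault_rates: list of (block_id, faults_per_kbit)."""
            params = sorted(param_groups, key=lambda p: p[1], reverse=True)
            brams = sorted(bram_fault_rates, key=lambda b: b[1])
            return {name: block for (name, _), (block, _) in zip(params, brams)}

        # Example: the most sensitive weights land on the fault-free block.
        print(place_parameters([("fc1.w", 0.9), ("fc3.w", 0.2)],
                               [(7, 0.0), (3, 4.1)]))   # {'fc1.w': 7, 'fc3.w': 3}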

    The Journal of Conventional Weapons Destruction, Issue 24.2 (2020)

    Editorial: HMA and COVID-19: A Donor's Perspective
    Editorial: Time To Focus on Real Minefield Data
    Mine Action Information Management in Iraq and Northeast Syria
    IMAS 10.60 Update: Investigation and Reporting of Accidents and Incidents
    The Mine Free Sarajevo Project
    SALW in Bosnia and Herzegovina and the DRC
    Gender and Diversity in Mine Action
    Victim Assistance in Ukraine
    Landmines in the American Civil War
    Risk Education in Colombia
    R&D: The Odyssey2025 Project