The prognostic value of magnetic resonance imaging in moderate and severe traumatic brain injury: a systematic review and meta-analysis
Traumatic brain injury is a major cause of mortality and morbidity worldwide and represents a significant socioeconomic burden in developed nations due to residual post-trauma disability among survivors. Despite high rates of long-term unfavourable outcome, few prognostic indicators currently exist to guide early clinical management and counsel patients' families and friends. Over four decades of observational studies have examined the potential role of early magnetic resonance imaging of the brain in distinguishing long-term clinical outcome from lesion patterns identifiable soon after trauma. The present work therefore aims to determine the prognostic value of early magnetic resonance imaging following moderate or severe traumatic brain injury in adults by employing prognostic systematic review and meta-analysis methodology to identify all published studies assessing the relationship between magnetic resonance lesion patterns and long-term clinical outcome.
Our search identified 58 individual studies. Following meta-analysis, lesions located in the brainstem were associated with all-cause mortality and unfavourable neurological outcome, while shear-injury patterns compatible with diffuse axonal injury anywhere in the brain were associated with an increased risk of unfavourable neurological outcome. Two scoring systems based on lesion depth were associated with progressively worse neurological outcomes as more caudal cerebral structures were affected, confirming the importance of deep lesions. These findings demonstrate the prognostic utility of magnetic resonance imaging early following traumatic brain injury and indicate the need for high-quality, well-controlled prognostic cohort studies, given the elevated risk of bias in the current body of literature.
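To make the pooling step concrete, the sketch below shows a generic random-effects meta-analysis (DerSimonian-Laird) of odds ratios of the kind used to relate a lesion pattern to unfavourable outcome. It is an illustration only: the study-level numbers are hypothetical and do not come from the review.

```python
# Illustrative random-effects meta-analysis (DerSimonian-Laird) pooling of
# odds ratios, e.g. "brainstem lesion vs. unfavourable outcome".
# The study-level numbers below are hypothetical, not data from the review.
import numpy as np

# (odds ratio, 95% CI lower, 95% CI upper) for a few hypothetical studies
studies = [(2.8, 1.4, 5.6), (3.5, 1.6, 7.7), (2.1, 0.9, 4.9)]

log_or = np.array([np.log(or_) for or_, lo, hi in studies])
# standard error recovered from the CI width on the log scale
se = np.array([(np.log(hi) - np.log(lo)) / (2 * 1.96) for _, lo, hi in studies])
w = 1.0 / se**2                                   # fixed-effect weights

# DerSimonian-Laird estimate of between-study variance tau^2
q = np.sum(w * (log_or - np.sum(w * log_or) / np.sum(w))**2)
tau2 = max(0.0, (q - (len(studies) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1.0 / (se**2 + tau2)                       # random-effects weights
pooled = np.sum(w_re * log_or) / np.sum(w_re)
pooled_se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled OR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * pooled_se):.2f}"
      f"-{np.exp(pooled + 1.96 * pooled_se):.2f})")
```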
Energy-Efficient and Reliable Computing in Dark Silicon Era
Dark silicon denotes the phenomenon that, due to thermal and power constraints, the fraction of transistors that can operate at full frequency decreases with each technology generation. Moore's law and Dennard scaling were coupled for five decades, delivering commensurate exponential performance gains first through single-core and later through multi-core designs. However, recalculating Dennard scaling for recent small technology nodes shows that continuing multi-core growth demands exponentially increasing thermal design power to achieve a linear performance increase. This process hits a power wall that raises the amount of dark or dim silicon on future multi-/many-core chips more and more. Furthermore, the increasing number of transistors on a single chip and the growing susceptibility to internal defects and aging phenomena, which are exacerbated by high thermal density, make monitoring and managing chip reliability, both before and after activation, a necessity. The proposed approaches and experimental investigations in this thesis focus on two main tracks: 1) power awareness and 2) reliability awareness in the dark silicon era, which are later combined. In the first track, the main goal is to increase the returns on the most important features of chip design, such as performance and throughput, while the maximum power limit is honored. In fact, we show that by managing power in the presence of dark silicon, all the traditional benefits of following Moore's law can still be achieved in the dark silicon era, although to a lesser extent. In the reliability-awareness track, we show that dark silicon can be treated as an opportunity to be exploited for different kinds of benefits, namely lifetime extension and online testing. We discuss how dark silicon can be exploited to guarantee that the system lifetime stays above a given target value and, furthermore, how it can be exploited to apply low-cost, non-intrusive online testing to the cores. After demonstrating power and reliability awareness in the presence of dark silicon, two approaches are discussed as case studies in which power and reliability awareness are combined. The first approach demonstrates how chip reliability can be used as a supplementary metric for power-reliability management, while the second provides a trade-off between workload performance and system reliability by simultaneously honoring the given power budget and the target reliability.
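A back-of-the-envelope sketch of the effect described above: if transistor count keeps roughly doubling per generation while post-Dennard power per transistor no longer halves, a fixed chip power budget covers a shrinking fraction of the die at full frequency. The scaling factors below are illustrative assumptions, not figures from the thesis.

```python
# Toy model of the dark silicon trend: Moore's law doubles the transistor
# count each generation, but in the post-Dennard regime the power per
# transistor does not halve, so a fixed power budget lights up ever less of
# the chip at full frequency. All factors are illustrative assumptions.
POWER_BUDGET_W = 100.0                 # fixed thermal design power of the chip
transistors = 1.0e9                    # transistor count at the first node
power_per_transistor = 100.0 / 1.0e9   # watts per transistor at the first node

for gen in range(5):
    total_power_full_speed = transistors * power_per_transistor
    active_fraction = min(1.0, POWER_BUDGET_W / total_power_full_speed)
    print(f"gen {gen}: active fraction at full frequency = {active_fraction:.2%}")
    transistors *= 2.0                 # Moore's law: ~2x transistors per generation
    # Dennard scaling would give ~0.5x power per transistor; with supply voltage
    # barely scaling any more, assume only ~0.7x per generation here.
    power_per_transistor *= 0.7
```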
Run-time modelling of energy consumption in mobile robots: Technical report - 2024
In recent years, the deployment of mobile robots in various industries, from manufacturing to healthcare, has risen significantly. This necessitates an in-depth understanding of their energy consumption patterns in order to optimize efficiency and sustainability. This report presents a comprehensive platform developed to measure and analyze the electric current consumed by both the computational and mechanical components of mobile robots. First, the project develops a mobile rover by integrating an NVIDIA Jetson Nano and a Pixhawk 4 flight controller with a PM07 power management board on a Reely rover. Second, a run-time current measuring system is developed using an HSTS016L current sensor and an ADS1115 ADC together with the NVIDIA Jetson Nano. The primary aim of this study is to provide a detailed and accurate model for run-time energy consumption measurement. Additionally, the report covers the methodology for measuring system dynamics, i.e., distance, velocity, acceleration, and jerk, from GPS data processed through both the Haversine method and Python's Pyproj library. This dual-method approach enhances the accuracy of distance measurement, which is crucial for correlating energy consumption with the rover's operational parameters. The study can be further extended to the real-time energy consumption of autonomous robots in different autonomous applications.
Keywords: Energy Consumption, Mobile Robots, Jetson Nano, System Dynamics
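A minimal sketch of the two distance methods named in the abstract, comparing a hand-written Haversine great-circle distance with pyproj's geodesic inverse on the WGS84 ellipsoid. The GPS fixes are arbitrary example values, not data from the rover, and the report's exact processing pipeline may differ.

```python
# Distance between consecutive GPS fixes, computed two ways: the Haversine
# formula and pyproj's geodesic inverse. Example coordinates are made up.
import math
from pyproj import Geod

def haversine_m(lat1, lon1, lat2, lon2, r=6371000.0):
    """Great-circle distance in metres between two (lat, lon) fixes."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

geod = Geod(ellps="WGS84")

p_prev = (60.4518, 22.2666)   # example fix (lat, lon)
p_curr = (60.4523, 22.2679)

d_hav = haversine_m(*p_prev, *p_curr)
_, _, d_geo = geod.inv(p_prev[1], p_prev[0], p_curr[1], p_curr[0])  # lon, lat order
print(f"haversine: {d_hav:.2f} m, pyproj geodesic: {d_geo:.2f} m")

# Velocity follows from consecutive fixes and the sampling interval dt;
# acceleration and jerk are successive finite differences of the velocity.
```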
Run-time Resource Management in CMPs Handling Multiple Aging Mechanisms
Abstract—Run-time resource management is fundamental to the efficient execution of workloads on Chip Multiprocessors. Application- and system-level requirements (e.g., on performance vs. power vs. lifetime reliability) generally conflict with each other, and any decision on resource assignment, such as core allocation or frequency tuning, may positively affect some of them while penalizing others. The effect of resource assignment decisions on performance and power consumption can be perceived within a few instants, but their effect on lifetime reliability cannot: the latter changes very slowly, through the accumulated effects of many decisions over a long time horizon. Moreover, aging mechanisms are diverse and have different causes; most of them, such as Electromigration (EM), are driven by temperature levels, while Thermal Cycling (TC) is caused mainly by temperature variations (both their amplitude and frequency). Mitigating only EM may negatively affect TC, and vice versa. We propose a resource orchestration strategy that balances performance and power consumption constraints in the short term against EM and TC aging in the long term. Experimental results show that the proposed approach improves the average Mean Time To Failure by at least 17% and 20% with respect to EM and TC, respectively, while providing the same performance level as the nominal counterpart and guaranteeing the power budget.
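To illustrate why EM and TC pull in opposite directions, the hedged sketch below uses the two reliability models commonly associated with them, Black's equation for electromigration and a Coffin-Manson relation for thermal cycling. The constants and the two hypothetical management decisions are illustrative assumptions, not the paper's models or numbers.

```python
# Illustrative reliability models behind the EM/TC trade-off: Black's equation
# makes electromigration lifetime improve as temperature drops, while a
# Coffin-Manson model makes thermal-cycling lifetime depend on the amplitude
# of temperature swings. All constants are illustrative, not the paper's.
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def em_mttf(temp_k, ea_ev=0.9, current_density=1.0, n=2.0, a=1.0):
    """Black's equation: MTTF ~ A * J^-n * exp(Ea / (k_B * T))."""
    return a * current_density ** (-n) * math.exp(ea_ev / (K_B * temp_k))

def tc_cycles_to_failure(delta_t, b=2.35, c=1.0):
    """Coffin-Manson: cycles to failure ~ C * (dT)^-b."""
    return c * delta_t ** (-b)

# Two hypothetical management decisions for the same core:
# (a) aggressive power gating: lower average temperature, larger swings
# (b) steady throttling: higher average temperature, smaller swings
print("EM lifetime ratio (a/b):",
      em_mttf(330.0) / em_mttf(345.0))                       # (a) wins on EM
print("TC lifetime ratio (a/b):",
      tc_cycles_to_failure(40.0) / tc_cycles_to_failure(15.0))  # (b) wins on TC
```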
Thread-level Parallelism in Fault Simulation of Deep Neural Networks on Multi-Processor Systems
High-performance fault simulation is one of the essential preliminary tasks in the online and offline testing of machine learning (ML) hardware. Deep neural networks (DNNs), one of the essential components of ML programs, are widely used in many critical and non-critical applications in Systems-on-Chip and ASIC designs. In fault simulation of DNNs, the simulation time increases exponentially with the number of neurons. However, the software architecture of neural networks and the lack of dependency between neurons within each inference layer provide a significant opportunity to parallelize fault simulation on a multi-processor platform. In this paper, a multi-thread technique for hierarchical fault simulation of neural networks is proposed, targeting both permanent and transient faults. During fault simulation, the neurons of each inference layer are distributed among the executing threads. Since, in hierarchical fault simulation, the faulty neuron demands proportionally enormous computation compared to the behavioural models of non-faulty neurons, the faulty neuron is assigned to one thread while the rest of the neurons are divided among the remaining threads. Experimental results confirm the time efficiency of the proposed fault simulation technique on multi-processor architectures.
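A minimal sketch of the thread assignment described above: the expensive faulty neuron gets a dedicated worker, and the fault-free neurons of the same layer are partitioned across the remaining workers. The neuron models are toy placeholders for illustration, not the paper's hierarchical fault simulator.

```python
# Layer-wise thread assignment: one dedicated thread for the faulty neuron,
# the fault-free neurons split across the remaining threads.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def behavioural_neuron(weights, inputs):
    """Cheap behavioural model of a fault-free neuron: ReLU(w . x)."""
    return float(np.maximum(0.0, weights @ inputs))

def faulty_neuron(weights, inputs, stuck_at_zero=True):
    """Stand-in for the detailed (expensive) faulty-neuron model."""
    out = float(np.maximum(0.0, weights @ inputs))
    return 0.0 if stuck_at_zero else out          # e.g. output stuck-at-0

def simulate_layer(weight_matrix, inputs, faulty_idx, n_threads=4):
    n_neurons = weight_matrix.shape[0]
    healthy = [i for i in range(n_neurons) if i != faulty_idx]
    chunks = np.array_split(healthy, max(1, n_threads - 1))  # one thread reserved
    out = np.zeros(n_neurons)

    def run_chunk(idx_chunk):
        for i in idx_chunk:
            out[i] = behavioural_neuron(weight_matrix[i], inputs)

    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        fut_faulty = pool.submit(faulty_neuron, weight_matrix[faulty_idx], inputs)
        futs = [pool.submit(run_chunk, c) for c in chunks]
        for f in futs:
            f.result()
        out[faulty_idx] = fut_faulty.result()
    return out

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 16))
x = rng.standard_normal(16)
print(simulate_layer(w, x, faulty_idx=3))
```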
Heterogeneous parallelization for object detection and tracking in UAVs
Recent technical advancements in both unmanned aerial vehicle (UAV) control and artificial intelligence (AI) have made a new realm of applications possible. However, one of the main problems in integrating these two areas is the bottleneck of computing AI applications on a UAV's resource-limited platform. One of the main solutions to this problem is to adapt the AI and control software on one side, and the computing hardware mounted on the UAV on the other, to the main constraints of the UAV's resource-limited computing platform. The target constraints of such adaptation are performance, energy efficiency, and accuracy. In this paper, we propose a strategy to integrate and adapt a commonly used object detection and tracking algorithm and the UAV control software for execution on heterogeneous, resource-limited computing units on a UAV. For object detection, a convolutional neural network (CNN) algorithm is used. For object tracking, a novel algorithm is proposed that can execute alongside object detection on sequential stream data. For UAV control, a gain-scheduled PID controller is designed that steers the UAV by continuously manipulating the actuators based on the stream data from the tracking unit and the dynamics of the UAV. All the algorithms are adapted to execute on a heterogeneous platform comprising an NVIDIA Jetson TX2 embedded computer and an ARM Cortex-M4. Observations from real-time operation show that the proposed platform reduces power consumption by 53.69% compared with other existing methods, while incurring only a marginal penalty in the object detection and tracking parts.
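A minimal sketch of a gain-scheduled PID loop of the kind mentioned above: the PID gains are interpolated from a schedule keyed on an operating point (here speed). The schedule values and the choice of scheduling variable are illustrative assumptions, not the paper's tuned controller.

```python
# Gain-scheduled PID: gains interpolated over operating points (example: speed).
import numpy as np

# operating points (m/s) and PID gains tuned at each point (illustrative)
SPEEDS = np.array([2.0, 6.0, 12.0])
KP     = np.array([1.20, 0.80, 0.50])
KI     = np.array([0.30, 0.20, 0.10])
KD     = np.array([0.05, 0.08, 0.12])

class GainScheduledPID:
    def __init__(self):
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, speed, dt):
        # look up the gains for the current operating point
        kp = float(np.interp(speed, SPEEDS, KP))
        ki = float(np.interp(speed, SPEEDS, KI))
        kd = float(np.interp(speed, SPEEDS, KD))
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return kp * error + ki * self.integral + kd * derivative

pid = GainScheduledPID()
# error stream could come from the tracking unit (e.g. target offset)
for err, v in [(0.4, 3.0), (0.3, 5.0), (0.1, 8.0)]:
    print(f"actuator command: {pid.update(err, v, dt=0.05):.3f}")
```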
Energy-efficient Post-failure Reconfiguration of Swarms of Unmanned Aerial Vehicles
In this paper, the reconfiguration of swarms of unmanned aerial vehicles after simultaneous failures of multiple nodes is considered. The objectives of the post-failure reconfiguration are to provide collision avoidance and smooth, energy-efficient movement. To this end, three different failure recovery algorithms are proposed, namely a thin-plate spline algorithm and distance- and time-optimal algorithms. These methods are tested on six swarms, with two variations of failing nodes for each swarm. Simulation results show that executing these algorithms maintains the desired formations while avoiding collisions at run-time. The results also show their effectiveness in terms of distance travelled, kinetic energy, and energy efficiency. As expected, the distance-optimal algorithm gives the shortest movements, and the time-optimal algorithm gives the most energy-efficient movements. The thin-plate spline is also found to be energy-efficient and has a lower computational cost than the other two proposed methods. Despite the suggested heuristics, the latter are combinatorial in nature and might be hard to use in practice. Furthermore, the use of the regularization parameter λ in the thin-plate spline is investigated, and it is found that too large values of λ can lead to incorrect locations, including multiple nodes at the same location. In fact, using λ = 0 worked well in all cases.
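One simple reading of a distance-based post-failure reassignment is sketched below: surviving drones are matched to the slots of the reduced formation so that the total travelled distance is minimised, via a linear sum assignment (Hungarian method). The formation geometry is made up, and the paper's distance- and time-optimal formulations may well differ from this simplified variant.

```python
# Assign surviving drones to target slots of the reduced formation so that the
# total travelled distance is minimised (linear sum assignment). Positions and
# formation are illustrative, not the paper's scenarios.
import numpy as np
from scipy.optimize import linear_sum_assignment

# current positions of the surviving drones (x, y), after two nodes failed
survivors = np.array([[0.0, 0.0], [4.0, 1.0], [1.0, 5.0], [6.0, 4.0]])

# target slots of the reduced (4-node) formation
targets = np.array([[1.0, 1.0], [5.0, 1.0], [1.0, 4.0], [5.0, 4.0]])

# cost[i, j] = distance drone i must travel to occupy slot j
cost = np.linalg.norm(survivors[:, None, :] - targets[None, :, :], axis=-1)

rows, cols = linear_sum_assignment(cost)          # minimises total distance
for drone, slot in zip(rows, cols):
    print(f"drone {drone} -> slot {slot}, travel {cost[drone, slot]:.2f} m")
print(f"total distance: {cost[rows, cols].sum():.2f} m")
```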