
    Editorial: Remote Ischemic Conditioning (Pre, Per, and Post) as an Emerging Strategy of Neuroprotection in Ischemic Stroke

    EDITORIAL article. This study was funded by the Carlos III Health Institute, co-funded by the European Union (ERDF "A way to make Europe") project PI17-01725 and the RICORS Research Network, to FP; NIH funding (R01 NS099455, 1U01NS113356, and R01 NS112511) to DH; Italian Ministry of Health PRIN 2017CY3J3W to SB; and French National Ministry of Health grant 2014 AOR13032 to FP. TE is the Chief Investigator for the Remote Ischaemic Conditioning After Stroke Trial (RECAST), RECAST-2, and RECAST-3, funded through the NIHR Efficacy and Mechanism Evaluation (EME) Programme, Award ID NIHR128240.

    Early Classification of Pathological Heartbeats on Wireless Body Sensor Nodes

    Smart Wireless Body Sensor Nodes (WBSNs) are a novel class of unobtrusive, battery-powered devices allowing the continuous monitoring and real-time interpretation of a subject's bio-signals, such as the electrocardiogram (ECG). These low-power platforms, while able to perform advanced signal processing to extract information on heart conditions, are usually constrained in terms of computational power and transmission bandwidth. It is therefore essential to identify at an early stage which parts of an ECG are critical for the diagnosis and, only in these cases, to activate on demand more detailed and computationally intensive analysis algorithms. In this work, we present a comprehensive framework for the real-time automatic classification of normal and abnormal heartbeats, targeting embedded and resource-constrained WBSNs. In particular, we provide a comparative analysis of different strategies to reduce the dimensionality of the heartbeat representation, and therefore the required computational effort. We then combine these techniques with a neuro-fuzzy classification strategy, which effectively discerns normal and pathological heartbeats with minimal run-time and memory overhead. We show that, by performing a detailed analysis only on the heartbeats that our classifier identifies as abnormal, a WBSN system can drastically reduce its overall energy consumption. Finally, we assess the choice of neuro-fuzzy classification by comparing its performance and workload with those of other state-of-the-art strategies. Experimental results using the MIT-BIH Arrhythmia database show energy savings of as much as 60% in the signal processing stage, and 63% in the subsequent wireless transmission, when a neuro-fuzzy classification structure is employed, coupled with a dimensionality reduction technique based on random projections.
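    The dimensionality reduction via random projections mentioned in this abstract can be illustrated with a short sketch. The following is a minimal NumPy example, not the paper's implementation: the window length (128 samples), the target dimension (16), and the Gaussian projection matrix are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_projection(beats, k=16):
    """Project heartbeats of d samples each onto k random directions.

    By the Johnson-Lindenstrauss lemma, pairwise distances are roughly
    preserved, so a lightweight classifier can still separate normal
    from abnormal beats in the reduced space.
    """
    d = beats.shape[1]
    # Gaussian projection matrix, scaled so expected norms are preserved.
    P = rng.normal(0.0, 1.0 / np.sqrt(k), size=(d, k))
    return beats @ P

# Illustrative use: 200 heartbeats, each a 128-sample window around the R peak.
beats = rng.normal(size=(200, 128))
features = random_projection(beats, k=16)
print(features.shape)  # (200, 16): 8x fewer values to process and transmit
```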

    Hardware/Software Approach for Code Synchronization in Low-Power Multi-Core Sensor Nodes

    Latest embedded bio-signal analysis applications, targeting low-power Wireless Body Sensor Nodes (WBSNs), present conflicting requirements. On one hand, bio-signal analysis applications are continuously increasing their demand for high computing capabilities. On the other hand, long-term signal processing in WBSNs must be provided within their highly constrained energy budget. In this context, parallel processing effectively increases the power efficiency of WBSNs, but only if the execution can be properly synchronized among computing elements. To address this challenge, in this work we propose a hardware/software approach to synchronize the execution of bio-signal processing applications in multi-core WBSNs. This new approach requires few hardware resources and very few adaptations in the source code. Moreover, it provides the necessary flexibility to execute applications with an arbitrarily large degree of complexity and parallelism, enabling considerable reductions in power consumption for all multi-core WBSN execution conditions. Experimental results show that a multi-core WBSN architecture using the proposed approach can obtain energy savings of up to 40%, with respect to an equivalent single-core architecture, when performing advanced bio-signal analysis.
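    The paper's synchronization mechanism is partly implemented in hardware; as a purely software analogue, the sketch below uses a barrier so that no core starts the next signal window before all cores have finished the current one. Python threads stand in for the WBSN cores, and the per-core workload is a placeholder; none of this reflects the paper's actual architecture.

```python
import threading

NUM_CORES = 4
partials = [0.0] * NUM_CORES
# The barrier plays the role of the synchronizer: every core blocks here
# until all cores have finished processing the current window.
barrier = threading.Barrier(NUM_CORES)

def core_task(core_id, window):
    # Each core reduces an interleaved slice of the sample window
    # (a stand-in for a per-core bio-signal processing kernel).
    partials[core_id] = sum(window[core_id::NUM_CORES])
    barrier.wait()  # synchronization point: no core runs ahead
    if core_id == 0:
        print("window energy:", sum(partials))

window = [float(i % 7) for i in range(1024)]
threads = [threading.Thread(target=core_task, args=(i, window))
           for i in range(NUM_CORES)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```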

    Spatial Molecular and Cellular Determinants of STAT3 Activation in Liver Fibrosis Progression in Non-alcoholic Fatty Liver Disease

    BACKGROUND & AIMS: The prevalence of non-alcoholic fatty liver disease (NAFLD) and its severe form, non-alcoholic steatohepatitis (NASH), is increasing. Individuals with NASH often develop liver fibrosis and advanced liver fibrosis is the main determinant of mortality in individuals with NASH. We and others have reported that STAT3 contributes to liver fibrosis and hepatocellular carcinoma in mice. METHODS: Here, we explored whether STAT3 activation in hepatocyte and non-hepatocyte areas, measured by phospho-STAT3 (pSTAT3), is associated with liver fibrosis progression in 133 patients with NAFLD. We further characterized the molecular and cellular determinants of STAT3 activation by integrating spatial distribution and transcriptomic changes in fibrotic NAFLD livers.Results: pSTAT3 scores in non-hepatocyte areas progressively increased with fibrosis severity (r = 0.53, CONCLUSION: Increased understanding of the spatial dependence of STAT3 signaling in NASH and liver fibrosis progression could lead to novel targeted treatment approaches. IMPACT AND IMPLICATIONS: Advanced liver fibrosis is the main determinant of mortality in patients with NASH. This study showed using liver biopsies from 133 patients with NAFLD, that STAT3 activation in non-hepatocyte areas is strongly associated with fibrosis severity, inflammation, and progression to NASH. STAT3 activation was enriched in hepatic progenitor cells (HPCs) and sinusoidal endothelial cells (SECs), as determined by innovative technologies interrogating the spatial distribution of pSTAT3. Finally, STAT3 inhibition in mice resulted in reduced liver fibrosis and depletion of HPCs, suggesting that STAT3 activation in HPCs contributes to their expansion and fibrogenesis in NAFLD

    Enhancement Pattern Mapping for Early Detection of Hepatocellular Carcinoma in Patients with Cirrhosis

    BACKGROUND AND AIMS: Limited methods exist to accurately characterize the risk of malignant progression of liver lesions. Enhancement pattern mapping (EPM) measures the voxel-based root mean square deviation (RMSD) of liver parenchyma; the contrast-to-noise ratio (CNR) is enhanced in malignant lesions. This study investigates the use of EPM to differentiate HCC from cirrhotic parenchyma with and without benign lesions. METHODS: Patients with cirrhosis undergoing MRI surveillance were studied prospectively. Cases (n=48) were defined as patients with LI-RADS 3 and 4 lesions who developed HCC during surveillance. Controls (n=99) were patients with and without LI-RADS 3 and 4 lesions who did not develop HCC. Manual and automated EPM signals of liver parenchyma in cases and controls were quantitatively validated on an independent patient set using cross-validation, with manual methods avoiding parenchyma containing artifacts or blood vessels. RESULTS: With manual EPM, an RMSD of 0.37 was identified as a cutoff for distinguishing lesions that progress to HCC from background parenchyma with and without lesions on pre-diagnostic scans (median time interval 6.8 months), with an area under the curve (AUC) of 0.83 (CI: 0.73-0.94) and a sensitivity, specificity, and accuracy of 0.65, 0.97, and 0.89, respectively. At the time of diagnostic scans, a sensitivity, specificity, and accuracy of 0.79, 0.93, and 0.88 were achieved with manual EPM, with an AUC of 0.89 (CI: 0.82-0.96). EPM RMSD signals of background parenchyma that did not progress to HCC were similar in cases and controls (case EPM: 0.22 ± 0.08, control EPM: 0.22 ± 0.09, p=0.8). Automated EPM produced similar quantitative results and performance. CONCLUSION: With manual EPM, a cutoff of 0.37 identifies quantifiable differences between HCC cases and controls approximately six months prior to the diagnosis of HCC, with an accuracy of 89%.
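    The voxel-based RMSD at the heart of EPM can be sketched in a few lines. The following is a toy illustration, not the study's pipeline: it assumes a co-registered stack of contrast phases and a synthetic enhancing lesion, and reuses the 0.37 cutoff reported above purely for demonstration.

```python
import numpy as np

def enhancement_pattern_rmsd(phases):
    """Voxel-wise root mean square deviation across contrast phases.

    `phases` is a (t, x, y) stack of co-registered MRI phases for one
    slice. Voxels whose enhancement deviates strongly over time (high
    RMSD) are flagged as suspicious.
    """
    mean = phases.mean(axis=0)
    return np.sqrt(((phases - mean) ** 2).mean(axis=0))

rng = np.random.default_rng(1)
# Four phases of mostly flat parenchyma (noise std 0.1)...
phases = rng.normal(loc=1.0, scale=0.1, size=(4, 64, 64))
# ...plus a 10x10 region that enhances progressively across phases.
phases[:, 20:30, 20:30] += np.linspace(0.0, 2.0, 4)[:, None, None]

rmsd = enhancement_pattern_rmsd(phases)
print("suspicious voxels:", int((rmsd > 0.37).sum()))  # roughly the lesion area
```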

    Heterogeneous Image-based Classification Using Distributional Data Analysis

    Diagnostic imaging has gained prominence as a source of potential biomarkers for early detection and diagnosis in a diverse array of disorders, including cancer. However, existing methods routinely face challenges arising from various factors, such as image heterogeneity. We develop a novel imaging-based distributional data analysis (DDA) approach that incorporates the probability (quantile) distribution of the pixel-level features as covariates. The proposed approach uses a smoothed quantile distribution (via a suitable basis representation) as a functional predictor in a scalar-on-function quantile regression model. Some distinctive features of the proposed approach include the ability to: (i) account for heterogeneity within the image; (ii) incorporate granular information spanning the entire distribution; and (iii) tackle variability in image sizes for unregistered images in cancer applications. Our primary goal is risk prediction in hepatocellular carcinoma, which is achieved by predicting the change in tumor grade at post-diagnostic visits using pre-diagnostic enhancement pattern mapping (EPM) images of the liver. Along the way, the proposed DDA approach is also used for case-versus-control diagnosis and risk stratification objectives. Our analysis reveals that, when coupled with global structural radiomics features derived from the corresponding T1-MRI scans, the proposed smoothed quantile distributions derived from EPM images showed considerable improvements in sensitivity and comparable specificity in contrast to classification based on routinely used summary measures that do not account for image heterogeneity. Given that there are limited predictive modeling approaches based on heterogeneous images in cancer, the proposed method is expected to provide considerable advantages in image-based early detection and risk prediction.
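    The key data-reduction step, turning an image of arbitrary size into a fixed-length smoothed quantile function, can be sketched as follows. This is an assumption-laden illustration: the probability grid, the polynomial basis, and its dimension are placeholders, not the basis representation used in the paper.

```python
import numpy as np

def quantile_features(pixels, n_grid=25, n_basis=6):
    """Represent an image of arbitrary size by its smoothed quantile function.

    Empirical quantiles on a fixed probability grid make images of
    different sizes comparable; projecting them onto a small polynomial
    basis smooths them, yielding fixed-length functional covariates
    for a scalar-on-function regression model.
    """
    grid = np.linspace(0.01, 0.99, n_grid)
    q = np.quantile(pixels, grid)
    # Least-squares fit of a degree-(n_basis - 1) polynomial in the
    # probability level; the coefficients are the functional features.
    B = np.vander(grid, n_basis, increasing=True)
    coef, *_ = np.linalg.lstsq(B, q, rcond=None)
    return coef  # same length for every image, regardless of pixel count

rng = np.random.default_rng(2)
small_img = rng.gamma(2.0, 1.0, size=500)     # unregistered images of
large_img = rng.gamma(2.0, 1.0, size=40_000)  # very different sizes
print(quantile_features(small_img).round(2))
print(quantile_features(large_img).round(2))  # comparable feature vectors
```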

    Design Methods for Parallel Hardware Implementation of Multimedia Iterative Algorithms

    Traditionally, parallel implementations of multimedia algorithms are carried out manually, since the automation of this task is very difficult due to the complex dependencies that generally exist between different elements of the data set. Moreover, there is a wide family of iterative multimedia algorithms that cannot be executed with satisfactory performance on Multi-Processor Systems-on-Chip or Graphics Processing Units. New methods to design custom hardware circuits that exploit the intrinsic parallelism of multimedia algorithms are therefore needed. In this paper, we propose a novel design method for the definition of hardware systems optimized for a particular class of iterative multimedia algorithms. We have successfully applied the proposed approach to several real-world case studies, such as iterative convolution filters and the Chambolle algorithm, and the proposed design method has been able to automatically implement, for each of them, a parallel architecture able to meet real-time performance (up to 72 frames per second for the Chambolle algorithm), with on-chip memory requirements two to three orders of magnitude smaller than those of state-of-the-art approaches.
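    To see why such iterative algorithms expose intrinsic parallelism, consider a toy iterative 3x3 averaging filter: each output pixel at iteration k depends only on a small neighbourhood at iteration k-1, so hardware can pipeline rows across iterations. The NumPy sketch below is illustrative only; the kernel, frame size, and iteration count are arbitrary choices, not the paper's case study.

```python
import numpy as np

def iterative_filter(img, iterations=10):
    """Repeatedly apply a 3x3 averaging stencil.

    The local dependency (3x3 neighbourhood of the previous iteration)
    is what lets a hardware pipeline start iteration k on a row as soon
    as the neighbouring rows of iteration k-1 are ready, instead of
    waiting for the full frame.
    """
    out = img.astype(float)
    for _ in range(iterations):
        padded = np.pad(out, 1, mode='edge')
        # Sum the 3x3 neighbourhood via nine shifted views, then average.
        out = sum(padded[i:i + out.shape[0], j:j + out.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    return out

img = np.random.default_rng(3).random((32, 32))
print(iterative_filter(img).mean().round(4))  # mean is preserved by averaging
```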

    Parallelizing the Chambolle Algorithm for Performance-Optimized Mapping on FPGA Devices

    The performance and efficiency of recent computing platforms have been deeply influenced by the widespread adoption of hardware accelerators, such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs), which are often employed to support the tasks of General Purpose Processors (GPPs). One of the main advantages of these accelerators over their sequential counterparts (GPPs) is their ability to perform massively parallel computation. However, in order to exploit this competitive edge, it is necessary to extract the parallelism from the target algorithm, which is in general a very challenging task. This is demonstrated, for instance, by the poor performance achieved on relevant multimedia algorithms such as Chambolle, a well-known algorithm for optical flow estimation. The implementations of this algorithm found in the state of the art are generally based on GPUs, but barely improve on the performance that can be obtained with a powerful GPP. In this paper, we propose a novel approach to extract the parallelism from computation-intensive multimedia algorithms, which includes an analysis of their dependency schema and an assessment of their data reuse. We then perform a thorough analysis of the Chambolle algorithm, providing a formal proof of its inner data dependencies and locality properties. We exploit the considerations drawn from this analysis by proposing an architectural template that takes advantage of the fine-grained parallelism of FPGA devices. Moreover, since the proposed template can be instantiated with different parameters, we also propose a design metric, the expansion rate, to help the designer estimate the efficiency and performance of the different instances, making it possible to select the right one before the implementation phase. We finally show, by means of experimental results, how the proposed analysis and parallelization approach leads to the design of efficient and high-performance FPGA-based implementations that are orders of magnitude faster than the state-of-the-art ones.
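    For reference, the core of Chambolle's method is a fixed-point iteration on a dual vector field in which every update reads only a local neighbourhood, exactly the locality property the above analysis formalizes. Below is a compact NumPy sketch of the total-variation form of the iteration (Chambolle, 2004), applied here to image denoising rather than the full optical-flow pipeline; the step size, weight, and iteration count are illustrative.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary conditions."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def chambolle_tv(f, lam=0.1, tau=0.125, iters=50):
    """Chambolle's dual projection iteration for TV denoising.

    Each update of the dual field (px, py) reads only a local
    neighbourhood of the previous iterate: this is the fine-grained
    data dependency that an FPGA mapping can exploit.
    """
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(iters):
        gx, gy = grad(div(px, py) - f / lam)
        norm = 1.0 + tau * np.sqrt(gx**2 + gy**2)
        px = (px + tau * gx) / norm
        py = (py + tau * gy) / norm
    return f - lam * div(px, py)

noisy = np.random.default_rng(4).normal(0.5, 0.1, (64, 64))
print(chambolle_tv(noisy).std() < noisy.std())  # True: output is smoother
```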

    A High–Performance Parallel Implementation of the Chambolle Algorithm

    The determination of the optical flow is a central problem in image processing, as it makes it possible to describe how an image changes over time by means of a numerical vector field. The estimation of the optical flow is, however, a very complex problem, which has been tackled using many different mathematical approaches. A large body of work has recently been published on variational methods, following the technique for total variation minimization proposed by Chambolle. Still, their hardware implementations do not offer good performance in terms of frames processed per time unit, mainly because of the complex dependency scheme among the data. In this work, we propose a highly parallel and accelerated FPGA implementation of the Chambolle algorithm, which splits the original image into a set of overlapping sub-frames and efficiently exploits the reuse of intermediate results. We validate our hardware on large frames (up to 1024 × 768), and the proposed approach largely outperforms the state-of-the-art implementations, reaching speedups of up to 76× as well as real-time frame rates even at high resolutions.
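    The split into overlapping sub-frames mentioned above can be illustrated with a simple tiling sketch. This is not the paper's FPGA design: the tile size and halo width are assumed values (the halo actually required grows with the number of Chambolle iterations and the stencil radius).

```python
import numpy as np

def split_with_halo(img, tile=256, halo=8):
    """Split a frame into overlapping sub-frames (tiles plus a halo).

    Each tile carries `halo` extra pixels on every side so that a local
    iterative algorithm such as Chambolle's can run on each tile
    independently; only the interior of each result is stitched back.
    """
    h, w = img.shape
    tiles = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            y0, x0 = max(y - halo, 0), max(x - halo, 0)
            y1, x1 = min(y + tile + halo, h), min(x + tile + halo, w)
            # Keep the tile's nominal origin so results can be stitched back.
            tiles.append(((y, x), img[y0:y1, x0:x1]))
    return tiles

frame = np.zeros((768, 1024))  # the 1024 x 768 frame size validated above
tiles = split_with_halo(frame)
print(len(tiles), "overlapping sub-frames")  # 12 tiles for this frame size
```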