
    Selection of sensors by a new methodology coupling a classification technique and entropy criteria

    Complex industrial processes invest heavily in sensors and automation devices that monitor and supervise the process, in order to guarantee product quality and the safety of the plant and its operators. Fault detection is one of the many tasks of process monitoring, and it depends critically on the sensors that measure the significant process variables. Nevertheless, most of the work on fault detection and diagnosis in the literature emphasizes procedures that perform diagnosis given a set of sensors, rather than determining where sensors should actually be located for efficient identification of faults. A methodology based on learning and classification techniques and on the quantity of information, measured by entropy, is proposed to address the problem of sensor location for fault identification. The proposed methodology has been applied to a continuous intensified reactor, the "Open Plate Reactor (OPR)", developed by Alfa Laval and studied at the Laboratory of Chemical Engineering of Toulouse. The different steps of the methodology are explained through its application to the carrying out of an exothermic reaction.
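    The entropy criterion at the heart of this approach, ranking candidate sensors by how much information their discretized readings carry about the fault class, can be sketched in a few lines. The fault labels, sensor readings, and discretization below are invented for illustration; the paper's actual methodology couples this criterion with a learning and classification step.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a sequence of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(readings, faults):
    """Reduction in fault-class uncertainty after observing a
    discretized sensor reading: H(fault) - H(fault | reading)."""
    n = len(faults)
    conditional = 0.0
    for value in set(readings):
        subset = [f for r, f in zip(readings, faults) if r == value]
        conditional += len(subset) / n * entropy(subset)
    return entropy(faults) - conditional

# Invented example: fault classes observed in six runs, and the
# discretized readings two candidate sensors produced in those runs.
faults  = ["F1", "F1", "F2", "F2", "F3", "F3"]
sensors = {
    "A": ["lo", "lo", "hi", "hi", "lo", "hi"],    # confuses F1/F3 and F2/F3
    "B": ["lo", "lo", "mid", "mid", "hi", "hi"],  # separates all three faults
}
ranking = sorted(sensors, key=lambda s: information_gain(sensors[s], faults),
                 reverse=True)
print(ranking)  # ['B', 'A']: sensor B is the more informative placement
```

    Sensor B fully determines the fault class, so its information gain reaches the maximum H(fault) = log2(3) bits, while sensor A leaves residual uncertainty; ranking by this gain is what drives the sensor-selection step.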

    Radiogenomics in clear cell renal cell carcinoma: correlations between advanced CT imaging (texture analysis) and microRNAs expression

    Purpose: A relevant challenge for improving the management of clear cell renal cell carcinoma is the identification of novel molecular biomarkers that could greatly improve the diagnosis, prognosis, and treatment choice for these neoplasms. In this study, we investigate whether quantitative parameters obtained from computed tomography texture analysis correlate with the expression of selected oncogenic microRNAs. Methods: In a retrospective single-center study, multiphasic computed tomography (with arterial, portal, and urographic phases) was performed on 20 patients with clear cell renal cell carcinoma, and texture-analysis parameters such as entropy, kurtosis, skewness, mean, and standard deviation of the pixel distribution were measured using multiple filter settings. These quantitative data were correlated with the expression of selected microRNAs (miR-21-5p, miR-210-3p, miR-185-5p, miR-221-3p, miR-145-5p). Both evaluations (microRNAs and computed tomography texture analysis) were performed on matched tumor and normal corticomedullary tissues from the same patient cohort. Results: In this pilot study, we found that computed tomography texture analysis provides robust parameters (e.g., entropy, mean, standard deviation) for distinguishing normal from pathological tissue. Moreover, a higher coefficient of determination between entropy and miR-21-5p expression was observed in tumor versus normal tissue. Conclusion: In this pilot study, a promising correlation between microRNAs and computed tomography texture analysis, in particular between entropy and miR-21-5p, was found in clear cell renal cell carcinoma, opening the way to a radiogenomic strategy in which noninvasive evaluation of texture parameters complements biopsy results.
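    The first-order texture parameters the study relies on (mean, standard deviation, skewness, kurtosis, and histogram entropy of the pixel distribution) are straightforward to compute. The sketch below, on invented pixel values, shows why histogram entropy separates a heterogeneous region from a homogeneous one; it is not the clinical pipeline, which applies multiple spatial filter settings before these statistics are taken.

```python
import math
from collections import Counter

def texture_stats(pixels, bins=8):
    """First-order texture parameters of a pixel-intensity sample:
    mean, standard deviation, skewness, kurtosis, and the Shannon
    entropy of an equal-width intensity histogram."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = math.sqrt(var)
    skew = sum((p - mean) ** 3 for p in pixels) / (n * std ** 3) if std else 0.0
    kurt = sum((p - mean) ** 4 for p in pixels) / (n * var ** 2) if var else 0.0
    lo, hi = min(pixels), max(pixels)
    width = (hi - lo) / bins or 1  # degenerate case: all pixels equal
    counts = Counter(min(int((p - lo) / width), bins - 1) for p in pixels)
    ent = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {"mean": mean, "std": std, "skewness": skew,
            "kurtosis": kurt, "entropy": ent}

# Invented 64-pixel regions: nearly uniform vs. widely spread intensities.
homogeneous   = texture_stats([100] * 60 + [101] * 4)
heterogeneous = texture_stats(list(range(64)))
print(heterogeneous["entropy"] > homogeneous["entropy"])  # True
```

    The heterogeneous sample fills all eight histogram bins evenly (entropy of 3 bits), while the near-uniform sample concentrates in one bin, which is the intuition behind entropy discriminating pathological from normal tissue.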

    Multi-dimensional profiling of elderly at-risk for Alzheimer's disease in a differential framework

    The utility of EEG in Alzheimer's disease (AD) research has been demonstrated over several decades in numerous studies. EEG markers have been employed successfully to investigate AD-related alterations in prodromal AD and AD dementia. Preclinical AD is a recent concept and a novel target for clinical research. This project tackles two issues: first, AD prediction at the preclinical stage, by exploiting the multimodal INSIGHT-preAD database, acquired at the Pitié-Salpêtrière Hospital; second, automatic AD diagnosis in a differential framework, by exploiting another large-scale EEG database, acquired at the Charles-Foix Hospital. In this project, we will investigate AD predictors at the preclinical stage, using EEG data from subjective memory complainers only, in order to establish a cognitive profile of elderly people at risk. We will also identify EEG markers for AD detection at early stages in a differential-diagnosis context. The correlation between EEG markers and clinical biomarkers will also be assessed, for a better characterization of the retrieved profiles and a better understanding of the severity of the cognitive disorder. The complementary large-scale data offer the opportunity to investigate the full spectrum of AD neurodegenerative changes in the brain, using a big-data approach and multimodal patient profiling based on resting-state EEG markers.

    Cashtag piggybacking: uncovering spam and bot activity in stock microblogs on Twitter

    Microblogs are increasingly exploited for predicting prices and traded volumes of stocks in financial markets. However, it has been demonstrated that much of the content shared on microblogging platforms is created and publicized by bots and spammers. Yet, the presence (or lack thereof) and the impact of fake stock microblogs have never been systematically investigated before. Here, we study 9M tweets related to stocks of the 5 main financial markets in the US. By comparing tweets with financial data from Google Finance, we highlight important characteristics of Twitter stock microblogs. More importantly, we uncover a malicious practice - referred to as cashtag piggybacking - perpetrated by coordinated groups of bots and likely aimed at promoting low-value stocks by exploiting the popularity of high-value ones. Among the findings of our study: as much as 71% of the authors of suspicious financial tweets are classified as bots by a state-of-the-art spambot detection algorithm, and 37% of them were suspended by Twitter a few months after our investigation. Our results call for the adoption of spam and bot detection techniques in all studies and applications that exploit user-generated content for predicting the stock market.
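    The co-occurrence signature behind cashtag piggybacking, a thinly traded cashtag appearing almost exclusively in tweets that also mention popular ones, can be screened for with a simple counter. The tweets, tickers, and threshold below are invented for illustration and are not the paper's actual bot-detection pipeline.

```python
import re
from collections import Counter

CASHTAG = re.compile(r"\$[A-Z]{1,5}\b")

def piggyback_candidates(tweets, popular, min_ratio=0.8):
    """Flag cashtags that co-occur with popular (high-capitalization)
    cashtags in at least `min_ratio` of their appearances: the
    co-occurrence signature of cashtag piggybacking."""
    appearances = Counter()
    with_popular = Counter()
    for text in tweets:
        tags = set(CASHTAG.findall(text))
        has_popular = bool(tags & popular)
        for tag in tags - popular:
            appearances[tag] += 1
            if has_popular:
                with_popular[tag] += 1
    return {t for t, n in appearances.items()
            if with_popular[t] / n >= min_ratio}

# Invented tweets and an invented set of popular tickers.
tweets = [
    "$AAPL to the moon! also check $XYZ before it pops",
    "$XYZ riding the $MSFT earnings wave",
    "$ABC quarterly report out today",
]
popular = {"$AAPL", "$MSFT"}
print(piggyback_candidates(tweets, popular))  # {'$XYZ'}
```

    Here `$XYZ` only ever appears alongside popular cashtags and is flagged, while `$ABC`, mentioned on its own, is not; in the study this kind of signal is combined with per-account bot classification.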

    tf-Darshan: Understanding Fine-grained I/O Performance in Machine Learning Workloads

    Machine Learning applications on HPC systems have been gaining popularity in recent years. The upcoming large-scale systems will offer tremendous parallelism for training through GPUs. However, another heavy aspect of Machine Learning is I/O, and this can potentially be a performance bottleneck. TensorFlow, one of the most popular Deep-Learning platforms, now offers a new profiler interface and allows instrumentation of TensorFlow operations. However, the current profiler only enables analysis at the TensorFlow platform level and does not provide system-level information. In this paper, we extend the TensorFlow Profiler and introduce tf-Darshan, a combined profiler and tracer that performs instrumentation through Darshan. We use the same Darshan shared instrumentation library and implement a runtime attachment without using a system preload. We can extract Darshan profiling data structures during TensorFlow execution to enable analysis through the TensorFlow profiler. We visualize the performance results through TensorBoard, the web-based TensorFlow visualization tool. At the same time, we do not alter Darshan's existing implementation. We illustrate tf-Darshan by performing two case studies on ImageNet image classification and malware classification. We show that by guiding optimization using data from tf-Darshan, we increase POSIX I/O bandwidth by up to 19% by selecting data for staging on fast-tier storage. We also show that Darshan has the potential of being used as a runtime library for profiling and providing information for future optimization. Comment: Accepted for publication at the 2020 International Conference on Cluster Computing (CLUSTER 2020).
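    The kind of analysis this enables, turning per-access I/O records into per-file bandwidth figures and picking candidates for staging on a fast storage tier, can be sketched as below. The record layout (file, bytes, seconds) and the file names are made up for illustration; Darshan's real per-file counters are far richer, and this is not the tf-Darshan API.

```python
def staging_candidates(records, top_n=2):
    """Aggregate (file, bytes, seconds) I/O records into per-file
    bandwidth and return the most bandwidth-hungry files, i.e. the
    best candidates for staging on a fast storage tier.
    The record layout is a simplified stand-in for Darshan counters."""
    totals = {}
    for path, nbytes, seconds in records:
        b, s = totals.get(path, (0, 0.0))
        totals[path] = (b + nbytes, s + seconds)
    bandwidth = {p: b / s for p, (b, s) in totals.items() if s > 0}
    return sorted(bandwidth, key=bandwidth.get, reverse=True)[:top_n]

# Invented trace: two TFRecord shards dominate the training I/O.
records = [
    ("train-0.tfrecord", 8_000_000_000, 40.0),
    ("train-1.tfrecord", 6_000_000_000, 20.0),
    ("labels.csv", 1_000_000, 0.5),
]
print(staging_candidates(records))  # ['train-1.tfrecord', 'train-0.tfrecord']
```

    Ranking by aggregate bandwidth rather than raw volume is one plausible staging policy; the paper's 19% POSIX bandwidth improvement came from guiding such staging decisions with actual tf-Darshan measurements.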

    Oximetry use in obstructive sleep apnea

    Introduction. Overnight oximetry has been proposed as an accessible, simple, and reliable technique for obstructive sleep apnea syndrome (OSAS) diagnosis. From visual inspection to advanced signal processing, several studies have demonstrated the usefulness of oximetry as a screening tool. However, there is still controversy regarding the general application of oximetry as a single screening methodology for OSAS. Areas covered. Currently, high-resolution portable devices combined with pattern recognition-based applications are able to achieve high performance in the detection of this disease. In this review, recent studies involving automated analysis of oximetry by means of advanced signal processing and machine learning algorithms are analyzed. Advantages and limitations are highlighted, and novel research lines aimed at improving the screening ability of oximetry are proposed. Expert commentary. Oximetry is a cost-effective tool for OSAS screening in patients showing a high pretest probability for the disease. Nevertheless, exhaustive analyses are still needed to further assess unattended oximetry monitoring as a single diagnostic test for sleep apnea, particularly in the pediatric population and in special groups with significant comorbidities. In the following years, communication technologies and big data analysis will overcome current limitations of simplified sleep testing approaches, changing the detection and management of OSAS. This research has been partially supported by the projects DPI2017-84280-R and RTC-2015-3446-1 from Ministerio de Economía, Industria y Competitividad and European Regional Development Fund (FEDER), the project 66/2016 of the Sociedad Española de Neumología y Cirugía Torácica (SEPAR), and the project VA037U16 from the Consejería de Educación de la Junta de Castilla y León and FEDER. D. Álvarez was in receipt of Juan de la Cierva grant IJCI-2014-22664 from the Ministerio de Economía y Competitividad.
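    One of the simplest oximetry features used in such screening pipelines is a desaturation count akin to the oxygen desaturation index (ODI). The sketch below counts 3-point drops below a running baseline on an invented 1 Hz SpO2 trace; clinical ODI definitions (baseline computation, event duration, resaturation criteria) are more involved, and this is not a diagnostic tool.

```python
def desaturation_events(spo2, drop=3, window=120):
    """Count desaturation events: onsets where SpO2 falls at least
    `drop` points below a running baseline, taken here as the maximum
    over the preceding `window` samples (2 min at 1 Hz sampling).
    A simplified stand-in for the clinical oxygen desaturation index."""
    events, in_event = 0, False
    for i, value in enumerate(spo2):
        baseline = max(spo2[max(0, i - window):i], default=value)
        if value <= baseline - drop:
            if not in_event:  # count only the onset of each event
                events += 1
                in_event = True
        else:
            in_event = False
    return events

# Invented 1 Hz SpO2 trace (percent) with two transient desaturations.
trace = [97] * 10 + [93] * 5 + [97] * 10 + [92] * 5 + [97] * 5
print(desaturation_events(trace))  # 2
```

    Dividing such a count by recording hours gives an ODI-like rate; the machine learning approaches surveyed in the review feed features of this kind, alongside spectral and nonlinear measures, into classifiers.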