169 research outputs found

    Growth in Stress

    We propose a new global risk index, Growth-in-Stress (GiS), that measures the expected decrease in a country's GDP growth when the global factors driving world growth are subject to stressful conditions. Stress is measured as the 95% contours of the joint probability distribution of the factors. Using GDP growth rates of a sample of 87 countries in the World Bank and International Monetary Fund databases for the period 1985 to 2015, we extract three global factors: a first world growth factor driven mainly by all industrial and emerging countries; a second factor driven by “other developing” countries in Africa and America; and a third factor mostly related to East Asian economies. We find that the average GiS across industrialized, emerging, and other developing countries has declined since 1987. After the 2008 financial crisis, and mainly from 2011 on, the world as a whole has fallen into a state of complacency, with the average GiS falling quite dramatically to reach the lowest levels of risk (a 0-1% potential drop in growth) in 2015. However, the dispersion within groups is quite wide: it is smallest among industrialized countries and largest among emerging and other developing countries. We also measure the factor stress on different quantiles of the GDP growth distribution of each country. We characterize an Armageddon-type event by finding the 5% GiS quantile under 95% extreme factor events, and find that it can be as large as an annual 20% drop in growth. Financial support from the Spanish Government contract grant ECO2015-70331-C2-2-R (MINECO/FEDER) is gratefully acknowledged. We are also grateful to participants at the Time Series Workshop (2018, Zaragoza) and the Conference on Statistical Methods for Big Data (2018, Madrid) for very helpful comments. González-Rivera acknowledges the financial support of UC-Riverside Academic Senate grants. Any remaining errors are our responsibility.
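Under a Gaussian approximation, the 95% stress contour is an ellipsoid, and the worst growth outcome on that contour has a closed form. The sketch below illustrates the idea with simulated factors; the loadings, the Gaussian contour, and the chi-square cutoff are illustrative assumptions, not the authors' estimator.

```python
import numpy as np

def gis_stress(factors, loadings, chi2_95=7.815):
    """Worst-case growth shortfall when the factors sit on the 95% contour.

    factors : (T, k) estimated global factors
    loadings: (k,) country loadings in growth = loadings @ f + noise
    chi2_95 : chi-square critical value; 7.815 is the 95% point for k = 3
    """
    sigma = np.cov(factors, rowvar=False)
    # Minimising loadings @ f over the ellipsoid (f - mu)' Sigma^-1 (f - mu) = c
    # gives f* = mu - sqrt(c) * Sigma @ lam / sqrt(lam' Sigma lam), so the
    # shortfall relative to the mean scenario is sqrt(c * lam' Sigma lam).
    return np.sqrt(chi2_95 * (loadings @ sigma @ loadings))

rng = np.random.default_rng(0)
f = rng.normal(size=(120, 3))        # stand-in for the three global factors
lam = np.array([0.8, 0.3, 0.1])      # hypothetical country loadings
drop = gis_stress(f, lam)            # potential growth drop under 95% stress
```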

    Accurate Subsampling Intervals of Principal Components Factors

    In the context of Dynamic Factor Models (DFMs), one of the most popular procedures for factor extraction is Principal Components (PC). Measuring the uncertainty associated with PC factor estimates should be part of interpreting them. However, the asymptotic distribution of PC factors may not be an appropriate approximation to the finite-sample distribution for the sample sizes and cross-sectional dimensions usually encountered in practice. The main problem is that parameter uncertainty is not taken into account. We show that several bootstrap procedures proposed in the context of DFMs with inference-related goals are not appropriate for measuring the uncertainty of PC factor estimates. In this paper, we propose an asymptotically valid subsampling procedure designed for this purpose. The finite-sample properties of the proposed procedure are analyzed and compared with those of the asymptotic and alternative extant bootstrap procedures. The results are illustrated empirically by obtaining confidence intervals for the underlying factor in a system of Spanish macroeconomic variables. Both authors acknowledge financial support from the Spanish Ministry of Education and Science, research project ECO2015-70331-C2-2-R.
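A stylized version of cross-sectional subsampling for PC factor uncertainty can be sketched as follows. The data-generating process, the subsample size, and the plain percentile band (which omits the rate correction a formal subsampling procedure requires) are all simplifying assumptions.

```python
import numpy as np

def first_pc(X):
    # Principal-components factor estimate: leading left singular vector, scaled.
    u, s, vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return u[:, 0] * s[0] / np.sqrt(X.shape[0])

rng = np.random.default_rng(1)
T, N = 100, 50
f_true = rng.normal(size=T)                    # latent factor
lam = rng.normal(size=N)                       # loadings
X = np.outer(f_true, lam) + rng.normal(size=(T, N))

f_full = first_pc(X)                           # full-sample PC factor
draws = []
b = 30                                         # cross-sectional subsample size
for _ in range(200):
    cols = rng.choice(N, size=b, replace=False)
    f_sub = first_pc(X[:, cols])               # re-estimate on a subsample
    if np.corrcoef(f_sub, f_full)[0, 1] < 0:   # fix PC sign indeterminacy
        f_sub = -f_sub
    draws.append(f_sub)
draws = np.array(draws)
lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)  # pointwise 95% band
```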

    Portable Instrument for Hemoglobin Determination Using Room-Temperature Phosphorescent Carbon Dots

    A portable reconfigurable platform for hemoglobin determination, based on inner-filter quenching of room-temperature phosphorescent carbon dots (CDs) in the presence of H2O2, is described. The electronic setup consists of a light-emitting diode (LED) as the carbon-dot optical exciter and a photodiode as a light-to-current converter, integrated in the same instrument. The reconfigurable design makes the platform adaptable for use as an analytical probe with CDs from different batches that vary somewhat in their luminescence characteristics. The variables of the reaction, such as pH, reagent concentrations, and response time, were optimized, as were the variables of the portable device, such as LED voltage, photodiode sensitivity, and the adjustment of the measuring range through the reconfigurable electronic system. The portable device allowed the determination of hemoglobin with good sensitivity, with a detection limit of 6.2 nM and a range up to 125 nM. MINECO (Spain) CTQ2016-78754-C2-1-R; European Union (EU)
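A detection limit like the 6.2 nM reported here is typically obtained from a linear calibration together with the common 3σ/slope convention. The sketch below uses made-up calibration numbers, not the paper's data:

```python
import numpy as np

# Hypothetical calibration: phosphorescence quenching vs Hb concentration.
conc = np.array([0, 10, 25, 50, 75, 100, 125], dtype=float)   # nM (assumed)
signal = np.array([0.0, 7.8, 20.1, 41.0, 59.5, 80.2, 99.0])   # quenching, a.u.

slope, intercept = np.polyfit(conc, signal, 1)  # least-squares calibration line
blank_sd = 1.6              # std. dev. of repeated blank readings (assumed)
lod = 3 * blank_sd / slope  # common 3-sigma definition of the detection limit
```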

    Novel methodology for detecting and localizing cancer area in histopathological images based on overlapping patches

    This work has been partially supported by Project PID2021-128317OB-I00, funded by MCIN/AEI/10.13039/501100011033 and “ERDF A way of making Europe”. Funding for open access charge: Universidad de Granada / CBUA. All authors approved the final version of the manuscript to be published. Cancer is one of the most important pathologies in the world: it causes the death of millions of people, and in most cases it cannot be cured. Rapid spread is one of the most important features of this disease, so many efforts are focused on its early-stage detection and localization. Medicine has made numerous advances in recent decades with the help of artificial intelligence (AI), reducing costs and saving time. In this paper, deep learning (DL) models are used to present a novel method for detecting and localizing cancerous zones in whole-slide images (WSI), using overlapping tissue patches to improve performance. A novel overlapping methodology is proposed and discussed, together with different alternatives for evaluating the labels of patches that overlap the same zone, in order to improve detection performance. The goal is to strengthen the labeling of different areas of an image by testing multiple overlapping patches. The results show that the proposed method improves on the traditional framework and provides a different approach to cancer detection. The proposed method, based on applying 3x3 average pooling filters with stride 2 to overlapping patch labels, yields better results, correcting 12.9% of misclassified patches on the HUP dataset and 15.8% on the CINIJ dataset. In addition, a filter is implemented to correct isolated patches that were also misclassified. Finally, a study of the CNN decision threshold is performed to analyze the impact of the threshold value on the accuracy of the model. Altering the decision threshold, together with the filter for isolated patches and the proposed method for overlapping patches, corrects about 20% of the patches that are mislabeled by the traditional method. As a whole, the proposed method achieves an accuracy of 94.6%. MCIN/AEI/10.13039/501100011033/PID2021-128317OB-I00; ERDF A way of making Europe; Universidad de Granada / CBUA
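The label-pooling step can be illustrated directly: average the per-patch predictions in each 3x3 neighbourhood with stride 2, threshold the result, and separately flip isolated patches contradicted by all eight neighbours. This numpy sketch is a simplified reading of the method, not the authors' code:

```python
import numpy as np

def pool_labels(prob_grid):
    """3x3 average pooling with stride 2 over per-patch cancer probabilities,
    pooling the evidence of patches that overlap the same zone."""
    h, w = prob_grid.shape
    out = np.zeros(((h - 3) // 2 + 1, (w - 3) // 2 + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = prob_grid[2*i:2*i+3, 2*j:2*j+3].mean()
    return out

def fix_isolated(labels):
    """Flip interior patches whose 8 neighbours all carry the opposite label."""
    fixed = labels.copy()
    for i in range(1, labels.shape[0] - 1):
        for j in range(1, labels.shape[1] - 1):
            neigh = labels[i-1:i+2, j-1:j+2].sum() - labels[i, j]
            if labels[i, j] == 1 and neigh == 0:
                fixed[i, j] = 0
            elif labels[i, j] == 0 and neigh == 8:
                fixed[i, j] = 1
    return fixed

probs = np.zeros((9, 9))
probs[3:7, 3:7] = 0.9                 # a tumour blob of patches, plus...
probs[1, 1] = 0.95                    # ...one spurious misclassified patch
pooled = (pool_labels(probs) >= 0.5).astype(int)   # overlap-pooled labels
labels = (probs >= 0.5).astype(int)                # plain per-patch labels
cleaned = fix_isolated(labels)                     # isolated-patch filter
```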

    Intelligent system based on genetic programming for atrial fibrillation classification

    This article focuses on the development of intelligent classifiers in the area of biomedicine, specifically on the problem of diagnosing cardiac diseases from the electrocardiogram (ECG), or more precisely, on differentiating between types of atrial fibrillation. We first study the ECG and its preprocessing in order to work with this specific pathology; to this end, we study different ways of removing, as well as possible, any activity not generated by the atria. We study and reproduce the ECG processing methodologies and the features extracted from the electrocardiograms by the researchers who obtained the best results in the PhysioNet Challenge, in which ECG recordings were classified according to the type of atrial fibrillation (AF) they showed. We extract a large number of features, partly those used by these researchers and partly additional features that we consider important for the distinction mentioned above. A new method based on evolutionary algorithms is used to select the most relevant features and to obtain a classifier capable of distinguishing the different types of this pathology.
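A minimal sketch of an evolutionary feature-selection wrapper, assuming a synthetic feature matrix and a toy class-separation fitness in place of the full classifier-based fitness such a method would normally evolve:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: 200 recordings x 20 features; only features 0-4 carry signal.
y = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 20))
X[:, :5] += y[:, None] * 1.5

def fitness(mask):
    """Class-mean separation over the selected features, penalised by subset
    size (a toy surrogate for training a classifier on each candidate subset)."""
    if mask.sum() == 0:
        return -np.inf
    sep = np.abs(X[y == 1][:, mask].mean(0) - X[y == 0][:, mask].mean(0)).mean()
    return sep - 0.01 * mask.sum()

pop = rng.random((30, 20)) < 0.5                   # random binary feature masks
for _ in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]        # truncation selection
    children = []
    for _ in range(30):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, 20)
        child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
        children.append(child ^ (rng.random(20) < 0.05))  # bit-flip mutation
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]   # evolved feature subset
```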

    Performance comparison between multi‑center histopathology datasets of a weakly‑supervised deep learning model for pancreatic ductal adenocarcinoma detection

    Background: Pancreatic ductal adenocarcinoma patients have a very poor prognosis, given the difficulty of early detection and the lack of early symptoms. Digital pathology is routinely used by pathologists to diagnose the disease. However, visually inspecting the tissue is a time-consuming task, which slows down the diagnostic procedure. With the advances made in the area of artificial intelligence, specifically with deep learning models, and the growing availability of public histology data, clinical decision support systems are being created. However, the generalization capabilities of these systems are not always tested, nor is the integration of publicly available datasets for pancreatic ductal adenocarcinoma (PDAC) detection. Methods: In this work, we explored the performance of two weakly-supervised deep learning models using the two most widely available datasets with pancreatic ductal adenocarcinoma histology images, The Cancer Genome Atlas (TCGA) and the Clinical Proteomic Tumor Analysis Consortium (CPTAC). In order to have sufficient training data, the TCGA dataset was integrated with the Genotype-Tissue Expression (GTEx) project dataset, which contains healthy pancreatic samples. Results: We showed that the model trained on CPTAC generalizes better than the one trained on the integrated dataset, obtaining an inter-dataset accuracy of 90.62% ± 2.32 and an outer-dataset accuracy of 92.17% when evaluated on TCGA + GTEx. Furthermore, we tested the performance on another dataset formed by tissue microarrays, obtaining an accuracy of 98.59%. We showed that the features learned on an integrated dataset do not differentiate between the classes but between the datasets, noting that stronger normalization may be needed when creating clinical decision support systems with datasets obtained from different sources. To mitigate this effect, we proposed training on the three available datasets, improving the detection performance and generalization capabilities over a model trained only on TCGA + GTEx and achieving performance similar to the model trained only on CPTAC. Conclusions: The integration of datasets in which both classes are present can mitigate the batch effect that arises when integrating datasets, improving classification performance and accurately detecting PDAC across different datasets. Spanish Ministry of Science, Innovation and Universities under Project PID2021-128317OB-I00; Junta de Andalucía P20-0016
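The batch-effect symptom described here (features separating datasets rather than classes) can be illustrated with a toy probe: compare how well a nearest-centroid rule recovers the dataset of origin versus the class label. All numbers below are synthetic assumptions, not the paper's features:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy features: a strong dataset-specific offset (batch effect) dominates a
# weaker class signal, mimicking the failure mode described above.
n = 200
dataset = rng.integers(0, 2, size=n)   # 0 = "TCGA-like", 1 = "CPTAC-like"
label = rng.integers(0, 2, size=n)     # 0 = healthy, 1 = tumour
X = rng.normal(size=(n, 30))
X += dataset[:, None] * 2.0            # batch shift across all features
X[:, :3] += label[:, None] * 0.5       # modest biological signal

def centroid_acc(X, y):
    """Nearest-centroid accuracy: a quick probe of linear separability."""
    c0, c1 = X[y == 0].mean(0), X[y == 1].mean(0)
    pred = np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)
    return (pred == y.astype(bool)).mean()

acc_dataset = centroid_acc(X, dataset)  # how separable are the datasets?
acc_class = centroid_acc(X, label)      # how separable are the classes?
```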

    Integration of RNA-Seq data with heterogeneous microarray data for breast cancer profiling

    Background: Nowadays, many public repositories containing large microarray gene expression datasets are available. However, microarray technology is less powerful and accurate than more recent next-generation sequencing technologies such as RNA-Seq. Even so, information from microarrays is reliable and robust, and it can be exploited by integrating microarray data with RNA-Seq data. Additionally, information extraction and the acquisition of a large number of samples with RNA-Seq still entail very high costs in terms of time and computational resources. This paper proposes a new model to find the gene signature of breast cancer cell lines through the integration of heterogeneous data from different breast cancer datasets obtained with microarray and RNA-Seq technologies. This data integration is expected to give the results a more robust statistical significance. Finally, a classification method is proposed to test the robustness of the differentially expressed genes when unseen data are presented for diagnosis. Results: The proposed data integration makes it possible to analyze gene expression samples coming from different technologies. The most significant genes of the whole integrated dataset were obtained as the intersection of three gene sets: the differentially expressed genes identified within the microarray data, within the RNA-Seq data, and within the integrated data from both technologies. This intersection reveals 98 possible technology-independent biomarkers. Two different heterogeneous datasets were used for the classification tasks: a training dataset for gene expression identification and classifier validation, and a test dataset with unseen data for testing the classifier. Both achieved high classification accuracy, confirming the validity of the obtained set of genes as possible biomarkers for breast cancer. Through a feature selection process, a final small subset of six genes was chosen for breast cancer diagnosis. Conclusions: This work proposes a novel data integration stage in the traditional gene expression analysis pipeline through the combination of heterogeneous data from microarray and RNA-Seq technologies. The available samples were successfully classified using a subset of six genes obtained by a feature selection method. Consequently, a new classification and diagnosis tool was built and its performance validated on previously unseen samples. This work was supported by Project TIN2015-71873-R (Spanish Ministry of Economy and Competitiveness -MINECO- and the European Regional Development Fund -ERDF).
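The intersection step reduces to set operations on the three lists of differentially expressed genes. A sketch with hypothetical gene lists (the real analysis yields 98 genes):

```python
# Hypothetical DEG lists standing in for the three analyses described above:
# microarray data alone, RNA-Seq data alone, and the integrated dataset.
deg_microarray = {"ESR1", "ERBB2", "MKI67", "AURKA", "BRCA1"}
deg_rnaseq = {"ESR1", "ERBB2", "MKI67", "PGR", "BRCA1"}
deg_integrated = {"ESR1", "ERBB2", "MKI67", "AURKA", "PGR", "BRCA1"}

# Technology-independent biomarker candidates: genes flagged by all three.
candidates = deg_microarray & deg_rnaseq & deg_integrated
```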

    Resistive Switching and Charge Transport in Laser-Fabricated Graphene Oxide Memristors: A Time Series and Quantum Point Contact Modeling Approach

    This work investigates the sources of resistive switching (RS) in recently reported laser-fabricated graphene oxide memristors by means of two numerical analysis tools: time-series statistical analysis and the Quantum Point Contact (QPC) conduction model. Both numerical procedures point to the existence of a filament connecting the electrodes that may be interrupted at a precise point within the conductive path, giving rise to the resistive switching phenomena. These results support the existing model, which attributes the memristance of laser-fabricated graphene oxide memristors to the modification of the stoichiometry of a conductive path inside the graphene oxide. The authors thank the support of the Spanish Ministry of Science, Innovation and Universities under projects TEC2017-89955-P, TEC2017-84321-C4-3-R, MTM2017-88708-P and PGC2018-098860-B-I00 (MCIU/AEI/FEDER, UE), and the predoctoral grant FPU16/01451.

    Flexible Laser-Reduced Graphene Oxide Thermistor for Ubiquitous Electronics

    This work presents a versatile sensing platform intended for ubiquitous and flexible electronics, based on a laser-reduced graphene oxide thermistor. The laser technique enables fast and ecological production of reduced graphene oxide without the need for masks or expensive lithography processes. The final transducer is fabricated on a flexible plastic substrate so that it can be used as a surface patch. Finally, a full demonstrator is presented that integrates this flexible thermistor with a low-power System-on-Chip with wireless transmission. This work has been partially supported by the Spanish Ministry of Education, Culture and Sport (MECD) through the pre-doctoral grant FPU16/01451, the National Excellence Research Project TEC2017-89955-P, and the University of Granada through the scholarship “Initiation to Research”.
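Converting a thermistor resistance reading to temperature is commonly done with the Beta approximation of the Steinhart-Hart model. The sketch below uses textbook NTC parameter values, not the characteristics of the laser-reduced graphene oxide device:

```python
import numpy as np

def ntc_temperature(r_ohm, r0=10_000.0, t0=298.15, beta=3950.0):
    """Beta-model inversion for an NTC thermistor:
    R(T) = R0 * exp(B * (1/T - 1/T0)), solved for T in degrees Celsius.
    r0, t0 and beta are typical textbook values (assumed, not the paper's)."""
    inv_t = 1.0 / t0 + np.log(r_ohm / r0) / beta
    return 1.0 / inv_t - 273.15

t25 = ntc_temperature(10_000.0)   # at R = R0 the model returns T0
```

For an NTC device, resistance falls as temperature rises, so readings below R0 map to temperatures above 25 C.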