733 research outputs found

    Electric consumption pattern from big data

    Within the smart grid concept, reaching an efficient and reliable network involves several stages and sub-stages, each with a defined and specific mission. The intelligent measurement stage, formed by the smart meters, obtains electrical consumption information from users or consumers (residential, commercial, and industrial). For this purpose, a smart metering infrastructure built on wireless telecommunications and fiber optics has been deployed to guarantee connectivity between the smart meters and the central office of the electric companies. This paper describes the use of MapReduce as a technique for extracting load-curve information in a timely manner, in order to obtain trends and statistics related to residential electricity consumption patterns.
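    The load-curve aggregation described above maps naturally onto the MapReduce pattern. The sketch below is illustrative only, not the paper's implementation: the record format, the hourly grouping key, and the averaging step are all assumptions.

```python
from collections import defaultdict

# Hypothetical smart-meter readings: (meter_id, hour_of_day, kWh).
readings = [
    ("m1", 0, 0.4), ("m1", 1, 0.3), ("m2", 0, 0.6),
    ("m2", 1, 0.5), ("m1", 0, 0.5), ("m2", 1, 0.4),
]

def map_phase(records):
    # Map: emit (hour, kWh) key-value pairs for each reading.
    for _, hour, kwh in records:
        yield hour, kwh

def reduce_phase(pairs):
    # Reduce: aggregate consumption per hour to build the load curve.
    totals = defaultdict(float)
    counts = defaultdict(int)
    for hour, kwh in pairs:
        totals[hour] += kwh
        counts[hour] += 1
    # Average consumption per hour across all meters.
    return {h: totals[h] / counts[h] for h in totals}

load_curve = reduce_phase(map_phase(readings))
print(load_curve)
```

    In a real deployment the map and reduce phases would run distributed over the metering data set; the single-process version above only shows the shape of the computation.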

    Efficiency of mining algorithms in academic indicators

    Data mining is the process of analyzing data using automated methodologies to find hidden patterns [1]. Data mining processes aim to use the datasets generated by a process or business in order to obtain information that supports decision making at executive levels [2][3], automating the discovery of predictive information in large databases and answering questions that traditionally required intensive manual analysis [4]. By definition, data mining is applicable to educational processes; one example is the emergence of a research branch named Educational Data Mining, in which pattern- and prediction-search techniques are used to find information that contributes to improving educational quality [5]. This paper presents a performance study of two data mining algorithms, Decision Tree and Logistic Regression, applied to data generated by the academic function of a higher education institution.
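    The two algorithm families compared in the study can be sketched side by side on toy data. The sketch below is not the paper's experiment: the features (attendance rate, prior GPA), labels, and the one-level "stump" standing in for a full decision tree are all assumptions for illustration.

```python
import math

# Hypothetical student records: (attendance_rate, prior_gpa) -> passed (1/0).
X = [(0.9, 3.5), (0.8, 3.0), (0.4, 2.0), (0.3, 1.8), (0.7, 2.8), (0.2, 1.5)]
y = [1, 1, 0, 0, 1, 0]

def stump_predict(x, feature=0, threshold=0.5):
    # A one-level decision tree ("stump"): split on a single feature.
    return 1 if x[feature] >= threshold else 0

def train_logistic(X, y, lr=0.5, epochs=2000):
    # Plain stochastic gradient descent on the logistic loss.
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def logistic_predict(x, w, b):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if z >= 0 else 0

w, b = train_logistic(X, y)
stump_acc = sum(stump_predict(x) == t for x, t in zip(X, y)) / len(y)
logit_acc = sum(logistic_predict(x, w, b) == t for x, t in zip(X, y)) / len(y)
print(stump_acc, logit_acc)
```

    A performance study like the paper's would compare such models on held-out data and training cost, not training accuracy alone.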

    Forecast of operational data in electric energy plants using adaptive algorithm

    Traditional time series methods offer models whose parameters remain constant over time. However, industrial supply and demand processes require timely decisions based on a dynamic reality: a change in configuration, or turning a production line or process off or on, modifies the problem and the variables to be predicted. Decision support systems must adapt dynamically in order to respond quickly and appropriately to operations and their processes. This methodology is based on obtaining, for each period, the model that best fits the data, evaluating many alternatives and using statistical learning techniques. In this way, the model adapts to the data in practice and decisions are made based on experience. Over three months of testing on the estimation of variables associated with supply and demand processes, predictions were obtained that differ from the measured value by less than 8 hundredths (0.08), i.e., less than 0.1%. This indicates that data science and statistical learning represent an important area of research for variable prediction and process optimization.
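    The per-period model selection described above can be sketched as a backtesting loop: at each step, several candidate forecasters are scored on a recent window and the best one issues the next prediction. The candidate models, window size, and error metric below are assumptions, not the paper's method.

```python
# Candidate forecasters: each maps a history window to a one-step prediction.
def last_value(history):
    return history[-1]

def window_mean(history):
    return sum(history) / len(history)

def linear_trend(history):
    # Naive trend extrapolation: last value plus the mean first difference.
    diffs = [b - a for a, b in zip(history, history[1:])]
    return history[-1] + sum(diffs) / len(diffs)

CANDIDATES = [last_value, window_mean, linear_trend]

def adaptive_forecast(series, window=4):
    """For each period, refit: score each candidate on its one-step-ahead
    error over the last `window` steps, then let the winner predict."""
    preds = []
    for t in range(window + 2, len(series)):
        history = series[:t]

        def backtest_error(model):
            errs = [abs(model(history[:s]) - history[s])
                    for s in range(t - window, t)]
            return sum(errs) / window

        best = min(CANDIDATES, key=backtest_error)
        preds.append(best(history))
    return preds

# A steadily rising demand signal: the trend model should win the backtest.
series = [10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0]
print(adaptive_forecast(series))
```

    When the process regime changes (e.g. a line is switched off), the backtest error of the previously winning model rises and a different candidate takes over, which is the adaptive behaviour the abstract describes.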

    Method for the retrieval of images in rice-grain databases from visual content

    This paper presents a method for detecting and identifying defects in polished rice grains from their scanned image using an expert system. The sample used is designed to contain specimens with the most common defects. Digital image processing techniques were used to identify different types of visible defects in rice grains that affect the quality of the sample. The proposed method has advantages over manual identification, such as reduced analysis times, repeatability of results, elimination of subjectivity in identification, recording and storage of information, use of easily accessible equipment, and relatively low cost.
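    A minimal building block of such a pipeline is threshold-based segmentation of dark (discoloured or damaged) regions. The toy image, threshold, and 10% rejection rule below are illustrative assumptions, not the paper's calibrated values.

```python
# Toy 8-bit grayscale image of a single polished grain (hypothetical values):
# bright pixels ~ sound grain surface, dark pixels ~ discoloration/damage.
grain = [
    [240, 238, 235, 241],
    [237,  60,  55, 236],
    [239,  58, 234, 240],
    [242, 239, 238, 237],
]

def defect_fraction(image, threshold=128):
    # Classic fixed-threshold segmentation: a pixel darker than the
    # threshold is counted as belonging to a defect region.
    dark = sum(1 for row in image for px in row if px < threshold)
    total = sum(len(row) for row in image)
    return dark / total

def classify(image, max_defect=0.10):
    # Reject the grain if defects cover more than 10% of its area.
    return "defective" if defect_fraction(image) > max_defect else "sound"

print(defect_fraction(grain), classify(grain))
```

    A production system would add grain segmentation from the scanner background, per-defect-type rules in the expert system, and storage of the results, as the abstract indicates.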

    Classification of digitized documents applying neural networks

    The exponential increase of the information available in digital format during recent years, and the expectations of future growth, make it necessary to organize information in order to improve search and access to relevant data. For this reason, it is important to research and implement an automatic text classification system that organizes documents into their corresponding categories by using neural networks with supervised learning. In this way, a faster process can be carried out in a timely and cost-efficient way. The criteria for classifying documents are based on the defined categories.
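    The simplest supervised neural classifier for this task is a single sigmoid unit over bag-of-words features. The corpus, categories, and training settings below are made-up illustrative assumptions; the paper's network and categories are not specified here.

```python
import math

# Tiny hypothetical corpus: label 1 = "finance", 0 = "sports".
docs = [
    ("market shares rise on bank profit", 1),
    ("bank stocks and bond market rally", 1),
    ("team wins the final match", 0),
    ("coach praises team after match", 0),
]

vocab = sorted({w for text, _ in docs for w in text.split()})

def bow(text):
    # Bag-of-words vector over the fixed vocabulary.
    words = text.split()
    return [float(words.count(w)) for w in vocab]

def train(samples, lr=0.5, epochs=300):
    # One sigmoid unit trained by gradient descent: the minimal
    # supervised neural network for binary document classification.
    w = [0.0] * len(vocab)
    b = 0.0
    for _ in range(epochs):
        for text, label in samples:
            x = bow(text)
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - label
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(text, w, b):
    z = sum(wi * xi for wi, xi in zip(w, bow(text))) + b
    return 1 if z >= 0 else 0

w, b = train(docs)
print(predict("bank market report", w, b),
      predict("match result for the team", w, b))
```

    Multi-category classification generalizes this by using one output unit per defined category (softmax) and usually one or more hidden layers.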

    Interlaboratory Reproducibility in Growth and Reporter Expression in the Cyanobacterium Synechocystis sp. PCC 6803

    In recent years, a plethora of new synthetic biology tools for use in cyanobacteria have been published; however, their reported characterizations often cannot be reproduced, greatly limiting the comparability of results and hindering their applicability. In this interlaboratory study, the reproducibility of a standard microbiological experiment for the cyanobacterial model organism Synechocystis sp. PCC 6803 was assessed. Participants from eight different laboratories quantified the fluorescence intensity of mVENUS as a proxy for the transcription activity of the three promoters PJ23100, PrhaBAD, and PpetE over time. In addition, growth rates were measured to compare growth conditions between laboratories. By establishing strict and standardized laboratory protocols, reflecting frequently reported methods, we aimed to identify issues with state-of-the-art procedures and assess their effect on reproducibility. Significant differences in spectrophotometer measurements across laboratories from identical samples were found, suggesting that commonly used reporting practices of optical density values need to be supplemented by cell count or biomass measurements. Further, despite standardized light intensity in the incubators, significantly different growth rates between incubators used in this study were observed, highlighting the need for additional reporting requirements of growth conditions for phototrophic organisms beyond the light intensity and CO2 supply. Despite the use of a regulatory system orthogonal to Synechocystis sp. PCC 6803, PrhaBAD, and a high level of protocol standardization, ~32% variation in promoter activity under induced conditions was found across laboratories, suggesting that the reproducibility of other data in the field of cyanobacteria might be affected similarly.

    Towards precision medicine: defining and characterizing adipose tissue dysfunction to identify early immunometabolic risk in symptom-free adults from the GEMM family study

    Interactions between macrophages and adipocytes are early molecular factors influencing adipose tissue (AT) dysfunction, resulting in high leptin and low adiponectin circulating levels and low-grade metaflammation, leading to insulin resistance (IR) with increased cardiovascular risk. We report the characterization of AT dysfunction through measurements of the adiponectin/leptin ratio (ALR), the adipo-insulin resistance index (Adipo-IRi), fasting/postprandial (F/P) immunometabolic phenotyping, and direct F/P differential gene expression in AT biopsies obtained from symptom-free adults from the GEMM family study. AT dysfunction was evaluated through associations of the ALR with the F/P insulin-glucose axis, lipid-lipoprotein metabolism, and inflammatory markers. A relevant pattern of negative associations between decreased ALR and markers of systemic low-grade metaflammation, HOMA, and postprandial cardiovascular risk (hyperinsulinemic, triglyceride, and GLP-1 curves) was found. We also analysed plasma non-coding microRNA and shotgun lipidomics profiles, finding trends that may reflect a pattern of adipose tissue dysfunction in the fed and fasted state. Direct differential gene expression data showed initial patterns of AT molecular signatures of key immunometabolic genes involved in AT expansion, angiogenic remodelling, and immune cell migration. These data reinforce the central, early role of AT dysfunction at the molecular and systemic level in the pathogenesis of IR and immunometabolic disorders.

    Risk Portfolio Optimization Using the Markowitz MVO Model in Relation to Human Limitations in Predicting the Future, from the Perspective of the Al-Qur'an

    Risk portfolio management in modern finance has become increasingly technical, requiring the use of sophisticated mathematical tools in both research and practice. Since companies cannot insure themselves completely against risk, as humans are unable to predict the future precisely, as written in the Al-Qur'an, surah Luqman, verse 34, they have to manage it to yield an optimal portfolio. The objective here is to minimize the variance among all portfolios that attain at least a certain expected return, or alternatively, to maximize the expected return among all portfolios whose risk does not exceed a given level. This study focuses on optimizing the risk portfolio with the so-called Markowitz MVO (Mean-Variance Optimization) model. The theoretical framework for the analysis comprises the arithmetic mean, geometric mean, variance, covariance, linear programming, and quadratic programming. Finding a minimum-variance portfolio yields a convex quadratic program: minimizing the objective function x^T Q x subject to the constraints ÎŒ^T x ≄ r and Ax = b. The outcome of this research is the solution of the optimal risk portfolio for a set of investments, obtained using MATLAB R2007b software together with graphical analysis.
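    For two assets the quadratic program above has a closed-form solution, which makes the idea concrete without a QP solver. The variances and covariance below are made-up illustrative numbers, and this unconstrained two-asset case is only a special case of the general Markowitz problem.

```python
# Two-asset minimum-variance portfolio (closed form), a special case of
# the quadratic program  min x^T Q x  subject to  sum(x) = 1.
# var1, var2 are the asset return variances; cov is their covariance.
var1, var2, cov = 0.04, 0.09, 0.01  # hypothetical values

# Portfolio variance with weights (w, 1 - w):
#   V(w) = w^2*var1 + (1-w)^2*var2 + 2*w*(1-w)*cov
# Setting dV/dw = 0 gives the minimum-variance weight:
w1 = (var2 - cov) / (var1 + var2 - 2 * cov)
w2 = 1.0 - w1

port_var = w1**2 * var1 + w2**2 * var2 + 2 * w1 * w2 * cov
print(w1, w2, port_var)
```

    The resulting portfolio variance is below the variance of either asset alone, which is the diversification effect the mean-variance model formalizes; adding the expected-return constraint ÎŒ^T x ≄ r turns this into the full quadratic program solved in the study.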

    Development and validation of HERWIG 7 tunes from CMS underlying-event measurements

    This paper presents new sets of parameters (“tunes”) for the underlying-event model of the HERWIG7 event generator. These parameters control the description of multiple-parton interactions (MPI) and colour reconnection in HERWIG7, and are obtained from a fit to minimum-bias data collected by the CMS experiment at √s = 0.9, 7, and 13 TeV. The tunes are based on the NNPDF 3.1 next-to-next-to-leading-order parton distribution function (PDF) set for the parton shower, and either a leading-order or next-to-next-to-leading-order PDF set for the simulation of MPI and the beam remnants. Predictions utilizing the tunes are produced for event-shape observables in electron-positron collisions, and for minimum-bias, inclusive jet, top quark pair, and Z and W boson events in proton-proton collisions, and are compared with data. Each of the new tunes describes the data at a reasonable level, and the tunes using a leading-order PDF for the simulation of MPI provide the best description of the data.

    Measurement of prompt open-charm production cross sections in proton-proton collisions at √s = 13 TeV

    The production cross sections for prompt open-charm mesons in proton-proton collisions at a center-of-mass energy of 13 TeV are reported. The measurement is performed using a data sample collected by the CMS experiment corresponding to an integrated luminosity of 29 nb⁻Âč. The differential production cross sections of the D*±, D±, and D⁰ (D̄⁰) mesons are presented in the transverse momentum and pseudorapidity ranges 4 < pT < 100 GeV and |η| < 2.1, respectively. The results are compared to several theoretical calculations and to previous measurements.
    • 

    corecore