
    Development of Neurofuzzy Architectures for Electricity Price Forecasting

    In the 20th century, many countries liberalized their electricity markets. This liberalization has exposed generation companies, as well as wholesale buyers, to far greater risk than under the old centralized framework. In this setting, electricity price prediction has become crucial to every market player's decision-making and strategic planning. In this study, a prototype asymmetric-based neuro-fuzzy network (AGFINN) architecture has been implemented for short-term electricity price forecasting in the ISO New England market. The AGFINN framework has been designed with two different defuzzification schemes. Fuzzy clustering has been explored as an initial step for defining the fuzzy rules, while an asymmetric Gaussian membership function has been utilized in the fuzzification part of the model. Results for the minimum and maximum electricity prices in ISO New England emphasize the superiority of the proposed model over well-established learning-based models.
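
    The abstract does not detail AGFINN's network layout or its two defuzzification schemes, but the asymmetric Gaussian membership function it mentions is easy to sketch. A minimal illustration in Python; the centre and spread values are invented for the example:

```python
import numpy as np

def asymmetric_gaussian(x, c, sigma_left, sigma_right):
    """Asymmetric Gaussian membership: a different spread on each side of the
    centre c, so the fuzzy set can skew toward low or high prices."""
    sigma = np.where(x < c, sigma_left, sigma_right)
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

# Membership of hourly prices (USD/MWh) in a hypothetical "moderate price" set
# that decays more slowly toward high prices (sigma_right > sigma_left).
prices = np.linspace(0, 200, 9)
mu = asymmetric_gaussian(prices, c=60.0, sigma_left=15.0, sigma_right=40.0)
for p, m in zip(prices, mu):
    print(f"price={p:6.1f}  membership={m:.3f}")
```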

    Evolving Ensemble Fuzzy Classifier

    The concept of ensemble learning offers a promising avenue for learning from data streams in complex environments, because it addresses the bias-variance dilemma better than a single-model counterpart and features a reconfigurable structure well suited to the given context. While various extensions of ensemble learning for mining non-stationary data streams can be found in the literature, most are built on a static base classifier and revisit preceding samples in a sliding window for retraining. This makes them computationally prohibitive and not flexible enough to cope with rapidly changing environments: they typically maintain a large collection of offline classifiers, lack a mechanism for reducing structural complexity, and have no online feature selection. A novel evolving ensemble classifier, the Parsimonious Ensemble (pENsemble), is proposed in this paper. pENsemble differs from existing architectures in that it is built upon an evolving classifier for data streams, the Parsimonious Classifier (pClass). pENsemble is equipped with an ensemble pruning mechanism that estimates a localized generalization error of each base classifier, and a dynamic online feature selection scenario is integrated into it, allowing input features to be selected and deselected on the fly. pENsemble adopts a dynamic ensemble structure to output the final classification decision and features a novel drift detection scenario to grow the ensemble structure. The efficacy of pENsemble has been demonstrated through rigorous numerical studies on dynamic and evolving data streams, where it delivers the most encouraging performance in attaining a tradeoff between accuracy and complexity. (This paper has been published in IEEE Transactions on Fuzzy Systems.)
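
    Neither pClass nor pENsemble's localized generalization-error criterion is specified in the abstract, so the sketch below only illustrates the general pattern of an evolving ensemble on a stream: test-then-train updates, growth on a crude drift signal, and pruning of weak members. scikit-learn's SGDClassifier stands in for the base learner, and all thresholds are invented:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

class ToyEvolvingEnsemble:
    """Illustrative evolving ensemble: grow when even the best member looks
    inaccurate, prune members that fall clearly behind the best one."""

    def __init__(self, classes, grow_threshold=0.35, prune_margin=0.2):
        self.classes = classes
        self.grow_threshold = grow_threshold  # best-member error that triggers growth
        self.prune_margin = prune_margin      # tolerated gap to the best member
        self.members, self.errors = [], []

    def _grow(self):
        self.members.append(SGDClassifier())
        self.errors.append(0.5)               # pessimistic prior error estimate

    def learn_one(self, x, y):
        """Prequential (test-then-train) update on a single sample."""
        x = np.asarray(x).reshape(1, -1)
        if not self.members:
            self._grow()
        for i, m in enumerate(self.members):
            try:
                err = float(m.predict(x)[0] != y)
            except Exception:                 # member has not been fitted yet
                err = 1.0
            self.errors[i] = 0.95 * self.errors[i] + 0.05 * err
            m.partial_fit(x, [y], classes=self.classes)
        if min(self.errors) > self.grow_threshold:
            self._grow()                      # suspect drift: add a fresh member
        if len(self.members) > 1:             # prune clearly inferior members
            best = min(self.errors)
            keep = [i for i, e in enumerate(self.errors) if e <= best + self.prune_margin]
            self.members = [self.members[i] for i in keep]
            self.errors = [self.errors[i] for i in keep]

# A two-dimensional stream whose decision boundary flips at t = 1000.
rng = np.random.default_rng(0)
ens = ToyEvolvingEnsemble(classes=[0, 1])
for t in range(2000):
    x = rng.normal(size=2)
    y = int((x[0] if t < 1000 else -x[0]) > 0)
    ens.learn_one(x, y)
print("ensemble size after the stream:", len(ens.members))
```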

    Proposed algorithm for image classification using regression-based pre-processing and recognition models

    Image classification algorithms can categorise pixels according to image attributes after pre-processing the learner's training samples. Precision and classification accuracy are difficult to compute because of variable pixel counts (differing image widths and heights) and the numerous characteristics of each image. This research proposes an image classification algorithm based on regression-based pre-processing and recognition models. The proposed algorithm focuses on optimizing pre-processing results such as accuracy and precision. For evaluation and validation, a recognition model is mapped to cluster the digital images, which pose a multidimensional state-space problem. Simulation results show that, compared to existing algorithms, the proposed method achieves better precision and classification accuracy and yields a higher matching percentage in image analytics.
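
    The abstract does not specify which regression is used in pre-processing, so the following sketch shows only one plausible reading: fit a linear intensity trend over pixel coordinates for each image, keep the residual detail, and feed it to a standard recognizer (here k-nearest neighbours on scikit-learn's small digits set). The pipeline, not the specific choices, is the point:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def regression_preprocess(images):
    """Per image: regress intensity on pixel coordinates, keep the residual
    (detail minus a smooth illumination-like trend)."""
    n, h, w = images.shape
    yy, xx = np.mgrid[0:h, 0:w]
    coords = np.column_stack([xx.ravel(), yy.ravel()])  # (h*w, 2)
    out = np.empty((n, h * w))
    for i, img in enumerate(images):
        trend = LinearRegression().fit(coords, img.ravel())
        out[i] = img.ravel() - trend.predict(coords)
    return out

digits = load_digits()
X = regression_preprocess(digits.images)
Xtr, Xte, ytr, yte = train_test_split(X, digits.target, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
print("held-out accuracy:", round(clf.score(Xte, yte), 3))
```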

    Qualitative Comparative Analysis as a Tool for Concept Clarification, Typology Building, and Contextualized Comparisons in Gender and Feminist Research

    Qualitative Comparative Analysis (QCA) is a method for the systematic analysis of cases. A holistic view of cases and an approach to causality that emphasizes complexity are among its core features. Over the last decades, QCA has found application in many fields of the social sciences. In spite of this, its uptake in feminist research has been slower; only recently has QCA been applied to topics such as social care, the political representation of women, and reproductive politics. Feminist researchers still privilege qualitative methods, in particular case studies, and are often sceptical of quantitative techniques (Spierings 2012). These studies show that the meaning and measurement of many gender concepts differ across countries and that the factors leading to feminist success and failure are context-specific. However, this scholarship struggles to systematically account for the ways in which these forces operate in different locations. The aim of this article is to demonstrate that QCA and related techniques can enhance comparative analysis in ways that align with core ideas in gender and feminist studies. I begin by describing the main principles of QCA as a research strategy. The following sections draw on recent contributions to the comparative social policy and politics literature to illustrate how QCA is used to deal with concept clarification and measurement, policy complexity, the presence of hybrids, the development of normative types, and context-sensitive causal analysis. Finally, the article concludes by discussing promising avenues for future applications of QCA in feminist research.
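
    QCA's basic device, the truth table, is simple to compute. The sketch below uses crisp (0/1) codings with entirely hypothetical conditions and cases; real applications would also assess coverage and apply Boolean minimization to the consistent rows:

```python
from itertools import product

# Hypothetical conditions: strong women's movement (M), left government (L),
# state feminist agency (A); outcome: policy adoption (P).
cases = [
    # (M, L, A, P)
    (1, 1, 1, 1), (1, 1, 0, 1), (1, 0, 1, 1),
    (0, 1, 1, 0), (0, 1, 0, 0), (1, 0, 0, 0), (0, 0, 0, 0),
]

# Truth table: for each configuration of conditions, the share of its cases
# showing the outcome ("consistency"); empty rows are logical remainders.
for config in product((0, 1), repeat=3):
    members = [c for c in cases if c[:3] == config]
    if members:
        consistency = sum(c[3] for c in members) / len(members)
        print(f"M,L,A={config}  n={len(members)}  consistency={consistency:.2f}")
```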

    DATA MINING: A SEGMENTATION ANALYSIS OF U.S. GROCERY SHOPPERS

    Consumers make choices about where to shop based on their preferences for a shopping environment and experience as well as the selection of products at a particular store. This study illustrates how retail firms and marketing analysts can utilize data mining techniques to better understand customer profiles and behavior. Among the key areas where data mining can produce new knowledge is the segmentation of customer databases according to demographics, buying patterns, geographics, attitudes, and other variables. This paper builds profiles of grocery shoppers based on their preferences for 33 retail grocery store characteristics. The data are from a representative, nationwide sample of 900 supermarket shoppers collected in 1999. Six customer profiles are found to exist: (1) "Time Pressed Meat Eaters", (2) "Back to Nature Shoppers", (3) "Discriminating Leisure Shoppers", (4) "No Nonsense Shoppers", (5) "The One Stop Socialites", and (6) "Middle of the Road Shoppers". Each profile is described with respect to its underlying demographics and income. Consumer shopping segments cut across most demographic groups but are somewhat correlated with income. Hierarchical lists of preferences reveal that low price is not among the top five most important store characteristics. Regarding experience with and preferences for internet shopping, of the 44% of shoppers with internet access, only 3% had used it to order food.
    Subjects: Consumer/Household Economics; Food Consumption/Nutrition/Food Safety
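
    The paper's survey data are not available here, but the segmentation step it describes is a standard clustering exercise. A sketch on synthetic ratings (the planted profiles, scales, and item indices are invented; the paper does not state which algorithm it used):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the 1999 survey: 900 shoppers rating 33 store
# characteristics on a 1-5 scale, with six planted preference profiles.
rng = np.random.default_rng(1)
profiles = rng.normal(0.0, 1.0, size=(6, 33))
true_segment = rng.integers(0, 6, size=900)
ratings = np.clip(np.round(3 + profiles[true_segment] + rng.normal(0, 0.7, (900, 33))), 1, 5)

X = StandardScaler().fit_transform(ratings)
km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X)  # six profiles, as in the paper

# Describe each segment by its three most distinctive (highest z-score) items.
for k in range(6):
    top = np.argsort(km.cluster_centers_[k])[-3:][::-1]
    print(f"segment {k}: n={np.sum(km.labels_ == k):3d}, most-preferred items={top.tolist()}")
```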

    Conditioning the Estimating Ultimate Recovery of Shale Wells to Reservoir and Completion Parameters

    In recent years, gas production from shale has increased significantly in the United States. Consequently, many studies have focused on shale formations in areas such as fracturing, reservoir simulation, and forecasting. Forecasting production, or estimating ultimate recovery (EUR), is one of the most important items in production development planning. The certainty of the EUR calculation is questionable because different parameters impact production, and consequently the EUR, such as rock properties and well completion design.
    Different methods of calculating EUR have been used in the industry. Traditionally, the decline curve analysis method of Arps (1945) was the most common tool for estimating ultimate recovery and reserves. However, the Arps equations overestimate reserves when applied to unconventional reservoirs (extremely low permeability formations), because they only hold for boundary-dominated flow (BDF) decline. Many research papers show that production from unconventional tight reservoirs is instead distinguished by an extended period of late transient flow before boundary-dominated flow is reached. To overcome these problems and improve production forecasts for unconventional reservoirs, researchers have developed new empirical methods that apply to all flow regimes.
    In this research, these new and traditional methods are applied to calculate the EUR for more than 200 shale wells. The resulting EURs are then studied and conditioned on rock properties, well characteristics, and completion design parameters. Porosity, total organic carbon, net thickness, and water saturation are the main rock properties considered. Furthermore, the impact of different well design configurations (for instance, well trajectories and completion and hydraulic fracturing variables) on EUR is inspected, and the study determines whether reservoir or completion parameters have the greater impact on EUR. This work provides natural gas professionals insight and clarification regarding the effects of rock properties and well design configurations on the estimated ultimate recovery of shale gas wells.
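
    The Arps relations the abstract refers to are standard and easy to state: the rate declines as q(t) = qi / (1 + b·Di·t)^(1/b), with b = 0 (exponential) and b = 1 (harmonic) as limiting cases. A small sketch with hypothetical well parameters, which also illustrates the abstract's point that Arps-type extrapolation inflates EUR when b is pushed above 1 to mimic transient flow:

```python
import numpy as np

def arps_rate(t, qi, di, b):
    """Arps decline-curve rate: exponential (b = 0), hyperbolic (0 < b < 1),
    harmonic (b = 1). Fits to transient shale data often force b > 1."""
    if b == 0:
        return qi * np.exp(-di * t)
    return qi / (1.0 + b * di * t) ** (1.0 / b)

def eur(qi, di, b, t_end, q_limit):
    """EUR as cumulative production to t_end, ignoring rates below the
    economic limit (trapezoidal integration)."""
    t = np.linspace(0.0, t_end, 100_000)
    q = arps_rate(t, qi, di, b)
    q = np.where(q >= q_limit, q, 0.0)
    return np.sum(0.5 * (q[1:] + q[:-1]) * np.diff(t))

# Hypothetical well: qi = 5,000 Mscf/d, di = 0.005 1/d, 30-year horizon,
# 50 Mscf/d economic limit. Note how EUR inflates as b grows.
for b in (0.0, 0.5, 0.9, 1.4):
    print(f"b = {b:3.1f}  EUR = {eur(5000.0, 0.005, b, 30 * 365.0, 50.0) / 1e6:6.2f} Bcf")
```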

    Developmental constraints on vertebrate genome evolution

    Constraints in embryonic development are thought to bias the direction of evolution by making some changes less likely, and others more likely, depending on their consequences for ontogeny. Here, we characterize the constraints acting on genome evolution in vertebrates. We used gene expression data from two vertebrates: zebrafish, using a microarray experiment spanning 14 stages of development, and mouse, using EST counts for 26 stages of development. We show that, in both species, genes expressed early in development (1) show a more dramatic effect of knock-out or mutation and (2) are more likely to revert to single copy after whole-genome duplication, relative to genes expressed late. This supports strong constraints on early stages of vertebrate development, making them less open to innovations (gene gain or loss). The results are robust to different sources of data (gene expression from microarrays, ESTs, or in situ hybridizations; mutants from directed knock-outs, transgenic insertions, point mutations, or morpholinos). We determine the pattern of these constraints, which differs from the model used to describe vertebrate morphological conservation (the "hourglass" model). While morphological constraints reach a maximum at mid-development (the "phylotypic" stage), genomic constraints appear to decrease monotonically over developmental time.
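
    The paper's comparisons cannot be reproduced without its expression and mutant datasets, but the shape of the core test is simple: do genes first expressed early show stronger knock-out effects and more frequent reversion to single copy? A sketch on synthetic gene annotations, with enrichment rates invented to mimic the reported direction of the effect (the paper's own statistics may differ):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
n = 1000
stage = rng.integers(1, 15, size=n)   # stage of first expression (1 = earliest)
early = stage <= 5

# Hypothetical binary annotations per gene, enriched among early genes.
lethal_ko = rng.random(n) < np.where(early, 0.45, 0.25)
single_copy = rng.random(n) < np.where(early, 0.70, 0.50)

for label, trait in [("lethal knock-out", lethal_ko), ("reverted to single copy", single_copy)]:
    # One-sided test: are trait-positive genes expressed at earlier stages?
    stat, p = mannwhitneyu(stage[trait], stage[~trait], alternative="less")
    print(f"{label}: early-gene rate={trait[early].mean():.2f}, "
          f"late-gene rate={trait[~early].mean():.2f}, U-test p={p:.2g}")
```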

    Preschool and maternal labour market outcomes: evidence from a regression discontinuity design

    Expanding preschool education has the dual goals of improving child outcomes and strengthening work incentives for mothers. This paper provides evidence on the second goal, identifying the impact of preschool attendance on maternal labor market outcomes in Argentina. A major challenge in identifying the causal effect of preschool attendance on parental outcomes is non-random selection into early education. We address this by relying on plausibly exogenous variation in preschool attendance induced when children are born on either side of Argentina's enrollment cutoff date of July 1. Because of the enrollment cutoff, 4-year-olds born just before July 1 are about 0.3 more likely to attend preschool. Our regression-discontinuity estimates compare maternal employment outcomes for 4-year-old children on either side of this cutoff, identifying effects among the subset of complying households (who are perhaps more likely to face constraints on their level of preschool attendance).
    Our findings suggest that, on average, 13 mothers start to work for every 100 youngest children in the household who start preschool (though, in our preferred specification, this estimate is not statistically significant at conventional levels). Furthermore, mothers are 19.1 percentage points more likely to work more than 20 hours a week (i.e., more time than their children spend in school), and they work, on average, 7.8 more hours per week as a consequence of their youngest offspring attending preschool. We find no effect on maternal labor outcomes when a child who is not the youngest in the household attends preschool. Finally, we find that some employment effects persist at the point of transition from kindergarten to primary school.
    Our preferred estimates condition on mothers' schooling and other exogenous covariates, given evidence that mothers' schooling is unbalanced in the vicinity of the July 1 cutoff in the sample of 4-year-olds. Using a large set of natality records, we found no evidence that this is due to precise manipulation of birth dates by parents. Other explanations, such as sample selection, are also not fully consistent with the data, and we must remain agnostic on this point. Despite this shortcoming, the credibility of the estimates is partly enhanced by the consistency of the point estimates with Argentine research using a different EPH sample and other sources of variation in preschool attendance (Berlinski and Galiani 2007). A growing body of research suggests that pre-primary school can improve educational outcomes for children in the short and long run (Blau and Currie 2006; Schady 2006). This paper provides further evidence that, ceteris paribus, an expansion of preschool education may enhance the employment prospects of mothers of preschool-age children.
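
    In its simplest sharp form, a regression-discontinuity estimate of this kind reduces to comparing local linear fits on either side of the cutoff. A minimal sketch on synthetic data; the jump size, bandwidths, and functional form are invented, and the paper's actual fuzzy design would additionally instrument attendance with cutoff eligibility:

```python
import numpy as np

# Synthetic data: birthdate relative to the July 1 cutoff as the running
# variable, maternal employment as the outcome, with a planted 0.13 jump.
rng = np.random.default_rng(3)
n = 5000
days = rng.uniform(-180, 180, n)     # days born before (<0) / after (>=0) the cutoff
eligible = days < 0                  # born before July 1: old enough to enrol
employed = (0.35 + 0.13 * eligible + 0.0003 * days + rng.normal(0, 0.4, n)) > 0.5

def rd_estimate(x, y, bandwidth):
    """Difference of local linear fits evaluated at the cutoff (uniform kernel)."""
    def fit_at_zero(mask):
        coef = np.polyfit(x[mask], y[mask].astype(float), 1)
        return np.polyval(coef, 0.0)
    left = (x < 0) & (x > -bandwidth)
    right = (x >= 0) & (x < bandwidth)
    return fit_at_zero(left) - fit_at_zero(right)  # eligible minus ineligible side

for h in (30, 60, 120):
    print(f"bandwidth {h:3d} days: estimated jump = {rd_estimate(days, employed, h):.3f}")
```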

    What are the true clusters?

    Constructivist philosophy and Hasok Chang's active scientific realism are used to argue that the idea of "truth" in cluster analysis depends on the context and the clustering aims. Different characteristics of clusterings are required in different situations. Researchers should be explicit about what requirements and what idea of "true clusters" their research is based on, because clustering becomes scientific not through uniqueness but through transparent and open communication. The idea of "natural kinds" is a human construct, but it highlights the human experience that the reality outside the observer's control seems to make certain distinctions between categories inevitable. Various desirable characteristics of clusterings and various approaches to defining a context-dependent truth are listed, and I discuss what impact these ideas can have on the comparison of clustering methods, and on the choice of a clustering method and related decisions in practice.
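
    The point that different clustering aims legitimize different "truths" can be made concrete with internal validation indices, which themselves encode different desiderata. A small illustration on synthetic data; the choice of indices and methods here is arbitrary and not taken from the paper:

```python
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score, silhouette_score

# Four blobs with unequal spreads: different methods carve them up differently.
X, _ = make_blobs(n_samples=500, centers=4,
                  cluster_std=[1.0, 1.0, 2.5, 2.5], random_state=0)

for name, model in [("k-means", KMeans(n_clusters=4, n_init=10, random_state=0)),
                    ("single-linkage", AgglomerativeClustering(n_clusters=4, linkage="single"))]:
    labels = model.fit_predict(X)
    # silhouette: higher is better; Davies-Bouldin: lower is better.
    print(f"{name:15s} silhouette={silhouette_score(X, labels):.3f}  "
          f"Davies-Bouldin={davies_bouldin_score(X, labels):.3f}")
```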

    3rd Workshop in Symbolic Data Analysis: book of abstracts

    This workshop is the third regular meeting of researchers interested in Symbolic Data Analysis. The main aim of the event is to foster the meeting of people and the exchange of ideas from different fields - Mathematics, Statistics, Computer Science, Engineering, and Economics, among others - that contribute to Symbolic Data Analysis.