
    A Note on Batch and Incremental Learnability

    According to Gold's criterion of identification in the limit, a learner, presented with data about a concept, is allowed to make a finite number of incorrect hypotheses before converging to a correct hypothesis. If, on the other hand, the learner is allowed to make only one conjecture, which has to be correct, the resulting criterion of success is known as finite identification. Identification in the limit may be viewed as an idealized model for incremental learning, whereas finite identification may be viewed as an idealized model for batch learning. The present paper establishes the surprising fact that the collections of recursively enumerable languages that can be finitely identified (batch learned in the ideal case) from both positive and negative data can also be identified in the limit (incrementally learned in the ideal case) from only positive data. It is often difficult to extract insights about practical learning systems from abstract theorems in inductive inference. However, this result may be seen as carrying a moral for the design of learning systems, as it yields, in the ideal case of no inaccuracies, an algorithm for converting batch systems that learn from both positive and negative data into incremental systems that learn from only positive data without any loss in learning power. This is achieved by the incremental system simulating the batch system in incremental fashion and using the “localized closed-world assumption” heuristic to generate negative data.
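
    The conversion the abstract describes can be pictured with a small sketch. The code below is purely illustrative; the function names, the numeric coding of language elements, and the particular closed-world bound are assumptions, not the paper's construction. It simulates a hypothetical batch learner on each growing prefix of positive data, manufacturing negative data by assuming that every unseen element up to the largest positive example seen so far lies outside the language.

```python
# Illustrative sketch only (hypothetical names; not the paper's construction):
# wrap a batch learner that needs positive *and* negative data into an
# incremental learner over positive data only, generating negatives with a
# localized closed-world assumption.

def localized_negatives(positives):
    """Tentatively treat every element up to the largest positive seen so far
    that is not itself positive as negative (the closed-world heuristic)."""
    bound = max(positives)               # assumes a numeric coding of elements
    return [n for n in range(bound + 1) if n not in positives]

def incremental_from_batch(batch_identify, positive_stream):
    """Simulate the batch learner on each growing prefix of positive data.
    Early conjectures may rely on wrongly assumed negatives; identification
    in the limit only requires eventual convergence, so revisions are fine."""
    positives = set()
    for x in positive_stream:
        positives.add(x)
        yield batch_identify(sorted(positives), localized_negatives(positives))
```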

    Keep it simple: External resource utilisation and incremental product innovation in resource-challenged South African manufacturing firms

    This paper examines how firms in an emerging economy cope with resource challenges by implementing compensation strategies for incremental product innovations. The model is empirically tested using firm-level survey data from 497 South African manufacturing firms. Results show that higher diversity among a specific set of external knowledge sources is associated with a higher likelihood of incremental product innovation. Stronger embeddedness in non-domestic inter-organisational networks increases this likelihood as well. The positive effect of external knowledge diversity is stronger at higher levels of localised ties. Recommendations to enhance incremental product innovation concern the development of external relationships with domestic and international partners while limiting knowledge source diversity to a specific actor set. This paper shows that, in an emerging economy, firms have the agency to use contact learning, leading to product innovations tailored to local market needs and opportunities.

    Incremental rule learning based on example nearness from numerical data streams

    Mining data streams is a challenging task that requires online systems based on incremental learning approaches. This paper describes a classification system based on decision rules that may store up-to-date border examples to avoid unnecessary revisions when virtual drifts are present in data. Consistent rules classify new test examples by covering, and inconsistent rules classify them by distance, as the nearest neighbor algorithm does. In addition, the system provides an implicit forgetting heuristic so that positive and negative examples are removed from a rule when they are not near one another.
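
    The hybrid decision scheme described above (covering for consistent rules, distance to stored border examples for inconsistent ones) might be sketched roughly as follows; the class layout and names are illustrative and not the authors' actual implementation.

```python
# Rough sketch of the hybrid covering / nearest-neighbor decision scheme
# (illustrative simplification, not the published system).
import math

class Rule:
    def __init__(self, bounds, label, consistent=True):
        self.bounds = bounds          # per-attribute (low, high) intervals
        self.label = label
        self.consistent = consistent  # False once covered examples disagree
        self.border = []              # stored (example, label) border pairs

    def covers(self, x):
        return all(lo <= v <= hi for v, (lo, hi) in zip(x, self.bounds))

def classify(rules, x):
    for rule in rules:
        if rule.covers(x):
            if rule.consistent:
                return rule.label     # consistent rule: classify by covering
            # Inconsistent rule: fall back to the nearest stored border example.
            _, label = min(rule.border, key=lambda ex: math.dist(x, ex[0]))
            return label
    return None  # no rule covers x; the full system would handle this case
```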

    Data streams classification by incremental rule learning with parameterized generalization

    Mining data streams is a challenging task that requires online systems based on incremental learning approaches. This paper describes a classification system based on decision rules that may store up-to-date border examples to avoid unnecessary revisions when virtual drifts are present in data. Consistent rules classify new test examples by covering, and inconsistent rules classify them by distance, as the nearest neighbor algorithm does. In addition, the system provides an implicit forgetting heuristic so that positive and negative examples are removed from a rule when they are not near one another.

    Incremental Rule Learning and Border Examples Selection from Numerical Data Streams

    Mining data streams is a challenging task that requires online systems based on incremental learning approaches. This paper describes a classification system based on decision rules that may store up-to-date border examples to avoid unnecessary revisions when virtual drifts are present in data. Consistent rules classify new test examples by covering, and inconsistent rules classify them by distance, as the nearest neighbour algorithm does. In addition, the system provides an implicit forgetting heuristic so that positive and negative examples are removed from a rule when they are not near one another.

    Information Technology and Organizational Learning: An Empirical Analysis

    Organizational learning theory suggests that organizations “learn from experience” and are thus able to adapt their range of potential behaviors through the processing of information. Our research integrates this perspective with information systems economics theory and empirically tests whether new information technology investments contribute to an organization’s ability to learn from experience. Based on a cross-sectional time series analysis of data spanning 48 months and six independently operated payment processing facilities owned by a major international financial institution, our results indicate that IT has a significant positive impact on the rate at which organizations can translate learning from cumulative experience into incremental productivity gains.
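
    The kind of test the abstract reports can be illustrated with a conventional learning-curve regression in which IT capital moderates how cumulative experience translates into productivity. The sketch below is only a plausible specification with hypothetical variable names and data file; it is not the study's actual model.

```python
# Illustrative learning-curve regression (hypothetical columns and file name):
# does IT raise the rate at which cumulative experience becomes productivity?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# panel: one row per facility-month with columns
#   productivity, cum_volume (cumulative experience), it_capital, facility, month
panel = pd.read_csv("facility_panel.csv")          # hypothetical data file
panel["log_prod"] = np.log(panel["productivity"])
panel["log_exp"] = np.log(panel["cum_volume"])

# Facility and month effects plus an experience x IT interaction; a positive
# interaction coefficient would indicate IT accelerates organizational learning.
model = smf.ols("log_prod ~ log_exp * it_capital + C(facility) + C(month)",
                data=panel).fit()
print(model.summary())
```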

    Incremental Predictive Process Monitoring: How to Deal with the Variability of Real Environments

    A characteristic of existing predictive process monitoring techniques is to first construct a predictive model based on past process executions, and then use it to predict the future of new ongoing cases, without the possibility of updating it with new cases when they complete their execution. This can make predictive process monitoring too rigid to deal with the variability of processes working in real environments that continuously evolve and/or exhibit new variant behaviors over time. As a solution to this problem, we propose the use of algorithms that allow the incremental construction of the predictive model. These incremental learning algorithms update the model whenever new cases become available, so that the predictive model evolves over time to fit the current circumstances. The algorithms have been implemented using different case encoding strategies and evaluated on a number of real and synthetic datasets. The results provide first evidence of the potential of incremental learning strategies for predictive process monitoring in real environments, and of the impact of different case encoding strategies in this setting.
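
    A minimal sketch of the incremental setup the abstract proposes: the predictive model is updated each time a case completes, here via scikit-learn's partial_fit with a simple frequency-based case encoding. The activity alphabet, the encoding, and the classifier are assumptions for illustration, not necessarily the strategies evaluated in the paper.

```python
# Minimal sketch of incremental predictive monitoring (assumed encoding and
# classifier choice; the paper evaluates several strategies, not these exact ones).
import numpy as np
from sklearn.linear_model import SGDClassifier

ACTIVITIES = ["register", "check", "approve", "reject", "notify"]  # assumed alphabet
LABELS = np.array([0, 1])                                          # e.g. on-time vs late

def frequency_encode(trace):
    """Encode a case (or running prefix) as a vector of activity frequencies."""
    return np.array([[trace.count(a) for a in ACTIVITIES]], dtype=float)

model = SGDClassifier()

def on_case_completed(trace, outcome):
    """Update the model as soon as a case finishes, so it tracks evolving behavior."""
    model.partial_fit(frequency_encode(trace), np.array([outcome]), classes=LABELS)

def predict_outcome(running_prefix):
    """Predict the outcome of an ongoing case from its prefix so far."""
    return model.predict(frequency_encode(running_prefix))[0]

# usage: learn from two completed cases, then predict for a running one
on_case_completed(["register", "check", "approve"], 0)
on_case_completed(["register", "check", "reject", "notify"], 1)
print(predict_outcome(["register", "check"]))
```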