1,929,662 research outputs found

    The Enhancement of Mathematical Communication and Self Regulated Learning of Senior High School Students Through PQ4R Strategy Accompanied by Refutation Text Reading

    This study is an experimental study with a control-group pretest-posttest design, aimed at examining the influence of the PQ4R strategy accompanied by refutation text reading, school level, and students' prior mathematical knowledge on the achievement and enhancement of students' mathematical communication ability and self-regulated learning. The subjects were 241 grade X students from three public senior high schools at high, medium, and low school levels. The research instruments consisted of one mathematical communication test and one self-regulated learning scale. Data were analysed with the Kolmogorov-Smirnov test (Z-test), Levene's test, t-tests, one-way and two-way ANOVA, the Scheffé post hoc test, and the chi-square test. The study found that learning with the PQ4R strategy accompanied by refutation text reading had a consistent influence compared with conventional learning, viewed as a whole as well as by school level and by prior mathematical knowledge. In addition, the study found: (1) no interaction between learning approach (PQ4R with refutation text reading versus conventional) and school level on (a) students' mathematical communication and (b) students' self-regulated learning; (2) no significant interaction between learning approach and students' prior mathematical knowledge on (a) students' mathematical communication ability and (b) students' self-regulated learning; and (3) an association between students' mathematical communication ability and students' self-regulated learning.
    Keywords: PQ4R, Refutation Text, Mathematical Communication, Self Regulated Learning
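    The statistical pipeline listed above can be made concrete in code. Below is a minimal sketch using SciPy and statsmodels; the file name and the column names (score, group, school_level) are illustrative assumptions, not the study's actual data layout.

```python
# Illustrative analysis pipeline; all file and column names are assumptions.
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("communication_scores.csv")  # hypothetical data file

# Normality check: Kolmogorov-Smirnov test on standardized scores
z = (df["score"] - df["score"].mean()) / df["score"].std(ddof=1)
print(stats.kstest(z, "norm"))

# Homogeneity of variance (Levene) and two-sample t-test:
# PQ4R + refutation text group vs. conventional learning group
pq4r = df.loc[df["group"] == "pq4r", "score"]
conv = df.loc[df["group"] == "conventional", "score"]
print(stats.levene(pq4r, conv))
print(stats.ttest_ind(pq4r, conv))

# Two-way ANOVA: learning group x school level, including the interaction term
model = ols("score ~ C(group) * C(school_level)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```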

    An Easy to Use Repository for Comparing and Improving Machine Learning Algorithm Usage

    The results from most machine learning experiments are used for a specific purpose and then discarded. This results in a significant loss of information and requires rerunning experiments to compare learning algorithms. It also requires implementing the algorithms being compared, which may not always be done correctly. By storing the results from previous experiments, machine learning algorithms can be compared easily, and the knowledge gained from them can be used to improve their performance. The purpose of this work is to provide easy access to previous experimental results for learning and comparison. These stored results are comprehensive, storing the prediction for each test instance as well as the learning algorithm, hyperparameters, and training set that were used. Previous results are particularly important for meta-learning, which, in a broad sense, is the process of learning from previous machine learning results so that the learning process is improved. While other experiment databases exist, one of our focuses is easy access to the data. We provide meta-learning data sets that are ready to be downloaded for meta-learning experiments. In addition, queries to the underlying database can be made if specific information is desired. We also differ from previous experiment databases in that our database is designed at the instance level, where an instance is an example in a data set. We store the predictions of a learning algorithm trained on a specific training set for each instance in the test set. Data-set-level information can then be obtained by aggregating the results from the instances. The instance-level information can be used for many tasks, such as determining the diversity of a classifier or algorithmically determining the optimal subset of training instances for a learning algorithm.
    Comment: 7 pages, 1 figure, 6 tables
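    A minimal sketch of what such instance-level storage could look like, using SQLite; the schema and field names below are assumptions for illustration, not the repository's actual format.

```python
# Hypothetical instance-level schema: one row per (experiment, test instance).
import sqlite3

conn = sqlite3.connect("ml_results.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS predictions (
        dataset      TEXT,     -- name of the data set
        algorithm    TEXT,     -- learning algorithm used
        hyperparams  TEXT,     -- serialized hyperparameter settings
        train_set_id TEXT,     -- identifies the exact training set used
        instance_id  INTEGER,  -- one row per test instance
        prediction   TEXT,
        actual       TEXT
    )""")

# Data-set-level results are then simple aggregations over instance rows,
# e.g. per-algorithm accuracy on one data set:
for row in conn.execute("""
        SELECT algorithm, AVG(prediction = actual) AS accuracy
        FROM predictions WHERE dataset = ? GROUP BY algorithm""", ("iris",)):
    print(row)
```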

    A Winnow-Based Approach to Context-Sensitive Spelling Correction

    A large class of machine-learning problems in natural language requires the characterization of linguistic context. Two characteristic properties of such problems are that their feature space is of very high dimensionality, and their target concepts refer to only a small subset of the features in the space. Under such conditions, multiplicative weight-update algorithms such as Winnow have been shown to have exceptionally good theoretical properties. We present an algorithm combining variants of Winnow and weighted-majority voting, and apply it to a problem in the aforementioned class: context-sensitive spelling correction. This is the task of fixing spelling errors that happen to result in valid words, such as substituting "to" for "too", "casual" for "causal", etc. We evaluate our algorithm, WinSpell, by comparing it against BaySpell, a statistics-based method representing the state of the art for this task. We find: (1) When run with a full (unpruned) set of features, WinSpell achieves accuracies significantly higher than BaySpell was able to achieve in either the pruned or unpruned condition; (2) When compared with other systems in the literature, WinSpell exhibits the highest performance; (3) The primary reason that WinSpell outperforms BaySpell is that WinSpell learns a better linear separator; (4) When run on a test set drawn from a different corpus than the training set, WinSpell is better able than BaySpell to adapt, using a strategy we will present that combines supervised learning on the training set with unsupervised learning on the (noisy) test set.
    Comment: To appear in Machine Learning, Special Issue on Natural Language Learning, 1999. 25 pages
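    To make the multiplicative update rule concrete, here is a minimal sketch of the basic Winnow algorithm on sparse binary features; the feature representation and parameter values are generic textbook choices, not WinSpell's actual configuration.

```python
# Basic Winnow with multiplicative promotions/demotions on sparse binary
# features; a generic sketch, not the WinSpell system itself.
def winnow_train(examples, n_features, alpha=1.5):
    """examples: iterable of (active_feature_ids, label), label in {0, 1}."""
    threshold = n_features / 2.0
    w = [1.0] * n_features          # weights stay positive under updates
    for features, label in examples:
        score = sum(w[f] for f in features)
        predicted = 1 if score >= threshold else 0
        if predicted != label:      # mistake-driven update
            factor = alpha if label == 1 else 1.0 / alpha
            for f in features:      # promote on a false negative, demote on
                w[f] *= factor      # a false positive; only active features move
    return w, threshold
```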

    On classifying images using Keras and Tensorflow in Python

    This hands-on presentation focuses on the practical, essential aspects necessary to build a custom classifier. The tutorial starts from prerequisites, such as the libraries that must be installed, and proceeds to the step-by-step procedure for classifying new classes, not previously learnt by a pre-trained model, using transfer learning. Such separation of new classes of objects in images starts with building the novel image data set and splitting it into training, validation, and test sets. The model learns to distinguish the objects in the training-set images, is tuned on the validation set, and finally faces images from the previously unseen test set.
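    As a rough illustration of the workflow the tutorial describes, the following is a minimal transfer-learning sketch in Keras; the choice of MobileNetV2 as the base model, the image size, the three-class head, and the directory paths are all assumptions for illustration, not the tutorial's actual setup.

```python
import tensorflow as tf

# Load a pre-trained base model and freeze its weights (transfer learning).
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),  # one unit per new class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical directory layout: one sub-folder per class.
train = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224))
val = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=(224, 224))
model.fit(train, validation_data=val, epochs=5)
```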

    Learning programs by learning from failures

    We describe an inductive logic programming (ILP) approach called learning from failures. In this approach, an ILP system (the learner) decomposes the learning problem into three separate stages: generate, test, and constrain. In the generate stage, the learner generates a hypothesis (a logic program) that satisfies a set of hypothesis constraints (constraints on the syntactic form of hypotheses). In the test stage, the learner tests the hypothesis against training examples. A hypothesis fails when it does not entail all the positive examples or entails a negative example. If a hypothesis fails, then, in the constrain stage, the learner learns constraints from the failed hypothesis to prune the hypothesis space, i.e. to constrain subsequent hypothesis generation. For instance, if a hypothesis is too general (entails a negative example), the constraints prune generalisations of the hypothesis. If a hypothesis is too specific (does not entail all the positive examples), the constraints prune specialisations of the hypothesis. This loop repeats until either (i) the learner finds a hypothesis that entails all the positive and none of the negative examples, or (ii) there are no more hypotheses to test. We introduce Popper, an ILP system that implements this approach by combining answer set programming and Prolog. Popper supports infinite problem domains, reasoning about lists and numbers, learning textually minimal programs, and learning recursive programs. Our experimental results on three domains (toy game problems, robot strategies, and list transformations) show that (i) constraints drastically improve learning performance, and (ii) Popper can outperform existing ILP systems, both in terms of predictive accuracies and learning times.
    Comment: Accepted for the Machine Learning journal
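    The generate-test-constrain loop can be summarized schematically. The sketch below is a plain-Python paraphrase of the abstract's description, with placeholder generate/entails callables rather than Popper's actual ASP/Prolog machinery.

```python
# Schematic learning-from-failures loop; 'generate' and 'entails' are
# placeholder callables, not Popper's actual implementation.
def learn_from_failures(generate, entails, pos, neg):
    constraints = set()
    while True:
        h = generate(constraints)            # hypothesis meeting constraints
        if h is None:
            return None                      # hypothesis space exhausted
        too_general = any(entails(h, e) for e in neg)
        too_specific = not all(entails(h, e) for e in pos)
        if not too_general and not too_specific:
            return h                         # entails all pos, no neg
        if too_general:                      # prune its generalisations
            constraints.add(("no_generalisation_of", h))
        if too_specific:                     # prune its specialisations
            constraints.add(("no_specialisation_of", h))
```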

    Using Machine Learning for Handover Optimization in Vehicular Fog Computing

    Smart mobility management will be an important prerequisite for future fog computing systems. In this research, we propose learning-based handover optimization for the Internet of Vehicles to assist the smooth transition of device connections and offloaded tasks between fog nodes. To accomplish this, we use machine learning algorithms to learn from vehicle interactions with fog nodes. Our approach uses a three-layer feed-forward neural network to predict the correct fog node at a given location and time, with 99.2% accuracy on a test set. We also implement a dual-stacked recurrent neural network (RNN) with long short-term memory (LSTM) cells capable of learning the latency, or cost, associated with these service requests. We build a simulation in JAMScript using a dataset of real-world vehicle movements to generate training data for these networks. We further propose the use of this predictive system in a smarter request-routing mechanism to minimize service interruption during handovers between fog nodes and to anticipate areas of low coverage, and we evaluate the models' performance on a test set through a series of experiments.
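    As a rough illustration, a three-layer feed-forward classifier of the kind described could be set up as follows in Keras; the input features, layer widths, and number of fog nodes are assumptions, not the paper's actual architecture.

```python
# Hypothetical three-layer feed-forward fog-node classifier; all sizes and
# the (location, time) feature encoding are illustrative assumptions.
import tensorflow as tf

n_fog_nodes = 20  # assumed number of candidate fog nodes
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3,)),   # e.g. latitude, longitude, time
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_fog_nodes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, validation_split=0.1, epochs=20)
```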

    Do We Train on Test Data? Purging CIFAR of Near-Duplicates

    The CIFAR-10 and CIFAR-100 datasets are two of the most heavily benchmarked datasets in computer vision and are often used to evaluate novel methods and model architectures in the field of deep learning. However, we find that 3.3% and 10% of the images from the test sets of these datasets have duplicates in the training set. These duplicates are easily recognizable by memorization and may, hence, bias the comparison of image recognition techniques regarding their generalization capability. To eliminate this bias, we provide the "fair CIFAR" (ciFAIR) dataset, where we replaced all duplicates in the test sets with new images sampled from the same domain. We then re-evaluate the classification performance of various popular state-of-the-art CNN architectures on these new test sets to investigate whether recent research has overfitted to memorizing data instead of learning abstract concepts. We find a significant drop in classification accuracy of between 9% and 14% relative to the original performance on the duplicate-free test set. The ciFAIR dataset and pre-trained models are available at https://cvjena.github.io/cifair/, where we also maintain a leaderboard.
    Comment: Journal of Imaging
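    Conceptually, the duplicate check boils down to a nearest-neighbour search between test and training images. The sketch below uses plain feature-space distances with an arbitrary threshold; the feature representation and threshold are assumptions, not the paper's actual matching procedure.

```python
# Naive near-duplicate detection between test and training sets; the feature
# representation and threshold are assumptions, not the ciFAIR methodology.
import numpy as np

def find_near_duplicates(test_feats, train_feats, threshold=0.05):
    """Both arrays have shape (n, d), e.g. L2-normalized image embeddings.
    Returns indices of test items whose nearest training neighbour is
    closer than the threshold."""
    dups = []
    for i, t in enumerate(test_feats):
        nearest = np.min(np.linalg.norm(train_feats - t, axis=1))
        if nearest < threshold:
            dups.append(i)
    return dups
```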