276 research outputs found

    Trends in LN-embedding practices at Waikato Institute of Technology (Wintec) in 2019

    In this report, we describe trends in the literacy-embedding practices of level-2 and level-3 tutors who worked in vocational contexts at Waikato Institute of Technology (Wintec) and who completed the New Zealand Certificate in Adult Literacy and Numeracy Education (NZCALNE[Voc]) in 2019. Following constructivist grounded theory methodology (Charmaz, 2014), we analysed 19 observations to produce 1302 descriptive labels that highlight literacy and numeracy practices which tutors intentionally integrated into their teaching as part of a collaborative and mentored training process. Of the initial 12 categories, we conflated the mapping of LN course demands with the identification of learners’ LN needs to arrive at a final 11. We then used these categories in an axial analysis (Saldaña, 2013), coding the 1302 labels as binaries (i.e. 1 if a label was related to a category, 0 if not). The resulting matrix of 14322 ratings was then analysed by calculating the frequency of 1s per category. We argued that the axial analysis allowed us to develop a more holistic perspective, showing how the 1302 labels were configured in relation to the 11 categories of analysis. We concluded that the 11 categories represented key aspects of vocational teaching and training, emphasising that LN-embedding practices have to be seamlessly integrated into general pedagogical approaches. A key construct for new tutors is to shape their understanding of seamlessly integrated versus bolted-on LN practices. Our recommendations remain within the whole-of-organisation perspective proposed in the 2017-2018 report (Greyling, 2019).
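
    A minimal sketch of the binary coding and per-category frequency count described above; the labels and categories here are invented stand-ins, not the report's data:

        # Illustrative sketch of the axial binary coding: each descriptive
        # label is rated 1 or 0 against each category, and the frequency of
        # 1s is tallied per category. Labels/categories are hypothetical.
        import numpy as np

        labels = ["models unit conversion", "checks trade vocabulary",
                  "maps course LN demands"]
        categories = ["numeracy modelling", "vocabulary building",
                      "mapping LN demands"]

        # Rows = labels, columns = categories (1 = related, 0 = not).
        matrix = np.array([
            [1, 0, 0],
            [0, 1, 0],
            [0, 0, 1],
        ])

        # Frequency of 1s by category, as in the report's axial analysis.
        for cat, n in zip(categories, matrix.sum(axis=0)):
            print(f"{cat}: {n}")

    With the report's 1302 labels and 11 categories, the same layout yields the 14322-rating matrix described above.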

    Comparative Analysis of Feature Extraction Techniques for Event Detection from News Channels' Facebook Page

    Event detection from Social Network sites (SNs) has attracted significant attention from researchers seeking to understand user perceptions and opinions on incidents that have occurred. Facebook is the most popular SN among internet users for expressing opinions, emotions and thoughts. Due to its popularity, many news channels such as the BBC have created Facebook pages to allow readers to comment on reported news, which has led to an explosion of user-generated data posted on the Internet. Monitoring and analyzing this rich and continuous user-generated content can yield unprecedentedly valuable information, enabling users and organizations to acquire actionable knowledge. Previously, in the context of text mining research, various feature extraction techniques have been proposed to extract the key features used to classify news posts into their corresponding events. However, these techniques have been tested separately on different data. Moreover, analyzing a large number of news posts over a period of time is a challenging task due to their complex properties and unstructured nature. Thus, this paper presents a comparative analysis of various feature extraction techniques with three classifiers, namely Support Vector Machine (SVM), Naïve Bayes (NB) and K-Nearest Neighbor (kNN). The aim of this research is to discover the feature extraction technique and classifier that can correctly detect events and offer optimal accuracy. The analysis was tested on three news channel datasets, from the BBC, Aljazeera, and Al-Arabiya. The experimental results show that Chi-square with SVM proved to be a better extraction technique and classifier than the alternatives, with optimal accuracies of 92.29%, 87.12% and 87.00% observed on the BBC, Aljazeera, and Al-Arabiya news channels respectively.
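
    A hedged sketch of the kind of pipeline compared here: Chi-square feature selection over text features, evaluated against the three classifiers named above. The toy posts and event labels are placeholders, not the BBC/Aljazeera/Al-Arabiya datasets:

        # Compare SVM, NB and kNN on chi-square-selected TF-IDF features.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.feature_selection import SelectKBest, chi2
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        posts = ["earthquake hits region", "election results announced",
                 "storm damages coast", "parliament passes budget"] * 10
        events = ["disaster", "politics", "disaster", "politics"] * 10

        for name, clf in [("SVM", LinearSVC()), ("NB", MultinomialNB()),
                          ("kNN", KNeighborsClassifier(n_neighbors=3))]:
            pipe = make_pipeline(TfidfVectorizer(),
                                 SelectKBest(chi2, k=10), clf)
            acc = cross_val_score(pipe, posts, events, cv=4).mean()
            print(name, round(acc, 3))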

    Numeracy gains at the Waikato Institute of Technology (Wintec) for 2018

    This report tracks numeracy gains achieved by targeted 2018 students at Waikato Institute of Technology. In collating data, we applied the multi-year testing requirement referred to by the Tertiary Education Commission (TEC, 2012, 2017a, b) as the sequence concept. To compare initial and progress assessment scores, we were required to set up a multivariate layout manually. We report on learners’ step-based progress to exemption levels for numeracy. Of the targeted numeracy cohort (N=591), 44.2% of learners (n=261) progressed to exemption-level scores (step 5 or higher). We used cross-tabulations to report on numeracy progress by ethnicity and Centre of Study at the institute. To establish whether learners showed statistically significant gains in numeracy, we used a matched-pairs t-test to compare initial and progress scale scores for the full cohort, followed by repeated-measures analysis of variance (ANOVA) to investigate within-subjects gains for two fixed factors, ethnicity and Centre of Study. To explore between-group and between-Centre differences, we performed a two-way ANOVA on initial and progress scale scores for the two fixed factors. To complete the picture, we replicated the TEC’s (2012) algorithm for calculating gain to illustrate that its results under-reported learners’ numeracy progress. The findings showed that within-subjects gains were statistically significant, while between-subjects gains for ethnicity categories were not. Under the TEC’s (2012) algorithm, approximately 22.7% (n=134) of the cohort (N=591) who had achieved step 5 (or higher) on numeracy were classified as not having achieved statistically significant gain. We continue to view the TEC’s algorithm as under-reporting success, noting its disparate impact in calculating progress. We concluded that current embedded numeracy instruction practices, though successful, could still be improved. We recommend that findings on numeracy progress be considered within a joined-up system of organisational practice that takes literacy and numeracy (LN) progress data, classroom observation analyses and module completions into account. The challenge will be to develop innovations for numeracy development that align with changing approaches and practices in vocational pedagogy. A whole-of-organisation approach would require that the LN team pursue close ties with other support teams such as Student Learning Services, Te Kete Kōnae and the Wintec learning coaches.
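
    A minimal sketch of the matched-pairs t-test step described above, on simulated initial and progress scale scores (the report uses actual LNAT assessment data):

        # Paired comparison of initial vs. progress scale scores.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        initial = rng.normal(500, 50, size=591)             # initial scores
        progress = initial + rng.normal(15, 30, size=591)   # modest gain

        t, p = stats.ttest_rel(initial, progress)
        print(f"t = {t:.2f}, p = {p:.4f}")  # tests within-subjects gain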

    Developing a whole-of-organisation perspective on literacy-embedding practices at Wintec: Multiple perspectives on a selection of 2019 Wintec cohorts

    This report deals with the reading and numeracy performance of four cohorts of learners taught by candidates who successfully completed the New Zealand Certificate in Adult Literacy and Numeracy Education (Vocational) in 2019. We show how these selected cohorts’ performance compares to the overall Wintec performance reported in Greyling, Ahmad and Wallace (2020a, b). We also investigate the links between initial reading and numeracy scores and module completions, defined either as a categorical Pass/Fail binary or as a continuous variable (i.e. the percentage of modules each student completes in any given year). Although the findings show that the targeted cohorts exceeded the mean performance of Wintec students on both reading and numeracy, we point out the limitations and ambiguities associated with such a finding. We recommend that a multifactorial model be developed to explain the complexities of student performance, that the pursuit of a whole-of-organisation perspective remain a priority goal, that a larger sample of NZCALNE(Voc) candidates’ students be tracked, and that other methodologies and/or interventions be considered to lift outcomes for students.
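
    One way such a score-to-completion link could be modelled is a logistic regression of the Pass/Fail binary on initial scores; this is an illustrative sketch on simulated data, not the report's analysis:

        # Hypothetical link between initial scores and Pass/Fail outcome.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        X = rng.normal(500, 50, size=(200, 2))   # initial reading, numeracy
        logit = 0.01 * (X.sum(axis=1) - 1000)    # invented weak association
        y = rng.random(200) < 1 / (1 + np.exp(-logit))  # Pass / Fail

        model = LogisticRegression().fit(X, y)
        print(model.coef_)  # direction/strength of each score's association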

    Reading assessment as developmental tracking: A Vygotskyan perspective

    In this report, we outline how intent statements can be used to identify high-frequency reading needs for a cohort of learners whose performance has been measured by a complex-adaptive reading assessment. Working from the assumption that intent statements associated with incorrect item responses represent a random sample of learner needs beyond their current level of knowledge and skill, we analysed the composite set of intent statements for incorrect items for a cohort of 39 learners. We outline the sub-components associated with the top four intent statements, followed by cross-tabulations to show the step level at which learning activities could be pitched to ensure that the distance between current and undeveloped skills and knowledge was not too great. Our approach, aligned with Vygotsky’s (1978) notion of the zone of proximal development, derives from the complex-adaptive test (CAT) functionality of the online version of the Literacy and Numeracy Assessment Tool (LNAT) in use in the tertiary sector in New Zealand.
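
    A small sketch of the tallying and cross-tabulation described above; the intent statements and step levels are invented for illustration:

        # Tally intent statements for incorrect items, then cross-tabulate
        # against step level to see where activities could be pitched.
        import pandas as pd

        incorrect = pd.DataFrame({
            "intent": ["find main idea", "find main idea", "infer meaning",
                       "use vocabulary", "infer meaning", "find main idea"],
            "step": [3, 4, 4, 3, 5, 4],
        })

        print(incorrect["intent"].value_counts())          # frequency
        print(pd.crosstab(incorrect["intent"], incorrect["step"]))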

    Multi label ranking based on positive pairwise correlations among labels

    Multi-Label Classification (MLC) is a general type of classification that has attracted many researchers in the last few years. Two common approaches are used to solve the MLC problem: Problem Transformation Methods (PTMs) and Algorithm Adaptation Methods (AAMs). This paper is concerned with the first approach, since it is more general and applicable to any domain. Specifically, this paper aims to meet two objectives. The first is to propose a new multi-label ranking algorithm based on the positive pairwise correlations among labels; the second is to propose new, simple PTMs that are based on label correlations rather than on label frequency, as in conventional PTMs. Experiments showed that the proposed algorithm outperforms the existing methods and algorithms on all evaluation metrics used in the experiments. The proposed PTMs also showed superior performance when compared with the existing PTMs.
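
    A minimal sketch of scoring labels by positive pairwise correlation, one plausible reading of the idea above; the toy label matrix is not the paper's benchmark data, and the ranking rule is an illustrative simplification:

        # Rank labels by the strength of their positive co-occurrence
        # with other labels, computed from a binary label matrix.
        import numpy as np

        # Rows = instances, columns = labels (1 = label applies).
        Y = np.array([
            [1, 1, 0, 0],
            [1, 1, 1, 0],
            [0, 0, 1, 1],
            [1, 1, 0, 0],
        ])

        corr = np.corrcoef(Y, rowvar=False)    # pairwise label correlations
        np.fill_diagonal(corr, 0.0)
        positive = np.clip(corr, 0.0, None)    # keep positive correlations

        scores = positive.sum(axis=1)          # per-label correlation mass
        print(np.argsort(scores)[::-1])        # labels, strongest first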

    Expanding the data capacity of QR codes using multiple compression algorithms and base64 encode/decode

    The Quick Response (QR) code is an enhancement of the one-dimensional barcode, which could store only a limited amount of information. The QR code has the capability to encode various data formats and languages. Several techniques have been suggested by researchers to increase the data content. One technique for increasing data capacity is to compress the data and encode it with a suitable data encoder. This study focuses on the selection of compression algorithms and the use of a Base64 encoder/decoder to increase the capacity of the data to be stored in the QR code. The result is compared with the common technique to assess the efficiency of the selected compression algorithms after the data has been encoded with the Base64 encoder/decoder.
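
    A sketch of the compress-then-Base64 idea, with zlib standing in for whichever compression algorithm is selected (the study compares several):

        # Compress a payload, then Base64-encode it for storage in a QR code.
        import base64
        import zlib

        data = b"Some repetitive payload " * 50

        compressed = zlib.compress(data, level=9)
        encoded = base64.b64encode(compressed)  # QR-safe character set

        print(len(data), len(compressed), len(encoded))

        # Decoding reverses both steps.
        assert zlib.decompress(base64.b64decode(encoded)) == data

    Note that Base64 inflates its input by roughly a third, so the net capacity gain depends on the compression ratio exceeding that overhead.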

    A comparative study on gene selection methods for tissues classification on large scale gene expression data

    Deoxyribonucleic acid (DNA) microarray technology is a recent invention that provides colossal opportunities to measure a large scale of gene expressions simultaneously. However, interpreting large-scale gene expression data remains a challenging issue due to its innate nature of “high dimensional, low sample size”. Microarray data typically involve thousands of genes (n) in a very small sample (p), which complicates the data analysis process. For this reason, feature selection methods, also known as gene selection methods, have become necessary to select significant genes that offer the maximum discriminative power between cancerous and normal tissues. Feature selection methods can be structured into three basic categories: a) filter methods; b) wrapper methods; and c) embedded methods. Among these, filter gene selection methods provide an easy way to identify informative genes and can simply reduce large-scale microarray datasets. Although filter-based gene selection techniques have been commonly used in analyzing microarray datasets, these techniques have been tested separately in different studies. Therefore, this study aims to investigate and compare the effectiveness of four popular filter gene selection methods, namely Signal-to-Noise Ratio (SNR), Fisher Criterion (FC), Information Gain (IG) and the t-Test, in selecting informative genes that can distinguish cancerous from normal tissues. In this experiment, a common classifier, the Support Vector Machine (SVM), is used to train on the selected genes. These gene selection methods are tested on three large-scale gene expression datasets, namely a breast cancer dataset, a colon dataset, and a lung dataset. This study discovered that IG and SNR are more suitable for use with SVM. Furthermore, this study showed that SVM performance remained moderately unaffected unless a very small number of genes was selected.
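
    A hedged sketch of the filter-then-classify pipeline the study describes. Here f_classif (an F-test, related in spirit to the Fisher criterion) stands in for the four filters compared, and the expression matrix is simulated rather than one of the three datasets:

        # Filter gene selection followed by a linear SVM.
        import numpy as np
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        rng = np.random.default_rng(2)
        X = rng.normal(size=(60, 2000))    # 60 tissue samples, 2000 genes
        y = rng.integers(0, 2, size=60)    # cancerous vs. normal labels

        pipe = make_pipeline(SelectKBest(f_classif, k=50),
                             SVC(kernel="linear"))
        print(cross_val_score(pipe, X, y, cv=5).mean())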

    An Intelligent Model To Control Preemption Rate Of Instantaneous Request Calls In Networks With Book-Ahead Reservation

    Resource sharing between book-ahead (BA) and instantaneous request (IR) reservation often results in a high preemption rate for on-going IR calls. A high IR call preemption rate causes interruptions to service continuity, which is considered detrimental in a QoS-enabled network. A number of call admission control models have been proposed in the literature to reduce the preemption rate of on-going IR calls. Many of these models use a tuning parameter to achieve a certain level of preemption rate. This paper presents an artificial neural network (ANN) model to dynamically control the preemption rate of on-going calls in a QoS-enabled network. The model maps network traffic parameters and the desired level of preemption rate onto an appropriate tuning parameter. Once trained, this model can be used to automatically estimate the tuning parameter value necessary to achieve the desired level of preemption rate. Simulation results show that the preemption rate attained by the model closely matches the target rate.
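
    A minimal sketch of the mapping described: a small neural network regressor from traffic parameters plus a target preemption rate to a tuning parameter. The training pairs and the relation between inputs and output are invented here; the paper derives its pairs from network simulation:

        # ANN mapping (traffic parameters, desired preemption rate)
        # -> tuning parameter, on synthetic training data.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(3)
        # Columns: BA load, IR load, desired preemption rate.
        X = rng.uniform(0, 1, size=(500, 3))
        # Invented monotone relation standing in for simulation results.
        y = 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.8 * X[:, 2]

        ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                           random_state=0).fit(X, y)
        print(ann.predict([[0.6, 0.4, 0.05]]))  # estimated tuning parameter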

    Representing Semantics of Text by Acquiring its Canonical Form

    Canonical form is the notion that related ideas should have the same meaning representation. It is a notion that greatly simplifies tasks by dealing with a single meaning representation for a wide range of expressions. The issue in text representation is to devise a formal approach to capturing meaning, or semantics, in sentences. These issues include heterogeneity and inconsistency in text. Polysemous, synonymous, morphologically related and homonymous words pose serious drawbacks when trying to capture senses in sentences. This calls for a need to capture and represent senses in order to resolve vagueness and improve the understanding of senses in documents for knowledge creation purposes. We introduce a simple and straightforward method to capture the canonical form of sentences. The proposed method first identifies canonical forms using Word Sense Disambiguation (WSD) techniques and then applies the First Order Predicate Logic (FOPL) scheme to represent the identified canonical forms. We adopted two WSD algorithms, Lesk and Selectional Preference Restriction, which concentrate on disambiguating senses in words, phrases and sentences. We also adopted the FOPL scheme to analyse argument-predicate relations in sentences, employing consequence logic theorems to test for the satisfiability, validity and completeness of information in sentences.
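
    A sketch of the Lesk step using NLTK's stock implementation (the paper's own variant may differ); the sentence and target word are illustrative, and the WordNet corpus must be downloaded first via nltk.download("wordnet"):

        # Disambiguate a word's sense in context with the Lesk algorithm.
        from nltk.wsd import lesk

        sentence = "I went to the bank to deposit money".split()
        sense = lesk(sentence, "bank", pos="n")  # returns a WordNet Synset
        print(sense, "->", sense.definition() if sense else "no sense found")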