
    A model for automated topic spotting in a mobile chat based mathematics tutoring environment

    Systems of writing have existed for thousands of years. The history of civilisation and the history of writing are so intertwined that it is hard to separate one from the other. These systems of writing, however, are not static. They change. One of the latest developments in systems of writing is the short electronic message, as seen on Twitter and in MXit. One novel application which uses these short electronic messages is the Dr Math® project. Dr Math is a mobile online tutoring system in which pupils use MXit on their cell phones to receive help with their mathematics homework from volunteer tutors around the world. These conversations between pupils and tutors are held in MXit lingo or MXit language – a cryptic, abbreviated system 0f ryting w1ch l0ks lyk dis. Project μ (pronounced mu and indicating MXit Understander) investigated how topics could be determined in MXit lingo, and its research outputs spot mathematics topics in conversations between Dr Math tutors and pupils. Once the topics are determined, supporting documentation can be presented to the tutors to assist them in helping pupils with their mathematics homework. Project μ made the following contributions to new knowledge: a statistical and linguistic analysis of MXit lingo, providing letter frequencies, word frequencies, message-length statistics, and linguistic bases for the new spelling conventions seen in MXit-based conversations; a post-stemmer for MXit lingo, which removes suffixes from words while taking MXit spelling conventions into account, so that words such as equashun and equation reduce to the same root stem; a list of over ten thousand stop words for MXit lingo, appropriate for the domain of mathematics; a misspelling corrector for MXit lingo, which corrects words such as acount by equating them to account; and a model for spotting mathematical topics in MXit lingo. The model was instantiated and integrated into the Dr Math tutoring platform.
Empirical evidence as to the effectiveness of the μ Topic Spotter and the other contributions is also presented. This evidence includes statistical tests on MXit lingo, targeted tests of the misspelling corrector, stemmer, and feedback mechanism, and an extensive content-analysis exercise with respect to mathematics topics.
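The abstract above describes a post-stemmer and a misspelling corrector for MXit lingo. A minimal sketch of how such components might fit together is shown below; the respelling table, suffix list, and vocabulary here are illustrative assumptions for demonstration, not Project μ's actual rules.

```python
import difflib

# Assumed MXit respelling conventions (illustrative, not Project mu's rule set).
MXIT_RESPELLINGS = {
    "shun": "tion",   # equashun -> equation
    "w1ch": "which",
}

# Assumed suffix list for the post-stemmer.
SUFFIXES = ("tion", "ing", "s")


def normalise(word: str) -> str:
    """Rewrite assumed MXit spelling conventions into standard English."""
    word = word.lower()
    for mxit, standard in MXIT_RESPELLINGS.items():
        word = word.replace(mxit, standard)
    return word


def post_stem(word: str) -> str:
    """Strip a common suffix after normalising spelling, so that
    'equashun' and 'equation' reduce to the same root stem."""
    word = normalise(word)
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word


def correct(word: str, vocabulary=("account", "equation", "mathematics")) -> str:
    """Map a misspelling such as 'acount' to the closest vocabulary word."""
    matches = difflib.get_close_matches(word.lower(), vocabulary, n=1)
    return matches[0] if matches else word
```

With these assumed rules, post_stem("equashun") and post_stem("equation") both yield the stem "equa", and correct("acount") yields "account", mirroring the behaviour the abstract describes.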

    A mathematics rendering model to support chat-based tutoring

    Dr Math is a math tutoring service implemented on the chat application Mxit. The service allows school learners to use their mobile phones to discuss mathematics-related topics with human tutors. Using the broad user base provided by Mxit, the Dr Math service has grown to tens of thousands of registered school learners. The tutors on the service are all volunteers, and the learners far outnumber the available tutors at any given time. School learners on the service use a shorthand language form called microtext to phrase their queries. Microtext is an informal form of language consisting of a variety of misspellings and symbolic representations, which emerge spontaneously from the idiosyncrasies of each learner. The specific form of microtext found on the Dr Math service contains mathematical questions and example equations pertaining to the tutoring process. Deciphering the queries to discover their embedded mathematical content slows down the tutoring process, wasting time that could have been spent addressing more learner queries. The microtext language thus creates an unnecessary burden on the tutors. This study describes the development of an automated process for translating Dr Math microtext queries into mathematical equations. Using the design science research paradigm as a guide, three artefacts are developed: a construct, a model and an instantiation. The construct represents the creation of new knowledge, as it provides greater insight into the contents and structure of the language found on a mobile mathematics tutoring service. The construct serves as the basis for a model for translating microtext queries into mathematical equations formatted for display in an electronic medium. No such technique currently exists, and the model therefore contributes new knowledge. To validate the model, an instantiation was created to serve as a proof-of-concept.
The instantiation applies various concepts and techniques, such as those related to natural language processing, to the learner queries on the Dr Math service. These techniques are employed to translate an input microtext statement into a mathematical equation structured using a mark-up language. The creation of the instantiation thus constitutes a knowledge contribution, as most of these techniques have never been applied to the problem of translating microtext into mathematical equations. For the automated process to have utility, it should perform on a level comparable to that of a human performing a similar translation task. To determine how closely the results of the automated process match those of a human, three human participants were asked to perform coding and translation tasks, and their results were compared to those of the automated process across a variety of metrics, including agreement, correlation, precision and recall. The results from the human participants served as the baseline values for comparison. Krippendorff's α was used to determine the level of agreement between the results, and Pearson's correlation coefficient the level of correlation. The agreement between the human participants and the automated process was calculated at a level deemed satisfactory for exploratory research, and the level of correlation was calculated as moderate. These values are consistent with those calculated for the human baseline. Furthermore, the automated process was able to meet or improve on all of the human baseline metrics. These results validate that the automated process is able to perform the translation at a level comparable to that of a human. The automated process is available for integration into any requesting application by means of a publicly accessible web service.
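The evaluation above compares the automated process against a human baseline using, among other metrics, Pearson's correlation coefficient. A minimal sketch of that comparison is shown below; the per-query scores are hypothetical stand-ins, since the thesis's actual data is not given in the abstract, and Krippendorff's α (a more involved chance-corrected agreement statistic) is omitted here.

```python
import math


def pearson(xs, ys):
    """Pearson's correlation coefficient between two equal-length samples."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)


# Hypothetical per-query scores: human baseline vs. automated process.
human_scores = [0.8, 0.6, 0.9, 0.7, 0.5]
auto_scores = [0.7, 0.6, 0.8, 0.8, 0.4]

r = pearson(human_scores, auto_scores)
```

A value of r near 1 would indicate that the automated process tracks the human judgements closely; the study reports a moderate correlation alongside satisfactory agreement.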