Design of Single-User-Mode Computer-Based Examination System for Senior Secondary Schools in Onitsha North Local Government Area of Anambra State, Nigeria
This work is concerned with the design of a single-user-mode computer-based examination system. It focused on trends in online computer-based examination and carried out a critical review of the current paper-based test system employed in Senior Secondary Schools in Nigeria. An alternative system was then proposed to address the challenges identified in the existing system. The system was designed using the SSADM methodology and implemented with the Rapid PHP IDE on a Windows 10 system, using PHP, HTML, CSS and MySQL technologies, with Apache as the application server. The source code for the designed CBT system is available at https://villagemath.net/journals/ver/v3i1/ABAH-HONMANE-AGE-OGBULE-CBT.zip, with the installation guide and setup information provided in Appendix A.
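The abstract describes a question bank stored in MySQL and served through PHP. As a minimal sketch of how such a single-user CBT system might store and draw questions, the following uses Python with SQLite as a stand-in for the MySQL backend; the table layout and column names are assumptions for illustration, not taken from the source code.

```python
import sqlite3

# Hypothetical question-bank schema for a single-user CBT session,
# using an in-memory SQLite database in place of MySQL.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE questions (
        id INTEGER PRIMARY KEY,
        subject TEXT NOT NULL,
        body TEXT NOT NULL,
        option_a TEXT, option_b TEXT, option_c TEXT, option_d TEXT,
        correct TEXT NOT NULL CHECK (correct IN ('A', 'B', 'C', 'D'))
    )
""")
conn.execute(
    "INSERT INTO questions (subject, body, option_a, option_b, option_c, option_d, correct) "
    "VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("Mathematics", "What is 7 * 8?", "54", "56", "58", "64", "B"),
)

def draw_exam(conn, subject, n):
    """Draw a random selection of questions for one exam session."""
    return conn.execute(
        "SELECT id, body FROM questions WHERE subject = ? "
        "ORDER BY RANDOM() LIMIT ?",
        (subject, n),
    ).fetchall()

exam = draw_exam(conn, "Mathematics", 1)
```

Randomised selection per session is one common way such systems discourage answer sharing between candidates.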
Evaluation of computer packages as an aid in the compilation of question papers at the Technikon Free State
Thesis
Lecturers at the Technikon Free State who participated in a survey felt that too much time was spent on compiling questionnaires and memoranda. The purpose of the study was to identify a computer package that lecturers could use to compile questionnaires and memoranda in order to save time.
Most of the respondents indicated that they were computer literate and that
they were personally responsible for the electronic preparation of their
questionnaires. These preparations are done by means of a word processor.
It was found that lecturers present an average of 3 subjects. The average
time spent to compile a questionnaire was 10 hours and the average number
of assessments per subject per year was 12.
Most of the respondents made use of items previously used. The burden of
compiling questionnaires and memoranda could be taken off lecturers if these
items were stored in an item bank, provided that the item bank was regularly
updated with new items. The majority of respondents indicated that if a
computer package that suits their requirements existed, they would utilize it.
The starting point of this investigation was to determine what should be
present in a questionnaire and memorandum. The questions in a
questionnaire can be divided into two types, namely selection-response items and construction-response items. Selection-response items can be divided into true/false, multiple-choice and cross-match items. Construction-response items can be divided into short-answer, completion and essay items. Respondents indicated that all the above-mentioned question types are used in questionnaires.
According to the regulations of the Technikon Free State the score of each
question and section must be displayed on the questionnaire. The grand total
of the questionnaire should be indicated clearly. The questionnaire should
also be analyzed to show its composition in terms of knowledge, insight, application and creativity. Each questionnaire should be compiled in English and Afrikaans, either combined into one questionnaire or as two separate questionnaires. Most of the respondents indicated that the questions are numbered, and therefore a need for sections and subsections in a questionnaire was identified.
Existing computer packages were investigated to see what facilities were
available to compile questionnaires and memoranda. Lecturers were provided
with a second questionnaire to determine what should, according to them, be
present in a computer package to compile questionnaires and memoranda.
The questions in the questionnaire were based on the facilities that were
present in existing computer packages, the regulations of the Technikon Free
State, as well as literature.
The respondents rated security as important. A need to work in a Windows environment was identified, mainly because of familiarity. Further, it was found that it should also be possible to export the questionnaire and memoranda to a word-processing program for final adjustments before the questionnaire is printed on paper.
A measuring instrument was developed to evaluate computer packages that
compile questionnaires and memoranda. The measuring instrument was
divided into six sections, namely: registration of items, editing of saved items, compiling a questionnaire, additional information about computer packages, general selection criteria and specific selection criteria. Weights were assigned to the items on the measuring instrument according to their importance.
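The weighted measuring instrument described above can be sketched as a simple weighted-sum score. The section names follow the abstract, but the weights, rating scale and package ratings below are illustrative assumptions, not values from the thesis.

```python
# Hypothetical weights per section of the measuring instrument
# (illustrative only; the thesis does not publish these numbers here).
WEIGHTS = {
    "registration of items": 3,
    "editing of saved items": 2,
    "compiling a questionnaire": 3,
    "additional information": 1,
    "general selection criteria": 2,
    "specific selection criteria": 2,
}

def score_package(ratings):
    """ratings: section -> rating in [0, 1]; returns a weighted percentage."""
    total = sum(WEIGHTS.values())
    achieved = sum(WEIGHTS[s] * ratings.get(s, 0.0) for s in WEIGHTS)
    return 100.0 * achieved / total

# Compare two hypothetical packages on the same instrument.
package_a = {s: 1.0 for s in WEIGHTS}   # satisfies every section fully
package_b = {"registration of items": 1.0, "compiling a questionnaire": 0.5}

print(score_package(package_a))  # 100.0
print(round(score_package(package_b), 1))
```

Scoring each candidate package against the same weighted instrument is what makes the strong points and shortcomings directly comparable, as the study goes on to do.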
Existing computer packages were measured against the measuring
instrument. The strong characteristics and the shortcomings of each
computer package were highlighted. A recommendation was made as to
what computer package to use at the Technikon Free State. Because no
computer package fulfils all the needs of the user, it remains the user's
responsibility to make use of different methods to overcome the shortcomings
of a computer package.
Formative computer based assessment in diagram based domains
This research argues that the formative assessment of student coursework in free-form, diagram-based domains can be automated using CBA techniques in a way which is both feasible and useful. Formative assessment is that form of assessment in which the objective is to assist the process of learning undertaken by the student. The primary deliverable associated with formative assessment is feedback. CBA courseware provides facilities to implement the full lifecycle of an exercise through an integrated, online system. This research demonstrates that CBA offers unique opportunities for student learning through formative assessment, including allowing students to correct their solutions over a larger number of submissions than it would be feasible to allow within the context of traditional assessment forms.
The approach to research involves two main phases. The first phase involves designing and implementing an assessment course using the CourseMarker / DATsys CBA system. This system, in common with many other examples of CBA courseware, was intended primarily to conduct summative assessment. The benefits and limitations of the system are identified. The second phase identifies three extensions to the architecture which encapsulate the difference in requirements between summative assessment and formative assessment, presents a design for the extensions, documents their implementation as extensions to the CourseMarker / DATsys architecture and evaluates their contribution.
The three extensions are novel extensions for free-form CBA which allow the assessment of the aesthetic layout of student diagrams, the marking of student solutions where multiple model solutions are acceptable and the prioritisation and truncation of feedback prior to its presentation to the student.
Evaluation results indicate that the student learning process can be assisted through formative assessment which is automated using CBA courseware. The students learn through an iterative process in which feedback upon a submitted student coursework solution is used by the student to improve their solution, after which they may re-submit and receive further feedback
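One of the three extensions is the prioritisation and truncation of feedback before it reaches the student. A minimal sketch of that idea (not the CourseMarker / DATsys implementation; the priorities and messages are invented diagram-feedback examples):

```python
def prepare_feedback(items, limit=3):
    """items: list of (priority, message); higher-priority messages are
    shown first, and the list is truncated so the student is not
    overwhelmed on any single submission."""
    ranked = sorted(items, key=lambda it: it[0], reverse=True)
    return [msg for _, msg in ranked[:limit]]

# Hypothetical feedback on a submitted class diagram.
feedback = [
    (5, "Association between Order and Item is missing."),
    (1, "Class boxes overlap; improve the diagram layout."),
    (3, "Multiplicity on the Customer end should be 1..*."),
    (2, "Attribute names should start with a lowercase letter."),
]
print(prepare_feedback(feedback))
```

Because students may resubmit many times, truncating to the most important comments per iteration still lets all issues surface across the feedback loop.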
Performing automatic exams
In this paper we describe a tool to build software systems which replace the role of the examiner during a typical Italian academic exam in technical/scientific subjects. Such systems are designed so as to exploit the advantages of self-adapted testing (SAT) for reducing the effect of anxiety and of computerised adaptive testing (CAT) for increasing the assessment efficiency. A SAT-like pre-exam determines the starting difficulty level of the following CAT. The exam can thus be dynamically adapted to suit the ability of the student, i.e. by making it more difficult or easier as required.
The examiner simply needs to associate the level of difficulty with a suitable number of initial queries. After posing these queries to a sample group of students and collecting statistics, the tool can automatically associate a level of difficulty with each subsequent query by submitting it to sample groups of students. In addition, the tool can automatically assign a score to the levels and to the queries. Finally, the system collects statistics so as to measure the easiness and selectivity of each query and to evaluate the validity and reliability of an exam.
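The SAT-then-CAT flow above can be sketched as a small adaptive loop: a pre-exam fixes the starting level, each correct answer raises the difficulty, each mistake lowers it, and a query's own difficulty is calibrated from sample-group statistics. The level range, step size and calibration rule below are assumptions for illustration, not the paper's actual algorithm.

```python
def run_cat(start_level, answers, min_level=1, max_level=5):
    """answers: iterable of booleans (correct/incorrect), in order.
    Returns the difficulty level at which each query was posed."""
    level = start_level
    trace = []
    for correct in answers:
        trace.append(level)
        if correct:
            level = min(max_level, level + 1)   # harder after a correct answer
        else:
            level = max(min_level, level - 1)   # easier after a mistake
    return trace

def difficulty_from_stats(correct, attempts, levels=5):
    """Calibrate a query from sample-group data: the smaller the
    proportion of correct answers, the harder the query."""
    p = correct / attempts
    return max(1, min(levels, round((1 - p) * levels)))

print(run_cat(3, [True, True, False, True]))  # [3, 4, 5, 4]
print(difficulty_from_stats(12, 60))          # 20% correct -> level 4
```

The trace shows the adaptation the paper describes: the exam drifts toward the level where the student answers correctly about as often as not, which is where each query is most informative about ability.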