Is comprehension or application the more important skill for first-year computer science students?
Time and performance data were collected for a class of 147 Computer Science 1B students, who carried out a design and programming task based on one they had seen in a previous examination. Given that students had previously worked through the task, this assignment assessed their comprehension of that material. We collected the performance data and correlated it with each student's examination mark to determine whether there was a relationship between performance in the examination and performance in this practical. We also correlated performance in the practical with the time taken to complete it, and with the student's statement as to whether they remembered how they had solved it in their previous attempt. We found that students who remembered having solved the task previously had a significantly higher mean examination mark than those who claimed not to remember it. Unsurprisingly, students also performed better in this assignment if they had performed better in the examination. The mean time to complete the task was significantly lower for students who claimed to remember it. In this task, comprehension of the original material and the ability to recall it mattered more than the ability to apply knowledge to an unseen problem.
Nickolas J. G. Falkner
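As a rough illustration of the analysis this abstract describes, the sketch below compares group means and correlates marks with completion time using SciPy. The data, record layout and choice of tests are hypothetical stand-ins, not the study's actual data or methods.

```python
# Minimal sketch of the kind of analysis described above; all data are invented.
from scipy import stats

# Hypothetical per-student records: (exam_mark, practical_mark, minutes, remembered)
records = [
    (82, 90, 35, True), (55, 60, 70, False), (74, 80, 50, True),
    (48, 52, 85, False), (91, 88, 30, True), (63, 58, 65, False),
]
exam  = [r[0] for r in records]
prac  = [r[1] for r in records]
time_ = [r[2] for r in records]

# Do students who remembered the task have a higher mean exam mark?
remembered = [r[0] for r in records if r[3]]
forgot     = [r[0] for r in records if not r[3]]
t, p = stats.ttest_ind(remembered, forgot)
print(f"mean (remembered) = {sum(remembered)/len(remembered):.1f}, "
      f"mean (forgot) = {sum(forgot)/len(forgot):.1f}, p = {p:.3f}")

# Does practical performance track exam performance, and completion time?
print("exam vs practical:", stats.pearsonr(exam, prac))
print("time vs practical:", stats.pearsonr(time_, prac))
```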
Using ontologies to support customisation and maintain interoperability in distributed information systems with application to the Domain Name System
©2006 IEEE. Global distributed systems must be standards-based to allow interoperability between all of their components. While this guarantees interoperability, it often causes local inflexibility and an inability to adapt to specialised local requirements. We show how local flexibility and global consistency can coexist by changing the way we represent these systems. The proven technologies already in use in the Semantic Web, which support and interpret metadata annotation, provide a well-tested starting point: we can use OWL ontologies and RDF to describe distributed systems using a knowledge-based approach. This allows us to maintain separate local and global operational spaces, which in turn gives us local flexibility and global consistency. The annotated and well-defined data is better structured, more easily maintained and less prone to errors, since its purpose can be clearly determined prior to use. To illustrate the application of our approach to distributed systems, we present our implementation of an ontologically based Domain Name System (DNS) server and client. We also present performance figures demonstrating that this approach does not add significant overhead to system performance.
Nickolas J. G. Falkner, Paul D. Coddington, Andrew L. Wendelborn
Bridging the gap between the semantic web and existing network services
©2006 IEEE. This paper presents an overview of a mechanism for bridging the gap between Semantic Web data and services and existing network-based services that are not semantically annotated or do not meet the requirements of Semantic Web-based applications. The Semantic Web is a relatively new set of technologies that interoperate well with each other but often require mediation, translation or wrapping to interoperate with existing network-based services. Seen as an extension of network-based services and the WWW, the Semantic Web constitutes an expanding system in which integrating and developing services can require significant effort while still providing seamless service to users. New components in a system must interoperate with the existing components, and their use of protocols and shared data must be structurally and semantically equivalent. The new system must continue to meet the original system requirements as well as providing the new features or facilities. We propose a new model of network services using a knowledge-based approach that defines services and their data in terms of an ontology that can be shared with other components.
Nickolas J. G. Falkner, Paul D. Coddington, Andrew L. Wendelborn
Optimising performance in network-based information systems: Virtual organisations and customised views
©2006 IEEE. Network-based information systems use well-defined standards to ensure interoperability, and have a tightly coupled relationship between their internal data representation and the external network representation. Virtual organisations (VOs), whose members share a problem-solving purpose rather than a location-based or formal organisation, constitute an environment in which these standards may not meet user requirements: a virtual organisation has no formal body to manage change requests for the standards, so user requirements can go unmet. We show how decoupling the internal and external representations, through the use of ontologies, can enhance the operation of these systems by enabling flexibility and extensibility. We illustrate this by demonstrating a system that implements and enhances the Domain Name System, a global network-based information system. Migrating an existing system to a decoupled, knowledge-driven system is neither simple nor effortless, but it can provide significant benefits.
Nickolas J. G. Falkner, Paul D. Coddington, Andrew L. Wendelborn
Developing an ontology for the domain name system
©2005 IEEE. Ontologies provide a means of modelling and representing a knowledge domain. Such representation, already used in purpose-built distributed information systems, can also be of great value when applied to existing distributed information systems. The Domain Name System (DNS) provides a wide-area distributed name resolution system used extensively across the Internet. Changing the type and nature of resource records stored in the DNS currently requires an extensive Request for Comments (RFC) procedure, which takes a substantial amount of time because the change has to be made globally. We propose an ontology for a DNS zone file, providing a machine-readable codification of the DNS and a mechanism for local changes to the stored and represented structure of DNS records, using the extensible nature of OWL to allow local variations without going through the manual RFC procedure. This ontologically based system replaces a slow manual procedure with a rapid, machine-realisable procedure based on a uniform ontological representation of significant DNS knowledge. This paper discusses the application of ontologies to the DNS and how such an application can be built using OWL, the Web Ontology Language.
Nickolas J. G. Falkner, Paul D. Coddington, Andrew L. Wendelborn
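To make the idea concrete, here is a minimal sketch of representing a zone-file entry as RDF with rdflib. The dns: namespace and every class and property name below are invented for this illustration; they are not the ontology the paper defines.

```python
# Illustrative sketch only: class/property names are hypothetical.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

DNS = Namespace("http://example.org/dns-ontology#")   # hypothetical namespace
g = Graph()
g.bind("dns", DNS)

# Model one zone-file entry ("www.example.org. 3600 IN A 192.0.2.1") as triples.
rec = DNS["www.example.org-A"]
g.add((rec, RDF.type, DNS.ARecord))
g.add((rec, DNS.owner, Literal("www.example.org.")))
g.add((rec, DNS.ttl, Literal(3600)))
g.add((rec, DNS.rdata, Literal("192.0.2.1")))

# Local flexibility: a site can subclass a standard record type to carry
# extra, locally meaningful data without a global RFC change.
g.add((DNS.LocalARecord, RDFS.subClassOf, DNS.ARecord))
g.add((rec, DNS.buildingLocation, Literal("Server room 2")))

print(g.serialize(format="turtle"))
```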
Addressing the challenges of a new digital technologies curriculum: MOOCs as a scalable solution for teacher professional development
England and Australia have introduced new learning areas, teaching computer science to children from the first year of school. This is a significant milestone that also raises a number of big challenges: the preparation of teachers and the development of resources at a national scale. Curriculum change is not easy for teachers in any context, and scaled solutions are required to ensure teachers are supported. One educational approach that has gained traction for delivering content to large-scale audiences is the massive open online course (MOOC); however, little is known about what constitutes effective MOOC design, particularly within professional development contexts. To prepare teachers in Australia, we decided to ride the wave of MOOCs, developing a MOOC to deliver free computing content and pedagogy to teachers, with social media integrated to support knowledge exchange and resource building. The MOOC was designed to meet teacher needs, allowing for flexibility, ad hoc interactions, support and the open sharing of resources. In this paper, we describe the process of developing our initiative, and participant engagement and experiences, so that others encountering similar changes and reforms may learn from our experience.
Rebecca Vivian, Katrina Falkner and Nickolas Falkner
Evaluation of concept importance in concept maps mined from lecture notes: computer vs human
Concept maps are commonly used tools for organising and representing knowledge in order to assist meaningful learning. Although the process of constructing concept maps improves learners' cognitive structures, novice students typically need substantial assistance from experts. Alternatively, expert-constructed maps may be given to students, which increases the workload of academics. To overcome this issue, automated concept map extraction has been introduced. One of its key limitations is the lack of an evaluation framework to measure the quality of machine-extracted concept maps. At present, researchers in this area rely on human experts' judgement, or on expert-constructed maps as a gold standard, to measure the relevance of extracted knowledge components. However, in the educational context, and particularly in course materials, the majority of the knowledge presented is relevant to the learner, resulting in a large amount of information that has to be organised. Therefore, this paper introduces a machine-based approach that assesses the relative importance of knowledge components and organises them hierarchically. We compare machine-extracted maps with human judgement, based on expert knowledge and perception, and describe three ranking models for organising domain concepts. The results show that the auto-generated maps correlate positively with human judgement (rs ≈ 1) for well-structured courses with rich grammar (well-fitted contents).
Thushari Atapattu, Katrina Falkner and Nickolas Falkner
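As a toy illustration of ranking extracted concepts by importance before arranging them hierarchically, the sketch below scores concepts by frequency and co-occurrence degree. The paper's three ranking models are not specified here, so this is a generic stand-in with invented data.

```python
# Toy concept-importance ranking; the paper's models are more sophisticated.
from collections import Counter
from itertools import combinations

# Hypothetical concepts extracted per sentence of a lecture note.
sentences = [
    ["stack", "push", "pop"],
    ["stack", "queue", "data structure"],
    ["queue", "enqueue", "dequeue"],
    ["data structure", "stack", "array"],
]

# Signal 1: term frequency across the notes.
freq = Counter(c for s in sentences for c in s)

# Signal 2: co-occurrence degree -- concepts appearing alongside many
# distinct concepts are treated as more central.
neighbours = {}
for s in sentences:
    for a, b in combinations(set(s), 2):
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)
degree = {c: len(n) for c, n in neighbours.items()}

# Combine both signals into one importance score and rank.
score = {c: freq[c] * (1 + degree.get(c, 0)) for c in freq}
for c in sorted(score, key=score.get, reverse=True):
    print(f"{score[c]:3d}  {c}")
```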
Batch matching of conjunctive triple patterns over linked data streams in the internet of things
The Internet of Things (IoT) envisions smart objects collecting and sharing data at a global scale via the Internet. One challenging issue is how to disseminate data to relevant consumers efficiently. This paper leverages semantic technologies such as Linked Data, which can facilitate machine-to-machine (M2M) communications, to build an efficient information dissemination system for the semantic IoT. The system integrates Linked Data streams generated by various data collectors and disseminates matched data to relevant consumers based on the conjunctive triple pattern queries those consumers register with the system. We also design a new data structure, CTP-automata, to meet the high-performance needs of Linked Data dissemination. We evaluate our system using a real-world dataset generated from a Smart Building Project. With CTP-automata, the proposed system can disseminate Linked Data an order of magnitude faster than the existing approach with thousands of registered conjunctive queries.
Yongrui Qin, Quan Z. Sheng, Nickolas J. G. Falkner, Ali Shemshadi, Edward Curry
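For intuition, here is a naive conjunctive triple pattern matcher in Python: it scans the batch once per pattern and joins variable bindings. This per-query scanning is exactly the overhead a shared structure like the paper's CTP-automata is designed to avoid; the query and stream shown are invented.

```python
# Naive baseline: match a conjunctive triple pattern query against a batch.
def match(pattern, triple):
    """Return variable bindings if one (s, p, o) pattern matches a triple."""
    bindings = {}
    for p_term, t_term in zip(pattern, triple):
        if p_term.startswith("?"):
            if bindings.get(p_term, t_term) != t_term:
                return None            # same variable bound to two values
            bindings[p_term] = t_term
        elif p_term != t_term:
            return None                # constant term mismatch
    return bindings

def match_conjunctive(patterns, triples, bindings=None):
    """Join bindings across all patterns via backtracking search."""
    bindings = bindings or {}
    if not patterns:
        yield bindings
        return
    head, rest = patterns[0], patterns[1:]
    grounded = tuple(bindings.get(t, t) for t in head)  # apply known bindings
    for triple in triples:
        b = match(grounded, triple)
        if b is not None:
            yield from match_conjunctive(rest, triples, {**bindings, **b})

# Hypothetical Linked Data batch from a smart-building setting.
stream = [
    ("room1", "hasSensor", "s1"), ("s1", "reads", "temperature"),
    ("room2", "hasSensor", "s2"), ("s2", "reads", "humidity"),
]
query = [("?room", "hasSensor", "?s"), ("?s", "reads", "temperature")]
print(list(match_conjunctive(query, stream)))  # [{'?room': 'room1', '?s': 's1'}]
```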
A Probabilistic Analysis of Kademlia Networks
Kademlia is currently the most widely used searching algorithm in P2P (peer-to-peer) networks. This work studies an essential question about Kademlia from a mathematical perspective: how long does it take to locate a node in the network? To answer it, we introduce a random graph K and study how many steps are needed to locate a given vertex in K using Kademlia's algorithm, which we call the routing time. Two slightly different versions of K are studied. In the first, vertices of K are labelled with fixed IDs; in the second, vertices are assumed to have randomly selected IDs. In both cases, we show that the routing time is about c*log(n), where n is the number of nodes in the network and c is an explicitly described constant.
Comment: ISAAC 2013
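A small simulation can illustrate the c*log(n) behaviour. The sketch below builds an idealised Kademlia-style network, keeping one contact per XOR bucket, and measures greedy routing hops against log2(n). This simplified model (k = 1, distinct uniformly random IDs) is an assumption of the sketch, not the paper's exact model.

```python
# Idealised Kademlia-style greedy XOR routing; hop counts grow like c*log(n).
import math
import random

BITS = 32

def mean_hops(n, trials=200):
    ids = random.sample(range(2 ** BITS), n)          # distinct random node IDs
    # Bucket i of node u holds nodes whose XOR distance from u has its highest
    # set bit at position i; keep one random contact per non-empty bucket.
    table = {}
    for u in ids:
        buckets = {}
        for v in ids:
            if v != u:
                buckets.setdefault((u ^ v).bit_length() - 1, []).append(v)
        table[u] = {i: random.choice(vs) for i, vs in buckets.items()}
    total = 0
    for _ in range(trials):
        cur, target = random.sample(ids, 2)
        hops = 0
        while cur != target:
            i = (cur ^ target).bit_length() - 1       # highest differing bit
            cur = table[cur][i]       # each hop strictly lowers that bit,
            hops += 1                 # so the lookup always terminates
        total += hops
    return total / trials

for n in (128, 1024, 4096):
    print(f"n={n:5d}  mean hops={mean_hops(n):5.2f}  log2(n)={math.log2(n):5.2f}")
```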
…