2,304 research outputs found

    Harnessing Technology: new modes of technology-enhanced learning: opportunities and challenges

    A report commissioned by Becta to explore the potential impact on education, staff and learners of new modes of technology-enhanced learning envisaged as becoming available in subsequent years. A generative framework, developed by the researchers, is described; it was used as an analytical tool to relate the possibilities of the technology described to learning and teaching activities. This report is part of the curriculum and pedagogy strand of Becta's programme of managed research in support of the development of Harnessing Technology: Next Generation Learning 2008-14, a system-wide strategy for technology in education and skills. Between April 2008 and March 2009, the project carried out research, in three iterative phases, into the future of learning with technology. The research has drawn from, and aims to inform, all UK education sectors.

    Unsupervised clinical skills training in nursing education: Active student involvement in the development of a technology-based learning tool

    PhD thesis in Health, Medicine and Welfare.

    This thesis describes the process of active student involvement in the development of a technology-based learning tool for clinical skills training. The thesis also explores how a technology-based learning tool can facilitate unsupervised learning and discusses how students can become increasingly self-directed learners. Acquiring clinical skills is an especially demanding activity for nursing students, who need to combine components from the psychomotor, cognitive, and affective learning domains. Clinical skills are traditionally taught using a combination of real-life rehearsals during practical placements and simulation of different clinical nursing activities in clinical skills laboratories (CSL). Claims of diminished learning opportunities during practical placements have led to a growing emphasis on the importance of clinical skills training in the faculties' CSLs. Accordingly, there has been increasing interest in methods that can help students obtain the necessary skills in the CSL. In line with general technological advancements in society, these methods have increasingly involved different technological components. New policy initiatives and a growing literature within higher education are calling for students not only to be consulted during the development of learning strategies, but also to become actively involved in the creation of their own learning experiences. Consequently, a frequent training method for clinical skills learning within nursing education, and for higher education in general, is unsupervised training activities in which students must initiate their own learning processes. On this basis, studies of active student involvement in the development of a technology-based learning tool for unsupervised clinical skills training would be a valuable contribution to nursing education research. The aim of the thesis has been twofold: (I) to explore the process of active student involvement in the development of a technology-based learning tool, and (II) to explore how this technology-based learning tool can facilitate unsupervised clinical skills learning. To pursue this aim, the thesis adopts a qualitative research design with an explorative approach. Since end users and active student involvement are key elements, the thesis follows a participatory design approach entailing four different stages (exploration of work, discovery process, prototyping, and investigation of utilization). The exploration of work stage is described in Paper I, where the aim was to explore student perceptions of current clinical skills training. The findings describe the students' current perceptions of the physical, organizational and psychosocial learning environment. In summary, students report that they seek, lack and crave more instruction concerning what and how to learn clinical skills procedures. [...

    Machine Learning

    Machine Learning can be defined in various ways as a scientific domain concerned with the design and development of theoretical and implementation tools that allow building systems with some human-like intelligent behaviour. More specifically, machine learning addresses the ability of systems to improve automatically through experience.
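    As a minimal, hedged illustration of "improving automatically through experience" (the example, dataset and library choice are ours, not the abstract's), the sketch below trains a generic classifier on increasing amounts of labelled data and shows that held-out accuracy tends to rise with experience:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy experiment: the more labelled experience (training examples) the model
# sees, the better it tends to perform on unseen examples.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

for n in (50, 200, len(X_train)):
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n} examples -> accuracy {acc:.2f}")
```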

    TOWARDS BUILDING INTELLIGENT COLLABORATIVE PROBLEM SOLVING SYSTEMS

    Historically, Collaborative Problem Solving (CPS) systems have focused mainly on Human-Computer Interaction (HCI) issues, such as providing a good communication experience among participants, whereas Intelligent Tutoring Systems (ITS) focus both on HCI issues and on leveraging Artificial Intelligence (AI) techniques in their intelligent agents. This dissertation seeks to narrow the gap between CPS systems and ITS by adopting methods used in ITS research. To move towards this goal, we focus on analyzing interactions with textual inputs in online learning systems such as DeepTutor and Virtual Internships (VI) to understand their semantics and underlying intents. To address the problem of assessing student-generated short text, this research first explores data-driven machine learning models coupled with expert-generated as well as general text analysis features. Second, it explores a method that utilizes knowledge graph embeddings for assessing student answers in ITS. Finally, it explores a method using only standard reference examples generated by a human teacher; such a method is useful when a new system has just been deployed and no student data are yet available. To handle negation in tutorial dialogue, this research explored a Long Short-Term Memory (LSTM) based method. The advantage of this method is that it requires no human-engineered features and performs comparably to other models that use human-engineered features. Another important analysis in this research is identifying the speech acts in the conversation utterances of multiple players in VI. Among various models, a neural network model trained with noise labels performed better in categorizing the speech acts of the utterances. The learners' professional skill development in VI is characterized by the distribution of SKIVE elements, the components of epistemic frames. Inferring the population distribution of these elements could help to assess the learners' skill development. This research used a Markov method to infer the population distribution of SKIVE elements, namely the stationary distribution of the elements (see the sketch below). While studying various aspects of interactions in our targeted learning systems, we motivate our research towards replacing the human mentor or tutor with an intelligent agent. Introducing an intelligent agent in place of a human helps to reduce cost as well as to scale up the system.
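    As a concrete illustration of the stationary-distribution idea mentioned above (not the dissertation's actual implementation), the following sketch estimates a transition matrix over SKIVE-like elements from hypothetical counts and computes its stationary distribution; the element names and numbers are purely illustrative:

```python
import numpy as np

# Hypothetical SKIVE-style elements and a transition-count matrix estimated
# from sequences of coded utterances (illustrative numbers only).
elements = ["Skills", "Knowledge", "Identity", "Values", "Epistemology"]
counts = np.array([
    [20,  5,  3,  2,  1],
    [ 6, 18,  4,  3,  2],
    [ 2,  4, 15,  5,  3],
    [ 1,  3,  6, 12,  4],
    [ 2,  2,  3,  5, 10],
], dtype=float)

# Row-normalise the counts to obtain a transition probability matrix P.
P = counts / counts.sum(axis=1, keepdims=True)

# The stationary distribution pi satisfies pi = pi P; it is the left
# eigenvector of P for eigenvalue 1, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()

for name, prob in zip(elements, pi):
    print(f"{name}: {prob:.3f}")
```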

    EDM 2011: 4th international conference on educational data mining : Eindhoven, July 6-8, 2011 : proceedings


    An evaluation of identity in online social networking: distinguishing fact from fiction

    Online social networks are understood to replicate the real-life connections between people. As the technology matures, more people are joining social networking communities such as MySpace (www.myspace.com) and Facebook (www.facebook.com). These online communities provide the opportunity for individuals to present themselves and maintain social interactions through their profiles. Such traces in profiles can be used as evidence in deciding the level of trust with which to imbue individuals when making access control decisions. However, online profiles have serious implications for the reality of identity disclosure. There are many reasons why someone may choose not to reveal their true self, which sometimes leads to misidentification or deception. On one hand, the structure of online profiles allows anonymity, which gives users the opportunity to create a persona that may not represent their true identity. On the other hand, we often play multiple identities in different contexts where such behaviour is acceptable. However, realizing the context for each identity representation depends on the individual. As a result, some represented identities will be essentially real, if edited for public view, some will be disguised, and others will be fictitious or humorous. The millions of social network profiles, and billions of connections between them, make it difficult to formalize an automated approach to differentiating fact from fiction in online self-described identities. How can we be sure with whom we are interacting, and whether these individuals or groups are being truthful with the online identities they present to the rest of the community? What tools and techniques can be used to gather, organize, and explore the available data for informing the level of honesty that should be entrusted to an individual? Can we verify the validity of an identity automatically, based on the available information online? We aim to evaluate identity representation online and examine how identity can be verified in a less trusted online community. We propose a personality classifier model to identify a user's personality (such as expressive, valid, active, positive, popular, sociable and traceable) using traces of 2.2 million profile features collected from MySpace. We use data mining techniques and social network analysis to extract significant patterns in the data and network structure, and improve the classifier during the cycle of development. We evaluate our classifier model on profiles with known identities such as 'real' and 'fake'. Our results indicate that by utilizing people's online, self-reported information, personality, and their network of friends and interactions, we are able to provide evidence for validating the type of identity in a manner that is both accurate and scalable.
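    The following is a minimal sketch of the kind of profile-based classification described above, assuming hypothetical feature names, a toy labelling rule, and a generic model; the thesis's actual features, personality dimensions and classifier are not specified here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical per-profile features (e.g. counts of friends, comments,
# interests, and activity indicators), with synthetic values for illustration.
rng = np.random.default_rng(0)
X = rng.random((500, 4))                   # 500 profiles, 4 numeric features
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)  # 1 = 'real', 0 = 'fake' (toy labelling rule)

# Train a classifier on profiles with known identities and evaluate it on held-out profiles.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```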

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p.5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Halfway through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense in this paper of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure.

    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT in the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central for the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will be aiming to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings a lot of expertise on ontologies together, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
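    As a small, hedged illustration of the ontology handling discussed above (merging two pre-existing ontologies into a third and querying the result), the sketch below uses rdflib with two tiny hypothetical vocabularies; it is not AKT's own tooling, and the simple union shown here sidesteps the harder problem of ontology mapping and conflicting references:

```python
from rdflib import Graph

# Two tiny, hypothetical RDF fragments standing in for pre-existing ontologies.
onto_a = """
@prefix ex: <http://example.org/a#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
ex:Lecturer rdfs:subClassOf ex:Person .
"""
onto_b = """
@prefix ex: <http://example.org/a#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
ex:Professor rdfs:subClassOf ex:Lecturer .
"""

g_a, g_b = Graph(), Graph()
g_a.parse(data=onto_a, format="turtle")
g_b.parse(data=onto_b, format="turtle")

# 'Merging' here is simply the union of the two triple sets; real ontology
# mapping (resolving conflicts of reference) is a much harder problem.
merged = g_a + g_b

# Query the merged ontology for all subclass relationships.
query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?sub ?super WHERE { ?sub rdfs:subClassOf ?super . }
"""
for sub, sup in merged.query(query):
    print(sub, "is a subclass of", sup)
```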

    Blending MOOC in Face-to-Face Teaching and Studies


    The Practitioner’s Panacea for Measuring Learner-Centeredness?

    The Decibel Analysis for Research in Teaching (DART; Owens et al., 2017), a sound-based metric of learner-centeredness, is highly accessible, requires no training, and can be conducted with minimal classroom observations; yet, DART has not been evaluated in comparison with other validated metrics or in consideration of potentially confounding classroom characteristics (e.g. enrollment, classroom size, number of doors). We analyzed recordings from 42 class sessions of an undergraduate biology course with DART, the Reformed Teaching Observation Protocol (RTOP), and nine classroom characteristics. We found that enrollment was the best single predictor of the DART output of learner-centeredness, percent Multiple Voice
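    As a rough, illustrative sketch (not the published DART algorithm) of how a classroom sound-level time series might be binned into single-voice versus multiple-voice activity and summarised as a "percent multiple voice" figure, assuming hypothetical decibel thresholds and synthetic data:

```python
import numpy as np

def percent_voice_modes(db_levels, single_voice_db=55.0, multiple_voice_db=65.0):
    """Classify each sampled sound level and return the shares of class time.

    db_levels: 1-D array of classroom sound levels (dB), one sample per time step.
    The thresholds are hypothetical; DART's actual decision rules differ.
    """
    levels = np.asarray(db_levels, dtype=float)
    multiple = levels >= multiple_voice_db              # e.g. many students talking at once
    single = (levels >= single_voice_db) & ~multiple    # e.g. a single voice (lecture)
    return 100.0 * multiple.mean(), 100.0 * single.mean()

# Hypothetical 50-minute class sampled once per second: lecture, then discussion.
rng = np.random.default_rng(1)
recording = np.concatenate([
    rng.normal(58, 2, 2000),   # mostly single-voice lecture
    rng.normal(68, 3, 1000),   # group discussion
])
pct_multiple, pct_single = percent_voice_modes(recording)
print(f"multiple voice: {pct_multiple:.1f}%  single voice: {pct_single:.1f}%")
```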

    A Computational Academic Integrity Framework

    The growing scope and changing nature of academic programmes provide a challenge to the integrity of traditional testing and examination protocols. The aim of this thesis is to introduce an alternative to the traditional approaches to academic integrity, bridging the anonymity gap and empowering instructors and academic administrators with new ways of maintaining academic integrity that preserve privacy, minimize disruption to the learning process, and promote accountability, accessibility and efficiency. This work aims to initiate a paradigm shift in academic integrity practices. Research in the area of learner identity and authorship assurance is important because the award of course credits to unverified entities is detrimental to institutional credibility and public safety. This thesis builds upon the notion of learner identity consisting of two distinct layers (a physical layer and a behavioural layer), where the criteria of identity and authorship must both be confirmed to maintain a reasonable level of academic integrity. To pursue this goal in an organized fashion, the thesis has three sections, each addressing the problem from one of the following perspectives: (a) theoretical, (b) empirical, and (c) pragmatic.
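    A minimal sketch of the two-layer notion above: a combined decision rule in which a submission is accepted only when both the identity check (physical layer) and the authorship check (behavioural layer) pass. The function names, scores and thresholds are hypothetical and are not the thesis's framework:

```python
from dataclasses import dataclass

@dataclass
class IntegrityEvidence:
    identity_score: float    # e.g. from ID/biometric verification (physical layer)
    authorship_score: float  # e.g. from stylometric comparison (behavioural layer)

def verify_submission(evidence: IntegrityEvidence,
                      identity_threshold: float = 0.8,
                      authorship_threshold: float = 0.8) -> bool:
    """Both layers must be confirmed; failing either one rejects the submission."""
    return (evidence.identity_score >= identity_threshold and
            evidence.authorship_score >= authorship_threshold)

# Example: strong identity evidence but weak authorship evidence is rejected.
print(verify_submission(IntegrityEvidence(identity_score=0.95, authorship_score=0.40)))  # False
print(verify_submission(IntegrityEvidence(identity_score=0.90, authorship_score=0.85)))  # True
```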