Assessing context-based learning: Not only rigorous but also relevant
Economic factors are driving significant change in higher education. There is increasing responsiveness to market demand for vocational courses and a growing appreciation of the importance of procedural (tacit) knowledge to service the needs of the Knowledge Economy; the skills in demand are information analysis, collaborative working and 'just-in-time learning'. New pedagogical methods go some way to accommodate these skills, situating learning in context and employing information and communications technology to present realistic simulations and facilitate collaborative exchange. However, what have so far proved resistant to change are the practices of assessment. This paper endorses the case for a scholarship of assessment and proposes the development of technology-supported tools and techniques to assess context-based learning. It also recommends a fundamental rethink of the norm-referenced and summative assessment of propositional knowledge as the principal criterion for student success in universities
Developments in information technology and their implications for psychological research: Disruptive or diffusive change?
The notion of technology-induced disruptive change has generally been applied within academia to teaching and learning. Less explored is the disruption that occurs to research as mainstream technology develops. This article examines the effects of technological change on research in psychology, in particular focussing on the development of web-based empirical research procedures over the past 15 years or so. I discuss the history, challenges and potential of these developments, and put forward some qualified suggestions for some of the future directions that technology will allow research in psychology to take
Web Services Quality Perception of Student Digital Locker Based on Demography and Internet Behavior of University Students
Students who own a computer experience greater benefit from using the student digital locker than students who do not. Discriminant analysis showed that four variables can be used as predictors of the rate of personal computer adoption among students, but the only variable with a significant discriminating effect is usability
4D Technologies: appropriating handheld computers to serve the needs of teachers and learners in rural African settings
Evaluating Demographic Websites: Toward Webometric Criteria
The conventional criteria of website evaluation are widely applied in evaluating online information, which is an important component of information literacy instruction in academic institutions. However, because they take mainly the users' angle and are inherently bibliographic, these criteria tend to be general in nature and fail to differentiate between websites of similar quality. Thus, evaluation criteria from webometric perspectives that utilize measurable data and tangible information are needed for more informed assessment. The purpose of this article is to introduce and apply essential webometric criteria to supplement the conventional criteria and improve information literacy instruction. The article first synthesizes the widely used conventional criteria into Six C's for the sake of simplicity and applicability. Then, the important webometric criteria of popularity, profundity, luminosity, and error-checking are introduced. Next, webometric data collected from the websites of leading demography research institutions in the U.S. are analyzed. The article concludes that while conventional criteria continue to be convenient and useful, particularly for novice web users, a basic set of webometric criteria can serve as a supplementary tool to provide additional insights into evaluating online resources
Supporting local data users in the UK academic community.
Data collection in the UK can be traced back to Roman times and the introduction of five-yearly population censuses; however, it is only in recent history that the acquisition, distribution and analysis of quantitative data in digital format has become possible. 1967 saw the establishment of the SSRC Data Bank at the University of Essex, and the 1970s and 1980s saw the emergence of 'data laboratories' within a number of UK tertiary education institutions. This evolution continued with the formation of the Edinburgh University Data Library (1983) and the Oxford Data Library (1985), and more recently the London School of Economics (LSE) Data Library and the LSE Research Laboratory Data Service. Based at tertiary education institutions, these specialised libraries have developed independently to assist researchers and teachers in the use of quantitative data for analysis and research purposes. With Web technology and advances in telecommunications, this role has continued to develop to include support for a whole range of digital data resources via the National Data Centres. Thus, in this digital age of increased IT literacy, technological exposure and expectation, the data librarian's role is ever more confusing and difficult to identify. This paper discusses the differing areas of expertise within the UK data libraries, with particular reference to their relationship with the National Data Centres and the role of the Data Information Specialists Committee – UK (DISC-UK), as well as the role played by other information staff at 'non-data library' institutions whose work identifies them as potential data librarians
Assessing evaluation procedures for individual researchers: the case of the Italian National Scientific Qualification
The Italian National Scientific Qualification (ASN) was introduced as a prerequisite for applying for tenured associate or full professor positions at state-recognized universities. The ASN is meant to attest that an individual has reached a suitable level of scientific maturity to apply for professorship positions. A five-member panel, appointed for each scientific discipline, is in charge of evaluating applicants by means of quantitative indicators of impact and productivity, and through an assessment of their research profile. Many concerns were raised about the appropriateness of the evaluation criteria, and in particular about the use of bibliometrics for the evaluation of individual researchers. Additional concerns related to the perceived poor quality of the final evaluation reports. In this paper we assess the ASN in terms of the appropriateness of the applied methodology and the quality of the feedback provided to applicants. We argue that the ASN is not fully compliant with best practices for the use of bibliometric indicators in the evaluation of individual researchers; moreover, the quality of the final reports varies considerably across panels, suggesting that measures should be put in place to prevent sloppy practices in future ASN rounds