Moving Usability Testing onto the Web
Abstract: In order to remotely obtain detailed usability data by tracking user behaviors
within a given web site, a server-based usability testing environment has been
created. Web pages are annotated in such a way that arbitrary user actions (such as
"mouse over link" or "click back button") can be selected for logging. In addition,
the system allows the experiment designer to interleave interactive questions into
the usability evaluation, which for instance could be triggered by a particular sequence
of actions. The system works in conjunction with clustering and visualization
algorithms that can be applied to the resulting log file data. A first version of
the system has been used successfully to carry out a web usability evaluation.
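The abstract does not give the system's actual API, but the trigger mechanism it describes (an interactive question fired when a particular sequence of logged actions occurs) can be sketched as follows. This is a minimal illustration; the action names, the question text, and the handler shape are all assumptions, not the paper's implementation:

```python
# Hypothetical sketch: scan a stream of logged user actions and fire an
# interactive question once a configured sequence of actions has occurred.
from collections import deque

def make_trigger(sequence, question):
    """Return a handler that reports `question` when the most recent
    len(sequence) actions match `sequence` in order."""
    window = deque(maxlen=len(sequence))

    def on_action(action):
        window.append(action)
        if list(window) == list(sequence):
            return question  # in the real system: present the question to the user
        return None

    return on_action

# Usage: ask about navigation after "mouse over link" followed by "click back button".
handler = make_trigger(
    ["mouse over link", "click back button"],
    "Why did you return to the previous page?",
)
log = ["load page", "mouse over link", "click back button", "load page"]
fired = [q for q in map(handler, log) if q]
```

A sliding window keeps the check cheap per logged event, which matters when every user action on the site is being recorded.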
Language design for a personal learning environment design language
Approaching technology-enhanced learning from the perspective of a learner, we foster the idea of learning environment design, learner interactions, and tool interoperability. In this paper, we shortly summarize the motivation for our personal learning environment approach and describe the development of a domain-specific language for this purpose as well as its realization in practice. Consequently, we examine our learning environment design language according to its lexis and syntax, the semantics behind it, and pragmatical aspects within a first prototypic implementation. Finally, we discuss strengths, problematic aspects, and open issues of our approach
Constructing experimental indicators for Open Access documents
The ongoing paradigm change in the scholarly publication system ('science is
turning to e-science') makes it necessary to construct alternative evaluation
criteria/metrics which appropriately take into account the unique
characteristics of electronic publications and other research output in digital
formats. Today, major parts of scholarly Open Access (OA) publications and the
self-archiving area are not well covered in the traditional citation and
indexing databases. The growing share and importance of freely accessible
research output demands new approaches/metrics for measuring and evaluating
these new types of scientific publications. In this paper we propose a
simple quantitative method which establishes indicators by measuring the
access/download pattern of OA documents and other web entities of a single web
server. The experimental indicators (search engine, backlink and direct access
indicator) are constructed based on standard local web usage data. This new
type of web-based indicator is developed to model the specific demand for
better study/evaluation of the accessibility, visibility and interlinking of
open accessible documents. We conclude that e-science will need new stable
e-indicators.
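The paper's exact log-processing pipeline is not reproduced here, but the three experimental indicators it names (search engine, backlink, direct access) are derived from standard web usage data, which suggests a referrer-based tally. A minimal sketch under that assumption follows; the host lists and field names are hypothetical:

```python
# Hedged sketch: bucket hits into search-engine, backlink, and direct-access
# indicators using the referrer field of standard web usage data.
from urllib.parse import urlparse

SEARCH_ENGINES = {"www.google.com", "www.bing.com"}  # assumed host list
OWN_HOST = "example-repository.org"                  # assumed server host

def classify(referrer: str) -> str:
    """Map one referrer string to an indicator bucket."""
    if not referrer or referrer == "-":
        return "direct"      # empty referrer: direct access indicator
    host = urlparse(referrer).netloc
    if host in SEARCH_ENGINES:
        return "search"      # search engine indicator
    if host and host != OWN_HOST:
        return "backlink"    # external inbound link: backlink indicator
    return "internal"        # same-site navigation, excluded from indicators

def tally(referrers):
    counts = {"search": 0, "backlink": 0, "direct": 0, "internal": 0}
    for r in referrers:
        counts[classify(r)] += 1
    return counts

counts = tally([
    "-",
    "https://www.google.com/search?q=oa+metrics",
    "https://blog.example.net/post",
    "https://example-repository.org/docs/paper1",
])
```

Counting per-document rather than per-server would follow the same classification step, keyed additionally by the requested URL path.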
Security in online learning assessment towards an effective trustworthiness approach to support e-learning teams
(c) 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
This paper proposes a trustworthiness model for the design of secure learning assessment in online collaborative learning groups. Although computer-supported collaborative learning has been widely adopted in many educational institutions over the last decade, drawbacks still exist that limit its potential in collaborative learning activities. Among these limitations, we investigate information security requirements in online assessment (e-assessment), which can be developed in collaborative learning contexts. Although information security enhancements have been developed in recent years, to the best of our knowledge, integrated and holistic security models have not yet been fully realized. Even when advanced security methodologies and technologies are deployed in Learning Management Systems, many types of vulnerabilities remain open and unsolved. Therefore, new models such as trustworthiness approaches can overcome these shortcomings and support e-assessment requirements for e-Learning. To this end, a trustworthiness model is designed to guide a holistic security model for online collaborative learning through effective trustworthiness approaches. In addition, since users' trustworthiness analysis involves large amounts of ill-structured data, a parallel processing paradigm is proposed to build relevant information modeling trustworthiness levels for e-Learning.
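The abstract mentions a parallel processing paradigm for reducing large amounts of ill-structured trust data to trustworthiness levels, without detailing it. As a hedged illustration only, such a reduction might look like the following, where the per-user scores, the thresholds, and the level names are all assumptions:

```python
# Hypothetical sketch: reduce each user's raw trust scores (0..1) to a
# discrete trustworthiness level, processing users in parallel.
from concurrent.futures import ThreadPoolExecutor

def user_trust_level(item):
    """Map one (user, scores) pair to a coarse trust level."""
    user, scores = item
    mean = sum(scores) / len(scores)
    level = "high" if mean >= 0.7 else "medium" if mean >= 0.4 else "low"
    return user, level

events = {
    "alice": [0.9, 0.8, 0.75],  # assumed per-assessment trust scores
    "bob": [0.5, 0.45],
    "carol": [0.2, 0.3, 0.1],
}
with ThreadPoolExecutor(max_workers=2) as pool:
    levels = dict(pool.map(user_trust_level, events.items()))
```

Because each user's reduction is independent, the same map step distributes naturally over a cluster when the data no longer fits one machine.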
Sciunits: Reusable Research Objects
Science is conducted collaboratively, often requiring knowledge sharing about
computational experiments. When experiments include only datasets, they can be
shared using Uniform Resource Identifiers (URIs) or Digital Object Identifiers
(DOIs). An experiment, however, seldom includes only datasets, but more often
includes software, its past execution, provenance, and associated
documentation. The Research Object has recently emerged as a comprehensive and
systematic method for aggregation and identification of diverse elements of
computational experiments. While a necessary method, mere aggregation is not
sufficient for the sharing of computational experiments. Other users must be
able to easily recompute on these shared research objects. In this paper, we
present the sciunit, a reusable research object in which aggregated content is
recomputable. We describe a Git-like client that efficiently creates, stores,
and repeats sciunits. We show through analysis that sciunits repeat
computational experiments with minimal storage and processing overhead.
Finally, we provide an overview of sharing and reproducible cyberinfrastructure
based on sciunits, which is gaining adoption in the domain of geosciences.
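The abstract does not describe the sciunit client's internal storage format. As a hedged illustration of how a Git-like client can repeat experiment captures with minimal storage overhead, content-addressed deduplication can be sketched as follows; the class and the example file contents are hypothetical, not the tool's actual design:

```python
# Hypothetical sketch: a content-addressed store, in the spirit of Git,
# so that files shared between repeated experiment runs are stored once.
import hashlib

class ContentStore:
    def __init__(self):
        self.objects = {}  # digest -> bytes (on disk in a real client)

    def put(self, data: bytes) -> str:
        """Store a blob keyed by its SHA-256 digest; identical content
        is stored only once. Returns the digest as the blob's identifier."""
        digest = hashlib.sha256(data).hexdigest()
        self.objects.setdefault(digest, data)
        return digest

    def get(self, digest: str) -> bytes:
        return self.objects[digest]

store = ContentStore()
run1 = [store.put(b"input.csv contents"), store.put(b"model.py contents")]
run2 = [store.put(b"input.csv contents"), store.put(b"results v2")]  # repeated run
unique_blobs = len(store.objects)  # the shared input file is stored once
```

A run is then just a list of digests, so repeating an experiment adds only the blobs that actually changed.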