Progressive Analytics: A Computation Paradigm for Exploratory Data Analysis
Exploring data requires a fast feedback loop from the analyst to the system,
with a latency below about 10 seconds because of human cognitive limitations.
When data becomes large or analysis becomes complex, sequential computations
can no longer be completed in a few seconds and data exploration is severely
hampered. This article describes a novel computation paradigm, called
Progressive Computation for Data Analysis or, more concisely, Progressive
Analytics, which provides a low-latency guarantee at the programming-language
level by performing computations in a progressive fashion. Moving progressive
computation to the language level relieves programmers of exploratory data
analysis systems from implementing the whole analytics pipeline progressively
from scratch, streamlining the implementation of scalable exploratory data
analysis systems. This article describes the new paradigm through a prototype
implementation called ProgressiVis, and explains the requirements it implies
through examples.
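The progressive paradigm can be sketched in plain Python as a computation that emits refined partial results after each chunk, so the caller can update a display within the latency budget instead of waiting for the full pass. This is an illustrative sketch only; the function name and chunking scheme are assumptions, not ProgressiVis's actual API.

```python
from typing import Iterator, Sequence

def progressive_mean(data: Sequence[float],
                     chunk_size: int = 1000) -> Iterator[tuple[float, float]]:
    """Yield (fraction_processed, running_mean) after each chunk,
    so partial results can be rendered long before the data is exhausted."""
    total, count = 0.0, 0
    for start in range(0, len(data), chunk_size):
        chunk = data[start:start + chunk_size]
        total += sum(chunk)
        count += len(chunk)
        yield count / len(data), total / count

# Usage: refresh a visualization after every chunk instead of at the end.
for progress, estimate in progressive_mean(list(range(10_000))):
    pass  # e.g., redraw a chart showing `estimate` at `progress` completion
```

The key property is that each yielded estimate is usable on its own and converges to the exact answer as more chunks are consumed.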
Using mobile RE tools to give end-users their own voice
Researchers highlight end-user involvement in system design as an important concept for developing useful and usable solutions. However, end-user involvement in software engineering is still an open-ended topic. Novel paradigms such as service-oriented computing strengthen the need for more active end-user involvement in order to provide systems that are tailored to individual end-user needs. Our work is based on the fact that the majority of end-users are familiar with mobile devices and use an increasing number of mobile applications. A mobile tool enabling end-user-led requirements elicitation could be just one of many applications installed on end-users' mobile devices. In this paper, we present a framework of end-user involvement in requirements elicitation which motivates our research. The main contribution of our research is a tool-supported requirements elicitation approach allowing end-users to document needs in situ. Furthermore, we present initial evaluation results to highlight the feasibility of on-site end-user-led requirements elicitation.
Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies
A systematic search of the research literature from 1996 through July 2008 identified more than a thousand empirical studies of online learning. Analysts screened these studies to find those that (a) contrasted an online to a face-to-face condition, (b) measured student learning outcomes, (c) used a rigorous research design, and (d) provided adequate information to calculate an effect size. As a result of this screening, 51 independent effects were identified that could be subjected to meta-analysis. The meta-analysis found that, on average, students in online learning conditions performed better than those receiving face-to-face instruction. The difference between student outcomes for online and face-to-face classes, measured as the difference between treatment and control means divided by the pooled standard deviation, was larger in those studies contrasting conditions that blended elements of online and face-to-face instruction with conditions taught entirely face-to-face. Analysts noted that these blended conditions often included additional learning time and instructional elements not received by students in control conditions. This finding suggests that the positive effects associated with blended learning should not be attributed to the media, per se. An unexpected finding was the small number of rigorous published studies contrasting online and face-to-face learning conditions for K-12 students. In light of this small corpus, caution is required in generalizing to the K-12 population because the results are derived for the most part from studies in other settings (e.g., medical training, higher education).
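The effect-size measure described above, the difference between treatment and control means divided by the pooled standard deviation, can be computed as follows. The exact pooling formula shown (a standard Cohen's-d style pooled SD with degrees-of-freedom weighting) is an assumption, since the abstract does not spell it out.

```python
import math
from typing import Sequence

def effect_size(treatment: Sequence[float], control: Sequence[float]) -> float:
    """Standardized mean difference:
    (mean of treatment - mean of control) / pooled standard deviation."""
    def mean(xs: Sequence[float]) -> float:
        return sum(xs) / len(xs)

    def sum_sq_dev(xs: Sequence[float], m: float) -> float:
        return sum((x - m) ** 2 for x in xs)

    mt, mc = mean(treatment), mean(control)
    # Pool the two sample variances, weighted by degrees of freedom.
    pooled_var = (sum_sq_dev(treatment, mt) + sum_sq_dev(control, mc)) \
                 / (len(treatment) + len(control) - 2)
    return (mt - mc) / math.sqrt(pooled_var)
```

For example, a treatment group scoring two points higher on average than a control group whose pooled SD is about 1.58 yields an effect size of roughly 1.26.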
The institutional character of computerized information systems
We examine how important social and technical choices become part of the history of a computer-based information system (CBIS) and embedded in the social structure which supports its development and use. These elements of a CBIS can be organized in specific ways to enhance its usability and performance. Paradoxically, they can also constrain future implementations and post-implementations. We argue that CBIS developed from complex, interdependent social and technical choices should be conceptualized in terms of their institutional characteristics, as well as their information-processing characteristics. The social system which supports the development and operation of a CBIS is one major element whose institutional characteristics can effectively support routine activities while impeding substantial innovation. Characterizing CBIS as institutions is important for several reasons: (1) the usability of CBIS is more critical than the abstract information-processing capabilities of the underlying technology; (2) CBIS that are well-used and have stable social structures are more difficult to replace than those with less developed social structures and fewer participants; (3) CBIS vary from one social setting to another according to the ways in which they are organized and embedded in organized social systems. These ideas are illustrated with the case study of a failed attempt to convert a complex inventory control system in a medium-sized manufacturing firm.
Towards a Theory of Analytical Behaviour: A Model of Decision-Making in Visual Analytics
This paper introduces a descriptive model of the human-computer processes that lead to decision-making in visual analytics. A survey of nine models from the visual analytics and HCI literature is presented to account for different perspectives such as sense-making, reasoning, and low-level human-computer interactions. The survey examines the people and computers (entities) presented in the models, the divisions of labour between entities (both physical and role-based), the behaviour of both people and machines as constrained by their roles and agency, and finally the elements and processes which define the flow of data both within and between entities. The survey informs the identification of four observations that characterise analytical behaviour, defined as decision-making facilitated by visual analytics: bilateral discourse, divisions of labour, mixed-synchronicity information flows, and bounded behaviour. Based on these principles, a descriptive model is presented as a contribution towards a theory of analytical behaviour. The future intention is to apply prospect theory, an economic model of decision-making under uncertainty, to the study of analytical behaviour. It is our assertion that applying prospect theory first requires a descriptive model of the processes that facilitate decision-making in visual analytics. We conclude that it is necessary to measure the perception of risk in future work in order to apply prospect theory to the study of analytical behaviour using our proposed model.
Analyzing collaborative learning processes automatically
In this article we describe the emerging area of text classification research focused on the problem of collaborative learning process analysis both from a broad perspective and more specifically in terms of a publicly available tool set called TagHelper tools. Analyzing the variety of pedagogically valuable facets of learners' interactions is a time consuming and effortful process. Improving automated analyses of such highly valued processes of collaborative learning by adapting and applying recent text classification technologies would make it a less arduous task to obtain insights from corpus data. This endeavor also holds the potential for enabling substantially improved on-line instruction both by providing teachers and facilitators with reports about the groups they are moderating and by triggering context sensitive collaborative learning support on an as-needed basis. In this article, we report on an interdisciplinary research project, which has been investigating the effectiveness of applying text classification technology to a large CSCL corpus that has been analyzed by human coders using a theory-based multidimensional coding scheme. We report promising results and include an in-depth discussion of important issues such as reliability, validity, and efficiency that should be considered when deciding on the appropriateness of adopting a new technology such as TagHelper tools. One major technical contribution of this work is a demonstration that an important piece of the work towards making text classification technology effective for this purpose is designing and building linguistic pattern detectors, otherwise known as features, that can be extracted reliably from texts and that have high predictive power for the categories of discourse actions that the CSCL community is interested in.
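The notion of linguistic pattern detectors as features can be illustrated with a minimal sketch: each detector maps an utterance to a count of surface patterns associated with a discourse category. The pattern classes and word lists below are hypothetical examples, not TagHelper's actual feature set.

```python
# Hypothetical pattern detectors: each feature counts surface cues
# associated with one class of discourse action.
PATTERNS: dict[str, tuple[str, ...]] = {
    "question": ("what", "why", "how", "?"),
    "agreement": ("agree", "yes", "right"),
    "reasoning": ("because", "therefore", "so that"),
}

def extract_features(utterance: str) -> dict[str, int]:
    """Map a discussion utterance to a feature vector of pattern counts,
    suitable as input to a downstream text classifier."""
    text = utterance.lower()
    return {name: sum(text.count(p) for p in patterns)
            for name, patterns in PATTERNS.items()}
```

In a full pipeline, vectors like these would be fed to a supervised classifier trained against the human-coded labels; reliably extractable, high-precision detectors are what give the classifier its predictive power.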