Narrative Visualization: Sharing Insights into Complex Data
This paper is a reflection on the emerging genre of narrative visualization, a creative response to the need to share complex data engagingly with the public. In it, we explain how narrative visualization offers authors the opportunity to communicate more effectively with their audience by reproducing and sharing an experience of insight similar to their own. To do so, we propose a two-part model, derived from previous literature, in which insight is understood as both an experience and the product of that experience. We then discuss how the design of narrative visualization should be informed by attempts elsewhere to track the provenance of insights and share them in a collaborative setting. Finally, we present a future direction for research that includes using EEG technology to record neurological patterns during episodes of insight experience as the basis for evaluation.
Reinventing discovery learning: a field-wide research program
© 2017, Springer Science+Business Media B.V., part of Springer Nature. Whereas some educational designers believe that students should learn new concepts through explorative problem solving within dedicated environments that constrain key parameters of their search and then support their progressive appropriation of empowering disciplinary forms, others are critical of the ultimate efficacy of this discovery-based pedagogical philosophy, citing an inherent structural challenge of students constructing historically achieved conceptual structures from their ingenuous notions. This special issue presents six educational research projects that, while adhering to principles of discovery-based learning, are motivated by complementary philosophical stances and theoretical constructs. The editorial introduction frames the set of projects as collectively exemplifying the viability and breadth of discovery-based learning, even as these projects: (a) put to work a span of design heuristics, such as productive failure, surfacing implicit know-how, playing epistemic games, problem posing, or participatory simulation activities; (b) vary in their target content and skills, including building electric circuits, solving algebra problems, driving safely in traffic jams, and performing martial-arts maneuvers; and (c) employ different media, such as interactive computer-based modules for constructing models of scientific phenomena or mathematical problem situations, networked classroom collective "video games," and intercorporeal master–student training practices. The authors of these papers consider the potential generativity of their design heuristics across domains and contexts.
Mapping Big Data into Knowledge Space with Cognitive Cyber-Infrastructure
Big data research has attracted great attention in science, technology, industry and society. It is developing alongside the evolving scientific paradigm, the fourth industrial revolution, and the transformational innovation of technologies. However, its nature and fundamental challenge have not been fully recognized, and a methodology of its own has not yet been formed. This paper explores and answers the following questions: What is big data? What are the basic methods for representing, managing and analyzing big data? What is the relationship between big data and knowledge? Can we find a mapping from big data into knowledge space? What kind of infrastructure is required to support not only big data management and analysis but also knowledge discovery, sharing and management? What is the relationship between big data and the scientific paradigm? What is the nature and fundamental challenge of big data computing? A multi-dimensional perspective is presented toward a methodology of big data computing.
Automated legal sensemaking: the centrality of relevance and intentionality
Introduction: In a perfect world, discovery would be conducted by the senior litigator who is responsible for developing and fully understanding all nuances of their client's legal strategy. Of course, today we must deal with the explosion of electronically stored information (ESI), which is never less than tens of thousands of documents in small cases and now increasingly involves multi-million-document populations for internal corporate investigations and litigations. Therefore, scalable processes and technologies are required as a substitute for the authority's judgment. The approaches taken have typically either substituted large teams of surrogate human reviewers using vastly simplified issue-coding reference materials or employed increasingly sophisticated computational resources with little focus on the quality metrics needed to ensure retrieval consistent with the legal goal. What is required is a system (people, process, and technology) that replicates and automates the senior litigator's human judgment.

In this paper we draw on 15 years of sensemaking research to establish the minimum acceptable basis for conducting a document review that meets the needs of a legal proceeding. There is no substitute for a rigorous characterization of the explicit and tacit goals of the senior litigator. Once a process has been established for capturing the authority's relevance criteria, we argue that a literal translation of requirements into technical specifications does not properly account for the activities or states of affairs of interest. Having only a data warehouse of written records, it is also necessary to discover the intentions of the actors involved in textual communications. We present quantitative results for a process and technology approach that automates effective legal sensemaking.
Mining social network data for personalisation and privacy concerns: A case study of Facebook's Beacon
The popular success of online social networking sites (SNS) such as Facebook makes them a hugely tempting data-mining resource for businesses engaged in personalised marketing. The use of personal information, willingly shared between online friends' networks, intuitively appears to be a natural extension of current advertising strategies such as word-of-mouth and viral marketing. However, the use of SNS data for personalised marketing has provoked outrage amongst SNS users and sharply highlighted the issue of privacy concern. This paper inverts the traditional approach to personalisation by conceptualising the limits of data mining in social networks using privacy concern as the guide. A qualitative investigation of 95 blogs containing 568 comments was collected during the failed launch of Beacon, a third-party marketing initiative by Facebook. Thematic analysis resulted in a taxonomy of privacy concerns that offers online businesses a concrete means to better understand the SNS business landscape, especially with regard to the limits of the use and acceptance of personalised marketing in social networks.
Construction and abstraction: contrasting methods of supporting model building in learning science
Engineering simulations for cancer systems biology
Computer simulation can be used to inform in vivo and in vitro experimentation, enabling rapid, low-cost hypothesis generation and directing experimental design in order to test those hypotheses. In this way, in silico models become a scientific instrument for investigation, and so should be developed to high standards, be carefully calibrated, and have their findings presented in such a way that they may be reproduced. Here, we outline a framework that supports developing simulations as scientific instruments, and we select cancer systems biology as an exemplar domain, with a particular focus on cellular signalling models. We consider the challenges of lack of data, incomplete knowledge and modelling in the context of a rapidly changing knowledge base. Our framework comprises a process to clearly separate scientific and engineering concerns in model and simulation development, and an argumentation approach to documenting models as a rigorous way of recording assumptions and knowledge gaps. We propose interactive, dynamic visualisation tools to enable the biological community to interact with cellular signalling models directly for experimental design. There is a mismatch in scale between these cellular models and the tissue structures that are affected by tumours, and bridging this gap requires substantial computational resource. We present concurrent programming as a technology to link scales without losing important details through model simplification. We discuss the value of combining this technology, interactive visualisation, argumentation and model separation to support the development of multi-scale models that represent biologically plausible cells arranged in biologically plausible structures, modelling cell behaviour, interactions and response to therapeutic interventions.
- …