Toolboxes and handing students a hammer: The effects of cueing and instruction on getting students to think critically
Developing critical thinking skills is a common goal of an undergraduate
physics curriculum. How do students make sense of evidence and what do they do
with it? In this study, we evaluated students' critical thinking behaviors
through their written notebooks in an introductory physics laboratory course.
We compared student behaviors in the Structured Quantitative Inquiry Labs
(SQILabs) curriculum to a control group and evaluated the fragility of these
behaviors through procedural cueing. We found that the SQILabs were generally
effective at improving the quality of students' reasoning about data and making
decisions from data. These improvements in reasoning and sensemaking were
thwarted, however, by a procedural cue. We describe these changes in behavior
through the lens of epistemological frames and task orientation, invoked by the
instructional moves.
Designing as Construction of Representations: A Dynamic Viewpoint in Cognitive Design Research
This article presents a cognitively oriented viewpoint on design. It focuses
on cognitive, dynamic aspects of real design, i.e., the actual cognitive
activity implemented by designers during their work on professional design
projects. Rather than conceiving designing as problem solving - Simon's
symbolic information processing (SIP) approach - or as a reflective practice or
some other form of situated activity - the situativity (SIT) approach - we
consider that, from a cognitive viewpoint, designing is most appropriately
characterised as a construction of representations. After a critical discussion
of the SIP and SIT approaches to design, we present our viewpoint. This
presentation concerns the evolving nature of representations regarding levels
of abstraction and degrees of precision, the function of external
representations, and specific qualities of representation in collective design.
Designing is described at three levels: the organisation of the activity, its
strategies, and its design-representation construction activities (different
ways to generate, transform, and evaluate representations). Even if we adopt a
"generic design" stance, we claim that design can take different forms
depending on the nature of the artefact, and we propose some candidates for
dimensions that allow a distinction to be made between these forms of design.
We discuss the potential specificity of HCI design, and the lack of cognitive
design research occupied with the quality of design. We close our discussion of
representational structures and activities by an outline of some directions
regarding their functional linkages.
The importance of finding the right data: The case of healthcare operations improvement projects
The utilization of data in healthcare improvement projects is currently a very topical subject. Several public and private companies have shown the value of utilizing data to improve operational efficiency. Not all datasets are, however, equally useful – thus, understanding of the data quality is required to ensure correct decision-making. Currently, two streams of literature exist to guide the improvement teams: the literature on operational improvement, e.g. through methods such as Total Quality Management, Lean, and Six Sigma, and the literature on data quality. From the point-of-view of an improvement project team, a linkage between these two streams of literature is missing. This paper aims to bridge the gap between the two streams of literature by helping healthcare improvement teams to assess whether the data quality is sufficient to support decision-making.
The academic framework illustrates how the viewpoint on data quality has shifted from an intrinsic focus in the 1970s, to fitness for use in the 1990s, and finally to describing the specifics of new trends, such as big data and unstructured data, from the 2010s onwards.
Using the case study method, the findings were expanded by observing an improvement project in a private Finnish healthcare company. Together with the project team, I went through an iterative process with five steps, each guided by a distinctive new set of data. Finally, the actual improvement was achieved by gathering the data manually: a dataset that was highly relevant for the end users, but likely intrinsically less robust than the previous datasets.
In conclusion, the current data quality literature offers only modest guidance to improvement teams in choosing the right dataset. Instead, a new model for data quality in healthcare operational improvement was created. The model suggests that teams should first consider whether the dataset is relevant to the goal of the improvement project. After that, the improvement team should consider whether the dataset can add value toward reaching that goal.
After these two steps, the other key data quality attributes come into play, linked to the following four dimensions: accessibility, intrinsic, representational, and contextual quality.
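The two screening steps and the four quality dimensions described above can be read as a simple decision checklist. The sketch below is one possible rendering under assumed names and an assumed 1-5 scoring scale (neither the function names nor the thresholds come from the thesis):

```python
def assess_dataset(relevant_to_goal, adds_value, dimension_scores):
    """Screen a candidate dataset: two gating questions first, then the
    four data quality dimensions. Names, thresholds, and scale are
    illustrative assumptions, not taken from the thesis."""
    if not relevant_to_goal:
        return "reject: not relevant to the improvement goal"
    if not adds_value:
        return "reject: does not add value toward the project goal"
    required = {"accessibility", "intrinsic", "representational", "contextual"}
    missing = required - dimension_scores.keys()
    if missing:
        return "incomplete: missing scores for " + ", ".join(sorted(missing))
    weak = sorted(d for d, s in dimension_scores.items() if s < 3)  # assumed 1-5 scale
    return "accept" if not weak else "review weak dimensions: " + ", ".join(weak)

verdict = assess_dataset(
    relevant_to_goal=True,
    adds_value=True,
    dimension_scores={"accessibility": 4, "intrinsic": 2,
                      "representational": 4, "contextual": 5},
)  # -> "review weak dimensions: intrinsic"
```

Note how the example mirrors the case finding: a manually gathered dataset can pass the two gating questions (relevance, value) even when one of the classical dimensions, here intrinsic quality, is weak.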
Solving Games with Functional Regret Estimation
We propose a novel online learning method for minimizing regret in large
extensive-form games. The approach learns a function approximator online to
estimate the regret for choosing a particular action. A no-regret algorithm
uses these estimates in place of the true regrets to define a sequence of
policies.
We prove the approach sound by providing a bound relating the quality of the
function approximation and the regret of the algorithm. A corollary is that the
method is guaranteed to converge to a Nash equilibrium in self-play so long as
the regrets are ultimately realizable by the function approximator. Our
technique can be understood as a principled generalization of existing work on
abstraction in large games; in our work, both the abstraction as well as the
equilibrium are learned during self-play. We demonstrate empirically the method
achieves higher quality strategies than state-of-the-art abstraction techniques
given the same resources. Comment: AAAI Conference on Artificial Intelligence 201
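The estimate-then-act loop described above can be illustrated with regret matching, a standard no-regret update, here driven by estimated rather than true regrets. This is a minimal sketch, not the paper's implementation; the regret values are hypothetical stand-ins for the output of a learned approximator:

```python
import numpy as np

def regret_matching(estimated_regrets):
    """Turn per-action regret estimates into a policy:
    actions are played in proportion to their positive regret."""
    positive = np.maximum(estimated_regrets, 0.0)
    total = positive.sum()
    if total > 0.0:
        return positive / total
    # No action has positive estimated regret: fall back to uniform play.
    return np.full(len(estimated_regrets), 1.0 / len(estimated_regrets))

# Hypothetical regret estimates from a function approximator
# for three actions at one information set.
policy = regret_matching(np.array([2.0, -1.0, 1.0]))  # -> [2/3, 0, 1/3]
```

In the paper's setting, the quality of these estimates is what matters: the convergence guarantee holds as long as the true regrets are ultimately realizable by the function approximator.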
The Effects of constructing domain-specific representations on coordination processes and learning in a CSCL-environment
Slof, B., Erkens, G., & Kirschner, P. A. (2012). The effects of constructing domain-specific representations on coordination processes and learning in a CSCL-environment. Computers in Human Behavior, 28, 1478-1489. doi:10.1016/j.chb.2012.03.011

This study examined the effects of scripting learners’ use of two types of representational tools (i.e., causal and simulation) on their online collaborative problem-solving. Scripting sequenced the phase-related part-task demands and made them explicit. This entailed (1) defining the problem and proposing multiple solutions (i.e., problem-solution) and (2) evaluating solutions and coming to a definitive solution (i.e., solution-evaluation). The causal tool was hypothesized to be best suited for problem solution and the simulation tool for solution evaluation. Teams of learners in four experimental conditions carried out the part-tasks in a predefined order, but differed in the tools they received. Teams in the causal-only and simulation-only conditions received either a causal or a simulation tool for both part-tasks. Teams in the causal-simulation and simulation-causal conditions received both tools in suited and unsuited order respectively. Results revealed that teams using the tool suited to each part-task constructed more task-appropriate representations and were better able to share and negotiate knowledge. As a consequence, they performed better on the complex learning task. Although all learners individually gained more domain knowledge, no differences were obtained between conditions.
A Machine Learning Approach for Classifying Textual Data in Crowdsourcing
Crowdsourcing represents an innovative approach that allows companies to engage a diverse network of people over the internet and use their collective creativity, expertise, or workforce to complete tasks that were previously performed by dedicated employees or contractors. However, the process of reviewing and filtering the large number of solutions, ideas, or feedback submitted by a crowd is a latent challenge. Identifying valuable inputs and separating them from low-quality contributions that cannot be used by the companies is time-consuming and cost-intensive. In this study, we build upon the principles of text mining and machine learning to partially automate this process. Our results show that it is possible to explain and predict the quality of crowdsourced contributions based on a set of textual features. We use these textual features to train and evaluate a classification algorithm capable of automatically filtering textual contributions in crowdsourcing.
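The pipeline described above, extract textual features and then train a classifier on them, can be sketched with deliberately simple surface features and a toy nearest-centroid classifier. Both the feature set and the classifier here are illustrative stand-ins; the study's actual features and learning algorithm are richer:

```python
def textual_features(text):
    """Map a contribution to a few surface features (illustrative only)."""
    words = text.lower().split()
    n = max(len(words), 1)
    return [
        float(len(words)),               # length in words
        sum(len(w) for w in words) / n,  # average word length
        len(set(words)) / n,             # lexical diversity
    ]

def nearest_centroid(train_texts, labels, new_text):
    """Toy classifier: label a new contribution by the closest
    class centroid in feature space."""
    feats = [textual_features(t) for t in train_texts]
    centroids = {}
    for lab in set(labels):
        rows = [f for f, l in zip(feats, labels) if l == lab]
        centroids[lab] = [sum(col) / len(rows) for col in zip(*rows)]

    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    x = textual_features(new_text)
    return min(centroids, key=lambda lab: dist(centroids[lab], x))

# Hypothetical labeled contributions from a past campaign.
texts = [
    "bad",
    "no",
    "we could improve the process by adding an automated quality check step",
    "consider training staff and tracking defects with a shared dashboard",
]
labels = ["low", "low", "high", "high"]
label = nearest_centroid(texts, labels,
                         "maybe introduce a weekly review of customer feedback")
# -> "high"
```

In practice the features would be scaled and the classifier trained on many labeled contributions; the point of the sketch is only the shape of the filtering step: text in, quality label out.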
Understanding data quality issues in dynamic organisational environments – a literature review
Technology has been the catalyst that has facilitated an explosion of organisational data in terms of its velocity, variety, and volume, resulting in a greater depth and breadth of potentially valuable information, previously unutilised. The variety of data accessible to organisations extends beyond traditional structured data to now encompass previously unobtainable and difficult-to-analyse unstructured data. In addition to exploiting data, organisations now face an even greater challenge of assessing data quality and identifying the impacts of a lack of quality. The aim of this research is to contribute to the data quality literature, focusing on improving the current understanding of business-related Data Quality (DQ) issues facing organisations. This review builds on existing Information Systems literature and proposes further research in this area. Our findings confirm that the current literature lags in recognising new types of data and imminent DQ impacts facing organisations in today’s dynamic environment of the so-called “Big Data”. Insights clearly identify the need for further research on DQ, in particular in relation to unstructured data. The review also raises questions regarding new DQ impacts and implications for organisations in their quest to leverage the variety of available data types to provide richer insights.
An information assistant system for the prevention of tunnel vision in crisis management
In the crisis management environment, tunnel vision is a set of biases in decision makers’ cognitive processes that often leads to an incorrect understanding of the real crisis situation, biased perception of information, and improper decisions. The tunnel vision phenomenon is a consequence of both the challenges of the task and the natural limitations of human cognition. An information assistant system is proposed with the purpose of preventing tunnel vision. The system serves as a platform for monitoring the ongoing crisis event. All information goes through the system before it arrives at the user. The system enhances data quality, reduces data quantity, and presents the crisis information in a manner that prevents or repairs the user’s cognitive overload. While working with such a system, the users (crisis managers) are expected to be more likely to stay aware of the actual situation, stay open-minded to possibilities, and make proper decisions.