41,096 research outputs found
Rules and principles in cognitive diagnoses
Cognitive simulation is concerned with constructing process models of human cognitive behavior. Our work on the ACM system (Automated Cognitive Modeler) is an attempt to automate this process. The basic assumption is that all goal-oriented cognitive behavior involves search through some problem space. Within this framework, the task of cognitive diagnosis is to identify the problem space in which the subject is operating, identify solution paths used by the subject, and find conditions on the operators that explain those solution paths and that predict the subject's behavior on new problems. The work presented in this paper uses techniques from machine learning to automate the tasks of finding solution paths and operator conditions. We apply this method to the domain of multi-column subtraction and present results that demonstrate ACM's ability to model incorrect subtraction strategies. Finally, we discuss the difference between procedural bugs and misconceptions, proposing that errors due to misconceptions can be viewed as violations of principles for the task domain.
Evaluating empowerment and control of HE e-learning in a secure environment
With the increased spread of HE distance learning into a wide variety of contexts, it is important for us to understand the factors involved in its successful deployment for students. E-learning has great potential to support effective and empowering HE distance learning (Wilson, 2007; Adams, 2005; Hughes, 2005). However, within two secure environments, prisons and the health service, the factors involved are complex. This paper reviews HE e-learning technology perceptions within these two contrasting contexts from the perspectives of 225 students and stakeholders. Previous research has detailed the literature's limitations in obtaining students' perspectives of e-learning (Conole et al, 2006). These limitations are compounded when other stakeholder perceptions are not integrated (Sun et al, 2007; Adams et al, 2005; Millen et al, 2002). This paper develops and applies an e-learning framework for student and stakeholder perceptions. This social psychological framework is based on previous practice-based e-learning studies and is used here to synthesise two large-scale case studies. The framework focuses on three concepts: learner Access (e.g. learning design, technology design, physical access), Awareness (e.g. of resources, their usage and support for e-learning tasks) and Acceptability (e.g. trust, privacy, aesthetics, engagement). Students' and stakeholders' perceptions identified high levels of student empowerment through e-learning, while still requiring further pedagogical tailoring and an awareness of support. However, serious problems within these contexts have identified blocks to e-learning through stakeholders' perceptions and fears about acceptability (i.e. issues of risk and trust). Ultimately, by understanding competing perceptions and needs within these complex environments we can support the effective technological development, pedagogical design and deployment of e-learning systems.
Evaluating the End-User Experience of Private Browsing Mode
Nowadays, all major web browsers have a private browsing mode. However, the mode's benefits and limitations are not well understood. Through the use of survey studies, prior work has found that most users are either unaware of private browsing or do not use it. Further, those who do use private browsing generally have misconceptions about what protection it provides. However, prior work has not investigated why users misunderstand the benefits and limitations of private browsing. In this work, we do so by designing and conducting a three-part study: (1) an analytical approach combining cognitive walkthrough and heuristic evaluation to inspect the user interface of private mode in different browsers; (2) a qualitative, interview-based study to explore users' mental models of private browsing and its security goals; (3) a participatory design study to investigate why existing browser disclosures, the in-browser explanations of private browsing mode, do not communicate the security goals of private browsing to users. Participants critiqued the browser disclosures of three web browsers (Brave, Firefox, and Google Chrome) and then designed new ones. We find that the user interface of private mode in different web browsers violates several well-established design guidelines and heuristics. Further, most participants had incorrect mental models of private browsing, influencing their understanding and usage of private mode. Additionally, we find that existing browser disclosures are not only vague but also misleading: none of the three studied browser disclosures communicates or explains the primary security goal of private browsing. Drawing from the results of our user study, we extract a set of design recommendations that we encourage browser designers to validate, in order to design more effective and informative browser disclosures related to private mode.
Encouraging Privacy-Aware Smartphone App Installation: Finding out what the Technically-Adept Do
Smartphone apps can harvest very personal details from the phone with ease, which is a particular privacy concern. Unthinking installation of untrustworthy apps constitutes risky behaviour, which could be due to poor awareness or a lack of know-how: knowledge of how to go about protecting privacy. It seems that smartphone owners proceed with installation, ignoring any misgivings they might have, and thereby irretrievably sacrifice their privacy.
Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations carefully crafted either at training or at test time can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, has been investigated in the research field of adversarial machine learning. In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from pioneering, earlier work on the security of non-deep learning algorithms up to more recent work aimed at understanding the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks. We report interesting connections between these apparently different lines of work, highlighting common misconceptions related to the security evaluation of machine-learning algorithms. We review the main threat models and attacks defined to this end, and discuss the main limitations of current work, along with the corresponding future challenges towards the design of more secure learning algorithms.
Comment: Accepted for publication in Pattern Recognition, 201
Iowa Department for the Blind Performance Report, FY 2006
Agency Performance Report