Rules and principles in cognitive diagnoses
Cognitive simulation is concerned with constructing process models of human cognitive behavior. Our work on the ACM system (Automated Cognitive Modeler) is an attempt to automate this process. The basic assumption is that all goal-oriented cognitive behavior involves search through some problem space. Within this framework, the task of cognitive diagnosis is to identify the problem space in which the subject is operating, identify solution paths used by the subject, and find conditions on the operators that explain those solution paths and that predict the subject's behavior on new problems. The work presented in this paper uses techniques from machine learning to automate the tasks of finding solution paths and operator conditions. We apply this method to the domain of multi-column subtraction and present results that demonstrate ACM's ability to model incorrect subtraction strategies. Finally, we discuss the difference between procedural bugs and misconceptions, proposing that errors due to misconceptions can be viewed as violations of principles for the task domain.
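The idea of modeling an incorrect strategy as a faulty operator can be sketched in a few lines (a minimal illustration, not the ACM system itself; the function names and the particular "smaller-from-larger" bug are illustrative assumptions):

```python
# A minimal sketch (not the ACM system) of expressing an incorrect
# subtraction strategy as a condition on an operator. The classic
# "smaller-from-larger" bug never borrows: in each column it subtracts
# the smaller digit from the larger one, regardless of position.

def correct_subtract(top, bottom):
    """Standard subtraction, used as the reference model."""
    return top - bottom

def buggy_subtract(top, bottom):
    """Column-wise subtraction exhibiting the smaller-from-larger bug."""
    top_digits = [int(d) for d in str(top)]
    bot_digits = [int(d) for d in str(bottom).rjust(len(str(top)), "0")]
    # In each column, subtract the smaller digit from the larger one,
    # never borrowing -- a procedural bug, not a random error.
    result_digits = [abs(t - b) for t, b in zip(top_digits, bot_digits)]
    return int("".join(str(d) for d in result_digits))

# The buggy model agrees with correct subtraction when no borrowing is
# needed, and predicts a characteristic error when borrowing is required.
print(buggy_subtract(54, 23))  # 31, same as 54 - 23
print(buggy_subtract(52, 38))  # 26, while 52 - 38 = 14
```

A diagnosis system in this spirit would compare a subject's answers against such candidate operator conditions and keep the condition that best explains the observed solution paths.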
EASe: integrating search with learned episodes
Weak methods are insufficient to solve complex problems. Constrained weak methods, like hill-climbing, search too little of the problem space. Unconstrained weak methods, like breadth-first search, are intractable. Fortunately, through the integration of multiple weak methods, more powerful problem solvers can be created. We demonstrate that augmenting a constrained weak search method with episodes provides a tractable method for solving a large class of problems. We demonstrate that these episodes can be generated using an unconstrained weak method while solving simple problems from a domain. We provide an analytical model of our approach and empirical results from the logic synthesis domain of VLSI design as well as the classic tile-sliding domain.
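The integration described above can be sketched on a toy search space (a minimal illustration, not EASe; the heuristic, the episode format, and all names are invented for the example):

```python
# Sketch: hill-climbing augmented with learned episodes. An "episode"
# here is a remembered move sequence, found earlier by an unconstrained
# weak method (breadth-first search on a small subproblem), that escapes
# a state where hill-climbing alone would stall.
from collections import deque

GOAL = 10
def neighbors(s):            # toy 1-D search space: move left or right
    return [s - 1, s + 1]
def h(s):                    # heuristic with a local minimum at s = 4
    return abs(GOAL - s) + (3 if 5 <= s <= 7 else 0)

def bfs_episode(start, depth):
    """Unconstrained weak method: short BFS for a move sequence that
    strictly improves on h(start). Intractable in general, but fine on
    small subproblems -- that is where episodes come from."""
    frontier = deque([(start, [])])
    while frontier:
        s, path = frontier.popleft()
        if path and h(s) < h(start):
            return path
        if len(path) < depth:
            for n in neighbors(s):
                frontier.append((n, path + [n]))
    return None

def hill_climb_with_episodes(s, episodes):
    """Constrained weak method, falling back on episodes when stuck."""
    while s != GOAL:
        best = min(neighbors(s), key=h)
        if h(best) < h(s):
            s = best
        else:                          # local minimum: replay an episode
            if s not in episodes:
                episodes[s] = bfs_episode(s, depth=6)
            for step in episodes[s]:
                s = step
    return s

episodes = {}
print(hill_climb_with_episodes(0, episodes))  # 10
```

Pure hill-climbing stalls at s = 4; the learned episode carries the search across the penalty region, after which greedy descent finishes the job.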
B-LOG: A branch and bound methodology for the parallel execution of logic programs
We propose a computational methodology, "B-LOG", which offers the potential for an effective implementation of Logic Programming on a parallel computer. We also propose a weighting scheme to guide the search process through the graph, and we apply the concepts of parallel "branch and bound" algorithms in order to perform a "best-first" search using an information-theoretic bound. The concept of a "session" is used to speed up the search process over a succession of similar queries. Within a session, we strongly modify the bounds in a local database, while bounds kept in a global database are weakly modified to provide a better initial condition for other sessions. We also propose an implementation scheme based on a database machine using "semantic paging", and a "B-LOG processor" based on a scoreboard-driven controller.
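The best-first branch-and-bound strategy at the core of such a search can be sketched generically (a minimal skeleton only; B-LOG's actual bounds are information-theoretic and its search graph comes from a logic program, neither of which is modeled here):

```python
# Generic best-first branch and bound: nodes are expanded in order of
# their lower bound, and any branch whose bound meets or exceeds the
# best solution found so far is pruned.
import heapq

def best_first_bb(root, expand, bound, is_solution, cost):
    best_cost, best_node = float("inf"), None
    frontier = [(bound(root), 0, root)]    # (bound, tie-breaker, node)
    tie = 0
    while frontier:
        b, _, node = heapq.heappop(frontier)
        if b >= best_cost:                 # everything left is worse
            break
        if is_solution(node):
            if cost(node) < best_cost:
                best_cost, best_node = cost(node), node
            continue
        for child in expand(node):
            cb = bound(child)
            if cb < best_cost:             # local pruning at insertion
                tie += 1
                heapq.heappush(frontier, (cb, tie, child))
    return best_node, best_cost

# Toy usage: pick one option per level so the total weight is minimal.
W = [[3, 1], [2, 5], [4, 1]]
partial_cost = lambda n: sum(W[i][c] for i, c in enumerate(n))
node, c = best_first_bb(
    root=(),
    expand=lambda n: [n + (0,), n + (1,)],
    bound=partial_cost,                    # admissible: weights are >= 0
    is_solution=lambda n: len(n) == 3,
    cost=partial_cost,
)
print(node, c)  # (1, 0, 1) 4
```

Because the bound is admissible and the frontier is popped in bound order, the first solution whose bound survives pruning is optimal; session-style reuse would amount to seeding `best_cost` and the stored bounds from a previous, similar query.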
Interactive product catalogue with user preference tracking
In the context of m-commerce, small screen size makes it difficult for users to browse effectively through a product catalogue, given the limited number of products that may be presented on-screen. Despite the availability of search engines, filters and recommender systems to aid users, these techniques focus on a narrow segment of the product offering. Users are thus denied the opportunity to explore the available products more expansively. This paper describes a novel approach to overcoming the constraints of small screen size. Through the integration of a product catalogue with a recommender system, an adaptive system has been created that guides users through the process of product browsing. An original technique has been developed to cluster similar positive examples together to identify a user's areas of interest. The performance of this technique has been evaluated, and the results are promising.
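The clustering idea can be sketched as follows (an illustrative assumption, not the paper's actual algorithm; the greedy leader-clustering policy, the distance metric, and all names are invented for the example):

```python
# Sketch: positive examples -- products the user rated favourably -- are
# clustered by feature similarity, and each cluster centroid marks an
# "area of interest" used to rank the rest of the catalogue.

def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def cluster_positives(liked, radius):
    """Greedy leader clustering: a liked item joins the first cluster
    whose centroid lies within `radius`, else it starts a new cluster."""
    clusters = []                     # each cluster: list of vectors
    for item in liked:
        for c in clusters:
            centroid = [sum(col) / len(c) for col in zip(*c)]
            if euclid(item, centroid) <= radius:
                c.append(item)
                break
        else:
            clusters.append([item])
    return clusters

def rank_catalogue(catalogue, clusters):
    """Order products by distance to the nearest area of interest."""
    centroids = [[sum(col) / len(c) for col in zip(*c)] for c in clusters]
    return sorted(catalogue, key=lambda p: min(euclid(p, m) for m in centroids))

liked = [(1, 1), (1, 2), (9, 9)]              # two areas of interest
clusters = cluster_positives(liked, radius=2)
ranked = rank_catalogue([(5, 5), (2, 1), (8, 8)], clusters)
print(ranked)  # [(2, 1), (8, 8), (5, 5)]
```

On a small screen, only the top of such a ranking would be shown, which is what lets the catalogue adapt to the user rather than forcing exhaustive browsing.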
Dynamic load balancing for the distributed mining of molecular structures
In molecular biology, it is often desirable to find common properties in large numbers of drug candidates. One family of methods stems from the data mining community, where algorithms for finding frequent graphs have received increasing attention in recent years. However, the computational complexity of the underlying problem and the large amount of data to be explored essentially render sequential algorithms useless. In this paper, we present a distributed approach to the frequent subgraph mining problem to discover interesting patterns in molecular compounds. This problem is characterized by a highly irregular search tree for which no reliable workload prediction is available. We describe the three main aspects of the proposed distributed algorithm, namely, a dynamic partitioning of the search space, a distribution process based on a peer-to-peer communication framework, and a novel receiver-initiated load balancing algorithm. The effectiveness of the distributed method has been evaluated on the well-known National Cancer Institute's HIV-screening data set, where we were able to show close-to-linear speedup in a network of workstations. The proposed approach also allows for dynamic resource aggregation in a non-dedicated computational environment. These features make it suitable for large-scale, multi-domain, heterogeneous environments, such as computational grids.
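Receiver-initiated load balancing can be sketched in a single-process simulation (the paper's algorithm runs over a peer-to-peer network; the round structure, the "donate half" policy, and all names below are illustrative assumptions):

```python
# Sketch of receiver-initiated load balancing: workers hold deques of
# unexplored search-tree nodes, and an *idle* worker initiates the
# transfer by asking the most loaded peer to donate part of its pending
# work -- no workload prediction is needed up front.
from collections import deque

def step(workers):
    """One round: idle workers steal, then every worker expands a node."""
    for w in workers:
        if not w:                          # idle: the receiver initiates
            donor = max(workers, key=len)
            half = len(donor) // 2
            for _ in range(half):          # donor sheds half its load
                w.append(donor.pop())
    done = 0
    for w in workers:
        if w:
            w.popleft()                    # expand one node; children of
            done += 1                      # the irregular tree omitted here
    return done

# One overloaded worker, two idle peers: after a round of stealing,
# all three make progress.
workers = [deque(range(8)), deque(), deque()]
completed = step(workers)
print(completed, [len(w) for w in workers])  # 3 [1, 3, 1]
```

Because requests come from receivers, transfers happen exactly when some worker runs dry, which is what makes the scheme robust to the unpredictable, highly irregular search trees described above.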
Designing intelligent computer-based simulations: A pragmatic approach
This paper examines the design of intelligent multimedia simulations. A case study is presented which uses an approach based in part on intelligent tutoring system design to integrate formative assessment into the learning of clinical decision-making skills for nursing students. The approach advocated uses a modular design with an integrated intelligent agent within a multimedia simulation. The application was created using an object-oriented programming language for the multimedia interface (Delphi) and a logic-based interpreted language (Prolog) to create an expert assessment system. Domain knowledge is also encoded in a Windows help file, reducing some of the complexity of the expert system. This approach offers a method for simplifying the production of an intelligent simulation system. The problems of developing intelligent tutoring systems are examined, and an argument is made for a practical approach to developing intelligent multimedia simulation systems.
Bringing tasks back in: an organizational theory of resource complementarity and partner selection
To progress beyond the idea that the value of inter-firm collaboration is largely determined by the complementarity of the resources held by partners, we build a theoretical framework that explains under which conditions a set of resources or capabilities can be considered complementary, resulting in superior value creation. Specifically, we argue that the tasks that an inter-firm collaboration has to perform determine complementarities, and that complementarities arise from similar and dissimilar resources alike. We capture this relationship in the concept of task resource complementarity. Further, we examine factors that affect the relevance of this construct as a predictor of partner selection. Finally, we discuss the implications that arise for a theory of the firm when tasks are explicitly incorporated into the conceptualization of resource complementarity.