Towards designing a knowledge-based tutoring system: SQL-tutor as an example
A Knowledge-Based Tutoring System, sometimes also called an Intelligent Tutoring System, is a computer-based instructional system that uses artificial intelligence techniques to help people learn a subject. The goal of the system is to provide private tutoring to its students based on their different backgrounds, requests, and interests. The system knows what subject materials it should teach and when and how to teach them, and it can diagnose the mistakes made by students and help them correct those mistakes.
The major objective of this dissertation is to investigate and develop a generic framework upon which a Knowledge-Based Tutoring System can be built effectively. As an example, we have focused on developing SQL-TUTOR, a tutoring system for teaching SQL concepts and programming skills. The generic architecture of the system is rooted in the widely held view that a tutoring process between a tutor (either a human being or a machine) and a student is a knowledge communication process. This process can be divided into a series of communication cycles, each consisting of four phases: planning, discussing, evaluating, and remedying.
One major feature of the architecture proposed in this dissertation is its curriculum knowledge base, which contains knowledge about the course curriculum. We have developed a representation schema for describing the goal structure of the course, the prerequisite relationships among the course materials, and the multiple views used to organize these materials. The inclusion of curriculum knowledge in a KBTS allows the system to create a different curriculum for each individual student and to diagnose the student's errors more effectively.
The system also provides a group of operators with which the student can hand-tailor his/her curriculum when starting the course. The student can use these operators to select a specific path through the course materials, to pick a specific topic from the curriculum to study, or to remove a particular topic from the curriculum. Since the student can construct his/her own learning plans with these operators, he/she is relatively free to determine how to study the course materials and, as a result, can become more active in the tutoring process.
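The abstract gives no code, but the curriculum representation it describes, topics linked by prerequisite relations plus operators for tailoring a learning path, can be illustrated with a minimal sketch. Everything below (the class names, operator names, and the tiny SQL topics) is invented for illustration and is not taken from SQL-TUTOR itself.

```python
from dataclasses import dataclass, field

@dataclass
class Topic:
    # One unit of course material; field names are illustrative only.
    name: str
    prerequisites: list = field(default_factory=list)  # names of topics to master first

class Curriculum:
    def __init__(self, topics):
        self.topics = {t.name: t for t in topics}
        self.plan = []  # the student's current learning path

    def select_path(self, names):
        """Operator: choose a specific path through the course materials."""
        self.plan = [self.topics[n] for n in names]

    def pick_topic(self, name):
        """Operator: add one topic, pulling in unmet prerequisites first."""
        topic = self.topics[name]
        for pre in topic.prerequisites:
            if self.topics[pre] not in self.plan:
                self.pick_topic(pre)
        if topic not in self.plan:
            self.plan.append(topic)

    def remove_topic(self, name):
        """Operator: drop a topic from the current plan."""
        self.plan = [t for t in self.plan if t.name != name]

# A student tailors a tiny two-topic SQL curriculum.
curriculum = Curriculum([Topic("SELECT basics"), Topic("Joins", ["SELECT basics"])])
curriculum.pick_topic("Joins")            # pulls in "SELECT basics" automatically
print([t.name for t in curriculum.plan])  # ['SELECT basics', 'Joins']
```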
The knowledge about a subject domain is stored in a set of topics and a sample database. The content of a topic consists of a set of related domain concepts, each described in both natural-language and formal forms. The relationships among the concepts are modeled by a type of semantic network called the context network. The sample database contains a set of sample tables and an enhanced system catalog that holds knowledge about the names and semantic meanings of the database objects. The built-in Problem Solver allows the system to reason over the networks and the sample database and to answer various kinds of questions raised by the student about the domain concepts and their relationships.
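As a rough illustration of what a context network might look like, here is a toy semantic network over a few SQL concepts with labeled edges and a one-hop query. The edge labels and the query function are assumptions, since the abstract does not specify the network's internals.

```python
# Toy "context network": concepts as nodes, labeled relationships as edges.
# Edge labels are invented; the dissertation does not publish its edge types.
edges = [
    ("SELECT", "retrieves-from", "table"),
    ("WHERE", "restricts", "SELECT"),
    ("table", "contains", "column"),
]

def related(concept):
    """Answer a simple student question: what is directly related to a concept?"""
    hits = []
    for src, label, dst in edges:
        if concept in (src, dst):
            hits.append(f"{src} {label} {dst}")
    return hits

print(related("SELECT"))  # ['SELECT retrieves-from table', 'WHERE restricts SELECT']
```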
The knowledge of writing SQL queries is embodied in a set of examples attached to the topics. Each such example is carefully designed for one category of SQL query problems. An example in SQL-TUTOR is a packed knowledge chunk that serves several important teaching purposes, including generating problem descriptions with different levels of detail, formulating various SQL solutions for the given problem, explaining these solutions to the student, and evaluating SQL queries written by the student.
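A "packed knowledge chunk" of this kind can be pictured as a record bundling the facets the abstract lists. The field names below are guesses for illustration, not SQL-TUTOR's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class SQLExample:
    # Illustrative stand-in for SQL-TUTOR's example chunks.
    category: str        # the class of query problems the example covers
    descriptions: dict   # detail level -> problem statement text
    solutions: list      # alternative SQL formulations of the same problem
    explanations: dict = field(default_factory=dict)  # solution -> tutor explanation

example = SQLExample(
    category="join with restriction",
    descriptions={
        "brief": "List employees in 'Sales'.",
        "full": "List the names of employees whose department is named 'Sales'.",
    },
    solutions=["SELECT e.name FROM emp e JOIN dept d ON e.dno = d.dno "
               "WHERE d.dname = 'Sales'"],
)
# A student's query could then be evaluated against example.solutions.
```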
Complex-Systems Approach to Simulating the Sea Urchin Ecology
Stocks of the native sea urchin (Strongylocentrotus droebachiensis) dropped dramatically during the peak of the urchin fishery in the early 1990s and have not recovered. The current regulatory regime is based on analytic population models and two monolithic zones. Analytic models are insufficiently complex to capture many features that cause the demise or survival of an urchin population, and their scale, or granularity, is too coarse. In contrast, a complex-systems-based model is able to capture these features. Presented here is a fine-scale simulation of a sea urchin fishery in the Gulf of Maine that behaves like a complex system, i.e., it exhibits patchiness and nonlinear dynamics. Also presented is an alternative harvesting scheme that fosters sustainability. The model presented here is merely a hypothesis; its predictions may not be verifiable until it either (1) becomes part of a larger project, or (2) is paired with fine-scale data.
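To make the complex-systems idea concrete, here is a toy, self-contained cellular model with local recruitment and a spatial harvest refuge. The grid size, growth and harvest rates, and the refuge scheme are all invented for illustration; this is not the dissertation's calibrated model.

```python
import random

random.seed(1)
SIZE, YEARS = 20, 30
GROWTH, HARVEST = 0.3, 0.25   # invented rates, not fitted to Gulf of Maine data
grid = [[random.random() for _ in range(SIZE)] for _ in range(SIZE)]  # density in [0, 1]

def step(grid):
    new = [[0.0] * SIZE for _ in range(SIZE)]
    for i in range(SIZE):
        for j in range(SIZE):
            # Recruitment depends on local density in the 3x3 neighborhood,
            # which is what lets spatial patchiness emerge.
            nbrs = [grid[(i + di) % SIZE][(j + dj) % SIZE]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            local = sum(nbrs) / len(nbrs)
            d = grid[i][j] + GROWTH * local * (1 - grid[i][j])
            if i < SIZE // 2:          # only half the grid is fished: a crude refuge
                d *= (1 - HARVEST)
            new[i][j] = d
    return new

for _ in range(YEARS):
    grid = step(grid)

half = SIZE * SIZE / 2
print("mean density, fished half:", sum(map(sum, grid[:SIZE // 2])) / half)
print("mean density, refuge half:", sum(map(sum, grid[SIZE // 2:])) / half)
```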
Eliciting informal specifications from scientific modelers for evaluation and debugging
Professional software engineers have an arsenal of techniques such as unit testing and assertions to check their specifications, but these techniques require tools, motivation, experience, and training that programmers without professional software engineering training may not have. As a result, professionals in other fields, such as scientific modelers, face greater hurdles in debugging and validating the programs they write. This thesis introduces the concept of "evaluation abstractions" as a framework for tool designers to think about this kind of support. Evaluation abstractions are the patterns of data in program traces and outputs that programmers examine in order to evaluate software behavior. The thesis provides two intellectual contributions aimed at helping tool designers: (1) a theory of evaluation abstraction support (EAST) that describes at a granular scale the factors contributing to a modeler's decision to use or not use an evaluation abstraction support feature; (2) a new user-centered design methodology, Natural Programming Plus (NP+), specialized for the design of interactive languages aimed at experienced users, in a way that allows for validation early in the process. Using EAST and NP+, I built and evaluated an evaluation abstraction support tool for cognitive modelers (psychologists who study human cognition by writing simulations of cognition), with features that (1) elicit and persist a database of a modeler's evaluation abstractions, in a piecemeal, just-in-time fashion as their questions about model behavior arise, and (2) use the modeler's unique set of evaluation abstractions to structure visualizations, listings, and regression tests as the modeler continues to maintain and develop the project. Using this tool, modelers were able to repeatedly answer questions about model behavior that would have been time-consuming and error-prone to check in state-of-the-art cognitive modeling tools. This dissertation includes a formative investigation of modelers' evaluation abstractions, iterative development and testing of interaction designs for elicitation and use of evaluation abstractions, a description of a domain-specific language for representing and transforming evaluation abstractions, and two summative studies showing the usability and generalizability of the technique.
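In spirit, an evaluation abstraction can be approximated as a named predicate over a model's execution trace, persisted once and rerun as a regression check. The sketch below makes that idea concrete with invented names and a deliberately tiny trace format; it is not the dissertation's tool.

```python
# Evaluation abstractions approximated as named predicates over a trace.
abstractions = {}  # name -> predicate(trace) -> bool

def elicit(name, predicate):
    """Persist an abstraction the first time the modeler asks the question."""
    abstractions[name] = predicate

def check_all(trace):
    """Rerun every persisted abstraction against a fresh trace, like a regression suite."""
    return {name: pred(trace) for name, pred in abstractions.items()}

# A cognitive model's trace as a list of (action, object) events.
elicit("retrieval precedes response",
       lambda t: t.index(("retrieve", "fact")) < t.index(("respond", "answer")))

trace = [("encode", "stimulus"), ("retrieve", "fact"), ("respond", "answer")]
print(check_all(trace))  # {'retrieval precedes response': True}
```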
Supporting Scholarly Research Ideation through Web Semantics
We develop new methods and technologies for supporting scholarly research ideation, the tasks in which researchers develop new ideas for their work, through web semantics: computational representations of information found on the web that capture meaning involving people's experiences of things of interest. To do so, we first conducted a qualitative study with established researchers on their practices, using sensitizing concepts from information science, creative cognition, and art as a basis for framing and deriving findings. We found that participants engage in and combine a wide range of activities, including citation chaining, exploratory browsing, and curation, to achieve their goals of creative ideation. We derived a new, interdisciplinary model to depict their practices. Our study and findings address a gap in existing research: the creative nature of what researchers do has been insufficiently investigated. The model is expected to guide future investigations.
We then use in-context presentations of dynamically extracted semantic information to (1) address the issues of digression and disorientation that arise in citation chaining and exploratory browsing, and (2) provide contextual information for researchers' prior-work curation. The implemented interface, Metadata In-Context Explorer (MICE), maintains context while allowing new information to be brought into and integrated with the current context, reducing the need to switch between documents and webpages. Our study shows that MICE supports participants in their citation chaining processes and thus supports scholarly research ideation. MICE is implemented with BigSemantics, a metadata type system and runtime that integrates data models, extraction rules, and presentation hints into types. BigSemantics operationalizes type-specific, dynamic extraction and rich presentation of semantic information (a.k.a. metadata) found on the web. The metadata type system, runtime, and MICE are expected to help build interfaces supporting dynamic exploratory search, browsing, and other creative tasks involving complex and interlinked semantics.
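The abstract describes BigSemantics types as bundling a data model, extraction rules, and presentation hints; a much-simplified sketch of that idea follows. The rule syntax, field names, and the dictionary stand-in for a parsed page are all assumptions, not BigSemantics' actual format.

```python
# A simplified metadata "type": data model + extraction rules + presentation hints.
paper_type = {
    "name": "scholarly_article",
    "fields": ["title", "authors", "references"],
    "extraction": {                       # CSS-selector-style rules, invented here
        "title": "h1.article-title",
        "authors": "span.author-name",
        "references": "ol.ref-list > li",
    },
    "presentation": {"title": "headline", "references": "expandable_list"},
}

def extract(page, mtype):
    """Apply a type's rules to a page; 'page' is a dict from selector to
    extracted values, so the sketch runs without a real HTML parser."""
    return {f: page.get(rule, []) for f, rule in mtype["extraction"].items()}

page = {"h1.article-title": ["Supporting Scholarly Research Ideation"],
        "span.author-name": ["A. Author", "B. Author"]}
print(extract(page, paper_type))
```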
Correlation decay and decentralized optimization in graphical models
Thesis (Ph.D.), Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2010. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 213-229) and index.
Many models of optimization, statistics, social organizations and machine learning capture local dependencies by means of a network that describes the interconnections and interactions of different components. However, in most cases, optimization or inference on these models is hard due to the dimensionality of the networks. This is so even when using algorithms that take advantage of the underlying graphical structure. Approximate methods are therefore needed. The aim of this thesis is to study such large-scale systems, focusing on how randomness affects the complexity of optimizing in a graph; of particular interest is a phenomenon known as correlation decay, namely, that the influence of a node on another node of the network decreases quickly as the distance between them grows. In the first part of this thesis, we develop a new message-passing algorithm for optimization in graphical models. We formally prove a connection between the correlation decay property and (i) the near-optimality of this algorithm, as well as (ii) the decentralized nature of optimal solutions. In the context of discrete optimization with random costs, we develop a technique for establishing that a system exhibits correlation decay. We illustrate the applicability of the method by giving concrete results for the cases of uniform and Gaussian distributed cost coefficients in networks with bounded connectivity. In the second part, we pursue similar questions in a combinatorial optimization setting: we consider the problem of finding a maximum-weight independent set in a bounded-degree graph, when the node weights are i.i.d. random variables. Surprisingly, we discover that the problem becomes tractable for certain distributions. Specifically, we construct a PTAS for the case of exponentially distributed weights and arbitrary graphs with degree at most 3, and obtain generalizations for higher degrees and different distributions. At the same time, we prove that no PTAS exists for the case of exponentially distributed weights for graphs with sufficiently large but bounded degree, unless P=NP. Next, we shift our focus to graphical games, a game-theoretic analog of graphical models. We establish a connection between the problem of finding an approximate Nash equilibrium in a graphical game and the problem of optimization in graphical models. We use this connection to re-derive NashProp, a message-passing algorithm which computes Nash equilibria for graphical games on trees; we also suggest several new search algorithms for graphical games in general networks. Finally, we propose a definition of correlation decay in graphical games, and establish that the property holds in a restricted family of graphical games. The last part of the thesis is devoted to an application of graphical models and message-passing algorithms to early prediction of Alzheimer's disease. To this end, we develop a new measure of synchronicity between different parts of the brain and apply it to electroencephalogram data. We show that the resulting prediction method outperforms a vast number of other EEG-based measures in the task of predicting the onset of Alzheimer's disease.
by Théophane Weber, Ph.D.
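The basic mechanism behind message passing for optimization can be seen in a case where it is exact: maximum-weight independent set on a tree, with i.i.d. exponential weights as a nod to the distributions studied above. This sketch is only the textbook tree case, not the thesis's algorithm for general bounded-degree graphs.

```python
import random

random.seed(0)
children = {0: [1, 2], 1: [3, 4], 2: [], 3: [], 4: []}   # a small rooted tree
weights = {v: random.expovariate(1.0) for v in children}  # i.i.d. exponential weights

def mwis(v):
    """Pass (best value with v in the set, best value with v out) up the tree."""
    inc, exc = weights[v], 0.0
    for c in children[v]:
        c_inc, c_exc = mwis(c)
        inc += c_exc              # if v is included, its children must be excluded
        exc += max(c_inc, c_exc)  # otherwise each child subtree chooses freely
    return inc, exc

print("max-weight independent set value:", round(max(mwis(0)), 3))
```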
University of Windsor Undergraduate Calendar 2023 Winter
University of Windsor Undergraduate Calendar 2023 Spring
A study of the design expertise for plants handling hazardous materials
University of Windsor Undergraduate Calendar 2021 Spring