    A design tool for novice programmers: Working paper series--00-01

    Most program design methods are intended for experienced programmers. Beginner-friendly program design methods date back to procedural languages, such as Pascal and Basic. These methods lack connections to objects and events since the languages contained neither objects nor events. This paper presents a summary table and a sketch to get novice programmers started in the process of designing a program. The table organizes information about the program requirements and aids in creating a design for a program that may contain events and objects. The sketch represents the calling relationships among the modules in the program. The table and the sketch can be used with an existing method, such as pseudocode. The tools enhance existing methods of design; a new method is not proposed. The most important philosophies in developing the tools were simplicity and guidance. The table is simple and guides the student's design efforts. The columns collect data about what the program does, when it does its tasks, and what data it uses. The rows relate tasks, events, and objects. The table prompts identification of objects and events and makes high-level functionality stand out. The high-level functional design captured by the table is made explicit in the relations sketch.
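    As a rough, hypothetical illustration of the two design tools described above (the field names, example rows, and module names below are invented for illustration and are not taken from the working paper), the summary table and the relations sketch might be captured in Python along these lines:

```python
# Hypothetical sketch of the paper's two design tools; all names are invented.
from dataclasses import dataclass

@dataclass
class TableRow:
    task: str            # what the program does
    event: str           # when it does it
    objects: list[str]   # which objects are involved
    data: list[str]      # what data the task uses

# Summary table: each row relates a task to its triggering event, objects, and data.
design_table = [
    TableRow("validate input", "Submit button clicked", ["form"], ["name", "email"]),
    TableRow("save record", "input validated", ["database"], ["record"]),
]

# Relations sketch: calling relationships among modules, as an adjacency list.
relations_sketch = {
    "main": ["validate_input"],
    "validate_input": ["save_record"],
}

for row in design_table:
    print(f"{row.event}: {row.task} (objects: {row.objects}, data: {row.data})")
```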

    IDEF5 Ontology Description Capture Method: Concept Paper

    The results of research towards an ontology capture method referred to as IDEF5 are presented. Viewed simply as the study of what exists in a domain, ontology is an activity that can be understood to be at work across the full range of human inquiry, prompted by the persistent effort to understand the world in which humanity has found itself - and which it has helped to shape. In the context of information management, ontology is the task of extracting the structure of a given engineering, manufacturing, business, or logistical domain and storing it in a usable representational medium. A key to effective integration is a system ontology that can be accessed and modified across domains and which captures common features of the overall system relevant to the goals of the disparate domains. If the focus is on information integration, then the strongest motivation for ontology comes from the need to support data sharing and function interoperability. In the correct architecture, an enterprise ontology base would allow the construction of an integrated environment in which legacy systems appear to be open-architecture integrated resources. If the focus is on system/software development, then support for the rapid acquisition of reliable systems is perhaps the strongest motivation for ontology. Finally, ontological analysis was demonstrated to be an effective first step in the construction of robust knowledge-based systems.

    Computationally Modeling an Incremental Learning Account of Semantic Interference through Phonological Influence

    Computer models play a vital role in providing ways to effectively simulate complex systems and to test scientific theories and hypotheses. One major area of success for neural network models in particular has been in cognitive neuroscience, for modeling semantic interference effects in memory. When a person sees a picture of an object, such as a car, multiple times, the memory of that object is primed so that it can be retrieved more effectively. When a picture of a similar object that shares semantic features with the primed object is then seen, such as a truck, the primed memory of the car interferes with retrieval of the truck. This is known as semantic interference. A recent hypothesis by Preusse et al. (2013) puts forward that semantic interference is further increased by the sharing of phonemes between two words. In this thesis a new phonological computer model of lexical retrieval is developed based on this hypothesis, using a two-layer feedforward Artificial Neural Network (ANN). The new model can represent semantic interference effects through increased lexical activation by phonological features. Simulations were performed in a MATLAB environment, each using a different variant of the phonological model. The simulations tested three conditions of activating semantic and phonological features. Results demonstrated that semantic interference is significantly increased when phonological features are activated alongside semantic features, versus activating semantic features alone, thus supporting the hypothesis by Preusse et al. (2013). The characteristics of the new ANN model could make it useful in studying other phenomena related to memory and learning.
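    The abstract does not reproduce the MATLAB model itself; the following rough NumPy sketch (with invented weights, feature counts, and function names, not the thesis's actual model) only illustrates the general idea that switching phonological input on alongside semantic input raises lexical activation:

```python
# Illustrative two-layer feedforward sketch, not the thesis's MATLAB model.
import numpy as np

rng = np.random.default_rng(0)

n_semantic, n_phonological, n_lexical = 8, 6, 4
W_sem = rng.uniform(0.0, 1.0, (n_lexical, n_semantic))      # semantic -> lexical weights
W_pho = rng.uniform(0.0, 1.0, (n_lexical, n_phonological))  # phonological -> lexical weights

def lexical_activation(semantic, phonological):
    """Weighted sum of active input features passed through a logistic function."""
    net = W_sem @ semantic + W_pho @ phonological
    return 1.0 / (1.0 + np.exp(-net))

semantic_only = lexical_activation(np.ones(n_semantic), np.zeros(n_phonological))
semantic_plus_phono = lexical_activation(np.ones(n_semantic), np.ones(n_phonological))

print("semantic only:        ", semantic_only.round(2))
print("semantic + phonology: ", semantic_plus_phono.round(2))
# Higher activation of competing lexical units in the second condition is the
# pattern the abstract describes as increased semantic interference.
```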

    A perceptron based neural network data analytics architecture for the detection of fraud in credit card transactions in financial legacy systems

    Credit card fraud, a significant and growing problem in commerce that costs the global economy billions of dollars each year, has kept pace with technological advancements as criminals devise new and innovative methods to defraud account holders, merchants, and financial institutions. While traditional fraudulent methods involved card cloning, skimming, and counterfeiting during transactional processes, the rapid adoption and evolution of Internet technologies aimed at facilitating trade has given rise to new digitally initiated illegitimate transactions, with online credit card fraud beginning to outpace fraud in physical-world transactions. According to the literature, the financial industry has used statistical methods and Artificial Intelligence (AI) to keep up with fraudulent card patterns, but there appears to be little effort to provide neural network architectures with proven results that can be adapted to financial legacy systems. The paper examines the feasibility and practicality of implementing a proof-of-concept Perceptron-based Artificial Neural Network (ANN) architecture, trained on specific fraudulent patterns, that can be plugged directly into a legacy financial system platform. When used alongside a credit-checking subscription service, such a system could act as a backup.
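    The paper's specific architecture, features, and data are not reproduced in the abstract; as a purely hypothetical sketch of the underlying idea, a single perceptron trained with the classic perceptron learning rule on invented transaction features might look like this:

```python
# Hypothetical perceptron sketch for flagging transactions; features, data,
# and thresholds are invented for illustration, not taken from the paper.
import numpy as np

# Toy features: [amount_zscore, foreign_merchant, night_time, card_not_present]
X = np.array([
    [0.1, 0, 0, 0],
    [2.5, 1, 1, 1],
    [0.3, 0, 1, 0],
    [3.0, 1, 0, 1],
], dtype=float)
y = np.array([0, 1, 0, 1])  # 1 = fraudulent

w = np.zeros(X.shape[1])
b = 0.0
lr = 0.1

for _ in range(20):                       # classic perceptron learning rule
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)
        w += lr * (target - pred) * xi
        b += lr * (target - pred)

def flag(transaction):
    """Return 1 if the trained perceptron classifies the transaction as fraud."""
    return int(w @ transaction + b > 0)

print(flag(np.array([2.8, 1, 1, 1])))     # likely flagged as fraudulent
```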

    Resolving the Anti-Antievolutionism Dilemma: A Brief for Relational Evolutionary Thinking in Anthropology

    Anthropologists often disagree about whether, or in what ways, anthropology is “evolutionary.” Anthropologists defending accounts of primate or human biological development and evolution that conflict with mainstream “neo-Darwinian” thinking have sometimes been called “creationists” or have been accused of being “antiscience.” As a result, many cultural anthropologists struggle with an “anti-antievolutionism” dilemma: they are more comfortable opposing the critics of evolutionary biology, broadly conceived, than they are defending mainstream evolutionary views with which they disagree. Evolutionary theory, however, comes in many forms. Relational evolutionary approaches such as Developmental Systems Theory, niche construction, and autopoiesis–natural drift augment mainstream evolutionary thinking in ways that should prove attractive to many anthropologists who wish to affirm evolution but are dissatisfied with current “neo-Darwinian” hegemony. Relational evolutionary thinking moves evolutionary discussion away from reductionism and sterile nature–nurture debates and promises to enable fresh approaches to a range of problems across the subfields of anthropology.

    Bayesian statistical inference applied to reservoir modelling and earthquake scaling


    The Future of General Systems Research: Obstacles, Potentials, Case Studies

    This paper attempts to provide an evaluative and prescriptive overview of the young field of systems science as exemplified by one of its 'specialties', general systems theory (GST). Subjective observation and some data on seven vital signs are presented to measure the progress of the field over the last two decades. Thirty-three specific obstacles inhibiting current research in systems science are presented. Suggestions for overcoming these obstacles are cited as a prescription for improved progress in the field. A sampling of the potential near-term developments that may be expected in the three rather distinct areas of research on systems isomorphics, improvement of systems methodologies, and the utility of systems applications is illustrated with mini-case studies. Throughout, there is an attempt to identify 'key' questions and practical mechanisms that might serve as a stimulus for research. Finally, a set of criteria defining a general theory of systems is suggested and illustrated with a case study. The paper concludes with a projection of the long-term contributions that systems science may make toward a resolution of the growing chasm between high-tech solutions and high-value needs in human systems.

    Multi-heuristic theory assessment with iterative selection

    Modern-day machine learning is not without its shortcomings. To start with, the accuracy heuristic, which is the standard assessment criterion for machine learning, is not always the best heuristic for gauging the performance of machine learners. Also, machine learners often produce theories that are unintelligible to people and must be assessed as automated classifiers through machines; these theories are either too large or not properly formatted for human interpretation. Furthermore, our studies have identified that most of the data sets we have encountered are saturated with worthless data that actually degrades the accuracy of machine learners; therefore, simpler learning is preferable. This necessitates a simpler classifier that is not confused by highly correlated data. Lastly, existing machine learners are not sensitive to domains; that is, they are not tunable to search for theories that are most beneficial to specific domains.

    A Bayesian framework for concept learning

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 1999. Includes bibliographical references (p. 297-314). This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Human concept learning presents a version of the classic problem of induction, which is made particularly difficult by the combination of two requirements: the need to learn from a rich (i.e. nested and overlapping) vocabulary of possible concepts and the need to be able to generalize concepts reasonably from only a few positive examples. I begin this thesis by considering a simple number concept game as a concrete illustration of this ability. On this task, human learners can with reasonable confidence lock in on one out of a billion billion billion logically possible concepts, after seeing only four positive examples of the concept, and can generalize informatively after seeing just a single example. Neither of the two classic approaches to inductive inference (hypothesis testing in a constrained space of possible rules, and computing similarity to the observed examples) can provide a complete picture of how people generalize concepts in even this simple setting. This thesis proposes a new computational framework for understanding how people learn concepts from examples, based on the principles of Bayesian inference. By imposing the constraints of a probabilistic model of the learning situation, the Bayesian learner can draw out much more information about a concept's extension from a given set of observed examples than either rule-based or similarity-based approaches do, and can use this information in a rational way to infer the probability that any new object is also an instance of the concept. There are three components of the Bayesian framework: a prior probability distribution over a hypothesis space of possible concepts; a likelihood function, which scores each hypothesis according to its probability of generating the observed examples; and the principle of hypothesis averaging, under which the learner computes the probability of generalizing a concept to new objects by averaging the predictions of all hypotheses weighted by their posterior probability (proportional to the product of their priors and likelihoods). The likelihood, under the assumption of randomly sampled positive examples, embodies the size principle for scoring hypotheses: smaller consistent hypotheses are more likely than larger hypotheses, and they become exponentially more likely as the number of observed examples increases. The principle of hypothesis averaging allows the Bayesian framework to accommodate both rule-like and similarity-like generalization behavior, depending on how peaked the posterior probability is. Together, the size principle plus hypothesis averaging predict a convergence from similarity-like generalization (due to a broad posterior distribution) after very few examples are observed to rule-like generalization (due to a sharply peaked posterior distribution) after sufficiently many examples have been observed. The main contributions of this thesis are as follows. First and foremost, I show how it is possible for people to learn and generalize concepts from just one or a few positive examples (Chapter 2).
Building on that understanding, I then present a series of case studies of simple concept learning situations where the Bayesian framework yields both qualitative and quantitative insights into the real behavior of human learners (Chapters 3-5). These cases each focus on a different learning domain. Chapter 3 looks at generalization in continuous feature spaces, a typical representation of objects in psychology and machine learning with the virtues of being analytically tractable and empirically accessible, but the downside of being highly abstract and artificial. Chapter 4 moves to the more natural domain of learning words for categories of objects and shows the relevance of the same phenomena and explanatory principles introduced in the more abstract setting of Chapters 1-3 for real-world learning tasks like this one. In each of these domains, both similarity-like and rule-like generalization emerge as special cases of the Bayesian framework in the limits of very few or very many examples, respectively. However, the transition from similarity to rules occurs much faster in the word learning domain than in the continuous feature space domain. I propose a Bayesian explanation of this difference in learning curves that places crucial importance on the density or sparsity of overlapping hypotheses in the learner's hypothesis space. To test this proposal, a third case study (Chapter 5) returns to the domain of number concepts, in which human learners possess a more complex body of prior knowledge that leads to a hypothesis space with both sparse and densely overlapping components. Here, the Bayesian theory predicts and human learners produce either rule-based or similarity-based generalization from a few examples, depending on the precise examples observed. I also discuss how several classic reasoning heuristics may be used to approximate the much more elaborate computations of Bayesian inference that this domain requires. In each of these case studies, I confront some of the classic questions of concept learning and induction: Is the acquisition of concepts driven mainly by pre-existing knowledge or the statistical force of our observations? Is generalization based primarily on abstract rules or similarity to exemplars? I argue that in almost all instances, the only reasonable answer to such questions is: both. More importantly, I show how the Bayesian framework allows us to answer much more penetrating versions of these questions: How does prior knowledge interact with the observed examples to guide generalization? Why does generalization appear rule-based in some cases and similarity-based in others? Finally, Chapter 6 summarizes the major contributions in more detailed form and discusses how this work fits into the larger picture of contemporary research on human learning, thinking, and reasoning. by Joshua B. Tenenbaum. Ph.D.
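    As a toy illustration of the three Bayesian ingredients named in the abstract (a prior, a size-principle likelihood, and hypothesis averaging) applied to the number concept game, the following Python sketch uses a deliberately tiny, invented hypothesis space and a uniform prior, which are simplifications rather than the thesis's actual model:

```python
# Toy number-game sketch: size-principle likelihood plus hypothesis averaging.
# The hypothesis space and uniform prior are invented simplifications.
hypotheses = {
    "even":            {n for n in range(1, 101) if n % 2 == 0},
    "multiples_of_10": {n for n in range(1, 101) if n % 10 == 0},
    "powers_of_two":   {2, 4, 8, 16, 32, 64},
    "numbers_1_100":   set(range(1, 101)),
}
prior = {h: 1.0 / len(hypotheses) for h in hypotheses}

def posterior(examples):
    # Size principle: each example is assumed sampled uniformly from the true
    # concept, so likelihood = (1/|h|)**n for consistent hypotheses, else 0.
    scores = {}
    for h, extension in hypotheses.items():
        consistent = all(x in extension for x in examples)
        scores[h] = prior[h] * (1.0 / len(extension)) ** len(examples) if consistent else 0.0
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

def p_generalize(y, examples):
    # Hypothesis averaging: weight each hypothesis's prediction by its posterior.
    post = posterior(examples)
    return sum(p for h, p in post.items() if y in hypotheses[h])

print(p_generalize(90, [16]))            # moderate: several hypotheses remain viable
print(p_generalize(90, [16, 8, 2, 64]))  # near zero: posterior has locked onto "powers of two"
print(p_generalize(32, [16, 8, 2, 64]))  # near one under the sharply peaked posterior
```

    The shift from the first to the second print illustrates the transition the abstract describes: broad, similarity-like generalization after one example, and rule-like generalization once the size principle has exponentially favored the smallest consistent hypothesis.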