9 research outputs found

    Curriculum renewal and achievement assessment via computer management systems in mathematics

    Get PDF
    This study was conducted with the purpose of determining the feasibility of bridging from the existing and renewed curriculum of a district to the curriculum and assessment materials based on research by the School Improvement Model (SIM) at Iowa State University's College of Education. The research provides a direction and samples of each sequential step needed to incorporate goals and assessment components between the two curricula. A bridging document was developed for this purpose. The steps included: (1) assessment of the current status of content and curriculum goals for the target subject, fifth- and sixth-grade mathematics; (2) staff training to provide a foundation for further curriculum and test development; (3) computerization of all curriculum management activities and assessments; (4) identification of strengths and limitations of the software package adopted for future users; (5) testing of all aspects of the strands, program goals, taxonomy levels, and learner outcomes; (6) documentation of all the activities and their sequence through the use of a diary and a log of historical events; and (7) assessment of the effectiveness of the model and suggestions for revisions. The study presents the steps relevant to the goals of: (1) providing a model of procedures for future use to facilitate the movement towards results-based education, as the term relates to the development of skills, concepts, and mastery learning; (2) providing a rubric useful to nation-wide practitioners in implementing those outcomes; and (3) providing an initiative for implementing mastery teaching and learning. The procedures delineated in this study should prove invaluable as a rubric for future practitioners of curriculum-driven assessment with the assistance of a computer management system. This study utilized both case study and feasibility methodology.

    Proceedings of Monterey Workshop 2001 Engineering Automation for Software Intensive System Integration

    Get PDF
    The 2001 Monterey Workshop on Engineering Automation for Software Intensive System Integration was sponsored by the Office of Naval Research, the Air Force Office of Scientific Research, the Army Research Office and the Defense Advanced Research Projects Agency. It is our pleasure to thank the workshop advisory committee and sponsors for their vision of a principled engineering solution for software and for their many years of tireless effort in supporting a series of workshops to bring everyone together. This workshop is the 8th in a series of international workshops. The workshop was held at the Monterey Beach Hotel, Monterey, California, during June 18-22, 2001. The general theme of the workshop has been to present and discuss research that aims at increasing the practical impact of formal methods for software and systems engineering. The particular focus of this workshop was "Engineering Automation for Software Intensive System Integration". Previous workshops have focused on issues including "Real-time & Concurrent Systems", "Software Merging and Slicing", "Software Evolution", "Software Architecture", "Requirements Targeting Software" and "Modeling Software System Structures in a fastly moving scenario". Office of Naval Research; Air Force Office of Scientific Research; Army Research Office; Defense Advanced Research Projects Agency. Approved for public release; distribution unlimited.

    Integration of Prior Biological Knowledge into Support Vector Machines

    Get PDF
    One goal of clinical cancer research is to find new prognostic gene signatures that can predict the clinical course of the disease. To identify new gene signatures or biomarkers, bioinformatics frequently relies on classification methods. The commonly used approaches, however, work exclusively on gene expression data and treat genes as independent. Several recently published studies have shown that the quality of the classification can be improved by incorporating network knowledge into the classification process. Beyond an improved classification result, it was also shown that the selected genes are easier to interpret and that gene selection becomes more stable. For these reasons, this thesis is concerned with methods that improve prediction accuracy by taking network knowledge into account in addition to gene expression data. The thesis gives an overview of existing methods that can incorporate network knowledge into classification, as well as of databases that store such knowledge. It also describes the development of a new network-based classification method that is able to take the connectivity of genes into account. The Support Vector Machine (SVM) was chosen as the basis of the new algorithm. By default, the SVM cannot perform gene selection, i.e. it always uses all genes to predict a given endpoint. The SVM can, however, be combined with the Recursive Feature Elimination (RFE) algorithm to enable gene selection. RFE selects genes according to their influence on the hyperplane found by the SVM. The ranking criterion of RFE was modified with an adapted version of Google's PageRank algorithm. This adapted version of PageRank is called GeneRank and computes a weight for each gene based on a graph built from a protein-protein interaction database. This weight was combined with the ranking criterion of RFE in order to integrate the network knowledge into the ranking of the genes and thus into the classification. Because of this reweighting, the newly developed algorithm was named Reweighted Recursive Feature Elimination (RRFE). RRFE follows the assumption that genes showing only a small change in expression should have the chance to exert an increased influence on the classification if they are highly connected. This assumption was implemented by combining GeneRank and RFE. In this way, RRFE helps to better understand the underlying biological process. In addition, RRFE helps to reduce the amount of unused information in the data and to identify functionally important genes. RRFE was tested on one integrated and four independent breast cancer data sets, together comprising almost 800 patients. RRFE was used to predict ERBB2 status as well as the risk of breast cancer relapse. The analyses showed improved interpretability and stability of the selected genes. Furthermore, classification accuracy was improved over both standard and network-based classifiers.
    In addition to the theoretical foundations of RRFE, the thesis also presents a new R package containing implementations of RRFE and further network-based classification methods. The aim was to simplify the use of RRFE and other methods and to give developers the opportunity to compare the performance of their newly developed algorithms with existing approaches. The software package includes functions for comparing classification methods, creating plots, and identifying genes that contributed substantially to the classification.
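    As a concrete illustration of the reweighting idea (a minimal Python sketch with illustrative names, not the implementation contained in the R package described above): a GeneRank score computed from a gene interaction graph rescales the usual SVM-RFE ranking criterion w_j^2, so that weakly differential but highly connected genes are eliminated later.

        import numpy as np
        from sklearn.svm import LinearSVC

        def gene_rank(adj, fold_change, d=0.85, n_iter=100):
            # Damped, PageRank-style iteration on a gene interaction graph:
            # highly connected genes accumulate rank mass from their neighbours.
            deg = adj.sum(axis=1).astype(float)
            deg[deg == 0] = 1.0
            trans = adj / deg[:, None]                   # row-normalised adjacency
            base = (1.0 - d) * np.abs(fold_change)
            r = np.abs(fold_change).astype(float)
            for _ in range(n_iter):
                r = base + d * (trans.T @ r)
            return r / r.max()

        def rrfe_select(X, y, adj, fold_change, n_keep=50):
            # SVM-RFE whose ranking criterion w_j^2 is rescaled by the GeneRank
            # weight, so network-central genes survive elimination longer.
            genes = np.arange(X.shape[1])
            w_net = gene_rank(adj, fold_change)
            while genes.size > n_keep:
                svm = LinearSVC(C=1.0, max_iter=10000).fit(X[:, genes], y)
                crit = (svm.coef_.ravel() ** 2) * w_net[genes]
                k = min(max(1, genes.size // 10), genes.size - n_keep)
                genes = genes[np.argsort(crit)[k:]]      # drop the lowest-ranked genes
            return genes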

    Advances in scalable learning and sampling of unnormalised models

    Get PDF
    We study probabilistic models that are known incompletely, up to an intractable normalising constant. To reap the full benefit of such models, two tasks must be solved: learning and sampling. These two tasks have been subject to decades of research, and yet significant challenges still persist. Traditional approaches often suffer from poor scalability with respect to dimensionality and model complexity, generally rendering them inapplicable to models parameterised by deep neural networks. In this thesis, we contribute a new set of methods for addressing this scalability problem. We first explore the problem of learning unnormalised models. Our investigation begins with a well-known learning principle, Noise-contrastive Estimation, whose underlying mechanism is that of density-ratio estimation. By examining why existing density-ratio estimators scale poorly, we identify a new framework, telescoping density-ratio estimation (TRE), that can learn ratios between highly dissimilar densities in high-dimensional spaces. Our experiments demonstrate that TRE not only yields substantial improvements for the learning of deep unnormalised models, but can do the same for a broader set of tasks including mutual information estimation and representation learning. Subsequently, we explore the problem of sampling unnormalised models. A large literature on Markov chain Monte Carlo (MCMC) can be leveraged here, and in continuous domains, gradient-based samplers such as the Metropolis-adjusted Langevin algorithm (MALA) and Hamiltonian Monte Carlo are excellent options. However, there has been substantially less progress in MCMC for discrete domains. To advance this subfield, we introduce several discrete Metropolis-Hastings samplers that are conceptually inspired by MALA, and demonstrate their strong empirical performance across a range of challenging sampling tasks.
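    The density-ratio mechanism behind Noise-contrastive Estimation can be illustrated with a toy sketch (Python, assumed names, not the thesis code): a logistic classifier trained to separate data samples from noise samples recovers log p(x)/q(x) in its logit, and TRE chains several such ratios across intermediate bridging distributions so that no single classifier has to separate two highly dissimilar densities.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        data = rng.normal(loc=2.0, scale=1.0, size=(5000, 1))    # samples from p(x)
        noise = rng.normal(loc=0.0, scale=2.0, size=(5000, 1))   # samples from q(x)

        X = np.vstack([data, noise])
        y = np.concatenate([np.ones(len(data)), np.zeros(len(noise))])

        # The true log p/q between two Gaussians is quadratic in x, so the
        # features [x, x^2] make the logistic model well specified for this toy.
        feats = np.hstack([X, X ** 2])
        clf = LogisticRegression(max_iter=1000).fit(feats, y)

        test = np.linspace(-2.0, 4.0, 5).reshape(-1, 1)
        log_ratio = clf.decision_function(np.hstack([test, test ** 2]))
        print(log_ratio)          # estimated log p(x)/q(x) at the test points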

    Matrix Decomposition and Applications

    Full text link
    In 1954, Alston S. Householder published Principles of Numerical Analysis, one of the first modern treatments of matrix decomposition, which favored a (block) LU decomposition: the factorization of a matrix into the product of lower and upper triangular matrices. Matrix decomposition has since become a core technology in machine learning, largely due to the development of the backpropagation algorithm for fitting neural networks. The sole aim of this survey is to give a self-contained introduction to concepts and mathematical tools in numerical linear algebra and matrix analysis in order to seamlessly introduce matrix decomposition techniques and their applications in subsequent sections. However, we clearly realize our inability to cover all the useful and interesting results concerning matrix decomposition given the limited scope of this discussion; for example, we omit the separate analysis of Euclidean space, Hermitian space, Hilbert space, and results in the complex domain. We refer the reader to the literature on linear algebra for a more detailed introduction to these related fields. Comment: arXiv admin note: substantial text overlap with arXiv:2107.02579, arXiv:2105.0424
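    As a small numerical anchor for the LU decomposition named above (scipy is used here purely for illustration; it is not part of the survey):

        import numpy as np
        from scipy.linalg import lu

        A = np.array([[4., 3., 2.],
                      [6., 3., 1.],
                      [2., 1., 5.]])

        P, L, U = lu(A)                      # A = P @ L @ U, with partial pivoting
        print(np.allclose(A, P @ L @ U))     # True: the triangular factors reproduce A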

    A mathematics rendering model to support chat-based tutoring

    Get PDF
    Dr Math is a math tutoring service implemented on the chat application Mxit. The service allows school learners to use their mobile phones to discuss mathematics-related topics with human tutors. Using the broad user-base provided by Mxit, the Dr Math service has grown to consist of tens of thousands of registered school learners. The tutors on the service are all volunteers and the learners far outnumber the available tutors at any given time. School learners on the service use a shorthand language form called microtext to phrase their queries. Microtext is an informal form of language which consists of a variety of misspellings and symbolic representations, which emerge spontaneously as a result of the idiosyncrasies of a learner. The specific form of microtext found on the Dr Math service contains mathematical questions and example equations pertaining to the tutoring process. Deciphering the queries, to discover their embedded mathematical content, slows down the tutoring process. This wastes time that could have been spent addressing more learner queries. The microtext language thus creates an unnecessary burden on the tutors. This study describes the development of an automated process for the translation of Dr Math microtext queries into mathematical equations. Using the design science research paradigm as a guide, three artefacts are developed. These artefacts take the form of a construct, a model and an instantiation. The construct represents the creation of new knowledge as it provides greater insight into the contents and structure of the language found on a mobile mathematics tutoring service. The construct serves as the basis for the creation of a model for the translation of microtext queries into mathematical equations, formatted for display in an electronic medium. No such technique currently exists and therefore the model contributes new knowledge. To validate the model, an instantiation was created to serve as a proof-of-concept. The instantiation applies various concepts and techniques, such as those related to natural language processing, to the learner queries on the Dr Math service. These techniques are employed in order to translate an input microtext statement into a mathematical equation, structured by using mark-up language. The creation of the instantiation thus constitutes a knowledge contribution, as most of these techniques have never been applied to the problem of translating microtext into mathematical equations. For the automated process to have utility, it should perform on a level comparable to that of a human performing a similar translation task. To determine how closely related the results from the automated process are to those of a human, three human participants were asked to perform coding and translation tasks. The results of the human participants were compared to the results of the automated process across a variety of metrics, including agreement, correlation, precision, recall and others. The results from the human participants served as the baseline values for comparison. Krippendorff's α was used to determine the level of agreement and Pearson's correlation coefficient to determine the level of correlation between the results. The agreement between the human participants and the automated process was calculated at a level deemed satisfactory for exploratory research, and the level of correlation was calculated as moderate.
    These values correspond with the calculations made for the human baseline. Furthermore, the automated process was able to meet or improve on all of the human baseline metrics. These results serve to validate that the automated process is able to perform the translation at a level comparable to that of a human. The automated process is available for integration into any requesting application by means of a publicly accessible web service.
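    To make the translation task concrete, the sketch below shows a deliberately naive rule-based mapping from microtext-like input to marked-up equations (Python; every pattern and the mark-up wrapper are assumptions made for illustration, not the vocabulary, model or web service developed in the study):

        import re

        # Each pair maps a hypothetical microtext habit to a plain-text equation token.
        MICROTEXT_RULES = [
            (r"\bsquared\b", "^2"),                  # "x squared"  -> "x^2"
            (r"\bova\b|\bover\b", "/"),              # "3 ova 4"    -> "3/4"
            (r"\bx\b(?=\s*\d)|\btimes\b", "*"),      # naive: clashes with the variable x
            (r"\bsquare root of\b|\bsqrt\b", "sqrt"),
        ]

        def microtext_to_equation(query: str) -> str:
            eq = query.lower()
            for pattern, repl in MICROTEXT_RULES:
                eq = re.sub(pattern, repl, eq)
            eq = re.sub(r"\s+", " ", eq).replace(" ^2", "^2").strip()
            return "<math>" + eq + "</math>"         # toy mark-up wrapper, not real MathML

        print(microtext_to_equation("solve x squared + 3 ova 4 = 7"))
        # -> <math>solve x^2 + 3 / 4 = 7</math>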

    Pomobabble: Postmodern Newspeak and Constitutional Meaning for the Uninitiated

    Get PDF
    A parody of postmodern writing

    Catalog for 1947-48, Announcements 1948-49

    Get PDF
    This University of Maine catalog for the years 1947-49 includes a list of the Board of Trustees, a section on the Brunswick Campus, the calendar, general information about the design of the institution, faculty, admission, courses of instruction, and expenses. It also provides lists of honors and prizes awarded, the commencement program, degrees conferred, students enrolled and a summary of statistics related to student enrollment.

    Accounting practice management handbook;

    Get PDF
    https://egrove.olemiss.edu/aicpa_guides/1004/thumbnail.jp