
    Empirical Development of an Instructional Product and its Impact on Mastery of Geometry Concepts

    Problem. Relatively poor levels of mathematical thinking among American school children have been identified as a major issue over the past half century. Many efforts have been made to increase the mathematics performance of children in schools, and out-of-school-time programs have attempted to address this issue as well. Holistic development is one of the distinguishing features of Seventh-day Adventist instructional programs. Yet, as of 2007, the Pathfinder program, an informal educational program operated by the world-wide Seventh-day Adventist church, had no instructional product designed to foster participants' cognitive development in mathematics. This study focused on the empirical development of an out-of-school-time geometry curriculum, entitled Geometry in Real-life Application Curriculum Experiences (G.R.A.C.E.), and examined its impact on mastery of geometric concepts.
    Method. The instructional product development procedure of Baker and Schutz was employed in this study. First, the need for an empirically developed geometry education product for Pathfinders was established. Then behavioral objectives were written, based on the standards developed by the National Council of Teachers of Mathematics and the geometry education literature. Instructional activities were prepared to help meet each objective and organized in logical sequence. Bloom's Revised Taxonomy served as a resource during initial instructional development. The Baker and Schutz process stipulated that the instructional product undergo repeated tryouts with members of the target population; development would be considered complete when at least 80% of G.R.A.C.E. Project participants scored at least 80% on each of the stated objectives (this 80/80 stopping rule is sketched in code below). Accordingly, the instructional product was subjected to repeated revision during its developmental stages, and appropriate adjustments were made to eliminate specific weaknesses. Both the developer's and participants' manuals were created in their final forms.
    Results. The completed G.R.A.C.E. Project consists of a developer's manual, pre- and post-tests for participants, and a participants' manual. The developer's manual covers both relevant content and detailed procedures for project presentation and test administration. The participants' manual presents the mathematics content to be mastered; review questions and answers, diagrams, and charts are included to facilitate mastery of project contents. The pre-/post-test inventory consists of a 25-item cognitive instrument combined with a 20-item affective instrument. After three field trials and revisions of the curriculum, the product was delivered to 25 subjects, who achieved cognitive mastery at the level specified for the 25 objectives. Based on the assumptions of the Baker and Schutz model, the percentage difference between affective post- and pre-test scores was expected to be positive, yielding a moderate effect size. However, the average effect size across all four groups was .868, indicating a high impact of the program on subjects' interest in and appreciation of geometry concepts.
    Conclusions. This study provided insight into the role of curriculum developers as they engage in the process of empirical development. It also provided a resource for instructors in Pathfinder instructional programs in the Lake Union Conference of Seventh-day Adventists. Other Seventh-day Adventist audiences may also utilize a modified version of the instrument in their instructional programs for Pathfinders.
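    A minimal Python sketch of the 80/80 mastery criterion referenced in the Method section above; the function name and tryout scores are invented for illustration and are not part of the G.R.A.C.E. materials.

        def mastery_criterion_met(scores_by_objective, participant_threshold=0.80, score_threshold=0.80):
            # scores_by_objective maps objective name -> list of participant scores in [0, 1]
            for objective, scores in scores_by_objective.items():
                passing = sum(1 for s in scores if s >= score_threshold)
                if passing / len(scores) < participant_threshold:
                    return False  # this objective still needs another revision cycle
            return True

        # invented tryout results for two objectives and five participants
        tryout = {
            "identify polygons": [0.92, 0.85, 0.75, 0.78, 0.95],   # 3/5 reach 80%
            "compute area":      [0.88, 0.90, 0.81, 0.84, 0.86],   # 5/5 reach 80%
        }
        print(mastery_criterion_met(tryout))   # False: the first objective falls short of 80/80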

    Is Quantitative Research Ethical? Tools for Ethically Practicing, Evaluating, and Using Quantitative Research

    This editorial offers new ways to ethically practice, evaluate, and use quantitative research (QR). Our central claim is that ready-made formulas for QR, including 'best practices' and common notions of 'validity' or 'objectivity,' are often divorced from the ethical and practical implications of doing, evaluating, and using QR for specific purposes. To focus on these implications, we critique common theoretical foundations for QR and then recommend approaches to QR that are 'built for purpose,' by which we mean designed to ethically address specific problems or situations on terms that are contextually relevant. For this, we propose a new tool for evaluating the quality of QR, which we call 'relational validity.' Studies, including their methods and results, are relationally valid when they ethically connect researchers' purposes with the way that QR is oriented and the ways that it is done, including the concepts and units of analysis invoked, as well as what its 'methods' imply more generally. This new way of doing QR can provide the liberty required to address serious worldly problems on terms that are both practical and ethically informed in relation to the problems themselves rather than the confines of existing QR logics and practices.

    Methodology of Algorithm Engineering

    Research on algorithms has drastically increased in recent years. Various sub-disciplines of computer science investigate algorithms according to different objectives and standards. This plurality of the field has led to various methodological advances that have not yet been transferred to neighboring sub-disciplines. The central roadblock to better knowledge exchange is the lack of a common methodological framework integrating the perspectives of these sub-disciplines. The objective of this paper is to develop a research framework for algorithm engineering. Our framework builds on three areas discussed in the philosophy of science: ontology, epistemology, and methodology. In essence, ontology describes algorithm engineering as being concerned with algorithmic problems, algorithmic tasks, algorithm designs, and algorithm implementations. Epistemology describes the body of knowledge of algorithm engineering as a collection of prescriptive and descriptive knowledge, residing in World 3 of Popper's Three Worlds model. Methodology refers to the steps by which we can systematically enhance our knowledge of specific algorithms. The framework helps us to identify and discuss various validity concerns relevant to any algorithm engineering contribution. In this way, our framework has important implications for researching algorithms in various areas of computer science.

    Approximate Inference for Determinantal Point Processes

    In this thesis we explore a probabilistic model that is well-suited to a variety of subset selection tasks: the determinantal point process (DPP). DPPs were originally developed in the physics community to describe the repulsive interactions of fermions. More recently, they have been applied to machine learning problems such as search diversification and document summarization, which can be cast as subset selection tasks. A challenge, however, is scaling such DPP-based methods to the size of the datasets of interest to this community, and developing approximations for DPP inference tasks whose exact computation is prohibitively expensive. A DPP defines a probability distribution over all subsets of a ground set of items. Consider the inference tasks common to probabilistic models, which include normalizing, marginalizing, conditioning, sampling, estimating the mode, and maximizing likelihood. For DPPs, exactly computing the quantities necessary for the first four of these tasks requires time cubic in the number of items or features of the items. In this thesis, we propose a means of making these four tasks tractable even in the realm where the number of items and the number of features are large. Specifically, we analyze the impact of randomly projecting the features down to a lower-dimensional space and show that the variational distance between the resulting DPP and the original is bounded. In addition to expanding the circumstances in which these first four tasks are tractable, we also tackle the other two tasks, the first of which is known to be NP-hard (with no PTAS) and the second of which is conjectured to be NP-hard. For mode estimation, we build on submodular maximization techniques to develop an algorithm with a multiplicative approximation guarantee. For likelihood maximization, we exploit the generative process associated with DPP sampling to derive an expectation-maximization (EM) algorithm. We experimentally verify the practicality of all the techniques that we develop, testing them on applications such as news and research summarization, political candidate comparison, and product recommendation.
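    The cubic-time quantities mentioned above have compact closed forms for an L-ensemble DPP: P(Y) is proportional to det(L_Y), the normalizer is det(L + I), and the marginal kernel is K = L(L + I)^(-1). A minimal numpy sketch with a small, invented kernel illustrates these exact computations; it does not reproduce the thesis's random-projection approximation.

        import numpy as np

        rng = np.random.default_rng(0)
        B = rng.normal(size=(5, 3))            # item features: 5 items, 3 features (invented)
        L = B @ B.T                            # positive semidefinite L-ensemble kernel

        # Normalizer: the sum of det(L_Y) over all subsets Y equals det(L + I).
        Z = np.linalg.det(L + np.eye(5))

        # Exact probability of one particular subset Y.
        Y = [0, 2, 4]
        p_Y = np.linalg.det(L[np.ix_(Y, Y)]) / Z

        # Marginal kernel K = L (L + I)^{-1}; K[i, i] is the probability that item i appears.
        K = L @ np.linalg.inv(L + np.eye(5))
        print(p_Y, np.diag(K))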

    Preservice Secondary School Mathematics Teachers' Current Notions of Proof in Euclidean Geometry

    Much research has been conducted in the past 25 years related to the teaching and learning of proof in Euclidean geometry. However, very little research has focused on preservice secondary school mathematics teachers' notions of proof in Euclidean geometry. Thus, this qualitative study was exploratory in nature, consisting of four case studies focused on identifying preservice secondary school mathematics teachers' current notions of proof in Euclidean geometry, a starting point for improving the teaching and learning of proof in Euclidean geometry. The unit of analysis (i.e., participant) in each case study was a preservice mathematics teacher. The case studies were parallel, as each participant was presented with the same Euclidean geometry content in independent interview sessions. The content consisted of six Euclidean geometry statements and a Euclidean geometry problem appropriate for a secondary school Euclidean geometry course. For five of the six Euclidean geometry statements, three justifications for each statement were presented for discussion. For the sixth Euclidean geometry statement and the Euclidean geometry problem, participants constructed justifications for discussion. A case record for each case study was constructed from an analysis of data generated from interview sessions, including anecdotal notes from the playback of the recorded interviews, the review of the interview transcripts, document analyses of both previous geometry course documents and any documents generated by participants via assigned Euclidean geometry tasks, and participant emails. After the four case records were completed, a cross-case analysis was conducted to identify themes that traverse the individual cases. From the analyses, participants' current notions of proof in Euclidean geometry were somewhat diverse, yet suggested that an integration of justifications consisting of empirical and deductive evidence for Euclidean geometry statements could improve both the teaching and learning of Euclidean geometry.

    Graph Neural Networks for Natural Language Processing: A Survey

    Deep learning has become the dominant approach for coping with various tasks in Natural Language Processing (NLP). Although text inputs are typically represented as a sequence of tokens, there is a rich variety of NLP problems that can be best expressed with a graph structure. As a result, there is a surge of interest in developing new deep learning techniques on graphs for a large number of NLP tasks. In this survey, we present a comprehensive overview of Graph Neural Networks (GNNs) for Natural Language Processing. We propose a new taxonomy of GNNs for NLP, which systematically organizes existing research on GNNs for NLP along three axes: graph construction, graph representation learning, and graph-based encoder-decoder models. We further introduce a large number of NLP applications that exploit the power of GNNs and summarize the corresponding benchmark datasets, evaluation metrics, and open-source code. Finally, we discuss various outstanding challenges for making full use of GNNs for NLP as well as future research directions. To the best of our knowledge, this is the first comprehensive overview of Graph Neural Networks for Natural Language Processing.
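    As a concrete instance of the survey's "graph representation learning" axis, the sketch below implements a single graph convolution update H' = ReLU(D^(-1/2)(A + I)D^(-1/2) H W) in numpy, following the common GCN formulation rather than any specific model from the survey; the toy graph and feature sizes are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        A = np.array([[0., 1., 0.],            # adjacency of a tiny 3-node graph,
                      [1., 0., 1.],            # e.g. a dependency graph over tokens
                      [0., 1., 0.]])
        H = rng.normal(size=(3, 4))            # node (token) features
        W = rng.normal(size=(4, 2))            # learnable layer weights

        A_hat = A + np.eye(3)                  # add self-loops
        D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
        H_next = np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)   # ReLU activation
        print(H_next.shape)                    # (3, 2): new node representations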

    Effects of training datasets on both the extreme learning machine and support vector machine for target audience identification on twitter

    The ability to identify or predict a target audience from the increasingly crowded social space will provide a company some competitive advantage over other companies. In this paper, we analyze various training datasets, which include Twitter contents of an account owner and its list of followers, using features generated in different ways for two machine learning approaches: the Extreme Learning Machine (ELM) and the Support Vector Machine (SVM). Various configurations of the ELM and SVM have been evaluated. The results indicate that training datasets using features generated from the owner tweets achieve the best performance, relative to other feature sets. This finding is important and may aid researchers in developing a classifier that is capable of identifying a specific group of target audience members. This will assist the account owner in spending resources more effectively, by sending offers to the right audience, and hence maximize marketing efficiency and improve the return on investment.
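    A hedged scikit-learn sketch of the SVM side of such a pipeline: TF-IDF features from tweet text feeding a linear SVM that labels accounts as target audience or not. The toy tweets and labels are invented, and the paper's actual owner-tweet feature construction and ELM variant are not reproduced here.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        # invented training tweets and audience labels (1 = likely target audience)
        tweets = ["huge discount on running shoes today",
                  "our new trail shoe just dropped",
                  "watching the game tonight with friends",
                  "long thread about local politics"]
        labels = [1, 1, 0, 0]

        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
        clf.fit(tweets, labels)
        print(clf.predict(["any new running shoes coming out soon?"]))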

    Data Mining Techniques to Understand Textual Data

    More than ever, online information delivery and storage rely heavily on text. Billions of texts are produced every day in the form of documents, news, logs, search queries, ad keywords, tags, tweets, messenger conversations, social network posts, etc. Text understanding is a fundamental and essential task involving broad research topics, and it contributes to many applications in areas such as text summarization, search engines, recommendation systems, online advertising, and conversational bots. However, understanding text is never a trivial task for computers, especially for noisy and ambiguous text such as logs and search queries. This dissertation focuses on textual understanding tasks drawn from two domains, disaster management and IT service management, that mainly utilize textual data as an information carrier. Improving situation awareness in disaster management and alleviating the human effort involved in IT service management demand more intelligent and efficient solutions for understanding the textual data that acts as the main information carrier in the two domains. From the perspective of data mining, four directions are identified: (1) intelligently generate a storyline summarizing the evolution of a hurricane from a relevant online corpus; (2) automatically recommend resolutions according to the textual symptom description in a ticket; (3) gradually adapt the resolution recommendation system for time-correlated features derived from text; (4) efficiently learn distributed representations for short and noisy ticket symptom descriptions and resolutions. Provided with different types of textual data, the data mining techniques proposed in these four research directions successfully address our tasks to understand and extract valuable knowledge from textual data. My dissertation addresses the research topics outlined above. Concretely, I focus on designing and developing data mining methodologies to better understand textual information, including (1) a storyline generation method for efficient summarization of natural hurricanes based on a crawled online corpus; (2) a recommendation framework for automated ticket resolution in IT service management; (3) an adaptive recommendation system for time-varying, temporally correlated features derived from text; (4) a deep neural ranking model that not only successfully recommends resolutions but also efficiently outputs distributed representations for ticket descriptions and resolutions.
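    A minimal sketch of the resolution-recommendation idea in direction (2): retrieve the resolution attached to the most similar historical ticket. TF-IDF with cosine similarity stands in for the dissertation's actual models, and the tickets and resolutions shown are invented.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        # invented historical tickets and their attached resolutions
        historical_symptoms = ["disk usage above 90 percent on database server",
                               "application server not responding to health checks",
                               "user cannot reset password through the portal"]
        resolutions = ["purged old logs and extended the data volume",
                       "restarted the application service and verified connectivity",
                       "unlocked the account and forced a password reset"]

        vectorizer = TfidfVectorizer()
        X = vectorizer.fit_transform(historical_symptoms)

        new_ticket = ["database host is running out of disk space"]
        scores = cosine_similarity(vectorizer.transform(new_ticket), X)[0]
        print(resolutions[scores.argmax()])    # recommend the best-matching resolution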

    Applying science of learning in education: Infusing psychological science into the curriculum

    The field of specialization known as the science of learning is not, in fact, one field. Science of learning is a term that serves as an umbrella for many lines of research, theory, and application. A term with an even wider reach is Learning Sciences (Sawyer, 2006). The present book represents a sliver, albeit a substantial one, of the scholarship on the science of learning and its application in educational settings (Science of Instruction, Mayer 2011). Although much, but not all, of what is presented in this book is focused on learning in college and university settings, teachers of all academic levels may find the recommendations made by chapter authors of service. The overarching theme of this book is the interplay among the science of learning, the science of instruction, and the science of assessment (Mayer, 2011). The science of learning is a systematic and empirical approach to understanding how people learn. More formally, Mayer (2011) defined the science of learning as the “scientific study of how people learn” (p. 3). The science of instruction (Mayer 2011), informed in part by the science of learning, is also on display throughout the book. Mayer defined the science of instruction as the “scientific study of how to help people learn” (p. 3). Finally, the assessment of student learning (e.g., learning, remembering, transferring knowledge) during and after instruction helps us determine the effectiveness of our instructional methods. Mayer defined the science of assessment as the “scientific study of how to determine what people know” (p. 3). Most of the research and applications presented in this book are completed within a science of learning framework. Researchers first conducted research to understand how people learn in certain controlled contexts (i.e., in the laboratory) and then they, or others, began to consider how these understandings could be applied in educational settings. Work on the cognitive load theory of learning, which is discussed in depth in several chapters of this book (e.g., Chew; Lee and Kalyuga; Mayer; Renkl), provides an excellent example that documents how the science of learning has led to valuable work on the science of instruction. Most of the work described in this book is based on theory and research in cognitive psychology. We might have selected other topics (and, thus, other authors) that have their research base in behavior analysis, computational modeling and computer science, neuroscience, etc. We made the selections we did because the work of our authors ties together nicely and seemed to us to have direct applicability in academic settings.