
    Analysis of the Difficulty Level of Problem-Solving Items in the Grade VIII Junior High School Mathematics Student Textbook for the 2013 Curriculum

    This study aims to analyze and describe the difficulty level of mathematics problem-solving items in terms of problem type, number type, operation type, number of operations, number of questions, adequacy of the data, and similarity to previously presented problems. The study uses a qualitative approach. The object of the study is the grade VIII mathematics textbook of the 2013 curriculum published by the Ministry of Education and Culture. Data were collected by identifying the problem-solving items in the student textbook and analyzed using the workflow of data reduction, data presentation, and conclusion drawing. The results show: 1) there are 297 problem-solving items, approximately 23.35% of the 1,272 items; 2) the dominant problem type is non-routine, at 80.81%; 3) the dominant number type is counting numbers, at 63.39%; 4) the most frequently used operation is multiplication, at 28.93%; 5) 83.16% of the items require more than one operation; 6) 71.72% of the items pose a single question; 7) 96.63% of the items provide complete data; 8) 76.77% of the items are not similar to a previously presented problem. Overall, the problem-solving items are categorized as having a medium level of difficulty.
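    As a quick arithmetic check of the headline figure, the two counts reported above are consistent with each other: 297 / 1272 ≈ 0.2335, i.e. roughly 23.35% of the items are problem-solving items, which matches the stated share.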

    Julia: A Fresh Approach to Numerical Computing

    Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast. Julia questions notions generally held as "laws of nature" by practitioners of numerical computing: 1. High-level dynamic programs have to be slow. 2. One must prototype in one language and then rewrite in another language for speed or deployment, and 3. There are parts of a system for the programmer, and other parts best left untouched as they are built by the experts. We introduce the Julia programming language and its design --- a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming. Julia shows that one can have machine performance without sacrificing human convenience. Comment: 37 pages.
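    Since the abstract leans on multiple dispatch as the mechanism that "picks the right algorithm for the right circumstance", a rough analogy may help readers who have not used Julia. The sketch below uses Python's functools.singledispatch, which selects an implementation from the runtime type of the first argument only, whereas Julia's multiple dispatch considers the types of all arguments; the operation name double and the registered types are assumptions made for illustration, not anything taken from the paper.

        from functools import singledispatch

        # Generic operation: the implementation chosen depends on the runtime
        # type of the argument, loosely analogous to a Julia generic function.
        @singledispatch
        def double(x):
            raise NotImplementedError(f"no implementation for {type(x).__name__}")

        @double.register
        def _(x: int) -> int:
            # Specialized integer method.
            return x * 2

        @double.register
        def _(x: list) -> list:
            # Specialized container method: element-wise, reusing the generic operation.
            return [double(item) for item in x]

        print(double(21))          # 42
        print(double([1, 2, 3]))   # [2, 4, 6]

    Julia generalizes this idea by letting a generic function carry many methods and selecting among them based on the types of all arguments, which is what allows generic mathematical code to be specialized for machine performance without being rewritten per type.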

    Generating Bijections between HOAS and the Natural Numbers

    A provably correct bijection between higher-order abstract syntax (HOAS) and the natural numbers enables one to define a "not equals" relationship between terms and also to have an adequate encoding of sets of terms, and maps from one term family to another. Sets and maps are useful in many situations and are preferably provided in a library of some sort. I have released a map and set library for use with Twelf which can be used with any type for which a bijection to the natural numbers exists. Since creating such bijections is tedious and error-prone, I have created a "bijection generator" that generates such bijections automatically together with proofs of correctness, all in the context of Twelf. Comment: In Proceedings LFMTP 2010, arXiv:1009.218
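    To make the goal concrete, the sketch below hand-builds one such bijection for a tiny first-order term language (de Bruijn-indexed lambda terms) in Python: constructors are tagged modulo 3, and a Cantor pairing function packs the two sub-encodings of an application into a single natural number. This is only an illustration of the kind of encoding the described generator produces automatically; it is not HOAS and not Twelf, and the term datatype is an assumption of the sketch.

        from dataclasses import dataclass
        from math import isqrt

        # A tiny first-order term language: de Bruijn-indexed lambda terms.
        @dataclass(frozen=True)
        class Var:
            index: int            # natural-number de Bruijn index

        @dataclass(frozen=True)
        class App:
            fn: "object"
            arg: "object"

        @dataclass(frozen=True)
        class Lam:
            body: "object"

        def pair(a: int, b: int) -> int:
            # Cantor pairing: a bijection between pairs of naturals and naturals.
            return (a + b) * (a + b + 1) // 2 + b

        def unpair(z: int):
            # Inverse of the Cantor pairing function.
            w = (isqrt(8 * z + 1) - 1) // 2
            b = z - w * (w + 1) // 2
            return w - b, b

        def encode(t) -> int:
            # Tag each constructor mod 3; pack sub-encodings with `pair`.
            if isinstance(t, Var):
                return 3 * t.index
            if isinstance(t, App):
                return 3 * pair(encode(t.fn), encode(t.arg)) + 1
            return 3 * encode(t.body) + 2

        def decode(n: int):
            # Every natural decodes to exactly one term, so this is a bijection.
            tag, payload = n % 3, n // 3
            if tag == 0:
                return Var(payload)
            if tag == 1:
                a, b = unpair(payload)
                return App(decode(a), decode(b))
            return Lam(decode(payload))

        term = Lam(App(Var(0), Var(1)))     # \x. x y, with y free at index 1
        assert decode(encode(term)) == term

    Because encode and decode are mutual inverses, term equality, a "not equals" test, and set or map keys can all be reduced to ordinary arithmetic on natural numbers, which is what makes such bijections useful as a library substrate.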

    Program representation size in an intermediate language with intersection and union types

    The CIL compiler for core Standard ML compiles whole programs using a novel typed intermediate language (TIL) with intersection and union types and flow labels on both terms and types. The CIL term representation duplicates portions of the program where intersection types are introduced and union types are eliminated. This duplication makes it easier to represent type information and to introduce customized data representations. However, duplication incurs compile-time space costs that are potentially much greater than are incurred in TILs employing type-level abstraction or quantification. In this paper, we present empirical data on the compile-time space costs of using CIL as an intermediate language. The data shows that these costs can be made tractable by using sufficiently fine-grained flow analyses together with standard hash-consing techniques. The data also suggests that non-duplicating formulations of intersection (and union) types would not achieve significantly better space complexity. National Science Foundation (CCR-9417382, CISE/CCR ESS 9806747); Sun grant (EDUD-7826-990410-US); Faculty Fellowship of the Carroll School of Management, Boston College; U.K. Engineering and Physical Sciences Research Council (GR/L 36963, GR/L 15685)
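    The mitigation the abstract singles out, hash-consing, is what keeps the duplicating representation tractable, and the idea is easy to sketch independently of CIL: structurally identical terms are constructed once and shared thereafter, so a duplicated program fragment costs one representation rather than many. The Python sketch below illustrates the general technique under that assumption; it is not the CIL compiler's actual term representation.

        class Node:
            """Hash-consed tree node: structurally identical terms are built once
            and shared, so duplicated fragments add no representation cost."""

            # Interning table shared by all nodes. Because children are themselves
            # hash-consed bottom-up, identity-based hashing of the key is sufficient.
            _table = {}

            def __new__(cls, label, *children):
                key = (label, children)
                cached = cls._table.get(key)
                if cached is not None:
                    return cached            # reuse the existing node
                node = super().__new__(cls)
                node.label = label
                node.children = children
                cls._table[key] = node
                return node

        # Two syntactically duplicated fragments collapse into one shared object,
        # and structural equality degenerates to a constant-time pointer check.
        a = Node("pair", Node("int"), Node("int"))
        b = Node("pair", Node("int"), Node("int"))
        assert a is b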

    Milestones: Supporting learners with complex additional support needs


    Representation Learning for Attributed Multiplex Heterogeneous Network

    Network embedding (or graph embedding) has been widely used in many real-world applications. However, existing methods mainly focus on networks with single-typed nodes/edges and cannot scale well to handle large networks. Many real-world networks consist of billions of nodes and edges of multiple types, and each node is associated with different attributes. In this paper, we formalize the problem of embedding learning for the Attributed Multiplex Heterogeneous Network and propose a unified framework to address this problem. The framework supports both transductive and inductive learning. We also give a theoretical analysis of the proposed framework, showing its connection with previous works and proving its better expressiveness. We conduct systematic evaluations of the proposed framework on four different genres of challenging datasets: Amazon, YouTube, Twitter, and Alibaba. Experimental results demonstrate that with the learned embeddings from the proposed framework, we can achieve statistically significant improvements (e.g., 5.99-28.23% lift by F1 scores; p<<0.01, t-test) over previous state-of-the-art methods for link prediction. The framework has also been successfully deployed on the recommendation system of a worldwide leading e-commerce company, Alibaba Group. Results of the offline A/B tests on product recommendation further confirm the effectiveness and efficiency of the framework in practice. Comment: Accepted to KDD 2019. Website: https://sites.google.com/view/gatn
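    To fix intuition for what an attributed multiplex heterogeneous network is, the toy Python sketch below represents one (typed nodes carrying attribute vectors, with several edge types over the same node set) and computes a naive embedding by averaging neighbor attributes per edge type. The node names, edge types, dimensions, and the averaging rule are all invented for illustration; the paper's framework learns base and edge-type-specific embeddings rather than using this heuristic.

        import numpy as np

        # Heterogeneity: nodes have types; multiplexity: several edge types over
        # the same nodes; attributes: a feature vector per node.
        node_type = {"u1": "user", "u2": "user", "i1": "item", "i2": "item"}
        attrs = {n: np.random.rand(4) for n in node_type}
        edges = {
            "click":    [("u1", "i1"), ("u2", "i1")],
            "purchase": [("u1", "i2")],
        }

        def neighbors(node, edge_type):
            return [b if a == node else a
                    for a, b in edges[edge_type] if node in (a, b)]

        def embed(node, attr_dim=4):
            """Naive stand-in for a learned embedding: concatenate, per edge type,
            the mean attribute vector of the node's neighbors in that layer."""
            parts = []
            for etype in sorted(edges):
                nbrs = neighbors(node, etype)
                parts.append(np.mean([attrs[v] for v in nbrs], axis=0)
                             if nbrs else np.zeros(attr_dim))
            return np.concatenate(parts)

        print(embed("u1").shape)   # (8,) = 2 edge types x 4 attribute dimensions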

    XRay: Enhancing the Web's Transparency with Differential Correlation

    Today's Web services - such as Google, Amazon, and Facebook - leverage user data for varied purposes, including personalizing recommendations, targeting advertisements, and adjusting prices. At present, users have little insight into how their data is being used. Hence, they cannot make informed choices about the services they choose. To increase transparency, we developed XRay, the first fine-grained, robust, and scalable personal data tracking system for the Web. XRay predicts which data in an arbitrary Web account (such as emails, searches, or viewed products) is being used to target which outputs (such as ads, recommended products, or prices). XRay's core functions are service agnostic and easy to instantiate for new services, and they can track data within and across services. To make predictions independent of the audited service, XRay relies on the following insight: by comparing outputs from different accounts with similar, but not identical, subsets of data, one can pinpoint targeting through correlation. We show both theoretically, and through experiments on Gmail, Amazon, and YouTube, that XRay achieves high precision and recall by correlating data from a surprisingly small number of extra accounts. Comment: Extended version of a paper presented at the 23rd USENIX Security Symposium (USENIX Security 14)
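    The core insight, comparing outputs across accounts that hold similar but not identical subsets of data, can be illustrated with a toy differential-correlation scorer. The account contents, the ad label, and the presence-versus-absence scoring rule below are invented for illustration and are not XRay's actual algorithm, thresholds, or accuracy machinery.

        # Toy differential correlation: accounts hold overlapping subsets of inputs
        # (e.g. emails); for an observed output (e.g. an ad) we score every input
        # by how much more often the output appears when that input is present.
        accounts = {
            "acct1": {"inputs": {"flight to Paris", "gym membership"}, "ads": {"hotel deal"}},
            "acct2": {"inputs": {"flight to Paris", "tax return"},     "ads": {"hotel deal"}},
            "acct3": {"inputs": {"gym membership", "tax return"},      "ads": set()},
            "acct4": {"inputs": {"flight to Paris"},                   "ads": {"hotel deal"}},
        }

        def targeting_scores(ad):
            """Score each input by the gap between the ad's appearance rate in
            accounts containing the input and in accounts lacking it."""
            def rate(group):
                return sum(ad in acct["ads"] for acct in group) / len(group) if group else 0.0

            all_inputs = {i for acct in accounts.values() for i in acct["inputs"]}
            scores = {}
            for item in all_inputs:
                present = [a for a in accounts.values() if item in a["inputs"]]
                absent  = [a for a in accounts.values() if item not in a["inputs"]]
                scores[item] = rate(present) - rate(absent)
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

        print(targeting_scores("hotel deal"))
        # "flight to Paris" scores highest: the ad tracks that input across accounts.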